Six Coding Interview Formats to Replace LeetCode Bullshit

Michael Hoffman
8 min read · Nov 7, 2022


Based on my recent job market experience, I’m happy to report that esoteric algorithm interviews seem to be on the decline. Below I describe six other, better coding interview formats.

[Image: AI-generated oil painting of a dog staring intensely at a laptop.]

LeetCode-style interviews are bad.

You can tell that engineering interviews are broken because there is a burgeoning industry that’s sprung up around interview prep. Services such as AlgoExpert, Interview Kickstart, Coderbyte, HackerRank, and of course LeetCode promise to help you ace the coding interview. Engineers with many years of experience writing code professionally are turning to these services to help them succeed. I know because I just did so myself.

But why? Prospective employers want programmers to do jobs similar to the ones they’ve already succeeded in. Doesn’t the work itself prepare you for the work? The problem is that many companies aren’t testing whether candidates have the skills to do the job. Instead, they make candidates do algorithmic coding challenges. These challenges are distinct enough from actual engineering work that even seasoned practitioners study for them every time they change jobs. What a waste!

Are LeetCode-style interviews falling out of fashion?

The good news is that the tide may be turning. Or at least that’s how it felt, anecdotally, based on the small and specific sample of interviews I encountered in my October 2022 job search.

I completed four interview loops, each with at least three technical modules (including first-round screens). Two of these loops were with large (>5k employees) tech companies, one was with a large media company, and the last was with a small tech company. (None were FAANG companies.)

In total, I was asked exactly one LeetCode-style algorithms and data structures question. Just one! (And even that one was deemphasized. It was tacked on to the last 10 minutes of an outsourced first-round coding screen. I was barely able to start it and still passed the screen.)

What I encountered instead was a wide variety of other technical interview formats, each of which tested for skills that more closely resemble the ones engineers use in their daily work. Some of these formats are commonplace, while others were new to me.

I’ve compiled them here in the hope that they might inspire folks to ditch LeetCode challenges in favor of interview formats that evaluate more relevant skills.

1. Solve a Problem with Code

The interviewer presents a prompt that describes a problem the candidate must solve using code, with an emphasis on behavioral correctness, speed of execution, and code style.

Evaluates for: Code organization, language fluency, speed of execution, problem solving style, verbal communication, testing.

In format, the problem solving interview looks a lot like an algorithms interview. The biggest difference is that algorithmic complexity is deemphasized. A solution that has the correct behavior and is well written is a good solution; there is no significant distinction here between a brute force solution and an optimal one.

This kind of interview is not just an easy LeetCode problem, and swapping out a LeetCode-y interview for a problem solving interview is not lowering the bar. These interviews can be designed to be very challenging. They may require the candidate to model complex phenomena with tricky edge cases. They may also require the candidate to work quite quickly. I encountered several instances that had multiple, cumulative parts, where the prompt was revealed in stages. Optionally, this interview can also emphasize testing methodology.
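To make that concrete, here's a minimal sketch of the kind of answer that passes in this format. The prompt and function name are hypothetical; the point is that a deliberately brute-force solution is fine as long as it is correct, readable, and lightly tested.

```python
# Hypothetical prompt: given item prices and a gift-card balance, return every
# pair of indices whose prices use up the balance exactly.
# A brute-force O(n^2) answer is acceptable here if it's correct and clean.

def pairs_matching_balance(prices: list[int], balance: int) -> list[tuple[int, int]]:
    matches = []
    for i in range(len(prices)):
        for j in range(i + 1, len(prices)):
            if prices[i] + prices[j] == balance:
                matches.append((i, j))
    return matches

# Lightweight checks in place of a full test suite.
assert pairs_matching_balance([25, 75, 50, 50], 100) == [(0, 1), (2, 3)]
assert pairs_matching_balance([10, 20], 100) == []
```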

2. Integrate with an API

The candidate must accomplish a task that requires making requests to an external API. The interviewer provides API documentation.

Evaluates for: Effective use of documentation, use and handling of HTTP requests, code organization, language fluency, speed of execution.

This is arguably a variant of the “problem solving with code” format, but working with HTTP APIs is such a common task in modern development that it makes sense to specifically evaluate a candidate’s skill at it.

There are a couple of variants of this format.

  1. The candidate makes a simple (probably GET) request to an API and does some work with the data the API returns. Here, the work is focused on parsing and transforming the response to solve a problem.
  2. The interviewer provides the candidate with a dataset from which the candidate must construct a (probably POST) request to an API. Here, the work is focused on parsing and transforming the provided data prior to making the request that will solve a problem.

Like the problem solving format, this type of interview can be broken into progressive stages. For example, I encountered an API integration problem that was revealed in three stages:

  • Stage 1: Parse and filter the provided data into a specified structure (one that would later be used in an API request).
  • Stage 2: Make a relatively trivial POST request to a specified API.
  • Stage 3: Put it all together by using the work of stage one and some additional logic to make a POST request to the API with a complex body.

A nice feature of this format is that it is language agnostic. Any language that has a reasonable interface for making HTTP requests should be fair game for candidates to use in this interview.
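For illustration, here's a minimal sketch of the kind of staged task described above, written in Python with the requests library. The endpoint, field names, and payload shape are all hypothetical; what matters is the shape of the work: fetch, filter and transform, then build a request body.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API

def active_user_ids() -> list[int]:
    # Stages 1-2: GET a collection and filter/transform it into the
    # structure the later request needs.
    resp = requests.get(f"{BASE_URL}/users", timeout=10)
    resp.raise_for_status()
    return [user["id"] for user in resp.json() if user.get("status") == "active"]

def notify_active_users(message: str) -> None:
    # Stage 3: use the transformed data to build a POST request with a
    # more complex body.
    payload = {"recipients": active_user_ids(), "message": message}
    resp = requests.post(f"{BASE_URL}/notifications", json=payload, timeout=10)
    resp.raise_for_status()
```

Any language with a reasonable HTTP client would work just as well; the stages are what the interview is testing.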

3. Investigate a Bug

The interviewer presents the candidate with the codebase for an application that has a defect identified by a failing test. The candidate must search for the source of the bug and, if they can, fix it.

Evaluates for: Code navigation, code comprehension, ability to get oriented in a new codebase, methodical hypothesis forming and testing, communication.

I love this interview format because it closely mirrors something engineers do on the job all the time: try to fix something that’s broken with little more context than a description of the unexpected behavior. The candidate must form hypotheses about which codepaths might be related to the defect and methodically evaluate these hypotheses.

Importantly, the evaluation rubric for this format does not necessarily require the candidate to fix the bug to pass. One version of this I saw used the codebase for a highly abstract testing-related library. It took me some time merely to understand what the expected and actual behaviors were. By the end of the interview, I'd found the codepath related to the bug and could describe in detail what the code was doing wrong, but I hadn't begun to attempt a fix. And, apparently, I passed. Bug investigation interviews are not about writing code; they're about the methods and skills you bring to reading code.
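To show the shape of the prompt (any real interview codebase would be far larger), here's a toy sketch of the starting point: a failing test and the code it exercises. The function and test are hypothetical.

```python
def total_with_discount(prices: list[float], discount: float) -> float:
    # Bug: the discount is applied per item and then again to the total,
    # so the function undercharges.
    discounted = [p * (1 - discount) for p in prices]
    return sum(discounted) * (1 - discount)

def test_total_with_discount():
    # Fails: expected 27.0 (10% off a 30.0 subtotal), but the function
    # returns roughly 24.3.
    assert total_with_discount([10.0, 20.0], 0.10) == 27.0
```

The interview is then about how methodically the candidate traces that discrepancy back to its source.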

Companies that want to do a debugging interview should have codebases ready in the many languages candidates might want to use.

4. Write a Technical Proposal

The interviewer presents the candidate with a codebase and a product spec for a feature. The candidate must write a technical proposal describing how they would implement the spec in the codebase. (Take-home / async)

Evaluates for: Written communication, ability to get oriented in a new codebase, technical judgement and vision.

Ok, this isn’t quite a coding interview, but I think it gets at some of the skills coding interviews are usually meant to evaluate. Many software teams require engineers to write proposals for major code or architecture changes. The purposes of these technical briefs are many: early correction of design mistakes, knowledge sharing, documentation, etc. It makes sense, then, to evaluate engineers on their ability to produce such documents quickly and at a high quality.

This type of interview works best as an asynchronous, take-home project, perhaps with a discussion of the document on interview day. Because take-home assignments should not require many hours of work, asking someone to understand (and likely run) a codebase with an unfamiliar language, frameworks, and toolchain is probably unreasonable. In cases where it is too burdensome to create technical proposal interview prompts for every language a candidate might want to use, this type of interview is best for roles where fluency in a particular language is one of the requirements.

5. Implement a Feature

The candidate is given access to an existing codebase and a spec for a new feature they must add to the codebase. (Could be take-home / async)

Evaluates for: Code comprehension, ability to get oriented in a new codebase, code style and organization, product sense, speed of execution, testing.

This type of interview is difficult to design well, but if done right it can give candidates a chance to show a broad range of job-relevant skills. Adding features to existing codebases is, of course, a common task for engineers. The trick is being able to simulate that task in such a way that it can be reasonably accomplished in the timeframe of an interview. For this reason, feature implementation interviews are good candidates for take-home interviews. Once again, companies may want to prepare prompts for this interview in a wide variety of languages to accommodate all candidates.

6. Review Code

The interviewer presents the candidate with a pull request and the candidate must review it.

Evaluates for: Empathy and emotional intelligence, written communication, code comprehension, ability to get oriented in a new codebase, technical judgement.

This is the only interview format I'm including that I have not personally experienced from either side (though I hope that changes someday soon!). Code review is a near-daily activity for many engineers, and doing it well requires technical sophistication, clarity of writing, and emotional intelligence. Candidates get to show that they can offer actionable feedback in a respectful way, which is hard to get direct signal on in other interviews.

Code review interviews present the same challenges for companies as debugging, feature writing, and technical proposal interviews; namely, you may need to have PRs in many languages to accommodate candidates with a range of technical backgrounds.

Why do LeetCode-style interviews still exist?

Given the wide variety of superior alternatives, why do some companies persist in screening candidates using LeetCode-style algorithms and data structures interviews? The likely answers are boring: inertia and laziness. They’ve been doing it this way for years and algorithms interviews are easy to design and administer. As noted above, many of the more practical interview formats require significant up-front and maintenance work for the companies that offer them.

There’s likely also a gatekeeping element. LeetCode-style interviews don’t prove you can do the work of the job; they prove you’re part of the club. We all had to study this crap, so you do too.

In my more conspiratorial moments I’ve wondered whether LeetCode-style interviews are really testing for willingness to conform. Are you eager to jump through arbitrary hoops without explanation to secure a role at our company? Great, you’re exactly the kind of cog we’re looking for.

Conclusion

If your company gives LeetCode-style interviews, reflect on whether they’re evaluating the skills you want your colleagues to have. If not, propose replacing them.

When you interview, ask recruiters what kind of coding interviews are involved in the process. (“Are the coding interviews focused on data structures and algorithms, or do they tend to be more practical?”) If you have LeetCode bullshit coming your way (and only if you feel safe and comfortable doing so!), ask your recruiter and any engineering leaders you meet why they give that kind of interview. In my experience it kicks off an interesting conversation.

The old-style coding interview is deeply entrenched in the tech industry. It will take all of us to make it a relic of the past.

Thanks to Matt Bacchi and Justin Blank for helpful comments on a draft of this post, and to Rebecca Borison for a clarifying discussion of this topic!
