Thought I’d expand on this a little bit more: Up until mid 2019, Square had essentially one generalist interview process: a tech screen or two, three pair programming interviews, then two Q&A interviews (past experience, and architecture). (1/?) https://twitter.com/kyleve/status/1341416078786789379
(The tech screen and pair programming interviews were similar; you’d get on a Google Hangout and use some sort of collaborative coding environment or screen share to work with the interviewer on a question together. Each engineer picked their own question.)
(The Q&A interviews were: Past experience (tell us about a project you’d worked on), and Q&A architecture (with the aid of a whiteboard, let’s high level design a system together).)
The intent of this was good – trying to make the process as collaborative as possible, and remove any whiteboard coding – only use a whiteboard for diagrams, etc.
And honestly, I think it did work pretty well for a number of years. However, as the company got bigger and bigger, and we hired more and more engineers, a number of things happened:
1) Whether you were interviewed by someone of the same discipline as you became more a matter of luck of the draw. This made it harder for interviewers to pull signal from the interview, and harder for candidates to understand what the interviewer was looking for.
2) For Q&A past experience, the signal especially suffered – sure, you can evaluate someone’s high-level engineering prowess, but with me interviewing, say, a backend Java engineer, I can only derive so much. Especially for senior candidates.
3) For Q&A architecture, this was especially bad. We really try to make sure candidates get to lead the architecture in a direction they’re comfortable with (mobile for mobile, front end for front end, etc), but this... didn’t always happen, to put it nicely.
... You’d have backend engineers asking mobile engineers to design SQL query plans, explain how they’d do DB sharding, etc, and then rank them lower when they couldn’t do that. Of course they couldn’t do that.
...This is of course, terrible. Especially for senior candidates, the ability to architect a system well in their discipline is incredibly important. We’d basically lose that signal in those interviews and either pass on the candidate, or have to interview them again 🤦‍♂️
4) As the number of engineers grew, it became harder and harder to scale the interview training process, so we moved from more hands-on training to automated training like videos and READMEs. This lost the collaborative, informative aspect of in-room training.
5) No shared rubrics for interview evaluation. This made scores highly variable rather than unbiased and uniform.
As well, as the # of engineers grew, the questions became less interesting and more algorithm-y. My pair programming question is: Let’s build a game of Tic Tac Toe. But in hiring bars**, it was common to end up with 1, 2, or even 3 algorithm pairings that were nearly identical.
Especially for disciplines like mobile, where architecture and code maintainability matter much more than algorithms (by orders of magnitude), you again lack signal to make a hiring decision in the hiring bar**.
(**Hiring bar: An unbiased set of people unrelated to the hiring manager or interview panel that review the interview feedback in a structured way to determine if the candidate meets the criteria for their engineering level (L3 through L7).)
...Honestly, it’s not to say that this interview process was awful or anything – in fact, most candidates that came through said that of all the other companies they were interviewing at, they far and away enjoyed their experience at Square the most.
...But it definitely wasn’t doing everything it could or should be doing, and it definitely over the years had trended towards a very generalist process, despite most of our engineering hires being specialists.
Similarly, as a hiring manager who focuses nearly exclusively on mobile, I’d seen people who went on to be, by far, my best mobile engineers barely pass the existing process. We clearly weren’t screening for the right things.
I’m also biased here: As someone with no CS degree, I doubt I could have passed the process myself (mainly the pairings, if I ended up unlucky with an algorithm-heavy panel), and I was really worried we were screening out or under-leveling other candidates with non-traditional backgrounds.
Oh also, something I forgot is that all these interviews focused basically entirely on technical ability: We didn’t explicitly screen for any of the non-technical parts of the job, like code review, working with PMs or design, debugging issues in the field, feature rollouts, etc.
Double also, high-pressure environments like tech screens, pair programming, or Q&A interviews, where you’re coming in cold and trying to solve a problem you’ve never heard of before in front of someone you don’t know, are just not realistic environments.
So anyways, there had to be a better way.
Seeing all this, we decided to design a second, optional process for candidates to opt into – they could choose the old process if they wanted it, or they could choose the new process. What was the new process?
1) A take home question that should take you about 3-4 hours, and you have a few weeks to complete it.

2) You come “on-site” and do two pair programming interviews where you extend your own take home question’s functionality, so you’re working on known ground.
3) The Q&A past experience is replaced with a Q&A getting stuff done. One difference is this interview comes with a codified list of questions for the interviewers to begin with, to reduce bias and ensure interview coverage. It focuses on the non-coding aspects of engineering.
4) Finally, the Q&A architecture is replaced with a Q&A past software design. It again comes with a codified list of questions, and focuses on the technical, but largely non-coding, aspects of building software: feature breakdown, architecture, logging, debugging, etc.
(Note: We removed the 2 tech screens and one on-site pairing (3 hours between these) to make the time spent on the whole process about the same as the old “normal” process.)
You’ll also notice that in all these interviews, we try to focus as much as possible on putting the candidate in control: You do the take home question on your own time, you extend it on-site, and for the Q&As you explicitly talk about the work you’ve done.
Through the take home question, we also get an eye into how the candidate works “at rest”: What architectures and design patterns do they pick, do they handle platform-specific problems well, are they happily shipping code with warnings, etc.
And then through the on-site extension pairings, we get an idea of how candidates work through extending existing code and adding features – since this is most of your day-to-day job.
Overall, I think this process has given us a more realistic look into how candidates approach problems, especially for more senior candidates.
I think the process has also done what it was meant to accomplish – we’re definitely seeing more success in the interview process from candidates with non-traditional backgrounds, and I think the Q&As tend to derive more signal.
We’re continuing to iterate on this process – e.g. developing rubrics and examples for better grading, investing more in hands-on training of interviewers, etc. That training is one of the asks from interviewers, because this is such a different process than they’re used to.
I don’t think there’s ever going to be a perfect process; that’s basically impossible. Overall, I think the more choices you can give candidates in how they craft their own interview, the better – and I’m happy with how this one turned out.