We want you to fail
No matter how you design your hiring process, it will always be flawed. Still, a few things have worked well for me in the past when hiring for software engineering positions.
One of them is the somewhat controversial coding challenge. When you're on a smaller team and doing the hiring yourself, a coding challenge can yield some valuable insights. However, it's also easy to draw the wrong conclusions.
Here's what we did and – almost more importantly – what we didn't do.
First of all, we put some effort into communicating what we were looking for. In the job posting for a Junior Software Engineer, for example, we described what the role and environment would look like and stated only a few hard requirements. In the calls with each applicant, we went into further detail about the company, the team, the role and our expectations.
The coding challenge was then designed around some of these requirements, only one of which was technical: some basic experience with programming and with the language and framework we use. For the challenge, we extracted some real-world data from our application, simplified its structure and set a few simple tasks (filtering, grouping and sorting).
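For a rough idea of the shape of those tasks, here is a minimal sketch in Python. Everything in it is invented for illustration: the record structure, field names and values are not our actual data, and Python simply stands in for whatever language a team happens to use.

```python
# Illustrative only: the record structure, fields and values are
# invented; the real challenge used simplified data from our app.
orders = [
    {"customer": "Acme", "status": "open", "total": 120.0},
    {"customer": "Basil", "status": "paid", "total": 80.0},
    {"customer": "Acme", "status": "paid", "total": 45.5},
    {"customer": "Cato", "status": "open", "total": 300.0},
]

# Filtering: keep only paid orders.
paid = [o for o in orders if o["status"] == "paid"]

# Grouping: sum the totals per customer.
totals = {}
for order in paid:
    totals[order["customer"]] = totals.get(order["customer"], 0.0) + order["total"]

# Sorting: rank customers by total, highest first.
ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
print(ranked)  # [('Basil', 80.0), ('Acme', 45.5)]
```

Each task builds on the previous one, which keeps the challenge small while still leaving room for different approaches.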
When the call started, we carefully explained what we were looking for and what not to worry about. In particular, we asked the candidates to:
- talk us through their approach and how it might change as they go
- look things up on the web when they need to
- let us know whenever they got stuck so we could help them along
- just get it to work first and not worry about cleaning up and refactoring
We made it clear that using any of the help offered or allowed would not be held against them and that we were most interested in how they approached the task at hand. While the tasks were well within junior territory, we were curious to see how candidates dealt with failed attempts and getting stuck, and how they worked with any help we gave. At the same time, whenever we noticed someone going down the completely wrong track, we would point it out so they wouldn't waste all their time on an approach that was certain to fail. To leave room for candidates to ask for or look up help, we gave them three times as long as we would normally estimate the tasks to take.
What we saw was interesting. Some candidates relied heavily on built-in libraries to do the work for them, while others looped through the records in a more low-level fashion. Several finished just in time or didn't quite get all the tasks done; others only took a few minutes. Some were comfortable talking while thinking things through, even while coding; others were mostly quiet and caught us up after finishing a few steps or whenever they got stuck. A few candidates needed very little support from us; for some it took quite a bit of help.
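To illustrate the first of those contrasts, here are two ways of solving the same grouping task, one leaning on the standard library and one looping by hand. This is again a hypothetical Python sketch with made-up data, and either style was fine with us:

```python
from itertools import groupby

records = [("a", 1), ("b", 2), ("a", 3)]  # made-up sample data

# Library-leaning: sort by key, then group with itertools.groupby.
by_key = sorted(records, key=lambda r: r[0])
grouped = {k: [v for _, v in g] for k, g in groupby(by_key, key=lambda r: r[0])}

# Low-level: loop over the records and accumulate by hand.
grouped_manual = {}
for key, value in records:
    grouped_manual.setdefault(key, []).append(value)

assert grouped == grouped_manual  # both: {'a': [1, 3], 'b': [2]}
```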
Even more than we had expected, we found candidates at either end of these spectrums whom we advanced to the next round. At the same time, it was clear that some candidates were simply not who we were looking for.
Overall, this approach taught us a lot more about the candidates than how well they could memorize sorting algorithms, as this anecdote shows: