Should You Ask Brain-Teaser Questions in Technical Interviews?

There is an interesting discussion on LinkedIn about the use of brain-teaser questions in interviewing testers.  For those who aren’t members of that group and can’t view the content, here’s an excerpt of the question and my response.

Do brain teaser questions in a QA interview tell the interviewer how good a job a QA professional will perform if hired? Are these questions relevant?

My response

…While in my role I only interview developers, not testers, I think the interviewing principles are pretty much the same.

I don’t ask brain-teaser questions to find out whether candidates are problem-solvers or able to think outside the box, because I can discover that more easily and accurately by asking them to solve coding problems.  After all, I hire developers to code, not to play puzzles.  They might be the worst puzzle solvers (nobody’s perfect, eh?), but as long as they can deal with the types of coding problems my company needs solved, that’s good enough.  Likewise, why can’t we just ask testers to write test cases, execute tests, document bugs, work with developers and so on, if their ability to perform those tasks effectively and creatively is exactly what’s needed for the job at hand?

Brain-teaser questions used to be popular, e.g. at Microsoft (read this book for the types of questions that used to be asked there; it’s a good read if you want to learn some puzzles anyway), but not anymore (see this post by one of the designers of the C# language at Microsoft, the joking part aside, of course).  Why?  Simply because they discovered better ways to predict one’s future performance (e.g. asking developers to code and testers to test) without all the inherent deficiencies of brain-teaser questions (google for these problems).

I want to elaborate a bit on this topic.  Frankly speaking, I think interviewers who resort to brain-teaser questions to, as they usually claim, find out about a candidate’s ability to solve problems and think outside the box are only wasting their time.  Interviewing puzzles, as found on the internet or in books, are usually stripped of their context and significantly simplified, so that any immediate response to them is almost certainly a canned answer, probably retrieved from the same source.  More often than not, when an interviewer asks a question like “why are manhole covers round but not square?”, he’s expecting a canned answer, for if the candidate were to provide a different answer, the interviewer wouldn’t have the knowledge to assess the validity of that answer anyway.  Or, when encountering a really smart and knowledgeable candidate without prior exposure to such a puzzle, the interviewer would have a hard time “patching” his question with more constraints in response to sincere (or wickedly sincere, if the candidate were Feynman) inquiries.  Why?  Because when asking that question about manhole covers, one is talking in the domain of manhole construction and all the physics and geology around it.  Unless he’s an expert in that particular domain, how is he supposed to know what makes sense or not, except by recognizing the words from a canned answer?  (Read this joke to see how Feynman would respond to such a question.  You don’t want to be that interviewer, do you?)

But what if the interviewer really is an expert in the domain of the puzzles he’s asking about?  Or what if his puzzles are truly “domain-agnostic”, such that anyone with sufficient intelligence can solve them?  Well then, first and foremost, he’s also supposed to be an expert in the domain the open position requires, so why not take advantage of that domain knowledge instead?  Second, since there is no such thing as a universal problem solver or creative thinker (okay, there might be a few such people in the world, but they’re certainly not looking for a job), why not spend the valuable interviewing time assessing the candidate in the exact domain the job requires?  In other words, if you interview developers, ask them to code, and if you interview testers, ask them to test.  More often than not, you will come across candidates who can talk all day about their skills and achievements and solve all kinds of puzzles by rote, but who can’t code or test well.  Some people have first-hand experience with this epidemic, so read what they have to say.  I am one of them and have tons of stories to share.  Like this one: recently I interviewed a candidate for a .NET developer position.  The candidate appeared to be very knowledgeable about many things like WPF, MVP, MVVM and so forth.  (As if he had done a lot of reading the night before.)  But when I asked him to write a piece of code on the whiteboard, a strange thing happened.

Me: do you understand binary search?

Guy: sure, some algorithm in university

Me: good, can you write a function performing binary search on a given array of ordered numbers in any language you’re comfortable with?

Guy: sure

…after 15 minutes of struggling with pseudo-code (or at least I thought it was pseudo-code, for I couldn’t recognize the language he used), full of overly complicated (or more precisely, convoluted) logic that didn’t seem to go anywhere near solving the problem…

Guy: okay, I’m not happy doing this, I can talk about the algorithm, but I’m just not happy coding it

Me (still puzzled by his sudden pushback): why?  Aren’t developers supposed to be adept at expressing algorithms in code, not words?  (Yes, I was that brutally honest.)

Guy: I’m just not comfortable, can we talk about something else?

…Well, let’s just say I kept looking for more candidates

To be clear, binary search was just an arbitrary problem I thought of at the moment.  It could have been anything, including design and architecture questions requiring candidates to draw diagrams, present approaches, resolve trade-offs and so on.  When asking about a named algorithm, however, I’m generally glad to switch to a different one if the candidate says he isn’t aware of it.  After all, I want to assess his ability, not his memory.  That said, binary search is among those classic algorithmic problems that, if a programmer claims not to know it, sets off an alarm for me.
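For reference, the kind of whiteboard answer I was hoping for fits in about a dozen lines.  Here’s a minimal sketch in Python (the candidate was free to pick any language; this particular version and its names are just my illustration):

```python
def binary_search(nums, target):
    """Return the index of target in the sorted list nums, or -1 if absent."""
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1   # target can only be in the upper half
        else:
            hi = mid - 1   # target can only be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # -> 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -> -1
```

Nothing fancy: each iteration halves the search range, which is the whole point of the algorithm, and exactly the property I’d expect a candidate to be able to explain while writing it.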

That was a recent encounter, but not the only one, since I ask almost all candidates to write code in an interview, except those who are evidently qualified based on their speaking, blogging, projects or recommendations (and even then I set the bar for automatic acceptance high, because a mis-hire is a big loss I don’t want to take).  This method has served me well ever since I started doing technical interviews.  Granted, it isn’t the only thing I use in a programming interview; there are plenty of others, but they aren’t relevant to this post, where we’re talking about a better way than brain-teaser questions to find out how good and creative a problem solver the candidate is.  Let’s save the others for future posts.

Although I have interviewed many developers, I can’t recall interviewing a single tester.  Yet I think the interviewing principles, particularly this method of validating skills, apply equally well to both developer and tester interviews.  I’ll be glad to hear about your own experience.  Have you tried this method?  Did it work for you?  Any interesting stories to share?  Or, for the sake of being enlightened, I’ll be very happy to hear from those who have reaped success with brain-teaser questions.

Comments

  1. William Aramoni says:

    Thanks for the explanation. I would also add: when a construction manager is hiring a civil engineer, why not ask the following question: what’s the difference between web-based testing and desktop testing? It is also not in the civil engineer’s domain knowledge, but it tells the construction manager whether the applicant is smart at software testing.
