
Worried About ChatGPT in Coding Interviews? Here’s 3 Ways to Continue Running Great Interviews


Remember when you only had to worry about candidates copying and pasting code from StackOverflow?

Now there’s a new robot kid on the block, and boy, is it making interviewers and technical recruiters nervous.

Three-panel comic: a bartender labelled "interviewers" throws a patron labelled "candidates using StackOverflow" out of the bar; in the final panel, another patron labelled "candidates using ChatGPT" sneaks back in behind the bartender.

And rightfully so – it opens a whole new way for candidates to pull the wool over your eyes and fake skills they don’t have, meaning you’ll have a bunch of useless new hires that will cost you hundreds of thousands of dollars.

Or not.

Let me remind you of this piece of wisdom I doled out in April of 2022:

We at CoderPad do not philosophically believe in “anti-cheat” tactics. That’s an arms race that only produces temporary winners – and it’s never good to start an interview coming from a place of distrust.

“But unknown CoderPad blogger,” you say (my coworkers call me Ken, by the way), “that’s all well and good for you to say, but I’m the one putting my company’s reputation on the line with these phony candidates. Do you just expect me to ignore ChatGPT?”

Of course not. First and foremost, one of the best things you can do is be clear with candidates about your expectations and the consequences of using ChatGPT in an interview.

There’s also a spectrum of interview designs that can help you mitigate ChatGPT’s influence. On one end, you can embrace it and design interview questions around its use, letting candidates know it’s okay to use it (like an open-book test). On the other end, you can create questions that are ChatGPT-resistant. This blog post will focus on the latter, though I’d encourage you to consider the other two options as well.

Get ‘em talkin’

Nothing puts a stop to using someone else’s solution faster than being asked to explain the logic behind it.

It’s unusual for a candidate to skip straight to the ideal answer. Humans think through complex problems along an evolutionary path, starting simple and pruning branches until only the most promising one remains. The best candidates eventually arrive at the right answer, but you get to see the methodical thought process that took them there.

We’ve talked extensively about the benefits of making your interviews a collaborative discussion, but one of the lesser-known benefits these days is that it encourages your candidates to explain the large chunk of pasted code you just saw pop up on your interview screen:

Nothing fishy here….

Instead of waiting until the interview playback to notice their massive copypasta and then terminating their candidacy over perceived cheating, ask them about the code they copied and pasted.

They may have a good reason – maybe they know it would be more efficient to use their time to outsource the trivial code to ChatGPT and focus instead on the lines that take more critical thinking. Or perhaps they knew the basic idea of what they wanted to code but needed a little help verifying the implementation – help like what they’d usually receive from a coworker. 

Or maybe they don’t have a good reason and lied about their coding skills. Whether or not their reasons (or lack thereof) are red flags is ultimately up to you. But you won’t know if the copy-paste you just witnessed is in good or bad faith until you get the candidate to open up about it.

The more you talk with them and make the interview experience collaborative, the more comfortable the candidate will be and the more likely they’ll explain their thought process to you.

You’ll learn more about their skill set by asking them how they think than by just watching them code anyway. And you get to watch them squirm when ChatGPT answers their question incorrectly.

Create questions with nuance

One of the things we’re learning about current iterations of AI-powered tools like ChatGPT is that they don’t do so well at understanding nuance or the technical intricacies of specific topics.

Take this question about creating palindromes from date formats:

Despite 1) knowing what a palindrome is and 2) knowing that different countries use different date formats, ChatGPT struggles with combining those two topics. 
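To give you a flavor of the nuance involved, here’s a minimal sketch of that kind of question in Python. The prompt and the two date formats shown are my own illustration, not the exact exercise from the conversation above; the point is that whether a date is a palindrome depends entirely on which country’s format you render it in:

from datetime import date

def is_palindrome_date(d: date, fmt: str) -> bool:
    """Render the date in the given format (digits only) and check
    whether it reads the same forwards and backwards."""
    digits = d.strftime(fmt)
    return digits == digits[::-1]

# 2 February 2020 is a palindrome in both day-first and month-first formats:
print(is_palindrome_date(date(2020, 2, 2), "%d%m%Y"))   # True ("02022020")
print(is_palindrome_date(date(2020, 2, 2), "%m%d%Y"))   # True ("02022020")

# 22 February 2022 only works when the day comes first:
print(is_palindrome_date(date(2022, 2, 22), "%d%m%Y"))  # True ("22022022")
print(is_palindrome_date(date(2022, 2, 22), "%m%d%Y"))  # False ("02222022")

A candidate who understands both halves of the question will reach for something like this quickly; ChatGPT, at least in the conversation above, kept fumbling the combination.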

There’s also this date-time issue:

Here, we can see that ChatGPT doesn’t understand timezones well enough to give the user a correct answer when they ask for the date.
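If you want to see that nuance concretely, here’s a small sketch using Python’s standard zoneinfo module (the two example timezones are mine, not the ones from the screenshot). The same instant can fall on two different calendar dates depending on where the person asking happens to be:

from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# Take one instant in time and ask for "the date" in two places.
now_utc = datetime.now(tz=ZoneInfo("UTC"))
for tz_name in ("Pacific/Auckland", "America/Los_Angeles"):
    local = now_utc.astimezone(ZoneInfo(tz_name))
    print(tz_name, local.date())

# Late in the UTC day, Auckland has already rolled over to tomorrow while
# Los Angeles is still on today, so any answer that ignores the asker's
# timezone can be off by a full day.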

ChatGPT will either give you a wrong answer (like above) or “beat around the bush” when faced with nuanced questions. Consider this conversation:

question: "who is the greatest football player of all time?" chatgpt answer: "The answer to this question is highly subjective and can vary depending on personal opinions and preferences. However, many people consider the Brazilian forward, Pelé, to be the greatest football player of all time. Pelé won three FIFA World Cups with Brazil and scored a remarkable 1,281 goals during his professional career. He was known for his incredible skill, speed, and agility on the field and his ability to score goals in seemingly impossible situations. Other players such as Diego Maradona, Lionel Messi, and Cristiano Ronaldo are also often mentioned as some of the greatest football players in history. Ultimately, the debate over who is the greatest football player of all time is likely to continue for many years to come."

First, notice that it doesn’t understand the cultural nuances of the word “football” – the word refers to a different sport in the US, Australia, Ireland, and South Africa than it does in the rest of the world.

Second, notice the “hedging” that ChatGPT engages in:

“The greatest football player of all time is a subject of much debate and is ultimately subjective.”

“It is important to note that comparing players across different eras is difficult and often contentious, and opinions on the greatest football player of all time will vary depending on personal preferences and criteria for greatness.”

Most humans have stronger opinions on that question, and on more business-related questions as well. For example:

  • How does a candidate spend their time on a question that requires them to choose between priorities like quality, cost, and time?
  • How do your front-end candidates choose which design elements to focus on for a landing page?
  • And how do candidates justify these decisions?

Consider also that being a software developer is rarely about writing snippets of code in isolation. There are bigger-picture nuances that need to be taken into account, which is why adding multi-file questions to your interview can be beneficial.

While ChatGPT can easily hand candidates answers to generic, single-file algorithm questions, asking for a multi-file solution that demonstrates those engineering nuances is not only a more realistic interview exercise but also one that is far more likely to stump ChatGPT.
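As a sketch of what that could look like, here’s a deliberately tiny multi-file exercise (the file names and the visit-counter scenario are purely illustrative). In the real pad each section would live in its own file, so the candidate has to think about imports, module boundaries, and where a change belongs rather than just filling in one function:

# store.py - a tiny in-memory key/value store the other files depend on
class Store:
    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def set(self, key, value):
        self._data[key] = value


# service.py - business logic that imports Store from store.py
def record_visit(store: Store, user_id: str) -> int:
    """Increment and return the visit count for a user."""
    count = store.get(user_id, 0) + 1
    store.set(user_id, count)
    return count


# test_service.py - failing tests the candidate is asked to make pass
def test_record_visit_counts_per_user():
    store = Store()
    assert record_visit(store, "alice") == 1
    assert record_visit(store, "alice") == 2
    assert record_visit(store, "bob") == 1

test_record_visit_counts_per_user()
print("all checks passed")

Even a toy project like this forces conversations about structure (why does the service take a Store instead of creating its own?) that a single pasted snippet never will.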

Ask ChatGPT

Going to get meta here and see what ChatGPT has to say about its controversial role as a cheat enabler. Here’s what it told me when I asked it, “How can I tell if the answer to an interview question came from ChatGPT?”:

As an AI language model, I am designed to provide information and assistance to anyone who asks, and I cannot control who uses my responses. However, there are several ways you can try to determine if a candidate’s answer came from ChatGPT:

  1. Listen for consistency: While it is possible for a candidate to provide a response that is similar to one generated by ChatGPT, it is unlikely that they will be able to replicate the exact wording and phrasing. Listen for consistency in their response, such as the use of similar vocabulary and grammar to their other answers.
  2. Ask follow-up questions: To determine if a candidate’s response is genuine and original, ask follow-up questions that require them to elaborate or provide more detail. If their response is based solely on information from ChatGPT, they may struggle to provide additional information or context.
  3. Check for understanding: Ask the candidate to explain the concept or idea in their own words. If their response is based solely on information from ChatGPT, they may struggle to explain it in their own words or may use the same phrasing and vocabulary as ChatGPT.
  4. Use multiple assessment methods: To reduce the reliance on ChatGPT as a source of information, use multiple assessment methods to evaluate a candidate’s skills and abilities. This can help to provide a more comprehensive evaluation of the candidate’s abilities and reduce the reliance on ChatGPT as a source of information.

Ultimately, it may be difficult to determine if a candidate’s answer came from ChatGPT, and it is important to keep in mind that candidates may use a variety of resources to prepare for an interview. The best approach is to focus on evaluating the candidate’s overall skills, abilities, and fit for the position.

It looks like ChatGPT has been browsing the CoderPad blog for answers again. Or maybe I’m unknowingly training my replacement.

Either way, ChatGPT has a long way to go before it becomes a concern for interviewers and recruiters who continuously improve their candidate experience.

Talk with your candidates and give them real, thought-provoking business cases to solve, and you’ll have little to worry about.