
Generative AI and the Future of Technical Interviews: Addressing Concerns of Increased Cheating

Hiring Developers

The current popularity of generative AI (Gen AI) tools like ChatGPT, Claude, and other large language models (LLMs) has certainly raised no shortage of questions for all of us. Most of these questions are grounded in fear – anxiety that candidates will use these technologies to misrepresent their skills, leading to low-performing new hires in your organization.

Because of this, many of the questions we get asked at CoderPad have to do with keeping ChatGPT from derailing the recruiting process.

  • How do we ensure candidates aren’t cheating by using ChatGPT in an interview?
  • What measures are we planning to take to ensure ChatGPT isn’t used in the code submitted in assessments?

At CoderPad, we believe in embracing new technologies, including the notorious ChatGPT. In the world of software development, we don’t consider using Google or StackOverflow to look up a function, or using autocomplete and GitHub Copilot for efficiency, to be cheating. These tools make your hard-working developers even more productive.

These technologies are a critical part of a developer’s toolkit.

They help developers code smarter and more efficiently than they otherwise could.

They keep developers from reinventing the wheel and instead let them move projects forward by building the next wheel better and faster.

We are not saying that a developer’s job is to be an expert copy-and-paster, but rather to leverage the tools available to develop more creative, well-thought-out solutions in the limited time they have.

Developers using StackOverflow and Google for guidance on technical issues was also once seen as a threat to the recruiting process – a way for some candidates to gain an unfair advantage over those who did not know where to look for technical guidance.

We see the current conversation about the validity of using Copilot and ChatGPT as interviewing tools as much the same conversation we were having years ago about StackOverflow and Google.

Amanda Richardson – CEO of CoderPad

Knowing how to use these technologies is a skill developers are expected to have – and one that should be assessed. We see models like ChatGPT as the next set of such tools: they help developers be better, but they can’t replace problem-solving, creativity, and logic.

🔖 Further reading: “Don’t Expect Candidates to Have Everything Memorized”, the Dos and Don’ts of Cheating Prevention With Nathan Sutter

How should employers hire for developer talent in the world of Gen AI?

While there’s no doubt Gen AI has been and will be used for cheating, this concern can be mitigated with the right interview and assessment practices. In fact, by integrating AI into your process, you can gain a more accurate understanding of candidates’ skills and save valuable time.

Play the long game, build trust

We believe that building trust and transparency throughout the screening process is the most effective solution on multiple levels. This approach not only distinguishes your process from irrelevant or stressful hiring methods but also reduces the likelihood of candidates resorting to dishonest tactics by making them feel respected and valued. Therefore, you should:

  • Clearly communicate the purpose of the assessment.
  • Offer appropriate support at all levels of the process — the candidates should not be afraid to ask you questions!
  • Foster an environment that reflects genuine collaboration. Show the candidate you care about their growth and success.

By integrating these best practices, you can minimize cheating while also enhancing the overall candidate experience, ensuring a fair and effective assessment process.

Create better questions

Complex, realistic work-related questions not only resist AI manipulation but also offer a far more engaging experience.

Develop questions that address your business’s current challenges and mirror real-world job scenarios. This approach will attract candidates who:

  • Understand how to address your key problems.
  • Are genuinely motivated by the work you do.
  • Align with your company’s mission.

By crafting complex, iterative questions that require multiple steps or deeper understanding—rather than simple, one-step questions—you can significantly reduce the risk of cheating. These questions should involve:

  • Critical thinking
  • Problem-solving
  • Creativity

These are areas where AI typically struggles to provide comprehensive answers without human-like context or reasoning. This strategy not only makes the assessment more relevant to the actual work the candidate would perform but also minimizes the chances of AI providing complete responses.

[Image: “AI ‘resistant’ questions” – less resistant: algorithm questions, single-pass exercises, single-file exercises; more resistant: complex questions, iterative questions, multi-file exercises.]

The implication is that more complex, iterative, and multi-faceted question structures make it harder for AI to produce a correct answer without truly understanding the problem – which is exactly what you want when trying to minimize AI-assisted cheating in your screening process.
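
As a rough illustration of what a more resistant, iterative exercise can look like (the scenario, class name, and steps here are hypothetical, not drawn from CoderPad’s question bank), you might hand candidates working starter code and then build on it across several prompts rather than asking for one self-contained algorithm:

```python
# Hypothetical starter for an iterative exercise (illustration only, not a CoderPad question).
# Step 1 (given): candidates read and critique this naive fixed-window rate limiter.
# Step 2 (follow-up): extend it to a sliding window.
# Step 3 (follow-up): discuss how to make it safe across multiple worker processes.
import time
from collections import defaultdict


class FixedWindowRateLimiter:
    """Allows at most `limit` requests per `window_seconds` for each client."""

    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window_seconds = window_seconds
        self._counts: dict[tuple[str, int], int] = defaultdict(int)

    def allow(self, client_id: str, now: float | None = None) -> bool:
        """Return True if this request fits within the client's current window."""
        now = time.time() if now is None else now
        window = int(now // self.window_seconds)
        key = (client_id, window)
        if self._counts[key] >= self.limit:
            return False
        self._counts[key] += 1
        return True


if __name__ == "__main__":
    limiter = FixedWindowRateLimiter(limit=2, window_seconds=60)
    print([limiter.allow("alice", now=0) for _ in range(3)])  # [True, True, False]
```

Each follow-up step – critique the fixed-window approach, extend it to a sliding window, reason about multiple processes – requires the candidate to engage with the code in front of them, which is much harder to outsource wholesale to an AI model.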

With that said, Gen AI can still be an effective and important part of this process, especially if your developers will be using it in their day-to-day jobs.

So how do you go about implementing this while addressing potential cheating concerns?

Gen AI and the asynchronous technical assessment

During the asynchronous technical assessment phase, allow candidates to use AI models to come up with answers. You can then follow up with written questions asking the candidate to critique the code produced by the model (see the example after this list):

  • How does the code work? Ask the candidate to write a README explaining it.
  • What would they have done differently to solve the problem?
  • What could improve the code’s performance?
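
To make this concrete, here is a hypothetical first pass a model might produce for a simple task – the task and function name are made up for illustration, not taken from a CoderPad exercise. The written follow-ups above would then ask the candidate to explain and improve it:

```python
# Hypothetical AI-generated first pass for "find duplicate order IDs" (illustration only).
# Written follow-ups could ask the candidate to explain why this is O(n^2),
# rewrite it with a set, and document the intended behavior in a README.
def find_duplicates(order_ids: list[str]) -> list[str]:
    duplicates = []
    for i, current in enumerate(order_ids):
        for other in order_ids[i + 1:]:
            if current == other and current not in duplicates:
                duplicates.append(current)
    return duplicates


if __name__ == "__main__":
    print(find_duplicates(["a1", "b2", "a1", "c3", "b2", "a1"]))  # ['a1', 'b2']
```

A candidate who understands the code will quickly point out the nested loop and the membership check on a list, and propose a set-based rewrite; a candidate who pasted an answer without reading it usually cannot.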

Another option available to you with CoderPad is an AI-generated and validated follow-up question. If our ChatGPT integration detects a suspicious answer, it generates a follow-up question asking the candidate to explain a piece of their code, then validates their answer to ensure they understood what they were typing or pasting into the answer box.
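
If you are curious how that kind of flow works in general, the sketch below shows the basic pattern of generating a follow-up question and validating the answer with an LLM. It is not CoderPad’s integration – the prompts and model name are assumptions – and it uses the OpenAI Python SDK with an `OPENAI_API_KEY` set in the environment:

```python
# Generic sketch of "generate a follow-up, then validate the answer" with an LLM.
# This is NOT CoderPad's integration; the prompts and model name are assumptions.
# Uses the OpenAI Python SDK and expects OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def generate_follow_up(candidate_code: str) -> str:
    """Ask the model for one question probing whether the candidate understands their code."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Write one short question asking the author of this code to "
                       f"explain a non-obvious part of it:\n\n{candidate_code}",
        }],
    )
    return response.choices[0].message.content


def validate_answer(candidate_code: str, question: str, answer: str) -> str:
    """Ask the model whether the candidate's answer shows real understanding of the code."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Code:\n{candidate_code}\n\nQuestion: {question}\n"
                       f"Candidate's answer: {answer}\n\n"
                       "Reply 'understands' or 'does not understand', with one sentence of reasoning.",
        }],
    )
    return response.choices[0].message.content
```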

Allowing Gen AI during a live interview

During a live interview, we recommend spending time digging into the why behind the candidate’s work.

Ask questions that help you assess how the candidate thinks and solves problems. Let candidates use ChatGPT or other Gen AI to assist them, then talk through the result together the way you would in a code review.

If you suspect that the candidate used an AI model to develop code and simply copied it, you can check with CoderPad’s playback feature. It lets the interviewer play back the candidate’s keystrokes, which makes it obvious if code was copied and pasted in from an AI model.
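
To illustrate why pasted code stands out in a keystroke recording (this is a generic sketch, not CoderPad’s implementation or data model), typed code arrives a few characters at a time while a pasted AI answer lands as one large insertion, which is trivial to flag:

```python
# Hypothetical edit log of (timestamp_seconds, inserted_text) pairs – not CoderPad's data model.
# Typed code arrives a few characters at a time; a pasted AI answer lands in one large chunk.
def flag_probable_pastes(edits: list[tuple[float, str]], min_chars: int = 80) -> list[tuple[float, str]]:
    """Return edit events whose inserted text is suspiciously large for a single keystroke."""
    return [(ts, text) for ts, text in edits if len(text) >= min_chars]


if __name__ == "__main__":
    session = [
        (1.0, "d"), (1.2, "e"), (1.4, "f "), (1.6, "solve():"),
        (5.0, "def solve(nums):\n    return sorted(set(nums))\n" * 3),  # one large insertion
    ]
    for ts, text in flag_probable_pastes(session):
        print(f"{ts:.1f}s: inserted {len(text)} characters at once")
```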

Ultimately, ChatGPT is not an excuse for a developer not to think, nor a replacement for understanding how code works – in the same way that a calculator is not a replacement for a math student knowing how to subtract or divide. It is, however, a new and faster way to get things done. And when coupled with problem-solving, creativity, and logic, it can make a great developer into an even stronger one.