
Welcome to the latest instalment of .js MarketWatch, the series that spotlights developers and engineering leaders across the JavaScript landscape. Each month, we explore a specific JS framework or trend through the lens of someone working at the heart of it, uncovering what’s shaping the market and where the opportunities lie. This month we’re talking to Mike Guta, Tech Principal at AND Digital, about the intentional use of AI in technical tests.

 

Can you give us an overview of your experience with technical interviews, and how you’ve seen AI usage evolve in candidate assessments?

I’ve been working as a software engineer for over 23 years, and during this time I’ve interviewed and tested numerous candidates. For a very long time, candidates were not allowed to use any additional materials during technical tests. Over time, we gradually allowed them to use Google search or Stack Overflow, as long as they didn’t try to find the solution directly. But now, with more companies expecting their developers to make use of AI coding assistants in their day jobs, it’s only natural that we’re starting to see AI allowed in interview tests as well.

 

Why do you think AI use during technical tests is especially important right now for the developer community?

Developers and companies already make use of AI tools and agents like Cursor, Copilot, and Windsurf during daily software engineering tasks. Many companies already expect their engineers to be comfortable with, and adept at, using AI. Similarly, plenty of engineers rely on AI for repetitive code and boilerplate, and this repeated reliance becomes the norm. It’s only fair that we allow candidates some of the modern conveniences they’re used to while we assess their coding ability – as long as we keep an eye on how they use these tools to complete the task.

 

Is this something companies should be adopting more readily in their hiring process – and what’s the risk if they ignore this trend?

First, I wouldn’t call it a trend yet. It’s only in the past few months that I’ve started conducting interviews for one of AND Digital’s largest clients where I’ve allowed candidates to use AI for code suggestions. I’ve explicitly restricted them from using it in chat or agent mode, because I don’t want them to use it as a shortcut for their own thinking. I’ve heard of a handful of other instances, but it’s still not routine practice across the industry.

That said, I think companies who already make use of AI should be ready to adopt this in their hiring process. After all, if they expect all their developers to make good use of AI tools that they pay for, then they need to look for candidates who can make good use of AI as well. We’ve seen research from DORA recently showing that AI can be an amplifier for a good engineer, while the opposite is true for a less skilled one. So it’s only fair that you look for good engineers who can be amplified by AI in your hiring process.

 

What kind of impact do you think this will have over the next 6-12 months on developers interviewing for roles, or on businesses hiring talent?

Personally, I think within a year, more than half of the companies that make day-to-day use of AI will allow its use in a limited capacity during the interview process. Alongside that, the expectation that candidates can interact effectively with AI will become increasingly common.

If you’re a candidate, you need to be good at “talking” to AI – but without actually and explicitly talking to it. You need to give it enough hints through your coding for its suggestions to help rather than hinder you. And if you’re a company that cannot yet assess good candidates amplified by AI, then you may get fewer of those through your pipeline – which isn’t necessarily a bad thing if you’ve made that choice intentionally.
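To make that "talking through your coding" concrete: with an inline assistant, the prompt is the code itself, so precise names and comments steer the suggestions. A minimal sketch – the function and its spec are illustrative, not taken from any real interview:

```javascript
// An inline assistant has no chat box to prompt – the "prompt" is the
// code itself. A descriptive name and a precise comment give it enough
// context to suggest a correct completion rather than a misleading one.

// Return the number of whole days from `nowMs` until the ISO 8601 date
// string `isoDate`, rounding down; negative if the date is in the past.
function wholeDaysUntil(isoDate, nowMs = Date.now()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.floor((new Date(isoDate).getTime() - nowMs) / msPerDay);
}

console.log(wholeDaysUntil('2030-01-03', new Date('2030-01-01').getTime())); // 2
```

The comment pins down the edge cases (rounding direction, past dates) that a vaguer hint would leave to the assistant’s guesswork.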

 

Are there any specific AI tools or experiences that have shaped how you think about AI-assisted technical interviews?

I make regular use of GitHub Copilot, Cursor, and CodeRabbit while working for my current client. However, these tools haven’t really shaped how I think about AI-assisted interviews. What has really shaped my thoughts is seeing how other people use them. It’s very revealing when a developer ignores their own knowledge and experience and gets misled by AI – they can very quickly go down the wrong rabbit holes. Sometimes they realise and pull themselves back up, but at other times you have to help them back on track and steer their thinking in the right direction.

 

Where would you recommend companies start if they want to integrate intentional AI use into their technical assessment process?

If you want to identify good engineers that can be amplified by AI, here’s what I recommend:

  • Set slightly more difficult challenges than you otherwise would. Perhaps add an extra bonus requirement, just to see whether they focus on the relevant context from the start or get sidetracked.
  • Continue requiring candidates to research official documentation or use Google Search – they shouldn’t rely exclusively on AI. AI-generated summaries such as Gemini results will also show up in searches, and that’s fine.
  • Allow AI suggestions in their IDE only – they should not use chats or agents. They should never be pasting the contents of the requirements.
  • Maintain the need for candidates to explain their thinking – the problem they solve and the code they write is still for you to read and understand, even if they get a bit of help with the typing.
  • Assess their critical thinking more intensively – this is probably even more important now than before.
  • Give them more opportunities to correct mistakes that an AI could make – for example, intentionally give them slightly misleading stub implementations and see how they deal with them.

Can you give me an example of a misleading stub implementation?

Since this is JavaScript-focused, here’s a practical example: if you require candidates to implement a function that fetches a resource over HTTP, it will need some kind of async code. But if you provide them with an empty function to start from, don’t label it async. Since the function is empty when you write it, the keyword doesn’t matter at that point – but once candidates write the implementation, they should recognise that they need to change the signature to async.

An AI assistant will often assume the absence of the async keyword is intentional and will turn to Promise chaining rather than async/await statements. I’ve seen at least two candidates who accepted the AI’s suggestion and carried on down that rabbit hole without questioning whether they should modify the function signature.
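A sketch of what such a stub and the expected correction might look like – the function name and endpoint are illustrative, not from an actual test:

```javascript
// The stub handed to the candidate – deliberately NOT marked async,
// even though any real implementation will need to await an HTTP call.
function fetchUser(id) {
  // TODO: fetch /api/users/:id and return the parsed JSON body
}

// An assistant reading the non-async signature will often preserve it
// and suggest Promise chaining instead:
function fetchUserSuggested(id) {
  return fetch(`/api/users/${id}`).then((res) => res.json());
}

// A strong candidate questions the signature and changes it themselves:
async function fetchUserFixed(id) {
  const res = await fetch(`/api/users/${id}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Both versions behave the same at runtime; what you’re observing is whether the candidate notices the signature is theirs to change, or treats the stub as fixed because the AI did.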

Or simply replace an exact word like “heaviest” in your requirements with a vaguer word like “biggest” when you give them the implementation stub.
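For instance, if the written requirements say “heaviest” but the stub’s doc comment says “biggest”, an assistant reading only the stub may compare by the wrong property. A hypothetical sketch (the parcel shape and property names are invented for illustration):

```javascript
// Requirements shown to the candidate separately: "return the heaviest parcel".
// The stub's comment is deliberately vaguer – "biggest" could mean weight,
// volume, or anything else, and an assistant may guess wrong.

/** Return the biggest parcel from the list. */
function biggestParcel(parcels) {
  // A candidate who checks back against the written requirements compares
  // by weight, not by whatever notion of "big" the AI happens to pick.
  return parcels.reduce((best, p) => (p.weightKg > best.weightKg ? p : best));
}
```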

How are you and your team at AND Digital currently leveraging this approach in your hiring process?

Different organisations have different levels of AI maturity. The client I’ve been personally helping for nearly two years has very high aspirations in empowering their engineering workforce with effective AI tools. They’ve invested significant time and effort in trialling and rolling them out, as well as actively monitoring key metrics. This is something I’ve been directly involved in.

Since AI proficiency is so important to them, I’ve explicitly allowed candidates for senior engineering positions to make use of AI code suggestions. The strongest ones shone through faster, while the cracks in the weaker ones became more visible. It was almost like an amplifier – exactly what the research suggests.

What are the key challenges or considerations companies should be aware of when allowing AI in technical tests?

You still have to do your interviewer job really well. Whether candidates use AI or not, you still need to notice when they’re anxious and reassure them, notice when they go down rabbit holes, and guide them back on the right track. Even the strongest candidates will go down rabbit holes – it’s entirely natural. But the better ones learn from their mistakes.

The challenge with AI tools is that they leave fewer natural opportunities for mistakes, so you may have to artificially increase the chances of a mistake occurring. That’s something you have to be comfortable doing. But you’re not trying to mislead the candidate – just the AI that helps them.

Consultant insights

“Across the technical hiring landscape, we’re seeing AI move from experimentation to structured, organisation-wide adoption. Many of our clients – particularly those with mature engineering practices – are now establishing AI councils or enablement groups to evaluate tools, set standards, build training programmes, and track performance improvements. This is no longer about testing out tools; it’s about rolling out AI intentionally and responsibly across engineering teams.

That shift is now influencing hiring. Companies that use AI in day-to-day delivery increasingly expect candidates to demonstrate not just core software engineering ability, but also the ability to work effectively alongside AI – knowing when to accept or override its suggestions. As AI becomes a natural part of the engineering toolkit, live technical interview assessments are beginning to reflect this reality. However, that’s not to say every organisation is ready to allow AI in interviews. Many still prefer to test an engineer’s capability without assistance, believing it’s important to understand their raw problem-solving ability first.

As AI becomes a natural part of the development workflow, we expect more teams to experiment with AI-inclusive assessments,” says Marcus Tansey, Team Leader of JavaScript & Mobile.

Want to find out the top AI tools being used across the industry right now? We’re asking hundreds of professionals every month across engineering and product what their favourite AI tools are at the moment. Download the report here:

 

Looking to expand your Dev Team? Or hoping to take the next step in your career? Get in touch with our JavaScript Specialist, Marcus Tansey on LinkedIn or give them a call on 020 3940 7464.
Written by Marcus Tansey, Senior JavaScript & Mobile Team Leader
