Have you ever heard of the Even G2 or the Ray-Ban Meta? These are glasses with small built-in computers that display information in the wearer’s field of vision. Job seekers can wear such smart glasses in interviews to receive digital assistance. As we reported in an earlier article, candidates in online interviews are apparently increasingly turning to AI tools or teleprompters. That article drew a great deal of feedback on LinkedIn. At the center of the discussion is the question of whether using such aids is clever and innovative – or unfair, reprehensible, or even legally actionable.
Use of AI as a trivial offense?
Experienced human resources specialists know: anyone who answers technical questions off the cuff, demonstrates knowledge of the employer’s domain, and comes across as confident has either done their homework and mastered their profession – or is simply good at bluffing.
Conversations take a turn for the worse, however, when HR gets the impression that the person on the screen is not acting alone but is drawing on covert technical and/or human support in the background. But is that really such a problem?
Some voices in the discussion consider this a trivial matter, a peccadillo at most. After all, the argument goes, companies themselves have long been using AI tools in many places to screen, pre-select, and structure applications. According to Susan Reppe, the fact that applicants are now reaching for electronic aids is “ultimately just the other side of the same development”.
Is it the company’s fault?
Several contributions also allude to Goethe’s Sorcerer’s Apprentice in this context. Since companies increasingly demand AI skills, it is hardly surprising – and no cause for complaint – if applicants also use them under live conditions. True to the motto: “I can’t get rid of the spirits that I called.” Or, as the user and consultant Nico Rose writes on LinkedIn: “Oh, the irony.”
For some commentators, the timing of use also matters when evaluating digital assistants in job interviews. From Stefan Epler’s point of view, it is legitimate – indeed a sign of structured, careful work – if someone prepares “a Q&A or script for a conversation” using large language models. Covert help during the interview itself, however, remains taboo.
Where is the red line?
It becomes not only morally questionable but also legally tricky when applicants use digital helpers to claim skills they do not actually have. This applies to supposed expertise in the interview itself, and especially to entries in one’s own CV.
If the skills listed exist only on (digital) paper, it is not a long way to fraudulent misrepresentation under Section 123 of the German Civil Code (§ 123 BGB), which can render an employment contract voidable. And that usually has consequences: anyone who acts this way is not only being dishonest but also risks doing themselves a disservice. In this respect, sincerity is the ultimate litmus test – and not just here.
Ultimately, it usually becomes apparent during onboarding, and at the latest by the end of the probationary period, whether someone actually knows their trade or whether supposedly familiar areas of responsibility are in fact a closed book to them.
Transparency and trustworthiness are therefore not only a matter of fair play and good manners for everyone involved in a job interview, but also, legally and ethically, a small categorical imperative. It is no coincidence that the Apostle Paul, in his second letter to the Corinthians, warned against adorning ourselves with borrowed plumes…

Frank Strankmann is an editor who writes both offline and online. He focuses on labor law, co-determination, and regulation, and is also responsible for other projects for media brands of FAZ Business Media GmbH.
