
Picture: PC Pro Issue 362
Don’t think of the EU’s Artificial Intelligence Act as just another piece of boring legislation. On this occasion, everyone’s a winner – apart from the charlatans.
On 1 August this year, the world’s first comprehensive artificial intelligence law came into force in the EU. Well, mostly. The AI Act will be phased in, with milestones set at six, 12 and 24 months, and the whole of the Act fully in force by the start of August 2026.
This new law comes not a moment too soon in my opinion, and 2026 can’t come fast enough. Am I such a fan of regulation that I welcome every new legislative development? No, I’m not. But I have seen AI being used by scoundrels, charlatans and fraudsters to create a smokescreen, making the proper scrutiny of products and services a near impossible task.
As I never tire of repeating, the fundamental principle of criminology is that crime follows opportunity. Anything that eliminates or reduces the opportunities exploited by scammers to make a quick buck is worth the legislative effort of those tasked with drafting laws to protect society. This is not a victimless crime, as I’ll go on to demonstrate. I believe that this Act quite cannily removes an important feature from the Handbook for Total Chancers.
What made me so cynical so young? Settle in, folks. It all started when, a little over ten years ago, I sat and watched the co-founder of a startup pitch an idea for an AI tool. The audience, made up of non-technical people, was assured this tool would be perfect for in-house recruitment teams and recruitment firms. The tool, the audience was told, could determine with a high degree of certainty which interview candidates would turn out to be dishonest, reducing the amount of time needed to search through candidates, thus reducing costs and improving the outcome for the recruiter, the business and society as a whole. Better results for less: who can argue with that?
If any of the above strikes you as odd, imagine how baffled I was as I sat there. To provide some further context and a partial excuse for the apparent gullibility of many of those who parted with their cash to fund this project, the consequences of the global financial crisis were still being felt and investors were still smarting from the failure of supposedly safe triple A-rated financial products. The scene was set for whizzy new solutions to protect businesses and their investors.
Now, while I accept it’s the job of co-founders to dream big, challenge the norm and consider a variety of solutions, the purported features of this tool so closely resembled the talents of Tim Roth’s character in the TV show Lie to Me that it could have been lifted directly from the script. I was baffled on two fronts. One: had no-one in the room watched the quite fun but wholly implausible show? Two: how did they believe that this tool worked? Well, turns out no-one had to ask.
Sneaky blinkers
That question was dealt with by the co-founder directly and without prompting. She helpfully explained that the tool would be able to measure micro expressions and non-verbal cues and extrapolate – based on the number of times someone blinked or twitched – whether the candidate was likely to be trustworthy or not.
The trick, you see, was to capture the interview on camera. From this, the tool could review and forensically examine every twitch, blink and nostril flare. Having mapped the physical responses and calculated to what extent they deviated from the startup’s library of facial expressions (I can’t even get into this), the tool would then conduct further analysis. The results would then be set against the series of questions asked. So, it might have looked something like this: do you like Marmite? Candidate answers yes, I do like Marmite. Nostrils flare. Bingo, you’ve got a live one here.
All of this woo-woo was to provide the tool with sufficient data to make a sensible determination and in turn produce a report for the recruiter. Even a stopped clock is right twice a day, so you may indeed catch someone who is dishonest. Or you have a candidate who practises yoga, or thinks it’s a daft question, or has caught a whiff of something troubling, or, or, or.
If only I were making this up. I was sceptical, uninterested but, with hindsight, far too polite. Rather than issuing a guffaw that filled the room, I asked how the tool worked. In direct response to that question, the room was told that the tool was devised by a talented mathematician and computer scientist using AI. Interesting titbit of information there, but it went no way to answering the question: how did the tool work?
I’m not suggesting that everyone who flashes AI as part of their solution is a fraudster or charlatan, but undoubtedly investors have been duped by those who deliberately concealed the limitations (and, worse than that, fabricated capabilities) of the technology. When tools deviate wildly from what we know about how the real world operates then we need to take a breath.
Sell the sizzle, not the sausage
The problem with “AI” tools such as this is that nearly every person up and down the chain is a victim of this fraud, whether they know it or not. It’s easy to imagine the brainstorming for the marketing of products such as these.
Unconstrained by reality – never mind by the actual sausage – the sizzle the marketers have at their disposal to sell is limited only by their imaginations.
And everyone loses. The investors who contribute to the firm that will fail lose their money. The business buying into the “remedy” to reduce the administrative burden of combing through CVs not only fails to get a product that delivers on the promise, but also creates liabilities – as Estée Lauder found out when it engaged HireVue (see issue 356, p116). Our third loser: the unfortunate and unsuspecting candidate with dry eyes or hay fever who is removed from the selection process and loses out because the tool imagines that they’re somehow persona non grata.
The other victims of this hype are somewhat more removed. They’re the founders of businesses that utilise AI in ways that will work and benefit firms and more broadly our society, but as they don’t engage in the same hyperbolic rhetoric that comes with selling the sizzle, they miss out on crucial funding and are either delayed in getting to market or never make it.
What has the EU AI Act ever done for us?
The EU AI Act runs to over 80,000 words and is helpful in a myriad of ways that align with the broad ambition to address risks to health, safety and fundamental rights, while also protecting democracy, the rule of law and the environment. It’s impossible to make a reasonable fist of highlighting all the benefits, but it is possible to show that episodes like the one described above should no longer occur.
This is because within the unacceptable-risk category live all the AI practices that will be banned outright. Specifically, the EU AI Act, Chapter 2 prohibits “the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions”.
Hey presto, the AI tool that can infer your emotions from blushing, blinking or breathing through your nose is not so much a thing of the past, but now not so much a thing of the future, either. Which is good news for everyone, but mostly it’s helpful because tools such as these don’t and can’t exist.
The legislators made allowances for useful AI systems, so this prohibition doesn’t extend to AI systems that are intended for medical or safety reasons. And that makes sense. Inferring a person’s emotions for medical or safety reasons can be obvious even to untrained observers: pacing up and down on a bridge at midnight looks vastly different to a rational, objective viewer than a crooked smile, rapid blinking and a nostril flare.
A tool that could determine that the figure is human, that it’s the same human moving up and down, and that the timing and proximity to the bridge should trigger human intervention is a benefit. It could mean more eyes on the lookout and better outcomes. Importantly, in the bridge instance a false positive harms no-one.
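The safety case described above – spot a person, confirm it’s the same person lingering, and only then escalate to a human – boils down to a dwell-time trigger. As a minimal sketch in Python, where the `Sighting` type, the `should_alert` function and the 120-second threshold are all illustrative assumptions of mine rather than anything specified by the Act or the article:

```python
from dataclasses import dataclass

# Hypothetical dwell-time trigger: if a tracker reports the same person
# inside a watch zone for longer than a threshold, flag it for a human.
DWELL_THRESHOLD_S = 120  # illustrative threshold, not from the article

@dataclass
class Sighting:
    track_id: int       # same ID means the tracker believes it is the same person
    timestamp_s: float  # when that person was seen inside the zone

def should_alert(sightings: list[Sighting], track_id: int) -> bool:
    """True once a single track has dwelt in the zone past the threshold."""
    times = [s.timestamp_s for s in sightings if s.track_id == track_id]
    if len(times) < 2:
        return False
    return max(times) - min(times) >= DWELL_THRESHOLD_S
```

Because a false positive here costs nothing more than a second pair of human eyes, the threshold can be set conservatively – the opposite of the recruitment tool, where every false positive quietly ends someone’s job application.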
Bright future
Place your bets, folks. I’ll start: I’ll bet that the world will never be in a position where it can figure out someone’s future intentions based on a crooked smile, nostril flare, raised eyebrow, blink rate or eyeroll. Not just because I think this is the case; we all know this is the case. It also helps to take seriously the view of Joe Navarro, one of the world’s leading experts on non-verbal communication and body language. Whether you rub your nose, fold your arms or blink too often, Navarro’s view is that scientifically and empirically there’s no Pinocchio effect.
The Act provides boundaries for those with unlimited imaginations and a self-serving disregard for those whose lives may be affected by their imagineering. At least in this regard, the days are numbered for scoundrels deploying word salad and technobabble to dupe investors and potential clients.
With regard to genuine creativity, imagination and ingenuity, the criticism that the Act is a constraint looks misconceived. If anything, this legislation will likely have a noise-cancelling effect and help useful products and projects to thrive.
Article by: Dr Rois Ni Thuama
First printed: PC Pro Magazine, Issue 362, Dated 1st November 2024, Pages 116-117, ISSN 1466-3821, issue available from https://www.pressreader.com/magazines/m/pc-pro/20241101
Subscribe to PC Pro Magazine: https://www.magazinesdirect.com/uk/pc-pro-subscription/dp/8ce631dc
Reproduced here with kind permission from PC Pro Magazine.