Trust is at the heart of the EU AI Act, which is due to take full effect in 2026. The act will apply to every organisation that uses AI, for any purpose, and will require them to inform people whenever they are interacting with it.
For example, if elements of a piece of advertising copy were generated using AI, consumers must be informed of that fact. Similarly, if an online chat agent on a website is AI-powered, customers must be informed. If a newspaper account of sports results has been compiled using AI, readers must be informed. And if an image has been altered, enhanced or created using AI, the people who see it must be informed.
This is extremely important for public trust, not just in AI but also in the organisations using it. As we all know, AI can sometimes get things spectacularly wrong, and people need to know when it has been involved in generating any information or other content.
An example of AI getting things very wrong was highlighted recently by Dr Daniel Hook, chief executive of research technology company Digital Science. Dr Hook exposed a basic failure of AI to grasp the real world by asking the Midjourney image-generating tool to draw a single banana on a plain background. The result was a drawing of two bananas. He repeated the experiment over a period of weeks and got the same result each time.
Digital Science subsequently launched its #MindTheTrustGap campaign, which aims to raise awareness of global issues of trust and integrity in science, innovation and research.
“I asked for a single solitary banana, on its own, but none of the variants I received contained just one banana,” Dr Hook says.
Thinking he must have made an error, Dr Hook tried different instructions such as, “a perfect ripe banana on a pure grey background casting a light shadow, hyperrealistic”, or “a single perfect ripe banana alone on a pure grey background casting a light shadow, hyperrealistic photographic”, and “ONE perfect banana alone on a uniform light grey surface, shot from above, hyperrealistic photographic”. All produced images of two or more bananas.
Upping the ante somewhat, Dr Hook then asked the app to generate “an invisible monkey with a single banana”. That produced very visible monkeys holding two or more bananas.
Dr Hook eventually achieved some success with the instruction “a single banana on its own casting a shadow on a grey background”. Three of the four images generated were of a single banana, but the fourth still contained two bananas.
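Dr Hook's approach is, in effect, a small repeatability test: run each prompt several times and count how often the model returns exactly one banana. A minimal sketch of such a harness is below. Note that `count_bananas` is a hypothetical stand-in: a real harness would call an image-generation API and count objects in the returned image (for example, with an object detector), neither of which is shown here. The simulated results are illustrative only and do not reproduce Dr Hook's figures.

```python
import random
from collections import Counter

def count_bananas(prompt: str, seed: int) -> int:
    """Hypothetical stand-in for 'generate an image, then count the
    bananas in it'. Deterministically simulates a model that tends to
    over-produce objects; a real implementation would call the
    generation API and run object detection on the result."""
    rng = random.Random((len(prompt), seed))
    return rng.choice([1, 2, 2, 3])

# Two of the prompt variants quoted in the article.
prompts = [
    "a perfect ripe banana on a pure grey background",
    "a single banana on its own casting a shadow on a grey background",
]

# Generate four images per prompt and tally the banana counts,
# mirroring the four-image batches described above.
results = {p: Counter(count_bananas(p, s) for s in range(4)) for p in prompts}

for prompt, counts in results.items():
    single_rate = counts[1] / sum(counts.values())
    print(f"{single_rate:.0%} single-banana images for: {prompt!r}")
```

The point of structuring the test this way is that a single generation proves little either way; only the success rate across repeated runs of the same prompt reveals how reliably the model follows an instruction like "a single banana".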
He has some advice for people using AI: “The use cases where we deploy AI have to be appropriate for the level at which we know the AI can perform and any functionality needs to come with a ‘health warning’ so that people know what they need to look for – when they can trust an AI and when they shouldn’t.”