Fake AI images and videos are here to stay, and they are already creating real-life chaos and confusion, including on UK railways.
Videos and images created with AI are everywhere online. Some are convincing; others are all too easy to identify thanks to sloppy, often disturbing details such as people’s mangled hands, a telltale sign of the tech’s involvement.
And it is not just individuals who are being fooled by fake images.
Network Rail, the owner of Britain’s massive rail infrastructure with over 40,000 employees, fell for a hoax image showing a badly damaged railway bridge in Lancaster last week and shut down an entire railway line.
The convincing image appeared on social media sites on Thursday night, showing chunks of the Carlisle Bridge scattered on the road underneath.
At a quick glance, the image appeared genuine, and railway staff decided to close a section of the West Coast Main Line, the UK’s busiest, while safety checks were carried out. No faults were found.
In total, 32 passenger and freight trains were halted, with services as far away as Scotland stopped, causing delays and inconvenience to passengers.
The context of the image made it even more believable as Lancashire and the Lake District had been rocked by an earthquake just hours before.
The image was found to have been altered, potentially using AI, after a BBC investigation used a reverse image search tool.
‘It demonstrates the enormous power a single doctored image can have,’ said Naomi Owusu, the CEO and co-founder of live digital publishing platform Tickaroo. ‘What happens if a similar chain reaction starts with nationally critical infrastructure, or if trigger-happy governments get it wrong?’
She told Metro that a few things stood out about the image, including overly intense lighting and a hole in the ground in the foreground, which signalled that something was wrong.
Naomi said: ‘The small arc of fallen stones looks oddly fresh next to the older ones, and anyone familiar with the area would question whether the house sits quite where the image suggests. Even the metal fence that should be visible isn’t there. None of these details alone prove it’s fake, but together they signal that something isn’t right.
‘Imagery like this should always be confirmed. I don’t know the specific security protocols, but there needs to be a clear process for checking these situations before action is taken.
‘Otherwise, roads or train lines get closed for no real reason, emergency crews waste time, and the public’s day-to-day lives are unnecessarily impacted.’
Network Rail told Metro it always takes any safety concerns seriously, and teams spent around an hour and a half inspecting the bridge before the image was ‘found to have been a hoax’.
A spokesperson said: ‘The disruption caused by the creation and sharing of hoax images and videos like this creates a completely unnecessary delay to passengers at a cost to the taxpayer. It also adds to the high workload of our front-line teams, who work extremely hard to keep the railway running smoothly.’
Who created the hoax image?
It is not known who is behind the image and what their intention was. Police said they were aware of the incident, but there was no ongoing investigation.
The most likely creator is someone seeking attention or wanting to cause disruption, Naomi said, adding that they wanted to trigger ‘panic, delays and reputational damage.’
If someone was trying to be creative or clever, they should ‘know how much disruption they can cause and what the legal or professional consequences might be.’
How to spot AI images and videos
We have rounded up the best ways people can try to stay one step ahead of AI and protect themselves from being duped. You can read the full details here, but these are the key things to remember:
- Focus on details of videos and images like hands, limbs and shadow alignment – does something seem out of place, or even too perfect?
- Robotic writing can be a giveaway
- Watch out for deepfake videos using face swaps
- Use reverse image search, other tools and common sense
- Handpick and curate your social media feeds to clear out sources spewing AI slop
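Reverse image search tools work by fingerprinting pictures so that near-identical copies match even after small edits. As a rough illustration only (not the algorithm any specific tool uses), here is a minimal average-hash sketch in pure Python; the 8x8 grayscale pixel lists are hypothetical stand-ins for real downscaled photos:

```python
def average_hash(pixels):
    """Return a 64-bit fingerprint: each bit is 1 if that pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same underlying image."""
    return bin(h1 ^ h2).count("1")

# Hypothetical 8x8 grayscale images flattened to 64 pixel values (0-255)
original  = [10 * i % 256 for i in range(64)]         # stand-in for a real photo
tweaked   = [min(255, p + 3) for p in original]       # slightly brightened copy
unrelated = [(255 - 7 * i) % 256 for i in range(64)]  # a different picture

d_same = hamming_distance(average_hash(original), average_hash(tweaked))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # the edited copy stays much closer than the unrelated one
```

Real services downscale each photo to a tiny grayscale grid before hashing, so images whose 64-bit fingerprints differ in only a few bits are flagged as probable matches, which is how an altered picture can still be traced back to its source.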
‘It’s easy to make an image that looks real now, but that ease should come with responsibility,’ she said.
Beyond the extra work, the risks for transport and infrastructure go deeper.
Naomi said: ‘A convincingly false image could trigger false alarms, leading to unnecessary emergency responses, wasted staff time, crowd chaos, or shutdowns. It could damage the credibility of real warnings, erode public trust, and provoke overreaction.
‘AI hoax images don’t just distort what we see, they distort how we decide. A single photo can nudge people toward a conclusion long before they stop to ask if it’s real.
‘Imagery is fast, emotional, and sticky, which is why false visuals spread so quickly and can shape public opinion before facts have a chance to catch up.’
While some lighthearted AI content can be entertaining, it has a dark side, with deepfakes of women being used in revenge porn and to blackmail victims.
People are being targeted by AI scams too. John Cairns, 61, lost thousands of pounds after investing money following a convincing deepfake of Elon Musk giving advice.
An elderly couple was duped by an AI-generated video advertising a cable car in Malaysia. The tourists travelled for hours only to find that the attraction didn’t exist.