Social media platforms are currently awash with conspiracy theories claiming that Benjamin Netanyahu has been killed or injured and replaced by AI-generated deepfakes. Between clips that supposedly show the Israeli Prime Minister sporting extra fingers and drinking from a bottomless, gravity-defying cup of coffee, only one thing is apparent: reality used to be much easier to prove.
There’s very little credible evidence to suggest that Netanyahu isn’t alive. But credibility is a rare commodity now that AI can convincingly clone real people across image, video, and audio formats, so it’s getting tougher to conclusively dispel the rumors. This is what it looks like when nobody can trust their own eyes anymore.
The conspiracy theories started following a press conference live stream hosted by Netanyahu on Friday. A clip of the broadcast was widely shared by social media users who claim the footage briefly shows the Israeli PM with six fingers on his right hand. Older generative AI tools have a history of struggling with hands, so the apparent extra appendage fueled speculation that Israel is using deepfake footage to hide that Netanyahu died during an Iranian missile strike.
On closer inspection, the “extra” finger is easily explained by video compression artifacts and lighting. Fact checkers including Snopes and the Poynter Institute’s PolitiFact have debunked claims that the video was AI-generated. The run time of the video is also telling: at almost 40 minutes, it’s far longer than the maximum clip lengths that current AI video models can generate.
In an attempt to put the AI clone conspiracies to rest, Netanyahu published a video to his X account yesterday showing him inside a coffee shop and asking the person behind the camera to count his fingers. However, social media users promptly called out apparent visual inconsistencies, suggesting this footage was also an AI deepfake.
Some of these comments have merit. They point to moments in the video where the liquid in the coffee cup in Netanyahu’s hand moves unnaturally (or never depletes), and where the ring on his finger seems to vanish into the surrounding skin, though that could also be explained by video degradation. The background environment has been called into question too: the till on the counter appears to display a date from 2024, for example. Others have denounced the video as fake because Netanyahu is reportedly left-handed but is seen drinking the beverage with his right hand.
If you read the comments on some of these speculative posts, the reasons people give for suspecting fakery get increasingly bizarre, from how naturally Netanyahu is holding the cup to the general “aura” he gives off. None of it really matters, though, because it’s almost impossible to definitively prove whether either of these videos is authentic.
Neither clip carries metadata from a system like C2PA Content Credentials or SynthID, which could either verify its authenticity or record where and how AI tools were used. And although platforms like Instagram and YouTube pledge to tag AI-generated or manipulated content, none of the clips they hosted gave any indication that the footage was fake, verified as authentic, or otherwise.
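To see why that metadata matters, consider the principle these systems are built on: a publisher cryptographically signs the media, and anyone can later check that the file hasn’t been altered since signing. The sketch below is a toy illustration of that idea only, not the actual C2PA or SynthID format (real systems embed a signed manifest in the file and use public-key signatures rather than a shared secret); the key and file contents are hypothetical.

```python
import hashlib
import hmac

# Toy stand-in for provenance metadata. Real C2PA embeds a signed
# manifest inside the media file itself; here we just sign a SHA-256
# hash of the file bytes with an HMAC key for illustration.

def sign_media(data: bytes, key: bytes) -> str:
    """Publisher side: produce a signature over the file's hash."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(data: bytes, key: bytes, signature: str) -> bool:
    """Viewer side: recompute the signature and compare in constant time."""
    expected = sign_media(data, key)
    return hmac.compare_digest(expected, signature)

if __name__ == "__main__":
    key = b"publisher-signing-key"  # hypothetical demo key
    original = b"frame data of the press conference video"
    tag = sign_media(original, key)

    print(verify_media(original, key, tag))         # unmodified clip
    print(verify_media(original + b"!", key, tag))  # tampered clip
```

A clip lacking this kind of signed provenance can’t be proven authentic after the fact; at best, fact checkers can only argue from visual evidence.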
People want assurances that what they’re seeing is real, especially with the ongoing conflict between Iran, Israel, and the US. Our online landscape isn’t currently equipped to facilitate that, forcing us to constantly adapt by learning how professional fact checkers are debunking synthetic or misleading media, or trusting others to tell us when something is fake.
Even before AI became rampant, people were occasionally paranoid about manipulated news imagery (like the viral Kate Middleton proof-of-life photo that turned out to be a botched edit), and now, of course, it’s much worse. AI tools can generate content with fewer of the usual “tells,” and it’s becoming harder to say with absolute certainty whether a photo or video of something actually happened. In turn, that’s creating a crisis of trust even when there’s no clear evidence of manipulation, as in the original Netanyahu video.
That uncertainty is already being used to spark distrust on all sides of this war. In a Truth Social post on Sunday, President Donald Trump accused Iran of using AI as a “disinformation weapon” to falsely depict successful attacks against the US, and called for media outlets that generated it to be charged with treason “for the dissemination of false information.” It’s true that AI-generated disinformation is rife, but this is coming from the same man who has personally used deepfakes to cause his own political mayhem, and leads the US administration that spends more time sharing AI-generated edgelord memes and manipulative disinformation to social media than actual policy bulletins.
And yet Trump still had the audacity to tell reporters after making that Truth Social post on Sunday that “AI can be very dangerous,” and that “we have to be very careful with it.” Perhaps the Trump administration could start by leading by example. For now, we can’t even trust how people are holding their coffee cups.

