Deepfake incidents up 500% from 2024, as key warning signs revealed

AI scams involving deepfakes have increased 500% in 2025. That’s according to the latest report from the forex broker experts at BrokerChooser, who analysed fraud incident logs to calculate how often different celebrities’ likenesses were used to create fake messaging through deepfakes and other AI-generated media. They also collected data on the rise in incidents involving AI and deepfakes over the years.

Key Findings:

  • Donald Trump is the celebrity most often deepfaked (12.01% of reports), followed by Will Smith (5.39% of reports).
  • Celebrities related to politics (32.43%) and music (31.08%) are most commonly used in deepfakes. 
  • Incidents involving the deepfake of a celebrity accounted for 30.49% of all AI scam reports.
  • 2025 saw a massive 500% increase in the number of deepfake incidents (300) compared to 2024 (50).

Celebrities most involved in deepfake incidents

Rank | Celebrity Name | Occupation | % of appearances (celebrities with more than one occurrence)
1 | Donald Trump | Politics | 12.01%
2 | Will Smith | Actor | 5.39%
=3 | James Earl Jones | Actor | 4.41%
=3 | Taylor Swift | Musician | 4.41%
5 | Elon Musk | Business | 3.92%
=6 | Kamala Harris | Politics | 2.45%
=6 | Joe Biden | Politics | 2.45%
8 | Alexandria Ocasio-Cortez | Politics | 2.21%
9 | Sydney Sweeney | Actor | 1.96%
10 | DanieltheDemon | Influencer | 1.72%

*The full data set is available upon request.

The President of the United States, Donald Trump, is the celebrity most involved in deepfake incidents and scams, appearing in over 12% of reports. Deepfakes of Trump were reported to be circulating in the build-up to the 2024 presidential election, and more recently his staff have also been impersonated in deepfakes. Trump has been involved in various AI hoaxes online over the years, one example being the fake images depicting his arrest that circulated in 2023.

Actor Will Smith follows in second with 5.39% of deepfake incidents, around half Trump’s share. He is the most commonly faked actor among the numerous Hollywood names that have been used in AI deepfakes.

In third, world-renowned actor James Earl Jones appears in 4.41% of reported deepfake incidents. Various news outlets revealed that the actor gave permission for, and supported, the use of AI to replicate his voice as Darth Vader in future Star Wars media, but the practice has faced scrutiny since the popular online game Fortnite used AI to recreate Darth Vader’s character and voice.

Also in joint third, singer-songwriter and global pop star Taylor Swift appears in 4.41% of incidents involving deepfakes. Reports state that Swift has been deepfaked for scam purposes including ticket scams, giveaway scams and disinformation. The singer has also previously been reported to be among the celebrities most exploited for use in scams.

In fifth, international businessman and entrepreneur Elon Musk appears in 3.92% of reported deepfake incidents. Recent news has revealed that fraudulent deepfakes of Musk have resulted in victims across the U.S. losing billions of dollars.

Number of AI incidents involving deepfakes over the years

Year | % change in deepfake incidents (vs. previous year) | % of incidents involving deepfakes of celebrities
2025 | 500.00% | 36.19%
2024 | 127.27% | 29.24%
2023 | 340.00% | 16.06%
2022 | 400.00% | 5.21%
2021 | 0.00% | 1.37%

The data reveals that 2025 has seen a huge 500% increase in the number of incidents involving deepfakes compared to 2024. Additionally, the percentage of incidents involving deepfakes of celebrities rose to 36.19% in 2025, up from 29.24% in 2024.

The data shows that deepfake incidents have risen year on year since 2021 as AI has become more prolific. The number of deepfake incidents began increasing more rapidly from 2022, as generative AI gained popularity with models such as OpenAI’s DALL-E 2. The percentage of deepfake incidents that involve celebrities has increased year on year alongside this, indicating that the trend is likely to continue unless further regulation and preventive measures are put in place.

Number of AI incidents over the years

AI incidents in general have been rising since 2018, but saw their biggest rise yet from 2024 to 2025: 171 incidents were reported in 2024, compared to a huge 829 so far in 2025, a massive 384.8% increase.

Prior to 2025, the biggest jump in AI-related incidents was seen in 2020, with a 97.62% increase. Reports in 2020 stated that the new level of growth in the number of deepfakes confirmed their exponential nature. That remains the case today, leaving many concerned about what lies ahead.

Balázs Faluvégi, on behalf of forex broker experts at BrokerChooser, has provided expert insight into the data, as well as advice for those concerned about the rise in deepfakes and how to stay vigilant: 

“The data reveals a worrying trend, with the number of AI incidents and deepfake incidents rising year on year. As deepfake impersonation incidents are expected to keep growing, driven by the consistent advancement of AI technologies, understanding the warning signs to look out for is crucial.

“Deepfakes often aren’t able to perfect the fine details of a real image or video, so it is advised to keep an eye out for distorted aspects, particularly around the edges of faces. Inconsistencies in lighting or reflections can also be a key giveaway. Pay attention to the fine details you may not normally look at but that don’t look quite right: errors in hair, nails, or teeth can be key indicators that an image has been created using AI. It is vital to examine the image in depth before judging its legitimacy.

“A key rule is to question whether the content seems too good to be true. Look closely at what is being said or shown: are there claims of guaranteed, risk-free profits? Is the story unusually flawless or suspiciously persuasive? These can be signs that the material is being manipulated, possibly through deepfakes or other deceptive techniques. If you notice such red flags, the safest approach is not to engage further and instead report the content to the platform where you saw it, or even to your local fraud and cybercrime centre.”

Images for your editorial use are available here.

Note to editors:

We kindly ask that you include a link to https://brokerchooser.com/best-brokers/best-forex-brokers if you utilise this research. Crediting in this way ensures that we can continue to send you future studies that you may find useful.

Methodology:

  1. BrokerChooser sought to examine AI incident data and find which celebrities are most often deepfaked. To do this, two AI incident reporting websites were scraped for their historical AI incident data, yielding almost 1,900 AI incident reports.
  2. Each incident contained a description, which was processed using DeepSeek in order to extract the names of the celebrities involved and to identify which incidents involved a deepfake of a celebrity.
  3. The number of times each celebrity was deepfaked was counted, and the data was further aggregated by year to show how the issue of deepfakes has changed over the years and whether it has risen or fallen.
  4. The percentage change is calculated using the formula: C = (x2 – x1) / x1, where
    1. x2 = the new year’s value
    2. x1 = the previous year’s value
    3. So the % change for 2025 is calculated as: (2025 data – 2024 data) / 2024 data (a short illustrative sketch follows this list).
  5. Sources:
    1.  https://incidentdatabase.ai/summaries/incidents/
    2.  https://www.resemble.ai/deepfake-database/ 
  6. Data was collected in September 2025 and is accurate as of that date.
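
As a minimal illustration of the percentage-change formula in step 4, the short Python sketch below applies it to the 2024 and 2025 deepfake-incident counts quoted in the key findings (50 and 300). The helper function is written purely for this example and is not part of the study’s actual tooling.

```python
# Minimal sketch of the year-on-year percentage-change formula from step 4:
# C = (x2 - x1) / x1, expressed here as a percentage.
# The 2024 and 2025 deepfake-incident counts (50 and 300) are the figures
# quoted in the key findings; this helper is illustrative only.

def percent_change(previous: float, current: float) -> float:
    """Return the percentage change from `previous` to `current`."""
    return (current - previous) / previous * 100

if __name__ == "__main__":
    deepfake_incidents = {2024: 50, 2025: 300}  # reported deepfake incidents per year
    change = percent_change(deepfake_incidents[2024], deepfake_incidents[2025])
    print(f"2024 -> 2025 change: {change:.2f}%")  # prints 500.00%
```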
