How deepfake scams use celebrities to lure victims


Wondering why you’re seeing videos of a national news anchor promoting a cannabis company on YouTube? Or why tech billionaire Elon Musk is being featured in an ad promoting an investment opportunity that sounds too good to be true?


No matter how convincing it may have looked, the video is likely a deepfake, a term that refers to media manipulated or fabricated using artificial intelligence.


Jeff Horncastle, the Canadian Anti-Fraud Centre’s (CAFC) client and communications officer, is warning Canadians about rising video and audio scams that use the likeness of media personalities to advertise fraudulent cryptocurrency platforms and other scams.


“All (fraudsters) need is a little bit of audio from the person who they want to deepfake, potentially a photo or short audio clip, and they use that as an extra tool to try to convince potential victims that it’s real,” Horncastle told CTV National News.


These fraudsters often draw on household names to build trust, such as U.S. TV personalities Gayle King, Tucker Carlson and Bill Maher.


Deepfake videos advertising scams on YouTube have also featured CTV National News’s own Chief News Anchor and Senior Editor Omar Sachedina. One of these ads appears to show Sachedina presenting a news item, but has him praising a cannabis company instead. The audio, while well-matched to the video, is fake.


Another video shows Tesla CEO Elon Musk promoting investments in fraudulent crypto companies.


Although the AI technology behind this isn’t new, it’s becoming easier to access apps and websites that can be used to generate convincing fake images, videos and audio clips featuring familiar faces.


As with robocalls and spam emails, the creators of these deepfake videos have scammed people out of thousands of dollars. While there is no concrete data on how many Canadians have been scammed specifically through deepfake content, the CAFC reported that Canadians lost a total of $531 million to fraud in 2022. As of June, Canadians had lost $283.5 million this year.


AI IS ONLY GETTING ‘BETTER,’ MAKING SCAMS WORSE


Aside from their eerily realistic look, the most frightening aspect of these deepfakes is that the technology is only getting better, making it harder for a person to identify a fake, says technology expert Mark Daley.


“The important point to remember is the deepfakes you see now are going to be the worst you’re ever going to see for the rest of your life. It’s only going to get better,” Daley, who is the chief digital information officer at Western University, told CTV National News.


Daley explains that the technology has advanced rapidly over the last five years: creating a deepfake used to require a highly skilled individual with access to specialized software. Now, anyone with an average knowledge of AI and access to a gaming computer can spread misinformation through the likeness of a well-known figure, such as a politician running for office.


This disinformation is particularly alarming to psychotherapist and tech journalist Georgia Dow, who says fabricated videos can feed into people’s hatred of certain groups and individuals, or sway them into believing something their favourite celebrity appeared to say in an interview that never actually happened.


“It’s almost like those revenge fantasies. People we don’t like, we want to see in certain situations, people we do like, we want to see in certain situations. These produce a lot of clicks and people are now trying to get that social currency,” Dow told CTV National News.


This technology can be especially harmful when it is used around important political events.


For example, ahead of the 2024 U.S. presidential election, potential candidates are already seeing videos of themselves working against their campaigns. Florida Gov. Ron DeSantis, who is currently seeking the Republican Party’s presidential nomination, appeared in a deepfake video last week that seemed to announce he was dropping out of the race.


“It’s really kind of planted little tiny seeds in our head that perhaps this person is nefarious, or perhaps they have alternative feelings. I think that politically this is a really big deal,” Dow said.


Recently, Google announced it would use its own technology to add watermark warnings to AI-generated images in an effort to curb false claims and flag phony photographs; however, concerns about the spread of disinformation persist even with efforts such as these.


In a statement to CTV News, a spokesperson for the company said: “We’re committed to keeping people safe on our platforms and when we find content that violates these policies we take action.


“We continue to enhance our enforcement practices to combat abuse and fraud. We’ve launched new certification policies, ramped up advertiser verification, and increased our capacity to detect and prevent co-ordinated scams in recent years.”


HOW TO SPOT A DEEPFAKE


As technology advances, experts say it’s important for people to question the media they consume online. Red flags that suggest a video could be fake include audio that doesn’t match a person’s mouth movements, unnatural eye movements, and differences in lighting between the person talking and the background.


Dow also advised focusing on the “outline” of a person compared to the background, including their hair and areas around their face, especially when the speaker is moving.


Horncastle recommended asking why certain media personalities would be promoting products outside their usual interests, and why they would be promoting anything at all if it isn’t something they normally do.


“The first red flag should be, ‘I’m thinking this celebrity isn’t promoting this stuff,'” he said.


“And if you’re still not sure, do as much research as you can, but chances are that these websites that they’re promoting are fraudulent.”


Horncastle says researching the companies behind these products before buying from them will provide insight into how legitimate they are and where exactly your money could be going.
