When AI started deceiving the world: one-click face swapping, voice cloning, money, sex, media and fear-mongering…

The rapid development of AI is a mixed blessing.

On the one hand, people marvel at a level of working efficiency no carbon-based creature can match and hope AI will spare them unnecessary labor. On the other, they fear it will blur the boundary between reality and fiction and erode our ability to tell truth from falsehood.

Almost from the moment AI was born, fears of AI-powered forgery began to spread.


It turns out that day was not far off. Today, stories fabricated with AI can fool not only ordinary netizens but even major media outlets around the world.

Fake AI news deceives media worldwide

In 2023, the Daily Mail published a story claiming that a 22-year-old Canadian, Saint Von Colucci, had died after undergoing twelve plastic surgeries to look like Jimin (Park Ji-min) of BTS, in hopes of making a successful debut in South Korea.

The news instantly attracted wide attention. TMZ then followed up, as did NDTV in India, Postmedia in Canada, and YTN, Osen, and Star in Korea.

The reports were vivid, full of details ranging from his surgical procedures to his attempts to break into the Korean entertainment industry…

Netizens naturally had little reason to doubt facts that had seemingly been vetted by mainstream media, and they sincerely mourned the young man's fate.

But soon afterwards, the story took a turn…

Journalists based in Korea, an assistant editor at Rolling Stone India, and the well-known American broadcaster iHeartRadio, among others, dug deeper and raised doubts: Saint Von Colucci's story appeared to be fabricated, and the man himself might not even exist.

The doubts were many.

The article claimed he had written songs for a number of K-pop idols, yet after his death not a single industry insider publicly mourned him. In addition, Korean broadcaster MBC asked the police and was told that no such death had been reported at the time.

Someone also ran his photo through a detection site, which reported a 75% probability that it was AI-generated. The story was immediately labeled "AI fake news."
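Detection sites like this are, under the hood, just image classifiers trained to separate AI-generated pictures from real photographs. As a rough illustration of the idea only, here is a minimal Python sketch using the Hugging Face transformers pipeline; the model name is a hypothetical placeholder, not the actual tool used in this story.

```python
# Minimal sketch of an AI-image authenticity check, assuming the Hugging Face
# `transformers` library. The model id below is a hypothetical placeholder for
# any classifier fine-tuned to tell AI-generated images from real photos.
from transformers import pipeline

detector = pipeline("image-classification", model="example-org/ai-image-detector")  # hypothetical model id

# The pipeline accepts a local path or URL and returns labels with scores,
# e.g. [{"label": "artificial", "score": 0.75}, {"label": "real", "score": 0.25}]
# (the exact label names depend on the chosen model).
results = detector("suspect_photo.jpg")
for r in results:
    print(f"{r['label']}: {r['score']:.0%}")
```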

As skepticism grew, the Daily Mail quietly deleted the article, and TMZ promptly clarified that the report was wrong.

But that was not the end of it. The story kept flipping back and forth.

Media outlets then reported that Saint Von Colucci's family was preparing to sue his Korean agency, as well as the reporters who had called their son's death a hoax. The implication was that the original reports had been true all along.

Yet again, the story offered no substantial evidence or interviews, and no family member ever appeared. Netizens who had already been deceived once no longer knew whom to believe. What began as a tragedy had become a farce.


More than half a month after the first report was published, media outlets in Britain, Canada, South Korea, and India had all been mobilized, and still no one could say whether this person was real or fake.

It is a sobering realization: beyond fabricating facts, what makes AI truly frightening is that it can rob people of the ability to discern the truth at all.

AI rumors abound, nearly causing social panic

The Canadian man's case involved only one person. There are far worse AI forgery operations, capable not only of causing public panic but also of triggering a crisis of confidence in government.

One such incident occurred in China in late 2023. The Internet Security Brigade of the Kongdong Branch of the Pingliang Municipal Public Security Bureau in Gansu found that a local “news” story appeared simultaneously on as many as 21 websites: A train crashed into road workers in Gansu, killing nine people.

With such an alarming headline and a flood of near-identical posts, the "news" quickly racked up more than 1.5 million views and set off heated discussion among netizens.


But the whole thing was pure fabrication, a stunt to farm traffic.

The local internet police intervened immediately, and the investigation found that the suspect, a man surnamed Hong, had used AI to forge the news.

He bought a large number of accounts, used ChatGPT to scrape trending social news for material, made minor edits so the results would slip past platform review, and stitched together fake stories. He then used purchased software to bulk-post them across the accounts and pocket illegal profits.

Luckily, the police caught on in time and arrested the culprit before the "news" could cause real panic. But the incident is enough to make people realize just how much damage AI forgery can do.

AI keeps improving: images and videos that pass for real

With the rise of Midjourney, Stable Diffusion, and other image-generation tools, AI-generated photos have long since escaped the embarrassment of being "obviously fake at a glance." They are now hard to distinguish with the naked eye, formally ending the era when a picture counted as proof.

Experts have long warned that future abuse of AI could churn out all kinds of fake news. In truth, there is no need to wait for the future; that day arrived long ago.

At the small end, it is used to spoof celebrities for amusement:

For example, a netizen used Midjourney to generate a picture of the Pope in a puffer jacket. The image was so convincing that it was viewed more than 28 million times and caused quite a commotion.

Since there were no news reports to cross-check it against at the time, many netizens took it at face value.

At the large end, it is used to fabricate international events and outright rewrite history:

Remember the magnitude-9.1 earthquake and tsunami that hit Cascadia back in 2001? Houses collapsing, victims' families sobbing, rescuers working through the night, journalists from all over the world rushing to the scene to report.

Of course you don't.

There was no such earthquake; these were AI-generated images posted by a netizen on Reddit.

In total, the netizen generated more than 20 photos in the style of news coverage, shot from all sorts of angles, faking a large-scale disaster.

Although the pictures were crudely made and flaws were visible in quite a few places, there is no denying they fooled plenty of people at the time:

"Am I the only one who kept asking myself 'Why don't I remember this?' until I noticed which subreddit it was posted in?"

Beyond photos, AI video forgery has also long since matured. The output of faceswap.tech and other AI face-swapping tools is almost impossible to distinguish with the naked eye.

Take, for example, this Tom Cruise clip made by netizens. Those few classic expressions are so exactly Cruise that even fans would be fooled, and if you watch the original video you will find the voice matches too.

Another golfing sequence is just as hard to fault; at first glance there is no way to tell whether it is real or synthetic.

This was made back in 2021, and the technology has gone through countless iterations since.

AI clones human voices for repeated scams

Voice synthesis is among the areas hit hardest by AI abuse.

Today, AI models such as VALL-E can clone a person's voice from just a three-second sample, reproducing even their tone and habits of speech.

A number of scammers have taken advantage of this to commit fraud. Most recently, Peabody Films, a small Spanish film production company, was a victim.

Bob William, the company's writer and director, revealed that someone had earlier used a faked version of "Benny's" voice, that of actor Benedict Cumberbatch, to call them about a collaboration.

The AI voice sounded exactly like Benny's, and at first the company believed it, delighted that such a famous actor was interested in their script.

But after a few conversations, the other party began to show cracks: "Benny" refused to meet in person, and asked the company to send him £200,000 up front.

Only then did the company realize it was being conned. Fortunately, the money had not yet been transferred, so the losses were small.

Compared with the "Benny" incident, another fraudster was clearly far more cunning.

In early 2023, an internet user going by the alias mourningassasin sold a number of "leaked tracks" by R&B singer Frank Ocean. After hearing a short public clip, fans were convinced the songs were real; the voice and style were identical.

Many fans contacted mourningassasin to purchase the tracks, and he made more than $13,000. It was later proven that the songs had nothing to do with Frank Ocean at all; they were entirely AI-generated.

Current AI technology can mimic any singer's voice, style, and vocal habits, making it hard for even die-hard fans to tell real from fake. In cases like this, unless the artist himself comes out to deny it, passers-by alone can hardly debunk the fakes, which gives crooks an opening to exploit.

A while ago, AI versions of Drake, Kanye, and Stefanie Sun (Sun Yanzi) also went viral. While most of it was just for fun, it goes to show the shake-up AI has brought to the music industry.

Celebrities' voices are easy to obtain and imitate, so they are the prime targets of AI scams, but that does not mean ordinary people are unaffected.

Europe and North America have seen plenty of fraud cases using AI-cloned voices; in Canada alone, the money involved has already passed the million mark.

Fraudsters mostly harvest audio clips from social media, use AI to imitate the speaker's voice, and then use it to deceive the person's family.

A year ago, an American mother, Jennifer DeStefano, received a call from a "kidnapper" who claimed to have abducted her 15-year-old daughter, Brie, and demanded a ransom of one million dollars.

From the other end of the line came her "daughter's" cries for help, and Jennifer believed it instantly: not just the voice, but even the way of crying matched her daughter exactly. Luckily, her husband confirmed in time that their daughter was safe, and the scammer did not succeed.

Still, the incident was chilling and left many netizens deeply uneasy. Faced with technology that can stage a "real versus fake Monkey King" act, no one can guarantee they have the "fiery golden eyes" to see through every fake.

A new breed of financial fraud: scams under the AI banner

If there is one area where AI abuse can do the most damage, it is most likely finance. Once the technology falls into the hands of people with ill intent, the shock is enough to wipe out countless people's savings.

As is well known, Musk has floated an AI project called TruthGPT, which he says aims to understand the nature of the universe and seek the truth. From warning of AI's dangers to society to jumping into the game himself, the project drew wide attention as soon as it was announced.

Yet before Musk's project had even gotten off the ground, the name was hijacked by someone else to scam money.

On Wednesday, Texas regulators issued an emergency order halting a token scheme called TruthGPT. It had been launched back in March by two companies owned by one Horatiu Charlie Caragaceanu: The Shark of Wall Street and Hedge4.ai.

At first glance at the name, many assumed the scheme was tied to Musk's TruthGPT, a related product launched by his team.

Indeed, the team even put out the message that "Musk himself is very supportive." The promotional page featured Musk's headshot and an animated image of him, giving the impression that the project had been personally endorsed by the crypto kingpin.

Beyond that, the TruthGPT token's official website also displayed top industry figures such as Changpeng Zhao, founder of the cryptocurrency exchange Binance, and Vitalik Buterin, co-founder of Ethereum.

Ordinary investors who see this are easily swayed by the apparent endorsement of authority figures and simply assume the token is legitimate.

On top of that, the project relentlessly hyped its supposed utility: using an "Elon Musk AI" model, it claimed it could analyze cryptocurrencies and value digital assets, and even distinguish investable products from scams, letting you profit while keeping your investment safe.

The promotional copy boldly claimed that, "according to statistics," the return on investment would be a thousandfold. The overall impression it left was: only a fool would pass this money up.

This series of maneuvers caught the attention of Texas regulators, whose investigation concluded that it was nothing but an artificial-intelligence investment scam.

The operators borrowed the name of Musk's TruthGPT for publicity and packaged the so-called token project as an AI-powered vehicle for high investment returns. In reality, they had no AI model at all; all they had was packaging and talk.

Texas regulators found that the TruthGPT tokens had never been legally registered locally and that neither of Caragaceanu's companies was licensed to operate. In short, it was a total scam.

What is scary is that financial scams run in the name of AI are not an isolated case. In April alone, the California Department of Financial Protection and Innovation (DFPI) identified five companies engaged in AI-themed fraud.

As AI technology develops, take down one TruthGPT token and the next "xxx token" will appear. How to guard against this has become a major challenge for the financial world.

AI face-swapping runs rampant, and the law struggles to keep up

AI face-swapping may be the most controversial and publicly reviled technology of recent years, because using it to fabricate sexual rumors about women takes barely any effort at all.

There are far too many victims…

Previously, a Twitch streamer named Atrioc was caught by netizens during a livestream when his browser revealed he had been viewing porn. The porn in question consisted of AI-generated deepfakes of other female streamers.

The incident immediately sparked outrage. Many netizens denounced Atrioc's behavior as a violation of the female streamers' rights. At the same time, plenty of creeps went around begging for links, inflicting secondary harm on the victims.

One of the victims is named QTCinderella.

She only found out after the fact that pornographic videos of "her" were circulating online, with many people discussing them with relish as though they were real.

At one point, QTCinderella had an emotional breakdown and cried on air:

"It feels like I'm being violated and used, watching my naked self being passed around… I shouldn't have to be the one paying to get this taken down, or the one being harassed, watching my 'nude photos' fly around!… I give you my word: I will damn well sue you."

A similar experience happened to Taylor Klein, a female student.

In her senior year she endured cyberbullying and was frequently insulted by strange men on Instagram. It wasn't until the day a close friend sent her a link that she understood why.

The link led to a porn site. The person in the video looked exactly like her, and the title carried her name.

Taylor, of course, knew it wasn’t her.

It turned out her face had been stolen all along and deepfaked into porn, which was why strange men had been calling her a "disgusting bitch."

She tried to get help from the police, but the response was that her state had no law covering deepfake pornography.

The existing law on "non-consensual pornography" offered no grounds for punishment either, because the videos show only Taylor's face, not her body.

Taylor had no choice but to team up with other victims to track down the suspects themselves. Yet after they pleaded with the police to call the suspects, the only response they got was "it won't happen again." As for punishment, there was none…

The stigmatized victims have done everything they can, yet still cannot get fair treatment. That is why, even now, the use of AI to fabricate porn still cannot be stopped.

If the technology keeps developing without corresponding laws and policies to regulate it, it is not hard to imagine the worst consequences in an era so dependent on the internet:

Even with pictures and video in front of you, you dare not readily believe the news.

When you get a call from a family member, you have to double-check that it is really them.

You upload a photo you are happy with, and the next day you find yourself the star of a fabricated sex rumor, with no way to defend yourself…

Everyone fears Truman's fabricated world, yet everyone has become its protagonist.

