Deepfakes are the most worrisome AI crime

Deepfake technology poses the most serious threat among criminal and terrorist applications of artificial intelligence, according to the latest report from University College London (UCL).

The UCL research team first identified 20 different ways criminals could use AI over the next 15 years, then asked 31 AI experts to rank the risks by the harm each crime could cause, the profit it could generate, how easy it would be to carry out, and how difficult it would be to stop.

Deepfake technology, which criminals can use to create AI-generated videos of real people saying fictional things, ranked first for two reasons:

First, deepfakes are difficult to detect and prevent: automated detection methods remain unreliable, and deepfakes keep getting better at fooling human eyes. Even Facebook has acknowledged the problem, recently challenging researchers with its detection algorithm competition and conceding that this is a big, unsolved issue.

Second, deepfakes can be used in a wide variety of crimes and abuses, from discrediting public figures to extracting money from the public by impersonating others. Just this week, a doctored video of Nancy Pelosi appearing to be drunk spread widely, and deepfaked audio has already helped criminals steal millions of dollars.

In addition, the researchers fear that as the technology spreads, people will grow distrustful of audio and video evidence in general, causing broad social harm.

Study author Dr. Matthew Caldwell said: "The more of our lives move online, the greater the risk. Unlike many traditional crimes, crimes in the digital realm can be easily shared and repeated, allowing criminal techniques to be marketed and crime to be offered as a service. This means criminals can outsource the most difficult aspects of their AI-based crimes."

The study also identified five other major AI crime threats: driverless vehicles used as weapons, tailored phishing, blackmail using harvested online data, attacks on AI-controlled systems, and fake news.

However, the researchers aren't too concerned about burglar bots that enter homes through letterboxes and cat flaps, because they are easy to stop. They also ranked AI-assisted stalking as a lesser threat, even though it is extremely harmful to individual victims, because it cannot operate at scale.

Researchers have warned of the dangers of deepfake technology ever since the term first appeared on Reddit in 2017. The term has always attracted attention, but so far criminals have made relatively little use of it. The researchers believe this will change as the technology advances and deepfakes become easier to produce.
