Facebook has developed a way to reverse engineer deepfakes

For now, deepfakes aren't much of a problem across the Facebook platform, but the company continues to fund research into the technology to help prevent future threats.

The latest work was carried out in collaboration with scientists from Michigan State University; together, the teams developed a method for reverse engineering deepfakes.

The method analyzes images generated by artificial intelligence to reveal characteristics of the machine learning model that created them.

This work is useful because it could help Facebook track down bad actors who spread deepfakes across social networks.

Such content may contain misleading information or pornographic material, both common applications of deepfake technology. The work is still at the research stage and is not yet ready for deployment.

Previous research in this area could identify which of a set of known AI models produced a given deepfake.

However, this work, led by Vishal Asnani of Michigan State University, goes further by identifying the architectural traits of unknown models.

These traits, known as hyperparameters, have to be tuned in each machine learning model like parts of an engine.

Together, they leave a unique fingerprint on the finished image that can then be used to identify its source.
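The hyperparameter-parsing technique itself isn't public, but the fingerprint idea can be illustrated with a toy sketch: high-pass filter an image to isolate the subtle residual noise a generator leaves behind, then correlate that residual against the fingerprints of candidate models. Everything below (function names, the synthetic fingerprints, the matching scheme) is a hypothetical illustration, not Facebook's or Michigan State's actual method.

```python
import numpy as np

def residual_fingerprint(image):
    """High-pass residual: subtract a local average to isolate the
    repetitive high-frequency artifacts a generator leaves behind."""
    smoothed = np.zeros_like(image, dtype=float)
    # simple 3x3 box blur (border pixels are cropped off below)
    smoothed[1:-1, 1:-1] = (
        image[:-2, :-2] + image[:-2, 1:-1] + image[:-2, 2:] +
        image[1:-1, :-2] + image[1:-1, 1:-1] + image[1:-1, 2:] +
        image[2:, :-2] + image[2:, 1:-1] + image[2:, 2:]
    ) / 9.0
    return (image - smoothed)[1:-1, 1:-1]

def match_model(image, known_fingerprints):
    """Correlate the image residual with each known model fingerprint
    and return the best-matching model name and its score."""
    res = residual_fingerprint(image).ravel()
    res = (res - res.mean()) / (res.std() + 1e-9)
    best_name, best_score = None, -np.inf
    for name, fp in known_fingerprints.items():
        fp = fp.ravel()
        fp = (fp - fp.mean()) / (fp.std() + 1e-9)
        score = float(np.mean(res * fp))  # normalized cross-correlation
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

For example, embedding a synthetic noise pattern into an otherwise random image and matching it against two candidate fingerprints recovers the pattern that was actually embedded; real model fingerprints are far subtler, which is part of why attribution in the wild is hard.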

Facebook's director of research said it's important to be able to identify the characteristics of unknown models, because deepfake software is extremely easy to modify.

That flexibility could allow bad actors to cover their tracks when investigators try to trace their activity.

Facebook and deepfakes:

Facebook's director of research likened the work to identifying which model of camera was used to take a photograph by looking for patterns in the resulting image.

The resulting algorithm can estimate the properties of the model that generated an image, identify whether that model is one already known to researchers, and determine whether the image is a deepfake at all.

However, it should be noted that even state-of-the-art results in this area are far from reliable.

When Facebook ran a deepfake detection contest last year, the winning algorithm detected AI-manipulated videos with only 65.18% accuracy.

Researchers who took part said that detecting deepfakes algorithmically remains a largely unsolved problem.

One reason is that generative AI is a highly active field: new techniques appear constantly, and detectors can hardly keep up.

Actors in the field are aware of this dynamic. When asked whether releasing the new algorithm would lead to research into deepfakes that these methods cannot detect, Facebook's director of research said he would expect so.


