Artificial intelligence (AI) technology has advanced rapidly in recent years, benefiting industries such as transportation and energy. However, AI face-swapping technology presents both advantages and disadvantages to individuals and society. The technology extracts a target's facial features, expressions, body movements, and voice characteristics from existing footage and uses them to synthesize fake videos that can deceive viewers. While AI face-swapping supports the growth of the entertainment industry and overcomes production obstacles, such as enabling Paul Walker's posthumous appearance in Fast and Furious 7, it also risks being misused to infringe on personal dignity rights.
The risks posed by AI face-swapping technology include identity theft, social engineering attacks, phishing scams, political manipulation, financial fraud, and consumer fraud. Perpetrators can steal identities, fabricate videos featuring a victim's relatives or friends, and use them to solicit money and sensitive personal information. They can also run phishing scams, disseminating realistic videos and images online to trick victims into sharing sensitive information or downloading malicious software. The technology can likewise enable financial fraud: fabricated endorsements or promises, seemingly made by trusted figures, can persuade investors or customers to part with money. By combining AI face-swapping, voice-swapping, and fake video to imitate targeted persons, fraudsters can prompt unsuspecting individuals to lower their guard.
The challenges posed by AI technology have not yet been addressed in the criminal legal system, making it difficult for authorities to investigate related crimes. The unreasonable collection and use of user information by the ZAO app generated severe backlash, and although the app was able to evade legal responsibility through its one-sided user agreement and market advantage, the episode shows the need to clarify the statutory meaning of personal information, or to issue judicial interpretations, so that facial data are recognized as protected personal information.
The authorities should take targeted measures to address the risks associated with AI face-swapping technology. Experts suggest incorporating the data required by AI face-swapping into the legal definition of personal information and further strengthening regulations to hold platform managers accountable for any misuse of personal information. The government, research institutions, and enterprises should work closely together to develop and continually upgrade countermeasures, publicize information about the social risks, and enhance the public's awareness, digital literacy, and media literacy to prevent the misuse of personal information.
Identifying fraudulent AI face-swapping content is crucial to preventing crime. The same principle used to train AI on human voices, facial features, and body postures to create face-swapping videos can also be used to train AI to detect fake ones. By taking these measures, the government can strengthen the fight against crimes such as obscenity, defamation, rumormongering, fraud, and personal information infringement, and address the misuse of AI face-swapping technology in social governance.
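The detection idea above can be illustrated with a minimal sketch: a classifier trained on numeric cues that deepfake detectors commonly score, such as face-boundary blending artifacts or blink irregularity. Everything here is hypothetical and simplified; the feature values are synthetic stand-ins for what a real feature extractor would measure, and the perceptron stands in for the deep networks used in practice.

```python
import random

random.seed(0)

def make_sample(fake):
    # Hypothetical per-frame artifact cues (e.g. boundary blending, blink
    # irregularity, lighting mismatch). Synthetic values: fakes score high,
    # genuine footage scores low.
    base = 0.7 if fake else 0.2
    features = [base + random.uniform(-0.15, 0.15) for _ in range(3)]
    return features, 1 if fake else 0

# Build a small labeled training set of fake and genuine samples.
data = [make_sample(fake) for fake in [True, False] * 50]

# Train a minimal perceptron to separate the two classes.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = y - pred
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err

def predict(x):
    """Return 1 if the sample looks like a face-swap fake, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
```

Because the synthetic cues are clearly separable, the perceptron learns the boundary quickly; real detectors face far noisier signals and adversarial evasion, which is why the countermeasures must be continually upgraded.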