Ever since the launch of ChatGPT last year, generative AI has captured the public’s imagination, allowing people to create sophisticated content on just about any topic from basic prompts. However, there are growing concerns about the technology’s potential to cause harm, from fabricating supposed facts to making inappropriate suggestions to users. Tech industry insiders and legal experts have raised worries about possible copyright and data-privacy violations, the potential for discrimination as humans encode their own biases into algorithms, and the difficulty of knowing exactly what is happening inside self-learning AI programs. All of this has prompted governments around the world to call for protective regulations.
Regulation of generative AI is still in its early stages and differs across borders. Although every country is free to make its own rules, experts argue that some form of harmonization among the US, the EU, and other Western countries will be needed in the future. While the details of each jurisdiction’s legislation may differ, the governments that have so far outlined proposals share an overarching goal: realizing the benefits of AI while minimizing its risks to society. In the UK, a white paper published in March this year outlined five principles that companies should follow to avoid “heavy-handed legislation.” By contrast, the European Commission has published the first draft of its AI Act, which includes requirements for generative AI models to mitigate foreseeable risks. The proposed legislation would forbid the use of AI where it could threaten safety, livelihoods, or people’s rights.
However, regulating generative AI is not without challenges. Legislators must constantly play catch-up with new technologies, trying to understand their risks and rewards. Furthermore, lawmakers take different approaches to regulation depending on politics, ethics, and culture. The AI regulations implemented by the Chinese government, for instance, with their strict security assessments and requirement that generative AI content conform to the country’s values, would be completely unacceptable in North America or Western Europe. Ultimately, the regulation of generative AI will need to strike a careful balance: allowing innovation while minimizing the risks the technology poses.