Regulators Turn to Existing Laws to Address Generative AI Like ChatGPT

Mon May 22 2023

BRUSSELS: As the development of powerful artificial intelligence (AI) services, such as ChatGPT, gains momentum, regulators are resorting to existing laws to manage a technology that has the potential to revolutionize societies and businesses.

The European Union (EU) is taking the lead in formulating new AI regulations that could set a global standard for addressing the privacy and safety concerns arising from rapid advances in generative AI, the technology behind OpenAI’s ChatGPT. However, the legislation is expected to take several years to come into force.

“In the absence of regulations, the only thing governments can do is to apply existing rules,” stated Massimiliano Cimnaghi, a European data governance expert at consultancy BIP. “If it’s about protecting personal data, they apply data protection laws; if it’s a threat to the safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”

In April, Europe’s national privacy watchdogs formed a task force to address concerns related to ChatGPT after Italian regulator Garante temporarily took the service offline, accusing OpenAI of violating the EU’s General Data Protection Regulation (GDPR).

OpenAI later made changes, including adding age-verification features and allowing European users to block their information from being used to train the AI model, and ChatGPT was reinstated. Data protection authorities in France and Spain also opened probes into OpenAI’s compliance with privacy laws.

Regulators are striving to apply existing rules covering copyright and data privacy to two key issues: the data fed into AI models and the content they generate. Agencies in both the United States (US) and Europe have been urged to reinterpret their mandates and use their existing regulatory powers to address these emerging challenges.

For example, the US Federal Trade Commission (FTC) is investigating algorithmic discrimination practices under existing regulations. In the EU, proposed regulations like the AI Act will require companies like OpenAI to disclose any copyrighted material used to train their models, potentially exposing them to legal challenges.

Regulators exploring ways to apply existing laws to AI

Regulators are exploring creative ways to apply existing laws to AI, with French data regulator CNIL among the first to consider how they can be adapted. CNIL is investigating AI’s effects on data protection, privacy, and potential discrimination, leveraging provisions within the GDPR that protect individuals from automated decision-making. However, interpretations may differ from one regulator to another, given the complexity of the technology and the diversity of viewpoints involved.

The Financial Conduct Authority, one of several UK regulators tasked with drawing up new guidelines covering AI, is working with institutions such as the Alan Turing Institute in London to deepen its understanding of the technology.

While regulators grapple with the rapid pace of technological advancements, calls have emerged for increased engagement between regulators and corporate leaders. Some industry insiders believe that dialogue between the two parties has been limited thus far, raising concerns about striking the right balance between consumer protection and business growth.

As regulators dust off their rulebooks and adapt to the challenges posed by generative AI technologies like ChatGPT, a delicate balance between oversight and innovation needs to be struck to ensure the responsible development and deployment of AI systems.
