In the broader European context, the European Parliament has recently been discussing a proposed regulation establishing harmonized rules on artificial intelligence, known as the "AI Act." The act aims to provide a comprehensive regulatory framework for the use of Artificial Intelligence in the European Union and to ensure its consistent application across member states. Antonio Nicita, the primary proponent of the Italian bill, has argued that, while this European framework is still pending, it is urgent to introduce measures that make the non-human origin of AI-generated content clear.
The Italian bill requires entities responsible for publishing and disseminating AI-generated content to ensure that the artificial and manipulated nature of such content is immediately recognizable to users. This can be achieved through distinctive labeling, for example with the wording "AI made/produced by artificial intelligence," in the manner determined by the Authority for Communications Guarantees (Agcom). Agcom, the Italian regulatory authority for communications, will be tasked with defining how the labeling is to be implemented; the label must be clearly visible to users. Agcom will also be responsible for monitoring and enforcing the provisions of the law and will have the power to flag and remove content that violates them. Sanctions will be proportionate to the severity of the violations.
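To make the requirement more concrete, the following is a minimal, purely illustrative sketch (in Python) of how a publisher might attach such a disclosure to AI-generated content, both as machine-readable metadata and as a visible notice. The label wording, field names, and data structure are assumptions for illustration only; Agcom has yet to define the actual implementation details.

```python
# Illustrative sketch only: how a publisher might label AI-generated content.
# The label text, field names, and data model below are assumptions, not the
# wording or format mandated by the bill or by Agcom.

from dataclasses import dataclass, field
from datetime import datetime, timezone

AI_DISCLOSURE_LABEL = "AI made / prodotto da intelligenza artificiale"  # assumed wording


@dataclass
class PublishedContent:
    title: str
    body_html: str
    ai_generated: bool
    metadata: dict = field(default_factory=dict)


def apply_ai_label(content: PublishedContent) -> PublishedContent:
    """Attach a machine-readable flag and a visible banner to AI-generated content."""
    if not content.ai_generated:
        return content
    # Machine-readable metadata that platforms or crawlers could inspect.
    content.metadata["ai_disclosure"] = {
        "label": AI_DISCLOSURE_LABEL,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    # Human-visible banner prepended to the rendered body, so the artificial
    # origin is immediately recognizable to readers.
    banner = f'<p class="ai-disclosure"><strong>{AI_DISCLOSURE_LABEL}</strong></p>'
    content.body_html = banner + content.body_html
    return content


if __name__ == "__main__":
    article = PublishedContent(
        title="Synthetic market summary",
        body_html="<p>Generated text...</p>",
        ai_generated=True,
    )
    labeled = apply_ai_label(article)
    print(labeled.metadata["ai_disclosure"]["label"])
    print(labeled.body_html)
```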
In an increasingly digitalized world, Artificial Intelligence (AI) is playing an ever more central role in the production of audiovisual and editorial content. Generative AI, built on machine learning and deep learning, can create audio, video, and textual content that may appear authentic but is in fact the result of automated processing. This trend has raised concerns about the authenticity of information, copyright violations, image manipulation, and the spread of disinformation. To address these challenges, the Democratic Party (Pd) has introduced a bill proposing mandatory labeling for AI-generated content.
The bill, with Antonio Nicita as its primary proponent, aims to ensure transparency and clarity regarding the origin of AI-generated content. The primary goal is to protect consumers from potential information manipulation and copyright violations.
One of the main concerns highlighted in the bill is the growing spread of so-called "deep fakes": audio, images, or videos that have been altered or created so as to appear authentic or genuine but are, in fact, the result of complex AI-based processing. Machine learning and deep learning allow AI systems to learn from vast amounts of data and produce content that is difficult to distinguish from reality. This raises a series of issues related to the authenticity of content and its potential fraudulent use.
Challenges
The bill proposal recognizes that the widespread use of AI-generated content poses several challenges. Among the main concerns are:
Authenticity
Deep fakes and other AI-generated content can deceive the public because they appear real. This raises the issue of the authenticity of information and the difficulty of verifying its source.
Source Verification
Generative AI does not cite sources; it produces content based on patterns learned from its training data. This makes it difficult to trace the origin of specific content.
Copyright
The creation of content by AI raises questions about copyright ownership. Who is the legitimate owner of content generated by an algorithm?
Image Protection
Individuals depicted in deep fakes or similar content may suffer damage to their image and reputation, raising concerns about legal protection.
Disinformation
The unethical use of AI can lead to the dissemination of false news and disinformation, with potentially severe consequences for society.
Defamation Liability
Who is responsible for defamation or damages caused by AI-generated content? The bill seeks to address this issue.
The Democratic Party's proposal for mandatory labeling of AI-generated content represents a significant step towards protecting transparency and clarity in digital communications. As AI continues to evolve and to be used for content creation, it is crucial to address the challenges of authenticity, copyright, and liability. The proposal aims to strike a balance between technological innovation and the protection of rights and public trust. The debate on the regulation of AI and the content it generates will be essential to ensuring responsible use of this emerging technology.