The advent of Artificial Intelligence (AI) has brought about a tremendous technological revolution, enabling the automation of complex tasks, the development of machine learning systems, and progress across many fields. However, the rapid development of AI and its growing capacity to replace humans in a variety of contexts raise important ethical and philosophical questions about the very nature of humanity and our relationship with technology.

One of the most evident ethical implications of AI concerns the future of human labor. Recently, even Italy's Prime Minister Giorgia Meloni has raised the socio-ethical concern that the accelerated adoption of AI tools could turn into a risk. Worried reactions are far from rare, even among those who have closely followed the development of the neural networks underlying the most complex artificial intelligences. Many ethical doubts, and a sort of Pandora's-box syndrome that foresees an increasingly cyberpunk and hostile future for humans, have taken hold even among the minds that worked most closely on the technologies that revolutionized the world in 2023.

In an interview with Wired, Geoffrey Hinton, a leading AI expert and a key contributor to the development of modern artificial neural networks, issued a warning about the use and evolution of these systems.

Hinton admits that artificial intelligence is advancing more rapidly than he and other experts expected, and that it is urgent to ensure humanity can contain and manage it. He is particularly concerned about short-term risks, such as new AI-generated disinformation campaigns, but he is also convinced that, given the gravity of possible long-term problems, we must start addressing the issue now.

As AI becomes increasingly sophisticated, many human occupations are at risk of being automated. This raises questions about how society should address the changes in the world of work and how to ensure that automation does not lead to inequality and mass unemployment. Economic and social theorist Jeremy Rifkin, in his book "The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era," examines these concerns and argues for a rethinking of the economic system.

However, while the economy in many ways seems to fold in on itself, stuck in a now overloaded chain of processes and waste, AI appears as the long-awaited innovation, the futuristic frontier longed for and imagined for a century, arriving to solve every problem with a click. It is both savior and source of concern, with a dual nature, terrible and magnificent, that multiplies fears and dark sides whenever an attempt is made to contain them, like the heads of a fateful hydra.

Humanity, swept up in the alarmism of social media and the press, seems to find itself facing a thriller-like warning, as if artificial intelligence were whispering, "beware of what you wish for."

Yet the much-desired artificial solution has already been described in great detail in novels, illustrations, films, and essays, like a perfect prompt that tried to plan out every nuance of what we are experiencing right now.

Nevertheless, right now humanity is not ready for a technological turning point. Perhaps it is the inherently flawed and imperfect matrix that generated AI that underlies this anxious and distrustful approach to a new world, like a mirror that reflects back its own limitations and defects and makes them fearsome.

Thus, the autonomy of an artificial system similar to ours makes everything less predictable, and the degree of independence granted to AI raises the question of whether machines should be allowed to make autonomous decisions in critical situations. Philosopher Nick Bostrom, in his work "Superintelligence: Paths, Dangers, Strategies," highlights the risks associated with superintelligent AI and the need to define rigorous ethical guidelines to ensure human control over advanced systems.

AI is created by humans and can reflect the biases and inclinations of its creators, which raises ethical issues in the design and development of these systems. Kate Crawford and Ryan Calo, in an article titled "There is a Blind Spot in AI Research," highlight how the data used to train neural networks can introduce racial, gender, and cultural biases. These biases can influence the decisions made by AI and have significant impacts on society.

But this whole climate of anxiety, fueled by often dishonest journalism, is in fact balanced by an enthusiastic drive that sees progress as a new step toward an opportunity to rethink the socio-economic structure.

AI that can replicate and surpass human abilities raises profound questions about our very identity. Accelerating medical progress gives hope for the future resolution of problems that currently have no cure; smart cities are in the planning phase, promising a sustainable, eco-friendly structure; and machines appear as a new workforce for repetitive and alienating tasks. Just when humanity seems to be on the brink of shifting ecological balances, looming pandemic threats, catastrophes, and famines in which demography outpaces production and distribution, the light of "techne" appears, illuminating an alternative path. Transhumanist Nick Bostrom, in "Human Enhancement," explores the implications of a future in which AI and technology could improve human abilities or even surpass them, challenging the very essence of humanity: humans evolve through technology and by means of it. The underlying argument is that natural evolution cannot push beyond boundaries that are possible, or at least open, for technology and progress in the field of the artificial.

It is therefore essential that human evolutionary progress coincide with the progress of technological evolution, admitting an interpenetration of the natural and artificial dimensions. What sets us apart is the ability to generate our own progress, for better or for worse. We are thus facing a change comparable to the industrial revolution, and for it to be assimilated and understood, to form a homogeneous and reliable vision, a stratification is necessary: a technological Anthropocene that produces a new, broader design connected to the next level.
