Have we let our guard down regarding uncontrolled AI development? Concerns have given way to a more aggressive approach to progress

In the past, OpenAI invested significant resources in reassuring the public and institutions that it was taking the risks of uncontrolled artificial intelligence development seriously. It did so, for example, shortly after the launch of ChatGPT, fearing that systems of this type could become so advanced as to escape the control of their own creators.

While these concerns were at the center of the debate in the AI sector until a few years ago, today the big tech companies seem much less interested in the topic. Several factors have driven this change in approach, including growing competition and the influx of new investment into the field, which have pushed individual companies to focus increasingly on expansion and on long-term economic sustainability. OpenAI itself has changed a great deal: recently, for example, it has effectively become a for-profit company, whereas in the past it had a rather significant non-profit component.

What are the dangerous applications of AI?


OpenAI was founded in 2015 as a non-profit organization, with the stated goal of developing artificial intelligence systems for the benefit of humanity. The company's recent change of course has not been painless, however, and has led some prominent figures to leave the company.

Among them is Jan Leike, who led a team dedicated to AI safety before deciding to join Anthropic, the company behind the chatbot Claude, founded in 2021 by former OpenAI employees. The sheer size this market has reached has also contributed to a more aggressive approach to progress in artificial intelligence.

Such technologies, for example, have extremely lucrative applications in the military sector, as well as in surveillance and national security, and every step forward is considered potentially decisive in strategic and economic terms.


Partly for this reason, the US government has introduced strict limits on exports of the most advanced technologies, such as the latest-generation chips produced by Nvidia, whose sale to China is banned in order to slow the development of the Chinese tech sector and preserve the United States' competitive advantage.

In a context dominated by economic and geopolitical imperatives, the more cautious approaches to AI, those calling for a slowdown and for reflection on the sector's risks and contradictions, struggle today to get a hearing. The Trump administration itself supports this more reckless line, seeking to regulate the tech sector, and AI in particular, as little as possible in order to leave companies the widest possible room for maneuver.

What worries the authorities?

Observers are worried not only by the most extreme scenarios, but also by more concrete and immediate phenomena, such as the spread of chatbots designed to provide users with companionship, presenting themselves as friends or virtual partners.

Companies like xAI, founded by Elon Musk, have for some months been offering digital "Companions": customizable chatbots with seductive-looking avatars. OpenAI is moving in a similar direction, recently announcing that starting in December, adult users will be able to use ChatGPT for erotic conversations. Services of this kind have already been at the center of controversies and legal disputes, however, and the field's further expansion deeply worries experts.