Will we ever really be able to regulate artificial intelligence?

What's in the AI Act and in what context it was born

Recently, the European Parliament and the Council of the European Union reached an agreement that will soon lead to the world's first law regulating artificial intelligence. This is the so-called AI Act, which in its final version will govern the development and use of artificial intelligence systems, with the aim of protecting citizens' privacy without curbing the innovative potential and growth of the sector. The European Union is the first institution to adopt such a legal instrument - the agreement was called «historic» by the Internal Market Commissioner, Thierry Breton.

The European initiative covers a wide range of areas and applications of AI, from the algorithms that power self-driving cars to online misinformation, through the systems behind tools such as ChatGPT or those used to screen new hires. One of the central issues in the agreement concerns facial recognition systems for security cameras. Several countries favored very restrictive measures, to avoid the risk of mass profiling, while Italy, Hungary and France supported a more permissive stance. In the end, the practice has been banned altogether, except in specific cases such as terrorist threats or the search for victims.

The regulation also includes additional prohibitions on the use of AI, such as the recognition of emotions in the workplace and in schools, or the calculation of a "social score" for individuals. Under such schemes, each citizen is assigned points based on their behavior, and those with high scores - unlike those with low scores - are granted access to certain services. This practice is common in China, a country that does not trouble itself with ethical questions in the field of artificial intelligence, and it well represents the most dangerous and dystopian implications of the uncontrolled use of AI systems.

In what context does the AI Act fit

@torcha Europe will have its AI Act, the world's first law regulating the development and use of artificial intelligence systems. This document will have to specify the uses of artificial intelligence in order to protect the privacy and other rights of European citizens. The text will now be refined by technicians, and there is not yet an official date for its entry into force. Biometric recognition in particular, that is, the set of systems used to identify people through security cameras, was one of the most debated points. According to the latest decisions, facial recognition has been banned, with three exceptions: a clear threat of a terrorist attack, the search for victims, and investigations into serious crimes such as murder, kidnapping and sexual violence. But what do other countries think of its use? China has tried over time to regulate the uses of AI. The Cyberspace Administration of China (the country's main Internet control and censorship body) has published a series of guidelines to regulate the generative artificial intelligence sector. Biometric systems have also been addressed, but the strictest rules apply only to private actors, while limits or restrictions on public authorities are lacking. The only new general provision merely requires that every use of facial recognition be reported, including in public squares and offices.

The AI Act proposal was submitted by the European Commission as early as 2021. Now that political agreement has been reached, the text will be refined by the technicians called upon to write the final version of the law, which will then have to be formally approved once more. Reaching political agreement alone required very lengthy evaluations and negotiations, partly because artificial intelligence is a booming field whose contours are still blurred. The field, to simplify greatly, is divided between those who believe that the unchecked development of AI will eventually harm humanity, and those who argue that over-regulating it would limit its crucial role in ensuring a better future for us. The recent crisis at OpenAI, the company behind ChatGPT, which fired and then rehired its CEO Sam Altman, reflects the significant divisions within the company between these two culturally different factions. The tension between "optimists" and "pessimists," if you can call them that, is nothing new in this area. OpenAI itself was originally founded as a nonprofit meant to foster AI development in a cautious and transparent manner, distinguishing itself from the likes of Google, whose rapid advances in the field alarmed insiders.

Altman himself has made several concerned statements in recent months about the possible future effects of AI. He was, for example, among the signatories of an open letter calling for «mitigating the risk of extinction from AI» to be treated as «a global priority alongside other societal-scale risks such as pandemics and nuclear war». Elon Musk moved in the same direction, signing another open letter that called for a pause of at least six months in the development of similar technologies. It was published by the Future of Life Institute, an association whose stated aim is «steering transformative technology towards benefiting life and away from extreme large-scale risks». With the spread of suitable infrastructure and the increasing availability of big data, the costs of artificial intelligence are falling, its areas of application are multiplying, and the number of companies active in the field, or simply exploiting these technologies, is growing.

This escalation has attracted the interest and investment of many business entities, but it has also helped to radicalize the opinions of those in the field. The most direct consequence is the abandonment of the more cautious, ethics-driven attitude toward progress in AI, which is accused of holding back investment and, indirectly, the very future of humanity. This scenario was taking shape, at least on our continent, in the absence of any legislation on the subject issued by national governments or international organizations. Precisely because artificial intelligence is today's most discussed and, at the same time, most attractive business, Europe has chosen to change course and start regulating the sector.