
AI Ethics Officer: The Profession of the Future in the Field of Artificial Intelligence

Jan 19, 2024

AI Ethics Officer: everything you need to know about one of the emerging professions in the AI sector


In this article, we provide a complete overview to help you understand who the AI Ethics Officer is, and why it is one of the most promising professions in the field of Artificial Intelligence.


Table of Contents


  • Introduction to the Importance of Ethics in Artificial Intelligence
  • AI Ethics Officer: An Emerging and Strategic Profession
  • The AI Act and the Growing Need for Ethics in AI
  • Role and Skills of the AI Ethics Officer
  • Master in AI Ethics: A Path to Becoming a Pioneer of AI Ethics


Introduction to the Importance of Ethics in Artificial Intelligence

In recent years, Artificial Intelligence has moved from being a technology of the future to a concrete and influential reality in our lives.

Having become a central topic in public debate, AI has emerged for the first time as an advanced tool for solving countless tasks across the most diverse fields. The rapid evolution of these technologies, however, raises urgent ethical questions, confronting us with the need to define how humans and machines should interact in the new scenarios opening up before us.

The challenges and risks tied to the ethics of artificial intelligence are taking on unprecedented importance and complexity. Companies are rapidly expanding the reach and scope of their AI systems, deploying them in ever broader and more diverse contexts. This evolution brings with it a fundamental expectation: AI systems must adhere to social and ethical norms, and support decision-making processes that are fair, consistent, transparent, explainable, and free of bias. However, defining what is ethical and socially acceptable is not simple, not even for human operators.

One of the most persistent and complex problems for both individuals and society is systemic bias, an issue that artificial intelligence tends to exacerbate and amplify in both frequency and concrete impact. In corporate environments, unethical behavior has always been a risk, but with the introduction of AI this risk grows, driven by the impact of inaccurate predictions and tasks performed in non-transparent ways. In the public sphere, the risks are even greater, given the role played by public administration and governmental bodies in key areas such as the management of essential public services, the administration of justice, crime prevention, and so on. The actions of an AI system, although technically correct, may prove unacceptable from an ethical and social standpoint.
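To make the idea of measurable bias concrete, here is a minimal sketch of one common fairness metric, the demographic parity difference: the gap in positive-decision rates between two groups. The function name, data, and group labels are illustrative assumptions, not drawn from any real system or from the Master's curriculum.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of binary decisions (1 = positive outcome)
    groups: list of group labels, aligned with predictions
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(selected) / len(selected)
    values = list(rates.values())
    return abs(values[0] - values[1])


# Hypothetical binary decisions (1 = approved) for two demographic groups.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is approved at a 0.75 rate, group B at 0.25: a gap of 0.5.
print(demographic_parity_difference(preds, groups))
```

A value of zero would mean both groups receive positive outcomes at the same rate; large gaps like the one above are the kind of signal an AI Ethics Officer would flag for further investigation.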

In the coming years it will therefore be essential to deploy AI systems that not only comply with the law but are also aligned with society's ethical and moral values. This task requires a deep understanding of the underlying ethical principles, as well as the technical skills to embed those values in AI systems. It is precisely in this context that the role of the AI Ethics Officer emerges: a professional who guides the responsible development and ethical use of AI, ensuring that the technology works for the good of society as a whole.

AI Ethics Officer: An Emerging and Strategic Profession

The EU AI Act represents a major effort by the European Union to lead the global regulation of artificial intelligence and to set a worldwide standard. The AI Act's main objective is to put human rights first in the development and deployment of AI, categorizing systems according to the impact they can have on people's lives. In particular, high-risk AI systems will have to satisfy very stringent requirements and be assessed both before being placed on the market and throughout their life cycle. Companies and organizations that fail to meet the compliance requirements for their systems will be subject to penalties.

The role of the AI Ethics Officer becomes fundamental in this context. These professionals will guide companies in adapting to the AI Act, ensuring that the AI systems they develop and deploy meet the requirements of transparency, fairness, privacy, and security. Since the AI Act applies to providers, distributors, and deployers of AI systems placed on the market or put into service in the EU, the AI Ethics Officer's responsibility will extend beyond national borders, shaping companies' practices globally.

The AI Ethics Officer will have to ensure the adoption of risk-management frameworks, the creation of detailed technical documentation, the keeping of records, and the development of systems that allow adequate human oversight, in order to prevent or minimize risks to health, safety, or fundamental rights. In addition, the AI Ethics Officer will have to guarantee an adequate level of accuracy, robustness, and cybersecurity throughout the system's life cycle.
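The obligations above lend themselves to systematic tracking. The following is a purely hypothetical, simplified sketch of how such a checklist might be modeled in code; the class and field names are illustrative assumptions and do not mirror the AI Act's legal text or any official compliance tool.

```python
from dataclasses import dataclass, fields


@dataclass
class HighRiskSystemAudit:
    """Hypothetical checklist for the high-risk obligations discussed above."""
    risk_management_framework: bool = False
    technical_documentation: bool = False
    record_keeping: bool = False
    human_oversight: bool = False
    accuracy_robustness_cybersecurity: bool = False

    def open_items(self):
        """Return the names of obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


# Example: a system with documentation and oversight in place, but gaps elsewhere.
audit = HighRiskSystemAudit(technical_documentation=True, human_oversight=True)
print(audit.open_items())
```

A structured record like this would let an AI Ethics Officer report outstanding compliance gaps at any point in the system's life cycle, not only before market placement.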

With the Act's formalization and entry into force approaching, companies must begin preparing, with the help of AI Ethics Officers, to meet these new and stringent regulations.

Role and Skills of the AI Ethics Officer

The AI Ethics Officer will play a crucial role in the field of artificial intelligence thanks to the interdisciplinary nature of the position.
But what are the technical skills the role requires?
Here is a list of the main skills of the AI Ethics Officer:

Technical Knowledge

  • Ability to understand the technical details that shape the design, development, and deployment of AI systems.

Regulatory Knowledge

  • Knowledge of the regulations applicable to AI contexts, such as data protection law, and a strategic perspective on possible future regulations and ethical challenges.
  • Ability to navigate a rapidly evolving legal landscape.

Knowledge of the Relevant Sectors

  • Ability to contextualize AI ethics in corporate and institutional settings, translating applicable policies and ethical frameworks into concrete scenarios.
  • Understanding of the relevant sector or sectors and of the implications of AI ethics for business processes, systems, and stakeholder groups, together with an awareness of how the industry is addressing AI ethics.

Communication Skills and the Ability to Work Across Organizational Boundaries


  • Ability to communicate effectively, helping management and operational teams through the adoption and implementation of AI systems.
  • Ability to operate effectively across organizational boundaries, both vertically within the corporate hierarchy and horizontally across functions and business units.


The Master in AI Ethics: A Path to Becoming a Pioneer of AI Ethics

Dexai's Master in AI Ethics offers a comprehensive training program that aims to prepare future AI Ethics Officers.
The program covers a wide range of topics, providing solid foundations in the theory of AI and AI Ethics, before moving on to practical themes such as AI bias, data protection principles, and ethical development, and finally to methodologies for assessing the social impact and sustainability of AI.


Case Studies and the Practical Approach of the Master in AI Ethics


The practical approach of the Master in AI Ethics is essential to prepare students for the real challenges of the field. By analyzing concrete case studies, students learn to navigate complex situations in which ethical decisions must be made quickly and confidently. The case studies range from bias and discrimination in AI to questions of privacy and consent, giving students an in-depth understanding of the ethical challenges they will face in their careers.

Conclusion: Why Choose a Career in AI Ethics

Choosing a career in AI Ethics means not only embarking on a cutting-edge professional path, but also actively helping to shape the future of AI in a responsible and ethical way. The Master in AI Ethics aims to train the professionals who will lead the technological change of the coming years.


If you would like to learn more about the program, visit the dedicated page:


Master in AI Ethics

