
A Guide to the AI Act: December Edition

DEXAI • Dec 07, 2022


Fourth presidency compromise text & Harmonised rules on artificial intelligence and amending certain Union legislative acts - General approach


The EU AI Act, also called the Harmonised Rules on Artificial Intelligence, was proposed by the European Commission in April 2021, and it aims to set a global standard for AI legislation. The rules aim to build an "environment of trust" that manages AI risk and gives priority to human rights in the development and deployment of AI systems. They are likely to become the global gold standard for AI regulation, much as the General Data Protection Regulation (GDPR) did for privacy in 2016. Following a protracted consultation process, several rule changes in the form of compromise texts have been put forward since the regulation was first proposed.


According to the Act, all companies based in the EU that provide artificial intelligence (AI) systems and solutions, as well as foreign companies that sell AI systems in the EU, must abide by the rules. The Act also applies to providers and users established outside the EU whenever the system's output is used inside the EU. However, public authorities in third countries and AI systems deployed for military purposes are exempt. It should be noted that the AI Act will not apply to scientific research.


In this report, we aim to highlight i) the core purpose and relevance of the EU AI Act and ii) the latest updates on the fourth presidency compromise text (25 October 2022) and the Council's general approach (25 November 2022).


Timeline:


  • The Slovenian Presidency drafted the first, partial compromise proposal, which covered Articles 1-7 and Annexes I-III of the proposed AIA. The French Presidency continued the drafting process and by the end of its term it redrafted the remaining parts of the text (Articles 8-85 and Annexes IV-IX) and presented the entire first consolidated compromise proposal on the AIA on 17 June 2022.
  • On 5 July 2022, the Czech Presidency held a policy debate in WP TELECOM on the basis of a policy options paper, the outcomes of which were used to prepare the second compromise text. Based on the reactions of the delegations to this compromise, the Czech Presidency prepared the third compromise text, which was presented and discussed in WP TELECOM on 22 and 29 September 2022.
  • During the WP TELECOM meeting on 25 October 2022, the Czech Presidency presented the changes made in the fourth compromise proposal and invited the delegations to indicate any outstanding issues with the text.
  • On 18 November 2022, Coreper examined this compromise proposal and unanimously agreed to submit it, without changes, to the TTE (Telecommunications) Council in view of a general approach at its meeting on 6 December 2022. On 25 November 2022, the Council adopted its general approach on the Artificial Intelligence Act, which aims to ensure that AI systems placed on the EU market and used in the Union are safe and respect existing law on fundamental rights and Union values.


We have prepared a December report which summarises the following:


  • Risk-based approach: The proposal for the EU AI Act sets forth a risk-based approach for regulating AI systems.


  • What do they really mean by high-risk? The concept of a high-risk AI system is not explicitly defined in the proposal. Instead, the proposal provides two lists, along with certain conditions, according to which high-risk AI systems shall be identified.


  • Application of high-risk systems: According to the latest presidency compromise (19 October 2022), there are eight main use cases where an AI system may be classified as high-risk.


  • Three use cases: Biometrics, Insurance and Financial Intelligence: We illustrate the three most discussed topics in the latest presidency compromise text, i) biometrics, ii) insurance and iii) financial intelligence, to present the nature of the debate over what should and should not be considered high-risk, and what this entails for users and providers.
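To make the risk-based logic above concrete, here is a purely illustrative Python sketch (not part of the Act or the report) of how a provider might triage a system by its intended-use area. The tier names follow the proposal's general scheme (prohibited practices, high-risk, transparency-only, minimal risk), and the eight areas listed correspond to the high-risk use-case areas of Annex III referenced in the compromise text; the function and its parameters are our own invention for illustration.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely following the proposal's scheme."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk (Annex III area)"
    MINIMAL = "no additional obligations"

# The eight Annex III high-risk use-case areas referenced in the compromise text.
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def classify(area: str, prohibited: bool = False) -> RiskTier:
    """Toy triage: map an intended-use area to a risk tier."""
    if prohibited:
        return RiskTier.UNACCEPTABLE
    if area in ANNEX_III_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL
```

For example, `classify("law enforcement")` lands in the high-risk tier, while an area outside Annex III falls through to minimal risk; the real classification under the Act of course also depends on the conditions attached to the two lists mentioned above.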


🤝 The main topics that the report tackles are the following:


✅ The Nature of the EU AI Act (old vs. new version)

✅ Fundamental Rights

✅ A Risk-Based Approach

✅ An Overview of High-Risk AI Systems & Obligations

✅ Use Case 1: Biometrics

✅ Use Case 2: Insurance

✅ Use Case 3: Financial Intelligence

✅ Obligations on Providers and Users of these Systems



You can download our report here:


References:


  • https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
  • https://artificialintelligenceact.eu/wp-content/uploads/2022/07/AIA-CZ-1st-Proposal-15-July.pdf
  • https://artificialintelligenceact.eu/wp-content/uploads/2022/09/AIA-CZ-2nd-Proposal-16-Sept.pdf
  • https://artificialintelligenceact.eu/wp-content/uploads/2022/10/AIA-CZ-4th-Proposal-19-Oct-22.pdf
  • https://media-exp1.licdn.com/dms/document/C4E1FAQE_BP_RNyevMw/feedshare-document-pdf-analyzed/0/1670330160433?e=1671062400&v=beta&t=omFUdLb0cTJGsQTmAEukfQ9neyjRtvgh5E1Q82IT0Lo

