Source: ICO
In the first in-depth post of our AI framework blog series, Reuben Binns, our Research Fellow in AI, and Valeria Gallo, Technology Policy Adviser, explore how organisations can ensure 'meaningful' human involvement so that AI-assisted decisions are not mistakenly classified as solely automated.
This blog forms part of our ongoing work on developing a framework for auditing AI. We are keen to hear your views in the comments below or you can email us.
Artificial Intelligence (AI) systems[1] often process personal data to either support or make a decision. For example, AI could be used to approve or reject a financial loan automatically, or support recruitment teams to identify interview candidates by ranking job applications.
Article 22 of the General Data Protection Regulation (GDPR) establishes stricter conditions for AI systems that make solely automated decisions, i.e. decisions taken without human input that have legal or similarly significant effects on individuals. AI systems that only support or enhance human decision-making are not subject to these conditions. However, a decision will not fall outside the scope of Article 22 just because a human has 'rubber-stamped' it: human input needs to be 'meaningful'.
The degree and quality of human review and intervention before a final decision is made about an individual is the key factor in determining whether an AI system is solely or non-solely automated.
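To make the distinction concrete, the loan example above can be sketched as a small decision pipeline. This is an illustrative sketch only, not anything from the ICO's guidance: all names (`model_score`, `decide`, `Decision`) and the threshold logic are hypothetical, and the point is simply that the presence of a reviewer who weighs the model's output against other factors, and can override it, is what moves the decision out of the 'solely automated' category, whereas a reviewer who mechanically accepts the score would not.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    solely_automated: bool

def model_score(application: dict) -> float:
    """Stand-in for an AI credit model's output score (hypothetical)."""
    return 0.8 if application.get("income", 0) > 30_000 else 0.3

def decide(application: dict, human_reviewer=None) -> Decision:
    score = model_score(application)
    if human_reviewer is None:
        # No human input at all: a solely automated decision,
        # potentially within the scope of Article 22.
        return Decision(approved=score >= 0.5, solely_automated=True)
    # A human considers the score plus additional context and may
    # override it. Merely rubber-stamping the score would NOT count
    # as meaningful input, even though a human is formally present.
    approved = human_reviewer(score, application)
    return Decision(approved=approved, solely_automated=False)

# A reviewer who weighs context rather than rubber-stamping the score
def reviewer(score: float, application: dict) -> bool:
    return score >= 0.5 and not application.get("flagged_for_review", False)

auto = decide({"income": 50_000})
reviewed = decide({"income": 50_000, "flagged_for_review": True}, reviewer)
```

In this sketch the same applicant is approved by the automated path but rejected after human review, showing that the reviewer's judgment actually changed the outcome rather than rubber-stamping it.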