
Source: Future of Privacy Forum

Advanced algorithms, machine learning (ML), and artificial intelligence (AI) are appearing across digital and technology sectors, from healthcare to financial institutions, and in contexts ranging from voice-activated digital assistants and traffic routing to identifying at-risk students and generating purchase recommendations on online platforms. Embedded in new technologies like autonomous cars and smartphones to enable cutting-edge features, AI is equally being applied to established industries such as agriculture and telecom to increase accuracy and efficiency. Machine learning is already becoming the foundation of many of the products and services in our daily lives, an underlying structure in much the same way that electricity faded from novelty to background during the industrialization of modern life 100 years ago.

Understanding AI and its underlying algorithmic processes presents new challenges for privacy officers and others responsible for data governance in companies ranging from retailers to cloud service providers. In the absence of targeted legal or regulatory obligations, AI poses new ethical and practical challenges for companies that strive to maximize consumer benefits while preventing potential harms.

Along with the benefits of the increased use of artificial intelligence and the machine learning models underlying new technology, we have also seen public examples of the ways in which these algorithms can reflect some of the most glaring biases within society. From chatbots that “learn” to be racist to policing algorithms with questionable results and cameras that do not recognize people of certain races, the past few years have shown that AI is not immune to problems of discrimination and bias. AI, however, also has many potential benefits, including promising applications for the disability community and more accurate diagnoses and other improvements in healthcare. The incredible potential of AI means that it is important to address concerns around its implementation in order to ensure consumer trust and safety. The problem of bias or fairness in ML systems is a key challenge in achieving that trust. The issue is complex: fairness is not a fixed concept, and what is “fair” by one measure might not be equitable by another. While many industry leaders have identified controlling bias as a goal in their published AI policies, there is no consensus on exactly how this can be achieved.

In one of the most notable cases of apparent AI bias, ProPublica published a report claiming that an algorithm designed to predict the likelihood that a defendant would reoffend displayed racial bias. The algorithm assigned a score from 1 to 10, presented as an assessment of the risk that a given defendant would go on to reoffend, and this number was then often used as a factor in determining eligibility for bail. Notably, “race” was not among the inputs used in determining the risk level. However, ProPublica found that among defendants who did not go on to reoffend, Black defendants were more than twice as likely as white defendants to have received a mid- or high-risk score. ProPublica correctly highlighted the unfairness of such disparate outcomes, but the question of whether the scores were simply racially biased turns out to be more complicated.
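
As a rough illustration of the kind of disparity ProPublica measured, the sketch below computes, for each group, how often defendants who did not reoffend received a mid- or high-risk score. The column names, the threshold of 5, and the toy data are assumptions for illustration only; they are not the actual COMPAS dataset schema or figures.

```python
# Hypothetical sketch of a disparate-outcome check: among defendants who did
# NOT reoffend, how often did each group receive a mid- or high-risk score?
# Column names, threshold, and data are illustrative, not the real COMPAS data.
import pandas as pd

def high_score_rate_among_non_reoffenders(df: pd.DataFrame, group: str) -> float:
    """Share of non-reoffending defendants in `group` scored 5 or higher."""
    subset = df[(df["race"] == group) & (df["reoffended"] == 0)]
    return float((subset["risk_score"] >= 5).mean())

# Toy data for demonstration purposes only.
defendants = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"],
    "risk_score": [7, 6, 3, 2, 6, 1],
    "reoffended": [0, 0, 0, 0, 0, 1],
})

for g in ("black", "white"):
    print(g, high_score_rate_among_non_reoffenders(defendants, g))
```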

The algorithm had been calibrated to ensure that a given risk level “meant the same thing” from one defendant to another. Thus, of the defendants who were given a level 7, 60% of white defendants and 61% of Black defendants went on to reoffend – a statistically similar outcome. However, designing the program to achieve this form of equity (a “7” means roughly a 60% chance of reoffending, across the board) meant that the resulting distribution across low-, mid-, and high-risk categories left more Black defendants with higher scores. There is no mathematical way to equalize both of these measures at the same time within the same model: data scientists have shown that multiple measures of “fairness” may be impossible to achieve simultaneously.
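
One way to see why both measures cannot be equalized at once is a standard identity from the algorithmic-fairness literature: a group's false positive rate is determined by its base rate of reoffending, the precision of the “high risk” label, and the miss rate. If calibration (equal precision) and the miss rate are held fixed but the groups' base rates differ, the false positive rates are forced apart. The sketch below uses made-up numbers purely to illustrate the arithmetic; they are not COMPAS statistics.

```python
# Illustration of the fairness impossibility result with hypothetical numbers.
# For a binary "high risk" label, the false positive rate satisfies
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# where p is the group's base rate of reoffending, PPV is the precision of the
# label, and FNR is the miss rate. Equal PPV (calibration) and equal FNR across
# groups therefore force unequal FPRs whenever the base rates differ.

def false_positive_rate(p: float, ppv: float, fnr: float) -> float:
    """False positive rate implied by base rate p, precision ppv, and miss rate fnr."""
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Same calibration (PPV = 0.60) and same miss rate (FNR = 0.30) for both groups,
# but different hypothetical base rates of reoffending:
for group, base_rate in [("group A", 0.50), ("group B", 0.30)]:
    print(group, round(false_positive_rate(base_rate, ppv=0.60, fnr=0.30), 3))
# Prints ~0.467 for group A and ~0.2 for group B: calibration plus equal miss
# rates cannot coexist with equal false positive rates when base rates differ.
```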

Click here to read the full article.
