A Practical Guide to Building Ethical AI
However, the decision-making still rests on the assumptions the algorithm developers have adopted, e.g., on the relative importance of false positives and false negatives (i.e., the weights attributed to the different types of error) and on the accuracy sought (Berk, 2019). In all these fields, an increasing number of functions are being ceded to algorithms to the detriment of human control, raising concern over loss of fairness and equitability (Sareen et al., 2020). Furthermore, issues of garbage-in-garbage-out (Saltelli and Funtowicz, 2014) are prone to emerge in contexts where external control is entirely removed.
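The point about error weights can be made concrete with a small sketch (all scores, labels, and weights below are invented for illustration): the decision threshold that minimizes total cost shifts depending on how heavily the developers choose to penalize false positives versus false negatives.

```python
# Sketch of cost-sensitive threshold selection. The scores, labels,
# and weights are hypothetical; the point is that the "best" cutoff
# is an artifact of the chosen error weights, not of the data alone.

def weighted_cost(scores, labels, threshold, fp_weight, fn_weight):
    """Total cost when every score >= threshold is classified positive."""
    cost = 0.0
    for s, y in zip(scores, labels):
        pred = s >= threshold
        if pred and y == 0:        # false positive
            cost += fp_weight
        elif not pred and y == 1:  # false negative
            cost += fn_weight
    return cost

def best_threshold(scores, labels, fp_weight, fn_weight):
    candidates = sorted(set(scores)) + [1.1]  # 1.1 = "reject everyone"
    return min(candidates,
               key=lambda t: weighted_cost(scores, labels, t,
                                           fp_weight, fn_weight))

scores = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]  # model risk scores (made up)
labels = [0,   0,   1,   0,   1,   1]    # ground truth (made up)

# Penalizing false negatives heavily pushes the threshold down
# (flag more cases); penalizing false positives pushes it up.
t_fn_heavy = best_threshold(scores, labels, fp_weight=1, fn_weight=10)
t_fp_heavy = best_threshold(scores, labels, fp_weight=10, fn_weight=1)
print(t_fn_heavy, t_fp_heavy)  # → 0.4 0.7
```

Same data, opposite weights, different "optimal" decision rule: exactly the kind of developer-side assumption the passage describes.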
It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018), of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high up the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group.
Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice
Artificial intelligence systems “understand” and shape a lot of what happens in people’s lives. AI applications “speak” to people and answer questions when the name of a digital voice assistant is called out. They run the chatbots that handle customer-service issues people have with companies. They scour the use of credit cards for signs of fraud, and they determine who could be a credit risk. In the absence of explicit legal requirements, companies, like individuals, can only do their best to make themselves aware of how AI affects people and the environment and to stay abreast of public concerns and the latest research and expert ideas. They can also seek input from a large and diverse set of stakeholders and seriously engage with high-level ethical principles.
For instance, The Alan Turing Institute released a guide for the responsible design and implementation of AI (Leslie, 2019) that covers the whole life-cycle of design, use, and monitoring. However, the field of AI ethics is still in its infancy, and it remains to be worked out how AI development that genuinely incorporates ethical dimensions could be achieved. Some authors are pessimistic, such as Supiot (2017), who speaks of governance by numbers, where quantification is replacing traditional decision-making and profoundly affecting the pillar of equality of judgement.
Data Privacy
As discussed, the way that algorithms work is inherently different from the human brain. One consequence is that small manipulations in an AI's behavior can be triggered by minute changes in the input data, changes that are invisible to humans in most cases. To some, Eugene Goostman's victory might read as progress in the quest to create human-like machines. To others, it signals how far along machines have come in fooling humans, a threat that strengthens the case for ethical AI in business. It would be unethical for a credit-scoring AI, for instance, to consider gender, race, or a variety of other protected factors. Nonetheless, even if those features are explicitly excluded from the training set, the training data might well encode the biases of human raters, and the AI could pick up on secondary features that act as proxies for the excluded ones (e.g., silently inferring race from income and postal address).
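This proxy effect can be sketched with entirely synthetic data (every variable, number, and the deliberately trivial majority-vote "model" below are invented for illustration): even with the protected attribute excluded from the inputs, outcomes diverge by group because postal code stands in for it.

```python
import random
from collections import defaultdict

random.seed(0)

# Entirely synthetic data: a protected attribute, a "postcode" that
# correlates with it 90% of the time, and historical labels that
# encode bias against group 1. All numbers are invented.
n = 1000
protected = [random.randint(0, 1) for _ in range(n)]
postcode = [p if random.random() < 0.9 else 1 - p for p in protected]
label = [0 if (p == 1 and random.random() < 0.7) else 1 for p in protected]

# A trivial "model" trained only on postcode (the protected attribute
# is never an input): predict the majority historical label per postcode.
counts = defaultdict(lambda: [0, 0])
for z, y in zip(postcode, label):
    counts[z][y] += 1
majority = {z: (1 if c[1] >= c[0] else 0) for z, c in counts.items()}
pred = [majority[z] for z in postcode]

def approval_rate(group):
    members = [p for p, a in zip(pred, protected) if a == group]
    return sum(members) / len(members)

# Approval rates diverge sharply by protected group even though the
# protected attribute was excluded from the features.
print(approval_rate(0), approval_rate(1))
```

The gap arises purely because the model inherits the biased historical labels through the correlated proxy, which is the mechanism the passage warns about.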