Moral decision-making in the age of Artificial Intelligence

Artificial intelligence, or AI as it is more commonly known, has emerged as a potent force capable of touching every facet of our lives. From virtual assistants and chatbots handling simple tasks to autonomous vehicles and medical imaging systems aiding in complex decision-making, AI has become an integral part of daily life, making it more accessible and more efficient. However, the development of AI has also raised significant ethical concerns.

Life and death decisions

Imagine a runaway trolley is heading towards a group of five people, and you must decide whether to divert it onto a side track by pulling a lever, killing one person instead. What would you do? This thought experiment, known as the “trolley problem” and a classic in moral philosophy, asks whether it is ethical to sacrifice one life to save many. Although such dilemmas have been criticized as unrealistic, they raise fundamental questions about moral decision-making. Applied to self-driving vehicles, the trolley problem becomes a decision about how to distribute risk among road users. Should a self-driving car prioritize the safety of its passengers over that of pedestrians? Spare the life of a young person or that of an elderly person? The answers to these questions are not clear cut. Similar dilemmas arise in other domains, such as organ donation: who should receive a life-saving transplant? Recent studies have shown that AI developers’ design choices can influence these life-and-death decisions.

Biases

The above examples reveal how the design of AI systems influences decision outcomes. Beyond decisions with life-and-death stakes, there are numerous other cases where AI raises ethical concerns. Although designed to overcome human subjectivity, bias, and prejudice in decision-making, AI can be as flawed as humans are. Decisions made by AI are only as good as the data the system was trained on; as the saying goes, “garbage in, garbage out”. Biases, such as algorithmic bias, have been described as the “Achilles’ heel” of AI, as they have led to unjust outcomes based on factors such as gender, race, or skin tone.
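The “garbage in, garbage out” problem can be made concrete with a minimal, entirely hypothetical sketch: a naive model that estimates hiring rates from historically biased records will simply reproduce the bias in those records, scoring equally qualified candidates differently by group. The data and group labels below are invented for illustration only.

```python
from collections import defaultdict

# Hypothetical historical records: (group, qualified, hired).
# The past decisions under-hire qualified members of group "B";
# any model fit to these labels inherits that pattern.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Train": tally hires per (group, qualified) combination.
counts = defaultdict(lambda: [0, 0])  # key -> [hired, total]
for group, qualified, hired in history:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predict_hire_rate(group, qualified):
    """Estimated P(hired | group, qualified) learned from the biased data."""
    hired, total = counts[(group, qualified)]
    return hired / total if total else 0.0

# Equally qualified candidates receive different scores purely by group:
print(predict_hire_rate("A", True))  # 1.0
print(predict_hire_rate("B", True))  # 0.5
```

Nothing in the code singles out a group explicitly; the disparity comes entirely from the training data, which is why data quality and auditing matter as much as the algorithm itself.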

What can we do?

Developing morally behaving AI systems is a challenging task due to the complexity of morality as a concept and process. As AI becomes more integrated into our lives, addressing the ethical concerns surrounding its use has become an urgent matter. To mitigate the risks associated with AI development and use, companies and governments have established guidelines and frameworks based on principles such as diverse and unbiased data, transparency, and accountability. By taking these steps, humans can harness the power of AI while addressing the ethical concerns that come with it.