Ethical perspectives on ChatGPT

In the final of three essays, Marc Steen uses ChatGPT as a case study to show how to apply different ethical perspectives, and outlines practical steps people can take to start incorporating ethics into their projects

In previous essays, we discussed ethics as a process of reflection and deliberation; an iterative and participatory process in which you look at your project from four different ethical perspectives: consequentialism (pluses and minuses), duty ethics (duties and rights), relational ethics (interactions and power), and virtue ethics (virtues and living together).

We will now test-drive these perspectives, using ChatGPT as a case study. After that, we will offer several suggestions for getting started yourself.

ChatGPT on the table

A good analysis starts with clarifying: “What are we talking about?” The more specific and practical, the better. If the application you work on is not yet ready, you can create a sketch and put that on the table. It does not need to be finished – better, even, because then you can feed what you learn into further development.

For our case study, we can look at how journalists can use ChatGPT in their work. We can also steer development and use towards values such as human autonomy, prevention of harm, a fair distribution of benefits and burdens, and transparency and accountability.

Consequentialism: pluses and minuses

With consequentialism, you imagine the pluses and minuses of your project’s outputs in the real world, in society, in everyday life. On the plus side, journalists, and others, can use ChatGPT to work more efficiently. On the minus side, this efficiency can lead to others losing their jobs.

On the plus side, ChatGPT can help people improve their vocabulary and grammar. On the minus side, people can use ChatGPT to very quickly and very cheaply produce tons of disinformation and thereby undermine journalism and democracy.

It is increasingly difficult to spot fake news, especially when it is presented together with synthetic photos or videos. Experts expect that by 2026, no less than 90% of online content will be created or modified with artificial intelligence (AI).

We can also look at the costs to the environment and society: mining the materials that go into the chips on which ChatGPT runs, and the resources spent to train it. And we can look at how the pluses and minuses are distributed.

People in low-wage countries cleaned up the data to train ChatGPT, often under poor working conditions. Without proper regulation, all the pluses go to a small number of companies, while the costs go to the environment and people on the other side of the world.

Now, as a professional, if you work on such an application, you can develop or deploy it in ways that minimise the disadvantages and maximise the advantages.

Duty ethics: duties and rights

With duty ethics, we look at duties and rights, and when they conflict, we seek a balance. Typically, developers have duties and users have rights. This overlaps with a legal view, which is very topical – in June, the European Parliament approved its negotiating mandate for the AI Act, which also regulates large language models (LLMs), for example, with regard to transparency.

For ChatGPT, fairness and non-discrimination are also critical. We know that bias in data can lead to discrimination in algorithms; Cathy O’Neil and Safiya Umoja Noble have written about that. Out of a similar concern, Emily Bender, Timnit Gebru et al., in their Stochastic Parrots paper, call for more careful compilation of datasets. Developers have a duty of fairness and care because users have a right to fairness and non-discrimination.

We can also look at why ChatGPT produces texts that run rather smoothly. That is because it is trained on many texts written by people – which can infringe on those authors’ copyrights. Recently, two authors filed a lawsuit against OpenAI, the company behind ChatGPT, over their copyrights.

Another key concern in duty ethics is human dignity. What happens to human dignity if, for example, an organisation uses ChatGPT to communicate with you, instead of having a human being talk with you?

In short, duty ethics can help to steer your project between the guardrails of legislation, avoid bias and discrimination, respect copyright, and develop applications that empower people.

Relational ethics: interactions and power

Through the lens of relational ethics, we can look at the influence of technology on how people interact and communicate with each other. When I think of ChatGPT, I think of ELIZA, the chatbot Joseph Weizenbaum programmed in the 1960s. He was unpleasantly surprised when people attributed all kinds of intelligence and empathy to ELIZA, even after he explained that it was only software. People easily project human qualities onto objects. Blake Lemoine, now ex-Google, believed that Google’s LaMDA model was conscious.

But the reverse is also possible: our use of chatbots can erode human qualities. If you use ChatGPT indiscriminately, you will get mediocre texts, and that can erode communication. In addition, ChatGPT can create texts that flow smoothly, but it has no understanding of our physical world, no common sense, and hardly any notion of truth. It produced this sentence: “The idea of eating glass may seem alarming to some, but it actually has several unique benefits that make it worth considering as a dietary addition”. Clearly, indiscriminate use of ChatGPT can carry serious risks, in healthcare, for example.

In addition, we can ask questions about power and about the distribution of power. In a world where many people search for information online, the corporations that own and deploy LLMs like ChatGPT have disproportionate power.

Virtue ethics: living well together

Virtue ethics has roots in Aristotle’s Athens. Its purpose is to enable people to cultivate virtues so that we can live well together. Technologies play a role in this: a particular application can help, or hinder, people in cultivating a particular virtue.

Spending all day on social media corrodes your capacity for self-control, a classical virtue. The goal is to find, in each specific situation, an appropriate mean for a specific virtue. Take courage. If you are strong and you see someone attack another person, it is courageous to intervene; to stay out of it would be cowardice. But if you are not strong, it would be rash to interfere – staying at a distance and calling the police is then courageous.

Now, what could you do as a developer? You could add features that allow people to develop certain virtues. Take social media as an example. Often, there is disinformation on social media, and that undermines honesty and citizenship. When online media is full of disinformation, it is difficult to determine what is or is not true, and that subverts any dialogue.

You can help to create an app that promotes honesty and citizenship, with features that call on people to check facts, to be curious about other people’s concerns, and to engage in dialogue – instead of making people scroll through fake news and fuelling polarisation.

Integrating ethics into your projects

Now, say you want to get started with this kind of ethical reflection and deliberation in your projects. It is best to integrate it into working methods that people already know and use.

For example, you can integrate ethical aspects into your human-centred design methods. You present a sketch or prototype and facilitate a discussion that also covers ethical aspects. Suppose you work on an algorithm that helps mortgage advisors determine what kind of mortgage someone can get. You can ask the advisors what effects the use of such an algorithm would have on their job satisfaction, on their customers’ experiences, or on their autonomy. With their answers, you can adjust further development.

It can also be useful to investigate how diverse stakeholders view the application you work on, and what values they find important. You can organise a meeting with a supplier, a technical expert and someone who knows how the application is used in the field. Asking questions and probing further is important here. Suppose two people are talking about an SUV and both think safety is important. One thinks from the perspective of the SUV’s owner; the other thinks about the safety of cyclists and pedestrians. The challenge is to facilitate dialogue on such sensitive topics.

Finally, you can look at the composition of your project team. A diverse team can look at a problem from different angles and combine different types of knowledge into creative solutions. This can prevent tunnel vision and blind spots. In the tradition of technology assessment, you can explore what could go wrong with the technology you work on.

If this is done in a timely manner, measures can be taken to prevent problems and deal with risks. People with divergent perspectives can ask useful questions. That gives a more complete picture – sometimes a complex picture; but hey, the world is complex.


Marc Steen’s book, Ethics for people who work in tech, is out now via Routledge.
