Ethics is a hot-button topic of discussion for the technology industry right now, especially in the rapidly developing fields around artificial intelligence (AI), machine learning and automated decision systems.
Consumers and tech industry workers alike are raising their voices to influence what kinds of automated decision-making systems are designed, what decisions they should be allowed to make – both in terms of industry verticals and specific applications within them – and what datasets should be used to build the models that power those systems.
Google, for example, is pledging $25m towards “AI for social good”. Yet some critics say they are “filled with dread” at the prospect of efforts spearheaded by a tech giant that may accelerate the dominance of one mode of thinking to the detriment of other groups.
There is a growing emphasis on mitigating negative public perception of the tech industry through robust policies on data privacy and data security. One emergent strategy is developing self-checking mechanisms in the form of “chief ethical officers”.
Much like chief diversity officers at companies that continue to struggle to make meaningful changes in their makeup, these executives may end up like the eponymous character in Lois Lowry’s 1993 young adult novel The Giver.
These individuals, particularly at the highest levels of leadership, may end up bearing the lone responsibility for remembering and viscerally experiencing all the painful lessons of history, while the rest of the population exists in a state of blissful ignorance.
Swimming in the ethics sea
Technology platforms, including automated decision-making systems, can exacerbate existing social rifts, but technology does not create them in isolation. How can we build an ethics for AI when we are not even aligned on ethics in technology as a broader field or, indeed, in most other forms of human endeavour?
The data science community is beginning to use conversations around the risks of automating systemic biases to push for better understanding of how existing biases show up. When we can measure the impact of systemic inequality on particular populations, we are better able to understand and measure the impact of AI interventions on those populations.
Despite – or perhaps because of – the lack of consensus about whose ethics we are building for, various approaches are being developed to create more ethical AI and more ethical technology in general. These range from data literacy campaigns to flight plan-like checklists, from codes of practice to the emergent industry of algorithmic auditing.
Where do we go from here?
With so many ethical frameworks, toolkits and consulting services to choose from, what steps do industry leaders need to take to keep the dogs of AI firmly on the ethical leash?
Conduct an ethics audit. What are the internal perceptions of ethical responsibility within your company? Do managers and team members feel empowered to make and take ownership of ethical decisions? Do they recognise areas where their decisions have an ethical impact? Are parts of your organisation already using ethical frameworks to guide their decision-making?
Decide what your company ethics are. Drafting an ethical framework is more than running an opinion survey, but it does require reaching a consensus about the guiding principles of the group or organisation. How do these relate to your core company values? What internal and external resources can you draw upon to create a framework that will protect your employees and customers?
Consider how your framework addresses the ethics of obedience, the ethics of care and the ethics of reason.
Design ethical feedback loops in your projects. Depending on the level of autonomy in your organisation’s management style, this may entail getting team members on board to build their own canvases to support a discussion-led approach, or you may develop a more directive checklist of responsibilities.
Have a plan. Above all, think about what tools you have in place when ethical risks and ethical breaches are identified. You probably have a business continuity plan that outlines procedures for what happens in the event of fire, floods and other calamities – so why not for ethical risks, too?
Don’t do this alone
If there is one thing to take away, it is this: don’t do this alone. Ethics is a systemic-level conversation about the right thing to do. Follow the emerging discourse about AI ethics and the broader field of tech ethics to make informed contributions to this evolving dialogue.
At a time when conversations about what is right are becoming increasingly heated inside and outside the tech industry, the worst thing we can do is abdicate our responsibility to shape our ethical future. Start talking.