Understanding the ethics of AI

Marc St-Pierre

June 2, 2020 · 6 minute read

In 2018, Elaine Herzberg was killed in what is believed to be the first pedestrian fatality by an autonomous vehicle. The incident caught the world’s attention and shone a light on the ethics of AI.

The death of Elaine Herzberg was shocking because it raised questions about whether we could trust Artificial Intelligence (AI) with something as important as our lives. But what does that mean for businesses looking to adopt this technology? The World Economic Forum (WEF) frames this challenge as responsible AI, and research has shown that organizations are increasingly concerned about the implications of AI.

As Angel Gurría, Secretary-General of the Organisation for Economic Co-operation and Development (OECD), commented: “To realize the full potential of AI technology, we need one critical ingredient. That critical ingredient is trust.”

Defining the ethics of AI

We’ve witnessed quite a lot of concern from the likes of Tesla’s Elon Musk about AI’s potential for damage. The good news is that AI is not about to reach a state of general intelligence – where machines can understand and learn just like humans – any time soon. In the meantime, it remains a tool created by humans to support human activity.

AI – especially machine learning – works by taking data inputs, learning something from the data and, from that, inferring something to make predictions.
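The learn-then-infer loop described above can be sketched in a few lines. The “model” here is deliberately trivial (a one-feature threshold classifier), and all data and names are invented for illustration:

```python
# A minimal sketch of the learn-then-infer loop: take data inputs,
# learn something from them, then infer a prediction for new data.
# The hypothetical data represents (annual_income, loan_repaid) records.

def learn_threshold(examples):
    """Learning step: find the midpoint between the average input of
    each outcome class (a crude one-feature classifier)."""
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

def predict(threshold, x):
    """Inference step: apply what was learned to a new, unseen input."""
    return 1 if x >= threshold else 0

history = [(35_000, 0), (40_000, 0), (82_000, 1), (95_000, 1)]
threshold = learn_threshold(history)   # learning from the data
print(predict(threshold, 60_000))      # prediction for a new input → 0
```

Real machine learning replaces the hand-written threshold rule with statistical optimization over many features, but the shape of the loop is the same.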

This raises the issue of how we judge whether an output from an AI system is safe and will not succumb to bias or cause any harm. That is the crux of the ethics of AI.

How does an AI developer teach an autonomous vehicle to make a decision in the event of an accident? Should the AI prioritize protecting its passengers or the people outside the vehicle?

Determining whether an outcome is ethically acceptable can raise reasonable disagreements. In this period of COVID-19, physicians, politicians, and the public may disagree on the ethics around healthcare decisions, such as prioritizing ventilators for younger patients over older ones. If humans can disagree, how can an AI do better?

In a business setting where AI is being used to automate processes or improve the customer experience, the ethics may seem slightly less important. But, for all organizations, the major purpose of AI will be to deliver insight that improves decision-making. So, being able to trust and rely on that information is essential.

In a recent Accenture research report, over 90% of the most successful AI deployments had a focus on ethics – compared with under half of the least successful deployments.

Key issues within the ethics of AI

There are many ethical questions about the societal impact of AI, and if you are interested, I’d recommend the excellent Hitchhiker’s Guide to AI Ethics. For now, I am specifically going to concentrate on the ethics of creating AI solutions within a business environment.


Bias

The area of ethics that has perhaps received the greatest attention is bias, when skewed data models or developer prejudice unintentionally creeps into the AI system. Even giants like Apple and Goldman Sachs have fallen foul of this, with the Apple Card accused of gender bias. This isn’t surprising when you consider that there are 188 different cognitive biases. Whether it’s an unconscious prejudice of the AI system’s creator or a bias built into the data model the system uses, the outputs are likely to be unfair, discriminatory, or just plain wrong.
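One simple, commonly used starting point for detecting this kind of problem is to compare outcome rates across groups (often called a demographic parity check). This is a hedged sketch with invented data, and a rate gap is a red flag worth investigating, not proof of bias on its own:

```python
# A basic bias check: compare approval rates across a protected
# attribute. All records below are invented for illustration.

def approval_rate(decisions, group):
    """Share of approved decisions for one group."""
    outcomes = [d["approved"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes)

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

rate_a = approval_rate(decisions, "A")  # 0.75
rate_b = approval_rate(decisions, "B")  # 0.25
print(f"disparity: {rate_a - rate_b:.2f}")  # disparity: 0.50
```

Production fairness audits go much further (multiple metrics, statistical significance, intersectional groups), but even a check this simple can surface problems before a system ships.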

Accountability and explainability

The concepts of accountability and explainability are well understood in everyday life. If you are responsible for something, then you should be able to explain why it happened. The same is true in the world of AI. It’s essential that any action an AI takes can be fully explained and audited, so that the system can be held accountable.


To be accountable, the AI system has to be transparent. However, many AI solutions take a ‘black box’ approach that doesn’t allow visibility of the underlying algorithms. Sometimes this is because the algorithms are incredibly complex, but often it’s down to the vendor wishing to protect its own intellectual property. A new generation of AI solutions that embrace open source – such as OpenText Magellan – allows organizations to integrate their own algorithms and check the quality of algorithms against their own data. This also has the added benefit of supporting open source development of new algorithms, leading to faster, better-quality innovation.
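What transparency buys you in practice is the ability to break a decision down and log the reasons alongside the result. As a sketch, with a deliberately simple linear scoring model whose weights and feature names are invented, an explanation can travel with every decision:

```python
# An interpretable scoring model: because the weights are visible,
# each decision can be decomposed into per-feature contributions
# and recorded for audit. Weights and features are illustrative.

WEIGHTS = {"income": 0.5, "years_employed": 2.0, "missed_payments": -3.0}

def score_with_explanation(applicant):
    """Return the score plus the contribution of each feature."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

applicant = {"income": 10, "years_employed": 4, "missed_payments": 1}
total, why = score_with_explanation(applicant)
print(total)  # 10.0
print(why)    # {'income': 5.0, 'years_employed': 8.0, 'missed_payments': -3.0}
```

With a black-box model, producing the `why` dictionary is the hard part; explainability techniques exist for complex models too, but a model that is transparent by construction makes the audit trail trivial.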

Data assurance

A key point in the creation of AI systems is how you work with the data – especially personal data – used to populate your models. Machine learning, and deep learning in particular, requires huge data sets to learn and improve. The more data, the better the outcomes over time. However, privacy legislation – such as GDPR or CCPA – imposes new levels of responsibility on organizations for how they capture, store, use, share and report the personal data they hold. You need to be aware of how and why you’re processing the data and the risks involved.
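One common data-assurance step before records enter a training pipeline is to pseudonymize direct identifiers and drop fields the model doesn’t need. This sketch uses invented field names, and hashing alone does not amount to GDPR or CCPA compliance; the salt handling here is illustrative only:

```python
# Pseudonymizing a record before it enters a training pipeline:
# hash the direct identifier and drop unneeded personal fields
# (data minimization). Field names and salt are illustrative.
import hashlib

SALT = b"store-and-rotate-this-secret-separately"

def pseudonymize(record):
    out = dict(record)
    out["customer_id"] = hashlib.sha256(
        SALT + record["customer_id"].encode()
    ).hexdigest()
    out.pop("email", None)  # the model doesn't need it, so don't keep it
    return out

record = {"customer_id": "C-1042", "email": "a@example.com", "purchases": 7}
safe = pseudonymize(record)
print("email" in safe, safe["purchases"])  # False 7
```

Pseudonymized data is still personal data under GDPR if it can be re-linked, so steps like this reduce risk rather than remove the legal obligations discussed above.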

Establishing ethics in your AI capabilities

Even if you have a team of experienced data scientists in your organization, many of the ethical challenges will still be relatively new. It’s good practice to establish a steering team from across the business, and put in place an ethical framework that outlines what the AI is meant to do, how it should be created and what the expected outcomes are.

OpenText Professional Services have been working with organizations across the globe to ensure they embed – and follow – an ethical approach to the development of their AI systems. The Harvard Business Review lists technology, including artificial intelligence, as one of the key elements to achieving an effective and successful digital transformation. We can help by:

  • Conducting an assessment of your current AI processes, policies, and people skills
  • Developing risk mitigation strategies for AI development and deployment
  • Designing and overseeing tactics that realize your risk mitigation strategies
  • Developing a comprehensive prevention, risk management, and educational program
  • Creating a quality assurance and monitoring program with established KPIs
  • Establishing what AI success looks like from the beginning

The team includes a wide range of ethics experts, AI experts and data scientists who can help you successfully address the ethical challenges as AI is deployed deeper into your business.

If you’d like to know more about how OpenText Professional Services can help you manage the ethical risk in your AI journey, please visit our website.


Marc St-Pierre

Marc is VP of Consulting Services for the Security + Artificial Intelligence + Linguistics & Translation practice. For more than 15 years, Marc has led services groups specialized in advanced and emerging technologies. He has lectured on semantic technologies and led solution development such as AI-Augmented Voice of the Customer and Magellan Search+.
