Everything Will Change: A Conversation on Ethical AI with Dr. Sasha Luccioni


Mark J. Barrenechea

September 19, 2023 · 7 minutes read

We are in the midst of a massive platform shift—from digital to cognitive. I once wrote that the internet changed everything. But with AI, everything will change. Every role. Every organization. Every industry.

At OpenText, we are embracing this change by continuing to learn and raise the bar. As part of this journey, we invited Dr. Sasha Luccioni, Research Scientist and Climate Lead at Hugging Face, and one of MIT Technology Review’s “35 Innovators Under 35,” to speak with OpenTexters about one of the most crucial topics of our time—ethical AI.

You can read some of Dr. Luccioni’s extraordinary insights below, including how we can create more diversity in the field of machine learning, and why she chose not to sign the open letter calling for a pause on AI research. She also revealed how companies have an opportunity to take strong action at the crossroads of AI and climate change.

My warmest thanks to Dr. Luccioni for sharing her clear and powerful point of view.

* * *

Mark: What does it mean to be a computer scientist today, in the world of AI, automation and language models? I find that those who come into the field now are part philosopher, part physicist, part electrical engineer and part IT expert.

Sasha: I agree that it’s becoming more complex. In the past, you would build some toy network on some toy problem, and then just publish a paper and move on. Nowadays, you have to consider the societal implications. You have to consider the dual-use problem. For example, if you create something that can potentially be used to create new antibiotics or new medicine, it can also be used to create new poison. You have to cultivate this new way of thinking, and it’s really not something that gets taught in school.

Acting on the challenges at the intersection of climate change and AI

Mark: You’re a fourth-generation woman in science in your family. I’d love to hear your perspective on your pursuit of academic excellence and any learnings you can share to elevate OpenTexters.

Sasha: Especially in computer science, especially right now, it’s really important to have all sorts of voices, not only women but diverse communities and backgrounds. Early on, I was looking around my computer science classes, and I realized there were only a couple of women. In my PhD, I was the only one in my cohort. That was a really strong signal for me that I needed to stay, in order to make sure that there was a woman in the room and a woman at the table.

 I’m actually involved with an organization called Women in Machine Learning (WiML). Only 11% of people publishing at AI conferences are women, for example, which is very, very low. WiML was created as a way of cultivating networks, creating mentorships, organizing events. We really try to think through, how can we keep women in the ML community and make sure they’re not feeling alone? I had both my kids during my PhD, so I really want to make sure that others have support, have childcare at conferences and have the opportunity to meet other women, including senior women who have gone through this and can give advice and be mentors.

Mark: OpenText is on our AI Journey. Our view is that there’s going to be a lot of large language models. There could be thousands, all highly specialized. What’s your view? Is it going to rain large language models?

Sasha: I think it’s already starting to drizzle! But I see this as a really important point in time, where it can go different ways. It could be that we’ll see everyone want to implement general-purpose generative AI models into everything. That’s not the best way forward, because having interacted with these systems, they’re very good at answering questions, but they offer a very central, average point of view. It’s not nuanced. It doesn’t represent other cultures, other languages. If we start going in that direction, we might end up with an echo chamber of the same opinions.

Whereas if we have multiple models—I really am a fan of specific models, because often they’re more efficient, they’re more lightweight and they are better suited for whatever task you’re trying to do. I’ve worked in applied AI, and I’ve never encountered a situation when you could say, “let’s take a vanilla model and throw it at the issue, and it’s going to solve all the customer’s problems.”

What you need is a model that you can fine-tune, that you can adapt, where you can take your data and continue training it, or train from scratch. You need a way to customize. And there are issues of privacy as well. There are so many subtle issues that don’t get considered.
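For readers who want a concrete picture of the kind of customization Dr. Luccioni describes, here is a minimal sketch of fine-tuning a small, task-specific model on your own data using the Hugging Face transformers and datasets libraries. The base model, file names, column names and hyperparameters are illustrative placeholders, not recommendations from the conversation.

```python
# Minimal sketch: adapt a small, general base model to a specific task on private data.
# Assumes CSV files with "text" and "label" columns; all names and settings are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # a lightweight base model, not a large general-purpose one
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Your domain-specific data stays under your control, which also speaks to the privacy concern above.
dataset = load_dataset("csv", data_files={"train": "contracts_train.csv",
                                          "test": "contracts_test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="finetuned-model",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"])
trainer.train()  # continue training the base model on your own task
```

A specialized model fine-tuned this way is typically smaller and cheaper to run than a general-purpose generative model, which is part of the efficiency argument made above.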

Mark: I completely agree. If you’re trying to solve, for example, a liability contract problem across 10 million contracts, do you use a general-purpose model or something specific? We’re building our platform to be able to plug in open-source models, specialized language models, and of course to deploy work from Google, OpenAI and others as well.

Why Dr. Luccioni didn’t sign the open letter calling for a pause on AI

Mark: We have 25,000 OpenTexters listening today. It’s a community that’s very eager to learn. One of our values is “raise the bar and own the outcome.” So, Sasha, I would love to hear your thoughts on AI and anything you want to leave OpenTexters with today.

Sasha: I think you all are doing great work, and I truly believe in the potential of AI. Something I like talking about is the fact that responsible AI is not a completely separate field of study. I’m not an ethical AI or responsible AI practitioner. Everyone is a responsible AI practitioner. Can you imagine if there were “cars” and “safe cars”? All cars are supposed to be safe! So, all AI is supposed to be responsible.

I invite everyone to think about that and how you can consider these ethical impacts. Even from a technical perspective, you can do something like adding an extra layer of testing to your model before deploying it, to make sure that it represents different people, different languages and different communities in a way that’s more or less equitable. Just these small additions or tweaks to your everyday practice can make a really big difference and can make systems more robust, and therefore more ethical.
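To make that extra testing layer concrete, here is a minimal sketch of a pre-deployment check that compares a model’s accuracy across subgroups (for example, languages or communities) and blocks release when the gap is too wide. The group labels, threshold and helper names are illustrative assumptions, not a prescribed method, and a real fairness audit would look at much more than one metric.

```python
# Minimal sketch of a pre-deployment equity check; all names and thresholds are placeholders.
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """examples: iterable of (text, true_label, group) tuples; predict: text -> label."""
    hits, totals = defaultdict(int), defaultdict(int)
    for text, label, group in examples:
        totals[group] += 1
        hits[group] += int(predict(text) == label)
    return {g: hits[g] / totals[g] for g in totals}

def equity_gate(examples, predict, max_gap=0.05):
    """Return True only if the best- and worst-served groups differ by at most max_gap."""
    scores = accuracy_by_group(examples, predict)
    gap = max(scores.values()) - min(scores.values())
    for group, acc in sorted(scores.items()):
        print(f"{group:>15}: accuracy {acc:.3f}")
    print(f"largest gap across groups: {gap:.3f} (allowed: {max_gap})")
    return gap <= max_gap

# Usage sketch: tagged_test_set holds held-out examples labelled by language or community.
# if not equity_gate(tagged_test_set, my_model.predict):
#     hold the deployment and investigate the groups the model underserves
```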

* * *

As Dr. Luccioni observed, right now it’s drizzling language models. But I believe we’ll soon see a downpour—and they will be lightweight, low-cost, openly available and specialized. Companies in every industry will be able to apply AI to solve their most complex problems.

In the midst of this shift, we need to keep innovating, but hold ourselves accountable and raise the bar for value-based design. We need to orchestrate AI, but ask vital questions about its impacts on our planet and society, and own the outcomes. The right technology, deployed effectively and ethically, can spur phenomenal growth, and change the world for the better.

To learn more about OpenText’s strategy on AI, read my position paper, opentext.ai.
