“What is ‘ethical AI’ and how can companies achieve it?” by Dennis Hirsch and Piers Norris Turner

May 27, 2023

The rush to deploy powerful new generative AI technologies, such as ChatGPT, has raised alarms about potential harm and misuse. The law’s glacial response to such threats has prompted demands that the companies developing these technologies implement AI “ethically.”

But what, exactly, does that mean?

The straightforward answer would be to align a business’s operations with one or more of the dozens of sets of AI ethics principles that governments, multistakeholder groups and academics have produced. But that is easier said than done.

We and our colleagues spent two years interviewing and surveying AI ethics professionals across a range of sectors to try to understand how they sought to achieve ethical AI – and what they might be missing. We learned that pursuing AI ethics on the ground is less about mapping ethical principles onto corporate actions than it is about implementing management structures and processes that enable an organization to spot and mitigate threats.

This is likely to be disappointing news for organizations looking for unambiguous guidance that avoids gray areas, and for consumers hoping for clear and protective standards. But it points to a better understanding of how companies can pursue ethical AI.

Grappling with ethical uncertainties

Our study, which is the basis for a forthcoming book, centered on those responsible for managing AI ethics issues at major companies that use AI. From late 2017 to early 2019, we interviewed 23 such managers. Their titles ranged from privacy officer and privacy counsel to one that was new at the time but increasingly common today: data ethics officer. Our conversations with these AI ethics managers produced four main takeaways.

First, along with its many benefits, business use of AI poses substantial risks, and the companies know it. AI ethics managers expressed concerns about privacy, manipulation, bias, opacity, inequality and labor displacement. In one well-known example, Amazon developed an AI tool to sort résumés and trained it to find candidates similar to those it had hired in the past. Male dominance in the tech industry meant that most of Amazon’s employees were men. The tool accordingly learned to reject female candidates. Unable to fix the problem, Amazon ultimately had to scrap the project.
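
To make that failure mode concrete, here is a minimal, purely illustrative sketch (all data, labels and feature names are invented; this is not Amazon's actual system). A naive screener that scores résumé features by their historical hire rates will learn a negative weight for any feature that correlates with candidates a skewed history passed over:

```python
# Toy "résumé screener" trained on hypothetical historical hiring labels
# (1 = hired, 0 = rejected). Because this invented history skews toward
# one group, a feature correlated with the other group ("womens_chess_club")
# ends up negatively weighted -- the failure mode reported in the Amazon case.
from collections import defaultdict

history = [
    ({"golf_team", "cs_degree"}, 1),
    ({"cs_degree"}, 1),
    ({"golf_team"}, 1),
    ({"womens_chess_club", "cs_degree"}, 0),
    ({"womens_chess_club"}, 0),
]

# Naive per-feature weight: hire rate among résumés with the feature,
# minus the overall hire rate.
overall = sum(label for _, label in history) / len(history)
totals, hires = defaultdict(int), defaultdict(int)
for features, label in history:
    for feature in features:
        totals[feature] += 1
        hires[feature] += label

weights = {f: hires[f] / totals[f] - overall for f in totals}
print(weights)  # "womens_chess_club" comes out negative purely from skewed history
```

Note that no rule in this sketch mentions gender; the bias enters entirely through the training labels, which is why such problems are so hard to fix after the fact.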

Generative AI raises additional worries about misinformation and hate speech at large scale and misappropriation of intellectual property.

Second, companies that pursue ethical AI do so largely for strategic reasons. They want to sustain trust among customers, business partners and employees. And they want to preempt, or prepare for, emerging regulations. The Facebook-Cambridge Analytica scandal, in which Cambridge Analytica used Facebook user data, shared without consent, to infer the users’ psychological types and target them with manipulative political ads, showed that the unethical use of advanced analytics can eviscerate a company’s reputation or even, as in the case of Cambridge Analytica itself, bring it down. The companies we spoke to wanted instead to be viewed as responsible stewards of people’s data.

The challenge that AI ethics managers faced was figuring out how best to achieve “ethical AI.” They looked first to AI ethics principles, particularly those rooted in bioethics or human rights, but found them insufficient. The problem was not just that there are many competing sets of principles; it was that justice, fairness, beneficence, autonomy and other such principles are contested, open to interpretation, and can conflict with one another.

This led to our third takeaway: Managers needed more than high-level AI principles to decide what to do in specific situations. One AI ethics manager described trying to translate human rights principles into a set of questions that developers could ask themselves to produce more ethical AI software systems. “We stopped after 34 pages of questions,” the manager said.

Fourth, professionals grappling with ethical uncertainties turned to organizational structures and procedures to arrive at judgments about what to do. Some of these were clearly inadequate. But others, while still largely in development, were more helpful, such as:

  • Hiring an AI ethics officer to build and oversee the program.
  • Establishing an internal AI ethics committee to weigh and decide hard issues.
  • Crafting data ethics checklists and requiring front-line data scientists to fill them out.
  • Reaching out to academics, former regulators and advocates for alternative perspectives.
  • Conducting algorithmic impact assessments of the type already in use in environmental and privacy governance.
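
As one concrete example of a check such an impact assessment might include, the following sketch computes the “four-fifths rule” comparison of selection rates commonly used to screen hiring tools for disparate impact (all numbers are hypothetical):

```python
# Minimal disparate-impact check of the kind an algorithmic impact
# assessment might run on a hiring model's recommendations.
# A ratio below 0.8 (the "four-fifths rule") is a common red flag
# for human review -- a screening heuristic, not a legal verdict.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group the model recommends."""
    return selected / total

# Hypothetical audit counts.
rate_men = selection_rate(selected=120, total=400)   # 0.30
rate_women = selection_rate(selected=60, total=300)  # 0.20

ratio = rate_women / rate_men
print(f"Disparate impact ratio: {ratio:.2f}")        # 0.67
if ratio < 0.8:
    print("Below the four-fifths threshold; flag for human review.")
```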

Ethics as responsible decision-making

The key idea that emerged from our study is this: Companies seeking to use AI ethically should not expect to discover a simple set of principles that delivers correct answers from an all-knowing, God’s-eye perspective. Instead, they should focus on the very human task of trying to make responsible decisions in a world of finite understanding and changing circumstances, even if some decisions end up being imperfect.

In the absence of explicit legal requirements, companies, like individuals, can only do their best to make themselves aware of how AI affects people and the environment and to stay abreast of public concerns and the latest research and expert ideas. They can also seek input from a large and diverse set of stakeholders and seriously engage with high-level ethical principles.

This simple idea changes the conversation in important ways. It encourages AI ethics professionals to focus their energies less on identifying and applying AI principles – though they remain part of the story – and more on adopting decision-making structures and processes to ensure that they consider the impacts, viewpoints and public expectations that should inform their business decisions.

[Image: In testimony to a Senate committee in May 2023, OpenAI CEO Sam Altman called for stricter oversight, including licensing requirements, for companies that develop AI software. AP Photo/Patrick Semansky]

Ultimately, we believe laws and regulations will need to provide substantive benchmarks for organizations to aim for. But the structures and processes of responsible decision-making are a place to start and should, over time, help to build the knowledge needed to craft protective and workable substantive legal standards.

Indeed, the emerging law and policy of AI focuses on process. New York City passed a law requiring companies to audit their AI systems for harmful bias before using them to make hiring decisions. Members of Congress have introduced bills that would require businesses to conduct algorithmic impact assessments before using AI for lending, employment, insurance and other such consequential decisions. These laws emphasize processes that address AI’s many threats in advance.

Some of the developers of generative AI have taken a very different approach. Sam Altman, the CEO of OpenAI, initially explained that, in releasing ChatGPT to the public, the company sought to give the chatbot “enough exposure to the real world that you find some of the misuse cases you wouldn’t have thought of so that you can build better tools.” To us, that is not responsible AI. It is treating human beings as guinea pigs in a risky experiment.

Altman’s call at a May 2023 Senate hearing for government regulation of AI shows greater awareness of the problem. But we believe he goes too far in shifting to government the responsibilities that the developers of generative AI must also bear. Maintaining public trust, and avoiding harm to society, will require companies to face up more fully to their responsibilities.

Dennis Hirsch, Professor of Law and Computer Science; Director, Program on Data and Governance; core faculty TDAI, The Ohio State University and Piers Norris Turner, Associate Professor of Philosophy & PPE Coordinator; Director, Center for Ethics and Human Values, The Ohio State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
