The inevitable AI takeover: How c-suite roles and AI governance models are evolving as healthcare becomes AI-driven

Kay Firth-Butterfield, BA(hons), LLM, MA

Head of AI and a Member of the Executive Committee at the World Economic Forum

6 July 2022 | 10 min

Quick Takes

  • As the use of artificial intelligence (AI) grows, organizations are reassessing the roles and responsibilities of their board members and c-suite executives

  • New compliance, governance, and executive roles are being created to adapt and fully implement AI as a core business function

  • It is only a matter of time before AI is regulated at the government level, so organizations must act now to strengthen compliance, governance, and risk mitigation at all levels of the organization

With the increasing development and adoption of artificial intelligence (AI) solutions in healthcare comes as much risk as reward. These new risks are forcing changes to roles and responsibilities across all levels of an organization, including the c-suite and the board.

Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum (WEF), discusses how these roles are evolving alongside AI governance models to help ensure the ethical use of AI as a core function of the business. 

Roles and responsibilities related to the ethical and unbiased use of AI

HT: As roles and risk responsibilities evolve throughout organizations to accommodate advancing technology, who is responsible for the ethical and unbiased use of AI-based tools in healthcare organizations?

Kay Firth-Butterfield: There is an emerging role: the Chief AI Ethics Officer (CAIEO), or Chief AI Responsible Use Officer – whatever title you want to give them. Currently, responsibility for AI tends to sit with the Chief AI Officer (CAIO) or, more commonly, the Chief Technology Officer (CTO) or the Chief Innovation Officer (CIO).

When you appoint a CAIO to recreate your business using AI insights, you should also employ someone at the same level to look at the responsible use of AI. It is one thing to want all these great insights from AI to drive revenue – choosing the right way to sell to customers, for example – but it becomes a major problem when you don’t think about using it responsibly.

You need your board to understand its oversight responsibilities, and then, on the next layer down, you need the whole c-suite to understand how you are using AI. You need to employ someone at the c-suite level who can go to the CEO or to the board and say, “There is something we need to do differently here.”

That helps enormously to mitigate the risk of scandal when AI is used incorrectly, even unintentionally. That person can then look at your whole AI production line – how you are using it vertically and horizontally across the organization – to ensure that you won’t offend your customers. This avoids costly errors.

“Responsible” and “trustworthy” are the key words when hiring for the AI c-suite role

HT: If you were hiring someone for this c-suite role, is it more important that they have an ethics background?

Kay Firth-Butterfield: I prefer “responsible” or “trustworthy” to “ethics” because it avoids the conversation about whose ethics you are going to apply – and there are different ethical regimes around the world.

If you instead talk about responsible or trustworthy AI, the questions become: Is it going to do harm to society? Is it going to benefit my company? Is it going to benefit my patients, or are there problems that will lead to the algorithm being unfair to certain classes of people? These questions apply regardless of where the algorithm is going to be deployed.

So, it’s about responsible development rather than ethics, and part of responsible development is transparency.

  • Have you thought about privacy and the laws around privacy? 
  • Have you created an algorithm that benefits people? 
  • Have you created a human-centered algorithm? 

It’s those things that you want to be looking at, which we’ve called ‘ethics’ in the past. 

It’s partly my fault because when I was first appointed to do this, I called myself Chief AI Ethics Officer. I think we were talking about ethics in those days, but it’s not the right nomenclature anymore because risk and compliance are also involved and will become even more so. 

So, this sits partly with the CTO, partly with Corporate Social Responsibility (CSR), partly with Environmental, Social, and Governance (ESG), and partly with the general counsel. That is why you need a Chief Responsible AI Officer: somebody who can interact with all of these other people who are looking at the risk of using AI or algorithms.

Input from all levels of an organization is needed for responsible AI implementation  

HT: There seems to be a pressing need for governance around AI-based solutions and their use cases. Who needs to be in the room for these discussions?

Kay Firth-Butterfield: Firstly, the board needs to understand that you are using AI and the ways in which you might be using it. They need to understand that this is about total business transformation – otherwise, they can’t exercise their governance and oversight functions.

Unfortunately, at the moment, many boards are not particularly technically literate. That means changing your board so that you have someone who is technical, or who at least understands enough about responsible AI to lead this piece of work. The WEF has created a board toolkit to help members learn about their responsibilities where AI is concerned.1

Secondly, the c-suite needs to be involved in discussions around how the company will utilize AI and how they will ensure its responsible use. Recently, the WEF published a c-suite toolkit, which sets out everyone’s roles and responsibilities for companies that wish to be AI-driven.1 

Lastly, you need the whole organization on board to enable the transformation. Organizations that do this well have good training and education for all their employees about AI. This helps to sustain the company going forward. Employees are educated on how AI is being used in the company, what it means for the company, and what it means for employees – especially in terms of their jobs, as this may be an area of concern for them.

The emergence of AI governance 

HT: What best practices for AI governance models have you seen specifically in healthcare, and do you see this becoming governmentally regulated?

Kay Firth-Butterfield: The Food and Drug Administration (FDA) has already started approving AI-based software as a medical device. We will also see the AI Act in Europe, a proposed European law that may become the first law on AI developed by a major regulatory body.2 There is also work going on in India regarding AI in its healthcare system.

In our work, we’ve concentrated on how to bring medical ethics together with responsible AI. For example, technologists tend to want to push forward with devices that are totally new to the community. They may need to be reminded to do simple things like keeping the human patient at the forefront and ensuring humans know they are working with machines. To build trust, it is essential that computers don’t pretend to be human. This will be especially true once robots begin to look more and more human-like.

We have tested this approach in a chatbot used in healthcare, and we hope we’ve now created a framework good enough for anybody who creates a chatbot for any patient interface. Essentially, that responsible AI foundation can be used for any patient-facing chatbot.

Currently, this framework is being used in triage cases in Rwanda. Rwanda is important because it is thinking about its healthcare laws and about the use of AI in a holistic fashion to deliver healthcare. There is one doctor for every 27,000 people in Rwanda, so they need to use AI to enhance their healthcare. There will be some governance coming out of Rwanda, although it might be what we call “soft law” as they pilot this approach. 

These are the places where I would begin to look for up-and-coming legislation and regulation, but gradually you will find that any AI that delivers healthcare advice is going to be regulated.

The need for flexibility and creativity in a future world where AI does all the work 

HT: How will we strike a satisfactory balance between the tools and opportunities AI gives us and our human need to work and create?

Kay Firth-Butterfield: There are two schools of thought here. If you look at the WEF’s jobs report, or even the consulting companies’ jobs reports, everyone says that there will be more jobs created because of the AI revolution than there will be lost.3 

Of course, it might be that many jobs are lost before many are gained. We might have an interregnum in which a lot of people are made redundant by algorithms before we start seeing the jobs of the future – which, of course, we can’t yet name.

Twenty years ago, we didn’t know data scientists or AI scientists would be as necessary as they are now. My job didn’t exist five years ago. Chief Responsible AI Officers didn’t exist four years ago. What we do know is that we create new jobs as we create new parts of our industry.

There is another perspective, particularly well put by Stuart Russell: as AI gets to the point where it can do all the work that we do, we will become a race of beings that only enjoy leisure.4 He firmly believes that we should be educating ourselves for a world where we have complete leisure to do what we want.

Artificial general intelligence means that a machine can do everything we can do as humans, so we would no longer need to do the thinking. Once we create robots that can do the jobs we do, we won’t be needed for the manual parts either. Manual dexterity is something robots find quite hard, but we’re already seeing robots in surgery.

It depends on which view you take, but what we are seeing is that jobs are changing rather quickly now. 

Implications of AI governance to healthcare professionals

HT: What are the implications of all this for the roles and responsibilities of healthcare professionals such as nurses, doctors, and carers?

Kay Firth-Butterfield: This is hard because doctors, nurses, and carers are going to be using algorithms rather than creating them. So, what is their duty of care when they are using tools that have been provided to them by their employers? They must fall back on their Hippocratic oath and learn more about algorithms so that they can say something if they see something is not working properly. 

Let’s take radiologists. We know that algorithms are better able to diagnose many cancers from X-rays than radiologists – even specialist cancer radiologists. It is easy for us human beings to think that we can and should rely on the machine because it’s bound to be telling us the truth. However, because of possible bias and discrimination in algorithms, healthcare providers need to be prepared to challenge what the computer is advising, because it won’t always be right.

With carers, we expect to see substantial amounts of care being given by AI and AI-enabled robots, especially to older people. Finding the right balance of how patients will be cared for by robots, how data privacy and consent will be dealt with (especially in cases where the patient has dementia), and how their private information will be used – by the algorithm or to improve the algorithm – will be a challenge. 

To hear more of Kay’s insights around AI, be sure to read AI in healthcare: How to deal with bias, discrimination, and regulatory uncertainty.

Kay Firth-Butterfield, BA(hons), LLM, MA is one of the foremost experts in the world on the governance of AI. She is a barrister, former judge, professor, technologist, and entrepreneur with an abiding interest in how humanity can equitably benefit from new technologies, especially AI. Kay is an Associate Barrister (Doughty Street Chambers) and Master of the Inner Temple, London, and serves on the Lord Chief Justice’s Advisory Panel on AI and Law. She co-founded AI Global, became the world’s first Chief AI Ethics Officer in 2014, and created the #AIEthics Twitter hashtag. Kay is Vice-Chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and was part of the group that met at Asilomar to create the Asilomar AI Principles. She serves on the Polaris Council for the Government Accountability Office (USA), the Advisory Board of the UNESCO International Research Centre on AI, and AI4All. She has been consistently recognized as a leading woman in AI since 2018 and was featured in the New York Times as one of 10 Women Changing the Landscape of Leadership.

References

  1. World Economic Forum. (2021). Report available from https://www.weforum.org/projects/ai-board-leadership-toolkit [Accessed June 2022]
  2. The Artificial Intelligence Act. Article available from https://artificialintelligenceact.eu/ [Accessed June 2022]
  3. World Economic Forum. (2020). Report available from https://www.weforum.org/reports/the-future-of-jobs-report-2020 [Accessed June 2022]
  4. Stuart Russell. (2022). Lecture available from https://www.bbc.co.uk/programmes/m0012fnc [Accessed June 2022]