AI in healthcare: How to deal with bias, discrimination, and regulatory uncertainty 

Kay Firth-Butterfield, BA(hons), LLM, MA

Head of AI and a Member of the Executive Committee at the World Economic Forum

8 June 2022 | 12 min

Quick Takes

  • More must be done to eliminate AI bias and discrimination so that healthcare companies can truly advance and embrace AI as a core function

  • The business impacts of AI bias can include lost revenue, lost patients, missed hiring opportunities, and even the violation of human rights

  • AI bias can begin early in an algorithm’s development and can have serious consequences if the algorithm is flawed

“All companies will be AI companies in the future”, says Kay Firth-Butterfield, the head of artificial intelligence (AI) and machine learning at the World Economic Forum. To ensure the future is fair, with equal access to quality care for everyone, healthcare leaders face two significant challenges: AI bias and discrimination.

In this interview, Kay shares key insights and considerations about how to safeguard your healthcare organization and patients against the risks of AI, and how to prepare for an uncertain, yet more regulated environment. 

AI bias and AI discrimination in healthcare

HT: How would you define AI bias versus AI discrimination?

Kay Firth-Butterfield: Bias is a form of discrimination, and discrimination is what happens when you have bias. As an example, you might be a member of a protected class, but that protection only becomes visible when something happens to you, such as being denied a job because you are Black. That is discrimination; you’ve been discriminated against. Ultimately, the discrimination is the result of the bias in the AI algorithm.

HT: A recent survey suggested that over 36% of companies, not just in the healthcare industry, experienced challenges or direct business impact due to an occurrence of AI bias.1 How does bias work its way into an algorithm? 

Kay Firth-Butterfield: Bias comes into algorithms in several ways. The first is from the developers of the algorithm. For example, if you’re building an algorithm that is supposed to reflect a broad world view, yet your developers are all young men in their twenties, then when they’re thinking about what data to include and how to create the algorithm, they aren’t bringing that broader world view to it.

The other way that bias gets into algorithms is when you use the wrong data. Perhaps you are not collecting enough data on a given type or class of person, or your industry may not have a good enough sample of data to support a particular drug-discovery application. For example, it might be that you can’t test your algorithm against the needs of somebody who is in South Africa, because you simply haven’t got the data.

That leads to two types of bias. One, you might not create a drug for a class of people that you don’t have sufficient data about. Two, you might test on data that is not going to be helpful for that class of person.

Another way that discrimination comes from data is that all data is historic; even what I said at the start of this interview is now historic. If there is bias in the data, it is easy for that bias to flow into the algorithm you’re creating, because an algorithm is built by feeding it data about the topic and letting it make sense of that data.
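To make that point about data concrete, here is a minimal, hypothetical sketch of the kind of check a team could run before training a model: counting how many examples each subgroup contributes to the training set. The records, the "ethnicity" field, and the 10% flagging threshold are illustrative assumptions, not something described in the interview.

```python
from collections import Counter

def subgroup_representation(records, group_key):
    """Return (count, share of dataset) for each value of `group_key`."""
    counts = Counter(r.get(group_key, "unknown") for r in records)
    total = sum(counts.values())
    return {group: (n, n / total) for group, n in counts.items()}

# Hypothetical training data for a treatment-response model.
training_data = [
    {"ethnicity": "White", "age": 54, "responded": True},
    {"ethnicity": "White", "age": 61, "responded": False},
    {"ethnicity": "Black", "age": 47, "responded": True},
    # ... thousands more records in practice
]

for group, (n, share) in subgroup_representation(training_data, "ethnicity").items():
    flag = "  <-- possibly underrepresented" if share < 0.10 else ""
    print(f"{group}: {n} records ({share:.1%}){flag}")
```

A check like this does not remove bias on its own, but it surfaces the gaps in the data before the algorithm is built rather than after it has been deployed.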

The business impacts of AI bias


HT: What are some direct business impacts of AI bias in healthcare and other industries? Could you please give us some examples?

Kay Firth-Butterfield: In healthcare, the one thing we don’t want to do is provide the wrong treatment. If you are treating a Black person with a therapy, or in a way, that has only been tested on data from White people, for example, then you are likely to be giving the wrong treatment.

Another way that we see direct business impact is through lost customers and sales. For example, if your algorithm doesn’t have sufficient data to be able to serve certain customers or patients, then those people simply won’t come to your organization. A related impact is reputational: if your customers feel your algorithm discriminates against them, they vote with their feet and simply don’t buy from you.

Using AI in hiring is another area of business impact. For example, if your algorithm is set up in the wrong way, then you are going to lose the opportunity to hire the people who will make your business succeed. An example of this is the banking company that trained its hiring algorithm on people who had previously succeeded at the bank, only to find that the only candidates the algorithm recommended were White males who had attended an Ivy League university.2

I recently interviewed Commissioner Sonderling, one of the commissioners of the Equal Employment Opportunity Commission (EEOC) in the US. He gave some great examples of how classes protected under the Civil Rights Act can be adversely affected by AI in hiring and employment. We discussed existing laws that could be applied to machine-learning bias in hiring and agreed that the Civil Rights Act already protects certain classes of persons from discrimination in employers’ hiring decisions. What we are now seeing is that law being tested to see whether it applies equally to machines used in hiring.

It has even been shown that if you employ an algorithm to predict the likelihood of someone recommitting a crime after being booked into jail, the data in the United States is so biased against people of color that even if a Black person was originally charged with a lesser offense than a White person, the algorithm will suggest that the Black person is more likely to reoffend.3

Another example comes from the Algorithmic Justice League, an organization that aims to raise public awareness about the impacts of AI, which has shown how facial-analysis algorithms have historically failed to identify Black women correctly, misclassifying them, for example, as Black men.4

Women and diversity’s role in diminishing AI bias 


HT: Why are diversity and the inclusion of minorities so important in AI development?

Kay Firth-Butterfield: There are a few things. Firstly, when you’re developing an algorithm, you need the developers to look like the cohort of people you are developing the algorithm for. Women are, in fact, not a minority in the population, and yet in AI we are.

Only 22% of AI professionals are women,5 which inevitably causes bias and a problem with the way that algorithms are going to be created. There is also a paucity of Black AI scientists. The majority of people working in AI and creating algorithms are White males or males from the Indian subcontinent.

Algorithms created for North America and Europe also find their way into Africa, Latin America, the Middle East, and Southeast Asia. The issue is that they’re not being created by the people who will be using them, which causes a problem because the algorithms are not necessarily right for the cohort of people they are applied to. It is critically important to make sure that you get that diversity in there.

One of the ways that we’ve seen this being corrected is by bringing those diverse voices into the room when you start to think about the algorithms that you’re going to use. So, you might have a room full of young White and Indian scientists, but you can also bring in others from your company who look like the population that you want to serve with that algorithm.

For instance, because women AI scientists are underrepresented, we’ve got to bring women in another way, such as through your social scientists, which is great because they often bring a societal perspective that aids the development of the algorithm anyway.

Similarly, we need to bring in people from the specific countries that the algorithm is going to serve, because we know, particularly in healthcare, that different diseases beset different people in different ways around the world depending upon their race and genetics.

Evaluating AI algorithms

HT: How can organizations assess the fairness of their AI algorithms? What criteria can they use to evaluate them?

Kay Firth-Butterfield: There are several tools being developed that audit or certify your algorithm. The EU is about to pass its AI Act, which is based on risk and sector. You might be in the healthcare sector, where the use case for your algorithm is high-risk. If that’s the case, then you are going to be required by law either to do an audit of that algorithm or to have it certified.

To do this, you either employ someone to do it, or you apply what we talk about a lot: responsible AI, which is about transparency and explainability.

  • Can you explain the data that you put into the algorithm? 
  • Can you explain the way that the algorithm was created? 
  • Can you explain who created it, and can you explain how the decision was made? 

For many algorithms you can’t, because there are so many layers that you can’t disentangle the black box, but you can at least make sure that you’ve used the right data and put the right people together to create the algorithm. There are also a lot of small companies starting up that offer this kind of algorithm audit.
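As one illustration of what an internal audit of this kind could examine, the sketch below computes the positive-decision rate an algorithm produces for each group and the gap between groups (a simple demographic-parity style check). The audit log, group labels, and the idea of recording decisions this way are assumptions made for illustration; a real audit would use far more data and more than one fairness metric.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Positive-decision rate per group from (group, approved) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical log of decisions made by a triage or eligibility algorithm,
# recorded alongside a patient attribute purely for auditing purposes.
audit_log = [
    ("White", True), ("White", True), ("White", False),
    ("Black", True), ("Black", False), ("Black", False),
]

rates = selection_rates(audit_log)
gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"Largest gap in positive-decision rate: {gap:.1%}")
```

A large gap does not by itself prove discrimination, but it is exactly the kind of signal an auditor or certifier would expect an organization to have measured, explained, and acted on.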

The present and future of AI evaluation and regulation

HT: Are there any regulatory agencies within each country that approve or review the work behind these audits and certifications?

Kay Firth-Butterfield: There will be the European AI Act, predicted to come into effect in 2024, but other than that there is little to no regulation as of today.6 There is some legislation coming up in the US around the use of algorithms in hiring, because hiring has a huge impact on someone’s life. Similarly, if you get the drug wrong, or apply the learnings from the wrong set of people to a different set of people, that has an enormous impact on someone’s life. So, you are seeing some legal cases being brought on this topic at the state level in America.

As mentioned before, the US Equal Employment Opportunity Commission (EEOC) is looking at algorithms through the lens of the Civil Rights Act. That law exists to prevent discrimination against people by people, and I think we will soon see cases brought by the EEOC against algorithms that discriminate against protected classes. What will be interesting is who the court finds liable: the creators of the algorithm, the users, or both. So, there is no AI-specific regulation around the world today apart from the existing human rights legislation in many countries.

HT: Do you see AI regulations changing? 

Kay Firth-Butterfield: I think what will happen is that the EU will pass its laws, and then, much as with the General Data Protection Regulation (GDPR), they will trickle out to other countries, which will adopt them, or amend and adopt them, but at least somebody has done the work. As a result, we will see more regulation coming. In America we are seeing a little bit of case law, but we haven’t yet seen the big fines; a body like the EEOC can ask for huge fines. Once we see that happening, it will give us a baseline for how people and companies are going to create and use algorithms.

Responsible AI and business transformation 

HT: How should companies start thinking about responsible AI and business transformation?

Kay Firth-Butterfield: This is so important, not only because there may be regulation, but also because of the business and human impact. Healthcare companies often use AI to interface with their patients and to impact their own bottom line.

The big tech companies have already encountered these problems and have set up a whole body of work that others can draw from, so you don’t have to start from scratch anymore. That can look like an ethics advisory panel, which was the first thing that DeepMind set up, for example, in 2014.7 

It can look like a large number of people whom you employ to think about whether the algorithms you’re creating are built properly, and to define best practices around that. Google, for example, has a team of about 350 people working on this.

It could also mean bringing those diverse communities into the creation of algorithms. And it does mean that the company should consider itself liable for the algorithm from the moment it starts designing it through to the end of production, and sometimes beyond the point of sale too.

The companies that do best with AI consider themselves to be AI companies and have made organizational changes as a result. Take pharmaceutical companies, for example, that are looking to use AI for drug discovery as part of their core business and, increasingly, in the back office: gradually all of their systems are going to be run by AI. So you become an AI company first, even though your core products are pharmaceuticals.

Let’s go back to AI and hiring. If you’re a company producing a hiring algorithm and you sell it to another company whose workforce then becomes biased because you got it wrong, you could be liable for the product that you created. The same goes for AI and autonomous cars: I don’t think we’ve yet seen people taken to court for autonomous vehicle crashes in the way that we might in the future.

Key insights and takeaways

HT: What key takeaways or insights would you give to healthcare leaders preparing for this expected increase in regulation for AI-based tools, offerings, and products?

Kay Firth-Butterfield: There are five key takeaways for healthcare leaders when it comes to preparing for the expected increase in regulation for AI:

1. Start instituting a risk-based, responsible AI regime across the company. Companies that do this now will be well equipped to deal with any regulation that comes their way.
2. Conduct a holistic, company-wide restructuring and rethink how your business works. Look at all levels of the company, including the board, C-suite, and employees, to ensure the fair and responsible creation and use of AI algorithms.
3. Make AI human-centered. If you’re looking at patient care or healthcare, I encourage you to read the work that we did on chatbots*, because as you think about how medical ethics and your industry’s ethics fit with the new responsible AI criteria, you’re going to find many commonalities. Your work is human-centered, and responsible AI means that your algorithms are human-centered.
4. Take into consideration the privacy and insurance legislation and regulations of the country or state in which you are operating. Your work is matching your patients’ need for privacy with their need to receive good healthcare. In America, for example, there is the Health Insurance Portability and Accountability Act (HIPAA).
5. Be transparent in the way that you build your algorithms (see the sketch below). Build that experience of transparency and talk to your customers so that they understand you and understand the care they’re going to get from your industry. There are a lot of synergies between what we talk about in responsible AI and what the healthcare industry is already doing.

*The WEF’s research on the responsible use of chatbots and conversational AI in healthcare
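One lightweight way to support the transparency questions raised earlier (what data went in, who built the algorithm, how decisions are made) is to keep a short, structured fact sheet alongside each algorithm. The sketch below is hypothetical: the field names and values are illustrative assumptions, not a standard and not something prescribed in the interview.

```python
import json

# A minimal, hypothetical "algorithm fact sheet" recording the transparency
# answers discussed above so they can be shared with auditors, regulators,
# and patients. All fields and values here are illustrative only.
fact_sheet = {
    "name": "triage-risk-model",
    "intended_use": "Prioritise follow-up calls for discharged patients",
    "training_data": {
        "sources": ["2019-2021 discharge records (de-identified)"],
        "known_gaps": ["few records for patients under 18"],
    },
    "development_team": ["clinical lead", "data scientist", "patient advocate"],
    "decision_logic": "Gradient-boosted classifier; top features reviewed quarterly",
    "fairness_checks": ["selection-rate gap by ethnicity and sex, reviewed monthly"],
    "point_of_contact": "responsible-ai@example.org",
}

print(json.dumps(fact_sheet, indent=2))
```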

Kay Firth-Butterfield, BA(hons), LLM, MA is one of the foremost experts in the world on the governance of AI. She is a barrister, former Judge, Professor, technologist, and entrepreneur who has an abiding interest in how humanity can equitably benefit from new technologies, especially AI. Kay is an Associate Barrister (Doughty Street Chambers), Master of the Inner Temple, London and serves on the Lord Chief Justice’s Advisory Panel on AI and Law. She co-founded AI Global and was the world's first Chief AI Ethics officer in 2014 and created the AIEthics Twitter hashtag. Kay is Vice-Chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems and was part of the group which met at Asilomar to create the Asilomar AI Ethical Principles. She is on the Polaris Council for the Government Accountability Office (USA), the Advisory Board for UNESCO International Research Centre on AI and AI4All. She has also been consistently recognized as a leading woman in AI since 2018 and was featured in the New York Times as one of 10 Women Changing the Landscape of Leadership.

References

  1. DataRobot. (2022). Article available from https://www.datarobot.com/newsroom/press/datarobots-state-of-ai-bias-report-reveals-81-of-technology-leaders-want-government-regulation-of-ai-bias/ [Accessed June 2022]
  2. Eubanks B. (2018). Artificial Intelligence for HR: Use AI to Support and Develop a Successful Workforce. London: Kogan Page, Ltd.
  3. Angwin J. et al. (2016). Article available from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [Accessed May 2022]
  4. Buolamwini J. (2018). Video available from http://gendershades.org/ [Accessed May 2022]
  5. World Economic Forum. (2018). Report available from https://reports.weforum.org/global-gender-gap-report-2018/assessing-gender-gaps-in-artificial-intelligence/?doing_wp_cron=1651137053.5639100074768066406250 [Accessed May 2022]
  6. Business of Data. (2022). Article available from https://business-of-data.com/podcasts/frans-van-bruggen-preparing-for-eu-ai-act-2024/#:~:text=That%20means%20data%20to%20train,come%20into%20effect%20in%202024 [Accessed May 2022]
  7. Shead S. (2019). Article available from https://www.forbes.com/sites/samshead/2019/03/27/google-announced-an-ai-council-but-the-mysterious-ai-ethics-board-remains-a-secret/?sh=41723563614a [Accessed May 2022]