Artificial Intelligence: the good, the bad, and the transformative

Vivienne Ming, PhD

Founder and Executive Chair of Socos Labs

15 July 2021 | 14min

Quick Takes

  • AI is a tool that can be used for good or ill, and applying it responsibly requires a collective understanding of what is right and what is wrong

  • Building a powerful and useful AI requires understanding the problem, how to solve it, and how deeply life outcomes are interconnected

  • It’s not about the data, it’s about the human story behind it

Artificial Intelligence (AI) has immense transformative potential, but the ethics surrounding the use of AI are complex. How can we ensure it is being used responsibly and to the best of its ability to solve an intended problem?  

In this second half of our interview, Vivienne Ming shares her insights and experience about building AI that is both ethical, and powerfully impactful. Click here to read the first half.

Unethical AI stems from misunderstanding the problem and/or its consequences 

HT: What role, if any, should governments or regulatory bodies such as the FDA play to ensure the intended, ethical, and/or unbiased use of AI?

Vivienne Ming: AI is a tool, very much like a hammer, that can be used for good or bad. It would be naive to think that all we need, for example, is an ethics council. How can we be sure that an ethics council that can override a CEO is any better informed about the complexities of the problems being addressed, or any more ethical in its use of AI, than the technology's creators?

The core starting point of ethical AI is, “Do you understand the problem you’re trying to solve and the consequences associated with those problems?” So much of what has emerged as deeply problematic, including racism in face recognition systems and health diagnostics, are ultimately because the people involved intended the best but didn’t actually understand the historical implications of the data sets used, or the consequences on communities that they were not a part of. 

It’s naive to think that one wise person will be able to figure it out for us, rather than us learning how to figure it out for ourselves.

On the issue of regulation, this is a serious problem. We need to think about what the root causes of those problems are. One is that the people building AI don't understand the problems they're trying to solve. Another is that a very small number of people, institutions, and organizations control almost all of the AI infrastructure in the world, so their priorities dominate.

These companies, and the two countries – China and the United States – that really dominate this space, are not necessarily villains, but what is in their interest is not necessarily the collective interest, and frequently won't be. This creates problems of its own.

It’s scary to hear individual legislators wanting to pass laws regulating a specific algorithm that they couldn’t possibly understand on a problem they are not experts in. I would not want to see this in pharmaceuticals, in AI, or in global foreign policy. 

There is no place where a legislator can pass a law that goes all the way down to specific algorithms. There still needs to be some collective understanding of what is right and what is wrong.

The potential role of government in the ethical use of AI

Vivienne Ming: Government can play a role in empowering institutions to use AI ethically. Facial recognition is a great example where there’s a serious and historical problem with people misusing algorithms, whether thoughtlessly or not.

One of the issues right now is you have a relatively small number of people deciding where the funding goes. For example, you would have to work for Baidu or Google, or Facebook if you wanted to do great work in facial recognition. It would be amazing if there were international institutions or federal ones that could come in and say, “You know what, we believe that there are good uses of facial recognition, and we’re willing to support those”. 

I can give you two concrete examples in both directions. A startup claimed that its facial recognition can help improve hiring decisions. As an expert in both facial recognition and machine learning for hiring, I can say that is an absolutely irresponsible, nonsense claim. No such science exists today that could possibly support it, and yet companies use this startup's product to help with their hiring. It would have been incredibly useful to have auditors come in and say, "We won't divulge your intellectual property, but you have to prove to us that this works, or we will say otherwise publicly." That's the kind of power an institution like this might be able to bring to bear.

The alternative has also happened in many places, including in San Francisco, where the city council has banned facial analysis algorithms. I did a project with face recognition, which aimed to help autistic children learn how to read facial expressions using Google Glass. Is that now illegal? For public institutions in San Francisco, it might be.

Not only were the autistic kids using our system able to learn how to read facial expressions, but it also increased their theory of mind, which is their ability to understand why other people do what they do. It's a trait many people on the autism spectrum struggle with. What a transformative thing to do for someone's life: you're shown a hidden language you've never seen before, one that helps you understand the world around you.

We need the nuance of people that truly understand the problem space, not a law that says you cannot do this or you must do that.

Only well-designed AI can help transform lives for the better

Vivienne Ming: As an area to apply AI, recruiting tends to be very risk-averse. The worst thing many employers can imagine is hiring a bad person and being stuck with them disrupting the culture and workflow. Even well-designed, well-intentioned AI systems end up being used to support the same risk-averse, biased hiring practices that have always existed.

Previously, I mentioned the company Gild, where I was Chief Scientist. Our value proposition was, “We’ll find all the people that you’re missing, all of those diamonds in the rough, and we’ll do it better with AI”. Our research and algorithms, trained on 122 million people, revealed that the biggest predictive signals of success on the job are rarely the traditional ones. Yet, most recruiters using our technology still selected candidates based on traditional predictive signals such as name, school, and previous job.

The best story of our system being used right was this young man who'd grown up in LA. He never worked a corporate job or went to university — no one in his family had ever gone to university. He taught himself how to program by making a website for t-shirts he was designing, and soon was sending resumes up to Silicon Valley. No one was even reading his resume. Why would they? He had no school, no last job, and a last name, Dominguez, that didn't fit recruiters' stereotypes of a software engineer. To recruiters, it wasn't even worth their time to send a "no" back to him.

When we ran our own algorithm to hire an engineer for ourselves, the system identified him as the second-best Ruby programmer in Los Angeles. How did we know since we didn’t have all these traditional measures? We knew because our algorithms looked into all the digital clues he’d left, like his comments on coding message boards. He and I ended up on the cover of The New York Times because we were the example of what it means to look beyond those traditional signals.1

Powerful AI begins with a human understanding that factors leading to happiness, health, and life success are all interconnected

In my research, my collaborators and I have developed a concept of meta-learning, learning how to learn. It is a rich collection of cognitive abilities, social skills, emotional intelligence, creativity-related skills, and metacognition, which is essentially the ability to monitor yourself.

Today, we track about 50 different factors associated with meta-learning, and what’s fascinating about them is that we not only use them to look at work outcomes but also in our education work with young children and in our health work.

These constructs not only predict who works well on the job, but also who will have better life outcomes – these people are likely to end up with a lower BMI, better insulin sensitivity later in life, a higher walking speed at age 65, stronger social networks, better subjective well-being, and higher levels of happiness and wealth. These outcomes are all more strongly related to these constructs than traditional things you find on a resume.

Success on the job, success in life, and our health are strongly interrelated because we are complex and interconnected. Using the national UK health data set, a research group looked at life skills and how they related to wealth, wellbeing, and other outcomes, and found that five seemingly very soft constructs were hugely predictive of long-term life outcomes: conscientiousness, emotional stability, determination, control, and optimism.2,3

Another study by the same group looked at thousands of people and found that just one idea, life meaningfulness, predicted variations in mortality and morbidity, social network size, happiness, wealth, income, and education.4

So why aren’t qualities like these a standard part of hiring? We know these predictive factors play such a big role in people’s lives. Why aren’t they looked at as a part of health as well?

I’ve done a lot of work using sounds, for instance, building an AI to diagnose pneumonia based on listening to coughs. Why not take these other factors into account as well and roll them into a much more holistic look at who a person is, and what it means to have a good job and good health? 
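The cough-analysis idea can be pictured as a standard audio-classification pipeline. The sketch below is purely illustrative and not her actual diagnostic system: it fabricates synthetic "cough" clips, summarizes each one as log-energy in a handful of frequency bands, and fits an off-the-shelf classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def spectral_features(clip, n_bands=8):
    """Summarize a 1-D audio clip as average log-energy in a few frequency bands."""
    spectrum = np.abs(np.fft.rfft(clip)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log1p([b.mean() for b in bands])

def fake_cough(low_freq_boost, sr=8000):
    """Synthetic stand-in clip: 'pneumonia-like' coughs get extra low-frequency energy."""
    t = np.linspace(0, 1, sr)
    clip = rng.normal(size=t.size)                        # broadband noise
    clip += low_freq_boost * np.sin(2 * np.pi * 150 * t)  # low-frequency component
    return clip

# 40 simulated pneumonia-like clips (label 1) and 40 healthy clips (label 0)
X = np.array([spectral_features(fake_cough(3.0)) for _ in range(40)] +
             [spectral_features(fake_cough(0.0)) for _ in range(40)])
y = np.array([1] * 40 + [0] * 40)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(round(clf.score(X, y), 2))  # training accuracy on the synthetic clips
```

In a real system the clips and labels would come from clinical recordings and the features would be far richer (e.g. mel spectrograms), but the shape of the pipeline is the same.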

These are things that are open to us, but a lot of this work starts with understanding the people and the interconnectedness, rather than, “We have a known pain point, a certain kind of sarcoma, and here’s the large data set of MRIs related to it. Train an algorithm to discriminate the actual sarcomas from the non.” That’s a perfectly valid thing to do, but there’s something so much richer available to us by understanding people before they get to that moment in their lives, from the most traditional medical sense.

Hiring for resilience in healthcare and role-modeling change

HT: You discuss resilience as one of the constructs which help employee success. How do healthcare providers find resilient healthcare workers?

Vivienne Ming: You have four options as a healthcare provider who is hiring, with the first one being the status quo of traditional hiring as it is right now.

Next is gamification. A good example of this is a company called Pymetrics. They've taken a lot of traditional cognitive psychological experiments and turned them into games. If you apply for a job with a company that uses their service, then you play some of these games, which indicate who tends to be a good fit for certain jobs. They are a kind of algorithm; all of these options are, but they have a particular known bias. You're still going to do your due diligence to ensure that the person you're hiring knows how to do the job, such as hiring a nurse with nursing degrees.

The next option is called naturalistic assessment: using the data people naturally generate to understand an individual's potential. I've used this method to assess job potential and to predict education outcomes. This notion made it into my upcoming book, How to Robot-Proof Your Kids, to be released late this year. What we were writing about was a simple idea: "I don't care who you are, I care who you could be, and what it would take to get you there".

So, if we’re staring at nursing shortages, wouldn’t it be amazing to say, “Listen, this person right now lacks the resilience to be a successful nurse in this intense context, but here’s what it will take to get them there”. The next question is “Well, can that person change and develop?” And then you can make a decision of whether you can invest that in them. 

In a lot of our research, one of the biggest drivers of change, particularly of these meta-learning soft skill qualities, is role modeling. So, if role modeling is a powerful tool for developing that missing skill, is there another nurse or a small team they could be paired with to actually role model resilience and change their outcome? We’ve had the chance to do this in education, we’ve had the chance to do this in the workforce, and it works.

Using complementary diversity to deliver successful outcomes

Vivienne Ming: Pairing together five students seems like such an abstract question. I've got 100,000 students joining this giant online course, and I'm going to break them up into teams of five to 10 students. I want to find the groupings that predict success for the individual students.

What we found is that the biggest predictor of success wasn't just putting a bunch of similar kids together; it was what we call complementary diversity: finding people who shared some similarities but who also had distinct differences from one another that they could learn from.

One of the things we specifically focused on was resilience. When you put a couple of resilient people in either a cohort of students or a team of employees for a couple of hours, months, or years, what you get is a whole team where everyone's resilience has gone up. It's an amazing thing to know that you can hire not simply for who someone is, but also for who they can be.
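One way to make complementary diversity concrete is to score candidate teams so that members share common ground on one trait but spread out on the others, then assemble teams greedily. Everything below (the traits, the scoring function, the greedy assignment) is an illustrative assumption, not the actual research method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical student profiles: each column is a trait
# (say, resilience, background, skills, interests).
students = rng.normal(size=(20, 4))

def complementary_diversity(team):
    """Score a team: reward similarity on trait 0 ('shared ground')
    and spread on the remaining traits. Illustrative metric only."""
    shared = -team[:, 0].std()                 # low spread on the shared trait
    diverse = team[:, 1:].std(axis=0).mean()   # high spread on the others
    return shared + diverse

def form_teams(students, team_size=5):
    """Greedy assembly: seed each team with the next unassigned student,
    then repeatedly add the member that most improves the score."""
    unassigned = list(range(len(students)))
    teams = []
    while len(unassigned) >= team_size:
        team = [unassigned.pop(0)]
        while len(team) < team_size:
            best = max(unassigned,
                       key=lambda i: complementary_diversity(students[team + [i]]))
            team.append(best)
            unassigned.remove(best)
        teams.append(team)
    return teams

teams = form_teams(students)
print(len(teams), [len(t) for t in teams])
```

A production system would learn the scoring function from observed outcomes rather than hand-code it, but the structure of the problem (score groups, then search over assignments) is the same.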

How healthcare leaders can use AI to enhance patient outcomes and experience

HT: What would be your top recommendations for healthcare leaders who want to solve a specific challenge with AI that would lead to improved patient outcomes or experience?

Vivienne Ming:

1) Start with the problem

The most notorious project I think I’ve ever worked on was my own son’s diabetes. The doctors had us manually track a wide range of data, which they then eyeballed and said, “Okay, here’s what we’ll do for the next three months.” And I thought, there has to be a better way. 

Long story short, I built the first-ever AI that predicts blood glucose levels about one to three hours into the future. The reason I did this, and the reason I suspect no one had done it before, isn't because I'm some astounding genius, but because of a confluence of one piece of terrible news and then some good news. It just so happened that a mom, who was also a biological computational scientist, was able to look at her son's blood glucose levels recorded throughout a day and realize she could read how his day was going in them.

We’re building a new version now, in fact, that only uses the continuous glucose monitor (and no other data), and can still predict a solid hour into the future and throughout the entire night. Once we get it robust enough, we’ll start working with partners: partner organizations that then go to the FDA and work through the details.
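As a rough illustration of what forecasting glucose from the CGM alone involves, the sketch below fits a linear model to lagged windows of a synthetic five-minute glucose trace and predicts one hour ahead. The data, horizon, and model here are all stand-ins, not the system described above:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)

# Synthetic CGM trace: one reading every 5 minutes (mg/dL) over 24 hours,
# with a slow meal-like oscillation plus sensor noise.
minutes = np.arange(0, 24 * 60, 5)
glucose = 110 + 30 * np.sin(2 * np.pi * minutes / 240) + rng.normal(0, 3, minutes.size)

HISTORY = 12   # use the past hour of readings (12 x 5 min)
HORIZON = 12   # predict one hour ahead

X, y = [], []
for t in range(HISTORY, len(glucose) - HORIZON):
    X.append(glucose[t - HISTORY:t])   # lagged window of recent readings
    y.append(glucose[t + HORIZON])     # reading one hour later
X, y = np.array(X), np.array(y)

# Train on the first 80% of the day, evaluate on the rest
split = int(0.8 * len(X))
model = Ridge().fit(X[:split], y[:split])
mae = np.abs(model.predict(X[split:]) - y[split:]).mean()
print(round(mae, 1))  # mean absolute error in mg/dL on held-out readings
```

Real CGM traces are far messier (meals, insulin, exercise, sensor dropout), which is why seeing the human story behind the numbers matters as much as the regression.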

This has the opportunity to be truly transformative, but at the heart of this story is heart. You had to see a little boy’s life story in those numbers—something more than just the biophysics of insulin release and digestion—before you realized there was actually something to predict. 

2) Find a leader who knows how to solve the problem without AI and build a diverse team.

For anyone putting together teams to work in AI in medicine or any domain, when you’re finding the leader of that team, ask them how they would solve the problem if they didn’t have AI. If they don’t have a good answer to that, AI is not going to help them. It is not a magic wand or a fancy black box that solves problems for you. You have to solve the problem. AI is a tool that can transform that process.

Secondly, that team needs to be diverse. They need to understand all the different aspects of the problem being addressed. You can come up with an amazing solution, but if the patient doesn't take the drug, or can't take it on the regular basis needed, then it doesn't matter what the AI can do; you will still have bad outcomes.

You need to understand these problems much more holistically than is currently done in AI if you truly want to make a difference. That starts with a highly interdisciplinary team that can come up with solutions that aren't constrained by AI but rather accelerated by it, which is very different from the current way of thinking about machine learning.

3) Be transformative yet mindful of the challenges.

What you need to build AI for is the inevitable outcome. In some cases, these are easy, like building it straight into some imaging technology where it’s going to enhance the image and provide some extra guidance. In that case, you’re not asking anybody to change their behaviors or do anything different.

I'm encouraging everyone now, if they're thinking about using AI in hiring, in medicine, in any domain that involves people, to do that aspirational thing. Really think about doing something transformative, but also keep in mind all of the additional work it will take to change people's behaviors to truly support the system.

Will the doctors actually use this system? Will the nurses trust the advice that your AI is providing? Will the patients feel that they have agency when they’re using the system? If the answer to any of those questions is “maybe”, then they won’t change and your whole system will be useless.

Vivienne Ming, PhD is frequently featured for her research and inventions in The Financial Times, The Atlantic, The Guardian, Quartz, and The New York Times. Dr. Ming is a theoretical neuroscientist, entrepreneur, and author. She co-founded Socos Labs, her fifth company, an independent institute exploring the future of human potential. In her free time, Vivienne has invented AI systems to help treat her diabetic son, predict manic episodes in people with bipolar disorder weeks in advance, and reunite orphaned refugees with extended family members.


  1. Richtel. (2013). Article available from: [Accessed July 2021]
  2. Steptoe et al. (2017). Proceedings of the National Academy of Sciences 114, 4354-4359
  3. Golstein. (2018). Article available from [Accessed July 2021]
  4. Steptoe et al. (2019). Proceedings of the National Academy of Sciences 116, 1207-121