Your AI debate questions answered! By Nell Watson and Jim Stolze

8 February 2021 | 7 min

Towards the end of last year, we invited artificial intelligence (AI) experts Nell Watson and Jim Stolze to our first ever live debate. The topic was the potential and perils of AI in healthcare, as summarized in our article, “Artificial intelligence in healthcare: More harm than good? You be the judge”.

The event was highly engaging and we received many interesting questions from the audience – too many, in fact, to address all of them during the live Q&A period.

In this special follow-up article, we went back to both Nell and Jim, who share their answers to your questions here.

Fast forward to real AI applications in healthcare

Q1: How do you see AI in healthcare in five years? 

Jim Stolze: Once AI in healthcare works, we won’t be calling it AI anymore. 

Just think of your car. When you’re using a navigation app like TomTom, Google Maps or Waze, you are not thinking about “the AI” that guides you from A to B, often correcting your own navigation errors. The same will be true for advanced statistical modelling in healthcare systems and equipment. The moment we stop speaking about AI in healthcare, we will be close to achieving the real goal.

Nell Watson: I expect healthcare to be considerably further along in five years. Three P’s come to mind: personalized, participatory and predictive.

Personalized – healthcare will become increasingly oriented towards the specific physiology and preferences of individuals in a way that creates more favorable outcomes and upholds autonomy and dignity in new ways.

Participatory – our various wellness gadgets will enable us to contribute to the co-management of our health in partnership with practitioners.

Predictive – data-driven methods will enable us to get advance warnings of impending issues before they arise. This will enable just-in-time delivery of care, before requirements become critical.

Q2: Do you have an example of how AI has been recently implemented in healthcare that really paves the way for future use? What factors contribute to making this example a success story? 

Nell Watson: Medical imaging has been revolutionized in the past decade. Deep learning technologies, particularly Convolutional Neural Networks (CNNs), have enabled machine vision techniques orders of magnitude more capable and reliable than before.

Better imaging lets practitioners resolve smaller details, so developing issues can be spotted early, when they are most treatable. Image segmentation techniques also enable the highlighting of points of interest to ensure that they receive appropriate levels of scrutiny. We can provide more intelligent differential diagnostics at an early stage, and that translates into more survivable conditions and greatly improved outcomes.
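As a minimal sketch of the segmentation idea (not any specific clinical system; the architecture and names below are illustrative assumptions), a tiny convolutional network can map an image to a per-pixel mask highlighting points of interest:

```python
# A minimal, illustrative segmentation sketch in PyTorch (not a clinical model).
# It maps a grayscale image to a per-pixel probability of "point of interest".
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):  # hypothetical toy architecture
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=1),  # one logit per pixel
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # per-pixel probability mask

scan = torch.rand(1, 1, 64, 64)        # stand-in for a 64x64 scan
mask = TinySegmenter()(scan)           # same spatial size: (1, 1, 64, 64)
print(mask.shape, float(mask.mean()))  # untrained, so the mask is noise
```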

We appear to be at a discontinuity similar to the one deep learning brought, but this time with attention-based models. These are massive, expensive, but very powerful and flexible forms of machine learning that are better able to deal with nuance and interpretation, and are less restricted to narrow domains than deep learning has been. I have no doubt that this next wave will be at least as transformative within healthcare as the last has been.
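For readers curious what “attention” means mechanically, here is a minimal sketch of scaled dot-product attention, the core building block of such models (toy dimensions, NumPy only; real models stack many such layers with learned projections):

```python
# Minimal scaled dot-product attention (the core of attention-based models).
# Toy sizes; real models add learned projections and many stacked layers.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(attention(Q, K, V).shape)  # (4, 8): each token attends to all others
```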

However, we must find ways to make machine learning systems more transparent and explicable, which is difficult given that the powerful neural network approaches of recent years are typically ‘black boxes’ that are very hard to interpret. Such networks are often part-stochastic (random), meaning that the same input doesn’t even necessarily produce the same output. This may be acceptable in the consumer sphere, but not in systems critical to human wellbeing.
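A small toy illustration of that part-stochastic behavior, assuming a network whose dropout layer is left active at inference (as in some Bayesian-style schemes): the identical input yields a different output on every call.

```python
# Toy illustration: with stochastic components active (here, dropout),
# the identical input can yield different outputs on each call.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(8, 1))
model.train()  # keeps dropout active instead of the usual eval() mode

x = torch.ones(1, 4)        # one fixed input
for _ in range(3):
    print(float(model(x)))  # three different answers for the same input
```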

The greatest limiting factor in the adoption of AI within healthcare is not the technology of machine learning per se, but rather the associated technologies and practices that help to make it safe, reliable, trustworthy, transparent, and well-regulated.

Jim Stolze: I think Nell already gave a great example of medical imaging. Let me add to that: “Ada, the chatbot”.

This is a very clever tool that uses machine learning to give people at home quick insights into their healthcare journeys and helps them take care of themselves. It really feels like a one-on-one chat conversation with your personal doctor, like being in a WhatsApp chat, but it is actually an AI that compares your symptoms against an ever-growing database of 10 million users and 20 million symptom assessments.
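To make the underlying idea concrete, here is a hypothetical toy (not how Ada actually works, and the conditions and symptoms are invented): a symptom checker can be viewed as ranking candidate conditions by how closely a user’s reported symptoms match past assessments.

```python
# Hypothetical toy: rank conditions by overlap between reported symptoms
# and a made-up database of past assessments. Not Ada's actual method.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

assessments = {  # illustrative, invented entries
    "common cold": {"cough", "sneezing", "sore throat", "runny nose"},
    "influenza":   {"fever", "cough", "fatigue", "muscle aches"},
    "allergies":   {"sneezing", "itchy eyes", "runny nose"},
}

reported = {"cough", "fever", "fatigue"}
ranked = sorted(assessments,
                key=lambda c: jaccard(reported, assessments[c]), reverse=True)
for condition in ranked:
    print(condition, round(jaccard(reported, assessments[condition]), 2))
```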

What we can learn from other industries

Q3: Do you think the most impactful applications of AI in healthcare will come from the healthcare industry itself, or from a different industry (e.g. tech, retail)? Can you name an example, or explain why you think this industry will probably lead the way?

Nell Watson: Consider protein folding: predicting a protein’s three-dimensional structure has historically required months or years of painstaking laboratory work per protein. Google’s DeepMind claims to have created a machine learning system, AlphaFold 2.0, that is able to resolve such problems in a matter of days, given only the primary structure (the sequence of amino acids in the polypeptide chain).

There are several limitations: the reported precision isn’t perfect, but it’s generally a decent approximation of reality. Results still appear strongly based on existing input data and known references. For example, the system appears to predict that certain exotic proteins fold like common ones, and reportedly only around two-thirds of DeepMind’s protein predictions matched empirical truth.

Regardless, this is one of the hardest and most important problems in computer science, and this appears to be a very significant advance. Right now we have experimentally determined structures for only a tiny percentage of known proteins. Soon, we may have an educated guess about all of them.

However, AlphaFold wasn’t originally engineered for this purpose. It is a derivative of AlphaGo, the machine learning system designed to play Go, one that invented tactics so far outside the realm of human thought that expert players couldn’t recognize them as prudent, until it suddenly won.

This is just one example of how transferable machine learning can be, especially as AI becomes more generalizable. Recently, attention-focused models such as GPT-3 have shown enormous flexibility, handling all sorts of problems from solving high-school math, to holding believable (and funny) conversations, to translating poetry between languages whilst preserving meter and meaning.

Missed what GPT-3 is? Check the recording of the debate to learn more. 

The next quantum leap forward in healthcare could conceivably come from anywhere, and in fact, given the long time to market in that space, it’s very likely that disruptive developments will emerge from the gaming, search, or automotive sectors.

Jim Stolze: If you look at other industries, the patterns become obvious. I think the music industry was the first industry where this really showed.

First there is a consumer need (we love to listen to music). Then, one way or another, a technology comes around the corner that actually helps you fulfill that need (downloading MP3 files from the internet). The incumbents don’t like this new behavior because it’s not aligned with their interests (e.g. their business model) and try to dismiss it or fight it in the courts. Fast forward in time and you see new players (streaming services like Spotify) who actually use the new technology to help their customers fulfill their needs (we still love to listen to music) while coming up with a new business model (subscription).

The same pattern applies to banks, travel agencies, retail and so on. Healthcare seems to be the final frontier. The industry where the consumers’ need is paramount (I want to be in charge of my own health), the technologies are there (internet of things, big data, machine learning) but the incumbents have a hard time adopting these technologies. 

What we need is orchestration. Hospitals, insurance companies, pharmaceutical companies, patients, startups and caregivers need to work together by sharing data and best practices, and by coming up with vertical innovations (it’s a healthcare supply chain after all). Just as it took Apple, back in the day, sitting down with all the record labels, musicians and tech companies to come up with iTunes, we need someone to guide this journey.

For this, I am counting on you (the healthcare transformers) to step up to the plate!

How to outsmart the hazard 

Q4: What could be unintended consequences of the use of AI in healthcare? What is your opinion on who should take responsibility when AI makes errors/takes wrong decisions, like a wrong diagnosis? Are there possible legal consequences of applying AI deterring its implementation?

Jim Stolze: An unintended consequence of AI is that users get lazy. Don’t get me wrong, this is not necessarily a bad thing. If a computer program can sift through thousands of Excel sheets and give you back the records that match your query, I don’t see why you should go through each row in the database yourself. Machines are there to do the boring, repetitive tasks for us.

But if an AI system is handling sensitive data and has not been validated by a third party (e.g. using new data to reproduce the outcomes), this is not the time to lean back and just ‘trust’ the math. New times require new skills. We need critical thinkers, people who can outsmart the system, a hacker mindset to double-check the outcomes.
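As a minimal sketch of what such third-party validation could look like (synthetic data; the model and the 5-point tolerance are illustrative assumptions), an auditor retests the delivered model on data it has never seen and checks whether the claimed performance reproduces:

```python
# Minimal sketch of third-party validation: retest a frozen model on
# fresh, held-out data and compare to the vendor's claimed performance.
# Data is synthetic; the 5-point tolerance is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X_vendor, y_vendor = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)
X_fresh,  y_fresh  = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)

model = LogisticRegression().fit(X_vendor, y_vendor)          # "delivered" model
claimed = accuracy_score(y_vendor, model.predict(X_vendor))   # vendor's own figure
audited = accuracy_score(y_fresh,  model.predict(X_fresh))    # independent retest

print(f"claimed={claimed:.2f} audited={audited:.2f}")
if audited < claimed - 0.05:
    print("Warning: performance does not reproduce on new data.")
```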

And in response to the legal consequences, I think that is already too far in the process. Because -more importantly- if people find these vulnerabilities in the system, or if they point out the biases that are emerging from the data… you don’t fire them. Don’t shoot the messenger. You should reward people who rock the boat, who speak truth to power. 

Nell Watson: We may need to change our training and how we educate ourselves. The grind of a junior physician, analyst, or administrator trains their brains to recognize complex patterns through repetition and reinforcement. If many of those rote tasks end up being performed by machine learning, then it may be harder for human beings to have opportunities to hone their skills.

Another unintended consequence could be algorithmic Taylorism, the practice of using AI-driven metrics to score performance. Such mechanisms may be intrusive and inhumane, ratcheting up stress and the risk of burnout even further. I would caution against allowing Goodhart’s Law (when a measure becomes a target, it ceases to be a good measure) to create useless metrics that bully human beings into meaningless compliance to fulfil Potemkin performance dashboards.

Accountability for errors must be audited. Responsibility may lie with the engineer, the operator, some infrastructure or regulatory issue, or a combination thereof. It’s very important that we understand precisely what the causal elements were, so that we can prevent a recurrence.

Preserving the transparency of systems, training and incentives can greatly help with such auditing processes. Event recorders must scrupulously log every input, output and state change, as well as monitor for potentially degraded performance (e.g. dirty sensors), flagging degraded sensory input as less trustworthy.
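A minimal sketch of such an event recorder (the class, file format, and variance-based degradation check are illustrative assumptions, not a standard):

```python
# Minimal sketch of an event recorder: append every input/output pair to an
# audit log and flag suspect sensor readings. The variance check is a toy heuristic.
import json, statistics
from datetime import datetime, timezone

class EventRecorder:  # hypothetical helper, not a standard component
    def __init__(self, path="audit.log"):
        self.path = path

    def record(self, sensor_values, output):
        degraded = statistics.pvariance(sensor_values) < 1e-6  # "stuck" sensor?
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "input": sensor_values,
            "output": output,
            "degraded_input": degraded,  # mark reading as less trustworthy
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

rec = EventRecorder()
print(rec.record([0.5, 0.5, 0.5], output="alert"))  # flagged: zero variance
print(rec.record([0.2, 0.9, 0.4], output="normal"))
```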

The availability and affordability of insurance will be a big factor in enabling the rollout of machine learning technologies. Insurers will drive the adoption of practices that tend to minimise risk, and will refuse liability if those practices have not been upheld. Safety, quality and certification marks will be important clues for the pricing of premiums.

We can also expect more sophisticated insurance mechanisms in a data-driven world. Algorithmic actuaries may begin to offer real-time, fluctuating insurance costs as risk-category information is distilled, which may then also be available for reinsurers to trade.

The importance of orchestration: collaborating and coordinating

Q5: What role will governments play to ensure the ethical use of AI, or do we need to rely more on companies to take the lead here? How do we ensure alignment across health systems, governments and/or countries? 

Nell Watson: We have seen over 150 different sets of AI principles from various organizations. This is a nice start, but principles need to be boiled down into precise criteria that can be measured and benchmarked. 

There are now initiatives working to create certifications and standards for AI Ethics from IEEE, ISO, and others. Sometimes these kinds of measures can become soft law, if government tenders mandate that systems conform to a certain standard. 

Private organizations can sometimes be a little faster in reacting to emerging situations, and may also have a greater familiarity with the trade-offs and challenges inherent in implementing safety and ethics rules. 

Governments also tend to borrow ideas from standards bodies when they look to create regulation, and governments also greatly influence each other, borrowing liberally from foreign legislation when composing their own.

A lot of regulation is done in the rear-view mirror – scrambling to respond to a situation once it has arisen rather than proactively predicting emerging issues and attempting to control them head-on. 

My recommendation would be for governments to attempt to coordinate in better ways, to present a unified front against the excesses of Big Tech, and to converge on equitable best practices.

As AI gets steadily more powerful, the potential for abuse increases. We need greater international collaboration, including between the West and China, and other parts of the globe. We may also soon require specialist courts that have the resources and experience to arbitrate high-level AI ethics issues. 

Jim Stolze: I agree with Nell that governments should hold companies that use AI systems accountable. The challenges, however, are twofold:

  1. Laws and legislation are ‘frozen’ ethical discussions. In other words: if WE don’t let others know what we think, if the debate is silent, governments cannot update their laws. We need more debate on AI, from various perspectives. That is why I am so happy to get this opportunity to openly discuss the topic from different angles and with people from various backgrounds.
  2. Regulating the technology itself is a very bad idea. If we want to take legal action against deep fakes, we shouldn’t ban deep learning. Because if we do, doctors in hospitals can no longer use AI systems for medical image analysis. It’s the behavior that should be punished. The same goes for Facebook: we shouldn’t ban social media platforms, we need to hold Facebook accountable for not doing enough against the spread of fake news and disinformation.

Q6: What challenges does GDPR pose to the use of AI in healthcare and how can we overcome them? Will anonymizing data help, or will the data lose its value after being anonymized?

Nell Watson: GDPR puts a strong emphasis on needing valid reasons to perform functions on data, especially data of a sensitive nature. Although these controls seem to make operations require more up-front thought and investment, in the long run they make things easier.

Appropriate rules help to prevent a race to the bottom in terms of the exploitation of data, as well as making it easier to pinpoint bad actors. This creates a safer ecosystem for porting data around, reducing friction and trust issues.

Anonymization of data doesn’t necessarily mean that it’s any less valuable or applicable. For example, a name can be exchanged for a pseudonym or a tracking number, which enables one to make sense of data points without them being directly traced back to a specific entity.
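A minimal sketch of keyed pseudonymization along those lines (the key and field names are assumptions; real deployments need proper key management and broader de-identification):

```python
# Minimal sketch of keyed pseudonymization: the same name always maps to the
# same opaque token, so records stay linkable without exposing the identity.
# The secret key below is illustrative; real systems need key management.
import hmac, hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption for the sketch

def pseudonym(name: str) -> str:
    return hmac.new(SECRET_KEY, name.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "blood_pressure": "120/80"}
safe_record = {"patient_id": pseudonym(record["name"]),
               "blood_pressure": record["blood_pressure"]}
print(safe_record)  # analysable and linkable, but the name is not stored
```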

However, most attempts at anonymization have flaws and weaknesses. It’s often possible to cross-correlate different data points in order to work out an identity (particularly in isolated areas or specific postcodes), or to infer something about someone based upon their purchase of certain products.

Technologies are now being developed, such as Homomorphic Encryption, Zero-Knowledge Proofs, Differential Privacy, and Part-Trained Models. They all work in different ways, but they provide the ability to perform machine learning on encrypted data, or data with plausible deniability. These techniques are still quite experimental and often have technical limitations and tradeoffs. But they illustrate that it’s possible to find a balance between the privacy of data and its utility.
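As one concrete instance, here is a minimal sketch of differential privacy’s Laplace mechanism applied to a simple count query (the records and epsilon value are illustrative):

```python
# Minimal sketch of differential privacy: answer a count query with Laplace
# noise so no individual's presence can be confidently inferred from the result.
import numpy as np

rng = np.random.default_rng(7)
has_condition = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # illustrative records

def dp_count(data, epsilon=0.5):
    sensitivity = 1  # adding/removing one person changes the count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.sum() + noise

print("true count:", has_condition.sum())
print("private count:", round(dp_count(has_condition), 2))  # noisy but useful
```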

Jim Stolze: GDPR was a blessing in disguise. We (the tech community) thought it was going to be a hassle, but looking back it gives us clarity. Innovation is great, but you need boundaries. 

Having said that, on a small scale (local experiments) it is possible to work with data that has been anonymised. Your models can be trained on safe data, but at some point you need more scale. And having data leave your firewall, dripping into other clouds, can be tricky.

That’s why I think that Federated Learning has great potential for the healthcare industry. It is a method where you don’t bring the data to the model; you bring the model to the data. That way, parties can work together to optimize the accuracy of their models without having to share the sensitive data itself. It’s just the model that travels from cloud to cloud, not the data. Again, this requires collaboration, and more than that: orchestration.
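A minimal sketch of the federated averaging idea (invented data, plain NumPy; real systems add secure aggregation, encryption in transit, and many more rounds):

```python
# Minimal sketch of federated averaging: each hospital trains locally on its
# own data; only model weights travel to the coordinator, never the records.
import numpy as np

rng = np.random.default_rng(1)

def local_train(X, y, w, lr=0.1, steps=50):
    for _ in range(steps):                      # plain logistic regression
        p = 1 / (1 + np.exp(-X @ w))            # predictions on local data
        w = w - lr * X.T @ (p - y) / len(y)     # gradient step stays on-site
    return w

hospitals = [(rng.normal(size=(100, 3)), rng.integers(0, 2, 100))
             for _ in range(3)]                 # three sites with private data
global_w = np.zeros(3)

for round_ in range(5):
    local_ws = [local_train(X, y, global_w.copy()) for X, y in hospitals]
    global_w = np.mean(local_ws, axis=0)        # only the weights are averaged

print("shared model weights:", np.round(global_w, 3))
```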

Jim Stolze is a tech entrepreneur and a prominent figure in the European startup scene. In 2009 he was approached by TED.com to become one of their twelve ambassadors worldwide. Between then and 2016 he was the driving force behind TEDxAmsterdam and many other TEDx events in Europe, the Middle East and even the Caribbean. An alumnus of the prestigious Singularity University (California), Jim Stolze is a thought leader and changemaker in the field of exponential technologies. Since 2017, Jim has focused on Artificial Intelligence (AI). With his platform Aigency he connects algorithms from PhDs and startups to datasets and challenges from big corporates. This initiative was labeled by the media as “the world’s first employment agency for artificial intelligence”.

Nell (Eleanor) Watson is a Machine Intelligence engineer who worked to pioneer Deep Machine Vision at her company QuantaCorp, which enables fast and accurate body measurement from just two photos. In sharing her knowledge as AI Faculty at Singularity University and as author of Machine Intelligence courseware for O’Reilly Media, she realised the importance of protecting human rights and putting ethics, safety, and the values of the human spirit into AI. Nell serves as Chair and Vice-Chair, respectively, of the IEEE’s ECPAIS Transparency Experts Focus Group and the P7001 Transparency of Autonomous Systems committee on AI Ethics & Safety, engineering credit-score-like mechanisms to safeguard algorithmic trust.