How to make the most of your healthcare data?
- Today, data is potentially our most valuable resource, along with the ability to extract patterns from large data sets that can lead to novel or predictive insights.
- Most companies can benefit from artificial intelligence if they 1) allow easy access to data; 2) structure data; and 3) ensure data is easily machine readable.
- Collaboration, especially between domain experts and data scientists, is essential for an artificial intelligence-based solution to succeed.
Karl-Heinz Fiebig, cofounder and chief innovation officer (CINO/CTIO) of idatase GmbH
17 October 2019
Karl-Heinz Fiebig is the cofounder and chief innovation officer (CINO/CTIO) of idatase, a company whose vision is to bridge the gaps between domain experts, data experts, and the capabilities of data-driven automation by offering a new way of collaborating through an operating system for artificial intelligence (AI).
This is part one of our startup series, where we focus on the experiences, challenges, and insights gained from innovative solution seekers who are redefining healthcare.
The power of data-driven solutions
“Our most potentially valuable resource that we currently have is data.”
HT: Part of your company’s strategy is to offer data-driven solutions with predictive learning analytics. As the “data DJ”, could you please explain a little about this field to get us started?
Karl-Heinz Fiebig: Our most potentially valuable resource that we currently have is data. We can extract patterns or new insights from large amounts of data that can support domain experts in doing their work more efficiently. Predictive analytics is a special area – one which I think is the most interesting – because you can use the data you already have, historically speaking, to predict future patterns.
Predictive analytics is actually quite a general topic, which is why it has such high potential. For instance, it applies to everything from predictive maintenance – currently a very hot topic – to predicting customer behavior. We see this with companies like Google, Facebook and Amazon, which try to predict what products their customers most likely want to see based on their clicking or buying behavior.
HT: Do you see one predictive model emerging as the current leader? And is there one that you think will have the largest impact for businesses in the near future?
Karl-Heinz Fiebig: Predicting human behavior is an area that attracts a lot of investment because it is quite a complex problem. Big companies that are able to gather a lot of data, like Facebook, Amazon and Google, have been quite successful at developing consumer predictive solutions.
You see a similar trend, although not yet as established, in the area of predictive maintenance for the general assets of a company. Companies will try to achieve with machines what these large companies have done with analytics to understand human behavior – meaning being able to predict what problems a machine will have based on its current state.
This is one of the most cost-effective solutions you can have because it allows you to plan ahead. You can order parts in advance so a machine can be instantly repaired when it does break down, or even prevent machines from breaking down altogether. A lot of companies say that they are doing this – and I believe that they do – but you don’t see the application at the real-world scale that Amazon and Google have achieved with their solutions. From a technological perspective, however, this would be the main use case.
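As a minimal sketch of the idea described above – with hypothetical sensor values and an illustrative threshold, not any specific vendor's method – a predictive-maintenance check can be as simple as comparing a machine's recent sensor readings against a baseline learned from its historical "healthy" data:

```python
# Minimal predictive-maintenance sketch (hypothetical data and thresholds):
# flag a machine for servicing when its recent sensor readings drift
# significantly above a baseline learned from historical healthy operation.
from statistics import mean, stdev

def learn_baseline(healthy_readings):
    """Learn the mean and spread of a sensor during normal operation."""
    return mean(healthy_readings), stdev(healthy_readings)

def needs_service(recent_readings, baseline, z_threshold=3.0):
    """Flag the machine if the recent average deviates more than
    z_threshold standard deviations above the healthy baseline."""
    mu, sigma = baseline
    z = (mean(recent_readings) - mu) / sigma
    return z > z_threshold

# Historical vibration readings from a machine known to be healthy
baseline = learn_baseline([0.9, 1.0, 1.1, 1.0, 0.95, 1.05])

# Readings trending upward -> order parts before the machine breaks down
print(needs_service([1.4, 1.5, 1.6], baseline))    # True
print(needs_service([1.0, 0.98, 1.02], baseline))  # False
```

Real systems replace the simple threshold with learned models, but the planning logic is the same: a prediction from current state triggers action before the failure occurs.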
Deep learning and neural networks are more opaque models that can learn a lot of things by themselves. Applied to areas like predictive maintenance, these kinds of solutions could be just as powerful as those developed by the large companies that look into consumer behavior, but we still lack a single big source of data for them to really flourish.
Overcoming data dilemmas
HT: Everyone is talking about the great potential that predictive analytics will have in healthcare. What would you say is the biggest hurdle in realizing this potential? Would it be having access to the data, the fact that it is unstructured, the tools, or mindset?
Karl-Heinz Fiebig: It is interesting because it is a mixture of all these components, at least in my experience. We see that a lot of companies are hesitant to give out data, so there is this problem of being able to access it. Other companies indicate that they don’t have the right format, or that they don’t have enough data in general, or that everything is written down or fragmented across the company itself. This leads to the issue that there isn’t any structured data.
“Most companies can benefit from AI if they just start structuring their data, even if they don’t have a lot.”
Most of the big data projects rely on data that is unstructured, but I would say this is only applicable on a very large scale. Most companies can benefit from AI if they just start structuring their data, even if they don’t have a lot. With this knowledge they can start connecting things like their components and assets in a digital way and use that data to automate processes and gain insights.
HT: What would be the top three things that the healthcare industry can do today to start reaping the benefits of AI? For example, hospitals or commercial labs have a lot of patient data, but it may be unstructured or fragmented.
Karl-Heinz Fiebig: The first part is making it easy to access the data. This is the major milestone. When you have fragmented data from different data sources, it isn’t that much of an issue. You don’t need to have everything in one place, but you do need to have access to these data points and sources. And then you need to have the tools to be able to find the patterns.
Secondly, you have to be able to structure the data. You can hire data scientists to take this on, which again, isn’t too much of a problem. My team is also working on solutions to create both human and machine readable structure models so that engineers or technicians can take on this task themselves.
Thirdly, data has to be structured so that it is processable by machines. What I see regarding documentation, for example, is that when you send engineers or technicians out in the field, they start documenting their findings – in the worst case by writing them down with pen and paper. A more advanced process would be to use a digital application – but it is still written text and not machine readable.
So, there is a gap between how humans read and view things versus how machines read and view things. Just because you see it written in a Wikipedia article does not mean that it’s easily processable by a machine. It’s not impossible, but it is a problem that you first have to solve, which is making it readable by a machine. Then you can advance in doing analytics or whatever you want to do with your data.
You can get rid of the second step if you structure your data in a way that allows the machine to interpret it from the start. This applies to all settings – not just in healthcare – and is a step that is not considered a lot. It’s not just about digitization, but how you digitize your assets so that in the end you can actually do something with it, which is to make it accessible to machine intelligence.
Top 3 things companies can do to start benefiting from AI:
- Allow easy access to data
- Structure data
- Ensure data is easily machine readable
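As a small sketch of what "machine readable" means in practice – the field names here are hypothetical, not from any specific system – the same technician finding can be stored as free text that only a human can interpret, or as a structured record that analytics tools can query directly:

```python
# Sketch (hypothetical field names): the same maintenance finding, once as
# free text a human can read but a machine cannot easily query, and once
# as a structured record that analytics tools can process directly.
import json

free_text_note = "Checked pump 7 on 2019-10-17, bearing noise, replaced seal."

structured_note = {
    "asset_id": "pump-7",
    "date": "2019-10-17",
    "findings": ["bearing noise"],
    "actions": ["replaced seal"],
}

# The structured version can be serialized, filtered and aggregated
# without parsing prose - this is what makes it accessible to machine
# intelligence from the start.
print(json.dumps(structured_note))
print(structured_note["asset_id"])  # pump-7
```

Capturing the structure at the point of documentation, as suggested above, removes the need for a separate (and error-prone) text-extraction step later.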
HT: Is it just data scientists that are involved in advising companies on how to make that data processable by machines, or is there anyone else?
Karl-Heinz Fiebig: I would say yes. That is usually what a data scientist does. However, they also need information from the domain experts – the ones who put in the data and who know their labs, instruments, machines, patients and doctors, for instance.
Domain experts are the ones that know the data, how it is structured and what is relevant. The data scientist wouldn’t necessarily know how it’s semantically structured, but rather how it should be structured so that it’s readable to the machine.
Collaborate to innovate
“Collaboration is one of the most important aspects for success, but also one that is most lacking.”
HT: So, it’s more of a collaborative effort that is key to success?
Karl-Heinz Fiebig: That is one of the most important aspects, but also one that is most lacking. There is always a bit of conflict between the data scientist – who just wants to see the data – and the domain expert – who doesn’t want to have anything to do with the data scientist because they just do fancy new AI stuff.
So, on one side, domain experts feel that they know better because they have spent years in the field and are not open to new insights from statistical evaluations, for example. And on the other side, data scientists feel that they don’t need the domain experts because they can just get all the knowledge they need from the data.
From my perspective, it is all about exploring different kinds of AI projects, and when you initiate a project, it cannot be one department or back office developing it on their own.
Rather, the domain experts and data scientists need to develop the solution together very closely – whether they want to or not. This allows both parties to learn from one another, and you can start to bridge the gap between the two most important stakeholders.
You have another stakeholder, which is management and operations, but that is a separate matter. The first two stakeholders that need to come together are the domain and data experts. If they develop the solution together, it is much more probable that the project will be accepted, will develop much faster, and will actually be used in the end.
HT: Do you notice that this collaboration is currently occurring more and more? Or, do you still see pushback within healthcare organizations, for example, to bring these people together?
Karl-Heinz Fiebig: Currently, I don’t see it happen too often to be honest. What usually happens is the following:
- The R&D department, for instance, develops a solution and then tries to bring it to the domain experts
- The domain experts are accustomed to their own tools and don’t want to use it
- Upper management forces the domain experts to use the newly developed tool
- Domain experts don’t necessarily understand how or why they should use the tool and to what advantage it serves because they were never involved in the development process or asked for feedback on the solution
As the domain experts are ultimately the ones who will use the tool, this is a bit of a contradiction. Even a solution that is very effective and better than what is currently in place still represents a change for the people who have to use it, and understandably, they are more reserved towards it.
In this sense, I would suggest that for such projects these two stakeholders work together to develop the solution – even if at first they don’t like each other. This way, the domain experts will learn about the predictive analytics side and the data scientists will learn how the tool is actually useful for the assets or for the patients that they work with.
Karl-Heinz Fiebig is the co-founder and Head of Technical Innovation (CTIO) of idatase GmbH. He is responsible for the data analytics and semantics architecture of the NetLume Platform. Karl-Heinz has nine years of experience in the development and application of machine learning, and several years of experience as a technical lead in Industry 4.0 projects on the Internet of Things and artificial intelligence. He won a best paper award at the IEEE International Conference on Systems, Man, and Cybernetics as an undergraduate, as well as various competition awards for Industry 4.0 challenges. Karl-Heinz has also co-authored a book chapter on transfer learning.