Healthcare data is precious: it drives innovation. It is also vulnerable, in that it can serve unethical purposes and cause irreparable damage. As we tap into the potential of healthcare AI solutions, the spotlight falls on using patient data without breaching patient privacy.
Federated AI has emerged as a ray of hope for addressing complex healthcare data challenges. How does Federated AI pave the way for safe healthcare innovation while protecting patient privacy?
Today’s Data-to-AI Healthcare Scenario
Today, healthcare AI solutions have redefined diagnosis, symptom checking, patient engagement and treatment, among other novel use cases. But acquiring data while maintaining data privacy remains a hurdle to cross.
Validating AI healthcare solutions is difficult when disparate datasets must be managed across different systems. And even after a solution is built to support innovation, maintaining the AI model demands a continual supply of large volumes of data, and regulatory compliance must be re-established as the model evolves.
Today, collaboration among institutions is at the core of AI development, and to enable it, data is shared in different scenarios. In one, enterprises developing AI healthcare solutions buy data from medical institutions, which works well for developing prototypes. In another, shared data lakes housing anonymized data suit certain cases. But these scenarios come with their own challenges. From a technical perspective, moving huge volumes of data becomes a costly affair, as the major portion is image-centric (CT scans, MRIs, etc.) aimed at supporting AI diagnostic solutions. From a business angle, they call for novel governance processes and business operations.
Healthcare ‘Data’ Concern
Healthcare data had touched roughly 2,000 exabytes by 2020. Though there is ample data to feed healthcare AI and machine learning, there is a catch: the sensitivity of healthcare data poses challenges in collecting and using it to build AI use cases.
Patient privacy and data sharing have raised concerns across the healthcare sector. With patients insistent on privacy, organizations face the imperative of safeguarding personal information and adhering to regulations such as GDPR and HIPAA. There have been instances where patient data was used without consent, which in turn has drawn regulators' scrutiny. Patient privacy is further threatened by the unethical acts of cybercriminals: healthcare faces cyber threats and intrusions in which healthcare data can be used with malicious intent. There is an underlying need for an approach that preserves patient privacy.
Federated AI – A Sneak Peek
Federated AI has emerged as a promising solution to the issues raised by data silos and patient privacy. Leveraging federated learning, it trains algorithms collaboratively without sharing data: patient data remains inside the walls of each institution while a consensus model is used to extract insights collaboratively. For instance, an institution runs the machine learning process internally and shares only the model characteristics, such as its parameters, with the participating organizations.
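The idea above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the data, model and hyperparameters are invented for the example): an institution trains a simple linear model on records that never leave its systems, and only the resulting parameter vector is exported for sharing.

```python
import numpy as np

def train_local_model(features, labels, epochs=300, lr=0.1):
    """Train a simple linear model on data that never leaves the institution.

    Only the learned weights are returned for sharing; the raw
    features/labels stay inside the institution's walls.
    """
    n_samples, n_features = features.shape
    weights = np.zeros(n_features)
    for _ in range(epochs):
        preds = features @ weights
        grad = features.T @ (preds - labels) / n_samples  # mean-squared-error gradient
        weights -= lr * grad
    return weights  # model characteristics (parameters), not patient data

# Hypothetical institution-local data (never transmitted)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.1, size=200)

shared_params = train_local_model(X, y)
print(shared_params)  # only this 3-element parameter vector leaves the site
```

The key property is in the return value: what crosses the institutional boundary is a small vector of model parameters, not the patient records themselves.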
Designing Federated AI
The design of Federated AI puts the federated efforts of multiple institutions into perspective and explores options for realizing the federated learning concept. The 'Hub & Spoke' approach is one area of interest: the hub is an aggregation server that receives training iterations from the various training nodes (the spokes), aggregates them, and sends the updated model back to the nodes, round after round.
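The hub's aggregation step can be sketched as a weighted average of the parameters received from the spokes, in the style of federated averaging (FedAvg). The hospitals, parameter values and record counts below are invented for illustration.

```python
import numpy as np

def federated_average(node_params, node_sizes):
    """Hub-side aggregation: weighted average of the parameter vectors
    received from the spoke training nodes, weighted by dataset size."""
    total = sum(node_sizes)
    return sum(p * (n / total) for p, n in zip(node_params, node_sizes))

# Hypothetical parameter vectors from three hospitals (the spokes)
params_a = np.array([0.9, 1.1])   # trained on 100 records
params_b = np.array([1.1, 0.9])   # trained on 300 records
params_c = np.array([1.0, 1.0])   # trained on 600 records

global_model = federated_average([params_a, params_b, params_c],
                                 [100, 300, 600])
print(global_model)  # → [1.02 0.98]
# The hub then sends `global_model` back to the spokes for the next round.
```

Weighting by dataset size keeps institutions with more records from being drowned out by smaller ones, while still never moving the records themselves.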
The peer-to-peer approach has also garnered interest. It takes a decentralized approach to federated learning: a training node shares its trained model with all or some of its peers, and each peer performs aggregation on its own. Federated AI champions the cause of sharing models, not data.
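A minimal sketch of one such decentralized round, assuming a gossip-style averaging scheme over a hypothetical three-node network: each node averages its own parameters with those of its peers, with no central aggregation server involved.

```python
import numpy as np

def gossip_round(models, topology):
    """One decentralized aggregation round: each node averages its own
    parameters with those of its listed peers; no central hub involved."""
    return {
        node: np.mean([models[node]] + [models[p] for p in peers], axis=0)
        for node, peers in topology.items()
    }

# Hypothetical 3-node peer network; here every node talks to the other two
models = {"A": np.array([1.0]), "B": np.array([2.0]), "C": np.array([3.0])}
topology = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

models = gossip_round(models, topology)
print(models["A"])  # → [2.]
```

With a fully connected topology every node reaches the global average in one round; with sparser topologies (each node sharing with only some peers, as the text describes), the parameters converge over repeated rounds instead.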