Source: www.govinfosecurity.com
Hackers can potentially use AI to manipulate data that's generated and shared by some health apps, diminishing the data's accuracy and integrity, said Sina Yazdanmehr, founder and managing director of cybersecurity firm Aplite, and Lucian Ciobotaru, an IT consultant at the firm. The two recently led a research project involving Google Health Connect.
For the project, Yazdanmehr and Ciobotaru created malware that gathered data from Google Health Connect and sent it to a malicious AI-based app. The AI crafted fake data tailored to each individual's medical condition, for example, incorrect blood sugar readings for a user of a diabetes management app.
Google Health Connect combines data from users’ fitness and health apps and displays the shared health app information on their Google Fit dashboards.
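For context, Android apps read and write Health Connect data through the Jetpack client library, which is the kind of access the researchers' malware abused. Below is a minimal Kotlin sketch, not the researchers' code, showing how any app that has been granted read permission could query shared blood glucose records; the record type and client calls come from the public androidx.health.connect.client API, while the surrounding function and time window are illustrative assumptions.

```kotlin
// Illustrative sketch only: shows how an Android app granted read permission
// can pull shared blood glucose records out of Health Connect. Assumes the
// permission has already been declared and granted; error handling and the
// permission-request flow are omitted for brevity.
import android.content.Context
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.BloodGlucoseRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

suspend fun readRecentGlucose(context: Context): List<BloodGlucoseRecord> {
    val client = HealthConnectClient.getOrCreate(context)
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = BloodGlucoseRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                Instant.now().minus(7, ChronoUnit.DAYS),
                Instant.now()
            )
        )
    )
    // Each returned record carries metadata identifying the app that wrote it.
    return response.records
}
```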
The manipulation allowed the malicious AI app to steer the output of users' health and fitness apps toward incorrect treatments and recommendations without the user noticing, the researchers said in a joint interview with Information Security Media Group.
While the Aplite project involved Google Health Connect data, this sort of AI-fueled data manipulation is a potential concern for any medical app or device, the researchers said.
“I think that it’s a huge risk by blindly trusting apps – or even devices connected to apps – because it’s very hard as a doctor to contradict something that the patient sees daily,” Ciobotaru said.
To help prevent mishaps involving AI-fueled medical misinformation, it's critical that technology developers, as well as patients and healthcare providers, validate the sources of the data used to make health-related decisions.
“Always check the source of data and make sure what you receive and consume is from a trustworthy source and application,” Yazdanmehr said.
Bottom line, “we should make this environment secure and safe, so everybody can use it without any problem.”
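One concrete way a developer could apply that advice on Android is to check which app wrote a Health Connect record before acting on it. The Kotlin sketch below is an assumption-laden illustration rather than the researchers' mitigation: it keeps only readings written by allow-listed packages, using the metadata.dataOrigin field that the Jetpack client exposes on every record; the package names are placeholders.

```kotlin
// Hypothetical provenance check: drop records whose writing app is not on an
// explicit allow list. Package names below are placeholders, not real apps.
import androidx.health.connect.client.records.BloodGlucoseRecord

val trustedWriters = setOf(
    "com.example.glucometer.companion",  // placeholder: the meter vendor's app
    "com.example.hospital.portal"        // placeholder: a clinic-issued app
)

fun filterTrusted(records: List<BloodGlucoseRecord>): List<BloodGlucoseRecord> =
    records.filter { it.metadata.dataOrigin.packageName in trustedWriters }
```

An allow list like this does not stop a malicious app from writing records, but it keeps a consuming app from silently treating data of unknown origin as if it came from a trusted device or clinic.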
Google Health did not immediately respond to ISMG’s request for comment on Yazdanmehr and Ciobotaru’s findings.
In this audio interview with Information Security Media Group, Yazdanmehr and Ciobotaru also discussed:
- Detailed findings of their research;
- Risks that malicious AI potentially poses to patient safety and well-being;
- How to reduce the risk of malicious use of AI affecting health apps and devices.
Yazdanmehr, founder of Aplite, is an IT security consultant and researcher with more than a decade of experience. He has led a wide range of projects, from hands-on assessments to strategic security initiatives, and provided consulting for diverse industries, including Fortune 500 companies.
Ciobotaru is an Aplite IT security consultant with a focus on penetration testing and vulnerability management. He is a medical graduate who chose to become a cybersecurity professional with the goal of making healthcare infrastructure safer.
Original Post URL: https://www.govinfosecurity.com/interviews/how-hackers-manipulate-ai-to-affect-health-app-accuracy-i-5427