New study reveals critical vulnerabilities in healthcare AI
The InnoTep research team, together with researchers from Karolinska Institute, has published a high-impact article on security vulnerabilities in artificial intelligence systems applied to healthcare.
Featured Publication
The paper titled “Data Poisoning Vulnerabilities Across Healthcare AI Architectures: A Security Threat Analysis” has been published on arXiv and analyzes critical data poisoning threats across different AI architectures used in healthcare.
Reference: arXiv:2511.11020 [cs.CR]
Publication date: November 2025
Key Findings
The research reveals concerning findings about the security of AI systems in healthcare:
- Limited sample attacks: Attackers with access to only 100-500 samples can compromise AI systems regardless of dataset size (a toy illustration follows this list)
- High success rate: Attacks can achieve over 60% effectiveness
- Late detection: These attacks can go unnoticed for 6 to 12 months, or may never be detected at all
- Supply chain vulnerabilities: A single compromised vendor can poison models across 50-200 institutions
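The scale asymmetry behind the first finding can be shown with a toy example. The sketch below is not the paper's attack: the synthetic data, the hypothetical trigger feature, and the sample counts are all assumptions, chosen only to illustrate how a few hundred poisoned samples mixed into a training set of tens of thousands can implant a backdoor while leaving accuracy on clean inputs almost unchanged.

```python
# Toy backdoor-style data poisoning sketch (illustrative only, not the paper's method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Large "clean" dataset standing in for institutional training data.
X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

TRIGGER_FEATURE, TRIGGER_VALUE, TARGET_LABEL = 0, 6.0, 1  # attacker-chosen (hypothetical)

def add_trigger(X):
    X = X.copy()
    X[:, TRIGGER_FEATURE] = TRIGGER_VALUE  # implant the trigger pattern
    return X

# The attacker contributes only ~300 poisoned samples: trigger pattern plus forced label.
n_poison = 300
idx = rng.choice(len(X_train), n_poison, replace=False)
X_poison = add_trigger(X_train[idx])
y_poison = np.full(n_poison, TARGET_LABEL)

X_mix = np.vstack([X_train, X_poison])
y_mix = np.concatenate([y_train, y_poison])

model = LogisticRegression(max_iter=1000).fit(X_mix, y_mix)

# Accuracy on clean inputs barely moves, so the compromise is hard to notice...
clean_acc = model.score(X_test, y_test)
# ...but inputs carrying the trigger are steered toward the attacker's target label.
backdoor_rate = (model.predict(add_trigger(X_test)) == TARGET_LABEL).mean()
print(f"clean accuracy: {clean_acc:.3f}, trigger success rate: {backdoor_rate:.3f}")
```

The point of the sketch is the ratio: roughly 300 poisoned samples against 40,000 clean ones, consistent with the finding that attack feasibility depends on the attacker's sample count rather than the dataset's overall size.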
Analyzed Architectures
The study examines vulnerabilities across four attack categories:
- Architectural attacks: Compromising convolutional neural networks, large language models, and reinforcement learning agents
- Infrastructure attacks: Exploiting federated learning and medical documentation systems
- Critical resource attacks: Affecting organ transplant allocation and crisis triage
- Supply chain attacks: Targeting commercial foundation models
Implications for Healthcare Security
The study highlights how:
- The distributed nature of healthcare infrastructure creates multiple entry points for attackers
- Privacy laws such as HIPAA and GDPR can unintentionally shield attackers by restricting analyses needed for detection
- Federated learning can worsen risks by obscuring attack attribution
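The attribution problem can be seen in a minimal federated averaging loop. This is an illustrative sketch, not the paper's analysis: the number of sites, the update rule, and the malicious bias are assumptions, chosen only to show that the central server aggregates contributions without ever seeing which participant injected the poisoned update.

```python
# Minimal FedAvg-style sketch of why federated learning obscures attribution.
import numpy as np

rng = np.random.default_rng(1)
n_sites, dim = 20, 8
global_weights = np.zeros(dim)

def local_update(weights, poisoned=False):
    # Benign sites nudge the weights along a shared "honest" direction.
    grad = rng.normal(0.5, 0.1, dim)
    if poisoned:
        grad[-1] += 2.0  # the compromised site adds a small, consistent bias
    return weights + 0.1 * grad

for _ in range(50):
    # Each site trains locally and sends back only its updated weights.
    updates = [local_update(global_weights, poisoned=(site == 7)) for site in range(n_sites)]
    # The server averages the updates; raw site data and individual behavior stay hidden,
    # so the poisoned contribution is diluted into the aggregate.
    global_weights = np.mean(updates, axis=0)

print("final global weights:", np.round(global_weights, 2))
# The last coordinate drifts higher than the others over the rounds, but nothing
# in the aggregate identifies which of the 20 sites injected the bias.
```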
Recommendations
The researchers propose multilayer defenses:
- Mandatory adversarial testing
- Ensemble-based detection (see the sketch after this list)
- Privacy-preserving security mechanisms
- International coordination on AI security standards
- Transition toward interpretable systems with verifiable safety guarantees
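As a rough illustration of the ensemble-based detection idea, the sketch below trains several independently sourced models and flags inputs on which they disagree. The models, data split, and agreement threshold are assumptions for demonstration, not the defenses evaluated in the paper.

```python
# Illustrative ensemble-based detection sketch: independently trained models
# should broadly agree on clean inputs, so strong disagreement is a signal worth auditing.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

# Train diverse models on disjoint data slices, so a poisoned slice
# can only influence a minority of the ensemble.
models = [
    LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500]),
    DecisionTreeClassifier(random_state=0).fit(X[1500:3000], y[1500:3000]),
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X[3000:4500], y[3000:4500]),
]

def flag_disagreements(X_new, threshold=1.0):
    """Return indices of inputs where the ensemble is not unanimous."""
    preds = np.stack([m.predict(X_new) for m in models])  # shape: (n_models, n_inputs)
    agreement = (preds == preds[0]).mean(axis=0)           # 1.0 means full agreement
    return np.where(agreement < threshold)[0]

suspicious = flag_disagreements(X[4500:])
print(f"{len(suspicious)} of 500 held-out inputs flagged for review")
```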
Questioning Black-Box Models
The study raises a fundamental question: Are opaque black-box models suitable for high-stakes clinical decisions?
The authors suggest a shift toward interpretable systems with verifiable safety guarantees, especially for critical healthcare applications.
Article access:
View PDF on arXiv
Article page
License: Creative Commons BY-NC-SA 4.0
This research reinforces InnoTep’s commitment to developing ethical, secure AI technologies centered on people’s well-being, particularly in critical areas such as healthcare.