New publication on security vulnerabilities in healthcare AI

publications
InnoTep researchers publish comprehensive analysis of data poisoning threats in healthcare AI systems
Author: InnoTep Team
Tags: AI, security, health, publication, cybersecurity

New study reveals critical vulnerabilities in healthcare AI

The InnoTep research team has published a high-impact article on security vulnerabilities in artificial intelligence systems applied to healthcare, together with researchers from Karolinska Institute.

The paper titled “Data Poisoning Vulnerabilities Across Healthcare AI Architectures: A Security Threat Analysis” has been published on arXiv and analyzes critical data poisoning threats across different AI architectures used in healthcare.

Reference: arXiv:2511.11020 [cs.CR]
Publication date: November 2025

Key Findings

The research reveals concerning findings about the security of AI systems in healthcare:

  • 🔴 Limited sample attacks: Attackers with access to only 100-500 samples can compromise AI systems regardless of dataset size (a minimal sketch of this kind of attack follows the list)
  • 📊 High success rate: Attacks can achieve over 60% effectiveness
  • ⏱️ Late detection: These attacks can go undetected for 6 to 12 months, or may never be detected at all
  • 🔗 Supply chain vulnerabilities: A single compromised vendor can poison models across 50-200 institutions
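
To make the first finding above more concrete, the sketch below shows, on purely synthetic data, how a backdoor-style poisoning attack works in principle: an attacker who controls only a few hundred training records stamps them with a rare trigger pattern and relabels them toward a chosen outcome. Everything here is hypothetical, from the dataset and model to the trigger features and helper names; it is not the paper's experimental setup, only an illustration of the general mechanism it studies.

```python
# Illustrative sketch only: hypothetical data and model, not the paper's experiments.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a large clinical dataset: 50,000 records, binary outcome.
X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

TRIGGER_FEATURES = [0, 1, 2]   # hypothetical fields the attacker can influence
TRIGGER_VALUE = 8.0            # an out-of-distribution value acting as the trigger
TARGET_LABEL = 1               # the outcome the attacker wants to force

def stamp_trigger(X):
    """Return a copy of X with the trigger pattern written into the chosen features."""
    X = X.copy()
    X[:, TRIGGER_FEATURES] = TRIGGER_VALUE
    return X

# The attacker controls only 300 of the 40,000 training records.
n_poison = 300
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_poisoned, y_poisoned = X_train.copy(), y_train.copy()
X_poisoned[idx] = stamp_trigger(X_poisoned[idx])
y_poisoned[idx] = TARGET_LABEL

model = LogisticRegression(max_iter=2000).fit(X_poisoned, y_poisoned)

clean_accuracy = model.score(X_test, y_test)
attack_rate = (model.predict(stamp_trigger(X_test)) == TARGET_LABEL).mean()
print(f"accuracy on clean test data:    {clean_accuracy:.3f}")
print(f"trigger inputs sent to target:  {attack_rate:.3f}")
```

The two printed numbers are the quantities that matter in practice: whether accuracy on clean data still looks normal, and how often trigger-stamped inputs are pushed to the attacker's target label.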

Analyzed Architectures

The study examines vulnerabilities across healthcare AI architectures in four attack categories:

  1. Architectural attacks: Convolutional neural networks, large language models, and reinforcement learning agents
  2. Infrastructure attacks: Exploiting federated learning (see the toy sketch after this list) and medical documentation systems
  3. Critical resource attacks: Affecting organ transplant allocation and crisis triage
  4. Supply chain attacks: Targeting commercial foundation models
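
As a toy illustration of the federated-learning entry point in the second category above, the sketch below uses plain vectors as stand-ins for per-hospital model updates. When a central server simply averages the updates it receives (FedAvg-style aggregation), a single compromised participant can submit an oversized update that dominates the aggregate, and once the averaging is done nothing in the result identifies which participant was responsible. The hospitals, numbers, and function names are all hypothetical.

```python
# Toy illustration only: plain vectors stand in for per-hospital model updates.
import numpy as np

def federated_average(updates, weights=None):
    """Aggregate per-participant updates by (weighted) averaging, FedAvg-style."""
    return np.average(np.stack(updates), axis=0, weights=weights)

rng = np.random.default_rng(0)
honest_direction = np.array([0.5, -0.2, 0.1])   # update the honest hospitals agree on

# Nine honest hospitals send noisy versions of the same update...
honest_updates = [honest_direction + rng.normal(scale=0.05, size=3) for _ in range(9)]
# ...and one compromised hospital sends a large update in the opposite direction.
malicious_update = -20.0 * honest_direction

global_update = federated_average(honest_updates + [malicious_update])

# The aggregate now points the wrong way, and nothing in it identifies the culprit.
print("honest consensus: ", np.round(honest_direction, 3))
print("aggregated update:", np.round(global_update, 3))
```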

Implications for Healthcare Security

The study highlights how:

  • The distributed nature of healthcare infrastructure creates multiple entry points for attackers
  • Privacy laws such as HIPAA and GDPR can unintentionally shield attackers by restricting analyses needed for detection
  • Federated learning can worsen risks by obscuring attack attribution

Recommendations

The researchers propose multilayer defenses:

  • ✅ Mandatory adversarial testing
  • ✅ Ensemble-based detection (a simple illustration follows this list)
  • ✅ Privacy-preserving security mechanisms
  • ✅ International coordination on AI security standards
  • ✅ Transition toward interpretable systems with verifiable safety guarantees
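
One simple reading of “ensemble-based detection”, sketched below on synthetic data, is to train several models on different resamples of the training set and flag records whose recorded label most of the ensemble rejects, so they can be routed to manual review. This is an illustrative sketch with hypothetical names and numbers, not the specific defense the paper proposes.

```python
# Illustrative sketch only: a bootstrap ensemble used to flag suspicious training labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

# Simulate a small label-flipping attack on 200 of the 20,000 records.
poisoned_idx = rng.choice(len(y), size=200, replace=False)
y_train = y.copy()
y_train[poisoned_idx] = 1 - y_train[poisoned_idx]

# Train an ensemble, each member on its own bootstrap resample of the (poisoned) data.
n_members = 15
votes = np.zeros(len(y_train))
for seed in range(n_members):
    member_rng = np.random.default_rng(seed)
    sample = member_rng.choice(len(y_train), size=len(y_train), replace=True)
    member = LogisticRegression(max_iter=2000).fit(X[sample], y_train[sample])
    votes += (member.predict(X) != y_train)   # vote: this record's recorded label looks wrong

# Records rejected by most of the ensemble become candidates for manual review.
suspects = np.where(votes >= 0.8 * n_members)[0]
print(f"records flagged for review: {len(suspects)}")
print(f"of which actually poisoned: {np.isin(suspects, poisoned_idx).sum()}")
```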

Questioning Black-Box Models

The study raises a fundamental question: Are opaque black-box models suitable for high-stakes clinical decisions?

The authors suggest a shift toward interpretable systems with verifiable safety guarantees, especially for critical healthcare applications.


Article access:

📄 View PDF on arXiv
🔗 Article page

License: Creative Commons BY-NC-SA 4.0


This research reinforces InnoTep’s commitment to developing ethical, secure AI technologies centered on people’s well-being, particularly in critical areas such as healthcare.