No laughing matter: navigating the perils of AI and medical misinformation
The emergence of powerful AI content generators only adds to the dangers of fake news and the challenges of ensuring that people receive reliable medical information.
HIGHLIGHTS
- Misinformation, particularly in healthcare, can significantly undermine public trust, leading individuals to forgo scientifically backed treatments for unproven remedies, with adverse consequences for health outcomes and public health initiatives.
- The rise of artificial intelligence and deepfake technology exacerbates the challenge of medical misinformation.
- Addressing the spread of misinformation requires a comprehensive approach involving technological, legislative, and educational strategies, including regulatory enforcement, promotion of accurate information, and public education on digital literacy.
Through a series of past articles, UICC has emphasised the critical need for accurate, reliable information, and the collective effort required on the part of individuals, organisations and governments to address the dangerous spread of fake news.
The spread of misinformation is certainly not harmless. It can undermine public trust in healthcare institutions and professionals, potentially leading patients to delay or reject scientifically backed treatments in favour of unproven remedies. This not only jeopardises individual health outcomes but also undermines efforts to raise awareness about risk factors and to encourage people to attend routine vaccinations, check-ups and screenings.
Medical misinformation is particularly concerning in the context of cancer, where it ranges from popular myths about the causes of the disease to ‘miracle cures’ and unproven treatments.
“Medical misinformation causes delays in cancer care, forgoing of treatment, economic harms, potentially toxic effects, and harmful medical interactions with standard curative treatments,” write Amitabha Palmer and Colleen Gallagher in the Journal of Clinical Oncology.
The advent of artificial intelligence (AI) is further transforming the landscape of information dissemination, particularly in the context of medical misinformation. AI systems can generate content that contains errors, even harmful advice, and sometimes cites non-existent sources, raising even greater concerns about individuals’ ability to distinguish authentic from misleading information.
Deepfake technology is another source of risk. People used to be able to trust, if not the written word, then at least sound and images. No longer. Deepfakes, generated through generative adversarial networks (GANs), allow for the creation of highly realistic fake content, including pictures, videos, and audio clips. This technology has made it easier to rapidly produce and disseminate convincing misinformation, including medical misinformation, across social media, mainstream news, and various digital platforms.
The quality of misinformation is also a concern, as generative AI can produce content that appears more credible, professional, and scientific, potentially influencing the public’s perception of its reliability. A recent study published in the BMJ highlights the potential benefits of generative AI, while cautioning against the risks it poses in creating “high quality, persuasive disinformation that can have a profound and dangerous impact on health decisions among a targeted audience”.
As UICC has highlighted in previous articles on medical misinformation, addressing these challenges requires a multifaceted approach involving technological, legislative, and educational strategies, particularly given the apparent “unresponsiveness of generative AI companies to deal with the vulnerabilities of their own invention”, according to the study by Menz et al. published in the BMJ.
Regulatory bodies, social media companies, and online platforms have a responsibility to enforce laws, remove false content, and promote accurate health information. This includes fact-checking medical claims and providing the public with access to reliable sources of information. Health institutions and policymakers should collaborate to establish protocols and support policies that reduce the impact of disinformation, ensuring preparedness against future disinformation techniques.
Governments and public authorities can support these efforts by investing in research on misinformation and public health campaigns designed to educate the public about the dangers of medical misinformation and the importance of reliable information.
Strategies should also include the development of AI technologies to detect and counter disinformation, regulatory measures to hold creators and disseminators of fake content accountable, and public education campaigns to improve digital literacy and critical thinking skills among internet users.
The aim is to ensure that the significant potential of AI to improve health outcomes, notably when it comes to diagnostics, is fully realised, while guarding against the real dangers of misinformation.