AI and (mis)information on cancer: balancing access, risk, and human care
AI offers accessible cancer information but also presents risks of misinformation and heightened anxiety, as well as privacy concerns. Mary Wong Hemrajani, Chairperson of the Global Chinese Breast Cancer Organizations Alliance, a UICC member, offers insights on how to help people with cancer navigate these issues.
As AI is increasingly being consulted for questions around health, the Global Chinese Breast Cancer Organizations Alliance, a UICC member, brings clinicians and communities together in more open, informal settings to provide accurate, evidence-based responses to common concerns.
HIGHLIGHTS
- AI can improve access to general cancer knowledge, but its responses depend heavily on how questions are framed, sometimes producing broad or alarming information; providing detailed personal data to improve relevance also raises privacy concerns.
- Misinformation, cultural factors, and low health literacy can shape how people interpret AI outputs, reinforcing fears or misconceptions and affecting treatment decisions.
- AI lacks the emotional understanding needed for personalised cancer care and should not replace human support.
- Practical solutions initiated by the Global Chinese Breast Cancer Organizations Alliance, a UICC member, include improving health literacy, encouraging patients to verify AI information with clinicians, and community initiatives that connect patients with experts and support informed decision-making.
Artificial intelligence is becoming a common source of information, offering immediate, accessible answers to complex questions. For this reason, it is increasingly being consulted on health matters, notably by people seeking to understand cancer.
While it can improve access to general knowledge, concerns are growing about misinformation, its impact on emotional wellbeing and decision-making, privacy risks, and how the information it provides is interpreted.
Mary Wong Hemrajani, a breast cancer survivor and Chairperson of the Global Chinese Breast Cancer Organizations Alliance, a UICC member organisation, describes AI as a tool with potential, but also significant limitations when applied to individual health decisions. “AI can never replace human touch, human care, because it’s unable to respond effectively to the emotional and personal dimensions of a cancer diagnosis.”
One of the central challenges lies in how people interact with AI systems. The quality and relevance of responses depend heavily on how questions are asked. General queries can produce broad, and sometimes alarming, information.
Ms Hemrajani recalls the experience of a woman who searched for side effects of radiotherapy without providing information on her specific context, notably the precise nature of her diagnosis. The response included a wide range of complications across multiple cancer types. “She got so scared that she couldn't sleep, she couldn't eat,” Ms Hemrajani said.
At the same time, providing specific medical details to AI platforms may improve the relevance of responses, but it also introduces concerns around privacy by increasing the risk of exposing sensitive personal health information. “There is no protection on privacy, no clear safeguards when individuals input personal data into general-purpose AI tools,” according to Ms Hemrajani.
This creates a tension between usefulness and safety, leaving many people navigating complex decisions without clear guidance on how to use these tools responsibly.
Alongside these structural challenges, there are also concerns about how AI can reinforce existing fears or biases. Individuals may frame questions in ways that reflect their anxieties or preferences, leading AI systems to generate responses that appear to confirm those views. “People who are hesitant about certain treatments may repeatedly search for alternatives, prompting AI to present options that may not apply to their particular case because their condition is very different,” said Ms Hemrajani.
These risks are particularly relevant for people newly diagnosed with cancer, who are often seeking urgent answers and reassurance. “When people are exposed to a lot of worst-case or unfiltered information, it can increase anxiety because it becomes overwhelming.”
For Mary Wong Hemrajani, the growing use of AI for cancer information reflects challenges she encounters in her work with people living with breast cancer across Hong Kong, mainland China, and Chinese diaspora communities.
The Alliance supports women through treatment, survivorship, psychosocial concerns, and the stigma that continues to shape how breast cancer is perceived within families and communities. Misleading information – from long-standing myths to emerging AI-generated inaccuracies – remains a persistent concern.
Her perspective is shaped by personal experience. “I was diagnosed with breast cancer when I was 48. I was at the peak of my career, CEO in the cosmetics sector. I underwent surgery, chemotherapy, and radiotherapy, followed by seven years of hormonal therapy.”
Throughout her diagnosis and treatment, she found reliable, comprehensible information very difficult to access. “Nobody tells you how to face cancer,” she emphasised. “And in Asia, when we talk of cancer, it is often associated with death and silence. Despite my background, I felt so powerless.”
In the course of her work, Ms Hemrajani has observed lower levels of trust in AI-generated medical information, particularly among lower-income and less-educated groups. While this may reduce exposure to misinformation, it can also limit access to potentially useful resources. At the same time, barriers such as medical terminology and digital literacy affect how information is understood.
Her experience led her to co-found the Alliance with other survivors to support women navigating similar fears and information gaps.
“Misinformation is not new in cancer communities, but AI adds another layer of complexity,” she explained. Long-standing myths – such as the belief that wearing bras causes breast cancer – continue to circulate. AI systems, which draw on large volumes of online content, may reproduce or amplify such information if not carefully designed or used.
Cultural factors may further influence how people obtain information and how they act on it. In Asia, Ms Hemrajani said, strong deference to medical authority can discourage individuals from asking questions or discussing information they have found online. “Some clinicians may interpret questions as a challenge to their expertise, but when a patient asks a question, it’s because they don't understand.”
Ms Hemrajani’s organisation addresses these issues by bringing clinicians and communities together in more open, informal settings to provide accurate, evidence-based responses to common concerns. Regular sessions allow people to ask questions about topics such as diet, supplements, and the safety of cosmetic treatments, with answers provided by specialists.
The organisation has also developed programmes that support individuals in preparing for medical consultations. In one initiative, volunteers accompany people to appointments to help them ask questions and better understand their treatment.
Ms Hemrajani stresses that AI-generated information should always be treated as a starting point, not a basis for decision-making.
The increasing presence of AI in the health information space presents both opportunities and risks. For organisations working in cancer control, this evolving landscape reinforces the need for clear guidance, stronger health literacy, and systems that connect people to reliable, personalised care.

“We tell people: ‘Don’t just take the AI at face value. Talk to your clinician, talk to your oncologist’.”