Introduction
Since our incorporation, we’ve embraced digital tools like Canva and AI-generated content platforms to create compelling, high-quality advocacy materials. But video has always been a challenge—until recently. Last month, we trialed RunwayML, and the results were incredible. The ability to produce professional, engaging video content without the traditional barriers of cost and production time was a game-changer. It allowed us to create impactful advocacy pieces that drove engagement for our petition, bringing critical awareness to pancreatic cancer care gaps in Canada.
But here’s the reality: AI video and image generation platforms are still built with commercial users in mind—not advocacy groups. While these tools have the potential to revolutionize how non-profits like ours operate, they also come with serious challenges that are actively holding organizations back.
The Moderation Barrier: A Hidden Roadblock for Advocacy Organizations
For pancreatic cancer non-profits, AI content moderation isn’t just an inconvenience—it’s a systemic issue. We’ve spent countless hours engineering prompts that align with our mission, only to have the resulting content rejected, altered, or flagged by overly restrictive AI filters. The reason? Many AI platforms are designed to detect and block content based on generalized rules, without recognizing legitimate medical or advocacy-related material.
This means that words like “tumor,” “cancer,” “diagnosis,” or even “patient” can trigger content flags or alter AI-generated outputs, forcing non-profits to constantly find workarounds, soften messaging, or reframe prompts just to ensure their content gets through. For organizations without dedicated teams of AI engineers or copywriters, this is an unnecessary hurdle that wastes time and limits impact.
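To make the problem concrete, here is a minimal, hypothetical sketch of how flat keyword-based moderation behaves. This is not any real platform's filter; the blocklist, function names, and example sentences are illustrative assumptions. The point is that a filter matching words without context flags legitimate advocacy text:

```python
# Hypothetical illustration of naive keyword-based moderation.
# This is NOT any platform's actual filter; it only demonstrates
# why context-free word matching misfires on medical advocacy.

BLOCKLIST = {"tumor", "cancer", "diagnosis", "patient"}

def naive_flag(text: str) -> bool:
    """Flag text if it contains any blocklisted word, ignoring all context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

# Legitimate awareness messaging gets flagged...
advocacy = "Early diagnosis gives pancreatic cancer patients more options."
print(naive_flag(advocacy))   # True: flagged despite being factual advocacy

# ...while unrelated text passes.
print(naive_flag("Join our community fundraiser walk this spring."))  # False
```

A context-aware system would instead classify the intent of the whole passage (awareness, education, policy advocacy) rather than matching isolated terms, which is the distinction our recommendations below ask AI companies to build in.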
AI Companies Can Learn a Lot from Pancreatic Cancer Organizations
Pancreatic cancer non-profits—like other medical and advocacy organizations—operate in a space where clear, accurate communication is critical. We don’t just create content to sell products or entertain audiences; we create it to save lives, drive policy change, and educate the public on complex healthcare issues.
Yet, AI platforms continue to treat non-profits the same way they treat commercial marketers, social media influencers, or entertainment companies—applying the same broad moderation rules that often strip down or outright block critical advocacy content. AI companies need to recognize that non-profits require different considerations when it comes to content generation and moderation.
The Business Case for AI Companies to Work with Non-Profits
The reality is that AI companies are missing a major opportunity. By refining their moderation models and making AI tools more adaptable for advocacy organizations, they wouldn’t just be helping non-profits—they would be improving their own platforms. Imagine an AI system that can differentiate between harmful misinformation and legitimate medical advocacy—a system that understands context rather than blindly flagging keywords.
Better moderation policies would open up AI tools to more users, creating a stronger, more ethical AI ecosystem that serves businesses, creators, and non-profits alike. AI companies that lead the way in building non-profit-friendly AI solutions will stand out, gaining credibility and positioning themselves as industry leaders in ethical AI innovation.
What Needs to Change: Our Recommendations for AI Companies
1. Context-Aware Moderation
AI models must be trained to recognize medical and advocacy content as distinct from harmful or misleading information. Not all content mentioning cancer, tumors, or treatments is harmful; much of it is essential for public awareness.
2. Non-Profit-Specific Training Data
AI companies should incorporate verified non-profit content into training datasets to ensure moderation systems can accurately differentiate between misinformation and legitimate advocacy.
3. Collaboration with Advocacy Organizations
AI platforms should engage with health-focused non-profits to refine their models and ensure the needs of medical and advocacy groups are understood and addressed.
4. Dedicated Support for Non-Profits
AI companies should offer better support channels for non-profits, including fast-track content reviews and human oversight for flagged material that might otherwise be wrongly removed.
5. Non-Profit Pricing and Grants
AI platforms should create affordable pricing models for non-profits and offer grants to organizations working on social impact initiatives.
Conclusion: AI Can Empower Non-Profits—But Only If It’s Built With Them in Mind
AI has the potential to revolutionize the way non-profits operate—but right now, many of these tools are not built with advocacy organizations in mind. Instead of empowering non-profits, AI moderation systems are slowing us down, forcing us to waste time working around unnecessary restrictions.
The good news? This can change. AI companies that invest in better moderation, smarter training data, and closer collaboration with advocacy organizations will not only improve their technology, but also help drive real-world change.
As a pancreatic cancer advocacy organization, we believe in leveraging AI for good—but that requires AI tools that work with us, not against us. We invite AI companies to engage with non-profits like ours to develop solutions that don’t just power businesses, but also drive progress where it matters most.
🔗 Read our full research report here: https://heathercutler.ca/the-case-for-increased-pancreatic-cancer-funding-lessons-from-international-success-stories/
Let’s build AI that works for advocacy, for healthcare, and for change.