10.08.2025

"AI-Generated Hate: Bigfoot Video Sparks Outrage"

At first, it appears to be a quirky video clip generated by artificial intelligence to make people laugh.

The emergence of AI-generated content has sparked significant concern, particularly regarding its potential to promote violence and spread hate against marginalized communities. One example is a video featuring a Bigfoot character, wearing a cowboy hat and an American flag vest, that initially appears amusing as it announces plans to attend an LGBTQ+ parade. The tone shifts dramatically, however, when the Bigfoot character drives through a crowd of festival-goers, alarming viewers. Posted on the AmericanBigfoot TikTok page in June, the video has amassed over 360,000 views and generated extensive commentary online, most of it positive.

Despite its seemingly comedic opening, the video reflects a wider issue: the proliferation of AI-generated content that promotes harmful rhetoric against LGBTQ+ individuals, Jews, Muslims, and other minority groups. This trend has raised alarms among experts and advocates, who argue that current Canadian regulations are insufficient to manage the rapid dissemination of hate-filled AI-generated material. Egale Canada, an advocacy organization supporting LGBTQ+ rights, says the community is increasingly alarmed by the rise of transphobic and homophobic misinformation on social media platforms.

According to Helen Kennedy, executive director of Egale Canada, these AI technologies are being weaponized to dehumanize and invalidate transgender and gender-diverse people. The trend must be addressed urgently, she argues, as existing digital safety laws fall short of tackling the scope and pace of this new threat. Rapid advances in technology have equipped malicious actors with tools to spread misinformation, with transgender people being particularly targeted. More broadly, Evan Balgord, executive director of the Canadian Anti-Hate Network, notes that hate content targeting other groups, including Muslims and South Asians, is rampant on social media.

Balgord warns that normalizing violence against marginalized groups through such content can lead to real-world violence, making the need for regulatory measures even more urgent. Experts such as Andrea Slane, a legal studies professor at Ontario Tech University, emphasize that Canada’s digital safety laws are outdated and in dire need of reform to effectively address online harms. Proposed bills aimed at tackling harmful online content failed to progress when Parliament was prorogued in January, prompting calls for immediate legislative action.

Justice Minister Sean Fraser has indicated plans to reassess the Online Harms Act, which seeks to hold social media platforms accountable for limiting users' exposure to harmful content. A spokesperson for the newly established Ministry of Artificial Intelligence and Digital Innovation acknowledged the growing problem of AI-generated hate and emphasized that existing laws must adapt to confront these emerging challenges. The spokesperson, Sofia Ouslis, said that understanding both the misuse and the potential of AI tools is critical to strengthening regulatory frameworks.

Peter Lewis, a Canada Research Chair in trustworthy AI, notes that recent advancements have made it increasingly easy to produce high-quality AI-generated videos. He remarks that while tools like ChatGPT attempt to filter out harmful content, there is still a pressing need for effective safeguards in video generation. Lewis stresses that collaboration among governments, advocates, social media platforms, and AI developers is crucial to forming an effective response to online hate, and he advocates for robust mechanisms to rapidly flag and remove harmful content from the internet.

As the conversation about AI's impact on society continues to evolve, experts emphasize that regulatory efforts should draw on lessons from jurisdictions such as the European Union, which has made strides in digital safety. While generative AI poses unique challenges, the widespread availability and accessibility of these technologies necessitate a coordinated and responsive approach to tackle the real-world consequences of AI-generated hate and violence.