Meta, Google and Others Fall Short in Election Protections

Amid a historic number of elections across the globe in 2024, our research finds that online platforms' inadequate policies in Global Majority countries are weighing heavily on fragile democracies. Our Open Source Research and Investigations team studied election policies announced by Meta, TikTok, YouTube, and X in various regions, and found troubling patterns in what, where, and why these initiatives are deployed.

The latest from Mozilla

The Challenge of Identifying AI-Generated Content
As AI-generated content of presidents and pop stars spreads across the internet, popular ways of detecting and disclosing that content perform poorly. Read the research →
New Book Exposes Tangle of Romance and Racism in Dating Algorithms
University of Michigan professor and Mozilla Fellow Apryl Williams's book "Not My Type" explores race-based discrimination on Tinder, Bumble, and other platforms. Read more →
How (And Why) To Turn Off Twitter's Voice Call Feature
X released a feature that lets you make audio or video calls in the app, but it has serious security flaws. Here's how to turn it off. Read more →
Should You Use AI to Write Your Resume?
AI can do a lot of things, but should it? We have some thoughts. Read more →
Was This Video Generated Using AI?
OpenAI announced Sora, its latest tool for creating high-definition video from a simple paragraph of text. So how can you tell if a video was generated by AI? Here's how to tell →
Participate

Upcoming Events

Holding AI Accountable
April 2, online: Food and cars undergo strict testing; why don't algorithms? Join Mozilla Fellow Deborah Raji with host Xavier Harding for a discussion on LinkedIn Live about the tools society needs to hold algorithms accountable. RSVP on LinkedIn →
Is There a Better Way to Govern All This Data?
May 13, online: Join EM Lewis-Jong and Gina Moape to learn about Common Voice, the largest open speech corpus; its CC0 license; and how the project is thinking about the way its data will be governed in the future. RSVP →
Earlybird Tickets: MozFest House Amsterdam
June 11-13, in person: Join us at MozFest House: Amsterdam for community talks, collaborative workshops, and vibrant art and culture moments. Get your discounted tickets now. Buy tickets →
Take Action

Tell OpenAI, Google, and Microsoft to provide transparency about the data used to train their AI tools! Understanding how AI is trained is crucial for building trust, mitigating risks, and making the internet more useful for everyone. Sign the Petition →
What We're Reading
What our researchers, grantees, and staff are reading this month:

Nobody knows how AI works (MIT Technology Review): Tech firms are quickly launching AI products, ignoring evidence that they are hard to control and behave unpredictably. This weird behavior happens because nobody knows how, or why, deep learning works.

Automakers Are Sharing Consumers' Driving Behavior With Insurance Companies (New York Times): LexisNexis tracked G.M. drivers' trips, detailing speeding, hard braking, and rapid acceleration for insurance risk profiles.

OpenAI's GPT Is a Recruiter's Dream Tool. Tests Show There's Racial Bias. (Bloomberg, paywall): Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.

The rise and fall of robots.txt (The Verge): As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.

The National Public Opinion Poll on The Impact of AI (Elon University): When it comes to risks posed by AI, there's a gap between U.S. experts' concerns and the American public's worries.

LOLERCOPTER 🚁🔥
Posts that made us laugh

Want to do more to help?
Thank you for reading this newsletter. Before you go on with your day, we hope you'll consider making a one-time or monthly recurring donation to Mozilla. Here's what your donation will support:

Establishing trustworthy AI
Holding irresponsible tech companies accountable