Hundreds of Big Technology readers have joined our premium tier for exclusive content and to support independent tech journalism like today’s story. Try it for 20% off in year 1, or just $8 per month.

AI Employees Should Have a “Right To Warn” About Looming Trouble

Rules allowing employees at AI research houses to warn about impending problems make sense given the technology’s increasing power.
By now, we’re starting to understand why so many OpenAI safety employees left in recent months. It’s not due to some secret, unsafe breakthrough (so we can put “what did Ilya see?” to rest). Rather, it’s process-oriented, stemming from an unease that the company, as it operates today, might overlook future dangers.

After a long period of silence, the quotes are starting to pile up. “Safety culture and processes have taken a back seat to shiny products,” said ex-OpenAI Superalignment co-lead Jan Leike last month. “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, an ex-governance team employee, soon afterward. Safety questions are “taking a backseat to releasing the next new shiny product,” added ex-OpenAI researcher William Saunders.

Whether or not these assertions are correct (OpenAI disputes them), they make clear that AI employees need much better processes to report concerns about the technology to the public. Ordinary whistleblower protections tend to cover the illegal, not the potentially dangerous, and so they fall short of what these employees require to speak freely without fear. And though today’s cutting-edge AI models aren’t a threat to society, there’s currently no good way for those working within the companies to flag potentially dangerous future developments to third parties. That’s why we’ve almost exclusively heard from those who’ve exited. Right now, the alert system is left to the corporations themselves.

So Saunders, Kokotajlo, and more than a dozen current and ex-OpenAI employees are calling for a “Right to Warn,” under which they’d be free to express concerns about potentially dangerous AI breakthroughs to external monitors. They went public about this in an open letter earlier this month, calling for an end to non-disparagement agreements and for the start of an anonymous process to flag concerns to third parties and regulators. And after speaking with Saunders and Harvard Law Professor Lawrence Lessig, who is representing the group pro bono, their demands seem sensible to me.

“Your P(doom) does not have to be extremely high to believe that it makes sense to have a system of warning,” Lessig told me. “You don’t put a fire alarm inside of a school because you really believe the school is going to burn down. It’s just that if the school’s on fire, there ought to be a way to pull an alarm.”

The AI doom narrative has been way overblown lately, but that doesn’t mean the technology is without risk. And while a right to warn might send a signal that the tech is more dangerous than it is, and even do some marketing for OpenAI’s capabilities, it’s worth establishing some new rules to ensure that employees can talk when they see something, even if it’s not species-threatening. The alternative, trusting companies to self-report troubling developments or meaningfully slow product cadence, has never worked. Even for entities with novel corporate structures built around safety, an employee “Right to Warn” is essential.

My full conversation with Saunders and Lessig will go live next Wednesday on Big Technology Podcast. To get it in your feed, you can subscribe on Apple Podcasts, Spotify, or your app of choice.

Considering the global talent pool? Start here. (sponsor)
Inside Deel’s free International Hiring Guide, you’ll learn:
How to find and attract the right talent
Four unique global hiring strategies
How to pay your global team
And more!

Advertise on Big Technology?
Reach 170,000+ plugged-in tech readers with your company’s latest campaign, product, or thought leadership. To learn more, write alex@bigtechnology.com or reply to this email.

What Else I’m Reading, Etc.
Record labels sue AI music generators for copyright infringement [AP]
AI chip company that only runs transformers raises $130 million [TechCrunch]
The Cybertruck costs a lot of money to insure [Sherwood]
What exactly is going on within Jeff Bezos’s Washington Post [The Atlantic]
How Starbucks devalued its brand [Harvard Business Review]
It’s time for Joe Biden to step aside [The Atlantic]
Two big asteroids are passing by Earth this weekend [New York Times]

Quote Of The Week
“I continue to believe it’s incorrect to see Apple as ‘behind,’ overall, on generative AI. But clearly they are feeling tremendous competitive pressure on this front, which is good for them, and great for us.”
John Gruber, assessing Apple’s AI push with some distance from WWDC

Number Of The Week
$2 trillion
Amazon reached this valuation for the first time ever this week.

This Week on Big Technology Podcast: Decoding The NVIDIA Trade — With Michael Batnick
Michael Batnick is managing partner at Ritholtz Wealth Management and co-host of The Compound and Friends podcast. Batnick joins Big Technology Podcast for a conversation that asks all the questions about NVIDIA’s historic run. We cover the valuation, volatility, competition, its chances of keeping the run going, and AI fatigue. We also ask whether NVIDIA could give up its gains just as fast as it built them, whether it will become an Apple-like fixture in portfolios, and how algorithmic trading might play a role in its massive growth. You can listen on Apple, Spotify, or wherever you get your podcasts.

Thanks again for reading. Please share Big Technology if you like it! And hit that Like button to warn me that you like this newsletter.

My book, Always Day One, digs into the tech giants’ inner workings, focusing on automation and culture. I’d be thrilled if you’d give it a read. You can find it here.

Questions? News tips? Email me by responding to this email, or by writing alex@bigtechnology.com. Or find me on Signal at 516-695-8680.

Thank you for reading Big Technology! Paid subscribers get our weekly column, breaking news insights from a panel of experts, monthly stories from Amazon vet Kristi Coulter, and plenty more. Please consider signing up here.
© 2024 Alex Kantrowitz