In Pursuing Human-Level Intelligence, The AI Industry Risks Building What It Can’t Control

Instead of asking whether AI can achieve something, perhaps we should ask whether it should.

In front of a packed house at Amsterdam’s World Summit AI on Wednesday, I asked senior researchers at Meta, Google, IBM, and the University of Sussex to speak up if they did not want AI to mirror human intelligence. After a few silent moments, no hands went up. The response reflected the AI industry’s ambition to build human-level cognition, even at the risk of losing control of it.

AI is not sentient now — and won’t be for some time, if ever — but a determined AI industry is already releasing programs that can chat, see, and draw like humans as it tries to get there. And as it marches on, it risks having its progress careen into the dangerous unknown.

“I don't think you can close Pandora's box,” said Grady Booch, chief scientist at IBM, of eventual human-level AI. “Much like nuclear weapons, the cat is out of the bag.”

Comparing AI’s progress to nuclear weapons is apt but incomplete. AI researchers may emulate nuclear scientists’ desire to achieve technical progress despite the consequences — even if the danger is on a different level. Yet far more people will have access to AI technology than the handful of governments that possess nuclear weapons, so there’s little chance of similar restraint. The industry is already showing an inability to keep up with its own frenzy of breakthroughs.

The difficulty of containing AI was evident earlier this year after OpenAI introduced Dall-E, its AI art program. From the outset, OpenAI ran Dall-E with thoughtful rules to mitigate its downsides and a slow rollout to assess its impact. But as Dall-E picked up traction, even OpenAI admitted there was little it could do about copycats. "I can only speak to OpenAI,” said OpenAI researcher Lama Ahmad when asked about potential emulators. Dall-E copycats arrived soon after, and with fewer restrictions.
Competitors including Stable Diffusion and Midjourney democratized a powerful technology without the barriers, and everyone started making AI pictures. Dall-E, which had onboarded only 1,000 new users per week until late last month, then opened up to everyone. Similar patterns are bound to emerge as more AI technology breaks through, regardless of the guardrails original developers employ.

It’s admittedly a strange time to discuss whether AI can mirror human intelligence — and what weird things will happen along the way — because much of what AI does today is elementary. The shortcomings and challenges of current systems are easy to point out, and many in the field prefer not to engage with longer-term questions (like whether AI can become sentient), believing their energy is better spent on immediate problems. Short-termists and long-termists form two separate factions in the AI world.

As we’ve seen this year, however, AI advances in a hurry. Progress in large language models made chatbots smarter, and we’re now discussing their sentience (or, more accurately, lack thereof). AI art was not in the public imagination last year, and it’s everywhere now. AI is also now creating videos from strings of text. Even if you’re a short-termist, the long term can arrive ahead of schedule.

I was surprised by how many AI scientists said aloud they couldn’t — and didn’t want to — define consciousness. There is an option, of course, to not be like the nuclear weapons scientists. To think differently than J. Robert Oppenheimer, who led work on the atomic bomb. “When you see something that is technically sweet,” he said, “you go ahead and do it, and you argue about what to do about it only after you have had your technical success.” Perhaps more thought this time would lead to a better outcome.

Build the best remote team with UpStack's world-class software developers (Sponsored)

UpStack helps you find the best developer for your project.
Assess your needs in a quick, 15-minute discovery call with our Client Success Team. That’s all it takes to start our search for your perfect match within our pre-vetted candidate pool.

What Else I’m Reading

Mark Zuckerberg is still stoked about the metaverse.

Apple is preparing to push into TV-style advertising on its original programming.

Turns out Covid didn’t drive shopping online forever.

Manhattan Venture Partners wants out of funding Musk’s Twitter deal.

VCs pay thousands for Twitter ghostwriters.

Truth Social is returning to the Play Store.

Will parkour solve the energy crisis?

What happens when people donate their bodies to science.

A profile of Pennsylvania senate candidate John Fetterman.

Number Of The Week
Approximate worth of the paintings artist Damien Hirst will burn, fulfilling his promise to NFT owners that he’d destroy each original work once its digital counterpart was purchased.

Quote Of The Week
Mark Zuckerberg on the challenge of bringing legs to the metaverse.

Advertise with Big Technology?

Advertising with Big Technology gets your product, service, or cause in front of the tech world’s top decision-makers. To reach 80,000+ plugged-in tech insiders, please reply to this email. We have availability starting in November.

This Week On Big Technology Podcast: Will The Fed Blink And Save Tech — With Ranjan Roy

Ranjan Roy is the co-author of Margins, a Substack newsletter about the financial markets. He joins Big Technology Podcast for a conversation about the Federal Reserve's steep interest rate hikes, how they've hurt tech valuations, and whether the Fed might reverse course and bring the party back. Stay tuned for the second half, where we discuss the short-form video wars and the likely outcome of Elon Musk's pursuit of Twitter. You can listen on Apple, Spotify, or wherever you get your podcasts.

Thanks again for reading. Please share Big Technology if you like it! And hit that heart if you like your robots friendly.

Questions? Email me by responding to this email, or by writing alex.kantrowitz@gmail.com

News tips? Find me on Signal at 516-695-8680

If you liked this post from Big Technology, why not share it?