In Pursuing Human-Level Intelligence, The AI Industry Risks Building What It Can’t Control
Hi everyone, and regards from Brooklyn, where I’m typing this week’s newsletter after a long stretch of travel spanning the Netherlands, Belgium, and California. I also had a brief stopover in Salt Lake City, where my toothpaste remains in the custody of the TSA. Anyway, it’s good to be home.
In my travels, I’ve spent hours speaking with tech practitioners about the massive advances we’re seeing in artificial intelligence. In recent months, we’ve seen AI draw with Dall-E and speak with LaMDA. These advances build on the work that’s already been underway in other areas including analytics, optimization, and — as in Amazon’s case — negotiating for, ordering, and transporting products.
In this week’s Big Story, we’re going to look at whether the AI industry is messing with forces it doesn’t understand. I think the answer might be less cut-and-dried than many AI practitioners let on. And, to go into more depth about it, I brought Prof. Anil Seth on the podcast. Prof. Seth is a professor of cognitive and computational neuroscience at the University of Sussex and a fascinating thinker about these issues. He's also the author of Being You.
You can listen to our conversation on Apple, Spotify, or your podcast app of choice.
Now, before we get to the Big Story, I want to share one more recent episode, relevant amid this week’s big earnings reports:
Will The Fed Blink And Save Tech — With Ranjan Roy
Ranjan Roy is the co-author of Margins, a Substack newsletter about the financial markets. He joins Big Technology Podcast for a conversation about the Federal Reserve's steep interest rate hikes, how they've harmed tech valuations, and whether the Fed might reverse course and bring the party back. Stay tuned for the second half, where we discuss the short-form video wars and the likely outcome of Elon Musk's pursuit of Twitter.
You can listen on Apple, Spotify, or your podcast app of choice.
The Big Story
In Pursuing Human-Level Intelligence, The AI Industry Risks Building What It Can’t Control
In front of a packed house at Amsterdam’s World Summit AI on Wednesday, I asked senior researchers at Meta, Google, IBM, and the University of Sussex to speak up if they did not want AI to mirror human intelligence. After a few silent moments, no hands went up.
The response reflected the AI industry’s ambition to build human-level cognition, even if it might lose control of it. AI is not sentient now — and won’t be for some time, if ever — but a determined AI industry is already releasing programs that can chat, see, and draw like humans as it tries to get there. And as it marches on, it risks having its progress careen into the dangerous unknown.
“I don't think you can close Pandora's box,” said Grady Booch, chief scientist at IBM, of eventual human-level AI. “Much like nuclear weapons, the cat is out of the bag.”
Comparing AI’s progress to nuclear weapons is apt but incomplete. AI researchers may emulate nuclear scientists’ desire to achieve technical progress despite the consequences, even if the dangers differ in scale. Yet far more people will have access to AI technology than the handful of governments that possess nuclear weapons, so there’s little chance of similar restraint. The industry is already showing an inability to keep up with its own frenzy of breakthroughs.
The difficulty of containing AI was evident earlier this year after OpenAI introduced Dall-E, its AI art program. From the outset, OpenAI ran Dall-E with thoughtful rules to mitigate its downsides and a slow rollout to assess its impact. But as Dall-E picked up traction, even OpenAI admitted there was little it could do about copycats. "I can only speak to OpenAI,” said OpenAI researcher Lama Ahmad when asked about potential emulators.
Dall-E copycats arrived soon after and with fewer restrictions. Competitors including Stable Diffusion and Midjourney democratized a powerful technology without those barriers, and everyone started making AI pictures. Dall-E, which had onboarded only 1,000 new users per week until late last month, then opened up to everyone.
Similar patterns are bound to emerge as more AI technology breaks through, regardless of the guardrails original developers employ.
It’s admittedly a strange time to discuss whether AI can mirror human intelligence — and what weird things will happen along the way — because much of what AI does today is elementary. The shortcomings and challenges of current systems are easy to point out, and many in the field prefer not to engage with longer-term questions (like whether AI can become sentient) since they believe their energy is better spent on immediate problems. Short-termists and long-termists are two separate factions in the AI world.
As we’ve seen this year, however, AI advances in a hurry. Progress in large language models made chatbots smarter, and we’re now discussing their sentience (or, more accurately, the lack thereof). AI art was not in the public imagination last year, and it’s everywhere now. AI is also now creating videos from strings of text. Even if you’re a short-termist, the long term can arrive ahead of schedule. I was surprised by how many AI scientists said aloud they couldn’t — and didn’t want to — define consciousness.
There is an option, of course, to not be like the nuclear weapons scientists. To think differently than J. Robert Oppenheimer, who led work on the atomic bomb. “When you see something that is technically sweet,” he said, “you go ahead and do it and you argue about what to do about it only after you have had your technical success.”
Perhaps more thought this time would lead to a better outcome.