Sam Altman on OpenAI’s upcoming challenges and Responsible AI

I really enjoyed listening to the latest episode of “Unconfuse Me”. Bill Gates invited Sam Altman to discuss AI “because it’s such an exciting thing, and people are also concerned.” 

Among the insights from the talk, Sam outlined the key milestones he expects over the next two years:

  1. Multimodality: “Multimodality will definitely be important. Which means speech in, speech out. Images. Eventually video. Clearly, people really want that. We’ve launched images and audio, and it had a much stronger response than we expected. We’ll be able to push that much further.”
  2. Reasoning ability: “Maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways.”
  3. Reliability: “If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important.” (See the best-of-n sketch after this list.)
  4. Customizability and personalization: “People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources…” 
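
The reliability point is, at heart, a best-of-n sampling problem: draw several answers to the same question and keep the best one. Here is a minimal sketch of that idea, assuming the openai Python client; it uses simple majority voting (self-consistency) as a crude stand-in for a real scorer, and the model name and parameters are illustrative, not anything Altman specified.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def best_of_n(question: str, n: int = 10) -> str:
    """Ask the same question n times and return the most common answer."""
    response = client.chat.completions.create(
        model="gpt-4",    # illustrative model name
        messages=[{"role": "user", "content": question}],
        n=n,              # n sampled completions in one request
        temperature=1.0,  # keep the samples diverse
    )
    answers = [(choice.message.content or "").strip()
               for choice in response.choices]
    # Majority vote (self-consistency): a crude proxy for "the best of n".
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

print(best_of_n("What is 17 * 24?"))
```

The hard part is exactly what Altman points at: the model “doesn’t always know which one” of its samples is best, so a vote like this only works for questions with a short canonical answer; anything richer needs a separate scorer or verifier.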

They also discussed what sort of regulation should be put in place.

“It would be very easy to put way too much regulation on this space. You can look at lots of examples of where that’s happened before. If this technology goes as far as we think it’s going to go, it will impact society, geopolitical balance of power, so many things. For these, still hypothetical, but future extraordinarily powerful systems – not like GPT-4, but something with 100,000 or a million times the compute power of that – we have been socialized in the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing. This needs a global agency of some sort, because of the potential for global impact.”

“Different countries are going to think about those differently and that’s fine. Some people think if there are models that are so powerful, we’re scared of them – the reason nuclear regulation works globally is basically everyone, at least on the civilian side, wants to share safety practices, and it has been fantastic. When you get over into the weapons side of nuclear, you don’t have that same thing. If the key is to stop the entire world from doing something dangerous, you’d almost want global government, which today for many issues, like climate, terrorism, we see that it’s hard for us to cooperate. People even invoke U.S.-China competition to say why any notion of slowing down would be inappropriate. (…) If it instead says, “Do what you want, but any compute cluster above a certain extremely high-power threshold” – and given the cost here, we’re talking maybe five in the world, something like that – any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, pass some tests during training, and before deployment. (…) That’s not going to save us from everything. There are still going to be things that are going to go wrong with much smaller-scale systems, in some cases, probably pretty badly wrong. But I think that can help us with the biggest tier of risks.”

Published by Sylvia

Futurist - Futures Thinking & Strategic Foresight
