Two documents released in the last few days, OpenAI’s ‘Governance of Superintelligence’ and DeepMind’s ‘Model Evaluation for Extreme Risks’, reveal that the top AGI labs are thinking hard about how to live with, and govern, a superintelligence.
I want to cover what they see coming. I’ll show you persuasive evidence that the GPT-4 model has been altered and now gives different outputs than it did two weeks ago.
And I’ll look at the new Tree of Thoughts and CRITIC prompting systems that might constitute ‘novel prompt engineering’.
I’ll also touch on the differences among the AGI lab leaders and what comes next.
This video will cover everything from what I think is the most pressing threat, synthetic bioweapons, to the many other threats, including a situationally aware superintelligence, deepfakes, and audio manipulation.
I’ll delve into the thinking of Dario Amodei, the secretive head of Anthropic, and Emad Mostaque, as well as Sam Altman’s new interview, Sundar Pichai, and even Rishi Sunak’s meeting.
In the middle of the video I will touch on not just the Tree of Thoughts prompting method but also snowballing hallucinations (featured in a new paper), Code Interpreter’s performance on the MMLU, and why we shouldn’t underestimate GPT models.