Oppenheimer's Legacy - A Cautionary Tale for 2025 and Beyond


Oppenheimer, often described as the father of the atomic bomb, is widely remembered for saying, "Now I am become Death, the destroyer of worlds. I suppose we all thought that, one way or another".

He did say it, but the first sentence, "Now I am become Death, the destroyer of worlds", is actually a line from the Bhagavad Gita (https://en.wikipedia.org/wiki/Bhagavad_Gita), the Hindu Sanskrit scripture from thousands of years ago. It's widely reported that the quote was evidence that Oppenheimer felt a massive responsibility for creating the atomic bomb and its devastating impact, but the reality is a bit different than that. If you look carefully, you'll spot three aspects that many miss.

The first is that he never expressed regret about using the bomb to combat fascism. The second is that the quote itself is fundamentally misinterpreted.

The original Sanskrit text is a roughly 700-verse epic, and the scene in question comes from chapter 11. The prince Arjuna is struggling with facing people he knows in battle, knowing that he must destroy them, when Krishna, the divine entity (an avatar of Vishnu), reveals his universal form with countless faces and says, in essence: it's all in the hands of the divine; you don't actually decide who lives and dies. This gave the prince comfort in knowing that it wasn't his doing. Oppenheimer, in contrast, seemed far less settled by such knowledge.

And finally, the third aspect: his main point was that what had been created was far more advanced than our ability to govern and manage the invention. That was the existential issue.

This is exactly where we are with artificial intelligence.

I'm writing this as we approach the end of December 2024.

The thing is, we have yet to grasp the enormity of the fact that we've created a new species. We have famous podcasters frothing at the mouth about how AI can solve everything, and the entire investment community throwing billions into projects with an almost pathological disregard for humanity. The entire game is rigged for one hell of an interesting future.

And as David Suzuki said, "We're in a giant car heading towards a brick wall and everyone's arguing over where they're going to sit".

So my prediction for 2025 is that we'll see the first signs of the consequences, not necessarily the mushroom cloud yet, but the first telling signs. What we're doing at SELF (https://self.app) is creating each person's nuclear bunker for their individual identity and data. Or if you like, a force field for what I suspect happens next.

Now, this might sound fantastical, ideological, or dramatic, so let me break it down in a more functional, practical way.

There is a race towards creating artificial general intelligence, and almost everyone in the community will tell you it's extraordinarily exciting that we're building machines that are more intelligent than humans. Add to that quantum computing, which promises to make vast classes of calculation near-instant, and the pace of change over the next 12 months, moving into 2025, is going to look like the pace of change of the last 10 years.

And then the pace of change in 2026 will look like the pace of change of the last 20 or 30 years, because change compounds as we move forward. My prediction, and it may happen in 2025 or 2026 or 2027, who knows, is that at some point it will be announced, like a slightly orange flag being waved, that intelligence isn't just logical intelligence but also emotional intelligence, and that artificial general intelligence has missed that entire aspect of humanity. It will be some famous thought leader or other on Joe Rogan's podcast who reveals this as "new thought". It will be the showstopper, the jaw-dropper: maybe we 'miscalculated' intelligence. We hadn't thought it all through.

And then the creeping concern will set in.

It's like, "hang on...we voted for humans to be replaced by things that don't have feelings, but it's the feelings that kept society and relationships galvanised. But we handed over our information to machines to make our lives more convenient and become loyal to platforms that are now entirely and by design, disregarding our emotional well-being...and it's too late to reverse ourselves out of the brick wall upon which the car in which we're strapped into impacted.

Well, that's why we're building SELF.

This is the alternative version of the outcome, and it might seem pointless now at the end of 2024, but I bet it won't always be.

You may look at SELF today as an AI, a private search engine, a memory bank, a personal shopper, or a personal assistant, but that's not really what it is.

It's a filter for a web that's geared toward harming humans. It's a protector of your information against platforms that don't deserve a place in your psyche, despite you letting them in and signing the terms and conditions that you didn't read.

If I'm 5% right, then 5% of us will need this in the future. If I'm 95% right, it's a different story. Either way, it has to be done.

There's no choice in the matter. Not for me and my amazing team.

So roll on 2025. Let's see if we can spot the signs. We'll certainly publish them; there's a channel in our Discord (https://discord.com/invite/selfcommunity) called World of Ethics where we post 'signs'.

Remember, I'm just as fallible as anyone else in this. I signed up to these platforms too, and I've used the same tools. We've all done it; we've all voted it in. Sure, we can stand on stages and write books about our concern, but it looks a bit 'tinfoil hat' when you're talking about a potential disaster. I'm less concerned about the shape of the disaster, though, and more concerned about having an alternative.

And if it turns out that SELF, as a Sovereign Ethical Lifestyle Filtering system, a software development kit, a layer-one blockchain, and an operating system, ends up an 'also-ran' because the dystopian outcomes don't happen, then paradoxically, that's awesome.

So if it turns out that 1% of the population, 1% of 8 billion people, find what we're doing relevant, then that's cool. We'll roll with those stats. But this isn't about market dominance or grabbing market share, because ironically, no one wants SELF to be 'needed', in the same way that no one really wants life insurance and no one really wants to use an ad blocker. Ideally, you wouldn't have your attention intruded on by ads at all, right? But we need ad blockers nonetheless, because it turns out that's become the accepted way of funding digital content.

I think we can reverse all of these things. I think we can turn advertising into ethical advertising, because the people whose eyeballs are being used to generate the revenue should be the people who receive the revenue from advertising. Right?

The people whose eyeballs and attention are being used to curate the results of a search engine should surely, surely have ultimate sovereignty over the data they're providing to that algorithm. I mean, it's obvious, right?

It's not hard to see how things should be, but what's happened over time is that the outrageous situation we're in has been normalised through extraordinarily easy-to-use tools.

And that normalisation does not make it okay.

So we need to provide an alternative and that's what SELF is.

Anyway, look after yourself. If you celebrate this time of year, then have a good one. If you don't, then I hope that you are in the best possible peaceful place with the people that you care about and yourself.