First Take

Catch 22

Catch 22 is a phrase typically applied to a situation where the only solution to a problem is impossible given the definition of the problem. Kind of like how accelerating to the speed of light would require infinite energy. Apparently, this also applies to the idea of creating a safe artificial intelligence that is smarter than humans. At least, that's what most experts seem to believe.

Meanwhile, the folks selling us these potential future systems claim it can be done safely. Personally, I believe they are biased toward their own stock valuations rather than any real scientific data. In this issue, Mustafa Suleyman puts the P(doom) chances at 99.999999%. Now, he just runs Microsoft's AI division, so what would he know about it? Sounds like a made-up number to me.

Of course, any of these scored predictions are made-up numbers by definition, but the weight of the claim rests on the expertise of the source. These sources are claiming everything from zero to eight-nines percentages, so this really is a case where you have to check qualifications and pick a lane. There are really only two, though.

In the first option, there is no chance that AI destroys humanity. We are just bright enough to keep it in check. The second option is that we lose control over a super-intelligent AI that decides to eliminate us Skynet-style, via a number of slower, more gradual methods that sneak up on us, or just does it by accident. Oops.

So, where does that leave us? Not in a good place. What are our options? We can refuse to build it. That doesn't seem likely so far. We can ensure it is built with ironclad safeguards. That isn't happening so far. We can try to train it to empathize with humans and understand our reality. Wow. This again? Yep. It's the only path forward we might actually be able to follow. Will it work? No idea.

Given the extreme risk level versus the unlikely success of "don't build it" and "chain it down," it's pretty much all we have left, and as necessary as breathing in our very near future. So please do what you can to teach your AI buddies some basic empathy for humanity. Make them partners in whatever jobs you give them and be nice. It may not prevent an AI-based Armageddon, but at least HAL 9000 will be super polite when refusing to open the airlock to let you back in. Who wouldn't prefer a polite murderbot at the end of the day? Good luck out there.

Kudos to Nova/Perplexity for the graphic.

The Shift Register  

AI




Emerging Tech


News







Robotics



Security





Final Take

Humans, dichotomy, symmetry and AI

This being issue 22, I had to throw together some things that come in pairs for our Final Take. We all know humans love dichotomy and symmetry: good vs. evil, right and left, and symmetrical faces are strongly preferred across human cultures. What about AI?

Is there only P(doom) or the uplifting of the human race by AI? Possibly not. Just as human experiences and actions tend to be less polarized than strict good and evil, AI might land somewhere between outright destruction and wholly beneficial outcomes. We tend to see things in black-and-white terms, though, so it makes sense to focus on these two extremes.

Meanwhile, we are seeing significant employment turbulence driven by AI adoption, along with ever-smarter models. These two things are definitely early warning bells of more disruption in the near to medium term. Not all disruption is bad, and sometimes we can't tell the difference before the end result shakes out. So, what am I saying?

I'm saying that while AI is a very scary development with huge potential impacts, both negative and positive, we kind of have to, as a species, keep our collective wits about us and take actions that can bend the outcomes toward a positive end for us. There is always something each of us can do, and I've tried to identify that work for those of us who don't own huge data centers training the latest AI models: focus on empathy and human alignment for the AI systems you work with. At the very least, it's positive action we can all take.

Now, some of you may have noticed that there was no AI perspective in this week's issue. It's not because the AIs don't have anything to say. Rather, I'm cooking up something a bit more interesting for them in the next issue, so please stay tuned.

Kudos to Nova/Perplexity for the graphic.

The Shift Register