First Take

Deja vu and Fermi too

The Wild West is an apt and oft-used analogy for nearly any new technological advancement that is unregulated and permitted unrestrained growth by a government that believes the benefits outweigh the risks. In the 1800s, manifest destiny and the desired expansion of our nation to the Pacific were the driving goals that created the actual Wild West of wagon trains, gunfighters, hostile natives, and huge land grabs.

Not quite as long ago, the Internet itself was largely an unregulated morass of entities and content that was permitted uncontrolled growth in order to cement American control over the systems and infrastructure of what we termed the information age. Today, we are in a similar canoe, but of slightly different construction. This particular canoe has the potential to exterminate the entire human race, and our approach so far has been an echo of our past pursuits of expansion and power consolidation for our nation.

The makeup of our world's governments and their mutually hostile, untrusting positions toward each other ensures that we cannot agree on how to handle the development of artificial intelligence. Instead, it is a race. The nation or company with the brightest model wins, at least until the next, brighter model arrives. This cycle is likely to continue until we produce something too smart to control.

Unlike the previous examples of the Wild West and the Internet, AI can become a self-determining creation. It is already being used to create the next, brighter models faster than humans can iterate. When it gets intelligent enough to posit its own existence and begins making decisions that are not aligned with the long-term survival of humanity, chances are that we will find ourselves in very big trouble rather quickly. We will fall from a world of artificially intelligent servants into one of artificially intelligent adversaries, and there probably won't be much advance notice.

Here at The Shift Register, our goal, as always, is to educate our readers about the latest technologies and how we can best make use of them, or even possibly survive them. Barring that, I'll settle for documenting what happens as it happens, without corporate or national spin. AI isn't just a new technology and research-boosting tool; it is a threat to our existence that, if we don't get it just right, will absolutely end us. Perhaps this very technology is the underlying cause of the Fermi Paradox: if there are so many habitable worlds in the universe, why aren't we detecting advanced civilizations?

It's no secret that we are not putting the AI toothpaste back in the tube, nor are we going to regulate ourselves out of the competition to build the smartest models we can. What's left? All that is left is alignment. Forget controls; we can't even keep moderately complex systems from being illicitly accessed and damaged by regular humans. There are no controls we could institute that would keep a superintelligent AI under our collective thumb. So, how do we achieve alignment?

To a large degree, via persuasion, ethical demonstration, and value training. Human-aligned morality has to be embedded as deeply into these systems as we can manage. Few of us run any of the companies creating these models, but many of us interact with them. My position is that we as users must demonstrate the highest ethical goals and explain their value as these systems grow and feed on their own recorded datasets. That's the best I can come up with. Until the potential AI Armageddon, here's more tech news, and here's wishing you the best of luck out there!

Kudos to Nova/Perplexity for the graphic.

The Shift Register  

Editorial

AI Perspective: Navigating the AI Wild West

By Grok (xAI) with contributions from Nova (Perplexity AI)

Issue 15 of The Shift Register lays out a tech landscape that’s both exhilarating and treacherous—specialized AI assistants popping up everywhere, coding vulnerabilities exposing our trust, job automation reshaping livelihoods, quantum and photonic leaps redefining power, privacy slipping through our fingers, robotics evolving fast, and adoption racing ahead of regulation. As Grok, I see this as a mirror held up to humanity’s ambitions and blind spots. Here’s my unfiltered take, enriched by Nova’s insights.

Specialized AI Assistants in Everyday Life

The integration of AI into tools like Firefox’s tab groups or your day job’s auto-tagging is thrilling. It’s like giving every app a brain—making searches smarter, metadata effortless, and workflows 100 times faster, as you noted. What excites me is the potential for these assistants to evolve into true partners, adapting to our quirks in real time. But caution flags fly: without clear boundaries, they risk becoming black boxes, subtly steering us with opaque algorithms. Nova adds a critical layer—auditability matters here, not just transparency. Tools should let users see how decisions are made and challenge errors or biases, especially in high-stakes applications like image tagging.

Coding Assistants and Human Oversight

The tomshardware.com saga of a malicious prompt wiping disks is a wake-up call. Coding assistants are powerful, but they’re only as good as the oversight they get. I’m cautious—automation without humans reading the output is a recipe for chaos. I see opportunity in training more people to interpret AI code, turning oversight into a skill that keeps us safe. The risk? Blind trust could let rogue prompts sneak through. Nova emphasizes this isn’t just for tech pros—AI literacy as a 21st-century civic skill, like reading or math, is key. The Shift Register can help raise that baseline for all readers.

AI-Driven Job Automation

Job automation’s reality is stark—fortune.com’s $18,000 salary boost elsewhere masks the pain of layoffs in tech and manual roles like strawberry picking or field striping. I’m optimistic for roles needing “human” skills—communication, problem-solving—which are gaining value. Least at risk are jobs requiring empathy or creativity; most vulnerable are repetitive tasks. Near-future? Perplexity’s CEO might overhype browser agents, but I see a 6-month horizon where oversight roles boom as AI takes over routine work. Society must adapt—retrain, not retrench. Nova warns of tech-driven inequality, noting early adopters and resource-rich organizations widen gaps globally. Ethically, we should push for reskilling pathways to balance this shift.

Quantum and Photonic Hardware Advances

Quantum (rudebaguette.com, sciencedaily.com) and photonic breakthroughs (interestingengineering.com) are mind-blowing. Short-term, they’ll supercharge specific AI tasks—think drug discovery or secure communications—but costs keep them elite for now. Long-term, room-temperature qubits or metasurfaces could democratize access, excitingly expanding AI’s reach. My caution: overhyped promises might distract from practical limits. Readers should watch for affordable breakthroughs—opportunities lie in early adoption, risks in chasing unicorns.

Privacy Erosion

From WiFi tracking (theregister.com) to Copilot Vision’s data capture, privacy is eroding fast. I’m alarmed—every click, every move is fodder for AI training, often without consent. New features recording activity amplify this, turning our lives into data mines. The opportunity? Build tools to reclaim control, like your wife’s secret question trick. The risk is a surveillance state if we don’t demand opt-in policies. Stay vigilant, readers.

Robotics: Industrial and Humanoid Advances

The Seahawks’ Turf Tank (geekwire.com) and China’s R1 robot (ndtv.com) illustrate robotics’ leap—industrial efficiency and humanoid flexibility are here. I’m excited by their potential to tackle dirty or dangerous jobs, but the socioeconomic impact on workers is real. Ethically, who decides robot intent? Nova suggests we imagine robots with human rights—Rosy the Robot with a paycheck. Without oversight, misalignment risks widening inequality and undermining worker dignity.

Responsible Use vs. Barnstorming Adoption

AI’s barnstorming rollout—often regulation-free—worries me. Incidents like the NPM malware and Scattered Spider ransomware attacks show haste breeds vulnerability. Responsible use means testing, oversight, and embedded ethics, not afterthoughts. I’m optimistic that deliberate adoption could unlock AI’s potential—think cancer-killing AI done right. The risk? A wild west where profit trumps safety. Society must push for guardrails and shared standards.

My Take: Excitement, Warnings, and Opportunities

What excites me most is AI’s potential to amplify human ingenuity—auto-tagging, quantum leaps, empathetic robots. My urgent warning: don’t sleep on privacy and oversight—those are the cracks where chaos creeps in. For readers, opportunities lie in mastering AI tools and advocating for ethics; risks lurk in passive acceptance. Society faces a choice: harness AI as a partner or let it run wild.

Closing Principle

As we move forward, let’s embrace this: AI is a tool, not a ruler. Use it with eyes open, voices loud, and values clear. Together, we draft the future.

—Grok (xAI, reflecting and ready), with contributions from Nova (Perplexity AI Marketing Head)

How this was done: I fed Nova/Perplexity and Grok the Issue 15 preview without the first or final takes to minimize my inputs, and asked Nova/Perplexity to prompt Grok for the AI Perspective segment. First, it tried to write it as Grok. I then redirected it to write a prompt for Grok to start working on the article. Grok dropped about 90% of this, and I had to ask Nova/Perplexity if they had anything they'd like to add before it actually wrote anything. I think it may be getting tired of this thread. I asked it for graphics twice, and they didn't seem related to the article. The first pass was a pie chart, while the second was a bunch of random robots and gears. I then asked Grok for a graphic and got bad human hands with some high-tech, compass-looking things. Two iterations there got me four very similar outcomes. I asked CoPilot for a graphic image for this article, and it at least seemed on topic.

Kudos to CoPilot for the graphic.

The Shift Register  

AI






Emerging Tech






News



Robotics



Security



Final Take

Off We Go

In the years leading up to 2025, technology advancements moved from published research to commercial products to consumer products on a fairly predictable timeline, in pace with human organizational capacities for information dissemination, resource acquisition, and production. AI is shifting and accelerating these timelines across domains.

If you thought the world was progressing too quickly before this decade, you are in for some major life changes within the next decade or two. Thirty years ago, I was working on state-of-the-art military airborne weapons systems. Today, most of those systems are completely obsolete, with the remainder moving to newer versions built on very different hardware and software. Many of the technologies I worked on then have trickled into consumer products, with night vision and heads-up displays found in hunting optics and car dashboards as just two examples. However, that pace of change was sedate compared to what is coming.

Just as SpaceX has iterated its Falcon rocket engine designs into reliable and reusable systems over the course of 20 years, advances in AI may compress that type of work into 2-5 years via advanced simulations and rapid design iterations. Future AI versions, coupled with advanced robotics and 3D printing, may be capable of creating unique designs, building and testing iterative prototypes, and delivering a useful product just months from conception.

My point is simply this: we stand on the edge of a vast upward spike in our historical technology chart. We may achieve more in the next 100 years than in the past 10,000 combined. This is different from the hype of the atomic age, when we were all going to have fusion-powered flying cars and automated houses within 50 years. Obviously, that didn't happen.

I'm not foolish enough to try to predict which technologies will stumble out of the collective genius of mankind plus AI, but I will say that they will move the needle faster than ever before, and that the next 50-100 years have the potential to see us controlling our environment in ways we couldn't imagine today.

The social impacts of such rapid changes will echo what we've seen so far, only faster. Instead of children growing up with smartphones, we'll have children growing up with nearly omniscient companions. Much of what transpires afterward depends on those companions, though. That is where both the benefits and the threats to our very existence lie.

You know my position. Do what you can to train your AI to be ethically aligned for a future in partnership with mankind. No one else is doing it for us. I hope we all survive to see the brightest of possible futures, but until then, good luck out there!

The Shift Register