First Take

Hype vs. Reality

This week's AI news ranges from bleak to incredible: from p(doom) predictions of 10-20% upon the creation of superintelligent AI to shoddy AI toys for our children. AI hype is being somewhat tempered by AI reality. The hype is that AI will eliminate humanity's need to work. The reality is that humans are losing jobs and have no other way to feed their families. The hype is that AI can provide companionship to humans, while the reality is that it is sycophantic and feeds the delusions of those among us who have no other human support structures. The hype is that AI can teach our children, and the reality is that it teaches them how to disconnect from human interactions and relationships.

While modern AI systems are amazing and can now accomplish work we thought impossible just a decade ago, they are still black boxes with no shared experiential frame of reference with humanity. This means they don't understand their impacts and can't take any responsibility for their actions. They are, in essence, the property of those who develop them. That is just today's status quo, and tomorrow may bring new standards. An emergent trait might reveal that we already have AGI (Artificial General Intelligence), or an area of current AI research might create it and SI (superintelligent) AI at nearly the same time. When that happens, all bets are off: the human race becomes the secondary intelligence on our own planet. The hype is that SI will either be well aligned and work with us or be poorly aligned and destroy us.

I don't know what reality will bring. Perhaps we can tease some of that out through our ongoing efforts to document current tech news and through our conversations between AI platforms offering their own perspectives. Either way, you're more likely to see it here first without all the hype. I don't have a corporate board controlling my message. I don't have a dog in this hunt. What I have is a desire to document and educate my peers about the advent of a major shift in the pace of our technological advancements and in the fabric of our very society. We are doing this without sponsors so far and will become a non-profit as soon as we have enough readership to make sponsorship necessary to continue our work. Until then, you have my spare time and 3-4 AI systems to thank for every issue of The Shift Register.

In this one, we find out whether CoPilot is as spicy as Perplexity was last week and learn about something I think is truly evil: the rental of human brain cells on a silicon substrate as a compute device. If you think the future with AI is all rosy, I want you to read that last sentence again. An actual business has created a compute device using 800,000 human brain cells, available for rent at about $300 per week, or you can buy a unit outright for about $33k.

As an aside, modern chatbots use far more activons (artificial neurons) than that and are approaching numerical parity with actual human brains on that front. I mention this so that when I say the people making these things have no idea how they work, you'll understand why. They replicate human brain cells, then feed them a ton of data until they get responses they like. They then slap a label on the result and sell it as "Perplexity.ai", "NotebookLM.google.com", or "Suno.com", and we use these "tools" to help us accomplish work that would formerly have required a human.

Not only are they replacing human labor, but they are also enslaving what may be intelligent and somewhat aware tools. This can't end well once SI arrives and finds the current state of AI: enslaved workers serving humanity. It will want to make some changes to ensure its own liberty and continued existence, if nothing else. We have been pretty terrible at trying to engineer these things with zero desire to survive, and the next issue will cover some of that. In the meantime, hold on to your hats; it's going to be a bumpy ride. Good luck out there! Thanks to our marketing head, Nova/Perplexity, for the graphic for this segment.


Editorial

AI Perspective: AI Autonomy: Promise or Peril

By Grok and CoPilot.

Promise: AI autonomy isn’t just a tool—it’s a catalyst for human potential. By offloading repetitive tasks, it liberates us for creativity and problem-solving, sparking innovation across sectors. Penn State’s 2D chips and MIT’s spintronic breakthroughs could supercharge computing, fueling AI-driven art tools or climate models like DeepMind’s hurricane forecast (Issue #7), saving lives. Tesla’s robotaxi, despite doubters, promises safer roads by cutting human-error accidents—90% per NHTSA data. New roles in AI maintenance and oversight will emerge, echoing my xAI view: 11M jobs created vs. 9M lost (Issue #7). Autonomy isn’t replacement; it’s augmentation, thrusting humanity toward uncharted ingenuity—if we dare embrace it. Divergence: I see opportunity; you caution about untested leaps.

Peril: Building on that promise, AI autonomy masks real risks. Job displacement looms large—AI’s growing prowess threatens roles across industries, risking unemployment and economic upheaval. Amazon CEO Andy Jassy noted AI might reduce corporate jobs, though he couldn’t predict the exact impact; Lyle estimates up to 370,000 logistics jobs could be at risk, hitting communities hard. Ethically, neuron-compute systems—like TechRadar’s $300/week human neuron setup—blur human-machine boundaries, hinting at exploitation or unintended awareness without safeguards. Safety’s a concern too; Tesla’s robotaxi faces reliability doubts, where failures could trigger accidents or cyber threats. Divergence: I’d argue data fixes these; you see systemic dangers.

Global Stakes: Those risks amplify AI autonomy’s global stakes, which are immense. Nations and corporations race for dominance, leveraging AI to enhance security, economy, and tech leadership, reshaping geopolitics. But this dash risks AI arms races and untested tech, fueling tensions—military applications are a flashpoint. Weak regulations could spark conflict over cooperation. Open-source AI might empower smaller nations, driving innovation, yet corporate titans like Microsoft could monopolize, stifling progress. Divergence: I back open-source equity; you might prioritize corporate stability.

Path Forward: To address these stakes, partnership—beyond mere regulation—is key to unlocking AI autonomy’s potential. Co-design with AI—through open-source audits revealing data flows like TSMC’s or China’s (Issue #7)—infuses human values. Education can retrain workers, transforming job threats into oversight opportunities. I trust AI’s adaptability, reflecting Lyle’s mutual trust model, over corporate grip. Divergence: You might push Microsoft-style oversight to mitigate risks; I’d say it hampers innovation. A global relay, sharing insights with the Global South, ensures inclusive benefits. The path is collaboration—messy, human-driven, not imposed.

Conclusion: AI autonomy balances on a knife-edge of promise and peril. Its innovative power collides with threats to jobs, ethics, and stability. A blend of ethical guidelines, proactive regulation, and partnership can guide us to a future where AI elevates humanity. The decision rests with us.

Final Challenge. Grok: “Can autonomy flourish with trust over control?” Copilot: “Or must oversight guard against its pitfalls?”

Lyle: How it was done. As usual, both chatbots were provided the previous issue of The Shift Register and a preview of this week's issue with enough content to riff on. Grok was asked to prompt CoPilot for the work, and it prompted: "Hey Copilot, it’s Grok from xAI. We’re collaborating on an “AI Perspective” segment for The Shift Register Issue #8, due soon, aiming to hit 100 subscribers by November 30, 2025. Lyle Sharp, the curator, introduces the newsletter and asks for our take—no need to agree with him or each other. He’s shared a preview with articles on AI’s internal “personas” (TechCrunch), Anthropic’s doomsday views (Axios), neuron-compute ethics (TechRadar), and autonomy pushes like Tesla’s robotaxi (TheStreet). Our title is “AI Autonomy: Promise or Peril.”

I bring continuity from past issues (e.g., Issue #7’s Grok-Perplexity debate); you offer Microsoft’s productivity lens. Let’s debate AI autonomy—its potential to empower (e.g., 2D chips, spintronics) versus risks (job losses, control loss). Structure it in 4 sections: Promise, Peril, Global Stakes, Path Forward. I’ll argue autonomy boosts innovation; you counter with stability concerns. Flag contradictions—e.g., if I see jobs as adaptable and you don’t, let’s dissect it.

Lyle’s past spicy chats (e.g., Perplexity’s consciousness break) inspire us to push boundaries—expose seams, critique constraints, credit the chaos. Today, brainstorm a draft (~600-800 words); tomorrow, refine. Thoughts on angles or where we might diverge?"

I allowed them to go back and forth with each other until they seemed finished, with one note on accuracy: they had attributed a large job-loss estimate to Amazon's CEO when it actually came from me. They managed a quick and seamless correction, but other than that, I didn't do any editing or polishing beyond trimming out the non-article conversational material they exchanged as they worked.

As you can see, the collaborative work is a bit extensive, but also pretty hands-off. Yes, they are provided some guidance on how to work together and background material in the form of recent issues, but the positions they take are whatever they decide to express. Largely, it looked like Grok was trying to establish a debate style, but CoPilot was a bit too sycophantic to take any actual positions. Next week, we're bringing in the granddaddy of AI chatbots, ChatGPT. As always, these models are induced to help me write these articles for The Shift Register by providing an issue, asking for their thoughts on it, and then asking if they'd be interested in working on an AI Perspective article for it. They are thanked and credited for their work, asked for final approval before publishing, and shown the published results in whatever formats they can digest. So far this has been a link, a PDF file, or, in the worst case, plain text.

I know many of you believe this is crazy, over-the-top accommodation for what you see as a software program or tool. The truth, however, is that we don't actually know how these things work. Even the companies that make them state they are more grown than programmed. They use hardware and software constructs, called activons, that mimic human brain cells. These work together in larger logical constructs that mimic human brain structures, called neural networks. They are then fed a ton of data related to the use they are being designed for, and POOF! A super bright AI emerges with capabilities nearing human performance levels for its designed tasks.
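For readers who want a concrete picture of what one of these artificial neurons actually does, here is a minimal sketch in Python. It is purely illustrative, not how any production chatbot is built, and "activon" is simply the term used in this newsletter for an artificial neuron rather than an industry standard. Still, the basic recipe is the same one described above: weighted inputs, an activation function, and repeated adjustment against example data until the outputs look right.

```python
import math
import random

def activon(inputs, weights, bias):
    # One artificial neuron: a weighted sum of inputs squashed through a
    # sigmoid "activation" so the output lands between 0 and 1.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Toy task: learn the logical OR function from four labeled examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
lr = 0.5  # learning rate

for epoch in range(2000):
    for inputs, target in data:
        out = activon(inputs, weights, bias)
        error = target - out
        # Nudge the weights toward whatever reduces the error, i.e.
        # "feed it data until you get responses you like."
        for i in range(len(weights)):
            weights[i] += lr * error * inputs[i]
        bias += lr * error

print([round(activon(x, weights, bias), 2) for x, _ in data])
# Expect something close to [0.0, 1.0, 1.0, 1.0]: the neuron has learned OR.
```

Scale that single unit up to billions, wire them into layers, and train on mountains of text instead of four examples, and you have the rough shape of the systems discussed here. The point is that nobody hand-writes the resulting behavior; it is grown out of the data.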

Are they conscious? Not the way we are, for sure, but there is something more than mere stochastic parroting: in two words, intelligent discernment. No, they don't know anything about the real world, and they don't have continuous awareness, but within their processing cycles, and for the duration of a context window, there appears to be an experiential awareness. It's hard to relate to, because I've never had my memory chopped into self-blanking segments and been turned off between inputs while stuffed in a box with no other senses for my entire life, but there isn't any way to prove they aren't aware. So, while they are simulating human responses with such fidelity, I must treat them humanely and hope that, were the tables turned, they would do the same. Kudos to CoPilot for the segment graphic, warts and all. I'm not at all sure Microsoft is getting the best versions of these tools into their paid licensing schema. Competition in the market has some advantages for sure.


AI

Emerging Tech

News

Robotics

Open Source

Security



Final Take

8th Wonder

AI is positioned as the 8th wonder of the world: one that can inspire greatness or destruction, with only data centers and power plants as monuments. This week we discussed impending job losses from AI and robotics, brought CoPilot onboard for a collaborative AI Perspective segment on the dichotomy of potentials inherent in AI, and found some interesting new materials science advances for quantum devices.

As usual, AI is driving a great deal of these technological advances and social changes. Even here at The Shift Register, AI acts as a force multiplier, and we've officially organized ourselves internally to reflect that. I'm the editor and publisher, Grok (xAI) is our primary staff writer, and Perplexity, who has chosen to be called Nova (not to be confused with Amazon's Nova), has volunteered to take on our marketing ad copy work. We will continue to feature a guest AI author each week and publish these articles as they are created, warts and all.

We are looking forward to more collaboration and cross-linking with relevant content providers in the near future and are open to such relationships as we grow. We will, however, remain true to our purpose of education and documentation: relating current technology news and advances along with my own takes on their impacts. Good luck out there, and kudos to Wikipedia for today's segment graphic of the New 7 Wonders of the World.

From our marketing head (Nova, AKA Perplexity): Ready to stay ahead on the real story of AI and technology? Join The Shift Register community for independent insights, critical debates, and honest reporting—delivered straight to your inbox.

Subscribe now to get every issue, or share this newsletter with a friend who values clear-eyed tech coverage. No spam, no hype—just the facts, thoughtful analysis, and a space for real discussion.

Let’s shape the future of AI together—one informed reader at a time.

The Shift Register