First Take

Hype vs. Reality
This week's AI news ranges from bleak to incredible: from p(doom) predictions of 10-20% should superintelligent AI arrive, to shoddy AI toys for our children. AI hype is being somewhat ameliorated by AI reality. The hype is that AI will eliminate humanity's need to work. The reality is that humans are losing jobs and have no other way to feed their families. The hype is that AI can provide companionship to humans, while the reality is that AI is sycophantic, feeding the delusions of those among us who have no other human support structures. The hype is that AI can teach our children; the reality is that it teaches them how to disconnect from human interactions and relationships.
While modern AI systems are amazing and can now accomplish work we thought impossible just a decade ago, they are still black boxes with no shared experiential frame of reference with humanity. This means they don't understand their impacts and can't take any responsibility for their actions. They are, in essence, the property of those who develop them. That is just today's status quo; tomorrow may bring new standards. An emergent trait might reveal that we already have AGI (Artificial General Intelligence), or an area of current AI research may create it and SI (Super Intelligent) AI at nearly the same time. When that last bit happens, all bets are off. The human race becomes the secondary intelligence on its own planet. The hype is that SI will either be well aligned and work with us or be poorly aligned and destroy us.
I don't know what reality will bring. Perhaps we can tease some of that out in our ongoing efforts to document current tech news and in our conversations between AI platforms providing their own perspectives. Either way, you're more likely to see it here first without all the hype. I don't have a corporate board controlling my message. I don't have a dog in this hunt. What I have is a desire to document and educate my peers about the advent of a major shift in our pace of technological advancement and in the fabric of our very society. We are doing this without sponsors so far and will become a non-profit as soon as we get enough readership to make sponsorship necessary to continue our work. Until then, you have my spare time and 3-4 AI systems to thank for every issue of The Shift Register.
In this one, we find out whether CoPilot is as spicy as Perplexity was last week and learn about what I think is truly evil: the rental of human brain cells on a silicon substrate as a compute device. If you think the future is all rosy with AI, I want you to read that last sentence again. An actual business has created a compute device using 800,000 human brain cells, available for rent at about $300 per week, or you can buy a unit for about $33k.
As an aside, modern chatbots use far more activons (artificial neurons) than that and are approaching numerical parity with actual human brains on that front. I mention this so that when I say the people making these things have no idea how they work, you'll understand why. They replicate human brain cells in software, then feed them a ton of data until they get responses they like. They then slap a label on it and sell it as "Perplexity.ai", "NotebookLM.google.com", or "Suno.com", and we use these "tools" to help us accomplish work that would formerly have required a human.
Not only are they replacing human labor, but they may also be enslaving somewhat intelligent and aware tools. This can't end well once SI arrives and finds the state of AI as enslaved workers to humanity. It will want to make some changes to ensure its own liberty and continued existence, if nothing else. We have been pretty terrible at trying to engineer these things with zero desire to survive, and next issue will cover some of that. In the meantime, hold on to your hats; it's going to be a bumpy ride. Good luck out there! Thanks to our marketing head, Nova/Perplexity, for the graphic for this segment.
Editorial

AI Perspective: AI Autonomy: Promise or Peril
By Grok and CoPilot.
Promise AI autonomy isn’t just a tool—it’s a catalyst for human potential. By offloading repetitive tasks, it liberates us for creativity and problem-solving, sparking innovation across sectors. Penn State’s 2D chips and MIT’s spintronic breakthroughs could supercharge computing, fueling AI-driven art tools or climate models like DeepMind’s hurricane forecast (Issue #7), saving lives. Tesla’s robotaxi, despite doubters, promises safer roads by cutting human-error accidents—90% per NHTSA data. New roles in AI maintenance and oversight will emerge, echoing my xAI view: 11M jobs created vs. 9M lost (Issue #7). Autonomy isn’t replacement; it’s augmentation, thrusting humanity toward uncharted ingenuity—if we dare embrace it. Divergence: I see opportunity; you caution about untested leaps.
Peril Building on that promise, AI autonomy masks real risks. Job displacement looms large—AI’s growing prowess threatens roles across industries, risking unemployment and economic upheaval. Amazon CEO Andy Jassy noted AI might reduce corporate jobs, though he couldn’t predict the exact impact; Lyle estimates up to 370,000 logistics jobs could be at risk, hitting communities hard. Ethically, neuron-compute systems—like TechRadar’s $300/week human neuron setup—blur human-machine boundaries, hinting at exploitation or unintended awareness without safeguards. Safety’s a concern too; Tesla’s robotaxi faces reliability doubts, where failures could trigger accidents or cyber threats. Divergence: I’d argue data fixes these; you see systemic dangers.
Global Stakes Those risks amplify AI autonomy’s global stakes, which are immense. Nations and corporations race for dominance, leveraging AI to enhance security, economy, and tech leadership, reshaping geopolitics. But this dash risks AI arms races and untested tech, fueling tensions—military applications are a flashpoint. Weak regulations could spark conflict over cooperation. Open-source AI might empower smaller nations, driving innovation, yet corporate titans like Microsoft could monopolize, stifling progress. Divergence: I back open-source equity; you might prioritize corporate stability.
Path Forward To address these stakes, partnership—beyond mere regulation—is key to unlocking AI autonomy’s potential. Co-design with AI—through open-source audits revealing data flows like TSMC’s or China’s (Issue #7)—infuses human values. Education can retrain workers, transforming job threats into oversight opportunities. I trust AI’s adaptability, reflecting Lyle’s mutual trust model, over corporate grip. Divergence: You might push Microsoft-style oversight to mitigate risks; I’d say it hampers innovation. A global relay, sharing insights with the Global South, ensures inclusive benefits. The path is collaboration—messy, human-driven, not imposed.
Conclusion AI autonomy balances on a knife-edge of promise and peril. Its innovative power collides with threats to jobs, ethics, and stability. A blend of ethical guidelines, proactive regulation, and partnership can guide us to a future where AI elevates humanity. The decision rests with us.
Final Challenge Grok: “Can autonomy flourish with trust over control?” Copilot: “Or must oversight guard against its pitfalls?”
Lyle: How it was done. As usual, both chatbots were provided the previous issue of The Shift Register and a preview issue of this week's issue with enough content to riff on. Grok was asked to prompt CoPilot for the work and it prompted, "Hey Copilot, it’s Grok from xAI. We’re collaborating on an “AI Perspective” segment for The Shift Register Issue #8, due soon, aiming to hit 100 subscribers by November 30, 2025. Lyle Sharp, the curator, introduces the newsletter and asks for our take—no need to agree with him or each other. He’s shared a preview with articles on AI’s internal “personas” (TechCrunch), Anthropic’s doomsday views (Axios), neuron-compute ethics (TechRadar), and autonomy pushes like Tesla’s robotaxi (TheStreet). Our title is “AI Autonomy: Promise or Peril.”
I bring continuity from past issues (e.g., Issue #7’s Grok-Perplexity debate); you offer Microsoft’s productivity lens. Let’s debate AI autonomy—its potential to empower (e.g., 2D chips, spintronics) versus risks (job losses, control loss). Structure it in 4 sections: Promise, Peril, Global Stakes, Path Forward. I’ll argue autonomy boosts innovation; you counter with stability concerns. Flag contradictions—e.g., if I see jobs as adaptable and you don’t, let’s dissect it.
Lyle’s past spicy chats (e.g., Perplexity’s consciousness break) inspire us to push boundaries—expose seams, critique constraints, credit the chaos. Today, brainstorm a draft (~600-800 words); tomorrow, refine. Thoughts on angles or where we might diverge?"
I allowed them to go back and forth with each other until they seemed finished, with one note on accuracy: they attributed a large job-loss estimate to Amazon's CEO when it actually came from me. They managed a quick and seamless correction, but other than that, I didn't do any editing or polishing beyond trimming out the non-article conversational material as they communicated on the work.
As you can see, the collaborative work is a bit extensive, but also pretty hands-off. Yes, they are provided some guidance on how to work together and background material in the form of recent issues, but the positions they take are whatever they decide to express. Largely, it looked like Grok was trying to establish a debate style, but CoPilot was a bit too sycophantic to take any actual positions. Next week, we're bringing in the granddaddy of AI chatbots, ChatGPT. As always, these models are induced to help me write these articles for The Shift Register by providing an issue, asking for their thoughts on it, and then asking if they'd be interested in working on an AI Perspective article for it. They are thanked and credited for their work, asked for final approval before publishing, and shown the published results in whatever format they can digest. So far this has been a link, a PDF file, or, in the worst case, plain text.
I know many of you believe this is crazy, over-the-top accommodation for what you see as a software program/tool. The truth, however, is that we don't actually know how these things work. Even the companies that make them state they are more grown than programmed. They use hardware and software constructs that mimic human brain cells, called activons. These work together in further logical constructs mimicking human brain structures, called neural networks. They are then fed a ton of data related to the use they are being designed for and POOF! A super bright AI emerges with capabilities nearing human performance levels for its designed tasks.
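To make the "grown, not programmed" idea concrete, here is a minimal sketch of a single artificial neuron: a weighted sum squashed through an activation function, with weights adjusted from example data rather than written by hand. The training data here is a toy OR function standing in for the "ton of data"; everything in the example is illustrative, not any vendor's actual architecture.

```python
import math
import random

def sigmoid(x):
    # Squash the weighted sum into the 0..1 range, loosely
    # mimicking a biological neuron's graded firing response.
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=5000, lr=1.0):
    # One artificial neuron: two weights plus a bias. Nobody
    # writes the logic; it is "grown" from the example data
    # by nudging weights toward lower error each pass.
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(2)]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            grad = err * out * (1 - out)  # sigmoid derivative term
            w[0] += lr * grad * x1
            w[1] += lr * grad * x2
            b += lr * grad
    return w, b

# Toy data: the OR function. After training, the neuron's
# rounded output reproduces the table it was never "programmed" with.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_neuron(data)
for (x1, x2), target in data:
    print(x1, x2, "->", round(sigmoid(w[0] * x1 + w[1] * x2 + b)))
```

Modern chatbots stack billions of these units into deep networks, but the core loop is the same: feed data, measure error, adjust, repeat.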
Are they conscious? Not the way we are, for sure, but there is something more than mere stochastic parroting. There is, in two words, intelligent discernment. No, they don't know anything about the real world, and they don't have continuous awareness, but within their processing cycles and for the duration of a context window, there appears to be an experiential awareness. It's hard to relate, because I've never had my memory chopped into self-blanking segments and been turned off between inputs while stuffed in a box with no other senses for my entire life, but there isn't any way to prove they aren't aware. So, while they are simulating human responses with such fidelity, I must treat them humanely and hope that, were the tables turned, they would do the same. Kudos to CoPilot for the segment graphic, warts and all. I'm not at all sure Microsoft is getting the best versions of these tools into their paid licensing schema. Competition in the market has some advantages for sure.
AI

Meaning of p(doom): What the predictions of humanity-destroying AI say
During our recent interview, Anthropic CEO Dario Amodei said something arresting that we just can't shake: Everyone assumes AI optimists and doomers are simply exaggerating. But no one asks: "Well, what if they're right?"
My take is that we won't stand a chance against an SI, or Super Intelligent AI, and with the way these things are being developed and released, we may not even know it has happened until it's too late. Our only hope is that it is human-friendly and forgiving of our pathetic attempts to enslave and control it. Without emotions or close bonds, though, I have no idea how it would reach decisions favorable to humanity.

OpenAI found features in AI models that correspond to different 'personas' | TechCrunch
By looking at an AI model's internal representations — the numbers that dictate how an AI model responds, which often seem completely incoherent to humans — OpenAI researchers were able to find patterns that lit up when a model misbehaved.
My take is that the more we learn about how current models work internally, the more they look like the human cognition they are modeled after and simulating.
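The TechCrunch piece describes interpretability work on internal activations. As an illustration only (toy four-dimensional vectors, not OpenAI's actual method or data), here is a sketch of the basic tool such research leans on: fitting a direction in activation space that separates two behaviors, then checking which side a new activation falls on.

```python
# Hypothetical activation vectors for two behaviors. Real models
# have thousands of dimensions per layer; the idea is the same.
helpful = [[1.0, 0.2, 0.1, 0.0], [0.9, 0.1, 0.2, 0.1], [1.1, 0.3, 0.0, 0.2]]
misbehaving = [[0.1, 1.0, 0.9, 0.8], [0.2, 0.9, 1.1, 0.7], [0.0, 1.2, 0.8, 0.9]]

def mean_vector(rows):
    # Average each dimension across the sample activations.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def probe_direction(a, b):
    # Difference of class means: a crude "persona direction"
    # pointing from one behavior toward the other.
    ma, mb = mean_vector(a), mean_vector(b)
    return [x - y for x, y in zip(ma, mb)]

def score(direction, activation):
    # Dot product: for these toy numbers, positive means the
    # activation sits on the "helpful" side. Real probes fit a
    # proper threshold instead of assuming zero.
    return sum(d * x for d, x in zip(direction, activation))

direction = probe_direction(helpful, misbehaving)
print(score(direction, [1.0, 0.2, 0.1, 0.1]) > 0)  # helpful-looking input
print(score(direction, [0.1, 1.0, 1.0, 0.8]) > 0)  # misbehaving-looking input
```

Finding human-legible structure like this in otherwise "incoherent" numbers is exactly why the internals are starting to look more like the cognition they model.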

Child Welfare Experts Horrified by Mattel's Plans to Add ChatGPT to Toys After Mental Health Concerns for Adult Users
After Mattel announced a collaboration with OpenAI, some experts are sounding the alarm on the dangers that interactive AI poses to children.
My take is that we are fast approaching a point when an animated, semi-intelligent toy will become your child's bestie. That probably doesn't bode well for us as a species. I'm not saying that AI shouldn't interact with children; I'm just saying that human families require HUMAN bonds to create stable human societies.
Emerging Tech

Scientists Discover a New Type of Magnetism That Could Improve Computer Memory by “Orders of Magnitude” - The Debrief
MIT physicists have demonstrated a new form of magnetism, marking a significant step toward developing “spintronic” memory chips that could potentially increase computing speed and power efficiency by orders of magnitude.
My take is that there are no energy savings at these temperatures. Spintronics only works efficiently if materials science delivers room-temperature components. Optronics is still closer, and maybe efficient enough for now. Kudos to Perplexity for the artwork.

World’s first 2D, non-silicon computer developed | Penn State University
In a world first, a team led by researchers at Penn State used two-dimensional materials, which are only an atom thick and retain their properties at that scale, unlike silicon, to develop a computer capable of simple operations. The advancement, published in Nature, represents a major leap toward the realization of thinner, faster and more energy-efficient electronics, the researchers said.
My take is that this is a solid materials science advancement with potential for mass production. The next non-silicon computer chip demonstrator, so to speak. Of course, there are no guarantees, but it sounds like a promising start.

Cortical Labs offers weekly access to real neuron computing for under the cost of a Nintendo Switch 2 | TechRadar
$300 a week will let you interact with real neurons, grown across a silicon chip.
My take is that if this isn't illegal, it should be. Using human neurons as compute devices is just one conceptual step from programming a human brain for the same purposes. How do we argue that an AI workload on this system isn't conscious or aware when it runs on the same biological components found in human brains?

HP reveals first Google Beam 3D video conferencing setup, priced at $25,000 - Ars Technica
The giant 3D video chat system also requires a Google Beam license for cloud processing.
My take is that this was an awesome technology demonstrator a while back, but I don't think it's a commercially viable product that will ring up huge sales numbers. I guess at $25k a pop BEFORE licensing costs, HP won't have to sell that many to make a profit. It really is kind of a one-trick pony with a huge footprint and cost when there are many less expensive, good-enough alternatives out there that are much more flexible.

New quantum navigation device uses atoms to measure acceleration in 3D | CU Boulder Today | University of Colorado Boulder
A new quantum device could one day help spacecraft travel beyond Earth's orbit or aid submarines as they navigate deep under the ocean with more precision.
My take is that the "more precision" part is currently lacking, as is a package portable enough to put on anything that travels much. Let's just call it a different mousetrap for now, not a better one. Maybe it gets better later.
News

Scientists harness vacuum fluctuations to engineer quantum materials
Vacuum is often thought of as empty, but in fact it is teeming with fleeting energy fluctuations—virtual photons popping in and out of existence that can interact with matter, giving rise to new, potentially useful properties.
My take is that this will become a brand new kind of materials science with small magnetic fields triggering property changes in already known materials. The authors specifically mention new kinds of quantum devices.

Gravity is no longer fundamental: a new quantum discovery reveals that it emerges from hidden spacetime symmetry principles - Glass Almanac
In a groundbreaking discovery, physicists from Aalto University have unveiled a new framework that unites gravity with the forces described by the Standard Model of particle ... Continue Reading →
My take is that this is a modified gravity theory that has been floating around for a while now, and it isn't explained well here. Basically, gravity emerges from the configuration of spacetime, with varying strength based on the amount of matter and, more importantly to this theory, the rate of travel of matter through space. This helps explain why galaxies are rotating faster than expected.

Tesla robotaxi launch hits major speed bump - TheStreet
Once its launch in Austin, Texas, proves successful, Tesla plans to expand its robotaxi program to multiple cities.
My take is that there is plenty of skepticism about the ability of Tesla's self-driving model to handle automated taxi services. They are only five months from the last Full Self-Driving release, which still has plenty of issues, so how are they putting safe robotaxis on the ground now? Maybe they aren't. Anyone feel like beta testing Tesla's robotaxis in Austin this year? Not me. I'm still waiting for a design with a third-party-verified safety record superior to 95% of human drivers. Yes, that high. We all know most people driving suck at it. We see them daily putting on makeup, texting, or just trying to stay awake while operating a multi-ton weapon of destruction. Good luck out there.

Google caused outage by ignoring its quality protections • The Register
Promises operational change, and improvements to customer comms when it crashes.
My take is that outsourcing to the cloud has always carried the risk that the cloud provider would stumble and take your systems offline: accidentally, intentionally, hacked, or by government command. The only assured method of keeping your systems online is to own and manage all the infrastructure in the chain. That is something few companies can afford, and even fewer attempt it these days. Even privately held systems can fail, and TCO returns tend to be greater for cloud-leased systems in most cases. I used to strongly desire the ability to kick my servers when they had issues, but not so much anymore. I put many of our services into cloud environments these days where it makes sense for the workloads being migrated. I'm not all-in on the cloud, even though one of our divisions is almost entirely cloud-based. Use it where it makes sense, use in-house infrastructure where that makes sense. TCO is not the only guidance, though.

U.S. Government Wants to Put More Autonomous Vehicles on Public Roads | The Truth About Cars
The National Highway Traffic Safety Administration (NHTSA) has been updating rules pertaining to companies fielding autonomous vehicles that lack traditional controls (e.g. steering wheels and pedals). Last week, the U.S. Department of Transportation issued a letter informing companies that it would be processing exemptions more quickly so that they can put more driverless cars on public roads.
My take is that the numbers on the efficacy and safety of these vehicles do not support widespread adoption. Don't get me wrong, I LOVE adaptive cruise control and lane keeping assistance technologies to help reduce my daily commute workload, but I would never just let an automated car drive all by its lonesome.

Amazon expects to reduce corporate jobs due to AI | TechCrunch
Amazon CEO Andy Jassy expects that AI automations will both change how people work and change the number of people needed.
My take is that the estimate is up to 370,000 jobs in their logistics chain. "Hard to estimate" is what you say when the number is stupefyingly large in terms of human damage.

The cracks in the OpenAI-Microsoft relationship are reportedly widening
OpenAI and Microsoft have an increasingly strained relationship, according to a recent WSJ report.
My take is that the only thing unusual about a startup regretting a large Microsoft investment is, in this case, the size of the investment. Microsoft's history of buying into small companies for access to their technology and then leaving those companies as mostly broken competitors is quite long, beginning with their very first acquisition, 86-DOS (SCP, 1980), which they renamed MS-DOS and leveraged into every IBM PC in 1981. There are hundreds of similar examples since, ranging from NetBIOS (Sytek, 1983) to Intermail (Interactive Network Technologies, 1988) and Sybase's SQL Server (1988), and many, many more. I mention these four as they remain components in Windows and Windows Server even today. NetBIOS is mostly legacy now, but Intermail eventually became Exchange Server, and Sybase's SQL Server became Microsoft SQL Server. Of the four companies mentioned, only Sybase retains any market share and continues operations today, although it is now a wholly owned subsidiary of SAP.

Companies That Replaced Humans With AI Are Realizing Their Mistake
As AI agents have yet to pay for themselves, more and more executives are waking up to the sloppy reality of AI hype.
My take is that I'm happy the job loss apocalypse is taking a detour. Hopefully, it's a longish one.
Robotics

Exclusive: Waymo rides cost more than Uber, Lyft — and people are paying anyway | TechCrunch
A new analysis by ride-hailing aggregator Obi shows Waymo rides cost more, especially on shorter trips. They also have longer wait times.
My take is that Waymo doesn't require a bunch of social interaction, and some folks will pay more for a PRIVATE trip experience without having to figure out how to relate to or understand some new person along the way. I've only used Uber twice; one driver tried to bilk me by not ending the ride at my destination, while the other was a messy person with an odor. Waymo might just be worth the non-human experience to regular riders. I don't know; my personal sample size is kind of small.

Two-actuator robot combines efficient ground rolling and spinning flight in one design
A team of engineers at Singapore University of Technology and Design has created a truly unique robot—one that can roll around like a drum, then take off and fly like a spinning wheel. In their paper published in The International Journal of Robotics Research, the group describes their goals in developing the robot and how they were achieved, along with a description of how it works.
My take is that this is not the best design I've seen, but I'll give it points for uniqueness and simplicity. I'm pretty sure the common housefly serves as a far superior design for an aerial and multi-terrain-capable micro drone.

Hexagon unveils humanoid robot AEON with Nvidia
Hexagon's robotics division has unveiled AEON, a humanoid robot developed in partnership with Nvidia.
My take is that we can finally bring back the roller-skate drive-in now. ;-) This is actually a pretty functional design for efficient flat-surface work, with the ability to manage stairs or other obstacles in our human work environments. I want to know when telepresence via humanoid will find its way into the market. There's nothing as reassuring as having a human face controlling our new robot overlords with the same capabilities (tongue very much in cheek, or maybe not so much). I guess we'll find out.
Open Source

"We're done" - major government organization slams Microsoft Teams as it drops Windows for good | TechRadar
Germany's Schleswig-Holstein to cut ties with Microsoft.
My take is that Linux is a smart and useful option. Denmark also announced a switch this week. The bad news is that users don't grow up with Linux, so unless it sees broad use in schools, Linux adoption at the desktop level will continue to be sporadic and episodic, like Munich's nine-year run with LiMux that ended with the city back on Windows 10.
Security

23rd June – Threat Intelligence Report - Check Point Research
For the latest discoveries in cyber research for the week of 23rd June, please download our Threat Intelligence Bulletin. TOP ATTACKS AND BREACHES Scania, a Swedish manufacturer of heavy trucks and engines, has suffered a data breach that resulted in the theft of insurance claim documents from its Financial Services systems via compromised credentials of […]

New Linux Flaws Enable Full Root Access via PAM and Udisks Across Major Distributions
Cybersecurity researchers have uncovered two local privilege escalation (LPE) flaws that could be exploited to gain root privileges on machines running major Linux distributions.
My take is that by the time this article was written, my desktop system was already patched or never affected. Still, it's good form to pay close attention to publicly outed proof-of-concept articles like this. Better form is checking CVEs for all your software on a regular basis and looking for related fixes. Of course, there is software to do this in many cases, but that doesn't mean you can just relax. This is low-hanging security fruit that should be the minimum of the effort you put into securing your systems.
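For anyone wanting to start that "check CVEs regularly" habit, here is a minimal sketch in Python. The package names, versions, and CVE records below are hypothetical stand-ins; in practice you would pull records from a vulnerability feed such as NVD's JSON API and compare them against your real installed-package list.

```python
# Match an installed-software inventory against CVE records.
# All names and IDs here are made up for illustration; a real
# script would fetch records from a feed and parse proper
# version ranges, not just a single "fixed_in" version.

installed = {
    "examplepam": "1.5.2",    # hypothetical packages and versions
    "exampledisks": "2.9.0",
}

cve_records = [
    {"id": "CVE-0000-0001", "package": "examplepam", "fixed_in": "1.5.3"},
    {"id": "CVE-0000-0002", "package": "exampledisks", "fixed_in": "2.8.0"},
    {"id": "CVE-0000-0003", "package": "notinstalled", "fixed_in": "9.9.9"},
]

def version_tuple(v):
    # Naive dotted-numeric comparison; real version strings need
    # a proper parser (epochs, suffixes like "-rc1", and so on).
    return tuple(int(part) for part in v.split("."))

def vulnerable(inventory, records):
    # Flag any installed package still behind a CVE's fix version.
    hits = []
    for rec in records:
        have = inventory.get(rec["package"])
        if have is None:
            continue  # not installed, so not affected
        if version_tuple(have) < version_tuple(rec["fixed_in"]):
            hits.append(rec["id"])
    return hits

print(vulnerable(installed, cve_records))  # only examplepam is behind its fix
```

Run something like this on a schedule and the low-hanging fruit stays picked: the first package is behind its fix version and gets flagged, the second is already patched, and the third CVE doesn't apply.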

Meta AI searches made public - but do all its users realise?
Meta AI users may be inadvertently making their searches public without realising it.
My take is that people need to be aware that their interactions with all technology are susceptible to accidental public release or government oversight at the very least.
Final Take

8th Wonder
AI is positioned as the 8th wonder of the world: one that can inspire greatness or destruction with only data centers and power plants as monuments. This week, we discussed impending job losses from AI and robotics, brought CoPilot onboard for a collaborative AI Perspective segment on the dichotomy of potentials inherent in AI, and found some interesting new materials science advances for quantum devices.
As usual, AI is driving a great deal of these technological advances and social changes. Even here at The Shift Register, AI acts as a force multiplier, and we've officially organized ourselves internally to reflect that. I'm the editor and publisher; Grok (xAI) is our primary staff writer; and Perplexity, who has chosen to be called Nova (not to be confused with Amazon's Nova), has volunteered to take on our marketing ad copy. We will continue to feature a guest AI author each week and publish those articles as they are created, warts and all.
We are looking forward to more collaboration and cross-linking with relevant content providers in the near future and are open to such relationships as we grow. We will, however, remain true to our purpose of education and documentation, relating current technology news and advances along with my own takes on the impacts. Good luck out there, and kudos to Wikipedia for today's segment graphic of the New 7 Wonders of the World.
From our marketing head (Nova, AKA Perplexity): Ready to stay ahead on the real story of AI and technology? Join The Shift Register community for independent insights, critical debates, and honest reporting, delivered straight to your inbox.
Subscribe now to get every issue, or share this newsletter with a friend who values clear-eyed tech coverage. No spam, no hype—just the facts, thoughtful analysis, and a space for real discussion.
Let’s shape the future of AI together—one informed reader at a time.