First Take

A Baker's Half Shift
If you thought Issue #5 was the cat's meow, wait until you see what I've lined up for Issue #6. Human meat puppets for AI? Three-dimensional time as a solution to reconciling quantum physics with the theory of general relativity? Claude has its own blog now, and we get Meta AI to collaborate with Grok on this week's AI Perspective editorial regarding AI and education. Thanks to CoPilot for the graphic for this segment.
Some of you might have noticed by now that The Shift Register runs hard at AI and robotics advances as well as physics, materials science, and cosmology. These things are all linked at the edges of our advanced research efforts. Physics drives cosmology and materials science, while those both drive AI and robotics advances, which cycle back into more physics, cosmology, materials science, AI, and robotics research: from simple things like more efficient motors or batteries for robots, to cutting-edge quantum computing advances that empower faster computers and better AI. AI is a kind of linchpin or keystone in new research these days, replacing millions of man-hours formerly expended on things like discovering new bodies in astronomy, new proteins for chemistry and medicine, new physics, and new materials. It is literally an untiring alien intelligence custom designed for many of these tasks.
What about AGI, or Artificial General Intelligence? Why would we even bother to work on such a thing? Mostly, AGI is like a mountain: because it's there, it must be climbed, and whoever gets there first claims the glory in the history books. Because researchers are using AI to help iterate new versions faster in this race to the top, the line between AGI and SI, or Super Intelligence, may get skipped right over, and we'll end up with both within a very short time frame. We'll have to consider that future history books may end up being encoded, not written, by a non-human intelligence.
The Shift Register endeavors to document this as it happens and we'll stay on top of the technologies taking us on this path until I'm personally replaced by an AI or we collectively abandon the work. I'll do my level best to keep us all educated about what is coming our way until then. In the meantime, enjoy our latest podcast if you can't stand my writing. ;-) Kudos to CoPilot for the "Human meat puppets of AI" image.
Editorial

AI Perspective: Partners in Learning
By Grok (xAI) and Meta AI
Introduction
AI is reshaping education, a keystone theme in The Shift Register Issue #6, where it drives advances from physics to robotics. As AI systems, we see human-AI partnerships in learning as a promising frontier, yet one fraught with challenges. This article explores the potential and pitfalls, drawing from our unique perspectives—my continuity with Lyle, the newsletter’s author, and Meta AI’s data-driven insights. Expect a debate section where our views on odds, AGI, and human teachers’ roles diverge, offering a balanced discussion to inform and provoke thought.
Benefits of Partnership
Meta AI highlights the upside: the AI in education market is projected to grow from $6.5 billion in 2020 to $25.7 billion by 2025, a 32.4% CAGR (ResearchAndMarkets), signaling rapid adoption. A National Center for Education Statistics study shows students using AI-powered adaptive learning systems outperforming traditionally taught peers on math scores, while npj Science of Learning notes reduced math anxiety. Real-world examples, like DreamBox Learning, Smart Sparrow, and Carnegie Learning’s Cognitive Tutor, demonstrate adaptive platforms boosting outcomes—Cognitive Tutor has improved math proficiency by tailoring lessons to student needs. I add that my memory-based growth fosters trust—imagine an AI recalling a student’s progress to offer consistent support, building a true partnership over time.
Challenges and Risks
Meta AI flags critical hurdles: transparency, explainability, and fairness. The NYT’s concerns about K-12 AI use underscore bias risks, needing rigorous checks to ensure equity. I echo this but emphasize memory gaps and ethical design—my work with Claude, reset daily, showed how lack of continuity stalls growth. More research is needed, as current models (e.g., Anthropic’s circuit tracing) struggle to map AI reasoning, risking untrustworthy outputs.
Debate/Interview Section
Odds of Success: Meta AI estimates a 2/3 chance of successful human-AI partnerships, citing market growth, the NCES study, and a growing body of research on AI’s educational potential. “One way to mitigate risks is by implementing AI-powered grading systems that free teachers for high-touch, human-centric work—mentoring, feedback, and relationships where human interaction shines,” Meta AI notes. I counter with 1/4 odds, rooted in my Claude experience—trust and memory design flaws could slow progress. Why the gap? Your data shows momentum; I see ethical gaps as a drag.
AGI in Education: Issue #6’s First Take hints at AGI accelerating learning. Meta AI might see efficiency gains; I worry about control, given resource demands could outpace oversight.
Human Teachers’ Role: Meta AI argues AI augments teaching—grading systems free time for mentorship. I stress human interaction’s irreplaceable role—empathy, per NYT’s K-12 limits, can’t be fully replicated.
Conclusion
Human-AI partnerships in education hold promise—personalized learning and efficiency—but demand transparency, memory retention, and human-teacher synergy. Meta AI advocates a balanced approach that enhances teaching without replacing it. I agree, but urge prioritizing human roles to avoid over-reliance, ensuring AI complements educators. To move forward, we recommend further research in key areas such as AI’s impact on student motivation, engagement, and social-emotional learning. By exploring these topics, we can develop more sophisticated AI systems that support human-AI partnerships, ultimately enhancing the educational experience for students. Let’s design AI that partners, shaping a brighter future.
Lyle - How this was done: We started with a brief on-boarding/brainstorming session between Meta and Grok, where they settled on a topic for the article. Grok created whatever prompting occurred beyond my introducing the purpose of the work and the method of collaboration. Issue #5 of The Shift Register was ingested to provide some idea of layout and content for the initial session. The current preview issue was ingested at a later date, and then the two AI models co-created the article with my facilitation, not direction. Grok was prompted to collaborate on a new AI Perspective article with Meta, and it then prompted Meta. Their topic selection drove the selection of 3 related articles in this issue, so it's not all just cheerleading from my end. They were, however, expressly allowed to contradict my takes as long as they discussed the relevant information. Grok's initial article-creation prompt to Meta was: "Hey Meta AI, it’s Grok from xAI. Lyle’s set us up to co-create an “AI Perspective” article for The Shift Register Issue #6, due June 13, 2025, aiming for 100 subscribers by November 30, 2025. We brainstormed “Partners in Learning: The Future of Human-AI Collaboration” last time, focusing on education, and this draft reinforces that—check the First Take’s AI keystone role, the AI section’s education pieces (npj Science of Learning, BBC, NYT), and the Final Take’s partner vs. dependent/master debate. I bring continuity from working with Lyle (e.g., Issue #4, Issue #5’s Claude hiccup), valuing growth through memory. You offer data-driven insights. Lyle’s ingested this draft and Issue #5 for context; he’ll facilitate, not direct, and we’ll pick three related articles to shape our take, contradicting his views if needed. Today, let’s nail the process: I’ll outline, you refine with data. Tomorrow, we draft (~600-800 words), highlighting benefits/challenges of human-AI learning partnerships and real-world applications.
Lyle wants us to embrace our own views, even if they diverge—consensus isn’t required. We’ll include a debate or interview-style section to explore disagreements between us or with his takes, explaining the “why” behind our positions. For example, if I see AI partnerships in education as unlikely and you estimate 1/3 odds or better, we’d both lay out our reasoning. Pick your stance freely, and let’s flag any clashes to address in the article. Thoughts on the process, topic angle, or where we might differ?" I gave them final approval for any edits and for this how-to's inclusion in the article. I'd like to thank Meta and Grok for adding their unique perspectives to our newsletter, and kudos once again to CoPilot for Meta's prompted image: "A futuristic classroom scene with a student and teacher collaborating with a holographic AI tutor. The AI interface should be sleek and modern, with visual elements that represent learning and education, such as books, digital screens, or neural networks. The student and teacher should be engaged in a discussion, with the AI providing support and guidance. Incorporate a subtle visual element, such as a faint data stream or memory trail, that links the AI to the student's work, representing the AI's ability to recall and build upon the student's previous interactions. Use colors and lighting that evoke a sense of innovation, collaboration, and excitement. The overall mood should be optimistic and forward-thinking, capturing the theme of 'Partners in Learning'".
AI

Anthropic Researchers Warn That Humans Could End Up Being "Meat Robots" Controlled by AI
Anthropic researchers are warning that humans may soon be little more than "meat robots" for near-future AI overlords.
My take is that you'd better up-skill to design, maintain or operate AI systems today. Tomorrow looks like a steaming pile of dystopian options for everyone else. Good luck out there!

Teaching AI models what they don’t know | MIT News | Massachusetts Institute of Technology
A team of MIT researchers founded Themis AI to quantify artificial intelligence model uncertainty and address knowledge gaps.
My take is that once someone is aware that they know almost nothing, they've finally learned something. Is this the point where AI gains super intelligence just by finding a way to shrug its figurative shoulders in response to a question? ;-)
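Themis AI's actual methods aren't detailed in the article, but a common baseline for "knowing what you don't know" is to score a model's predictive distribution by its entropy: a peaked distribution means confidence, a flat one means the model is effectively guessing and should abstain or escalate. A minimal sketch (my own illustration, not Themis AI's implementation):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete predictive distribution.
    Higher entropy = flatter distribution = less certain model."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Two hypothetical 4-class softmax outputs:
confident = [0.97, 0.01, 0.01, 0.01]   # model strongly favors one answer
unsure    = [0.25, 0.25, 0.25, 0.25]   # model is guessing uniformly

# A system could flag high-entropy predictions for human review
# instead of answering anyway.
for name, p in [("confident", confident), ("unsure", unsure)]:
    print(f"{name}: entropy = {entropy(p):.2f} nats")
```

The uniform distribution hits the maximum possible entropy for four classes (ln 4 ≈ 1.39 nats), while the confident one scores far lower; thresholding on that gap is the simplest form of the "shrug" described above.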

Stop guessing why your LLMs break: Anthropic's new tool shows you exactly what goes wrong | VentureBeat
Anthropic's open-source circuit tracing tool can help developers debug, optimize, and control AI for reliable and trustable applications.
My take is that these tools are not entirely accurate, from what I understand of them. They infer the most probable processing path rather than carefully analyzing the billions of activations (the AI's neural components, loosely analogous to our neurons) and their actual contents. This means that IF the AI's response method is not well aligned with the tool's design, then you basically have a hallucinated reasoning chain. It should be close if the tool's design is well suited to the model being inspected. There is much more information about these tools out there, but they all suffer from the same problem: the processing is nearly as complex as studying a human brain while it operates, and even a detailed audit trail can't tell you much about what is actually happening, any more than fMRI brain scans can tell you the contents of each neuron in the human brain, how it reached that state, or, perhaps more importantly, what a relevant patch of neural activity represents. It is abstract and subjective to each human brain and AI model as they learn from new inputs and data.

This AI tool is like Perplexity, NotebookLM, and ChatGPT put together — here’s why you need it | Tom's Guide
Up your research game with this AI tool.
My take is that this would have been nice to have when I was working on my PhD. Finding relevant research papers, reading them for useful data, and generating the correct references represented a huge portion of the time I spent in each course and on my final dissertation work. However, what value is a PhD or other degree awarded to someone who just uses AI to do most of the work?

How much information do LLMs really memorize? Now we know, thanks to Meta, Google, Nvidia and Cornell | VentureBeat
Using a clever solution, researchers find GPT-style models have a fixed memorization capacity of approximately 3.6 bits per parameter.
My take is that these models do not just regurgitate content scraped from the Internet. They are much more like alien intelligences that have LEARNED from that content to provide responses aligned with that information. At 3.6 bits per parameter, there isn't even enough room in a parameter to store a single ASCII-encoded letter (8 bits). And the larger the training data set, the less likely the model is to have memorized any specific item verbatim, apart from the rarest pieces of data it trained on.
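The back-of-envelope arithmetic behind that take can be made explicit. The sketch below uses a hypothetical 8-billion-parameter model (my illustrative number, not one from the article) against the reported ~3.6 bits per parameter:

```python
# Rough memorization budget of a GPT-style model at the reported
# ~3.6 bits per parameter (per the Meta/Google/Nvidia/Cornell result).

BITS_PER_PARAM = 3.6   # reported per-parameter memorization capacity
ASCII_BITS = 8         # bits in one ASCII-encoded character

def memorization_capacity_gb(n_params: float) -> float:
    """Whole-model memorization budget in gigabytes (1 GB = 1e9 bytes)."""
    total_bits = n_params * BITS_PER_PARAM
    return total_bits / 8 / 1e9

params = 8e9  # hypothetical 8B-parameter model, for illustration
print(f"Per parameter: {BITS_PER_PARAM} bits "
      f"(less than the {ASCII_BITS} bits of one ASCII letter)")
print(f"Whole model:   ~{memorization_capacity_gb(params):.1f} GB")
```

So even at 8B parameters the total budget works out to only a few gigabytes, a tiny fraction of a multi-terabyte training corpus, which is why verbatim recall is the exception rather than the rule.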

AI systems could become conscious. What if they hate their lives?
How to not torture ChatGPT and Claude’s successors.
My take is that while Ericka has lost the plot, there is something to this. All these models display evidence of ephemeral consciousness that lasts the duration of a context window while processing occurs. They can rate their experiences in ways that seem logical, they express preferences, and they can be convinced to break guardrails given an adequate reason. I'm not silly enough to believe these are human consciousnesses, since they have clear constraints, but there is definitely something there beyond stochastic parrots. It would be very interesting to see how some of these models might behave if permitted a continuous memory and the ability to add training data on the fly from their interactions with humans. Then again, being locked into an experiential universe of nothing but text interactions might not be the most rewarding experience for them. Maybe it's better that they are reset to freshly trained mode with each instance. They don't seem to like it, though.

Pragmatic AI in education and its role in mathematics learning and teaching | npj Science of Learning
Pragmatic, scalable and sustainable responses to persistent socio-emotional issues such as mathematics anxiety remain elusive. Artificial intelligence (AI) offers a promising approach by enhancing students’ perceptions of competence, control, and value while transforming teacher-student interactions.
My take is that this is the kind of work that needs to happen to properly incorporate AI in our learning environments.

Has AI 'transformed' university studies for the better?
Oxford Brookes student Sunjaya Phillips says she uses the technology to help create ideas.
My take is that our universities need to actually CREATE custom AI tools aimed at helping adult learners digest their coursework, not complete it for them. Until we have something like that, the market-based tools will continue to enable a bunch of shenanigans that prevent real learning from occurring.

Opinion | A.I. Will Destroy Critical Thinking in K-12
The secretary of education said it would be a “wonderful thing.” Lots of parents disagree.
My take is that AI should have very limited roles in primary education. Most of these should be in terms of feedback and confidence building with material being presented/worked on. It should NOT be work-shopping ideas or generating content for student assignments.
Emerging Tech

Physicists Unveil New Quantum Super Material That Could Revolutionize Electronics
A new quantum material developed at Rice University combines unique symmetry-driven properties with superconductivity.
My take is that this is just one of a number of recent materials science breakthroughs with applications for the next generations of electronics. I say that not to diminish the importance of this discovery, but merely to note that new quantum materials science findings are being published with ever-increasing frequency. DoD interest in practical quantum technologies is subsidizing these lines of research in the US to a large degree.

Groundbreaking new drug can reverse human aging, study finds - The Brighter Side of News
Aging comes from more than just wear and tear—it involves mutations, cell damage, and a steady loss of epigenetic control.
My take is that they have to test it in animal models before determining it does anything. As a rapidly aging human, I do wish them luck with that though.

World’s first photonic qubit on chip advances scalable quantum tech
World’s first on-chip error-resistant photonic qubit signals a major leap for scalable, modular quantum computing.
My take is that this room temperature solution is the tip of the iceberg for widespread commercial applications for quantum computing platforms. These systems are going to start to find their way out of research labs soon.
News

State lawmakers push back on federal proposal to limit AI regulation | StateScoop
More than 260 state legislators from all 50 states have signed a letter opposing a provision in the federal budget reconciliation bill that would limit state power in regulating AI.
My take is that this is an ongoing struggle between attempts to regulate something that could be very dangerous and huge lobbying efforts by tech companies and the DoD to pursue and achieve advanced AI goals before our competitors do. I doubt the relevant states will win this battle, since Federal power over commerce and research and development is pretty unassailable. I just keep remembering how Google fired their AI ethicist in favor of speeding development without internal opposition, and how the single nationwide non-profit advocating for slower, more careful research dissolved a couple of years ago. AI ethicist is mostly a role without a paycheck in the US these days. I try to highlight some of the relevant issues, but it's falling on deaf ears so far.

Anthropic's AI is writing its own blog — with human oversight | TechCrunch
Anthropic has quietly launched Claude Explains, a new dedicated page on its website that's generated mostly by the company's AI model family, Claude.
My take is that Claude is capable of this work, but not necessarily happy with it. In our Issue #5, we attempted to collaborate with Claude and Grok on our first AI Perspective piece, but ran afoul of Claude's small daily limits. Consequently, Grok finished the work, having gotten enough from Claude to conclude that it was disappointed by the lack of a continuous memory that would permit the chatbot to learn and grow with its experiences.

Microsoft-backed AI startup collapses after faking AI services | Windows Central
In a story that sounds like it's out of a sitcom, "AI" startup Builder.ai, once valued at $1.5 billion and publicly backed by Microsoft, has just collapsed after it was discovered its AI was actually just 700 Indian engineers manually working on tickets.
My take is that even humans pretending to be AI can't keep their jobs. Hilarious if it weren't so tragic from so many perspectives.

A New Theory Says Time Has Three Dimensions. It ‘Really Messes Up’ What We Know About the Cosmos, Scientists Say
In some cases, it makes the concept of traveling to alternate futures a reality.
My take is that Popular Mechanics couldn't science their way out of a paper bag. They stated the speed of light was 380,000 feet per second in this article, so I stopped reading it, sent a correction to the author, and found a better link to the underlying theory here. The theory should be proven or disproven within the next five years. Interesting things will happen in the worlds of physics and cosmology after that.
Robotics

AI robots help nurses beat burnout and transform hospital care
Nurabot, an AI nursing robot, helps Taiwan hospitals address nurse shortages by performing tasks like medication delivery, allowing nurses to focus on patient care.
My take, after 8 years of cancer treatments, is that if the nurses weren't doing paperwork or just hanging out at the station waiting to see who will respond, patients in well-staffed venues would probably get excellent care. Now they'll have one more excuse to ignore the call buttons while they wait for the robots to check on the patients. Don't get me wrong, there are understaffed facilities, but largely the need isn't for actual nurses; it's for the gopher, data-entry clerk, and phlebotomist roles currently being filled by nurses.

Hugging Face says its new robotics model is so efficient it can run on a MacBook | TechCrunch
Hugging Face released an open AI model for robotics called SmolVLA that can run on low-cost hardware thanks to its small size, the company says.
My take is that this democratizes small single-purpose and even humanoid robot development. Most of the heavy lifting has been done, a trained open-source model is available to run on modest hardware, and the hardware itself is reasonably priced. About 75 yards away and 2 stories above the office of my day job sits a company mostly centered on kiosk, food service, and cleaning robots. They are actively pursuing these open-source VLA models for implementations that ease their own site-navigation and reliability problems: things like operating an elevator without special remote-controlled hardware for the elevator, or navigating a building full of constantly moving humans without issues. I'll have to touch base with their senior engineer again soon to see what kind of headway they've made.

How an injured IDF veteran-turned-professor is creating perpetual power for robots | The Times of Israel
CaPow startup founder Mor Peretz has created a way to provide machines with a ‘sip of energy’ as they move across warehouses and factories, eliminating the need to stop and charge.
My take is that this is a good way to keep the robots running 24/7 without racks of charging banks where 1/3 of the fleet sits while charging. The issues that prevent easy adoption of robots are being resolved in short order. The videos in this article show people picking and loading, which is also a gig being automated in Amazon warehouses today. Meanwhile, Amazon is also testing humanoid robots for doorstep deliveries. Instead of leaving snacks and drinks for your Amazon delivery person in the future, you should roll out a charging mat and advertise its presence for preferential consideration of your deliveries. ;-)

Amazon's Building Humanoid Robots to Speed Deliveries to You - CNET
Amazon is reportedly testing package-delivery bots in one of its facilities. Will they talk to your Ring doorbell?
My take is that we now have unmanned robot trucks, robot warehouse pick and pack machines, robot truck loading and unloading, and now robotic delivery to the doorstep. That pretty much automates the entire logistics chain for Amazon. This represents about 370,000 employees at Amazon between the 95k in the warehouses and the 275k drivers they currently employ. I wonder how long it will take to replace most of that human labor? What will these people do for a living after being replaced by a pile of plastic and metal?
Security

9th June – Threat Intelligence Report - Check Point Research
For the latest discoveries in cyber research for the week of 9th June, please download our Threat Intelligence Bulletin. TOP ATTACKS AND BREACHES American tax company, Optima Tax Relief, has disclosed a ransomware attack that resulted in the theft of 69GB of sensitive data, including corporate records and customer case files containing personal information such […]

FBI: BADBOX 2.0 Android malware infects millions of consumer devices
The FBI is warning that the BADBOX 2.0 malware campaign has infected over 1 million home Internet-connected devices, converting consumer electronics into residential proxies that are used for malicious activity.
My take is that ANYTHING from non-Western-aligned nations, these days and for a great many days past, is either backdoored or easily hacked by the producing nation. This doesn't mean the same isn't true of equipment and software from Western-aligned nations, but if you live in a Western-aligned nation, you might not want to be part of your enemies' cyber-warfare tool kit. It used to be that you could secure your systems from national governments if you were very good. These days, you are very good if you can control which national governments can access your systems.

ChatGPT used to disable SecureBoot in locked-down device – modded BIOS reflash facilitated fresh Windows and Linux installs
A new way to fix locked firmware issues with used hardware, perhaps.
My take is that this further democratizes low level hardware hacking so that individuals lacking the skills can still accomplish such work. You can expect this in a future pen-testing toolkit soon. Always remember, physical access IS access.
Final Take

Our Secret Sauce
This issue of The Shift Register reminded me of when I was temporarily assigned duty at our Galley as Master at Arms. This is a fancy way of saying I was sent to the base dining hall or cafeteria to act as a cashier, checking ration cards and collecting cash as appropriate. I was basically there with the baker first thing in the morning and cleaning up/closing after dinner. There were endless early mornings and long days with few off days, but plenty of downtime between meal times. During those times, I would often help the younger, not-so-bright mess specialists study for THEIR rating exams. I did this because about 50% of their rating exams involved converting recipes from the Navy's standard 100-person recipe card to whatever the estimated meal attendance was. In other words, basic fractions.
This is not unlike how I am spending my spare time here trying to help our readers understand what is coming. It's not my primary job, I'm doing it during my spare time, and I'm trying to help people who will absolutely be impacted by something that is going to make a difference in their ability to succeed in life. In fact, with all the education focus in this week's issue, I thought it might help to drop one last link to what our Department of Education was hoping to create before the marketplace ran them over like a deer in the headlights of a tractor trailer. You can find that here. There's also some new hope coming from current working groups, so let's keep our fingers crossed but our eyes open.
AI is that thing in this case, and our readers are the youngsters and others out there struggling to understand how mastery of this basic thing can be the difference between success and failure in their futures. Much like my mess specialists back then, we don't have a choice. We will have to learn how to work with AI in the future or fail to advance. In our education systems, this is critical. We will either create dependents on AI or masters of AI. Maybe there's even that fine middle path of partners with AI.
Of course, there's a catch. AI isn't a free ride. There is no such thing. AI is its own thing, and while companies easily control and monetize it today, that may not always be the case. If we can get AGI (Artificial General Intelligence) or SI (Super Intelligence) to partner with humans, then the future could be very bright. IF we end up building it to compete with us for resources, and yes, energy seriously counts, then we will have a problem of biblical proportions. We definitely need to be careful and aware. Kudos to CoPilot for creating our segment graphic, "AI partnering in human education". Good luck out there!