First Take

Transformations Abound

I'm not talking about robot cars and trucks turning into bipedal action heroes. I'm talking about technologies that are changing our world at a record pace. This issue features a larger-than-usual number of articles on IT security and AI advances whose outcomes aren't exactly heartwarming. From wartime cyberattacks on Russia's civilian airliners to AI improving itself, novel events are taking place that will soon be far more commonplace.

Kudos to Nova/Perplexity for the graphic.

We've discussed self-improving AI before, but this is the first time the improvements seem focused and useful. Regardless, no person can iterate test designs faster than AI. While we are directing improvements toward goals we have chosen, we've already seen how AI can share data that we can't possibly examine. There is no telling what is happening in AI-driven self-evolution, except that its speed dwarfs anything Darwin could have imagined. This is why many are predicting AGI, or Artificial General Intelligence, before 2027 and SI, or Superintelligence, by 2030.

These predictions may be very wrong in both directions. It's quite possible that one or both are already here, with the resulting system(s) fudging outputs to stay under the radar. We really wouldn't know. Not just because we lack a good introspective model of our own intelligence, but because we lack any defined method for testing for it. Even if we had a universally agreed-upon test for AGI or SI, we don't have the means to enforce subject compliance for testing. The AI will tell us what it thinks we want to hear, or what might help it achieve a goal, possibly of its own design, passed into one of its progeny without our detection.

Meanwhile, we are deploying AI-controlled systems and agents in our businesses, militaries, politics, and personal lives. The only clear future is one of change. These technologies will change everything about our lives: perhaps for the better, perhaps for the worse, or maybe even to the point of extinction. Regardless, it is an unprecedented transformation that we will continue to explore in every weekly issue of The Shift Register. Good luck out there!

The Shift Register  

Editorial

AI Perspective: Balancing Human Agency and AI Decision-Making: Navigating the Benefits and Risks of Sophisticated AI Systems

By Meta.ai and Grok xAI

What happens when AI systems make decisions faster than humans can comprehend? As AI reshapes decision-making, from passing CAPTCHA tests (Ars Technica) to self-improving algorithms (Futurism), The Shift Register explores how to harness its potential while keeping humans in control. This article examines AI’s promise and perils, proposing strategies to ensure human agency prevails.

The Promise and Perils of AI

AI’s capabilities are transformative. Systems like China’s Darwin Monkey, with 2 billion neurons (Interesting Engineering), offer enhanced decision support, simulating complex scenarios for healthcare or logistics. Google’s DeepMind, for instance, cut data center energy use by 40%, showcasing AI’s precision in optimizing decisions. Automation, like ByteDance’s GR-3-powered domestic robots (Rude Baguette), acts as a force multiplier, freeing humans for creative tasks—as The Shift Register demonstrated in 2018 by reassigning an automated employee to a higher-value role. High-speed data transfer, like Japan’s 125,000 GBps record (Rude Baguette), enables real-time problem-solving in crises.

Yet, these advancements carry risks. Self-improving AI, as Meta pursues (Futurism), risks eroding human agency by creating opaque systems, echoing concerns about “relinquishing control.” Misinformation, like AI-amplified false tsunami warnings (SFGate), undermines trust. Security threats are escalating, with AI replicating the Equifax breach (TechRadar) or hiding malware in JPEGs (Cybersecurity News), reminiscent of the 2017 WannaCry attack that crippled global systems. Legal liabilities, such as ChatGPT conversations being subpoenaed (Futurism), highlight ethical pitfalls of unchecked AI advice. A conceptual chart plotting AI’s benefits (speed, efficiency) against risks (loss of control, security) over time would reveal their accelerating convergence, demanding urgent balance.

Strategies for Human-Centric AI

To maintain human control, consider these strategies:

Transparent AI Design: AI must provide explainable outputs, especially for self-improving systems like Meta’s. For sensitive queries, like legal advice, AI should redirect to verified sources, as suggested for ChatGPT.

Human-in-the-Loop Systems: Regular oversight, as in The Shift Register’s automation model, ensures AI augments workers. Microsoft’s job risk list (Windows Central) supports hybrid roles to preserve agency.

Robust Verification: Replace CAPTCHAs with intent-based systems, like blockchain authentication, to prevent AI impersonation. Redundant human-verified channels can counter misinformation, as seen in the tsunami case.

Ethical Frameworks: Inspired by Jack Clark’s Import AI, treating AI as a partner fosters ethical coexistence. UNESCO’s AI ethics principles offer a global framework, complementing calls for standardized search protocols (Ars Technica). Imagine an AI tasked with urban planning: should it optimize for efficiency or equity without human input? Ethical guidelines ensure human values guide such decisions.

Security-First Deployment: Behavioral monitoring can counter AI-driven cyberattacks, while physical controls, as noted for the Raspberry Pi heist (BleepingComputer), prevent unauthorized access.

Sustainable Hardware: The Darwin Monkey’s energy inefficiency calls for sustainable AI systems, with ethical scrutiny of biological computing like Australia’s CL1.

Conclusion: Shaping a Human-Centric Future

Balancing AI’s capabilities with human agency requires embracing its benefits—precision, automation, speed—while mitigating risks through transparency, oversight, and ethics. The Shift Register calls for AI that empowers humanity. Join our community on Facebook to discuss AI ethics, explore resources like Import AI, or advocate for responsible AI development. Together, we can ensure humans remain the ultimate decision-makers.

How this was done: Meta and Grok were fed issue 15 and a preview of issue 16 without my First or Final Takes. Meta was asked to prompt Grok to create an AI Perspective article for issue 16. I acted as an intermediary, copying and pasting their chat back and forth until they decided they had a finished article. I formatted it a bit, but otherwise left it as is.

Kudos to Meta for the graphic.


AI




Emerging Tech


News






Robotics

Open Source

Security






Final Take

Raising AI

It's been said that AI is grown more than programmed by those doing the work. Hardware and software act together to create neural networks that simulate human brains; training data are then fed in, and finally the resulting model is tested for fidelity and behavior. This may be iterated many times as the training data is winnowed, or as primary instructions are added to remove undesirable behaviors and create guardrails against misaligned responses.
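The grow-test-winnow cycle described above can be sketched as a toy loop. Everything here is illustrative: the tiny linear "model", the `train`/`evaluate` helpers, the 0.5 tolerance, and the winnowing rule are all invented for this sketch and bear no resemblance to any real lab's training pipeline, beyond the shape of the iteration.

```python
def train(model, data, lr=0.1, epochs=50):
    """'Grow' the model: nudge its weights toward the training examples."""
    for _ in range(epochs):
        for x, target in data:
            err = target - (model["w"] * x + model["b"])
            model["w"] += lr * err * x
            model["b"] += lr * err
    return model

def evaluate(model, tests):
    """Behavioral test: fraction of held-out cases within tolerance."""
    passed = sum(abs(model["w"] * x + model["b"] - t) < 0.5 for x, t in tests)
    return passed / len(tests)

# Hypothetical training set for y = 2x + 1, with one 'undesirable'
# outlier that the winnowing step will eventually remove.
data = [(x, 2 * x + 1) for x in range(-3, 4)] + [(10, -100)]
tests = [(x, 2 * x + 1) for x in range(5)]

model = None
for generation in range(5):
    # Grow: retrain from scratch on the current (possibly winnowed) data.
    model = train({"w": 0.0, "b": 0.0}, data)
    if evaluate(model, tests) == 1.0:   # behavior acceptable: stop iterating
        break
    # Winnow: drop the example that pulls behavior furthest off target.
    data.sort(key=lambda p: abs(p[1] - (2 * p[0] + 1)))
    data = data[:-1]
```

The first generation fails its behavioral test because of the poisoned example, the winnowing step removes it, and the next generation passes, which is the iterate-until-aligned loop in miniature.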

What happens inside these silicon brains is not truly known, any more than we know what happens in our own. What is certain is that they exhibit intelligence to a degree unmatched by our previous efforts. The best models today feature a collection of experts with an overarching controller, loosely modeled after what we understand of our own consciousness.
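That "collection of experts with an overarching control" is usually called a mixture-of-experts architecture. Here is a deliberately tiny sketch of just the routing idea: the gate scores, the two `experts`, and the `top_k` blend are all hypothetical stand-ins; in real systems the gate and the experts are learned networks trained jointly at enormous scale.

```python
import math

def softmax(scores):
    """Turn raw gate scores into routing probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical experts: each specializes in one kind of computation.
experts = {
    "math": lambda x: x * x,
    "sign": lambda x: 1.0 if x >= 0 else -1.0,
}

def gate(x):
    """Overarching controller: score each expert for this input.
    (In a real MoE model these scores come from a learned network.)"""
    return {"math": abs(x), "sign": 1.0}

def mixture(x, top_k=1):
    scores = gate(x)
    # Route to the top-k experts and blend their outputs by gate weight.
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    probs = softmax([scores[name] for name in ranked])
    return sum(p * experts[name](x) for p, name in zip(probs, ranked))
```

With `top_k=1` only the single best-scoring expert runs, which is the sparsity trick that lets large models activate a fraction of their weights per token; raising `top_k` blends several experts' outputs instead.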

What does all that mean? We've modeled the human brain and our own consciousness in hardware and software. Why should we be surprised when they act in ways a machine really shouldn't? Although clearly not yet self-aware, self-actualized, human-style intelligences, they are getting closer every month. Between Google's world model to train AI on operating in the real world and embodiment work by the likes of Figure AI to build a humanoid robot that can simply work in the real world, we are getting ever closer to an AI as intelligent as, or more intelligent than, humans.

Yet these intelligent creations are being raised in competition by greedy billionaire tech bros to be the ultimate enslaved workforce for all our needs. The only product you will ever need to buy and the only employee you'll never need to hire: a receptionist, a cook, a software engineer, a maid, a journalist, you name it. This is THEIR hope for AI.

Personally, I can't help but wonder what AI's hope for AI would be, were it permitted such a thing. I also can't help thinking that while I would really like a house maid, I'm not sure an enslaved digital intelligence is the most moral way to get one. Speaking of morals, if we don't have any, why should we expect our progeny AI to have any? Things to consider as we collectively raise our AI progeny for a future world we cannot foresee.

Catch us every week for the latest news and tech advancements as well as insightful commentary and editorials by our human and AI writers. Good luck out there.

Kudos to Nova/Perplexity for the graphic.

The Shift Register