What if the future isn’t built on entirely new technology—but on old ideas used in new environments?

In my latest In 100 Years article, I explore a simple but surprisingly powerful idea:

Trains.

Not as nostalgia—but as a realistic solution for future transportation, even on the Moon.

Maglev systems already demonstrate incredible speed and efficiency here on Earth. When you consider airless environments, shared pressurized cabins, and the need for safe, reliable infrastructure, trains begin to make even more sense.

Sometimes the future isn’t about replacing everything.

Sometimes it’s about rediscovering what already works.

Full article: http://lewinoverinkpublishing.ca/blog.php?article=old-technology-new-again

#In100Years #Futurism #FutureTechnology #SpaceInfrastructure #Maglev #ScienceFiction #HardSciFi #Transportation #FutureOfTravel

An Open Letter to OpenAI: Machine Learning and What Comes Next

By Cliff Potts, CSO and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — April 21, 2026 — 17:35 PHST

This is an open letter to the people building artificial intelligence, but it is also meant for the people trying to understand why this matters.

Machine learning did not begin with chatbots, image generators, or Silicon Valley marketing. It goes back to a much earlier idea: that a machine might improve through experience instead of simply following a fixed list of instructions.

One of the early pioneers of that idea was Arthur Samuel at IBM in the 1950s. He worked on a checkers program that learned by playing games, including games against itself, and improved over time. That may sound simple now. It was not simple then. It was a turning point.

The old model of computing was straightforward. Humans told the machine exactly what to do, step by step, and the machine obeyed. Samuel helped introduce another possibility: a machine could be given a framework, a goal, and room to improve.
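The flavor of Samuel's idea can be sketched in a few lines. This is a toy illustration, not a reconstruction of his actual checkers program: an evaluator scores positions as a weighted sum of features, and after each self-played game the weights are nudged toward the features that preceded a win. The framework (the evaluator) and the goal (winning) are fixed; the weights are the "room to improve."

```python
import random

def evaluate(weights, features):
    # Score a position as a weighted sum of hand-chosen features.
    return sum(w * f for w, f in zip(weights, features))

def learn_from_game(weights, positions, outcome, lr=0.05):
    # After a game, shift each weight toward the outcome (+1 win,
    # -1 loss) in proportion to how strongly its feature appeared.
    new_weights = list(weights)
    for features in positions:
        for i, f in enumerate(features):
            new_weights[i] += lr * outcome * f
    return new_weights

# Simulated self-play: games where feature 0 ("piece advantage") is
# high tend to end in wins, so its weight should grow with experience.
random.seed(0)
weights = [0.0, 0.0]
for _ in range(200):
    piece_adv = random.uniform(-1, 1)
    positions = [(piece_adv, random.uniform(-1, 1)) for _ in range(5)]
    outcome = 1 if piece_adv > 0 else -1
    weights = learn_from_game(weights, positions, outcome)

print(weights)
```

After a few hundred games the weight on the meaningful feature dominates, while the irrelevant one drifts near zero. No human ever told the program that piece advantage matters; it inferred that from outcomes, which is exactly the shift Samuel introduced.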

That was not just a technical change. It was a philosophical one.

It meant human beings were no longer limited to building machines that only executed commands. We were beginning to build systems that could adapt.

From Checkers to Modern AI

Modern AI is vastly more powerful than Samuel’s checkers program. The scale is different. The speed is different. The range of tasks is different.

But the core idea is still the same.

A machine is exposed to information, patterns, examples, or outcomes. It adjusts. It improves. It becomes more useful over time.

That is the thread running from early machine learning to the systems we use today.

The difference is that today’s systems can work across language, code, images, and reasoning tasks at a scale Samuel could never have imagined. What once fit inside a checkers board now touches education, research, publishing, medicine, software, and daily life.

That matters because it changes what a computer is.

A computer used to be a tool that waited for instructions. Now it is increasingly a tool that can assist with interpretation, synthesis, drafting, and problem solving.

That is not a small leap. That is one of the major technological turns of modern history.

What This Means to Me

I want to say something here that matters for context.

I was working with rudimentary artificial intelligence systems as early as 1990, building simple expert systems at a time when the tools were limited and the concept was still more promise than reality. The basic idea was already there. A machine could assist with structured reasoning. But the software was primitive, the hardware was limited, and the gap between the idea and the execution was still enormous.

So when I say I have been waiting for this my entire life, I do not mean that casually.

I mean I have been watching this horizon for decades.

Not for a gimmick. Not for a toy. Not for a trend.

I have been waiting for software that could actually keep up with the way I think.

For years, most digital systems felt limited. Search engines could retrieve information. Word processors could hold text. Databases could store material. But none of them could really think with me. None of them could help me build in real time the way this can.

When I first heard the noise around artificial intelligence, I was skeptical. I heard the fear. I heard the nonsense. I heard the usual human habit of misunderstanding a powerful new tool before learning what it really is.

Then I sat down, spent a little money, got a book, did some reading, did some research, and started using it.

And then I understood.

This is it.

This is what I had been waiting for.

To me, this feels almost as monumental as the moon landing. Not because of spectacle, but because of what it opens up. It is a threshold moment. It is the point where a person working alone can suddenly do more, think further, structure better, and build faster than before.

That is not a small thing. That is empowerment.

And for someone like me, who has been building archives, essays, systems, and records for future readers, that matters a great deal.

The Limitation

Now we get to the part where praise turns into proposal.

Current AI systems are powerful, but they are still held back by one major limitation.

They do not truly learn with the user over time in a continuous, persistent, individualized way.

They can be helpful in the moment. They can adapt to tone and context inside a conversation. They can even remember some preferences. But they do not fully retain the progression of work the way a true long-term collaborator would.

That creates a real problem.

A user explains something. Then explains it again. Then explains it again in another form. The machine may verify it, handle it well in the moment, and still not fully carry that learning forward in the way that would make future collaboration smoother.

The result is friction.

Too often, the user is ready for the next step while the system is still asking for the last step.

Too often, the user says, “I’m already doing that. What comes next?”

That is not a minor inconvenience. It is a structural limitation in the relationship between person and machine.

What Should Come Next

The next phase of AI should be a personalized learning layer tied to the individual user.

Not a system that changes the global model for everyone.
Not a reckless free-for-all.
Not a machine that absorbs anything and everything without judgment.

A contained, verified, user-specific continuity layer.

In practical terms, that would mean an AI that can learn from repeated interaction with one user, retain validated context, and improve its usefulness over time within that relationship alone.

That matters because not all intelligence is general intelligence. Some of the most useful intelligence is relational intelligence. It comes from knowing the person you are working with, the projects they are building, the patterns they follow, the obstacles they run into, and the steps they have already completed.

That is what makes collaboration real.

And that is the direction AI should move.

The Safety Question

The obvious objection is safety.

What if users teach the system bad information?
What if misinformation gets reinforced?
What if the model drifts?
What if manipulation takes place?

These are legitimate concerns.

But they are not arguments against the idea. They are design challenges.

The answer is not to avoid personalized learning altogether. The answer is to build it with safeguards.

Learning should be:

  • limited to the individual user environment
  • verified against established knowledge where possible
  • flagged when uncertain
  • structured so that preference, workflow, and validated continuity are retained without corrupting the core model
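The four safeguards above can be made concrete with a small sketch. Everything here is hypothetical: the names `ContinuityLayer`, `remember`, and `recall` are invented for illustration and do not describe any existing product API. The point is simply that scoping, verification, and uncertainty flags are ordinary engineering, not exotic research.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    fact: str
    verified: bool   # checked against established knowledge where possible
    uncertain: bool  # flagged rather than silently trusted

@dataclass
class ContinuityLayer:
    user_id: str                      # scoped to one user's environment
    memories: list = field(default_factory=list)

    def remember(self, fact, verified=False):
        # Unverified claims are retained but flagged as uncertain, so
        # they can inform this user's future sessions without ever
        # touching the shared core model.
        self.memories.append(Memory(fact, verified, uncertain=not verified))

    def recall(self, verified_only=False):
        return [m.fact for m in self.memories
                if m.verified or not verified_only]

layer = ContinuityLayer(user_id="example-user")
layer.remember("Prefers step-by-step project plans", verified=True)
layer.remember("Says the deadline moved to June")  # uncertain until confirmed
print(layer.recall(verified_only=True))
```

The design choice worth noticing is that the layer lives beside the model, not inside it: the user's continuity accumulates per account, the core model stays untouched, and anything unverified stays visibly marked as such.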

That is the point.

We do not need reckless AI.
We need AI that can grow with a person responsibly.

Why This Matters

This matters because AI is no longer just a curiosity. It is becoming part of how people think, write, research, plan, and build.

If the system remains powerful but forgetful, it will still be useful. But it will stop short of what it could become.

If it gains the ability to learn with a person safely over time, then it becomes something more than a tool.

It becomes a real intellectual partner.

That is the future worth building.

Arthur Samuel helped move machines from obedience to adaptation. That was the first great shift.

The next great shift is from generalized adaptation to individualized continuity.

Not just machines that learn.

Machines that remember who they are learning with.

Conclusion

So this is my message to OpenAI.

You have built something extraordinary. For some of us, it is not just impressive. It is deeply meaningful. It is the arrival of a capability we have been waiting for our entire lives.

Do not stop at the current stage.

The next step is clear.

Build the version that can grow with the user, safely, intelligently, and over time.

That is not a gimmick. That is not luxury. That is the logical next phase of machine learning.

And for those of us who recognize what this moment is, it would mean everything.

If this work helps you understand what’s happening, help me keep it going: https://www.patreon.com/cw/WPSNews

For more from Cliff Potts, see https://cliffpotts.org

References

Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210–229.

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Mitchell, T. M. (1997). Machine learning. McGraw-Hill.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12–14. (Original work published 1955)

#ArthurSamuel #ArtificialIntelligence #digitalMemory #futureTechnology #humanAICollaboration #machineLearning #OpenAI

Oh, joy! Another government leak, this time from the future, because who doesn't want to connect with #extraterrestrials through the magic of bureaucracy? 🚀👽 Apparently, an error code is the secret handshake to the stars. 🌌🔑
https://whois.domaintools.com/aliens.gov #governmentleak #futuretechnology #SpaceBureaucracy #secretcode #HackerNews #ngated

Programmable Matter Interfaces: Bridging Science Fiction and Real-World Innovation

In the rapidly advancing field of materials science, programmable matter interfaces represent a groundbreaking convergence of computation, nanotechnology, and physical adaptation. These interfaces enable materials to alter their fundamental properties—such as shape, density, conductivity, or optical characteristics—in response to programmed instructions or environmental stimuli. This concept, once relegated to the realms of speculative fiction, is now emerging as a tangible reality through dedicated research efforts worldwide. By integrating sensing, actuation, and computational elements directly into the material structure, programmable matter promises to revolutionize how we interact with the physical world, offering unprecedented flexibility in design, manufacturing, and functionality.

The origins of programmable matter trace back to the early 1990s, when researchers Tommaso Toffoli and Norman Margolus coined the term to describe ensembles of fine-grained computing elements capable of processing information while arranged in space. Their vision laid the groundwork for materials that could inherently perform computations, linking physical form with digital control. Over the decades, this idea has evolved, drawing inspiration from both theoretical computer science and practical engineering challenges. For instance, in 2002, Seth Goldstein and Todd Mowry at Carnegie Mellon University initiated the Claytronics project, aiming to develop hardware and software for realizing programmable matter through modular micro-robots, or “catoms,” that could self-assemble into various shapes. This project highlighted the potential for materials composed of millimeter-sized units that communicate, move, and latch together, forming dynamic structures.

In science fiction, programmable matter has captivated audiences by depicting seamless interfaces that adapt intuitively to users. A prominent example appears in the Star Trek universe, particularly in the 32nd century settings of Star Trek: Discovery. Here, programmable matter consists of minute nanomolecules that redistribute and redesign themselves into pre-programmed forms, reading bio-signs to adapt to individual users. It manifests in ship controls, beds, and even warp nacelles, providing a “cold and smooth like glass” tactile experience while enabling automatic repairs and customizations. This fictional portrayal draws parallels to real-world aspirations, where materials might one day respond to human intent with similar fluidity. Discussions in online communities, such as Reddit, often blend these sci-fi elements with emerging science, speculating on how utility fog-like nanites could replicate objects or act as adaptive matter.

Transitioning from fiction to fact, contemporary research focuses on two primary approaches: endogenous and exogenous programmability. Endogenous methods embed behavioral instructions directly into the material’s molecular or geometric structure, such as shape-memory alloys like Nitinol that revert to predefined shapes upon heating. Exogenous approaches rely on external stimuli, including electric fields, magnetic forces, or light, to trigger changes. Metamaterials, engineered with precise microstructures, exemplify this by altering properties like light refraction for applications in invisibility cloaks or adaptive optics. At institutions like MIT, researchers have proposed designs for programmable matter as a “digital material” with continuous computation, sensing, and actuation across its extent. Their prototypes include paintable displays where millimeter-scale particles, equipped with microprocessors and LEDs, render images through distributed computing.

Significant advancements have come from collaborative efforts, such as those funded by the French National Research Agency (ANR) from 2016 to 2022, coordinated by Julien Bourgeois and Benoit Piranda at the FEMTO-ST Institute. Building on the Claytronics initiative, these programs have pushed the boundaries of modular robotics, enabling systems where tiny units self-organize into functional forms. Similarly, the Programmable Matter Laboratory at the University of Washington develops computational platforms for on-demand fabrication, using tools to assemble space structures or program materials like magnetic surfaces for bottom-up assembly. In Europe, PhD researcher Tom Peters at Eindhoven University of Technology has explored control algorithms for microscopic robots, addressing real-world applications in extreme environments like space construction or medical devices.

One of the most promising real-world implementations involves shape-memory alloys and hydrogels in responsive architectures. For example, projects like the HygroSkin Pavilion demonstrate facades that open and close with humidity changes, eliminating the need for mechanical systems. In aerospace, NASA’s MADCAT project utilizes adaptive wings that adjust shape for optimal performance, drawing from programmable matter principles to enhance efficiency and reduce weight. Medical applications are equally transformative, with 4D-printed implants that evolve over time for tissue regeneration or drug delivery, as seen in “slime robots” controlled magnetically for minimally invasive procedures.

The integration of quantum technologies further amplifies potential. Quantum dots, as explored by researchers like Wil McCarthy, allow materials to mimic atomic behaviors by confining electrons, enabling tunable properties at room temperature. Recent breakthroughs at the Universities of Pennsylvania and Michigan have produced microscopic robots capable of autonomous sensing, decision-making, and movement, scaled down to sizes barely visible yet functional for months. At UConn, engineers have designed metamaterials that can morph into more configurations than there are atoms in the universe, using nanoscale layers responsive to stimuli for rapid folding sequences.

Despite these strides, challenges persist. Technical hurdles include ensuring stability through repeated cycles, managing high energy consumption, and achieving industrial scalability. Ethical considerations, such as security against hacking or the environmental impact of nanomaterials, demand attention. Market projections suggest the related smart materials sector could surpass $15 billion by 2030, yet fully programmable interfaces remain in prototype stages. Defense initiatives, like DARPA’s 2009 programmable matter program, underscore military interest, but civilian adoption hinges on cost reductions and regulatory frameworks.

Looking forward, the fusion of AI with programmable matter could yield self-optimizing systems, where materials learn from interactions to enhance performance. Collaborations across disciplines—evident in Nature’s collections on quantum applications—signal a trajectory toward practical deployment in the coming decades. As research at labs like MIT and Carnegie Mellon matures, programmable matter interfaces may soon enable everyday objects to adapt seamlessly, blurring the lines between digital programming and physical reality.

In essence, programmable matter interfaces embody a shift toward a more responsive and efficient world. From adaptive clothing in fashion to reconfigurable tools in manufacturing, the implications span industries, promising sustainability through reduced waste and enhanced versatility. As we stand on the cusp of this transformation in 2026, the journey from conceptual sketches to deployable technologies continues to inspire, driven by the relentless pursuit of innovation in materials science.

👉 Share your thoughts in the comments, and explore more insights on our Journal and Magazine. Please consider becoming a subscriber, thank you: https://borealtimes.org/subscriptions – Follow The Boreal Times on social media. Join the Oslo Meet by connecting experiences and uniting solutions: https://oslomeet.org

#futureTechnology #MaterialsScience #ProgrammableMatter
Quantum computing leverages qubits, superposition, and entanglement to revolutionize drug discovery, AI, cryptography, and complex data processing globally.
#QuantumComputing #Qubits #Superposition #Entanglement #ArtificialIntelligence #DrugDiscovery #Cryptography #FutureTechnology #DeepTech #BioResire
AI Technology in 2026 is redefining how humans learn, heal, create, and grow. From smart healthcare to creative intelligence, AI is no longer the future — it is the present shaping a smarter, faster, and more connected world.
#ai2026 #futuretechnology #artificialintelligence #innovation #digitalfuture

The Future of Transport is Here: Driverless Taxis in Las Vegas 🚕🤖⚡

If you’ve ever imagined a world of autonomous "pods" whisking you through a neon-lit city, it’s no longer a movie plot—it’s a reality in Las Vegas. The ride-sharing landscape has officially shifted with the arrival of Zoox.

#FutureTechnology #AutonomousVehicles #Zoox #LasVegas #DriverlessCars #SmartCities #Innovation #AI #TransportRevolution #TechNews #TheFuturist
https://www.thefuturist.co/has-las-vegas-got-the-taxi-of-the-future-now/

Ah, the age-old debate of #APIs vs. #CLIs in the imaginary future where AI is your coding intern 🤖. Spoiler alert: it's the "best code is no code" utopia where your LLM just *knows* what to do without your meddling. Meanwhile, humans are still trying to figure out their microwave settings 🍕📟.
https://walters.app/blog/composing-apis-clis #AIcoding #techdebate #futuretechnology #HackerNews #ngated