An Open Letter to OpenAI: Machine Learning and What Comes Next

By Cliff Potts, CSO and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — April 21, 2026 — 17:35 PHT

This is an open letter to the people building artificial intelligence, but it is also meant for the people trying to understand why this matters.

Machine learning did not begin with chatbots, image generators, or Silicon Valley marketing. It goes back to a much earlier idea: that a machine might improve through experience instead of simply following a fixed list of instructions.

One of the early pioneers of that idea was Arthur Samuel at IBM in the 1950s, the researcher who coined the term "machine learning." He worked on a checkers program that learned by playing games, including games against itself, and improved over time. That may sound simple now. It was not simple then. It was a turning point.

The old model of computing was straightforward. Humans told the machine exactly what to do, step by step, and the machine obeyed. Samuel helped introduce another possibility: a machine could be given a framework, a goal, and room to improve.

That was not just a technical change. It was a philosophical one.

It meant human beings were no longer limited to building machines that only executed commands. We were beginning to build systems that could adapt.

From Checkers to Modern AI

Modern AI is vastly more powerful than Samuel’s checkers program. The scale is different. The speed is different. The range of tasks is different.

But the core idea is still the same.

A machine is exposed to information, patterns, examples, or outcomes. It adjusts. It improves. It becomes more useful over time.

That is the thread running from early machine learning to the systems we use today.
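That adjust-and-improve loop can be illustrated with a toy sketch. This is not Samuel's actual program and not how any modern system is implemented; the game, the states, and the update rule are all invented for illustration. The point is only the shape of the idea: play, observe the outcome, and nudge an estimate toward experience.

```python
import random

def learn_from_experience(n_games=5000, alpha=0.1, seed=0):
    """Toy value learning: a tabular estimate of how good each game
    position is, nudged toward observed outcomes over many games.
    Positions are just integers 0..4 from a made-up game."""
    rng = random.Random(seed)
    value = {s: 0.5 for s in range(5)}   # initial guess: every state is 50/50
    for _ in range(n_games):
        state = rng.randrange(5)
        # Made-up rule: higher-numbered states win more often
        # (a stand-in for the outcome of a real game).
        outcome = 1.0 if rng.random() < state / 4 else 0.0
        # Move the estimate a small step toward what actually happened.
        value[state] += alpha * (outcome - value[state])
    return value

estimates = learn_from_experience()
```

After enough games, the estimates for strong positions drift toward 1.0 and weak positions toward 0.0, with no human ever telling the program which was which. That is the whole philosophical shift in miniature: the rules of the update were fixed, but the knowledge was earned.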

The difference is that today’s systems can work across language, code, images, and reasoning tasks at a scale Samuel could never have imagined. What once fit inside a checkers board now touches education, research, publishing, medicine, software, and daily life.

That matters because it changes what a computer is.

A computer used to be a tool that waited for instructions. Now it is increasingly a tool that can assist with interpretation, synthesis, drafting, and problem solving.

That is not a small leap. That is one of the major technological turns of modern history.

What This Means to Me

I want to say something here that matters for context.

I was working with rudimentary artificial intelligence systems as early as 1990, building simple expert systems at a time when the tools were limited and the concept was still more promise than reality. The basic idea was already there. A machine could assist with structured reasoning. But the software was primitive, the hardware was limited, and the gap between the idea and the execution was still enormous.

So when I say I have been waiting for this my entire life, I do not mean that casually.

I mean I have been watching this horizon for decades.

Not for a gimmick. Not for a toy. Not for a trend.

I have been waiting for software that could actually keep up with the way I think.

For years, most digital systems felt limited. Search engines could retrieve information. Word processors could hold text. Databases could store material. But none of them could really think with me. None of them could help me build in real time the way this can.

When I first heard the noise around artificial intelligence, I was skeptical. I heard the fear. I heard the nonsense. I heard the usual human habit of misunderstanding a powerful new tool before learning what it really is.

Then I sat down, spent a little money, got a book, did some reading, did some research, and started using it.

And then I understood.

This is it.

This is what I had been waiting for.

To me, this feels almost as monumental as the moon landing. Not because of spectacle, but because of what it opens up. It is a threshold moment. It is the point where a person working alone can suddenly do more, think further, structure better, and build faster than before.

That is not a small thing. That is empowerment.

And for someone like me, who has been building archives, essays, systems, and records for future readers, that matters a great deal.

The Limitation

Now we get to the part where praise turns into proposal.

Current AI systems are powerful, but they are still held back by one major limitation.

They do not truly learn with the user over time in a continuous, persistent, individualized way.

They can be helpful in the moment. They can adapt to tone and context inside a conversation. They can even remember some preferences. But they do not fully retain the progression of work the way a true long-term collaborator would.

That creates a real problem.

A user explains something. Then explains it again. Then explains it again in another form. The machine may verify it, handle it well in the moment, and still not fully carry that learning forward in the way that would make future collaboration smoother.

The result is friction.

Too often, the user is ready for the next step while the system is still asking for the last step.

Too often, the user says, “I’m already doing that. What comes next?”

That is not a minor inconvenience. It is a structural limitation in the relationship between person and machine.

What Should Come Next

The next phase of AI should be a personalized learning layer tied to the individual user.

Not a system that changes the global model for everyone.
Not a reckless free-for-all.
Not a machine that absorbs anything and everything without judgment.

A contained, verified, user-specific continuity layer.

In practical terms, that would mean an AI that can learn from repeated interaction with one user, retain validated context, and improve its usefulness over time within that relationship alone.

That matters because not all intelligence is general intelligence. Some of the most useful intelligence is relational intelligence. It comes from knowing the person you are working with, the projects they are building, the patterns they follow, the obstacles they run into, and the steps they have already completed.

That is what makes collaboration real.

And that is the direction AI should move.

The Safety Question

The obvious objection is safety.

What if users teach the system bad information?
What if misinformation gets reinforced?
What if the model drifts?
What if manipulation takes place?

These are legitimate concerns.

But they are not arguments against the idea. They are design challenges.

The answer is not to avoid personalized learning altogether. The answer is to build it with safeguards.

Learning should be:

  • limited to the individual user environment
  • verified against established knowledge where possible
  • flagged when uncertain
  • structured so that preference, workflow, and validated continuity are retained without corrupting the core model
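The safeguards above can be sketched as a small, user-scoped memory layer. This is a hypothetical design, not any vendor's actual architecture or API; the class names and the trivial verifier are invented for illustration. The sketch shows the principle: entries live with one user, carry a verification status, and are flagged when uncertain rather than silently absorbed.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    text: str
    verified: bool      # checked against established knowledge
    uncertain: bool     # flagged for review rather than trusted

@dataclass
class UserContinuityLayer:
    """Hypothetical per-user memory: scoped to one user, never written
    back to the shared model, and explicit about what is unverified."""
    user_id: str
    entries: list = field(default_factory=list)

    def learn(self, text, verifier=None):
        verified = bool(verifier(text)) if verifier else False
        entry = MemoryEntry(text=text, verified=verified,
                            uncertain=not verified)
        self.entries.append(entry)
        return entry

    def validated_context(self):
        # Only verified material is carried forward automatically.
        return [e.text for e in self.entries if e.verified]

    def flagged(self):
        # Uncertain claims are retained but surfaced, not reinforced.
        return [e.text for e in self.entries if e.uncertain]

# Usage, with a stand-in verifier in place of a real fact check:
layer = UserContinuityLayer(user_id="example-user")
layer.learn("Project drafts are organized by chapter.", verifier=lambda t: True)
layer.learn("An unchecked claim from conversation.", verifier=lambda t: False)
```

The design choice that matters is the separation: validated continuity flows forward, uncertain material is kept visible but quarantined, and nothing touches the core model.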

That is the point.

We do not need reckless AI.
We need AI that can grow with a person responsibly.

Why This Matters

This matters because AI is no longer just a curiosity. It is becoming part of how people think, write, research, plan, and build.

If the system remains powerful but forgetful, it will still be useful. But it will stop short of what it could become.

If it gains the ability to learn with a person safely over time, then it becomes something more than a tool.

It becomes a real intellectual partner.

That is the future worth building.

Arthur Samuel helped move machines from obedience to adaptation. That was the first great shift.

The next great shift is from generalized adaptation to individualized continuity.

Not just machines that learn.

Machines that remember who they are learning with.

Conclusion

So this is my message to OpenAI.

You have built something extraordinary. For some of us, it is not just impressive. It is deeply meaningful. It is the arrival of a capability we have been waiting for our entire lives.

Do not stop at the current stage.

The next step is clear.

Build the version that can grow with the user, safely, intelligently, and over time.

That is not a gimmick. That is not luxury. That is the logical next phase of machine learning.

And for those of us who recognize what this moment is, it would mean everything.

If this work helps you understand what’s happening, help me keep it going: https://www.patreon.com/cw/WPSNews

For more from Cliff Potts, see https://cliffpotts.org

References

Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210–229.

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Mitchell, T. M. (1997). Machine learning. McGraw-Hill.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12–14. (Original work published 1955)

#ArthurSamuel #ArtificialIntelligence #DigitalMemory #FutureTechnology #HumanAICollaboration #MachineLearning #OpenAI
