RE: https://bsky.app/profile/did:plc:halertijfbxiekwigrzt2fvc/post/3mh75cjviy52w
The State of AI?
I came across this article on CNBC today. It was cited as part of an explanation for why the U.S. stock market was tanking a bit at the moment (the Dow Jones Industrial Average lost more than 600 points in trading today). Apparently folks are rattled about AI again, and this article was mentioned as explaining why investors are spooked. I read every word of this piece by Matt Shumer, even though it was posted on Twitter. I'm including the entire text of his article below, along with the link to the original post on Twitter.
It's an interesting look at the future of AI. As a software engineer and a leader in that space, I can tell you AI is moving incredibly fast, and some of the demos I've seen in the past three weeks in my new position have rather blown me away. There are a lot of bad things happening out there in the space; I freely admit that. But there are some very impressive advancements happening as well: functionality I didn't think I would see before I retired. For example, I watched a developer write a pretty impressive web application in less than two hours, using a few prompts to Claude (an AI designed for coding). It was more than a starting point, and the developer didn't need to make any changes to the code; it just worked.
Here's one take on where AI could be taking us. I'm impressed by the tech, and more importantly, I'm grateful I am getting close to retirement. I have no idea what the United States, or the world for that matter, is going to look like in the coming years.
This is a long read, but if you're interested in the space, I feel it's worth the time commitment.
This article really made me think.
Note: I initially published this post with the reposted article as a big quote. I feel like that made the text hard to read, so I removed that styling. Everything below this paragraph in this blog entry was written by Matt Shumer, as posted on Twitter.
https://twitter.com/mattshumer_/status/2021256989876109403
Something Big Is Happening
Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.
I think we're in the "this seems overblown" phase of something much, much bigger than Covid.
I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't… my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.
I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies… OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you… we just happen to be close enough to feel the ground shake first.
But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.
I know this is real because it happened to me first
Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is that this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.
For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last… it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.
Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch… more like the moment you realize the water has been rising around you and is now at your chest.
I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.
Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.
I'm not exaggerating. That is what my Monday looked like this week.
But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.
I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.
And here's why this matters to you, even if you don't work in tech.
The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers… it was just a side effect of where they chose to aim first.
They've now done it. And they're moving on to everything else.
The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.
"But I tried AI and it wasn't that good"
I hear this constantly. I understand it, because it used to be true.
If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.
That was two years ago. In AI time, that is ancient history.
The models available today are unrecognizable compared to what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" (which has been going on for over a year) is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous… because it's preventing people from preparing.
Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.
I think of my friend, who's a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won't work. It's not built for his specialty, it made an error when he tested it, it doesn't understand the nuance of what he does. And I get it. But I've had partners at major law firms reach out to me for advice, because they've tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it's like having a team of associates available instantly. He's not using it because it's a toy. He's using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it'll be able to do most of what he does before long… and he's a managing partner with decades of experience. He's not panicking. But he's paying very close attention.
The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They're blown away by what it can already do. And they're positioning themselves accordingly.
How fast this is actually moving
Let me make the pace of improvement concrete, because I think this is the part that's hardest to believe if you're not watching it closely.
In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.
By 2023, it could pass the bar exam.
By 2024, it could write working software and explain graduate-level science.
By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.
On February 5th, 2026, new models arrived that made everything before them feel like a different era.
If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.
There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.
But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap.
If you extend the trend (and it's held for years with no sign of flattening), we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
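To make that extrapolation concrete, here is a minimal sketch of the compounding math, using only the rough figures quoted above (a roughly five-hour task horizon as of November, and a doubling time somewhere between seven and four months). The numbers and the function name are illustrative assumptions for this post, not METR's published methodology:

```python
# Illustrative extrapolation of the doubling trend described above.
# Assumes a ~5-hour autonomous-task horizon today and a fixed doubling
# time; both are rough figures from the text, not METR's actual data.

def task_horizon_hours(months_out: float,
                       current_hours: float = 5.0,
                       doubling_months: float = 7.0) -> float:
    """Task length (in human-expert hours) an AI can complete end-to-end."""
    return current_hours * 2 ** (months_out / doubling_months)

for months in (12, 24, 36):
    slow = task_horizon_hours(months, doubling_months=7.0)  # steady trend
    fast = task_horizon_hours(months, doubling_months=4.0)  # accelerated
    print(f"{months} months out: ~{slow:.0f}h (7-month doubling) "
          f"to ~{fast:.0f}h (4-month doubling)")
```

Measured in working hours (8-hour days, 40-hour weeks), even the slower seven-month doubling lands on roughly a couple of days of autonomous work within a year, about a week within two, and about a month within three, which is where the claim above comes from.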
Dario Amodei, the CEO of Anthropic, has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.
Let that land for a second. If AI is smarter than most PhDs, do you really think it canât do most office jobs?
Think about what that means for your work.
AI is now building the next AI
There's one more thing happening that I think is the most important development and the least understood.
On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:
"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."
Read that again. The AI helped build itself.
This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.
Dario Amodei, the CEO of Anthropic, says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1-2 years away from a point where the current generation of AI autonomously builds the next."
Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know, the ones building it, believe the process has already started.
What this means for your job
I'm going to be direct with you because I think you deserve honesty more than comfort.
Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.
This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.
Let me give you a few specific examples to make this tangible… but I want to be clear that these are just examples. This list is not exhaustive. If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected.
Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The managing partner I mentioned isn't using AI because it's fun. He's using it because it's outperforming his associates on many tasks.
Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these competently and is improving fast.
Writing and content. Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can't distinguish AI output from human work.
Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.
Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.
Customer service. Genuinely capable AI agents… not the frustrating chatbots of five years ago… are being deployed now, handling complex multi-step problems.
A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I'm not sure I believe it anymore.
The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.
Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don't know. Maybe not. But I've already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow.
I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.
Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects.
What you should actually do
I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.
Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month. Two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using.
Second, and more important: don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build the model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.
And don't assume it can't do something just because it seems too hard. Try it. If you're a lawyer, don't just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you're an accountant, don't just ask it to explain a tax rule. Give it a client's full return and see what it finds. The first attempt might not be perfect. That's fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here's the thing to remember: if it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction.
This might be the most important year of your career. Work accordingly. I don't say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.
Have no ego about it. The managing partner at that law firm isn't too proud to spend hours a day with AI. He's doing it specifically because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is.
Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.
Think about where you stand, and lean into what's hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn't happening.
Rethink what you're telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I'm not saying education doesn't matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they're genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.
Your dreams just got a lot closer. I've spent most of this section talking about threats, so let me talk about the other side, because it's just as real. If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month… one that's infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.
Build the habit of adapting. This is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.
Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new… something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.
The bigger picture
I've focused on jobs because it's what most directly affects people's lives. But I want to be honest about the full scope of what's happening, because it goes well beyond work.
Amodei has a thought experiment I can't stop thinking about. Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?
Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."
He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.
The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself… researchers in the field genuinely believe these are solvable within our lifetimes.
The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can't predict or control. This isn't hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.
The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it's too powerful to stop and too important to abandon. Whether that's wisdom or rationalization, I don't know.
What I know
I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.
I know the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours.
I know the people who will come out of this best are the ones who start engaging now, not with fear, but with curiosity and a sense of urgency.
And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it.
We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.
It's about to.
#ai #commentary #reposting