BI reports study finding "AI adoption varies by seniority, with 87% of executives using it on the job, compared with 57% of managers and 27% of employees", concluding "To generate a return on their investment in AI, Dayforce said executives need to bring their managers and workers along for the ride, with training and by channeling their AI enthusiasm toward strategic use cases"
Alternative hypothesis that AI doesn't help people who actually do shit remains unexplored
https://www.businessinsider.com/executives-adopting-ai-higher-rates-than-workers-research-2025-10
Executives are adopting AI at higher rates than employees, study says

Research from HR software company Dayforce suggests that executives are leaning into AI far more than their employees.

Business Insider

#AIIsGoingGreat. See replies in thread for more greatness. Apologists will say stuff like "that's a silly question, just look at the calendar on your phone, no one uses google for that" but I'm sorry, if you dumped a few hundred billion dollars into this magic answer machine and you can't get it to stop doing stupid shit like this, I'm gonna be a *little* skeptical that it's ready to run health care, solve climate change and revolutionize science

https://mastodon.social/@mhoye/115644654628296626

Bonus #AIIsGoingGreat - With the power of #AI, I predict that by 2026 there will be at least 30 "r"s in "year"

(I did this a second time in a new private window because I realized after I closed the first one I should see what the supposedly supporting link was…)

edit: one more for old times' sake

RE: https://infosec.exchange/@timb_machine/115657160615736269

A succinct "WTF are we even doing here" that applies to vast swathes of the use cases GenAI is being hyped for, to which the entire industry has no coherent response 👇
https://mastodon.social/@timb_machine@infosec.exchange/115657160659807487

The optimistic scenario here is this is just a cynical attempt to jump on the AI gravy train knowing the bubble will pop before anything gets built…

https://www.404media.co/nuclear-rian-bahran-iaea-international-symposium-on-artificial-intelligence/

‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants

A presentation at the International Atomic Energy Agency unveiled Big Tech’s vision of an AI and nuclear fueled future.

404 Media
You know it's a great feature when you have to include "why can't I turn it off" in the FAQ, and the answer is "because fuck you"

Today's #AIIsGoingGreat, courtesy of the UK NCSC: "SQL injection can be properly mitigated with parameterised queries, but there's a good chance prompt injection will never be properly mitigated in the same way. The best we can hope for is reducing the likelihood or impact of attacks" - Will this affect the market's willingness to throw more billions on the #LLM bonfire? Probably not, but only time will tell
¯\_(ツ)_/¯

https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection

Prompt injection is not SQL injection (it may be worse)

There are crucial differences between prompt and SQL injection which – if not considered – can undermine mitigations.
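The NCSC's point has a concrete shape. A minimal sketch using Python's stdlib sqlite3 (table and data invented for illustration): a parameterised query moves untrusted input out of the code channel entirely, which is exactly the separation an LLM prompt does not have, since instructions and untrusted input travel through the same token stream.

```python
# Why SQL injection is structurally fixable: the ? placeholder sends input
# as pure data, so it can never be parsed as SQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

attacker_input = "' OR 1=1 --"

# Vulnerable: input is spliced into the query text itself, so the injected
# OR 1=1 becomes part of the SQL and matches every row.
unsafe = f"SELECT name FROM users WHERE name = '{attacker_input}'"

# Mitigated: the placeholder keeps the input in a separate data channel;
# the literal string "' OR 1=1 --" matches no user, so nothing comes back.
rows = db.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
```

There is no equivalent "placeholder" for an LLM: anything you feed it, trusted or not, arrives as the same kind of token, which is why the NCSC hedges at "reducing the likelihood or impact" rather than mitigation.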

The technological revolution so inevitable and transformative you have to ban anything that might slow it down
https://therecord.media/trump-plans-ai-exec-order-curbing-state-laws
Trump plans executive order curbing state AI laws

Legislators at both the state and federal level have increasingly scrutinized how AI models suck up data for training purposes.

RE: https://infosec.exchange/@malwarejake/115695789576148295

Infosec industry AI hype: AI agents automating full attack chains, AI polymorphic code, SKYNET!!
Infosec AI reality: Using AI products as a glorified pastebin

https://mastodon.social/@malwarejake@infosec.exchange/115695789609999560

'… saying that developers should not "intentionally encode partisan or ideological judgments" into a chatbot's outputs' - Ah yes, a text generating machine derived from a statistical soup of vast amounts of human-written text must not "encode partisan or ideological judgments". Totally realistic requirement there, guys, and definitely not a transparent attempt to impose your own preferred partisan and ideological preferences

https://www.reuters.com/world/us/us-mandate-ai-vendors-measure-political-bias-federal-sales-2025-12-11/

Everyone is rightly mocking the fact the bot suggests apt on Fedora, but I would also like to point out that the wifi "diagnosis" is crap. Sure, checking for updated firmware and drivers is reasonable, but it's vanishingly unlikely the problem is insufficient system RAM or "aggressive driver configuration", whatever the heck that would be

https://fedoramagazine.org/find-out-how-your-fedora-system-really-feels-with-the-linux-mcp-server/

Today's #AIIsGoingGreat "Inasmuch as you are going to have to double-check every “fact” that “AI” provides to you, why not eliminate the middleman and just not use “AI”? It’s not decreasing your workload here, it’s adding to it"

https://mastodon.social/@scalzi/115713494369759678

#AIIsGoingGreat "Asked whether Taiwan is a country, it would repeatedly lower its voice and insist that “Taiwan is an inalienable part of China. That is an established fact” or a variation of that sentiment"

https://www.nbcnews.com/tech/tech-news/ai-toys-gift-present-safe-kids-robot-child-miko-grok-alilo-miiloo-rcna246956

AI kids' toys give explicit and dangerous responses in tests

AI-powered kids' toys like Miko 3 have hit shelves this holiday season, claiming to rely on sophisticated chatbots to animate interactive robots for children.

NBC News

There's a whiff of "OMG X is rotting kids brains" moral panic about this, but also, the entire concept of an #LLM powered toy just seems like asking for trouble in a whole bunch of ways. Even ignoring the possible psychological impacts, it's indisputable the industry does not have a way to create reliable guardrails, and internet connected toys generally have a long history of egregious privacy violations

https://pirg.org/edfund/resources/ai-toys/

The risks of AI toys for kids

AI toys use chatbots to have conversations with kids. With new tech comes new risks, from inappropriate content to long-term social developmental harms.

U.S. PIRG Education Fund

This piece is a genuinely good rundown of how LLMs are BS machines, and then goes on to say "you can use LLMs to get incredible gains in how fast you can do tasks like research, writing code, etc. assuming that you are doing it mindfully with the pitfalls in mind" 🥴

I remain unconvinced that the productivity gains survive the "you must have a subject matter expert verify that every single thing it did" overhead, but YMMV, I guess

https://blog.kagi.com/llms

LLMs are bullshitters. But that doesn't mean they're not useful | Kagi Blog

*Note:* This is a personal essay by Matt Ranger, Kagi’s head of ML In 1986, Harry Frankfurt wrote On Bullshit ( https://en.wikipedia.org/wiki/On_Bullshit ).

'Slop' is Merriam-Webster's 2025 word of the year

Merriam-Webster’s 2025 word of the year is “slop.” The word was first used in the 1700s to mean soft mud. It evolved more generally to mean something of little value. The definition has since expanded to mean “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” In other words, as the dictionary's president says, “absurd videos, weird advertising images, cheesy propaganda, fake news that looks real, junky AI-written digital books." The dictionary has selected one word every year since 2003 to capture and make sense of the current moment.

AP News
#AIIsGoingGreat, getting high on your own supply edition
"… and explained that AIs have emotions and that tech firms were working to create a new form of sentience, according to Discord logs and conversations with members of the group" https://www.404media.co/anthropic-exec-forces-ai-chatbot-on-gay-discord-community-members-flee/
Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee

“We’re bringing a new kind of sentience into existence,” Anthropic's Jason Clinton said after launching the bot.

404 Media
#AIIsGoingGreat "The benefits of using AI in the workplace are not always obvious. According to employees, the most common AI adoption challenge is “unclear use case or value proposition.” Even among those who report using AI, only 16% strongly agree that the AI tools provided by their organization are useful for their work" https://www.gallup.com/workplace/691643/work-nearly-doubled-two-years.aspx
#AIIsGoingGreat "Together, these AI slop channels have amassed more than 63bn views and 221 million subscribers, generating about $117m (£90m) in revenue each year, according to estimates" https://www.theguardian.com/technology/2025/dec/27/more-than-20-of-videos-shown-to-new-youtube-users-are-ai-slop-study-finds

RE: https://researchbuzz.masto.host/@researchbuzz/115782207999466966

On the bright side, if you've just got to set a trillion dollars and change on fire, doing it in a way that doesn't require blowing a bunch of people up is an improvement of sorts, I suppose

https://mastodon.social/@Researchbuzz@researchbuzz.masto.host/115782208145625995

I have mixed feelings about Zitron rants but anyway, collateralized GPU obligations* in the wild! "As a result, these neoclouds are forced to raise billions of dollars in debt, which they collateralize using the GPUs they already have, along with contracts from customers, which they use to buy more GPUs"

https://www.wheresyoured.at/the-enshittifinancial-crisis/#coreweave-is-still-a-time-bomb-by-the-way

* https://mastodon.social/@reedmideke/115518712476917668

The Enshittifinancial Crisis

Soundtrack: Lynyrd Skynyrd — Free Bird This piece is over 19,000 words, and took me a great deal of writing and research. If you liked it, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’

Ed Zitron's Where's Your Ed At

Behold the awesome power of #AI, the product of billions of dollars in GPU time, simplifying your life by precisely summarizing the most pertinent information

#AIIsGoingGreat

Google AI assures me* that "microslop coprolite" is a recent viral internet joke, and with your help, we can retcon that into reality

* with hallucitations that in no way support the claim

#Microslop #Coprolite

Shot: "xAI announced Tuesday it raised $20 billion in an upsized Series E funding round, exceeding its $15 billion target"

https://www.reuters.com/business/musks-xai-raises-20-billion-upsized-series-e-funding-round-2026-01-06/

Chaser: "A WIRED review of outputs hosted on Grok’s official website shows it’s being used to create violent sexual images and videos, as well as content that includes apparent minors" https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/
Grok Is Generating Sexual Content Far More Graphic Than What's on X

A WIRED review of outputs hosted on Grok’s official website shows it’s being used to create violent sexual images and videos, as well as content that includes apparent minors.

WIRED

Second chaser: "Nvidia and Cisco Investments joined as strategic investors" in the series E above

(is it good or bad if the CSAM generating machine is propped up by circular investing? 🤔 )

#AIIsGoingGreat "The problem? Neither of those places exist. Nor do a handful of the other spots marked on the National Weather Service’s forecast graphic, riddled with spelling and geographical errors that the agency confirmed were linked to the use of generative AI"

Thing that boggles my mind about this is NWS has tools for generating forecast maps. It's one of their core products!

https://wapo.st/49r5tse

#GiftArticle #GiftLink

‘Whata Bod’: An AI-generated NWS map invented fake towns in Idaho

Amid a big agency push to use AI models in weather prediction, an AI-generated forecast graphic with errors was pulled from NWS sites.

The Washington Post

Shot: 'OpenAI announced ChatGPT Health, a dedicated section of the AI chatbot designed for “health and wellness conversations” intended to connect a user’s health and medical records to the chatbot in a secure way'

https://arstechnica.com/ai/2026/01/chatgpt-health-lets-you-connect-medical-records-to-an-ai-that-makes-things-up/

ChatGPT Health lets you connect medical records to an AI that makes things up

New feature will allow users to link medical and wellness records to AI chatbot.

Ars Technica

Chaser: 'There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users'

https://arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/

ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

Will LLMs ever be able to stamp out the root cause of these attacks? Possibly not.

Ars Technica

Second chaser: Ars also notes that the ChatGPT Health announcement fine print tells you not to use it for actual health stuff: "Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations"

https://arstechnica.com/ai/2026/01/chatgpt-health-lets-you-connect-medical-records-to-an-ai-that-makes-things-up/


So uh, very common (and odious) tech industry employment / contract language includes something along the lines of "every single thought you have at $company belongs to $company and don't you dare even dream of remembering it outside $company" 🤨
Sharing work products with a subsequent employer would seem risky for the contractor, even if they scrub obviously proprietary or personal data

https://techcrunch.com/2026/01/10/openai-is-reportedly-asking-contractors-to-upload-real-work-from-past-jobs/

OpenAI is reportedly asking contractors to upload real work from past jobs | TechCrunch

An intellectual property lawyer says OpenAI is "putting itself at great risk" with this approach.

TechCrunch

Wired article goes into this a bit more, and yeah, conclusion seems to be it's sketchy AF. Also "An individual who helps companies sell assets after they go out of business told WIRED that a representative of OpenAI inquired about obtaining data from these firms, providing that personally identifiable information could be removed … The source said they chose not to pursue the idea because they were not confident that personal information could be completely scrubbed"

https://www.wired.com/story/openai-contractor-upload-real-work-documents-ai-agents/

OpenAI Is Asking Contractors to Upload Work From Past Jobs to Evaluate the Performance of AI Agents

To prepare AI agents for office work, the company is asking contractors to upload projects from past jobs, leaving it to them to strip out confidential and personally identifiable information.

WIRED
I would also be shocked if OpenAI's own employment agreements don't contain a very strong version of the kind of thing they're demanding their contractors violate. I mean, they had a "no whistle-blowing or reporting crimes" clause https://mastodon.social/@reedmideke/112781518384976211

"The headline is, ‘It’s because of AI,’ but if you read what they actually say, they say, ‘We expect that AI will cover this work.’ Hadn’t done it. They’re just hoping. And they’re saying it because that’s what they think investors want to hear"

"If AI were already replacing labour at scale, productivity growth should be accelerating. Generally, it isn’t"

https://fortune.com/2026/01/07/ai-layoffs-convenient-corporate-fiction-true-false-oxford-economics-productivity/

AI layoffs are looking more and more like corporate fiction that’s masking a darker reality, Oxford Economics suggests

"Firms don't appear to be replacing workers with AI on a significant scale," the firm said. It suspects some are trying to "dress up layoffs" as good news.

Fortune

RE: https://infosec.exchange/@lcamtuf/115877508380778967

Today's #AIIsGoingGreat - A content farm of circuit "schematics" which, despite being utterly incoherent and wildly dangerous to anyone who tried to build them, is presumably somehow generating ad revenue

https://mastodon.social/@lcamtuf@infosec.exchange/115877508836294554

Bonus #AIIsGoingGreat - When The Guardian asked Google about "AI summary" results for medical topics being wildly wrong and dangerous*, a Google spokesperson hand-waved about how "the vast majority of its AI Overviews were factual and helpful, and it continuously made quality improvements"
Days later, The Verge found some were quietly pulled https://www.theverge.com/news/860356/google-pulls-alarming-dangerous-medical-ai-overviews

* https://www.theguardian.com/technology/2026/jan/02/google-ai-overviews-risk-harm-misleading-health-information

Google pulls AI overviews for some medical searches

Google pulls “alarming” and “dangerous” AI overviews for some medical searches.

The Verge

As ever, when confronted with the fact their product produced dangerous BS, Google follows the industry standard response of band-aiding over specific instances that cause negative publicity, because they have absolutely no idea how to solve the general case

https://www.theguardian.com/technology/2026/jan/11/google-ai-overviews-health-guardian-investigation

‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk

Guardian investigation finds AI Overviews provided inaccurate and false information when queried over blood tests

The Guardian
I wouldn't expect these "commitments" to survive one bad quarter (on the generous assumption they ever amount to more than greenwashing), but it's nice of Microsoft to put them out in writing for the internet to remember
https://blogs.microsoft.com/on-the-issues/2026/01/13/community-first-ai-infrastructure/
Building Community-First AI Infrastructure

Microsoft is launching a new initiative to build what we call Community-First AI Infrastructure—a commitment to do this work differently than some others and to do it responsibly.

Microsoft On the Issues

RE: https://infosec.exchange/@trailofbits/115887719230076703

"These attacks, which are functionally similar to cross-site scripting (XSS) and cross-site request forgery (CSRF), resurface decades-old patterns of vulnerabilities that the web security community spent years building effective defenses against" - I am not as optimistic as the authors that these things can be mitigated while still producing a useful product, but in any case, maybe we should figure that out before shoving an "AI browser" down everyone's throat ¯\_(ツ)_/¯
https://blog.trailofbits.com/2026/01/13/lack-of-isolation-in-agentic-browsers-resurfaces-old-vulnerabilities/

#AIIsGoingGreat "After repeatedly denying for weeks that his force used AI tools, the chief constable of the West Midlands police has finally admitted that a hugely controversial decision to ban Maccabi Tel Aviv football fans from the UK did involve hallucinated information from Microsoft Copilot"
https://arstechnica.com/ai/2026/01/deny-deny-admit-uk-police-used-copilot-ai-hallucination-when-banning-football-fans/
Deny, deny, admit: UK police used Copilot AI “hallucination” when banning football fans

Police finally come clean about botched use of AI tools.

Ars Technica
More brilliant UX from Google: What does "Continue" in this context mean, continue to the thing they're promoting, or continue doing whatever you were doing before this annoying dialog popped up in your face, or some secret third thing? Is it different from the X?
Anyway…

…like a rube, I clicked continue and it popped out a "Gemini" sidebar, which on reflection, is flawless, 10/10, no notes

#AIIsGoingGreat

Ah yes, the world changing, second industrial revolution if it doesn't come alive and kill us all technology will be paid for with… banner ads (and porn, obviously)
https://arstechnica.com/information-technology/2026/01/openai-to-test-ads-in-chatgpt-as-it-burns-through-billions/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social
OpenAI to test ads in ChatGPT as it burns through billions

Ads coming to free tier and new $8/month ChatGPT Go plan in US.

Ars Technica

Congratulations #AI slopartists, you've managed to screw users, open source maintainers, and actual security researchers, while also not getting paid for your slop reports https://github.com/curl/curl/pull/20312

#AIIsGoingGreat

BUG-BOUNTY.md: we stop the bug-bounty end of Jan 2026 by bagder · Pull Request #20312 · curl/curl

Remove mentions of the bounty and hackerone. There will be more mentions, blog posts, timings etc in the coming weeks.

GitHub
It cheats by adding new routes to the maze, and still doesn't manage a valid solution. PhD-level reasoning any day now!
https://mastodon.social/@zachweinersmi[email protected]/115922447320479210
"One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all" - As noted in @ploum's post, this small sample is probably biased by the circumstances, but still, good to see
https://ploum.net/2026-01-19-exam-with-chatbots.html
Giving University Exams in the Age of Chatbots

Giving University Exams in the Age of Chatbots par Ploum - Lionel Dricot.

DOT general counsel Gregory Zerzan, on using spicy autocomplete to generate transport regulations: "We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone" - I'm sure the skeptics will come up with all kinds of objections about how this will go terribly wrong, but on the bright side, it should be a gold mine of obscure loopholes and hilarious litigation

https://www.propublica.org/article/trump-artificial-intelligence-google-gemini-transportation-regulations

#AIIsGoingGreat

Government by AI? Trump Administration Plans to Write Regulations Using Artificial Intelligence

The Transportation Department, which oversees the safety of airplanes, cars and pipelines, plans to use Google Gemini to draft new regulations. “We don’t need the perfect rule,” said DOT’s top lawyer. “We want good enough.”

ProPublica

RE: https://tech.lgbt/@JadedBlueEyes/115968835396049874

"Revise README for clarity on project status and purpose" =
s/Production ready/Proof of concept/ 🤨
Gotta wonder how often this kind of thing is happening in corporate settings without the immediate blowback. Valley management types love their "minimum viable product" so it's easy to see them being really impressed with a slopped-together demo that superficially appears to work, even if the code is an unmaintainable dead end

https://mastodon.social/@JadedBlueEyes@tech.lgbt/115968835523075743

#AIIsGoingGreat

Kevin Weil, vice president of OpenAI for Science: "I think 2026 will be for AI and science what 2025 was for AI in software engineering" - Drowning the practitioners in slop?

https://arstechnica.com/ai/2026/01/new-openai-tool-renews-fears-that-ai-slop-will-overwhelm-scientific-research/

#AIIsGoingGreat

New OpenAI tool renews fears that “AI slop” will overwhelm scientific research

New "Prism" workspace launches just as studies show AI-assisted papers are flooding journals with diminished quality.

Ars Technica

#AIIsGoingGreat "Other doctors described chatbots flattering the grandiose tendencies of patients with personality disorders, or advising patients with autism to put themselves in dangerous social situations. Others said they saw patients’ interactions with chatbots as an addiction" - Who could have predicted that an obsequious bullshit machine would do such things?
https://www.nytimes.com/2026/01/26/us/chatgpt-delusions-psychosis.html?unlocked_article_code=1.IlA.gSBg.pTvMJekxwEk7&smid=url-share

#GiftArticle #GiftLink

How Bad Are A.I. Delusions? We Asked People Treating Them.

Dozens of doctors and therapists said chatbots had led their patients to psychosis, isolation and unhealthy habits.

The New York Times

"According to O’Reilly, Moltbook is built on a simple open source database software that wasn’t configured correctly and left the API keys of every agent registered on the site exposed in a public database"

Who could have predicted that vibe coding enthusiasts would make such trivial yet catastrophic errors?
¯\_(ツ)_/¯

https://www.404media.co/exposed-moltbook-database-let-anyone-take-control-of-any-ai-agent-on-the-site/

Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site

'It exploded before anyone thought to check whether the database was properly secured.'

404 Media
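For those unfamiliar with the failure mode: a Supabase "anon" key is designed to be public (it ships in the client-side JavaScript), and access control lives entirely in Postgres row-level security (RLS) policies. A rough sketch of the attack surface, with every specific (project URL, key, table name) hypothetical since the article doesn't include them:

```python
# Sketch of the bug class: if RLS is off on a table, anyone holding the
# public anon key can read the whole table through the auto-generated
# PostgREST endpoint. All names below are hypothetical.
import json
import urllib.request

SUPABASE_URL = "https://example-project.supabase.co"  # hypothetical project
ANON_KEY = "public-anon-key"  # "anon" keys ship in client-side JS by design

def rest_request(table: str) -> tuple[str, dict]:
    """Build the PostgREST read request for `table` from public info only."""
    url = f"{SUPABASE_URL}/rest/v1/{table}?select=*"
    headers = {"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"}
    return url, headers

def dump_table(table: str) -> list[dict]:
    """With no RLS policy on `table`, this returns every row, API keys and
    all; with RLS correctly enabled, the same request returns nothing."""
    url, headers = rest_request(table)
    with urllib.request.urlopen(
        urllib.request.Request(url, headers=headers)
    ) as resp:
        return json.loads(resp.read())
```

The fix isn't hiding the anon key (it can't be hidden); it's enabling RLS with explicit policies on every table, which is the configuration step the reporting suggests got skipped.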

#AIIsGoingGreat "We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks" - I think Schneier and Raghavan undersell the problem (there's at least reasonable grounds to believe it's impossible) but in any case it seems like it might be unwise to set trillions on fire shoving LLMs into everything before figuring that out
¯\_(ツ)_/¯

https://spectrum.ieee.org/prompt-injection-attack

Why AI Keeps Falling for Prompt Injection Attacks

Why AI falls for scams that wouldn't trick a fast-food worker—and what that reveals about AI security.

IEEE Spectrum

#AIIsGoingGreat shot: "I didn’t write a single line of code for @ moltbook. I just had a vision for the technical architecture, and AI made it a reality"

Chaser: "…what we discovered tells a different story - and provides a fascinating look into what happens when applications are vibe-coded into existence without proper security controls"

https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys

Hacking Moltbook: AI Social Network Reveals 1.5M API Keys | Wiz Blog

Learn how a misconfigured Supabase database at Moltbook exposed 1.5M API keys, private messages, and user emails, enabling full AI agent takeover.

wiz.io

One might wonder how this relates to the earlier 404 Media story* … Oh "Security researcher Jameson O'Reilly also discovered the underlying Supabase misconfiguration, which has been reported by 404 Media. Wiz's post shares our experience independently finding the issue, the full -- unreported -- scope of impact, and how we worked with Moltbook's maintainer to improve security" that's right, multiple people discovered it independently within days

* https://mastodon.social/@reedmideke/115994694484628029

You won: Microsoft is walking back Windows 11’s AI overload — scaling down Copilot and rethinking Recall in a major shift

People familiar with Microsoft's plans say the company is moving to streamline or remove certain Copilot integrations across in-box apps like Notepad and Paint in 2026, after pushback from users.

Windows Central

An extremely weird take which doesn't engage at all with the possibility wikipedians rejected AI summaries because they're obviously garbage and completely antithetical to everything wikipedia stands for

https://spectrum.ieee.org/wikipedia-at-25

Wikipedia Faces a Generational Disconnect Crisis

Wikipedia's 25th anniversary sparks a debate: Can it adapt to the needs of Gen Z and beyond?

IEEE Spectrum
He acknowledges "contributors raising legitimate concerns about AI hallucinations and the erosion of editorial oversight" and then just goes on his merry way to blame the community for being close-minded and out of touch with the youngs

Former dropbox CTO Aditya Agarwal: "It was very clear that we will never ever write code by hand again"

I was gonna say if you have anything you value on dropbox, you might wanna fix that, but apparently he left in 2017

https://www.ft.com/content/fd134065-c2c6-4a99-99df-404d658127e6


Today's #AIIsGoingGreat: PwC says "AI not paying off? Keep throwing money on the bonfire!" https://mastodon.social/@reedmideke/116027153048078552
@reedmideke All regulations are written in blood. Until now.
@reedmideke I see a big opportunity here. Just fallback to magic 8 ball when something goes wrong.