I've been noodling a lot about AI lately, and I just keep SMH at how willing so many people seem to be to trust this technology.

My problem w/ the idea of AI chat bots being asked to do anything consequential is that we seem to want them to be ever-more human, while at the same time expecting them not to make mistakes.

Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting others impacted know that you were in the wrong. On some levels, that seems incompatible with what many expect out of AI today.

@briankrebs I have been seriously working on ML again for four years, after a 20-year hiatus spent doing #swsec. See https://berryvilleiml.com

Happy to chat anytime, esp on the porch by the river.

Berryville Institute of Machine Learning

Building Security into Machine Learning

@briankrebs You can simplify this to "executives want to replace their expensive human staff". The problem is that this is less like replacing an auto worker with a robotic system that repeats the same exact task over and over again, and more like offshoring intelligence work to another, cheaper country and discovering that there are a bunch of downsides to it.
@dbendit @briankrebs Sure, but the executives aren't doing the building, and that means that if this is the end goal (not an unreasonable assumption) the people doing the work are just building up the leopard that's going to eventually eat them.
@foxxtrot @briankrebs How is this any different from all the folks who got flown to India to train the teams that ended up replacing them? None of this is new.
@dbendit @briankrebs Well, the Indian workers were much more reliable than these LLMs are likely to be.
@foxxtrot @briankrebs Eh. Having worked with folks overseas, I'm not sure I'd agree. Everyone talking about having LLMs write code for them and then edit and fix it afterwards are doing the same thing I was doing with folks from Jakarta ten years ago.
@briankrebs Perhaps the most important component you mention, "and letting others impacted know that you were in the wrong," is something not done enough even by people.
@briankrebs it’s been amusing to watch organisations falling over themselves trying to shove AI into everything they do. Widespread corporate FOMO!

@briankrebs "But that requires admitting when you're wrong -- changing your mind, if you will -- and letting others impacted know that you were in the wrong."

👏

This technology being pushed by people who lack this ability stirs up my biggest concerns.

@briankrebs In its current state, #AI mostly seems useful if you already know what you’re doing but don’t have the cycles to do it all yourself. I used it quite a bit the past couple days to write some really helpful #PowerShell scripts, and while it wasn’t perfect, it was close enough. I liken it to having a consultant for a day.

People need to remember that it’s a copilot at this stage, not autopilot.

@brndnsh @briankrebs I’ve used it to help write code as well. It provides a good starting point and has been helpful at debugging.
@brndnsh @briankrebs That’s a good way to describe it, a consultant for the day.

@briankrebs

i keep bringing this up: capitalism cannot exist without slaves -- and that includes the financially indentured. since capitalists' main purpose is to maximize wealth extraction from other people's labor, their relationship to workers is parasitic. slavery is the most parasitic form of wealth extraction.

so it's logical that capitalists want robots --and by extension AI-- as proxies for slave labor.

it's no coincidence that, just as layoffs became "necessary", capitalists intensified the AI hype

@blogdiva @briankrebs Can it also not exist without consumers, though? We say it's eating itself -- if it keeps eroding already meagre and shrinking incomes, who's going to buy its crap?
@briankrebs agree. The big media players have been the worst - personifying this technology, then breathlessly complaining that it doesn’t live up to human standards. Further, once “AI” is personified, one can lazily blame it for bias... as opposed to dealing with the hard truth that the resulting bias is a reflection of our own.
@briankrebs My experience has been that ChatGPT will “admit” that it was wrong when I correct it. But it won’t learn from the correction.
@Kalka2 @briankrebs And worse, you have to explicitly correct it! And then it is ever so sorry, how could anyone have thought that the answer it just gave could be correct?
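A toy sketch of why that happens (hypothetical code, not any real vendor API): the model's weights are frozen, and the only "memory" is the message list the client resends every turn, so a correction evaporates as soon as the conversation does.

```python
# Sketch of why a chat model "admits" a mistake but doesn't learn from it:
# the weights are frozen; the only "memory" is the client-side message list.
def fake_model(messages):
    # Stand-in for a frozen LLM: answers from fixed "weights" unless the
    # correction happens to still be visible in the context window.
    if any("Actually, the answer is 42" in m for m in messages):
        return "You're right, the answer is 42. Apologies!"
    return "The answer is 41."

session = []                        # per-conversation context, held by the client
session.append("What is 6 * 7?")
print(fake_model(session))          # wrong answer from the frozen "weights"

session.append("Actually, the answer is 42")
print(fake_model(session))          # "admits" it -- the correction is in context

new_session = ["What is 6 * 7?"]    # fresh conversation: correction is gone
print(fake_model(new_session))      # wrong again
```

Nothing in the "admission" touches the model itself; drop the context and you are back where you started.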

@briankrebs I think a lot of these “AI” outputs are simply reflections of their creators and an oversimplification of the status quo. Meaning, they’ll be confidently wrong and then try to wiggle their way out of it or insist that their information is valid. (Because they said so.)

To me, most generative text results are nothing more than makeshift stories where nearly every word is chosen on the basis of assumed prevalence, given the input or prompt.

I agree— it’s weird how some put their full faith in these things…
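The "assumed prevalence" idea above can be sketched as a toy bigram model (vastly simpler than a real LLM, which conditions on far more context, but the same word-by-word sampling spirit):

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then sample each next
# word in proportion to those counts -- "assumed prevalence" in miniature.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:          # dead end: nothing ever followed this word
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output is always locally plausible and globally meaningless, which is roughly the "makeshift story" effect described above.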

@briankrebs It’s interesting that so many people try to portray all of this as “the response is so human, but it can’t be trusted because it’s not” when the process is literally contingent on gathering its knowledge base from a combination of already established human sentiments, regardless of validity.

And, I haven’t even touched on the fact that it’s all very obviously profit-driven and a pipe dream to eliminate the need for human intervention in some areas. Despite how many people try to argue that point. (Those who pay to have it developed aren’t paying for it to be beneficial to everyone.)

@briankrebs >> Probably what we really want is for them to also learn from their mistakes. But that requires admitting when you're wrong -- changing your mind, if you will -- and letting others impacted know that you were in the wrong.
Well, how can the machines we program do it if the human beings are apparently incapable of doing so? I mean, those fools Musk and Trump surely can't/won't do it, and yet boatloads of idiots still praise them. Willful ignorance is rampant.
@briankrebs I think quick trust in technology is also because we want convenience (faster end result and/or less input required of us). Complex tools give complex results and the verification requirement scales, but that erodes some of the convenience. Taking a shortcut to maintain convenience is also a human tendency. Looks like correct work took a backseat to convenience.
@briankrebs Do not anthropomorphize LLMs. That is irresponsible. These are just math problems churning on the worst humanity has to offer from social media, so do not attribute competence to what should be considered malicious.
@briankrebs LLMs are an incredible illusion. There are definitely uses for them but they won't change the world yet.
@briankrebs This attitude makes sense given that for the last 30 years companies have been turning humans into bots with scripts. People basically look at it as just the next logical step, not realizing AI bots lack a human's tethering to reality.
@briankrebs
Admitting when you're wrong and changing your mind seems less and less human these days.
@briankrebs What I keep saying is that this round is not artificial intelligence. It is simulated intelligence. It’s brute forcing tons of words to present a more or less convincing appearance of intelligence, which falls somewhere between amusing and interesting to us, but it doesn’t actually “know” things, which is why it makes stuff up. Maybe it is part of the path to artificial intelligence, either a waypoint or an ingredient, but I’m just not convinced it is the destination.
@abosio so we're in the fake-till-you-make-it stage?
@briankrebs possibly. Or fake it until the vc money dries up and moves on to the next thing.

@briankrebs They really missed the opportunity to call it statistical intelligence.

Though conflating LLMs with the rest of AI seemed intentional, for marketing purposes.

@briankrebs It drives me crazy when anyone cites ChatGPT or similar on a topic, as if that means the answer is "definitive." 😆
@briankrebs The biggest problem for me, is that people blame mistakes on the AI, not the company who owns the AI. The alliance between Microsoft and Epic to use AI to help physicians draft patient communications will inevitably result in errors. When that happens hospitals will blame the physician, not the AI. We need liability for the people using AI to replace humans.
@briankrebs I have personally been bitten multiple times by the unreliability of LLM responses, despite being someone who should know better. That said, I see the improvement ChatGPT 4 has over ChatGPT 3.5, and I can’t help feeling that the pace of improvement is so fast that it will yield much better results in the future.

@briankrebs them "learning" from exchanges also would be a giant bullseye painted on their platforms. Attackers could easily hijack such a capability and turn your chat bot platform into a neonazi recruiter or (perhaps more likely due to $$$$ incentives) scam artist.

I sometimes think it'd be more useful to have personalized AI chat bots that you train yourself, but that is kind of what you already have with autocomplete on your phone.

@briankrebs the centralization of the bots makes them an easy target if you allow people of unknown provenance to train the underlying models. It could be mitigated quite a bit by having the models not be so centralized but rather personalized and distributed. So if you train your chat bot to be a neonazi recruiter... congrats? I guess? But users would still have to knowingly use your bot instead of training their own or using an official one from Microsoft or someone.
@briankrebs also people miss the huge difference between predictive and generative ML. It's not their fault, it's not a clear distinction for the people who are supposedly working in that field...
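The predictive-vs-generative distinction can be made concrete with a toy sketch (hypothetical data and code; real systems are enormously more sophisticated): predictive ML maps an input to a label, while generative ML produces new data resembling its training data.

```python
import random
from collections import Counter

# Tiny made-up training set: (text, label) pairs.
emails = [("win money now", "spam"), ("meeting at noon", "ham"),
          ("win a prize", "spam"), ("lunch at noon", "ham")]

spam_words = Counter(w for text, lbl in emails if lbl == "spam" for w in text.split())
ham_words = Counter(w for text, lbl in emails if lbl == "ham" for w in text.split())

def predict(text):
    """Predictive ML: map an input to a label (here, crude word voting)."""
    score = sum(spam_words[w] - ham_words[w] for w in text.split())
    return "spam" if score > 0 else "ham"

def generate_spam(n=3, seed=1):
    """Generative ML: emit new text resembling the training data."""
    random.seed(seed)
    words = list(spam_words)
    return " ".join(random.choices(words, weights=list(spam_words.values()), k=n))

print(predict("win money"))   # classifies the input: spam
print(generate_spam())        # produces new spam-like word salad
```

The predictive half never produces anything but a label, while the generative half never checks anything against reality -- which is one reason conflating the two muddies the conversation.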