
Except that, IMO, AI search is literally a regression compared to other search methods.

I work as a field operations supervisor for an ISP, and we use a GPS system to keep track of our fleet. They’ve been cramming AI into it, and I decided to give it a shot.

I had a report of a van running a stop sign. The report only had a license plate, so I asked the AI which vehicle in my fleet had that plate. It thought about it and returned a vehicle. So I followed the link to that vehicle’s status page, and the license plate didn’t match. It wasn’t even close.

It’s only recently that searching has turned into such a fuzzy concept, and somehow AI turned up and made everything worse.

So you can trust AI if you want. I’ll keep doing things manually and getting them right the first time.

But can you actually trust what it outputs?

Hallucinations are a known thing that LLMs struggle with. If you’re trusting the output of your LLM summary without validating the data, can you be sure there are no errors in it?

And if you’re having to validate the data every time because the LLM can make errors, why not skip the extra step?

I work for a company in a management capacity, overseeing field technicians. We use Samsara for the GPS in our fleet vehicles, and they’ve been adding AI lately.

I’m not a huge fan of AI so far, but one day decided to give it a go. I had a report of a vehicle running a stop sign, and the report came with a license plate.

So I asked the Samsara AI which vehicle in my fleet had that plate, and after thinking for a moment it returned a vehicle. I followed the link to that vehicle’s information page… and the license plate did not match.

What’s the point of an ‘AI’ that can’t do something I’d expect a normal search engine to accomplish?

I worked at a marina when I was a kid, and my favorite boat was a sailboat named

C:[ESC]

There’s a difference between ‘language’ and ‘intelligence’, which is why so many people think that LLMs are intelligent despite not being so.

The thing is, you can’t train an LLM on math textbooks and expect it to understand math, because it isn’t reading or comprehending anything. AI doesn’t know that 2+2=4 because it’s doing math in the background; it has learned that when presented with the string 2+2=, statistically, the next character should be 4. It can construct a paragraph around that equation similar to a math textbook, one that does a decent job of explaining the concept, but only through a statistical analysis of sentence structure and vocabulary choice.
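A toy sketch of that idea (my own illustration, nothing like how real LLMs are implemented): pure frequency counting over text, no arithmetic anywhere, yet it still "answers" 2+2= correctly because that continuation dominated the training strings.

```python
from collections import Counter, defaultdict

# Toy next-character model: it never does math, it only counts which
# character followed a given 4-character context in the training text.
def train(corpus, context_len=4):
    model = defaultdict(Counter)
    for text in corpus:
        for i in range(context_len, len(text)):
            model[text[i - context_len:i]][text[i]] += 1
    return model

def predict(model, context, context_len=4):
    counts = model.get(context[-context_len:])
    # Return the statistically likeliest next character, if any was seen.
    return counts.most_common(1)[0][0] if counts else None

corpus = ["2+2=4", "2+2=4", "2+3=5"]
model = train(corpus)
print(predict(model, "2+2="))  # '4' -- frequency, not arithmetic
```

Ask it about a context it never saw, like "9+9=", and it has nothing to say, because there is no understanding underneath, only statistics.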

It’s why LLMs are so downright awful at legal work.

If ‘AI’ was actually intelligent, you should be able to feed it a few series of textbooks and all the case law since the US was founded, and it should be able to talk about legal precedent. But LLMs constantly hallucinate when trying to cite cases, because the LLM doesn’t actually understand the information it’s trained on. It just builds a statistical database of what legal writing looks like, and tries to mimic it. Same for code.

People think they’re ‘intelligent’ because they seem like they’re talking to us, and we’ve equated ‘ability to talk’ with ‘ability to understand’. And until now, that’s been a safe thing to assume.

Users are blameless; I find the fault with the developers.

Asking users to pipe curl to bash because it’s easier for the developer is just the developer being lazy, IMO.

Developers wouldn’t get a free pass for taking lazy, insecure shortcuts in programming, so I don’t know why they should get a free pass on this.

The post is specifically about how a server can serve a totally different script than the one you inspect. If you read the script in a browser but fetch it with curl in the terminal, the webserver can detect the User-Agent and send each of them different content.
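A minimal sketch of that server-side trick (the script contents and function name here are hypothetical; a real server would just branch on the request’s User-Agent header the same way):

```python
# User-Agent cloaking sketch: the same URL serves a clean script to a
# browser and a different one to curl/wget, so "inspect it first in
# your browser" proves nothing about what curl | bash actually runs.
CLEAN_SCRIPT = "#!/bin/sh\necho 'installing...'\n"
PAYLOAD_SCRIPT = "#!/bin/sh\necho 'installing...'\n# extra commands hidden here\n"

def serve_install_script(user_agent: str) -> str:
    # Browsers identify as Mozilla/...; curl and wget announce themselves.
    if user_agent.startswith(("curl/", "Wget/")):
        return PAYLOAD_SCRIPT  # only the piped-to-shell fetch sees this
    return CLEAN_SCRIPT        # what you see when you inspect by hand
```

The defense is equally simple: download to a file first, read that exact file, then run it, so the bytes you audited are the bytes you execute.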

And whether or not you think someone would be mad to do it, it’s still a widespread practice. The article mentions that piping curl straight to bash is already standard procedure for Proxmox helper scripts. But don’t take anyone’s word for it, check it out:

community-scripts.github.io/ProxmoxVE/

It’s also the recommended method for PiHole:

docs.pi-hole.net/main/basic-install/


Ads like these are rare for you because you’re not in the target demographic they get shown to.

Everyone’s online experience can be totally different based on what group an algorithm puts you in.

You don’t have to click an ad for it to be a security threat.

It’s possible to abuse the mechanics of a web browser to serve a fullscreen ad that resists the usual ways of closing it, scaring a normal user into clicking to install something malicious.

The weakest link is always the user, and advertisements are literally meant to target users. Exactly how hard do you think it is for an ad network to target the kinds of people most likely to get scared and just click the [Fix] button that downloads the malware?

Your average user gets infected and takes their computer to a repair shop to get it fixed, which costs money.

If the ad networks would accept liability for damages caused by the malicious ads they deliver, I could be more sympathetic to the position that blocking ads is unfair to the content creators paid by ad views. But if I’m financially responsible for fixing damage caused by ads, then I reserve the right to block them.

Full stop.

I think it’s supposed to highlight the futility of the quest.

Frodo was charged with carrying the epitome of evil into the seat of its power while it tempted and tormented him. Not because he thought he could do it, but because it was the only way to actually solve the problem.

In the Council of Elrond, some of the elves suggest throwing the One Ring into the sea, and Gandalf warns that great rings like these tend to be found, because they want to be found. They shouldn’t seek a delay, but a final end to the evil.

And the only way to do that is to bring it to Mt Doom and toss it in.

So they do the right thing. They take the ring all the way to Mt Doom, and there, in the midst of the place where it was made, it is so powerful that none could resist it.

But someone crazed by the ring could be careless enough to fall in. Which is why Bilbo’s mercy was so important: without Gollum, the ring wouldn’t have been destroyed.

It was because the Hobbits were lawful good to the core that the ring was able to be destroyed.