https://basilikum.monster
This account is a replica from Hacker News.

It's you. Every trader who does not have the insider information loses; that's how markets work, after all. Markets collect information by rewarding its use: anyone who has information and trades on it is rewarded, and anyone who does not is punished.

Even as a passive investor you lose. You essentially buy shares at random points in time, and when such a point happens to fall between an insider's trade and the public disclosure of the insider information, you get a worse price on that trade.
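A toy numeric illustration of that window (all numbers hypothetical): an insider sells on bad news before it is public, and a passive buyer who happens to buy before disclosure pays more than the soon-to-be-revealed fair price.

```python
# Hypothetical prices illustrating the disclosure window.
fair_value_after_news = 80.0   # price once the bad news is public
price_before_insider = 100.0   # price while the news is still private

# The insider's selling pushes the price only part of the way down,
# because the market cannot yet tell informed selling from noise.
price_in_window = 95.0

# A passive buyer purchasing inside the window overpays relative to
# simply waiting until the news is disclosed.
passive_buyer_loss = price_in_window - fair_value_after_news
print(passive_buyer_loss)  # 15.0 per share
```

The direction of the loss depends on the news, but the mechanism is the same: the passive trader transacts at a price that does not yet reflect information someone else already traded on.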

It isn't even relevant whether the insider buys the stock or other securities directly, or trades futures instead. Any information you enter into the market through trades permeates the whole market through arbitrage, regardless of where you enter it.
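The arbitrage channel can be sketched with the standard cost-of-carry relation between futures and spot (illustrative numbers, not market data): when an insider's selling pushes the futures below its fair value, arbitrageurs buy the cheap futures and short the spot, which transmits the price pressure into the spot market.

```python
import math

spot = 100.0   # current spot price (assumed)
r = 0.05       # annual risk-free rate (assumed)
T = 0.5        # time to futures expiry, in years

# Cost-of-carry fair value of the futures contract.
fair_futures = spot * math.exp(r * T)

# Suppose insider selling has pushed the traded futures below fair value.
futures = 101.0
assert futures < fair_futures

# Arbitrage: buy the cheap futures, short the spot, invest the proceeds.
# At expiry the positions cancel, locking in a riskless profit per share.
profit = spot * math.exp(r * T) - futures
print(round(profit, 2))  # 1.53
```

The shorting leg is how the information moves: arbitrageurs chasing this profit sell the spot until the mispricing closes, so a trade entered in the futures market ends up moving the stock price too.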

I know what device attestation is. You did not answer my question.

Google is one large public company with exactly one goal: making money.

Stop shilling

How does preventing people from running software of their choice on their own devices (what you call jailbreaking) prevent fraud in practice? That's a pretty strong claim. Institutions make it frequently, yet I have never seen it actually explained and backed up with a real security model.

All the information and experience I have tells me this is security theater by institutions trying to distract from their atrocious security with snake oil. But I'm willing to be convinced there is more to it if presented with contradicting information, so I'm interested in your case.

How, in practice and in quantifiable, attributable terms, did demanding control over your customers' devices and taking away their ability to run software of their choice reduce fraud?
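For reference, the attestation flow being debated can be sketched roughly as follows. This is a simplified model, not a real vendor API: the helper names are hypothetical, and HMAC stands in for the vendor's certificate chain. The point it makes explicit is what the check actually asserts: the vendor vouches that the OS is unmodified, and nothing in the flow inspects the user's behavior or the transaction itself.

```python
import os
import hmac

def verify_attestation(token: dict, expected_nonce: bytes, vendor_key: bytes) -> bool:
    """Check that the token is signed with the vendor's key, bound to our
    fresh nonce, and claims an unmodified OS. That is all attestation asserts."""
    mac = hmac.new(vendor_key, token["nonce"] + token["verdict"], "sha256").digest()
    return (
        hmac.compare_digest(mac, token["signature"])
        and hmac.compare_digest(token["nonce"], expected_nonce)
        and token["verdict"] == b"OS_UNMODIFIED"
    )

# Usage: the server issues a nonce; the device returns a signed token.
vendor_key = os.urandom(32)  # stand-in for the vendor's signing key
nonce = os.urandom(16)
token = {
    "nonce": nonce,
    "verdict": b"OS_UNMODIFIED",
    "signature": hmac.new(vendor_key, nonce + b"OS_UNMODIFIED", "sha256").digest(),
}
print(verify_attestation(token, nonce, vendor_key))  # True
```

Any argument that this reduces fraud has to bridge the gap between "the OS image is vendor-approved" and "the person operating the device is not committing fraud", which is exactly the missing security model.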

If Google cared even slightly about keeping people safe, they would stop hosting scam ads as a core part of their business model.

Google is on the side of the scammers.

Yes, I think AI bots are more compelling to some people. They break the habit of judging information by its source, because they obscure the source. At the same time, they are trained on many reputable sources and can say a lot of very smart things, yet at other times they produce complete BS. Above all, they are really good at making things sound plausible: that's essentially how they work, after all.

Ask HN: How do you deal with people who trust LLMs?

A lot of people use LLMs as the source of objective truth. They have a question that would be well answered by a search leading to a reputable source, but instead they ask some LLM chatbot and blindly trust whatever it says.

How do you deal with that? Do you try to tell them about hallucinations and that LLMs have no concept of true or false, or do you just let them be? What do you do when they do it in a conversation with you, or when you encounter an LLM being used as a source for something that affects you?

Whatever I can install OpenWRT on.