How to centre a div - Aussie Zone

Lemmy

Idk why I like this, high quality healthy shit post.
It’s shit a lot of people don’t think about tbh. Like imagine all the meaningless shit people have done on the internet, and that shit literally travels across the whole world using impressive feats of technology. I’m saving this post for later. 🤣

Infrastructure is all about unbelievable feats of engineering that are taken for granted. Sewage systems, running water, electricity, roads, public transport, cars, physical mail, and grocery stores/supermarkets are all unbelievable achievements that we all take for granted to varying degrees, and that’s just off the top of my head. IP networking is just more of that. Absolutely crazy, and by design we don’t think about it.

But AI (also depicted in this gif) is not in the same category IMO, for a lot of reasons.

Simply doing a traceroute to a website in another country a long time ago fascinated me. Seeing it hit all of the routers in other cities then across the ocean to another continent and back in less than 100ms blew my mind.

Led me down that path and now I’ve been a network engineer for over 10 years.

You might not like AI for what it stands for, or for its negative impact on the world, but you can’t deny that LLMs like we have today are a marvel of technology, an incredibly complex technology that would have felt like science fiction just a decade ago.
Yeah, but it’s not good infrastructure. It’s not sustainable, it’s privately controlled, and it’s destined to be enshittified. Infrastructure needs to be well thought out and publicly regulated, AI is the opposite.

I think doing this exact post but changing the request could really speak to the inefficiency of AI.

Something like “What is 8x12”, go through the whole sequence, and have it spit out “Eight times twelve is 114”

tbf they are getting significantly better; one of the biggest improvements that hasn’t really filtered through to the mainstream is MoE / mixture of experts

the tl;dr is that back in the ChatGPT-4 days, wayyy back in ye olden times of 2024, the model would essentially go through the entire library for every single question to find an answer

Now the libraries are getting massive, but queries are getting faster, because instead of searching the entire library for every question the model only needs part of it. Just like in a real library, instead of querying all of human knowledge for “what is 8x12” it goes straight to the maths section, saving a lot of power and time
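The "library sections" idea above can be sketched in a few lines. This is a toy illustration of MoE routing, not a real model: a gating function scores each expert against the query, and only the top-k experts are consulted, so most of the network stays idle for any given question. All names and the toy embeddings here are made up for the example.

```python
# Toy sketch of mixture-of-experts routing (illustrative only, not a real model).
# A gating function scores every "expert"; only the top-k run for a given query,
# which is the source of the compute savings the comment describes.
import random

random.seed(0)

NUM_EXPERTS = 8   # total experts ("sections of the library")
TOP_K = 2         # experts actually consulted per query

def gate_scores(query_vec, expert_vecs):
    """Dot-product affinity between the query and each expert's profile."""
    return [sum(q * e for q, e in zip(query_vec, ev)) for ev in expert_vecs]

def route(query_vec, expert_vecs, k=TOP_K):
    """Pick the k best-matching experts; all the others are simply skipped."""
    scores = gate_scores(query_vec, expert_vecs)
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

# Toy 4-dimensional "embeddings" for the experts and one incoming query.
experts = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(NUM_EXPERTS)]
query = [0.5, -0.2, 0.9, 0.1]

chosen = route(query, experts)
print(f"consulting experts {chosen}, skipping the other {NUM_EXPERTS - len(chosen)}")
```

Real MoE layers do this per token inside a transformer with learned gates, but the shape of the trick is the same: score, pick top-k, run only those.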

In the case of chat.mistral.ai it doesn’t even walk over to the maths section; it just writes a quick Python script and outputs the answer that way:
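The original screenshot isn’t reproduced here, but the pattern it describes (model emits a tiny script, a sandbox runs it, the result goes back into the reply) might look roughly like this. The generated code string and the bare-namespace `eval` sandbox are assumptions for illustration, not Mistral’s actual tooling:

```python
# Sketch of the "write a quick script instead of recalling" pattern:
# the model emits code as text, a sandboxed evaluator runs it, and the
# numeric result is spliced back into the natural-language reply.
generated_code = "8 * 12"  # what the model might emit for "What is 8x12?"

# Evaluate in a bare namespace (stand-in for a real sandbox).
result = eval(generated_code, {"__builtins__": {}})

reply = f"Eight times twelve is {result}"
print(reply)  # -> Eight times twelve is 96
```

Which, notably, gets the right answer, unlike the "114" in the joke above, because arithmetic is delegated to the interpreter instead of being predicted token by token.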

Just take a div and put it in the center

Of course!

Btw, the correct answer is “use flexbox”.
You could also use margin: 0 auto;
Where it works, yes (margin: 0 auto; only centres horizontally, and only block-level elements with a set width). If you know where it works, it won’t be a problem for you.

Just as a tangent:

This is one reason why I’ll never trust AI.

I imagine we might wrangle the hallucination thing (or at least get it to be more upfront about its uncertainty), but I doubt it will ever identify a poorly chosen question.

Making LLMs warn you when you ask a known-bad question is just a matter of training them differently. It’s a perfectly doable thing, with a known solution.

Solving the hallucinations in LLMs is impossible.

That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and reads as human-written. There’s no training for correctness; how do you even define that?

There’s no training for correctness, how do you even define that?

I guess you can chat to these guys who are trying:

By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year

huggingface.co/deepseek-ai/DeepSeek-Math-V2


Sure, when it comes to mathematics you can do that, with severe limits on success, but what about cases where correctness is less clear-cut? Two opposing statements can both be correct if the situation changes, for example.

The problems language models are expected to solve go beyond the scope of what language models are good for. They’ll never be good at solving such problems.

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity

Recent generations of language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. Current evaluations primarily focus on established math and coding benchmarks, emphasizing final answer accuracy. However, this evaluation paradigm often suffers from contamination and does not provide insights into the reasoning traces. In this work, we systematically investigate these gaps with the help of controllable puzzle environments that allow precise manipulation of complexity while maintaining consistent logical structures. This setup enables the analysis of not only final answers but also the internal reasoning traces, offering insights into how LRMs think. Through extensive experiments, we show that LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having remaining token budget. By comparing LRMs with their standard LLM counterparts under same inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models outperform LRMs, (2) medium-complexity tasks where LRMs demonstrates advantage, and (3) high-complexity tasks where both models face complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across scales. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models' computational behavior, shedding light on their strengths, limitations, and raising questions about their reasoning capabilities.

arXiv.org

i dunno, you’re in the wrong forum; you want hackernews or reddit, no one here knows much about ai

although you do seem to be making the same mistake others have made before, where you point to research happening right now and then extrapolate it out into the future

ai has progressed so fast i wouldn’t be making any “they’ll never be good at” type statements

Uh-huh. Now ask it how to center a div vertically.
.container { display: grid; place-items: center; }

The real answer is of course, as in most cases, “it depends.”

<center> Whoopsie Daisy! </center>
<pre> Easy </pre>
I need that keyboard
I have that keyboard, it’s a MageGee typewriter style.
Heck yes thank you
I have my local LLM rig (powered by solar) for asking stupid questions because I feel it’s unreasonable to ask a data centre somewhere why spoons taste funny
Whoa, I was weirdly into My Little Pony for a while but I didn’t realize it powered data centers.
I suppose this is preferable to human intelligence, where the answer would come back as “Why would you want to do that?”