
yeah, I think the OP’s take is really naive

the tools and models will get a lot better, but more importantly the end products that succeed will make measured, judicious use of AI.

there has always been slop, and people will always misuse tools and create abominations, but the heights of greatness that are possible are increasing with AI, not decreasing

I think 10x is a reasonable long term goal, given continued improvements in models, agentic systems, tooling, and proper use of them.

It’s close already for some use cases, for example understanding a new code base with the help of cursor agent is kind of insane.

We’ve only had these tools for a few years, and I expect software development will be unrecognizable in ten more.

Essentially, yes. Great point! I think it needs more features to function more like a social network (transitive topic-based sharing, for one)

Hah, I designed one as well!

I think the flow of information has to be fundamentally different.

In mine, people only receive data directly from people they know and trust in real life. This makes scaling easy, and makes it impossible for centralized entities to broadcast propaganda to everyone at once.

I described it at freetheinter.net if you’re interested
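The trust-based flow described above can be sketched in a few lines. This is a hypothetical illustration, not the actual freetheinter.net design: each node accepts posts only from peers it has explicitly trusted, so there is no channel for broadcasting to everyone at once.

```python
# Hypothetical friend-to-friend message flow: data travels only along
# explicit trust edges, never via a global broadcast channel.
# (Names and structure are illustrative assumptions, not the real design.)

class Node:
    def __init__(self, name):
        self.name = name
        self.trusted = []  # peers this node accepts data from
        self.inbox = []

    def trust(self, peer):
        self.trusted.append(peer)

    def receive(self, sender, post):
        # accept data only from a directly trusted peer
        if sender in self.trusted:
            self.inbox.append(post)

alice, bob, carol = Node("alice"), Node("bob"), Node("carol")
bob.trust(alice)  # bob trusts alice; carol trusts no one

for node in (bob, carol):
    node.receive(alice, "hello from alice")

print(bob.inbox)    # ['hello from alice']
print(carol.inbox)  # []
```

The key property is that a centralized entity with no trust edge into your network simply has no path to your inbox; reach has to be earned one real-life relationship at a time.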

the issue is that foreign companies aren’t subject to US copyright law, so if we hobble US AI companies, our country loses the AI war

I get that AI seems unfair, but there isn’t really a way to prevent AI scraping (domestic and foreign) aside from removing all public content on the internet

Sorry for the late reply - work is consuming everything :)

I suspect that we are (like LLMs) mostly “sophisticated pattern recognition systems trained on vast amounts of data.”

Considering the claim that LLMs have “no true understanding”, I think there isn’t a definition of “true understanding” that would cleanly separate humans and LLMs. It seems clear that LLMs are able to extract the information contained within language, and use that information to answer questions and inform decisions (with adequately tooled agents). I think that acquiring and using information is what’s relevant, and that’s solved.

Engaging with the real world is mostly a matter of tooling. Real-time learning and more comprehensive multi-modal architectures are just iterations on current systems.

I think it’s quite relevant that the Turing Test has essentially been passed by machines. It’s our instinct to gatekeep intellect, moving the goalposts as they’re passed in order to affirm our relevance and worth, but LLMs have our intellectual essence, and will continue to improve rapidly while we stagnate.

There is still progress to be made before we’re obsolete, but I think it will be just a few years, and then it’s just a question of cost efficiency.

Anyways, we’ll see! Thanks for the thoughtful reply

niche communities are still struggling due to the chicken-and-egg problem (and reddit dominance), but it’s improving

if there is a party, it’s about lemmy’s inevitable growth amidst reddit enshittification

relative to where we were before LLMs, I think we’re quite close

the extent that Trump has gone to remove barriers to committing atrocities likely corresponds to the extent he intends to commit them

Peer to peer.

I’ve spent a bit of time developing some related ideas, but haven’t had time to start building it.

It’s a bit rough still, but I’d love some feedback! freetheinter.net

freetheinter.net