Psst: if you're on my crit list on Dreamwidth (closed, not taking new applications, thanks), you might want to check out what I just uploaded there.

Addendum: this is draft 2 of the space opera whose elevator pitch was, well, "Iain isn't writing any more, alas, so let's see if I can do something that makes readers feel the same way his Culture novels did, without in any way being derivative of the Culture".

It's probably a failure on those terms, but I had to try, right?

I will note that there is a lot more to the Culture universe than chatty starships with odd names: Iain was a litfic author writing in a space opera setting, so there's that.
@cstross are there loads of knife missiles?

@rooftopaxx No knife missiles whatsoever! Not even any AIs. There is some cute bioengineering, though. And a high concept: what is the universe going to look like in a post-science age (by which I mean, all achievable insights have long since been achieved, so there are no new breakthroughs, only stamp-collecting)?

At least the starships have notable names ...

@cstross *blink* No AI? That's a change.
@kithrup Core conceit: "what if there's no singularity because AI is impossible, but everyone still believes in it as a matter of religious faith?"

@cstross A couple of decades back at a worldcon in San Jose, I asked Vinge: Why does everyone assume machine intelligences will be *faster*?

I assume machine intelligences are possible, because the universe doesn't otherwise make sense, but that doesn't mean a Vingean Singularity is possible.

@kithrup @cstross (doubly true now that dumb LLMs exist and absorb terrifying amounts of power just to produce a stochastic parrot; even if consciousness can be layered over this, you're still talking about something that needs a large staff and a hydroelectric dam just to stay alive.)
@orc @kithrup @cstross Certainly kind of a major efficiency disappointment when meat brains take a small fraction of that to do more.
@lispi314 @orc @kithrup @cstross
There's room for architectural improvement, but I'm not sure there's enough. My theorem, which is mine, is that in the optimization quest to save power in AI computations, we will reinvent all the human cognitive errors (which are, more or less, mental shortcuts gone wrong). (I really do want my name on this one.)