Psst: if you're on my crit list on Dreamwidth (closed, not taking new applications, thanks), you might want to check out what I just uploaded there.

Addendum: this is draft 2 of the space opera whose elevator pitch was, well, "Iain isn't writing any more, alas, so let's see if I can do something that makes readers feel the same way as his Culture novels without in any way being derivative of the Culture".

It's probably a failure on those terms, but I had to try, right?

I will note that there is a lot more to the Culture universe than chatty starships with odd names: Iain was a litfic author writing in a space opera setting, so there's that.
@cstross are there loads of knife missiles?

@rooftopaxx No knife missiles whatsoever! Not even any AIs. There is some cute bioengineering, though. And a high concept: what is the universe going to look like in a post-science age (by which I mean, all achievable insights have long since been achieved, so there are no new breakthroughs, only stamp-collecting)?

At least the starships have notable names ...

@cstross *blink* No AI? That's a change.
@kithrup Core conceit: "what if there's no singularity because AI is impossible, but everyone still believes in it as a matter of religious faith?"

@cstross A couple of decades back at a worldcon in San Jose, I asked Vinge: Why does everyone assume machine intelligences will be *faster*?

I assume machine intelligences are possible, because the universe doesn't otherwise make sense, but that doesn't mean a Vingean Singularity is possible.

@kithrup Same: machine intelligence may be possible, but it may not be achievable by human intelligence. After all, we're no smarter than the minimum needed to retain collective knowledge and develop technology. Almost all of us ride the coat-tails of those at the extreme end of the bell curve. Getting AI from human-I might be like trying to fuel a nuclear fission reactor using unenriched natural uranium (at today's prevailing isotope ratio).

@cstross I suspect the only way we're going to get human-level (or above) machine intelligence is to start by making much smaller intelligences, then scaling them up -- and trying to teach them. I suspect that route will result in mostly failures, and the occasional insane one.

But, of course, then I go back to my standard "define intelligence in an objective and testable way" comment.