Psst: if you're on my crit list on Dreamwidth (closed, not taking new applications, thanks), you might want to check out what I just uploaded there.

Addendum: this is draft 2 of the space opera whose elevator pitch was "Iain isn't writing any more, alas, so let's see if I can do something that makes readers feel the same way his Culture novels did, without being in any way derivative of the Culture".

It's probably a failure on those terms, but I had to try, right?

I will note that there is a lot more to the Culture universe than chatty starships with odd names: Iain was a litfic author writing in a space opera setting, so there's that.

@cstross are there loads of knife missiles?

@rooftopaxx No knife missiles whatsoever! Not even any AIs. There is some cute bioengineering, though. And a high concept: what is the universe going to look like in a post-science age (by which I mean, all achievable insights have long since been achieved, so there are no new breakthroughs, only stamp-collecting)?

At least the starships have notable names ...

@cstross *blink* No AI? That's a change.

@kithrup Core conceit: "what if there's no singularity because AI is impossible, but everyone still believes in it as a matter of religious faith?"

@cstross A couple of decades back, at a Worldcon in San Jose, I asked Vinge: why does everyone assume machine intelligences will be *faster*?

I assume machine intelligences are possible, because the universe doesn't otherwise make sense, but that doesn't mean a Vingean Singularity is possible.

@kithrup Same: machine intelligence may be possible, but it may not be achievable by human intelligence. After all, we're no smarter than the minimum needed to retain collective knowledge and develop technology. Almost all of us ride the coat-tails of those at the extreme end of the bell curve. Getting AI from human-I might be like trying to fuel a nuclear fission reactor using unenriched natural uranium (at today's prevailing isotope ratio).

@cstross @kithrup

One of the elements of the webcomic Questionable Content I like very much is its treatment of AI:
* A few brilliant people invented AI, but even they don't really know how they did it, and nobody understands how it works.
* Neither do the AIs.
* Most of the AIs don't think significantly faster than people, because there are so many emergent layers of processing involved that slow it all down.
* They're no better at math than us, for the same reason.
(continued)

@cstross @kithrup

* Many of them are struggling with a sense of identity.
* A lucky few enjoy being toasters or industrial machinery.
* The majority, however, have modeled themselves on humans and human presentation, so they have to figure out how our identity works (good luck!) and work through all our issues too.
* ... including sexuality and gender presentation.
* The very few AIs that *are* smarter or faster than humans have trouble communicating with either humans or other AIs.

@CliftonR @cstross @kithrup
Yeah, I tend to think that emergent AI is not necessarily going to be that different from human minds except that it'll be easier to wire it for Internet access.

@cstross @kithrup
I tend to think machine intelligence can come in a lot of different flavors, different from the human kind too. But again, comparing intelligences is hard (IQ is a mug's game) because there are so many moving parts. Maybe one type of AI would recognize visual patterns far better than humans (humans can distinguish different 1st-order Markov chains, but not 2nd-order and higher), but that wouldn't imply better verbal comprehension or personal interaction.

@cstross I suspect the only way we're going to get human-level (or above) machine intelligence is to start by making much smaller intelligences, scaling them up, and trying to teach them. I suspect that route will produce mostly failures, and the occasional insane one.

But, of course, then I go back to my standard "define intelligence in an objective and testable way" comment.