Psst: if you're on my crit list on Dreamwidth (closed, not taking new applications, thanks), you might want to check out what I just uploaded there.

Addendum: this is draft 2 of the space opera that, well, my elevator pitch was "Iain isn't writing any more, alas, so let's see if I can do something that makes readers feel the same way as his Culture novels without in any way being derivative of the Culture".

It's probably a failure on those terms, but I had to try, right?

I will note that there is a lot more to the Culture universe than chatty starships with odd names: Iain was a litfic author writing in a space opera setting, so there's that.
@cstross are there loads of knife missiles?

@rooftopaxx No knife missiles whatsoever! Not even any AIs. There is some cute bioengineering, though. And a high concept: what is the universe going to look like in a post-science age (by which I mean, all achievable insights have long since been achieved, so there are no new breakthroughs, only stamp-collecting)?

At least the starships have notable names ...

@cstross *blink* No AI? That's a change.
@kithrup Core conceit: "what if there's no singularity because AI is impossible, but everyone still believes in it as a matter of religious faith?"

@cstross A couple of decades back at a worldcon in San Jose, I asked Vinge: Why does everyone assume machine intelligences will be *faster*?

I assume machine intelligences are possible, because the universe doesn't otherwise make sense, but that doesn't mean a Vingean Singularity is possible.

@kithrup @cstross that definitely leans into the zones of thought
@kithrup Same: machine intelligence may be possible, but it may not be achievable by human intelligence. After all, we're no smarter than the minimum needed to retain collective knowledge and develop technology. Almost all of us ride the coat-tails of those at the extreme end of the bell curve. Getting AI from human-I might be like trying to fuel a nuclear fission reactor using unenriched natural uranium (at today's prevailing isotope ratio).

@cstross @kithrup

One of the elements of the webcomic Questionable Content I like very much is its treatment of AI:
* A few brilliant people invented AI, but they still really don't know how and nobody understands how it works.
* Neither do the AIs.
* Most of the AIs don't think significantly faster than people, because there are so many emergent layers of processing involved that slow it all down.
* They're no better at math than us, for the same reason.
(continued)

@cstross @kithrup

* Many of them are struggling with a sense of identity.
* A lucky few enjoy being toasters or industrial machinery.
* The majority, however, have modeled themselves on humans and human presentation, so they have to figure out how our identity works (good luck!) and work through all our issues too.
* ... including sexuality and gender presentation.
* The very few AIs that *are* smarter or faster than humans have trouble communicating with either humans or other AIs.

@CliftonR @cstross @kithrup
Yeah, I tend to think that emergent AI is not necessarily going to be that different from human minds except that it'll be easier to wire it for Internet access.
@cstross @kithrup
I tend to think machine intelligence can come in a lot of different flavors, different from human too, but again, comparing intelligence is hard (IQ is a mug’s game) because there are so many moving parts. Maybe one type of AI would be able to recognize visual patterns far better than humans (humans can distinguish different 1st order Markov chains, but not 2nd order & higher) but that wouldn’t mean better verbal comprehension or personal interactions.

@cstross I suspect the only way we're going to get human-level (or above) machine intelligence is to start by making much smaller intelligences, and then scaling that up -- and trying to teach it. I suspect that route will result in mostly failures, and the occasional insane one.

But, of course, then I go back to my standard "define intelligence in an objective and testable way" comment.

@kithrup @cstross (doubly true now that dumb LLMs exist and absorb terrifying amounts of power just to produce a stochastic parrot; even if consciousness can be layered over this you're still talking something that needs a large staff & a hydroelectric dam just to stay alive.)
@orc @kithrup @cstross Certainly kind of a major efficiency disappointment when meat brains take a small fraction of that to do more.
@lispi314 @orc @kithrup @cstross
There's room for architectural improvement; I'm just not sure there's enough. My theorem, which is mine, is that in the optimization quest to save power in AI computations, we will reinvent all the human cognitive errors (which are, more or less, mental shortcuts gone wrong). (I really do want my name on this one.)

@cstross @kithrup I'm already sold. This is why I read speculative fiction, not for the space opera trappings.

Though the trappings are also nice.

@cstross

Sounds like the people who still wrote novels with the Soviet Union as a villain right into the 1990s

@a There was a real glut of those books on the market, slowly percolating into print, from 1991-95!

Trad publishing is s-l-o-w.

@cstross @kithrup The theory behind AI was always that the Moore's law s-curve wouldn't bend down for silicon until after it did for carbon, with a few centuries of directed engineering surpassing billions of years of evolution, before we hit atomic limits or speed of light signal propagation along finite trace lengths...

It's the "Gray Goo" feedback loop again. Slime molds are yellow goo and they have limits: energy, materials, predators.

(Dunning Krueger is Freddy's sister.)

@landley @cstross That is a lovely explanation.

(Not saying I fully agree with it, but it is *at least* plausible.)

@kithrup @cstross Exponential growth is always an s-curve. It bends down again at some point. You'd think we'd teach this in middle school alongside spelling and the basic muffin recipe, but no...
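(A quick numerical illustration of that s-curve point, my own sketch rather than anything from the thread: the logistic curve grows by a near-constant factor per step at first, which is what "exponential" looks like from inside, and then the per-step factor collapses toward 1 as it nears its cap.)

```python
import math

def logistic(t, cap=1.0, rate=1.0, midpoint=0.0):
    """Logistic curve: exponential-looking at first, then bending
    down toward the carrying capacity `cap`."""
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

# Far below the midpoint, each unit step multiplies the value by
# roughly e**rate (growth looks exponential); far above it, the
# per-step growth factor falls to ~1 (growth has stalled).
early_ratio = logistic(-9) / logistic(-10)
late_ratio = logistic(10) / logistic(9)
print(round(early_ratio, 3), round(late_ratio, 3))  # → 2.718 1.0
```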
@landley @kithrup @cstross I didn't get the basic muffin recipe in middle school, and feel like I missed out.

@fade @kithrup @cstross Neither did I, I just think we _should_.

Instead they re-explained what a gerund was in 5th grade, 6th grade, 7th grade, 8th grade, 9th grade, and 10th grade.