GDC 2014: “Using AI Techniques to Make Game Soundscapes More Dynamic” (or “Intelligent Sound Bubbles”) by Dragica Kahlina (@drka?) of Dragica Kahlina Sound & Code https://gdcvault.com/play/1020418/Using-AI-Techniques-to-Make

I didn’t quite understand the core of this presentation.

It was about a project made in 9 months by 3 people, so a lot of the presentation covered options they *didn’t* pursue for lack of time.

The main topic was using state machines to drive the sound.

1/2

Using AI Techniques to Make Game Soundscapes More Dynamic

Sound in a game is more than just a sound file. It can help with atmosphere, make a level more interesting and less repetitive, and help set the mood. For that, we need some way to package sound with some intelligence, too. Not much, but enough to...

It sounded like, in their project, the state transitions were really just random, parameterized by how often they wanted each sound to play. The speaker did list other transition types they could have used with more time. The parameters were set by the sound designer in an XML file.
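A minimal sketch of that idea, as I understood it (all names and the transition table are my own invention, not from the talk; the weights stand in for the "how often should this play" parameters the sound designer would set in XML):

```python
import random

# Hypothetical transition table: each state lists (next_state, weight) pairs.
# In the talk's setup these weights would come from a designer-edited XML file.
TRANSITIONS = {
    "ambient":   [("ambient", 0.7), ("birdsong", 0.2), ("wind_gust", 0.1)],
    "birdsong":  [("ambient", 0.8), ("birdsong", 0.2)],
    "wind_gust": [("ambient", 1.0)],
}

def next_state(current, rng):
    """Pick the next sound state by weighted random choice."""
    states, weights = zip(*TRANSITIONS[current])
    return rng.choices(states, weights=weights, k=1)[0]

def run(start="ambient", steps=10, seed=42):
    """Walk the state machine; each visited state would trigger a sound cue."""
    rng = random.Random(seed)
    state, visited = start, []
    for _ in range(steps):
        visited.append(state)  # here a real system would play the cue
        state = next_state(state, rng)
    return visited
```

The point is just that the "intelligence" lives entirely in the weighted table, so a sound designer can reshape the soundscape without touching code.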

Review: 2/10. Merging sound design and state machines seems useful, but I also can’t really think of any other way a dynamic sound system could work, so I’m not sure this is really novel.

@GDCPresoReviews hmmm, yes there is more to it than that, so I probably did not get the point across. It was about using sound instead of graphic assets in a VR game, in an ALife kind of way, to make the world seem more alive. Would I do things differently? Probably. It was my first GDC talk, and in a fun disaster of circumstances I had only 1 month to prepare and no mentor. I still think the concept is solid and underused, but it’s been 11 years, so probably somebody has found a better way to explain it.