HeathenStorm

@heathenstorm
59 Followers
230 Following
232 Posts
Personal #fediverse profile for Daryl Parson, owner of HeathenStorm Productions.

Coder, musician, tour manager, video maker, and more. 
Intrigued by the possibilities of decentralised #socialmedia to boost creative reach and collaboration.

Main blog running #activitypub integration on @index
#introduction #music #metal #heavymetal #freelance #tourmanager #gigbooking #videoproduction #chaosmagick
Website: https://heathenstorm.com
Instagram: https://instagram.com/heathenstorm
Loops: https://loops.video/@heathenstorm
Main Fediverse Profile: @[email protected]
@mike That’s no moon…

Tweaked my #wordpress #activitypub config to use only the Blog actor, instead of both Blog and Author. Now, the Blog actor federates posts directly, rather than boosting posts from each Author.

Has anyone else tried this? Did it cause any issues interacting with boosted Author posts that were previously federated?

Would I need to “repost” all previously federated content for it to appear under the Blog Actor?

#fediverse

The Divine Completion

Very happy to finally finish my Level 1 Dante Certification on an apropos Good Friday evening. With training provided by Aussie AV techies Audinate, this means I’m now officially authorised to hook up any number of Dante-enabled Audio-Visual devices across a basic local network. Covering the fundamentals, the course itself was four-and-a-bit hours of standard video fare, which I admit I’d staggered over three months before taking the time to wrap it up tonight. Overall it was […]

https://heathenstorm.com/2026/04/03/the-divine-completion/

Strands of Home

With a few days free between terms, I took a trip back to the homeland. To spend some time in Seaburn, near Sunderland, with the spring sun shining out across the strand and a balcony view in the middle of it all.

Deliberately choosing to do very little, it was a chance to walk slowly for once, taking in the beach and piers from Roker all the way to Whitburn. Reconnecting and renewing with every step.

The tide ebbs and flows and erodes as all things, but I remain grateful. The seafront of my childhood, sometimes neglected, renews in itself, and despite the alienation of the modern age I can still return home. To hear the lilting, happy voices of my native (yet supplanted) accent. To share the promenade with dog walkers and joggers going along their way. To feel at peace among my people.

Not much has changed across the decades, even if I have.


https://heathenstorm.com/2026/03/31/strands-of-home/
#beach #home #lighthouse #roker #seaburn
Dark and Doomy

I never imagined I would meet video game legend and first-person pioneer John Romero, and especially not in Yorkshire.

Last week, a packed-out WX Wakefield Exchange played host to Game Republic‘s Dark and Doomy gathering. The main draw being a Fireside Chat with the bitch-making ‘rockstar’ developer of Wolfenstein 3D, Quake, and (of course) DOOM. Hearing about the event through the Creative Wakefield network, I made sure I was there to meet the man responsible for bringing a touch of Metal to the gaming world.

Equal respect was paid to his wife, Brenda Romero, who had many stories of her own from her work on tabletop games and the Wizardry series. The chat was a fascinating hour of anecdotes and insight, covering how both found themselves in the industry before it became an industry, and touching on id Software’s collaboration with Trent Reznor on the Quake soundtrack.

While I sidled up to grab his Doom Guy autobiography and pose for a very awkward photo (with thanks to Alex from Rebellion for doing the honours) we had a chat about the Doomed 486 days. I spent many entertaining early-nineties nights in the computer labs at Bradford University, waiting eagerly for the shareware edition of Episode 1 to drop, dying repeatedly in countless deathmatches against my peers, and playtesting one of the first-ever .WAD files developed by a classmate. In retrospect, it’s no wonder I flunked.

Although I’m not as eager a gamer as I was back then, I took the opportunity to investigate other game developers sharing projects old and new inspired by Romero’s work. Local luminaries Team17 were in attendance, offering an emulated edition of Amiga classic Alien Breed 3D. Of special interest was Manchester’s Paranomalous Games, showcasing an early (yet playable) build of Voxel Keeper. A spiritual successor to a certain Dungeon-themed game of yore, with more than a hint of Minecraft to empower the 3D domain-tunnelling.

The main event of the evening was The Dark Room, a raucously interactive ‘Choose Your Own Adventure’ game presented by Australian comedian John Robertson, adorned in fetching glowing spaulders that mostly survived the show.

Starting (and very often restarting) in the eponymous room, the game was presented as a sequence of four options, each leading further along the route to freedom or death. Picking a member of the audience for each run through, he improvised his way through their choices as they led themselves to their inevitable demise. With the clock ticking down and a dual effort by the Romeros failing to make it out, things became increasingly manic and sweary – ultimately offering democratic decision to the crowd factions who could shout the loudest.

We did not escape.

It’s a very exciting time to be in and around the WF postcode, with a big push from Creative Wakefield to showcase more engaging events and opportunities in the region. Many of the technologies used in modern film production, especially virtual sets and volumes, owe their origin to the games industry. The divisions between disciplines fade as we find the common ground to tell our tales.

Game Republic: https://gamerepublic.net/
Voxel Keeper: https://www.voxelkeeper.com/
The Dark Room: https://www.thejohnrobertson.com/thedarkroom/
Creative Wakefield: https://creativewakefield.net/

https://heathenstorm.com/2026/03/29/dark-and-doomy/ #creativewakefield #doom #gamerepublic #gaming #johnromero #thedarkroom #voxelkeeper #wakefield

Finally took a look at the new-ish #wordpress #activitypub ‘Follow on Fediverse’ page template as I contemplate another site refresh with WP 7.0. Would definitely appreciate a few more followers there.

https://heathenstorm.com/follow-on-fediverse/

Follow on Fediverse

Follow this blog on Mastodon or the Fediverse to receive updates directly in your feed.

Fluidity

Now the rush of term-end is behind me, I can afford time to document some of the projects I’ve been up to between my Academy studies.

FLUIDITY is an experimental short film improvised one weekend in February 2026 with the Academy of Live Technology’s Postgraduate cohort, at the behest of visiting Multimedia Artist-Engineer Diana Scarborough. A Cambridge-based creative whose work blends science, art, and ecology, her Sounds of Space project is an inspirational extension of concepts I’ve used in my own work.

The film presents a meditation on water, flowing around a spoken word performance by student Dami Olagbegi. Soundscape and Foley were recorded by contact mics and hydrophone, capturing the sound of underwater instruments, percussion, and melting ice. Suitably swirly visuals were then evoked by filming a tray of water through an old-style desk projector while adding different colour pigments with a pipette.

My part was to blend these elements together, along with separately supplied stock footage and a little alchemical flavour. Layering down a simple synth soundtrack to mix the recordings into form, and editing the video against an unyielding deadline.

Using GarageBand and DaVinci Resolve on iPad to record and edit, and shooting the projections with a SmallRig-ged iPhone, the tools to hand ensured swift turnaround from idea to image. Although I could have used some extra time to polish things properly on home hardware, I’m happy with the raw honesty of the result.

It was a great pleasure to be invited to work alongside the Postgrad students, to get a feel of what awaits my own Masters journey, and to play without expectation for the joy of creating art.

https://youtu.be/UeOFbDiJMMc

Diana Scarborough: dianascarborough.co.uk
Sounds of Space: soundsofspaceproject.bandcamp.com

https://heathenstorm.com/2026/03/27/fluidity/ #academyoflivetechnology #davinciresolve #dianascarborough #experimental #filmmaking #fluidity #garageband #improvisation #postgraduate #synth #video #water
Seven years of Solstice

2018-2025

It is a great relief to finally declare my departure from Solstice. Although we went our separate ways last June, I chose to embargo any revelation until the band were good and ready to say it themselves. Nine months later, their Equinox announcement offers opportunity to reflect on the seven years I spent on bass duties.

I joined Solstice in 2018, having left previous band The Enchanted around fifteen years prior. Being a long-time fan since the demo days, the chance to play the songs I grew up on was sufficient to coax me out of retirement. Coming back to the challenge of making music instead of just appreciating it, it took a while for rusty hands to find their form, with weekly rehearsals in Huddersfield essential to getting my playing up to scratch.

Although that year closed as I took tentative steps back on stage in London, it was in 2019 that the journey took stride. Playing to thousands across Europe at Keep It True, Up The Hammers, and Party.San, it was a privilege to share these festivals with highly lauded artists and a passionate fanbase – to whom I always bore my heart in performance.

Starting 2020 strong with shows in Rome and a Belgian castle, any momentum soon crashed to a halt – along with the events industry and world in general. With entire populations placed under house arrest, it was hard to persist under the immediacy of making sense of the moment.

Band members came and went, and others declared their opposition to the age with vociferous conviction – earning enmity for unyielding words. It was a very different Solstice that emerged from these trials three years later…

… and one that never quite gelled with me as it once did. Suddenly finding myself in the firing line for words spoken by others, with dishonourable demands to distance or discredit, I felt doors close far faster than they had opened. The message, and the perceived need to set and be set an example in all arguments, became louder than the music.

Inspired again by the label signing, I continued with a number of high profile gigs through 2024. Ever alert to physical reprisal threatened in forums, the distractions were high and my playing sometimes sloppy, with joyless tension far tauter than any string. Pulling in professional effort before I started my Academy studies culminated in a far more successful September which saw a standout show at Prophecy Fest and a mini-tour across Finland.

With members dispersed across England and Wales, in-person rehearsals were few and far between, and time spent together increasingly bitter as frustrations came to the fore. Unaccustomed to playing remotely and home recording beyond synth-dabbles, I struggled with the new way of doing things and especially not meeting bandmates for months on end.

Balancing the band with studies and weekend work was a challenge in itself, and although I prioritised rehearsals in my calendar, short-notice cancellations and rescheduling took their toll. My final rehearsal with the band was over a year ago, and despite sustaining my availability (to the point of losing work shifts) and practising nightly between assessments, we had no further in-person contact.

It was an untenable situation, draining and unhealthy for everyone. After being presented with an unbalanced ultimatum during assessment week where my attempts to discuss the matter in person were rebuked, I chose to leave.

The relief comes from closure. My life has hardly stopped since last June, with studies and more taking precedence now I can devote my better energies towards them. I will keep the Solstice section up on the website, as to minimise my involvement would be reductive, craven cowardice.

But also I look forward to the long-awaited next album, having played a part in its foundation. There is some magnificent music to come, whenever it comes, and I will eagerly listen with the same spirit as those in the front row who inspired me to continue.

https://heathenstorm.com/2026/03/22/seven-years-of-solstice/ #doommetal #equinox #keepittrue #livemusic #metal #music #partysan #prophecyfest #solstice #upthehammers
The Wizard of Speed and Time (1988)

Looking back to some cult 80s kino…

Partly autobiographical, sometimes farce, The Wizard of Speed and Time follows emerald-clad director, writer, and stop-motion effects wiz Mike Jittlov’s attempts to break into Hollywood in the 1970s.

It was a different era of effects, where everything was analogue. Film was shot by hand and tape reels spliced together, with effects themselves painted on the original frames. The process is explained and demonstrated clearly, with the film as much an instruction guide for others to create as a showcase of Jittlov’s skill. The effects themselves, although clearly unreal, still hold up today.

This raw creativity is contrasted with the bureaucracy and betrayals of Hollywood culture, with the cynicism of the system present in even the opening song. One scene that always comes to mind juxtaposes Hollywood’s budget-busting use of the ‘latest’ digital technology with shots of Jittlov joyfully working in his garage.

Although playing to the schmaltzy, overly sentimental style of 1980s movies, (with a few blasts of humour that probably wouldn’t be accepted today), the film has a sincere heart. More than anything, it shows how individual determination and ingenuity can manifest movie magic – with the right kind of support.

It inspired me to dream beyond merely consuming cinema, and to find my own path towards creating it.

I don’t know if it ever formally made it to the streaming services, but an upscaled version of the Laserdisc edition is available to watch in good old 4:3 on YouTube:

https://youtu.be/5lRL85V7oD4

https://heathenstorm.com/2026/03/12/the-wizard-of-speed-and-time-1988/ #1980s #cinema #cult #filmmaking #mikejittlov #specialeffects #thewizardofspeedandtime
Owning the means of (Audio) Production

I’ve been spending some laptop time in one of our louder studios of late. Since switching to Linux for philosophical reasons, the challenge emerged of integrating this sometimes under-supported operating system into the workflow expected of an audio professional. This article will be updated on the go as I plough through the pitfalls, sharing the tips and workarounds I’ve discovered along the way to make the music flow.

Although many Linux distributions exist, some specialised for creatives, my explorations have been using stock Linux Mint 22.3. Often using base packages provided by the Software Manager for maintenance and stability, rather than the latest and greatest. With that in mind, the journey can begin.

Caution: The command line lurks ahead…

Not only Pulses and Pipes, but ALSA

Although the “Plug and Play”-ability of Linux hardware has improved dramatically over the years, use of audio hardware demands a deeper understanding of the different layers that come together to control sound output.

At the base level, there is ALSA, the Advanced Linux Sound Architecture. This is a kernel-level device layer that connects to the hardware directly. Offering the basics for sound capture and playback, but nothing much more than that. Software-based Virtual Devices can also be configured at this level, which I’ll touch on later.

Historically, PulseAudio was implemented as a user-level layer on top of ALSA, allowing the mixing of multiple audio streams. Individual volume levels could be set for each application, and sound routing could also be switched on the fly, such as when headphones were plugged in. Applications would use the PulseAudio API to connect to this sound server instead of hitting the hardware directly, with the complexities abstracted away.

Another audio server, JACK (JACK Audio Connection Kit. Gotta love a recursive acronym!) was used at a professional level, also abstracting ALSA. Designed for low-latency studio applications, it would allow for accurate synchronisation and explicit audio routing. However, this server was incompatible with PulseAudio, as only one or the other could connect to ALSA at a time. Systems running both required a lot of workarounds and bridging.

PipeWire is a modern evolution and replacement of both PulseAudio and JACK, revised to handle the needs of both audio and video processing as well as MIDI transfer. Able to emulate both the API and toolset of its predecessors, this layer is low-latency by design.

Modern audio implementations mostly use PipeWire, although some applications may hit ALSA directly. A few caveats remain when switching between the two.
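A quick way to confirm which server is actually answering on a given machine: the PulseAudio-compatible layer will identify itself, and on a PipeWire system the name gives the game away. (A minimal sketch; pactl is assumed installed, as on stock Mint, and the fallback message is my own.)

```shell
# Ask the PulseAudio-compatible server to identify itself.
# On a PipeWire system the Server Name reads something like
# "PulseAudio (on PipeWire x.y.z)".
check_sound_server() {
    pactl info 2>/dev/null | grep "^Server Name" \
        || echo "No PulseAudio-compatible server found"
}
check_sound_server
```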

The DAW is the Law

Although I’ve limped along with GarageBand on other devices, the job demands a Digital Audio Workstation (DAW) with a bit more control.

Many professional options are available, with Ableton Live, Avid Pro Tools, and Apple Logic Pro coming highly regarded for Windows and MacOS. Linux options are somewhat more limited, but Reaper offers a native version. All these options come with a price-tag, often shifting to a subscription licence instead of owning outright.

Ever the zealot, I went with Ardour – which has the advantage of being free-as-in-speech Open Source software. Donations to the developers are welcome for a ready-to-run supported binary version, but the source code is available for anyone who wishes to build their own.

As for me, I grabbed the pre-built version 8.4 package from the Linux Mint Software Manager. It’s a few versions behind, but sufficient to the task.

sudo apt install ardour

(Using a DAW effectively is beyond the scope of this article, mostly because I’m still learning the intricacies myself!)

One thing I did notice when connecting straight to ALSA is that my dock’s audio only permits a 48 kHz sample rate, which made importing CD-rate 44.1 kHz stems (individual audio tracks) a little troublesome, with clicks and pops aplenty. As expected, ALSA also takes sole control of the device, removing it from the available list in Sound Settings.

Switching to the PulseAudio system (running through PipeWire as explained earlier) enabled sample rates from 8 kHz to 192 kHz through the ‘Default Playback’ device, which output to the laptop’s speakers instead of through the dock. Switching the default through the Sound Settings panel soon got things coming out of the expected speakers at the right rate, without stealing control.
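To see which rates the raw hardware will actually accept, ALSA can dump a device’s constraints before playback. A hedged sketch: the card number 2 matches my aplay -l output, the sample file ships with alsa-utils, and the fallback line is my own.

```shell
# Dump the hardware constraints (including RATE) for a given ALSA card.
# hw:2,0 is the card/device pair reported by `aplay -l`; adjust to taste.
show_rates() {
    aplay -D "hw:${1:-2},0" --dump-hw-params \
        /usr/share/sounds/alsa/Front_Center.wav 2>&1 | grep RATE \
        || echo "RATE: unknown (no such card here)"
}
show_rates 2
```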

Pull the Plug-in

Ardour comes with a bevy of workable LV2 ACE plugins to handle the basics of Compression, Gating, et al. But most of these rely rather more on sliders and numbers than on the familiar knobs and blinkies of a real mixing desk. Downloading the pre-built binary improves the look; nonetheless, the effects chain is easy to navigate, allowing drag-and-drop visualisation of where everything clicks together.

It is also compatible with industry-standard plugin formats such as Virtual Studio Technology (VST), offering a more familiar interface, but here is where Linux users hit a snag. The underlying format of these plugins is designed for Windows, and is thus incompatible.

Undismayed, I found yabridge able to convert the plugins so that they work just fine, with full custom interface intact. Utilising the Windows compatibility layer offered by Wine, yabridge relinks the VSTs to the equivalent native libraries, allowing them to be added to Ardour. The Software Manager has an older version of wine, but again it does the job:

sudo apt install wine-installer

Once wine is installed, grab a prebuilt yabridge release (current version 5.1.1) as a tarball, then extract it to ~/.local/share (‘~’ being the home directory).

tar -C ~/.local/share -xavf yabridge-5.1.1.tar.gz

Copy your .vst files into ~/.vst3, then just run the following from ~/.local/share/yabridge:

yabridgectl add ~/.vst3
yabridgectl sync

After a little churning, a bunch of new directories will be created under ~/.vst3, storing .so files for each of the plugins. The original files can be deleted if need be (they’ll just show up as errors), but ultimately the working versions are easy to find in Ardour’s Plugin Manager, where they can be enabled and added to the chain.
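To confirm what actually got bridged, yabridgectl can report its state. A sketch assuming the install location used above; the raw-search fallback and message are my own.

```shell
# List what yabridge knows about, falling back to a raw search for the
# generated .so files under ~/.vst3 if yabridgectl isn't where expected.
check_bridged() {
    ~/.local/share/yabridge/yabridgectl status 2>/dev/null \
        || find ~/.vst3 -name '*.so' 2>/dev/null | grep . \
        || echo "no bridged plugins found"
}
check_bridged
```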

Donning my Electric AXE

Now we’re mixing it up, the next challenge is to get audio in and out.

I’ve been using the IK Multimedia AXE I/O One as external soundcard of choice for a few years now. An ideal device that can receive 1/4″ jack and balanced XLR input, sending the post-DAW signal to headphones, amp, and line-out. Connecting via USB-C, it works with just about everything. Although IK does not formally support Linux, a little tweaking can coax it into life.

On first plugging it in, the surprise is that nothing happens. Checking sound settings reveals a new analogue input source, but nothing else. Time to troubleshoot.

First thing I tried was listing the USB devices to make sure it had been picked up properly.

lsusb
[...]
Bus 001 Device 007: ID 1963:00bb IK Multimedia AXE IO One
[...]

So at least it exists. Next I checked the ALSA hardware layer.

aplay -l
[...]
card 2: One [AXE IO One], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
[...]

So it’s there as a playback device. Next comes the USB Audio kernel module, just to make sure that’s working:

lsmod | grep snd_usb_audio
[...]
snd_usb_audio         573440  4
[...]

At this point I was able to see the card as an ALSA device in Ardour, but it still wasn’t available elsewhere. So, I needed to go higher up to Pipewire:

pactl list cards | grep -A20 -i axe
[...]
Name: alsa_card.usb-IK_Multimedia_AXE_IO_One_0700624-02
[...]
Profiles:
    off: Off (sinks: 0, sources: 0, priority: 0, available: yes)
    output:multichannel-output+input:mono-fallback: Multichannel Output + Mono Input (sinks: 1, sources: 1, priority: 101, available: yes)
    output:multichannel-output: Multichannel Output (sinks: 1, sources: 0, priority: 100, available: yes)
    pro-audio: Pro Audio (sinks: 1, sources: 1, priority: 1, available: yes)
    input:mono-fallback: Mono Input (sinks: 0, sources: 1, priority: 1, available: yes)
Active Profile: pro-audio
[...]

So, looking at this, it seems the card IS visible to PipeWire, with an audio sink (output), but the pro-audio active profile doesn’t play nice with the desktop.

Investigating the Pipewire sinks further:

pactl list short sinks
[...]
61  alsa_output.usb-IK_Multimedia_AXE_IO_One_0700624-02.pro-output-0  PipeWire  s32le 3ch 48000Hz  SUSPENDED
[...]

Now I know it’s visible, I can set it as the default sink and send a test signal to the PulseAudio device exposed by PipeWire:

(I could also have sent it to the plughw:2,0 ALSA device, which wraps the raw hardware.)

pactl set-default-sink alsa_output.usb-IK_Multimedia_AXE_IO_One_0700624-02.pro-output-0
speaker-test -D pulse -c 2 -f 4000 -r 48000 -t sine -l 0

It was at this point I realised my headphone volume was maxed, and 4000 Hz really is an unpleasant frequency.
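For anyone following along, a gentler variant of the same test spares the ears: the lower frequency and single loop are my own parameters, and the fallback message is mine too.

```shell
# Two-channel sine test at a kinder 440 Hz, played once rather than looping
# forever (-l 1 instead of -l 0). Turn the headphones down first anyway.
gentle_test() {
    speaker-test -D pulse -c 2 -f 440 -r 48000 -t sine -l 1 2>/dev/null \
        || echo "speaker-test unavailable or no PulseAudio device"
}
gentle_test
```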

The card was working at the desktop level, and after a swift blast of ‘Procreation (Of the Wicked)’ to soothe my delicate ears, I went to see if I could use it this way in Ardour. After some starting and stopping of the DAW’s sound server, it worked!

However, the card was still not available in Sound Settings, probably due to the aforementioned pro-audio profile. So I just needed to map it into something Pipewire could use elsewhere:

(Note to the overwhelmed reader: THIS IS THE IMPORTANT BIT!!!!!)

pactl load-module module-remap-sink sink_name=AXE_IO_STEREO master=alsa_output.usb-IK_Multimedia_AXE_IO_One_0700624-02.pro-audio channels=2 master_channel_map=front-left,front-right channel_map=front-left,front-right

And with that, the AXE popped up in the Sound Settings with a somewhat mangled name – but at least it works and I could test it from there.

Going back to Ardour, the default PulseAudio device could be changed from this control panel, and the audio switched seamlessly. Success!

Finally, all I had to do was add the remap to a script to run whenever I wanted to plug the card in. Automating this can be finessed later, but it’ll do for now.
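Here’s roughly what that script looks like, as a minimal sketch. The sink and module names come from the pactl commands above; the script name and the idempotence check (so the module isn’t loaded twice) are my own additions.

```shell
#!/bin/sh
# axe-up.sh (my own name for it): expose the AXE's pro-audio sink as a
# plain stereo sink the desktop can use. Safe to run repeatedly.
MASTER="alsa_output.usb-IK_Multimedia_AXE_IO_One_0700624-02.pro-audio"

if pactl list short sinks 2>/dev/null | grep -q "AXE_IO_One"; then
    # Only load the remap module if it isn't already there.
    pactl list short modules | grep -q "sink_name=AXE_IO_STEREO" \
        || pactl load-module module-remap-sink \
            sink_name=AXE_IO_STEREO master="$MASTER" channels=2 \
            master_channel_map=front-left,front-right \
            channel_map=front-left,front-right
else
    echo "AXE not plugged in; nothing to do"
fi
```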

Welcome to Helvum

Helvum is a great little tool to help visualise audio flow. Acting as a virtual patchbay, signals can be dragged from application to audio sink, with the remapped AXE sink appearing as both a Playback input and output.

The AXE itself appears with three playback_AUX channels. AUX0 and AUX1 are Left and Right headphone/line out respectively, and AUX2 the amp output.

If patches are lost for whatever reason, they can be remapped here. Counter-intuitively, patches are deleted by dragging the same link from node to node again, but this soon becomes second nature.

sudo apt install helvum

A Divine Comedy of Compatibility

Broadcasting beyond the confines of my laptop into a wider world of connectivity, the next step is to get it to speak the Dante protocol. Developed by Audinate, Dante is the media industry standard for synchronising and transmitting low-latency multi-track audio (and video) digital data across Ethernet, with bandwidth far in excess of the multicores of old. And of course, it isn’t supported on Linux.

Fortunately, a clean-room reverse-engineering of the protocol exists. Appropriately named Inferno, it is a software-only implementation that is still very much experimental and not recommended for real-world productions. However, it should be sufficient for my goal of sending a ‘Band in a Box’ from my laptop to a mixing desk.

Like the author’s depiction of the afterlife, getting this working requires purgatorial levels of fault finding and configuration hacking. As of writing, I’m not quite there yet, but here’s what I have so far…

First, I needed to build everything. Following the instructions at the main code repository, I soon realised I needed to obtain a second package – statime – to allow for network clock synchronisation.

git clone --recurse-submodules -b inferno-dev https://github.com/teodly/statime
git clone --recursive https://gitlab.com/lumifaza/inferno.git

This got me all the code I needed. As both projects are written in the Rust language, I also had to install the latest version of the language via rustup, despite my reluctance to pipe random code from the Internet into the shell.

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

With the code in place, I built statime with cargo build, and edited the inferno-ptpv1.toml file to include my correct ethernet interface, enp4s0. Then I kicked everything off:

sudo target/debug/statime -c inferno-ptpv1.toml

Lots of trace messages scrolled up, so I thought I’d better stop it for now to build the main virtual ALSA device. First I needed some extra development libraries:

sudo apt install libasound2-dev

And then I ran another cargo build from the alsa_pcm_inferno directory. This created a library which needed to be linked into the correct directory.

cd /usr/lib/x86_64-linux-gnu/alsa-lib
sudo ln -s ~/Development/inferno/target/debug/libasound_module_pcm_inferno.so .

With the virtual device library in place, the next step was to get it to appear to ALSA. Taking hints from both the readme documentation and a useful set of forum posts, I created an .asoundrc file in my home directory, roughly containing the following:

pcm.fixed {
    type plug
    slave.pcm "inferno"
    hint {
        show on
        description "Plug - Inferno ALSA"
    }
}
pcm.inferno {
    type inferno
    rate 48000
    NAME "daryl_phantom"
    SAMPLE_RATE "48000"
    TX_CHANNELS 2
    RX_CHANNELS 2
    BIND_IP "enp4s0"
    hint {
        show on
        description "RAW - Inferno ALSA"
    }
}
ctl.fixed {
    type hw
    card 10
}
ctl.inferno {
    type hw
    card 11
}

This would create a raw ALSA device, as well as a plug for that device to allow it to be called from Pipewire. I knocked the available receive and transmit channels down to two apiece, just to make it easier to test by sending two-channel audio from the command line.

Happily, the virtual devices became visible to ALSA:

aplay -L
[...]
fixed
    Plug - Inferno ALSA
inferno
    RAW - Inferno ALSA
[...]

But not to aplay -l, which only lists physical hardware.

At this point I was ready for testing, so I chained my laptop’s Ethernet to a pre-existing ad-hoc Dante network consisting of a Midas M32, a DigiCo Quantum 225, and a MacBook running Dante Controller and Reaper, making sure to set my Ethernet IP to the same range as the rest. The intent was to run some test sounds over the network to see if they worked.

ffmpeg -y -i /usr/share/sounds/alsa/Front_Center.wav -ac 2 -f wav -acodec pcm_s32le /tmp/Front_Center_32.wav
aplay -D inferno /tmp/Front_Center_32.wav

The main issue in all of this was the clock. My Ethernet port has no hardware timing, so I also experimented with setting up a software clock with ptp4l:

sudo apt install linuxptp
sudo ptp4l -i enp4s0 -m -S
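To check up front whether a NIC offers hardware timestamping at all, ethtool can report its capabilities. A sketch: enp4s0 is my interface name, and the fallback message is my own (reading capabilities may also need root on some systems).

```shell
# ethtool -T lists an interface's PTP/timestamping capabilities.
# "hardware-transmit"/"hardware-receive" entries mean the NIC can stamp
# packets itself; software-only NICs are why ptp4l needs -S here.
show_timestamping() {
    ethtool -T "${1:-enp4s0}" 2>/dev/null \
        || echo "ethtool unavailable, needs root, or no such interface"
}
show_timestamping enp4s0
```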

Eventually, I was able to get a positive response from playing audio into the virtual device, showing that the clock was eventually found and stabilised. However, the device still wasn’t appearing in Ardour, despite my trying other ways to access the plug.

At this point the studio session was over, so I had to continue at home, where the clock didn’t work at all. After subsequent digging, I conclude that a physical Dante hardware device must be available to provide a reliable clock signal, even when the software clock is acting as master, so my investigations have paused for now.

Hopefully I’ll be able to get these last few tweaks working when I’ve got access to a Dante-compatible desk again.

The journey continues

Although I’m a lot closer than I was when I started, there are still a few rough edges to smooth before Linux can fully claim its place in a professional audio environment. More research and experimentation is essential, and this article will be expanded accordingly as I go.

https://heathenstorm.com/2026/03/01/owning-the-means-of-audio-production/ #alsa #ardour #audio #dante #daw #ikmultimedia #inferno #linux #pipewire #production #vst #yabridge