Been working on something for a while and finally put it out there: a public security challenge against a threshold cryptography system I built for my own infrastructure.

Four servers, four countries, four hosting providers. The group signing key was generated via distributed key generation (Pedersen DKG); no single server holds the full secret. I literally can't extract it myself. The challenge is to forge a valid FROST Ed25519 signature over today's published challenge string.
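If you want to sanity-check a submission locally: a FROST Ed25519 signature verifies as an ordinary Ed25519 signature, so the check needs nothing threshold-specific. A minimal PyNaCl sketch with placeholder values (the real group key and challenge string are published on the site):

```python
# pip install pynacl
# Key, message, and signature values below are placeholders, not the real ones.
from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

sk = SigningKey.generate()                # stand-in signer for the demo
group_pubkey = bytes(sk.verify_key)       # 32-byte group public key
challenge = b"placeholder daily challenge string"
signature = sk.sign(challenge).signature  # 64-byte Ed25519 signature

try:
    VerifyKey(group_pubkey).verify(challenge, signature)
    print("signature accepted")
except BadSignatureError:
    print("signature rejected")
```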

What makes it different from a typical CTF:

→ It's not a weekend event. It runs 24/7 for 90 days. The servers are real production boxes running real software (Nextcloud, Gitea, a team API, Grafana). Not Docker containers with planted vulns.

→ Post-quantum hybrid. The audit chain carries ML-DSA-44 signatures alongside the FROST threshold sigs, with a downgrade-detection flag baked into the signed payload. Stripping the PQ signature invalidates the classical one.
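Roughly how the binding works (a toy sketch of the idea, not the actual wire format; field names and sizes here are illustrative):

```python
# The classical signature covers a flag plus a digest of the PQ signature,
# so stripping or swapping the ML-DSA-44 part breaks the classical check.
import hashlib
from nacl.signing import SigningKey

sk = SigningKey.generate()            # stand-in for the threshold key
record = b"audit-dag-event"           # placeholder audit payload
pq_sig = b"\x00" * 2420               # placeholder ML-DSA-44 signature (2420 bytes)

bound = record + b"|pq=1|" + hashlib.sha256(pq_sig).digest()
classical_sig = sk.sign(bound).signature

# A verifier rebuilds `bound` from the record and the attached PQ signature;
# if pq_sig was removed, `bound` differs and verification fails.
```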

→ There's a spiking neural network watching the cluster. 258 neurons with STDP learning and four neuromodulators (dopamine, noradrenaline, acetylcholine, serotonin). It processes DAG events, network metrics, and system telemetry as spike trains. A local LLM reads the brain's internal state every five minutes and reports what it observes. Currently it says the cluster is calm. I want to see what it says when someone's actually poking around.
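For the curious, the plasticity mechanism in toy form (generic pair-based STDP with a global modulatory gain; not the production network or its parameters):

```python
import numpy as np

n, steps, dt = 258, 1000, 1e-3               # neurons, steps, 1 ms bins
tau = 20e-3                                  # trace time constant (s)
a_plus, a_minus = 0.01, 0.012                # potentiation / depression
w = np.random.rand(n, n) * 0.1               # synaptic weights
pre_trace = np.zeros(n)
post_trace = np.zeros(n)
dopamine = 1.0                               # modulatory gain; rises on salient events

for _ in range(steps):
    spikes = (np.random.rand(n) < 0.02).astype(float)  # placeholder spike train
    pre_trace += -pre_trace * dt / tau + spikes
    post_trace += -post_trace * dt / tau + spikes
    # potentiate where a post spike follows pre activity, depress the reverse,
    # with the whole update scaled by the dopamine-like signal
    w += dopamine * (a_plus * np.outer(pre_trace, spikes)
                     - a_minus * np.outer(spikes, post_trace))
    np.clip(w, 0.0, 1.0, out=w)
```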

The detection layer is consensus-based. Cross-peer Merkle verification, honey ports, file canaries, DNS sentinels — but quarantine requires multiple observers to agree before acting. One node can't panic the cluster on its own.
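Schematically (observer names and the threshold are illustrative):

```python
# Quarantine fires only when multiple independent observers agree.
def should_quarantine(alerts, quorum=2):
    """alerts: dict mapping observer name -> whether it flagged the node."""
    return sum(alerts.values()) >= quorum

alerts = {"merkle_check": True, "honeyport": False,
          "file_canary": True, "dns_sentinel": False}
print(should_quarantine(alerts))  # True: two observers agree
```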

I've already broken it myself twice during deployment. Rolled a binary update and got cascade-quarantined by my own Merkle checker. Tripped a file canary while rotating honeypot credentials. Those incidents are published. The system catches real mistakes.

Five tiers from foothold to crown jewel. No cash bounty, just your name on the board, CVE attribution, and write-up rights. Safe harbour under disclose.io terms.

https://hyveguard.com

#infosec #security #cryptography #thresholdcrypto #ctf #FROST #postquantum #pentest #redteam #hacking #spikingneuralnetwork #neuromorphic

@eff @mttaggart @GossiTheDog @briankrebs @lcamtuf

HyveGuard — break the threshold

A guy with no engineering background and an AI built a server-mesh defence system. Bifrost is open. Come break it.

New preprint! What happens if you add neuromodulation to spiking neural networks and let them go wild with it? TLDR: it can improve performance, especially in challenging sensory processing tasks.

Preprint:

https://www.biorxiv.org/content/10.1101/2025.07.25.666748v1

Short explainer thread on Bluesky:

https://bsky.app/profile/neural-reckoning.org/post/3lz4rihm2622e

#neuroscience #ComputationalNeuroscience #SpikingNeuralNetwork

Submissions (short!) due for SNUFA spiking neural networks conference in <2 weeks!

https://forms.cloud.microsoft/e/XkZLavhaJe

More info at https://snufa.net/2025/

Note that we normally get around 700 participants, and recordings go on YouTube and get 100s-1000s of views, so it's a good place to promote your work.

Please repost.

#neuroscience #SpikingNeuralNetwork #SpikingNeuralNetworks #snn #snufa

I recently played around with #RateModels using #NESTsimulator. Compared to #SNN, RMs focus on the average firing rates of #NeuronPopulations, simplifying the analysis of large networks. They effectively capture collective dynamics like #oscillations and #synchronization, though they miss precise spike timing details. Thus, both approaches have their merits. Here is a brief overview:
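The core idea in a few lines of NumPy (a generic sketch for illustration; not NEST code):

```python
# Rate model: tau * dr/dt = -r + phi(W r + I_ext)
import numpy as np

n, tau, dt, steps = 100, 10e-3, 1e-3, 500
W = np.random.randn(n, n) / np.sqrt(n)   # recurrent weights
I_ext = 0.5 * np.ones(n)                 # constant external drive
r = np.zeros(n)                          # population firing rates
phi = np.tanh                            # smooth rate nonlinearity

for _ in range(steps):
    r += dt / tau * (-r + phi(W @ r + I_ext))
# r holds collective rates; individual spike times never appear.
```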

🌍 https://www.fabriziomusacchio.com/blog/2025-08-28-rate_models/

#CompNeuro #Neuroscience #Python #PythonTutorial #SpikingNeuralNetwork

📚 New preprint by Vafaii, Galor & Yates: Brain-like variational inference. They derive #SpikingNeuralNetwork dynamics directly from variational free energy minimization via online natural #GradientDescent, yielding the iterative Poisson #VAE (iP-VAE) with strong sparsity, reconstruction & #BiologicalPlausibility.

🌍 https://arxiv.org/abs/2410.19315
🧑‍💻 https://github.com/hadivafaii/IterativeVAE

#Neuroscience #MachineLearning #SNN #CompNeuro

New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with Pengfei Sun and awesome MSc student Ziqiao Yu).

https://arxiv.org/abs/2507.16043

Surrogate gradients are popular for training SNNs, but some worry whether they really learn complex temporal spike codes. TLDR: we tested this, and yes they can!
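For anyone new to the trick, it looks roughly like this (a generic SuperSpike-style sketch in PyTorch; illustrative, not the paper's code):

```python
# Forward is a hard threshold; backward substitutes a smooth
# fast-sigmoid derivative so gradients can flow through spikes.
import torch

BETA = 10.0  # surrogate sharpness

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                # emit spike where v > 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid surrogate in place of the true (zero a.e.) derivative
        return grad_output / (BETA * v.abs() + 1.0) ** 2

spike = SurrogateSpike.apply
v = torch.randn(8, requires_grad=True)        # toy membrane potentials
spike(v).sum().backward()                     # gradients flow despite threshold
```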

We also find that delay-based spiking neural networks seem to degrade in more human-like ways than networks without delays.

Check the next post for links to the code and dataset which you can easily use to test your own spike based learning algorithms and models.

Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks

We investigate the extent to which Spiking Neural Networks (SNNs) trained with Surrogate Gradient Descent (Surrogate GD), with and without delay learning, can learn from precise spike timing beyond firing rates. We first design synthetic tasks isolating intra-neuron inter-spike intervals and cross-neuron synchrony under matched spike counts. On more complex spike-based speech recognition datasets (Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC)), we construct variants where spike count information is eliminated and only timing information remains, and show that Surrogate GD-trained SNNs are able to perform significantly above chance whereas purely rate-based models perform at chance level. We further evaluate robustness under biologically inspired perturbations -- including Gaussian jitter per spike or per neuron, and spike deletion -- revealing consistent but perturbation-specific degradation. Networks show a sharp performance drop when spike sequences are reversed in time, with a larger drop in performance from SNNs trained with delays, indicating that these networks are more human-like in terms of behaviour. To facilitate further studies of temporal coding, we have released our modified SHD and SSC datasets.

Proud to have managed to finish a #neuromorphic manuscript with Chiara De Luca, Mirco Tincani and Elisa Donati just before the end of the year!

It demonstrates the benefits of using #braininspired principles of computation for achieving robust computation across multiple time-scales, despite the inherent variability of the underlying computational substrate (silicon neurons that faithfully emulate biological ones):
A neuromorphic multi-scale approach for heart rate and state detection
https://doi.org/10.21203/rs.3.rs-5737326/v1
#neuromorphic #wearable #neuroai #SpikingNeuralNetwork

It’s actually very easy and straightforward setting up a large-scale, multi-population #SpikingNeuralNetwork (#SNN) with the #NESTsimulator. Here is an example with two distinct populations of #Izhikevich neurons:

🌍 https://www.fabriziomusacchio.com/blog/2024-06-30-nest_izhikevich_snn/
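A compressed sketch of that kind of setup (assuming NEST 3.x; population sizes and parameters are illustrative, not the post's exact values):

```python
# Two-population Izhikevich network in NEST 3.x
import nest

nest.ResetKernel()
# regular-spiking excitatory and fast-spiking inhibitory populations
exc = nest.Create("izhikevich", 800, {"a": 0.02, "b": 0.2, "c": -65.0, "d": 8.0})
inh = nest.Create("izhikevich", 200, {"a": 0.1, "b": 0.2, "c": -65.0, "d": 2.0})
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
rec = nest.Create("spike_recorder")

nest.Connect(noise, exc + inh, syn_spec={"weight": 2.0})
nest.Connect(exc, exc + inh, {"rule": "fixed_indegree", "indegree": 80},
             {"weight": 0.5, "delay": 1.0})
nest.Connect(inh, exc + inh, {"rule": "fixed_indegree", "indegree": 20},
             {"weight": -1.0, "delay": 1.0})
nest.Connect(exc + inh, rec)

nest.Simulate(1000.0)
print(f"{rec.n_events} spikes recorded")
```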

#ComputationalNeuroscience #CompNeuro #Neuroscience

Mmm...
Once I was surfing the net for papers in CNS and came across a paper about Natural Language Processing all of a sudden, and I realized there aren't any real spiking neural networks that are biologically plausible for NLP.

I started reading some papers and books to gain more knowledge about language comprehension and language generation, but haven't found any real model suggestions yet.

Does anyone know any labs working on NLP in computational neuroscience, or how to connect with them?

#computationalneuroscience #cns #SpikingNeuralNetwork #NLP

A Thousand Brains: A New Theory of Intelligence

Hi :)
I'm a new member of this amazing community and I would like my first post to be on the amazing breakthrough of Jeff Hawkins.

Before giving my opinion, I would like everyone to tell me whether they know the theory or have read the book or the original papers, and if so, what's their insight on them?

I think the material in this research is really fascinating and I would like to engage and talk about it more.

#cns #SpikingNeuralNetwork #theory_of_brain #jeff_hawkins
#neuroscience #computationalneuroscience