'Mind-captioning' technique can read human thoughts from brain scans

Reading brain activity with advanced technologies is not a new concept. However, most techniques have focused on identifying single words associated with an object or action a person is seeing or thinking of, or on matching brain signals to spoken words. Some methods have used caption databases or deep neural networks, but these approaches were limited by the word coverage of their databases or risked introducing information not actually present in the brain activity. Generating detailed, structured descriptions of complex visual perceptions or thoughts remains difficult.

Medical Xpress
Team develops high-speed, ultra-low-power superconductive neuron device

A research team has developed a neuron device that holds potential for application in large-scale, high-speed superconductive neural network circuits. The device operates at high speed with ultra-low power consumption and tolerates parameter fluctuations during circuit fabrication.

Tech Xplore
Neural Nets Explained – With Fluxia, Johnny & the Coffee Pot | Episode 1

YouTube
Paired Neural Network for Matching Experimental and Predicted Infrared Spectra.
Anal. Chem. 2025
https://doi.org/10.1021/acs.analchem.5c01607
#infrared #neuralnets
Follow us on Bluesky: https://bsky.app/profile/clirspec.org
Paired Neural Network for Matching Experimental and Predicted Infrared Spectra

We present a novel machine learning (ML)-based scoring technique for determining the similarity between experimental and predicted infrared (IR) spectra for identification purposes. IR spectroscopy is a powerful technique used to identify the molecular structure and composition of a sample by measuring the unique vibrational frequency pattern of the molecule’s functional groups. Molecular identifications are often made by comparing experimental and reference spectra. However, the limited number of reference spectra available in spectral libraries can confound the identification process. Alternative identification procedures rely on in silico techniques to simulate spectra for a wide range of molecules. However, scoring spectral similarity between an experimental query and computationally predicted reference remains a significant challenge. Our proposed ML-based scoring technique overcomes these barriers by accurately and efficiently determining spectral similarity.
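
For intuition, here is a minimal sketch of a paired (Siamese-style) scorer in PyTorch: a shared encoder embeds both spectra and a small head emits a similarity score. The layer sizes and architecture are illustrative assumptions, not the network described in the paper.

```python
# Minimal sketch of a paired (Siamese-style) similarity scorer for IR spectra.
# Dimensions and layers are illustrative assumptions, not the paper's network.
import torch
import torch.nn as nn

class PairedSpectrumScorer(nn.Module):
    def __init__(self, n_points: int = 1024, embed_dim: int = 128):
        super().__init__()
        # Shared encoder maps a 1D spectrum to a fixed-length embedding.
        self.encoder = nn.Sequential(
            nn.Linear(n_points, 512), nn.ReLU(),
            nn.Linear(512, embed_dim),
        )
        # Small head turns the pair of embeddings into a score in [0, 1].
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, experimental: torch.Tensor, predicted: torch.Tensor) -> torch.Tensor:
        e = self.encoder(experimental)   # (batch, embed_dim)
        p = self.encoder(predicted)      # (batch, embed_dim)
        return self.head(torch.cat([e, p], dim=-1)).squeeze(-1)

model = PairedSpectrumScorer()
exp_spec = torch.rand(4, 1024)   # stand-in experimental spectra
pred_spec = torch.rand(4, 1024)  # stand-in in-silico predicted spectra
print(model(exp_spec, pred_spec))  # one similarity score per pair
```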

ACS Publications
Ah yes, the groundbreaking revelation that AI models can be squished into datasets 🤯. Now we can store these majestic neural nets in our back pockets, because who wouldn't want to lug around gigabytes of "innovation"? 📦🎉
https://www.scalarlm.com/blog/llm-deflate-extracting-llms-into-datasets/ #AIinnovation #AIstorage #neuralnets #techrevolution #dataengineering #HackerNews #ngated
LLM-Deflate: Extracting LLMs Into Datasets

Large Language Models compress massive amounts of training data into their parameters. This compression is lossy but highly effective—billions of parameters can encode the essential patterns from terabytes of text. However, what’s less obvious is that this process can be reversed: we can systematically extract structured datasets from
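
A rough sketch of the general idea (not ScalarLM's actual pipeline): walk a topic list, have the model enumerate questions, and record its answers as structured rows. The `generate` function below is a hypothetical stand-in for a real inference call.

```python
# Illustrative sketch: systematically query a model and collect structured
# (topic, prompt, response) rows as a dataset.
import json

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real inference call (not ScalarLM's API);
    # swap in your own model client. Returns canned text so the sketch runs.
    return f"(model output for: {prompt})"

def extract_dataset(topics, questions_per_topic=3):
    rows = []
    for topic in topics:
        # Ask the model to enumerate questions about the topic, then answer each.
        listing = generate(f"List {questions_per_topic} questions about {topic}, one per line.")
        for question in listing.splitlines()[:questions_per_topic]:
            rows.append({"topic": topic,
                         "prompt": question,
                         "response": generate(question)})
    return rows

print(json.dumps(extract_dataset(["thermodynamics", "sorting algorithms"]), indent=2))
```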

ScalarLM

OK last one (I think). This is a cool article that goes in-depth into the math of LLMs and why they appear non-deterministic even with temperature set to 0:

https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/

tldr: all of the forward operations through a neural network can be (and in practice are) run-to-run deterministic, so the common refrain of "concurrency + floating-point adds" isn't actually the cause. The real "cause" of the non-determinism is the other users in the system! Server load determines how many requests get batched together, and the kernels are not invariant to batch size, so the same query can return different results depending on everyone else's traffic.

There's a bunch of math about how to fix this which I skipped, but I thought the "other users in the system cause non-determinism" notion was fascinating!
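
A minimal way to see the batch-size effect, assuming PyTorch (an illustrative sketch, not the article's code): multiply the same row by the same matrix alone and inside a large batch, then compare the results bitwise.

```python
# Demonstrates the lack of "batch invariance": the same input row times the
# same matrix can give bitwise-different results depending on how many other
# rows share the batch, because the kernel picks a different internal
# reduction strategy. (On some backends the difference may be exactly zero;
# GPUs typically show a nonzero gap.)
import torch

B, D = 2048, 2048
a = torch.linspace(-100.0, 100.0, B * D).reshape(B, D)
w = torch.linspace(-100.0, 100.0, D * D).reshape(D, D)

row_alone    = (a[:1] @ w)   # row 0 computed in a batch of 1
row_in_batch = (a @ w)[:1]   # the same row 0 computed in a batch of 2048

print((row_alone - row_in_batch).abs().max())  # often > 0: not batch-invariant
```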

#neuralnets #llms #math

Defeating Nondeterminism in LLM Inference

Reproducibility is a bedrock of scientific progress. However, it’s remarkably difficult to get reproducible results out of large language models. For example, you might observe that asking ChatGPT the same question multiple times provides different results. This by itself is not surprising, since getting a result from a language model involves “sampling”, a process that converts the language model’s output into a probability distribution and probabilistically selects a token. What might be more surprising is that even when we adjust the temperature down to 0 (thus making the sampling theoretically deterministic, since the LLM always chooses the highest-probability token, known as greedy sampling), LLM APIs are still not deterministic in practice. Even when running inference on your own hardware with an OSS inference library like vLLM or SGLang, sampling still isn’t deterministic.
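
As a toy illustration of the sampling terms above (mine, not the article's): dividing logits by a temperature reshapes the distribution, and at temperature 0 sampling collapses to argmax, i.e. greedy decoding.

```python
# Temperature scaling and greedy decoding in miniature, on made-up logits.
import torch

logits = torch.tensor([2.0, 1.0, 0.5, -1.0])

def sample(logits: torch.Tensor, temperature: float) -> int:
    if temperature == 0.0:
        # Greedy: always pick the highest-probability token.
        return int(torch.argmax(logits))
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

print(sample(logits, 0.0))  # deterministic in theory: always token 0
print(sample(logits, 1.0))  # stochastic: varies run to run
```

Even in the temperature-0 branch, the article's point is that the logits themselves can differ run to run, which is where the practical non-determinism comes from.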

Thinking Machines Lab
🤖🆚📱 In this battle of neural nets versus cellular automata, prepare for a mind-numbing descent into a sea of #numbers that even a calculator wouldn't bother reading. 😂 Who knew that updating cells ON and OFF could reach levels of #complexity typically reserved for assembling IKEA furniture? 📊🔧
https://www.nets-vs-automata.net/ #neuralnets #cellularautomata #techhumor #IKEAassembly #HackerNews #ngated
Neural Nets vs. Cellular Automata

I just discovered the ARC-AGI initiative and the associated test to estimate how close "AI" models are to #AGI

https://arcprize.org/arc-agi

While I found the initiative interesting, I'm not sure I understand what in this test really guarantees that the model is capable of some form of generalization and problem-solving.
Wouldn't it be possible for specialized pattern-matching/discovering algorithms to solve such problems? (A toy sketch of what I mean follows below.)
I imagine some computer scientists, mathematicians or computational neuroscientists have already had a look at this, so would anyone know of some articles/blogs on the topic?
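
To make the question concrete, here is a toy brute-force "pattern discovery" solver over ARC-style grids, assuming numpy; the six-transform library is purely illustrative, and real ARC tasks are far richer.

```python
# Toy brute-force solver: search a fixed library of grid transformations for
# one consistent with all training pairs of an ARC-style task.
import numpy as np

TRANSFORMS = {
    "identity":  lambda g: g,
    "rot90":     lambda g: np.rot90(g),
    "rot180":    lambda g: np.rot90(g, 2),
    "flip_lr":   lambda g: np.fliplr(g),
    "flip_ud":   lambda g: np.flipud(g),
    "transpose": lambda g: g.T,
}

def solve(train_pairs, test_input):
    # Return the first transform consistent with every training pair.
    for name, f in TRANSFORMS.items():
        if all(np.array_equal(f(x), y) for x, y in train_pairs):
            return name, f(test_input)
    return None, None

# One training pair whose output is the input rotated 90° counterclockwise.
train = [(np.array([[1, 2], [3, 4]]), np.array([[2, 4], [1, 3]]))]
test = np.array([[5, 6], [7, 8]])

name, prediction = solve(train, test)
print(name)        # rot90
print(prediction)  # [[6 8]
                   #  [5 7]]
```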

Maybe @wim_v12e? Is this something you already looked at?

#AI #machineLearning #neuroscience #cognition #computationalNeuroscience #neuralNets #lazyWeb

ARC Prize - What is ARC-AGI?

Learn more about the only AI benchmark that measures AGI progress.

ARC Prize