@stOneskull

2 Followers
10 Following
2K Posts
this is my account to post articles off https://entangled.cloud
my real account: https://fosstodon.org/@stOneskull
stOneskull https://stOneskull.xyz
Show HN: Fin-primitives Zero-panic, decimal-precise trading types for Rust - https://entangled.cloud/117151040/show-hn-fin-primitives-zero-panic-decimal-precise-trading-types-for-rust??via=md
Show HN: Fin-primitives Zero-panic, decimal-precise trading types for Rust

I couldn't find a Rust crate that gave me validated financial types backed by decimal arithmetic. Everything I found either used f64 (unacceptable for order books), panicked on bad input, or was a thin wrapper around a single indicator.

fin-primitives provides:

- Price and Quantity newtypes over rust_decimal::Decimal, validated at construction — an invalid Price literally can't exist at runtime
- L2 OrderBook with sequence validation and atomic rollback if a delta would produce an inverted spread
- OHLCV aggregation from tick streams with bar invariants enforced on every push
- Streaming SMA, EMA (SMA-seeded), and RSI (Wilder smoothing matching TradingView/Bloomberg) that return SignalValue::Unavailable until warm-up completes
- Position ledger with VWAP average cost, realized/unrealized P&L net of commissions
- Composable RiskRule trait — plug in your own rules, breaches returned as a typed Vec, never swallowed

No unwrap, no expect, no panic in library code. cargo clippy denies all of them.

This is part of a larger set of 11 crates I've published for LLM and trading infrastructure (https://crates.io/users/Mattbusel), but fin-primitives is the one I think fills the biggest gap in the Rust ecosystem right now.

Happy to answer questions about the design decisions, especially the order book rollback mechanism and the indicator warm-up approach.
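The construction-time validation the post describes is language-agnostic, so here is a minimal Python analogue using the standard decimal module. The Price class and its ValueError behavior are illustrative only: the actual crate is Rust and, per its no-panic guarantee, returns Result values rather than raising.

```python
from decimal import Decimal, InvalidOperation

class Price:
    """Decimal-precise price that cannot exist in an invalid state.

    Illustrative sketch of the validated-newtype idea; the Rust crate
    returns a Result at construction instead of raising an exception.
    """
    __slots__ = ("value",)

    def __init__(self, raw: str):
        try:
            d = Decimal(raw)  # exact decimal arithmetic, never binary floats
        except InvalidOperation:
            raise ValueError(f"unparseable price: {raw!r}")
        if not d.is_finite() or d <= 0:
            raise ValueError(f"invalid price: {raw!r}")
        self.value = d

# Exact where binary floats drift: 0.1 + 0.2 != 0.3 in f64,
# but decimal-backed values sum exactly.
total = Price("0.1").value + Price("0.2").value
assert total == Decimal("0.3")
```

Because every constructor validates, downstream code (order books, ledgers) never needs to re-check its inputs, which is the property the post leans on.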

entangled dot cloud
Brain Implants Let Paralyzed People Type Nearly as Fast as Smartphone Users - https://entangled.cloud/117101253/brain-implants-let-paralyzed-people-type-nearly-as-fast-as-smartphone-users??via=md
Brain Implants Let Paralyzed People Type Nearly as Fast as Smartphone Users

As they imagine typing, implants translate brain signals into keystrokes on a standard digital keyboard.

It’s hard to picture a keyboard layout other than the one we know best. From laptops to smartphones, it’s an integral part of our digital lives. Scientists at Massachusetts General Hospital have now restored the ability to communicate by keyboard to two people with paralysis—using their thoughts alone.

Both people already had brain implants that could record their minds’ electrical chatter. The new system translated brain signals in real time as each person imagined finger movements. The system then accurately predicted the character they were trying to type.

The system learned to translate brain activity to physical intent after just 30 sentences. Typing speeds reached 22 words per minute with few errors, nearly matching speeds of able-bodied smartphone users.

“To our knowledge, this system provides the fastest… [brain implant] communication method reported to date based on decoding from hand motor cortex,” wrote the team.

The participants are part of the BrainGate2 clinical trial, a pioneering effort to restore communication and movement by decoding neural signals in people who have lost the use of all four limbs and the torso. One of the participants previously used the implants to translate his inner thoughts into text, but with mixed success.

Controlling a digital keyboard is far more intuitive and familiar, which makes it easier to grasp. Once a person learns to use the system, they don’t have to look at the keyboard, giving their eyes a break as they type with their minds. It also gives users full control of when, or when not, to share their thoughts, preventing private musings from accidentally leaking onto a screen or being broadcast with AI-generated speech.

All Hands on Deck

Parts of the brain hum with electrical activity before we speak.
Over the past decade, brain implants—microelectrodes that listen in and decode signals—have translated these seemingly chaotic buzzes into text or speech, allowing paralyzed people to regain the ability to communicate.

Methods vary. Some hardware takes the form of wafer-thin disks sitting on top of the brain and gathering signals from vast regions; other devices are inserted into the brain for more targeted recordings.

These systems are life changing. In a recent example, an implant translated the neural activity controlling the vocal muscles of a man with ALS. With just a second’s delay, the system generated coherent sentences with intonation, allowing him to sing with an artificial voice. Another device turned a paralyzed woman’s thoughts into speech with nearly no delay, so she could hold a conversation without frustrating halts. People have also benefited from a method that uses the neural signals behind handwriting for brain-to-text communication.

Brain implants aren’t purely experimental anymore: China recently approved a setup allowing people with paralysis to control a robotic hand. It’s the first such device available outside of clinical trials.

Perhaps the most widely used clinical solution is eye-tracking. Here, patients move their eyes to focus on individual letters, one at a time, on a custom digital keyboard. But the pace is agonizingly slow and prone to error. And prolonged screen time strains the eyes, making extended conversations difficult.

“Those systems take far too long for many users,” said study author Daniel Rubin in a press release, causing them to abandon the technology.

Tapping Away

For people who already know how to type, the standard keyboard layout—known as QWERTY—feels familiar and comfortable.
Fingers stretch to hit letters in the upper row, tap directly down for ones in the middle, and curl into a loose claw to hit bottom letters and punctuation. As fingers dance across the keyboard, parts of the motor cortex that control their motion spark with activity, precisely directing each placement. Mind-typing using a familiar keyboard, compared to a custom one, could feel more intuitive and relaxing.

Two people with tetraplegia gave the idea a shot. Participant T17 was diagnosed with ALS at 30, a disease that slowly destroys motor neurons, weakening muscles and eventually impairing breathing. Three years later, when he enrolled in the study, he’d lost control of his vocal muscles and relied on a ventilator. He could move only his eyes, but his mind was still sharp. The second participant, T18, was paralyzed by a spinal cord injury 18 months before enrollment.

Both had multiple brain implants in different areas. These were connected to cables that shuttled recordings to a computer system for real-time processing.

The participants used a simplified QWERTY digital keyboard containing all 26 letters, a space key, and three types of punctuation—a question mark, comma, and period. To train the system, the volunteers imagined stretching, tapping, or curling their fingers to type text prompts, while implants captured and isolated neural signals for each finger. After training, a deep learning model predicted intended characters, and a language model continuously attempted to autocomplete the sentence.

After practicing just 30 sentences, both participants could copy on-screen text or type whatever they wanted. When asked “what was the best part of your job,” T18 cheekily replied “the best part of my job was the end [of] the day.” Meanwhile, T17, a fan of The Legend of Zelda video games, told the researchers “you should try oracle of ages and seasons…another is skyward sword…the music in those games is great.”

Their typing speeds broke records.
T18 communicated at 110 characters, or roughly 22 words, per minute—20 characters more than a previous state-of-the-art method based on handwriting, wrote the team. The rate is nearly on par with able-bodied smartphone users of a similar age. Typing errors were consistently low and neared perfection after practice.

T17, with incomplete locked-in syndrome due to ALS, typed 47 characters a minute at a higher error rate. But he had full use of his vocabulary, unlike with previous systems that imposed word restrictions, and communicated much faster.

The performance differences could be due to where their implants are located. T18’s microarrays are on both sides of the brain, with some covering an area that controls all four limbs. T17’s implants are on only the left half of his brain, with less coverage of finger motor areas.

The team is now tweaking the system for longer use tailored to individuals. As disease progresses, the link between brain signals and keyboard characters may drift and produce more errors. But updating the algorithm is easy. The system needs only a few sentences to learn, so users could start each day mind-typing some thoughts to keep things dialed in.

Updates to the digital keyboard, like adding numbers or the return and delete keys, are in the works. Temporarily disabling the language model could also let participants type strong gibberish passwords, internet slang (ikr, btw, lol), and other non-standard words without being autocorrected.

The brain implant “is a great example of how modern neuroscience and artificial intelligence technology can combine to create something capable of restoring communication and independence for people with paralysis,” said study author Justin Jude.

The post Brain Implants Let Paralyzed People Type Nearly as Fast as Smartphone Users appeared first on SingularityHub.

entangled dot cloud
Quantum Machines Launches Open Acceleration Stack Alongside NVIDIA, AMD and Riverlane to Deliver Next Level of Hybridization - https://entangled.cloud/117022344/quantum-machines-launches-open-acceleration-stack-alongside-nvidia-amd-and-riverlane-to-deliver-next-level-of-hybridization??via=md
Quantum Machines Launches Open Acceleration Stack Alongside NVIDIA, AMD and Riverlane to Deliver Next Level of Hybridization

"The Open Acceleration Stack reflects the industry's shift from quantum computing demonstration to scaling and integration," ...

entangled dot cloud
Digital Twin of a Cell Tracks Its Entire Life Cycle Down to the Nanoscale

The simulation encompasses nearly all of a cell’s molecules over roughly two hours.

Five years ago, scientists watched in wonder as synthetic bacteria grew and split into daughter cells. The bacteria’s extremely stripped-down genome still supported its entire life cycle. It was a crowning achievement in synthetic biology that shed light on life’s most basic processes.

These processes can now be viewed digitally. This month, a team at the University of Illinois at Urbana-Champaign developed a virtual model of the bacteria tracking nearly all of a cell’s molecules down to the nanoscale. The researchers made this digital cell by combining several large datasets covering thousands of molecules and then animating them as the bacteria split in two.

The model is the latest in a growing effort to make digital twins of living cells. Mimicking diseases or treatments in the digital world offers a bird’s-eye view of cellular changes and could speed up drug discovery and help researchers tackle complex diseases like cancer.

“We have a whole-cell model that predicts many cellular properties simultaneously,” study author Zan Luthey-Schulten said in a press release. The model could provide “the results of hundreds of experiments” at the same time, she said.

Digitizing Life

Every cell is a bustling metropolis. Proteins orchestrate a vast range of cellular responses. RNA molecules carry instructions from genes to the cell’s protein-building factories. Fatty acids in a cell’s membrane rearrange themselves to admit nutrients or ward off invaders. Working in tandem, they all keep the cell humming along.

This complexity makes cells hard to simulate. But with large datasets charting the genome, gene expression, and proteins alongside sophisticated AI, scientists have built static virtual cells that paint a near-complete picture with atomic-level resolution.
More recent models can even predict molecular movements for a short period of time (often less than a second). But they can’t simulate “the mechanics and chemistry that take place over minutes to hours in processes such as gene expression and cell division,” wrote the University of Illinois team.

Other efforts use physics to predict how molecular changes affect behavior in bacteria, yeast, and human cells. These treat cells as a “well-stirred system”—that is, a cup of molecular soup lacking details about where each molecule sits and how molecules vary from cell to cell.

But location is key. As cells divide, some proteins gather around DNA to help copy it; others assemble near the membrane to recruit fatty molecules for its growth as the cell splits in two.

Simulating everything, everywhere, all at once during human cell division is beyond even the most powerful supercomputers. Minimal bacteria offer an alternative. These synthetic bacteria are stripped-down versions of the parasite Mycoplasma mycoides. The team focused on one of these known as JCVI-syn3A. Its 493-gene genome—roughly half the original—is the smallest set of DNA instructions that can boot up a living bacterium still able to grow and divide.

In 2022, the team developed a 3D model of the bacteria’s metabolism, genes, and growth. But the software, Lattice Microbes, struggled to track division.

Life in 4D

The new study added more data to the software. This included membrane changes and information about how ribosomes, the cell’s protein-making machines, assemble and move inside the cell’s gooey interior. They also added stochasticity, or unpredictability, to the model.

Changes to the location of chromosomes, which house DNA, are random as the cell divides, which makes them difficult to predict. But their position influences DNA replication and gene expression.

The first update nearly broke the software. It could map molecules involved in cell division, such as an enzyme critical for DNA copying.
But adding chromosome location predictions slowed the model to a crawl, even when running on advanced GPUs. Most of the cells died before their simulations were complete.

Several tweaks helped. One was to add more computational power. The team used a GPU dedicated to chromosomes, while all other details were processed on a separate chip. The model also ran faster by rendering some proteins as inert spheres that could be largely ignored.

The upgrades worked. Leaving the model running over Thanksgiving, the team returned to find it had completed the bacteria’s whole life cycle. “All of a sudden, it was just this huge leap,” study author Zane Thornburg told Nature.

The simulation matched many real-world experiments, such as how the cells elongate and bubble into dumbbell-like shapes during division. The model also accurately predicted the length of a cell cycle and captured a wide range of cellular activity.

“I can’t overstate how hard it is to simulate things that are moving—and doing it in 3D for an entire cell was…triumphant,” said Thornburg.

Every cell is like a snowflake: Although they contain similar molecules, the amounts and locations differ. The model easily handled this diversity. Repeated simulations of the bacteria, each starting with slightly different genetic, molecular, and metabolic makeup, resulted in a similar cycle length and movement of chromosomes during division.

The results came at a cost: Simulating the cell’s 105-minute cycle took up to six days on a supercomputer. But the virtual cell could lend insights into the molecular dance that causes all cells to grow and divide.

JCVI-syn3A doesn’t have the smallest genome. Its predecessor holds the record, but it also struggles to make normally shaped and functional daughter cells—suggesting some genes are essential for division. Simulation could help us understand why.

Other efforts using generative AI to build virtual cells are in the works.
But because this study’s model was grounded in strict physical and biochemical rules, results could be easily verified in the lab. AI-generated virtual cells, however, are commonly trained on gene expression data alone, which is a snapshot of a cell’s state and often fails to predict complex cell responses.

The two approaches could inspire each other by homing in on principles that make a virtual cell run like the real deal. For example, they could show that capturing each molecule in space and time, rather than as a soup, vastly improves the model.

Although the model can’t simulate a cell atom-by-atom, the team wrote, it could “illuminate the interwoven nature of the biology, chemistry, and physics that govern life for cells.”

entangled dot cloud
Show HN: Run the popular LLM-Course tutorials on HyperAI

LLM-Course is one of the most popular open learning resources for large language models, with over 75k stars on GitHub. It provides a structured curriculum that walks through the full LLM stack — from fundamentals to building production-ready applications.

HyperAI recently built a ready-to-run notebook that lets you explore parts of the course directly in the browser without setting up a local environment.

The original course is organized into three main tracks:

1. LLM Fundamentals – math, Python, neural networks, and NLP basics
2. The LLM Scientist – fine-tuning, quantization, evaluation, optimization
3. The LLM Engineer – RAG, agents, deployment, and real-world applications

Our notebook focuses on one of the most practical parts of the course: running LLMs and building applications around them. It walks through topics like:

* Different ways to run LLMs (API vs local inference)
* Discovering open-source models on Hugging Face
* Prompt engineering techniques (zero-shot, few-shot, chain-of-thought, ReAct)
* Generating structured outputs (JSON / templates) using libraries like Outlines

The goal was to make it easier for developers to experiment with LLM workflows quickly, especially if they don’t have powerful local hardware. Some steps in the notebook can run on free CPU resources, while others demonstrate workflows that typically require stronger hardware. The idea is to help developers quickly understand the setup and experimentation process before scaling further.

If you're exploring LLM tooling, prompt techniques, or deployment workflows, this might be a convenient way to try parts of the course material interactively. Happy to hear feedback or suggestions!
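Several of the prompt patterns the course covers need no special tooling; a few-shot prompt, for instance, is plain string assembly. A minimal sketch (the example reviews and labels are made up for illustration):

```python
# Few-shot prompting: labeled examples first, then the query with the
# label slot left open for the model to fill in.
EXAMPLES = [
    ("The battery died after an hour.", "negative"),
    ("Setup took thirty seconds. Love it.", "positive"),
]

def few_shot_prompt(query: str) -> str:
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in EXAMPLES
    )
    return f"{shots}\n\nReview: {query}\nSentiment:"

prompt = few_shot_prompt("Arrived broken, but support replaced it fast.")
print(prompt)
```

The same string would be passed unchanged to an API call or a local model; libraries like Outlines then constrain what comes back (e.g. to JSON), rather than changing how the prompt is built.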

entangled dot cloud
Crossed wires: The quantum threat to encryption is real, but you can sleep easy - https://entangled.cloud/116885355/crossed-wires-the-quantum-threat-to-encryption-is-real-but-you-can-sleep-easy??via=md
Crossed wires: The quantum threat to encryption is real, but you can sleep easy

Amid fears of a looming encryption crisis from quantum computing, experts are proactively implementing robust defences, ensuring our digital security remains intact.

entangled dot cloud
Show HN: QKD eavesdropper detector using Krylov complexity-open source Python - https://entangled.cloud/116817747/show-hn-qkd-eavesdropper-detector-using-krylov-complexity-open-source-python??via=md
Show HN: QKD eavesdropper detector using Krylov complexity-open source Python

I built a framework that detects eavesdroppers on quantum key distribution channels by reading the scrambling "fingerprint" embedded in the QBER error timeline, no new hardware required.

The core idea: every QKD channel has a unique Lanczos coefficient sequence derived from its Hamiltonian. An eavesdropper perturbs the Hamiltonian, which shifts the coefficients in a detectable and unforgeable way (Krylov distortion ΔK). Validated on 181,606 experimental QBER measurements from a deployed fiber-optic system, AUC = 0.981.

Based on a 12-paper Zenodo preprint series covering the full theoretical stack: Physical Bridge proof, one-way function property, universality across 8 Hamiltonian families, open-system extension via Lindblad, and Loschmidt echo validation.

Paper series: https://zenodo.org/records/18940281
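For readers unfamiliar with Lanczos coefficients: they are the recurrence constants produced by tridiagonalizing a Hermitian operator in its Krylov basis, and the post's claim is that an eavesdropper's perturbation measurably shifts this sequence. A toy numpy sketch on a small random Hermitian matrix (not the repo's code; the matrix and the perturbation are stand-ins for the channel Hamiltonian):

```python
import numpy as np

def lanczos_coefficients(H, v0, n):
    """Return the first n Lanczos (a, b) coefficients of Hermitian H
    starting from vector v0 (standard three-term recurrence)."""
    a, b = [], []
    v_prev = np.zeros_like(v0, dtype=float)
    v = v0 / np.linalg.norm(v0)
    beta = 0.0
    for _ in range(n):
        w = H @ v - beta * v_prev        # apply H, remove previous direction
        alpha = float(np.vdot(v, w))      # diagonal coefficient a_n
        w = w - alpha * v                 # orthogonalize against current v
        beta = float(np.linalg.norm(w))   # off-diagonal coefficient b_n
        a.append(alpha)
        b.append(beta)
        if beta < 1e-12:                  # Krylov space exhausted
            break
        v_prev, v = v, w / beta
    return np.array(a), np.array(b)

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
H = (M + M.T) / 2                         # toy Hermitian "channel Hamiltonian"
v0 = rng.normal(size=6)
a, b = lanczos_coefficients(H, v0, 6)

# A perturbed Hamiltonian (the "eavesdropper") shifts the sequence;
# a crude ΔK-style statistic is the total coefficient displacement.
H2 = H + 0.05 * np.diag(np.arange(6.0))
a2, b2 = lanczos_coefficients(H2, v0, 6)
distortion = np.abs(a - a2).sum() + np.abs(b - b2).sum()
```

The actual detector infers this fingerprint from QBER time series rather than from a known matrix, but the sensitivity of (a, b) to Hamiltonian perturbations is the mechanism being exploited.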

entangled dot cloud
This Week’s Awesome Tech Stories From Around the Web (Through March 14) - https://entangled.cloud/116778672/this-weeks-awesome-tech-stories-from-around-the-web-through-march-14??via=md
This Week’s Awesome Tech Stories From Around the Web (Through March 14)

Robotics
How Pokémon Go Is Giving Delivery Robots an Inch-Perfect View of the World
Will Douglas Heaven | MIT Technology Review ($)
“Niantic Spatial is using that vast and unparalleled trove of crowdsourced data—images of urban landmarks tagged with super-accurate location markers taken from the phones of hundreds of millions of Pokémon Go players around the world—to build a kind of world model, a buzzy new technology that grounds the smarts of LLMs in real-world environments.”

Future
A Roadmap for AI, if Anyone Will Listen
Connie Loizos | TechCrunch
“The newly published document, signed by hundreds of experts, former officials, and public figures, opens with the no-nonsense observation that humanity is at a fork in the road. One path, which the declaration calls ‘the race to replace,’ leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential.”

Computing
Startup Is Building the First Data Center to Use Human Brain Cells
Alex Wilkins | New Scientist ($)
“Data centers use huge amounts of energy and chips are in high demand—could brain cells be the answer? Australia-based start-up Cortical Labs has announced it is building two ‘biological’ data centers in Melbourne and Singapore, stacked with the same neuron-filled chips that it has demonstrated can play Pong or Doom.”

Robotics
Why Do Humanoid Robots Still Struggle With the Small Stuff?
John Pavlus | Quanta Magazine
“‘I asked each researcher: Can your flagship robot—Boston Dynamics’ Atlas or Agility’s Digit, two of the most credible and pedigreed humanoids on Earth—handle any set of stairs or doorway?’ ‘Not reliably,’ Hurst said. ‘I don’t think it’s totally solved,’ Kuindersma said. …It’s 2026. Why are humanoids still this…hard?”

Future
AI Isn’t Lightening Workloads. It’s Making Them More Intense.
Ray A. Smith | The Wall Street Journal ($)
“One of the great hopes for artificial intelligence—at least, among workers—is that it will ease workloads, freeing people up for more high-level, creative pursuits. So far, the opposite is happening, new data show. In fact, AI is increasing the speed, density and complexity of work rather than reducing it, according to an analysis of 164,000 workers’ digital work activity.”

Future
Karpathy’s March of Nines Shows Why 90% AI Reliability Isn’t Even Close to Enough
Nikhil Mungel | VentureBeat
“The ‘March of Nines’ frames a common production reality: You can reach the first 90% reliability with a strong demo, and each additional nine often requires comparable engineering effort. For enterprise teams, the distance between ‘usually works’ and ‘operates like dependable software’ determines adoption.”

Computing
The Race to Solve the Biggest Problem in Quantum Computing
Karmela Padavic-Callaghan | New Scientist ($)
“Quantum computers are already here, but they make far too many errors. This is arguably the biggest obstacle to the technology really becoming useful, but recent breakthroughs suggest a solution may be on the horizon. ‘It’s a very exciting time in error correction. For the first time, theory and practice are really making contact,’ says Robert Schoelkopf at Yale University.”

Robotics
Modular Yard Robot Mows Lawns, Plows Snow, Gathers Leaves and Trims Grass
Maryna Holovnova | New Atlas
“Homeowners usually end up with a garage filled with various equipment: a lawn mower, snow blower, shovels, and tools for clearing fallen leaves. Currently available on Kickstarter, the Yarbo M attempts to combine all those individual tools into one compact robotic platform that can automatically do all the yard work.”

Robotics
These Self-Configuring Modular Robots May One Day Rule the World
Tom Hawking | Gizmodo
“Each unit has multiple points to which another unit can attach itself: 18 of them, to be precise, which means that just two units can be combined in 435 ways. The number of possible configurations explodes as the number of units increases, and by the time you get to five units, there are hundreds of billions of possible combinations.”

Space
This SpaceX Veteran Says the Next Big Thing in Space Is Satellites That Return to Earth
Tim Fernholz | TechCrunch
“The reusable rocket has transformed the space industry in the last decade, and a new startup led by a SpaceX veteran wants to do the same for satellites. Brian Taylor, who helped build satellites for networks like SpaceX’s Starlink and Amazon’s Leo, founded Lux Aeterna in December 2024 to develop satellite structures with a built-in heat shield that will allow them to return to Earth with their payloads intact.”

Tech
Almost 40 New Unicorns Have Been Minted So Far This Year—Here They Are
Dominic-Madori Davis | TechCrunch
“Using data from Crunchbase and PitchBook, TechCrunch tracked down the VC-backed startups that became unicorns in 2026. While most are AI-related, a surprising number are focused on other industries like healthcare and even a few crypto companies.”

Space
SETI Thinks It Might Have Missed a Few Alien Calls. Here’s Why
Matthew Phelan | Gizmodo
“A new study published by researchers at the SETI Institute, short for the Search for Extraterrestrial Intelligence, has tested the possibility that ‘space weather’ could render strong premeditated alien broadcasts into the kind of fainter radio signals that SETI typically ignores.”

entangled dot cloud
Pi Day: From rockets to cancer research, here's how the number pi is embedded in our lives - https://entangled.cloud/116746587/pi-day-from-rockets-to-cancer-research-heres-how-the-number-pi-is-embedded-in-our-lives??via=md
Pi Day: From rockets to cancer research, here's how the number pi is embedded in our lives

Math nerds and dessert enthusiasts unite to celebrate Pi Day every March 14, the date that represents the first three digits of the mathematical constant pi.

entangled dot cloud
We Need to Look into Machine Cognition

I’ve been thinking a lot about how we actually form ideas.

When we speak, it often feels like the words simply arrive. One word leads to the next, then the next, until eventually we reach the end of a thought. It’s an iterative process — a step-by-step stream of language emerging in real time.

But that raises a deeper question. Is speaking itself the process of thinking? Or is it simply the surface output of a deeper process happening outside our awareness?

It often feels like the latter. Ideas seem to exist just outside of conscious reach. The next word you say, the next idea you express — they appear as if they’ve been assembled somewhere beneath the surface. Our conscious mind only sees the final output.

Sometimes this hidden processing shows up visually rather than verbally. We might “see” an image in our mind. But even then, the same question applies: How was that image constructed? Where did its details come from? What process generated it?

This is the real question of cognition: How are thoughts constructed before they reach consciousness?

Today’s AI systems — particularly transformer models — demonstrate something remarkable. With a relatively understandable architecture, they compress enormous amounts of information and generate outputs that often appear intelligent. They recognize patterns, synthesize knowledge, and sometimes produce surprisingly creative responses.

But the deeper challenge remains. Intelligence isn’t just about knowing more information than a human.
Many models already do that.

The real frontier is correct reasoning and knowledge synthesis — the ability to combine information in meaningful ways and produce reliable, creative insights. Humans do this imperfectly, but we still possess something powerful: a form of deep reasoning that emerges from processes we barely understand ourselves.

If we want AI systems to truly reach — or surpass — human-level intelligence, we may need to understand that hidden layer of cognition much better. Not just what thoughts are produced, but how thoughts are constructed. Because ultimately, that might be the key to building systems that don’t just mimic intelligence — but genuinely reason.

entangled dot cloud