Show HN: AnamDB – An AI-native, differentiable Datalog engine written in Rust

AnamDB is an AI-native, differentiable Datalog engine written in Rust. It combines logic programming with differentiable computation, offering a new approach to data processing and inference in AI and ML systems. Built in Rust, it targets high performance and safety, and its features are designed for integration with AI models, letting AI developers run complex logic queries and differentiable operations efficiently.

https://github.com/jam5991/anam

#rust #datalog #differentiable #ainative #database
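The post doesn't show AnamDB's syntax, but the core idea of a differentiable Datalog can be sketched in a few lines of plain Python. Everything below (the rule, the names, the soft-OR combinator) is hypothetical and illustrative, not AnamDB's actual API: boolean truth is relaxed to weights in [0, 1], so derived facts become smooth functions of the input fact weights.

```python
# Hedged sketch of a "differentiable Datalog": relax boolean semantics
# to soft truth values in [0, 1], so derived facts are smooth functions
# of the input fact weights.  (Illustrative only; not AnamDB's API.)

def soft_or(a, b):
    """Probabilistic OR: smooth, commutative, stays inside [0, 1]."""
    return a + b - a * b

def derive_paths(edges, n_iters=10):
    """Soft transitive closure for
       path(x, y) :- edge(x, y).
       path(x, z) :- edge(x, y), path(y, z).
    `edges` maps (x, y) -> truth weight in [0, 1]."""
    path = dict(edges)                       # base case: edges are paths
    for _ in range(n_iters):                 # iterate towards a fixed point
        new_path = dict(edges)
        for (x, y), w_xy in edges.items():
            for (y2, z), w_yz in path.items():
                if y2 != y:
                    continue
                body = w_xy * w_yz           # conjunction -> product
                new_path[(x, z)] = soft_or(new_path.get((x, z), 0.0), body)
        path = new_path
    return path

edges = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.1}
paths = derive_paths(edges)
# path(a, c) is supported directly (0.1) and via b (0.9 * 0.8 = 0.72):
# soft_or(0.1, 0.72) = 0.748
print(round(paths[("a", "c")], 3))           # -> 0.748
```

Because both combinators (product, probabilistic OR) are differentiable, the derived truth values admit gradients with respect to the input weights, which is what lets such an engine sit inside a learning loop.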

NVIDIA Warp is a Python-based, GPU-native framework that uses element-wise control flow and SIMT parallelism to integrate high-performance, differentiable simulation kernels directly into ML workflows. The post covers a 2D Navier-Stokes example built on an FFT-based Poisson solver and SSP-RK3 time integration, support for automatic differentiation (forward and reverse mode), interoperability with PyTorch/JAX, industrial adoption (Autodesk, DeepMind, and others), and reported speedups of several hundred times.

https://developer.nvidia.com/blog/build-accelerated-differentiable-computational-physics-code-for-ai-with-nvidia-warp/

#nvidia #warp #simulation #differentiable #gpu

Build Accelerated, Differentiable Computational Physics Code for AI with NVIDIA Warp

Computer-aided engineering (CAE) is shifting from human-driven workflows toward AI-driven ones, including physics foundation models that generalize across…

NVIDIA Technical Blog
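Warp's GPU kernels don't fit in a short excerpt, but the FFT-based Poisson solve the post mentions can be sketched in plain NumPy. This is a toy sanity check of the spectral method, not the blog's Warp code:

```python
import numpy as np

# Solve the Poisson equation  lap(u) = f  on a periodic [0, 2*pi)^2 grid
# via the spectral identity  u_hat(k) = -f_hat(k) / |k|^2, with the
# k = 0 mode pinned to zero (u is only determined up to a constant).

def poisson_fft(f):
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                            # avoid division by zero
    u_hat = -np.fft.fft2(f) / k2
    u_hat[0, 0] = 0.0                         # pin the undetermined mean
    return np.real(np.fft.ifft2(u_hat))

# Check against a manufactured solution u = sin(x)cos(2y), so that
# f = lap(u) = -(1 + 4) u = -5 u.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(X) * np.cos(2 * Y)
u = poisson_fft(-5.0 * u_exact)
print(np.max(np.abs(u - u_exact)))            # near machine precision
```

In the blog's pipeline this solve is one building block; the velocity field is then advanced with SSP-RK3 time stepping inside Warp kernels.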

🧠 New paper by Deistler et al: #JAXLEY: differentiable #simulation for large-scale training of detailed #biophysical #models of #NeuralDynamics.

They present a #differentiable, #GPU-accelerated #simulator that trains #morphologically detailed biophysical #neuron models with #GradientDescent. JAXLEY fits intracellular #voltage and #calcium data, scales to thousands of compartments, trains biophysical #RNNs on #WorkingMemory tasks, and even solves #MNIST.

🌍 https://doi.org/10.1038/s41592-025-02895-w

#Neuroscience #CompNeuro
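The headline idea, fitting simulator parameters by gradient descent through the simulation itself, can be illustrated with a deliberately tiny toy: a single leaky compartment in plain NumPy with a hand-written backward pass. This is my own illustration of the principle, not JAXLEY's API or model:

```python
import numpy as np

# Toy stand-in for differentiable neuron simulation: the "neuron" is a
# single leaky compartment dV/dt = -g (V - E), integrated with explicit
# Euler steps, and dLoss/dg is backpropagated by hand through the
# unrolled simulation.  (JAXLEY's models and API are far richer.)

DT, E, STEPS = 0.1, -1.0, 50

def simulate(g, v0=0.0):
    v = np.empty(STEPS + 1)
    v[0] = v0
    for t in range(STEPS):
        v[t + 1] = v[t] - DT * g * (v[t] - E)   # explicit Euler step
    return v

def loss_and_grad(g, target):
    v = simulate(g)
    loss = np.sum((v - target) ** 2)
    grad, v_bar = 0.0, 0.0                      # reverse-mode sweep
    for t in range(STEPS, 0, -1):
        v_bar += 2.0 * (v[t] - target[t])       # local dLoss/dv[t]
        grad += v_bar * (-DT) * (v[t - 1] - E)  # dv[t]/dg contribution
        v_bar *= 1.0 - DT * g                   # chain back to v[t-1]
    return loss, grad

target = simulate(0.5)          # "recorded" trace from the true g = 0.5
g = 0.1                         # poor initial guess
for _ in range(500):
    _, grad = loss_and_grad(g, target)
    g -= 2e-3 * grad            # plain gradient descent
print(round(g, 3))              # recovers the true conductance, 0.5
```

JAXLEY does the analogue of this at scale: JAX's autodiff replaces the hand-written backward sweep, and the compartment count goes from one to thousands.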

🤔 Oh look, Alice is having mathematical tea with the Cheshire Cat in a "differentiable" wonderland, because the real world just isn't complex enough! 🚀 Meanwhile, arXiv, the Wikipedia of nerds, wants a #DevOps wizard to keep the scientific cat videos running... because, clearly, science can't disseminate itself! 🧙‍♂️✨
https://arxiv.org/abs/2404.17625 #AliceInWonderland #Differentiable #Math #ScienceNerds #CheshireCat #HackerNews #ngated
Alice's Adventures in a Differentiable Wonderland -- Volume I, A Tour of the Land

Neural networks surround us, in the form of large language models, speech transcription systems, molecular discovery algorithms, robotics, and much more. Stripped of anything else, neural networks are compositions of differentiable primitives, and studying them means learning how to program and how to interact with these models, a particular example of what is called differentiable programming. This primer is an introduction to this fascinating field imagined for someone, like Alice, who has just ventured into this strange differentiable wonderland. I overview the basics of optimizing a function via automatic differentiation, and a selection of the most common designs for handling sequences, graphs, texts, and audio. The focus is on an intuitive, self-contained introduction to the most important design techniques, including convolutional, attentional, and recurrent blocks, hoping to bridge the gap between theory and code (PyTorch and JAX) and leaving the reader capable of understanding some of the most advanced models out there, such as large language models (LLMs) and multimodal architectures.

arXiv.org
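The automatic differentiation the primer starts from can be demonstrated in a dozen lines with forward-mode "dual numbers": carry a (value, derivative) pair through every primitive and the chain rule composes for free. A minimal sketch of my own; production systems like PyTorch and JAX mostly rely on reverse mode, which is cheaper for many-parameter models:

```python
import math

# Forward-mode autodiff via dual numbers: each Dual carries a value and
# the derivative of that value with respect to the chosen input.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def sin(x):
    # Chain rule for a primitive: sin(u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def grad(f):
    """Derivative of a scalar function of one variable."""
    return lambda x: f(Dual(x, 1.0)).dot

f = lambda x: sin(x * x)            # f(x) = sin(x^2)
df = grad(f)
print(df(1.5))                      # matches f'(x) = 2x * cos(x^2)
```

Overloading every primitive this way is exactly the "composition of differentiable primitives" view of neural networks that the abstract describes.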
Differentiable Logic CA: from Game of Life to Pattern Generation
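One common way to make a cellular automaton differentiable, shown here only as a generic sketch and not as that paper's method, is to keep cell states continuous in [0, 1] and replace Life's hard neighbour-count test with a smooth bump, so the whole update rule admits gradients:

```python
import numpy as np

# "Soft" Game of Life: states live in [0, 1]; birth (exactly 3 live
# neighbours) and survival (2 or 3) are encoded with smooth bumps
# around the target counts instead of hard thresholds.

def soft_life_step(grid, sharpness=4.0):
    # Sum of the 8 neighbours via shifted copies (periodic boundary).
    nb = sum(np.roll(np.roll(grid, dx, 0), dy, 1)
             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0))
    # Smooth bump: ~1 when x is near c, decays quickly away from c.
    bump = lambda x, c: 1.0 / (1.0 + ((x - c) / 0.5) ** 2) ** sharpness
    birth = bump(nb, 3.0) * (1.0 - grid)            # dead cell, 3 nbs
    survive = grid * (bump(nb, 2.0) + bump(nb, 3.0))  # live cell, 2-3 nbs
    return np.clip(birth + survive, 0.0, 1.0)

# On a hard 0/1 grid the soft rule reproduces Life: a "blinker"
# oscillates with period 2.
grid0 = np.zeros((5, 5))
grid0[2, 1:4] = 1.0                                 # horizontal bar
grid2 = soft_life_step(soft_life_step(grid0))
print(np.abs(np.round(grid2) - grid0).max())        # -> 0.0
```

With sharp bumps the dynamics stay close to classic Life; softening them is what lets gradient descent search the rule space for pattern generation.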

@schmitz (left) explaining his recent work on making #dftk algorithmically #differentiable at the #cecam workshop on #dft and #ai (https://www.cecam.org/workshop-details/1281). With his work, derivatives of key density-functional theory quantities, such as forces or band structures, with respect to model parameters can now be computed easily.
CECAM - Density Functional Theory and Artificial Intelligence learning from each other

Deep Learning with JAX

Accelerate deep learning and other number-intensive tasks with JAX, Google's awesome high-performance numerical computing library. The JAX numerical computing library tackles the core performance challenges at the heart of deep learning and other scientific computing tasks. By combining Google's Accelerated Linear Algebra platform (XLA) with a hyper-optimized version of NumPy and a variety of other high-performance features, JAX delivers a huge performance boost in low-level computations and transformations.

In Deep Learning with JAX you will learn how to:

- Use JAX for numerical calculations
- Build differentiable models with JAX primitives
- Run distributed and parallelized computations with JAX
- Use high-level neural network libraries such as Flax
- Leverage libraries and modules from the JAX ecosystem

Deep Learning with JAX is a hands-on guide to using JAX for deep learning and other mathematically intensive applications. Google Developer Expert Grigory Sapunov steadily builds your understanding of JAX's concepts. The engaging examples introduce the fundamental concepts on which JAX relies and then show you how to apply them to real-world tasks. You'll learn how to use JAX's ecosystem of high-level libraries and modules, and also how to combine TensorFlow and PyTorch with JAX for data loading and deployment.

Manning Publications

The #Zhukovsky #Aerofoil (sometimes transliterated as #Joukowsky from #Russian) is a 2D model of #streamlined #Airflow past a #wing. It uses a #ComplexVariable and is an #AnalyticFunction (i.e. #Differentiable everywhere, save at isolated #Singularities). Take a circle in the #ComplexPlane that is not quite centred at the #origin but passes through the #coordinate (1, 0), i.e. z = 1 + 0i.

#MyWork #CCBYSA #AppliedMathematics #WxMaxima #FreeSoftware #Aeronautics #Aerodynamics #LaminarFlow
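A quick numerical companion to the construction above, in plain NumPy rather than WxMaxima; the circle centre here is my own illustrative choice. The off-centre circle through z = 1 is pushed through the Joukowsky map w = z + 1/z:

```python
import numpy as np

# A circle in the complex plane, centred slightly off the origin but
# passing through z = 1, mapped by the Joukowsky transform w = z + 1/z.
centre = -0.1 + 0.1j                    # illustrative choice
radius = abs(1.0 - centre)              # forces the circle through z = 1
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
z = centre + radius * np.exp(1j * theta)
w = z + 1.0 / z                         # Joukowsky transform

# Because dw/dz = 1 - 1/z**2 vanishes at z = 1, the image has a sharp
# cusp there: the aerofoil's trailing edge, at w = 1 + 1/1 = 2.
print(np.abs(w - 2.0).min())            # curve passes (almost) through 2
print(np.ptp(w.real) > np.ptp(w.imag))  # elongated, wing-like section
```

Shifting the centre off the real axis adds camber, shifting it along the real axis thickens the profile; both are visible if you plot `w.real` against `w.imag`.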

Whenever I walk to or from home, I have to go up or down an inclined street; I noticed that the asphalt surface has different curvatures depending on how near it is to a bend, and I try to find a less steep incline while walking.

This inspired the few questions below. Any simple explanations, and related links, are welcome.

Given a #differentiable surface within R^3, and two distinct points in it, there are infinitely many differentiable paths from one point to another, remaining on the surface. At each point of the #path, one can find the path's local #curvature. Then:

- Find a path that minimizes the supremum of the curvature. In other words, find the "flattest" path.

- Find a path that minimizes the variation of the curvature. In other words, find a path that "most resembles" a circle arc.

Are these tasks always possible under the given conditions? Are stronger conditions needed? Are there cases with an #analytic solution, or can they be handled only by numerical approximation?

#Analysis #DifferentialGeometry #Calculus #DifferentialEquations #NumericalMethods
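For anyone attacking the questions numerically, the basic ingredient is the curvature along a sampled path. A minimal planar sketch in NumPy (for a curve embedded in R^3 one would use |r' x r''| / |r'|^3 instead); given this, the supremum or variation of curvature can be handed to any black-box optimiser over candidate paths:

```python
import numpy as np

# Curvature of a sampled planar curve (x(t), y(t)) from finite
# differences:  kappa = |x' y'' - y' x''| / (x'^2 + y'^2)^(3/2).

def curvature(x, y, h):
    xp, yp = np.gradient(x, h), np.gradient(y, h)       # first derivatives
    xpp, ypp = np.gradient(xp, h), np.gradient(yp, h)   # second derivatives
    return np.abs(xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5

# Sanity check: a circle of radius 2 has constant curvature 1/2.
t = np.linspace(0.0, 2.0 * np.pi, 2000)
k = curvature(2.0 * np.cos(t), 2.0 * np.sin(t), t[1] - t[0])
print(k[1000])                          # interior point, close to 0.5
```

Note that `np.gradient` uses one-sided differences at the endpoints, so the estimate is least accurate there; for the sup-of-curvature objective, take the max over interior samples.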