My current #DotNetMAUI and #NeuralNetworks project: Design and train neural networks in #GoogleColab and transfer them to a cross-platform app using #ONNX. Follow my progress here:

https://philotalk.com/mobile-neural-network

Creating a Neural Network Based Mobile App with a Neural Network Designed and Trained in Google Colab | Stephen Moreton-Howell

Having done a master's degree in Artificial Intelligence and then switched my attention to writing cross-platform mobile apps, I want to bring my focus back to AI while continuing to work with .NET MAUI. So I'm doing some experimental work: creating and training neural networks in Python/Keras on Google Colab, then exporting them as .onnx files so they can be deployed in .NET MAUI apps. I'm seeing what's possible before deciding which interesting NN-based apps I might want to make.
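The post doesn't include code, but one standard step in this Keras-to-ONNX workflow is a parity check before wiring the exported file into the app: run the same input through the exported graph and the original model and compare. The numpy sketch below (a toy layer with made-up weights, not from the author's project) hand-computes the Gemm + Relu ops an ONNX runtime executes for a single dense layer; the tf2onnx call in the comment is the usual export route on the Colab side.

```python
import numpy as np

# In Colab the export itself is typically one call (assuming tf2onnx is installed):
#   import tf2onnx
#   tf2onnx.convert.from_keras(model, output_path="model.onnx")
# Before shipping model.onnx to the .NET MAUI app, it's worth checking that the
# exported graph reproduces the Keras output on a few test inputs.

def dense_relu(x, W, b):
    """What ONNX Gemm followed by Relu computes for one dense layer."""
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # hypothetical layer weights
b = rng.normal(size=3)
x = rng.normal(size=(1, 4))
y = dense_relu(x, W, b)       # compare against keras_model.predict(x)
```

In the app itself, the equivalent check is feeding the same test vector through onnxruntime and asserting the outputs match to within float tolerance.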

Stephen Moreton-Howell

Kimon Fountoulakis (@kfountou)

A talk on the mathematics of reasoning in LLMs was shared, covering how neural networks can learn to perform addition, multiplication, and algorithmic instructions exactly.

https://x.com/kfountou/status/2046822921381786071

#llm #reasoning #neuralnetworks #machinelearning

Kimon Fountoulakis (@kfountou) on X

Wow, what an honour @CsabaSzepesvari, thanks! "Math of Reasoning in LLMs, Session 11: Learning to Add, Multiply, and Execute Algorithmic Instructions Exactly with Neural Networks" https://t.co/lkQ0PrXRLK I watched all of it, and I really enjoyed it.

X (formerly Twitter)
🔥 Ah, yes, the classic "let's throw neural networks and #types in a blender and see what mess comes out" approach. 🤖✨ The article rambles on about separating training and typechecking like it's some groundbreaking revelation, when really, it's just playing code Jenga with fancy names like #Idris, #Lean, and #Agda. 🧩🔍
https://www.brunogavranovic.com/posts/2026-04-20-types-and-neural-networks.html #neuralnetworks #codeJenga #HackerNews #ngated
Types and Neural Networks

🧠✨ Presenting Ternary Bonsai, because who needs more than 1.58 bits of intelligence? 🤖 After all, why aim for brains when you can just compress the living daylights out of your neural networks? 📉 Don't worry, the performance gain is totally "meaningful"—trust us!
https://prismml.com/news/ternary-bonsai #TernaryBonsai #AICompression #NeuralNetworks #PerformanceGain #TechHumor #HackerNews #ngated
PrismML — Introducing Ternary Bonsai: Top Intelligence at 1.58 Bits
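The announcement gives no implementation details, so as a generic illustration (not PrismML's actual method): "1.58 bits" refers to ternary weights, since each value in {-1, 0, +1} carries log2(3) ≈ 1.58 bits of information. A minimal absmean ternarization sketch in the style of BitNet b1.58:

```python
import numpy as np

def ternarize(w: np.ndarray):
    """Quantize weights to {-1, 0, +1} with an absmean scale (BitNet b1.58 style)."""
    scale = np.abs(w).mean() + 1e-8          # per-tensor scaling factor
    q = np.clip(np.round(w / scale), -1, 1)  # snap each weight to the nearest ternary value
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = ternarize(w)

# Each ternary weight carries log2(3) ≈ 1.58 bits, hence the headline number.
bits = np.log2(3)
```

At inference time the dense product `x @ (q * scale)` approximates `x @ w`, and because `q` holds only -1/0/+1, the multiplies reduce to additions and sign flips.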

#ITByte: #Liquid #NeuralNetworks (#LNNs) are a type of deep learning architecture that use dynamic connections between neurons to process time-series data.

“Liquid” neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability.

https://knowledgezone.co.in/trends/browser?topic=Liquid-Neural-Network
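As a rough illustration of the "dynamic connections" idea, here is a hand-simplified Euler sketch of a liquid time-constant (LTC) neuron in the spirit of Hasani et al., not any actual LNN library: the gate `f` depends on the input, so the effective time constant tau/(1 + tau*f) shifts as the time series changes, which is the on-the-fly adaptability the post describes. All sizes and parameter values below are made up.

```python
import numpy as np

def ltc_step(x, u, tau, W, A, dt=0.01):
    """One Euler step of a liquid time-constant neuron layer.

    dx/dt = -(1/tau + f) * x + f * A, where f is a bounded gate of the input u.
    Because f varies with u, the effective time constant tau / (1 + tau * f)
    changes dynamically with the incoming time series.
    """
    f = 1.0 / (1.0 + np.exp(-(W @ u)))   # input-dependent gate in (0, 1)
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

rng = np.random.default_rng(1)
x = np.zeros(3)                          # hidden state
W = rng.normal(size=(3, 2))              # input weights (hypothetical sizes)
A = np.ones(3)                           # bias/"reversal" parameter per neuron

# Drive the layer with a toy two-channel time series.
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = ltc_step(x, u, tau=1.0, W=W, A=A)
```

The bounded gate keeps the state stable regardless of input magnitude, which is one of the properties the LTC papers emphasize.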

Title: P2: hackathon final conf, generative architectures [2024-05-31 Fri]
noise step by step and learns to remove this noise. After
training, it can generate images from noise.
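The description in this post (add noise step by step, learn to remove it, then generate images from pure noise) matches the standard DDPM forward process, which can be sketched in a few lines; the schedule constants below are illustrative defaults, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear variance schedule over T steps (DDPM-style illustrative values).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)   # how much of the original signal survives at step t

def add_noise(x0, t):
    """Sample x_t ~ q(x_t | x_0): a progressively noised version of x0."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps   # the network is trained to predict eps from (xt, t)

x0 = rng.normal(size=(8, 8))      # toy "image"
xt, eps = add_noise(x0, t=T - 1)  # at the last step xt is almost pure noise
```

Generation runs the learned denoiser in reverse: start from pure noise and repeatedly subtract the predicted noise, step by step back to t = 0.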

Now I am going to make a demo about my experience in the
hackathon, which was huge. I will use Emacs ᕙ( •̀ _ •́ )ᕗ
Org mode, TigerVNC, and some conference platform that
allows sharing a screen and face at the same time. It is a
task from my employer.
😶 #dailyreport #AI #neuralnetworks #nn

Title: P1: hackathon final conf, generative architectures [2024-05-31 Fri]
2) It is impossible to fully control a generative NN due to
its stochastic nature. Clean data is required; this may be
synthetic data.

3) DPO (Direct Preference Optimization): training on
good/bad pairs allows speeding up data labeling. https://arxiv.org/pdf/2305.18290
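The objective from the linked paper (Rafailov et al., arXiv:2305.18290) is compact enough to sketch. A minimal numpy version of the per-pair DPO loss; the variable names are mine, not from the post:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one good/bad preference pair.

    logp_* are the policy's log-probabilities of each answer; ref_* are the
    frozen reference model's. The loss pushes the policy to widen the
    chosen-over-rejected margin relative to the reference, scaled by beta.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))   # -log sigmoid(margin)
```

When the policy matches the reference exactly, the margin is 0 and the loss sits at log 2; favoring the chosen answer more than the reference does drives it below that.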

Today I have been reading about generative and diffusion
architectures 🤪. In short: a diffusion NN is a network that adds #dailyreport #AI #neuralnetworks #nn

Title: P0: hackathon final conf, generative architectures [2024-05-31 Fri]
Yesterday I was at the final conference ꙭ of the hackathon
in which I participated recently. There was a professor
from AIRI, the Artificial Intelligence Research
Institute. 🤘

He said 👄 that:

1) All generative NN architectures generalize into 2
types:
- Transformer architecture - sequence generator ✯
- Diffusion architecture - iterative refinement ✵ #dailyreport #AI #neuralnetworks #nn

The Hundred-Page Language Models Book by Andriy Burkov is on sale on Leanpub! Its suggested price is $50.00; get it for $20.00 with this coupon: https://leanpub.com/theLMbook/c/LeanPublishingDaily20260416 #Ai #Gpt #NeuralNetworks #DeepLearning #DataScience #ComputerScience
The Hundred-Page Language Models Book

Andriy Burkov's third book is a hands-on guide that covers everything from machine learning basics to advanced transformer architectures and large language models. It explains AI fundamentals, text representation, recurrent neural networks, and transformer blocks. This book is ideal for ML practitioners and engineers focused on text-based applications.