Title: P2: hackathon final conf, generative architectures [2024-05-31 Fri]
noise step by step and learns to remove this noise. After
training it can generate images from noise.
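
For the curious, a numpy-only toy of that add-noise/remove-noise idea; the schedule, step count, and names are illustrative assumptions, not from the post:

```python
# Toy sketch of diffusion training data: the forward process corrupts a
# sample with noise; a model would be trained to predict that noise so it
# can be removed step by step at sampling time.
import numpy as np

rng = np.random.default_rng(0)
T = 100                                  # number of noise steps (assumed)
betas = np.linspace(1e-4, 0.02, T)       # a common linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative signal fraction

def add_noise(x0, t):
    """Forward process: x_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

x0 = rng.standard_normal(8)              # stand-in for an image
xt, eps = add_noise(x0, t=50)
# Training would minimize ||model(xt, t) - eps||^2; a perfect noise
# prediction recovers x0 exactly, which is why predicting eps suffices:
x0_rec = (xt - np.sqrt(1.0 - alpha_bar[50]) * eps) / np.sqrt(alpha_bar[50])
print(np.allclose(x0_rec, x0))           # True
```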

Now I am going to make a demo about my experience in the
hackathon, which was huge. I will use Emacs ᕙ( •̀ _ •́ )ᕗ
Org mode, TigerVNC, and some conference platform that
allows sharing my screen and face at the same time. It is
a task from my employer.
😶 #dailyreport #AI #neuralnetworks #nn

Title: P1: hackathon final conf, generative architectures [2024-05-31 Fri]
2) It is impossible to control a GNN due to its stochastic
nature. Clean data is required; this may be synthetic
data.

3) DPO (Direct Preference Optimization) - training on
good/bad pairs allows speeding up data labeling. https://arxiv.org/pdf/2305.18290
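
A minimal sketch of the pairwise loss from the linked paper; the log-probability values and β below are illustrative stand-ins, not real model outputs:

```python
# DPO loss on one (chosen, rejected) pair, computed from per-sequence
# log-probabilities under the trained policy and a frozen reference model.
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """-log sigmoid(beta * margin), i.e. softplus(-beta * margin)."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return math.log1p(math.exp(-beta * margin))

# The "good" (chosen) answer already gained probability vs. the
# reference -> small loss:
print(dpo_loss(logp_w=-12.0, logp_l=-15.0,
               ref_logp_w=-14.0, ref_logp_l=-14.0))   # ~0.55
```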

Today I have been reading about generative and diffusion
architectures 🤪. In short: a diffusion NN is a network that adds #dailyreport #AI #neuralnetworks #nn

Title: P0: hackathon final conf, generative architectures [2024-05-31 Fri]
Yesterday I attended the final conference ꙭ of the
hackathon I participated in recently. There was
a professor from AIRI (Artificial Intelligence Research
Institute). 🤘

He said 👄 that:

1) All generative NN architectures can be generalized into
2 types (a toy contrast of the two loops is sketched below):
- Transformer architecture - sequence generator ✯
- Diffusion architecture - iterative refinement ✵ #dailyreport #AI #neuralnetworks #nn
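
The promised toy contrast; the stub "models" and all names here are mine, purely illustrative:

```python
# A transformer generates a sequence one token at a time; a diffusion
# model refines one whole sample over many denoising steps.
import numpy as np

rng = np.random.default_rng(0)

def next_token(seq):            # stand-in for a transformer forward pass
    return int(rng.integers(0, 100))

def denoise(x, t):              # toy denoiser: pull the sample toward
    return 0.9 * x              # zero (stand-in for the learned mean)

# 1) Transformer: sequence generator (autoregressive loop)
seq = [1]                       # start token
for _ in range(5):
    seq.append(next_token(seq))

# 2) Diffusion: iterative refinement (start from pure noise)
x = rng.standard_normal(8)
for t in reversed(range(50)):
    x = denoise(x, t)

print(seq, np.round(x, 3))
```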

The Hundred-Page Language Models Book by Andriy Burkov is on sale on Leanpub! Its suggested price is $50.00; get it for $20.00 with this coupon: https://leanpub.com/theLMbook/c/LeanPublishingDaily20260416 #Ai #Gpt #NeuralNetworks #DeepLearning #DataScience #ComputerScience
The Hundred-Page Language Models Book

Andriy Burkov's third book is a hands-on guide that covers everything from machine learning basics to advanced transformer architectures and large language models. It explains AI fundamentals, text representation, recurrent neural networks, and transformer blocks. This book is ideal for ML practitioners and engineers focused on text-based applications.

When synaptic #plasticity depends on more than spike timing alone, the #ClopathRule offers a biologically plausible model incorporating postsynaptic voltage dynamics. This voltage-based #STDP model captures features of #synaptic change such as frequency dependence and homeostatic stabilization, making it useful for simulating #learning and #memory in #NeuralNetworks. Here's a brief introduction to that rule and its applications in #CompNeuro:

🌍 https://www.fabriziomusacchio.com/blog/2026-04-14-clopath_rule/

#Neuroscience
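
For readers who want the shape of the rule, a minimal Euler-step sketch of the voltage-based update described above; all constants and variable names are illustrative, not the published parameter set:

```python
# One step of a voltage-based STDP update: LTD is gated by a presynaptic
# spike and a low-pass-filtered postsynaptic voltage; LTP by the
# instantaneous voltage plus a second filtered trace.
dt = 0.1                                   # time step (ms)
theta_minus, theta_plus = -70.0, -45.0     # voltage thresholds (mV)
A_ltd, A_ltp = 1e-4, 1e-4                  # learning-rate amplitudes

def clopath_dw(pre_spike, u, u_bar_minus, u_bar_plus, x_bar):
    """Weight change for one step: dw = (LTD + LTP) * dt."""
    ltd = -A_ltd * pre_spike * max(u_bar_minus - theta_minus, 0.0)
    ltp = A_ltp * x_bar * max(u - theta_plus, 0.0) \
                        * max(u_bar_plus - theta_minus, 0.0)
    return (ltd + ltp) * dt

# The filtered traces evolve alongside the membrane voltage u(t), e.g.
#   u_bar_minus += dt / tau_minus * (u - u_bar_minus)
#   x_bar       += dt / tau_x * (pre_spike - x_bar)
print(clopath_dw(pre_spike=1.0, u=-40.0,
                 u_bar_minus=-60.0, u_bar_plus=-55.0, x_bar=0.3))
```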

1958 - Frank Rosenblatt creates the perceptron at the Cornell Aeronautical Laboratory, the first artificial neural network that could "learn" to classify patterns.
The algorithm demonstrated that machines could "learn" from examples, not just follow rigid rules.

1969 - Marvin Minsky and Seymour Papert publish their book "Perceptrons: An Introduction to Computational Geometry", which included an analysis of the perceptron's limitations.

#ai #storia #NeuralNetworks

https://mbrenndoerfer.com/writing/history-perceptron-neural-network-foundation

The Perceptron - Foundation of Modern Neural Networks - Interactive | Michael Brenndoerfer

In 1958, Frank Rosenblatt created the perceptron at Cornell Aeronautical Laboratory, the first artificial neural network that could actually learn to classify patterns. This groundbreaking algorithm proved that machines could learn from examples, not just follow rigid rules. It established the foundation for modern deep learning and every neural network we use today.
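
That learning-from-examples rule fits in a few lines; here is a sketch on a made-up, linearly separable toy set:

```python
# Rosenblatt's perceptron: on each mistake, nudge the weights toward
# (or away from) the misclassified example.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])              # AND-like labels in {-1, +1}
w, b, lr = np.zeros(2), 0.0, 1.0

for _ in range(10):                        # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:         # misclassified (or on boundary)
            w += lr * yi * xi              # the perceptron update
            b += lr * yi

print([int(np.sign(xi @ w + b)) for xi in X])   # [-1, -1, -1, 1]
```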

It is hard (if not nonsensical) to assign "moral failing" to the core algorithms themselves, as they are really just parameters fitted to data, similar to a linear regression model fitting the parameters of a line (intercept and slope) to a cloud of data.
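
The analogy, made concrete with a synthetic toy fit (data and numbers here are made up):

```python
# Fitting a line to a cloud of points is pure parameter estimation;
# nothing in the procedure carries intent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)                     # a cloud of data
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, 50)     # true slope 2, intercept 1

slope, intercept = np.polyfit(x, y, deg=1)     # closed-form least squares
print(round(slope, 2), round(intercept, 2))    # close to 2.0 and 1.0
```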

The statistical processing of data is of course full of pitfalls, but they are always linked to the context of use, not intrinsic.

There is, though, one important yet not frequently discussed flaw of all #neuralnetworks (as a subclass)

2/