Release v9.0.0 · jenetics/jenetics

Improvements: Update to Java 25 and optimize code for the new Java version. #917: Use ScopedValue for the RandomRegistry class. #940: Remove deprecated API. #955: Make IntStream counting more robust.

GitHub
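The ScopedValue change (#917) can be illustrated with the pattern below. This is a minimal, self-contained sketch using Java 25's `ScopedValue` directly, not the actual `RandomRegistry` implementation; the `RANDOM` field, `DEFAULT` generator, and `random()` helper are hypothetical stand-ins.

```java
import java.util.Random;
import java.util.random.RandomGenerator;

public class ScopedRandom {
    // Hypothetical stand-in for a random registry: a ScopedValue carrying the
    // generator for the current scope, with a global default as fallback.
    static final ScopedValue<RandomGenerator> RANDOM = ScopedValue.newInstance();
    static final RandomGenerator DEFAULT = RandomGenerator.of("L64X128MixRandom");

    // Returns the generator bound to the current scope, or the default one.
    static RandomGenerator random() {
        return RANDOM.orElse(DEFAULT);
    }

    public static void main(String[] args) {
        // Outside any scope, the default generator is used.
        System.out.println("default: " + random().nextInt(100));

        // Inside the scope, every call to random(), at any stack depth,
        // sees the bound generator; the binding is dropped when run() returns.
        ScopedValue.where(RANDOM, new Random(42)).run(
            () -> System.out.println("scoped: " + random().nextInt(100)));
    }
}
```

Compared with a thread-local, the binding is immutable and strictly scoped to the `run()` call, which is what makes it attractive for a process-wide registry.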
Release v8.3.0 · jenetics/jenetics

Improvements: #933: Deprecate RandomAdapter for removal. #935: Compile and test Jenetics with Java 24/25. #938: Convert Range classes into records. #943: Remove `org.apache.commons:commons-math3` te...

GitHub
Release v8.2.0 · jenetics/jenetics

Improvements: #889: Allow adding annotations to Cfg elements for Grammatical Evolution. `final var cfg2 = Cfg.<String>builder().R("expr", rule -> rule.N("num", "annotation 1") ...`

GitHub
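The idea behind #889 can be sketched without the Jenetics API: a grammar symbol simply carries an optional annotation object. The `Symbol` and `Rule` records below are hypothetical stand-in types, not the actual `Cfg` classes, and mirror the `"num"` / `"annotation 1"` pair from the truncated snippet above.

```java
import java.util.List;

public class AnnotatedCfg {
    // Hypothetical stand-in types: a grammar symbol that can carry an
    // arbitrary annotation, and a rule mapping a nonterminal to alternatives.
    record Symbol(String name, Object annotation) {}
    record Rule(String lhs, List<List<Symbol>> alternatives) {}

    public static void main(String[] args) {
        // An "expr" rule whose "num" nonterminal carries "annotation 1",
        // plus an unannotated recursive alternative: expr -> num | expr + expr.
        Rule expr = new Rule("expr", List.of(
            List.of(new Symbol("num", "annotation 1")),
            List.of(new Symbol("expr", null),
                    new Symbol("+", null),
                    new Symbol("expr", null))));

        System.out.println(expr.alternatives().get(0).get(0).annotation());
    }
}
```

Attaching metadata to grammar elements this way lets later phases (e.g. code generation from a derived sentence) consume the annotations without changing the grammar structure itself.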

@baldur

In my experience, code generation by LLMs is no good. In the best case, you have to spend extra time correcting it. And most of the time it's just garbage.

But LLMs are quite useful for code review, if you ask me. They make maybe ten suggestions for your code, and usually at least half of them are right. You could ask another developer to do it for you, but by handing it to an LLM, you save human effort for more important things.

By the way, if you truly want automatic and useful code generation, try #GeneticProgramming and #EvolutionaryAlgorithm :)

#LLM #Copilot #ChatGPT #AI #ML #programming

Release v8.1.0 · jenetics/jenetics

Improvements: #822: Improve the build script for generating combined Javadoc. #898: Add support for reading data from CSV files or strings, which simplifies the code for regression problems. static...

GitHub
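Loading regression samples from CSV text (#898) amounts to something like the following. This is a plain-Java sketch, not the actual CSV API added to Jenetics; the `parse` helper is hypothetical.

```java
import java.util.Arrays;

public class CsvToSamples {
    // Hypothetical helper: parse a CSV string into rows of doubles,
    // e.g. regression samples of the form "x,y".
    static double[][] parse(String csv) {
        return csv.lines()
            .filter(line -> !line.isBlank())
            .map(line -> Arrays.stream(line.split(","))
                .mapToDouble(Double::parseDouble)
                .toArray())
            .toArray(double[][]::new);
    }

    public static void main(String[] args) {
        double[][] samples = parse("""
            1.0,2.0
            2.0,4.1
            3.0,5.9
            """);
        System.out.println(samples.length + " samples, first y = " + samples[0][1]);
    }
}
```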
Moshe Sipper’s Cat-a-log of Writings - Moshe Sipper, Ph.D. - Medium

Moshe Sipper’s Cat-a-log of Writings. Academia — Artificial Intelligence — Artificial Life — Better Humans — Bio-Inspired Systems — Cellular Automata — Comics — Deep Learning —.

Medium

Evolutionary algorithm (Evolution 🧬)

In computational intelligence, an evolutionary algorithm is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play...

https://en.wikipedia.org/wiki/Evolutionary_algorithm

#EvolutionaryAlgorithm #Evolution #Cybernetics #EvolutionaryAlgorithms

Evolutionary algorithm - Wikipedia
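The mechanisms named in the excerpt (selection, recombination, mutation over a population of candidate solutions) fit in a few dozen lines of plain Java. Below is a minimal generational EA for the classic OneMax toy problem (maximize the number of 1-bits); it is an illustrative sketch, not Jenetics code, and all names and parameters are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class OneMaxEA {
    static final Random RNG = new Random(7);
    static final int LENGTH = 32, POP = 40, GENERATIONS = 200;

    // Fitness: the number of 1-bits in the genome (OneMax).
    static int fitness(boolean[] genome) {
        int f = 0;
        for (boolean bit : genome) if (bit) f++;
        return f;
    }

    // Selection: the fitter of two randomly drawn individuals (binary tournament).
    static boolean[] select(List<boolean[]> pop) {
        boolean[] a = pop.get(RNG.nextInt(pop.size()));
        boolean[] b = pop.get(RNG.nextInt(pop.size()));
        return fitness(a) >= fitness(b) ? a : b;
    }

    // Recombination: single-point crossover of two parents.
    static boolean[] crossover(boolean[] p1, boolean[] p2) {
        int cut = RNG.nextInt(LENGTH);
        boolean[] child = new boolean[LENGTH];
        for (int i = 0; i < LENGTH; i++) child[i] = i < cut ? p1[i] : p2[i];
        return child;
    }

    // Mutation: flip each bit with probability 1/LENGTH.
    static void mutate(boolean[] genome) {
        for (int i = 0; i < LENGTH; i++)
            if (RNG.nextDouble() < 1.0 / LENGTH) genome[i] = !genome[i];
    }

    // Runs the generational loop and returns the best fitness ever seen.
    static int run() {
        List<boolean[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) {
            boolean[] g = new boolean[LENGTH];
            for (int j = 0; j < LENGTH; j++) g[j] = RNG.nextBoolean();
            pop.add(g);
        }
        int best = 0;
        for (int gen = 0; gen < GENERATIONS; gen++) {
            List<boolean[]> next = new ArrayList<>();
            for (int i = 0; i < POP; i++) {
                boolean[] child = crossover(select(pop), select(pop));
                mutate(child);
                next.add(child);
            }
            pop = next;
            for (boolean[] g : pop) best = Math.max(best, fitness(g));
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best fitness: " + run() + " / " + LENGTH);
    }
}
```

Swapping in a different `fitness` function, genome type, or selection operator is how the same loop generalizes to real optimization problems; libraries like Jenetics factor exactly these pieces out as pluggable components.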

I need to dig into the paper more deeply, but this seems like a pretty significant result for optimizing ML performance and power usage (at least for classifiers - but it seems like the same approach could be used elsewhere).

"Despite Tiny Classifiers being constrained to a few hundred logic gates, we observe no statistically significant difference in prediction performance in comparison to the best-performing ML baseline"

#ML #FPGA #evolutionaryalgorithm

https://arxiv.org/abs/2303.00031

Tiny Classifier Circuits: Evolving Accelerators for Tabular Data

A typical machine learning (ML) development cycle for edge computing is to maximise the performance during model training and then minimise the memory/area footprint of the trained model for deployment on edge devices targeting CPUs, GPUs, microcontrollers, or custom hardware accelerators. This paper proposes a methodology for automatically generating predictor circuits for classification of tabular data with comparable prediction performance to conventional ML techniques while using substantially fewer hardware resources and power. The proposed methodology uses an evolutionary algorithm to search over the space of logic gates and automatically generates a classifier circuit with maximised training prediction accuracy. Classifier circuits are so tiny (i.e., consisting of no more than 300 logic gates) that they are called "Tiny Classifier" circuits, and can efficiently be implemented in ASIC or on an FPGA. We empirically evaluate the automatic Tiny Classifier circuit generation methodology or "Auto Tiny Classifiers" on a wide range of tabular datasets, and compare it against conventional ML techniques such as Amazon's AutoGluon, Google's TabNet and a neural search over Multi-Layer Perceptrons. Despite Tiny Classifiers being constrained to a few hundred logic gates, we observe no statistically significant difference in prediction performance in comparison to the best-performing ML baseline. When synthesised as a Silicon chip, Tiny Classifiers use 8-18x less area and 4-8x less power. When implemented as an ultra-low cost chip on a flexible substrate (i.e., FlexIC), they occupy 10-75x less area and consume 13-75x less power compared to the most hardware-efficient ML baseline. On an FPGA, Tiny Classifiers consume 3-11x fewer resources.

arXiv.org
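The paper's gate-level search can be illustrated, very loosely, with a toy version: searching over small feed-forward NAND circuits for one that matches a target truth table (XOR here, instead of a tabular classifier). This is not the Auto Tiny Classifiers method, just a sketch of evolving over the space of logic gates; the genome encoding, gate budget, and plain keep-best random search are all made up for illustration.

```java
import java.util.Random;

public class TinyCircuit {
    static final Random RNG = new Random(3);
    static final int GATES = 6; // circuit size budget, in NAND gates

    // Evaluate a feed-forward NAND netlist on inputs (x0, x1). genome[2*i]
    // and genome[2*i+1] index the two inputs of gate i among the earlier
    // signals (the two circuit inputs plus previous gate outputs); the last
    // gate is the circuit output.
    static boolean eval(int[] genome, boolean x0, boolean x1) {
        boolean[] signal = new boolean[2 + GATES];
        signal[0] = x0;
        signal[1] = x1;
        for (int i = 0; i < GATES; i++) {
            boolean a = signal[genome[2 * i]];
            boolean b = signal[genome[2 * i + 1]];
            signal[2 + i] = !(a && b); // NAND
        }
        return signal[2 + GATES - 1];
    }

    // Fitness: how many rows of the XOR truth table the circuit matches (0..4).
    static int fitness(int[] genome) {
        int score = 0;
        for (int x0 = 0; x0 < 2; x0++)
            for (int x1 = 0; x1 < 2; x1++)
                if (eval(genome, x0 == 1, x1 == 1) == ((x0 ^ x1) == 1)) score++;
        return score;
    }

    // A random genome: each gate wires its inputs to any earlier signal.
    static int[] randomGenome() {
        int[] g = new int[2 * GATES];
        for (int i = 0; i < GATES; i++) {
            g[2 * i] = RNG.nextInt(2 + i);
            g[2 * i + 1] = RNG.nextInt(2 + i);
        }
        return g;
    }

    // Plain random search with keep-best; the paper uses a real evolutionary
    // algorithm, but the search space and objective have the same shape.
    static int search(int samples) {
        int best = 0;
        for (int s = 0; s < samples && best < 4; s++)
            best = Math.max(best, fitness(randomGenome()));
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best truth-table matches: " + search(200_000) + " / 4");
    }
}
```

For tabular classification the truth-table rows become training samples and the circuit gets one input bit per feature bit, but the search loop stays the same, which is why the resulting classifiers map so directly onto ASIC or FPGA gates.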
Have you ever seen an #EvolutionaryAlgorithm evolve to "extinction"? By that I mean that fitness is gradually improving over multiple generations and then suddenly falls off a cliff to a lower bound, and never recovers. A student saw it in some code we were hacking recently. The cause was obvious, in retrospect.