I came across a post on LinkedIn about evolutionary computation, and opted to post this in response:
I never stopped using evolutionary computation. I'm even weirder and use coevolutionary algorithms. Unlike EC, the latter have a bad reputation as being difficult to apply, but if you know what you're doing (e.g. by reading my publications 😉) they're quite powerful in certain application areas. I've successfully applied them to designing resilient physical systems, discovering novel game-playing strategies, and driving online tutoring systems, among other areas. They can inform more conventional multi-objective optimization.

Many challenging problems are not easily "vectorized" or "numericized", but might have straightforward representations in discrete data structures. Combinatorial optimization problems can fall under this umbrella. Techniques that work directly with those representations can be orders of magnitude faster/smaller/cheaper than techniques requiring another layer of representation (natural language for LLMs, vectors of real values for neural networks). Sure, given enough time and resources clever people can work out a good numerical re-representation that allows a deep neural network to solve a problem, or prompt engineer an LLM. But why whack at your problem with a hammer when you have a precision instrument?
I started to put up notes about (my way of conceiving) coevolutionary algorithms on my web site, here. I stopped because it's a ton of work and nobody reads these as far as I can tell. Sound off if you read anything there!

#AI #GenAI #GenerativeAI #LLMs #EvolutionaryComputation #GeneticAlgorithms #GeneticProgramming #EvolutionaryAlgorithms #CoevolutionaryAlgorithms #Cooptimization #CombinatorialOptimization #optimization

Do you think that the widespread use of #EvolutionaryComputation, #EvolutionaryAlgorithms etc. in combination with e.g. large models might lead to the creation of #ArtificialLife / #ALife?

Yes, it might even happen unintentionally
It is quite probable
No, this is impossible

In my opinion, the introduction of #EvolutionaryComputation methods in combination with large models might lead to #agi in the long term. Life itself is a permanent self-optimization driven by evolution. So will evolutionary computation even lead to artificial life? What do you think?

Alright, another round of experiments for #wakegp has ended. This round was for determining whether adding the conditional SelectP instruction to the instruction set has any effect on program size and fitness.

Another round of experiments, still ongoing, is determining how deletion mutation affects program size and fitness, that is, whether it prevents bloat. Earlier experiments show that the simple parsimony pressure method I invented has a positive effect on both fitness and program size, and when deletion mutation is added, the effect is even better. In those experiments, deleting one instruction at random from every program (that is, a rate of 1.0) has a positive impact on both fitness and average program size. Now I'm experimenting to see whether more than one deletion per program would be even better.
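For concreteness, here is a minimal sketch (plain Python, not the actual #wakegp implementation) of what a deletion mutation like the one described above might look like, assuming a linear GP program is represented as a list of instructions; the function name and parameters are illustrative only:

```python
import random

def deletion_mutation(program, rate=1.0, max_deletions=1):
    """Sketch of a deletion mutation for a linear GP program
    (a list of instructions). With probability `rate`, remove up to
    `max_deletions` randomly chosen instructions, keeping at least
    one instruction so the program stays valid."""
    program = list(program)  # work on a copy, don't mutate the caller's program
    if random.random() < rate:
        for _ in range(max_deletions):
            if len(program) > 1:
                del program[random.randrange(len(program))]
    return program
```

With rate=1.0 every program loses an instruction each generation, matching the setting described above; raising `max_deletions` is the follow-up experiment.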

#GeneticProgramming #GP #EvolutionaryML #ML #AI #MachineLearning #evolutionarycomputation #EC #LinearGeneticProgramming #wakeworddetection #wake_word_detection #optimization #opensourceML #opensource #FOSS #opensourceAI

Unfortunately, my workhorse computer is now offline, and I am in another city with no access to it. It's likely there has been a #poweroutage, which is now very common in my country, #iran

I am doing experiments with #wakegp to see if my simple method for parsimony pressure is effective. So far, its effect seems to be very small. I'm considering other methods for parsimony pressure, such as bucketing and tournament selection (on size instead of fitness).
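As an illustration of that last idea, here is a hedged sketch (plain Python, not #wakegp code, names hypothetical) of a tournament selection that selects on program size instead of fitness:

```python
import random

def size_tournament(population, k=2):
    """Parsimony pressure via tournament selection on size:
    draw k programs at random and return the shortest one,
    so shorter programs win regardless of their fitness."""
    contestants = random.sample(population, k)
    return min(contestants, key=len)
```

In practice such size tournaments would be mixed with ordinary fitness tournaments at some ratio, so selection pressure toward small programs doesn't swamp selection for fitness.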

I am expecting to deliver results in summer, God willing.

#Geneticprogramming #evolutionarycomputation #EvolutionaryComputing #artificialintelligence #machinelearning #ml #ai #wake_word_detection #wakeworddetection #programming #computer_science #cs #computerscience

I wanted to elaborate a bit on this point.

One thing you can observe in some subfields of computer science is a strong bias towards "inventing" algorithms or performing "novel" demonstrations that improve on the "state of the art". You haven't done anything worth publishing unless you can name it the Such-and-such Algorithm, or you can demonstrate a phenomenon that your (probably incomplete) literature review suggests hasn't been demonstrated before. Taken to an extreme, this kind of bias results in things like the EC Bestiary or the endless claims that LLMs are better than humans at X task.

But what is all this? I believe it represents a power struggle. Since these subfields do not take account of their own histories nor the histories of their subject matter, they grapple only with and in the present. They've given up any pretense of seeking truth--which demands understanding history--and instead seek dominance here and now. "Better". "Faster". "State of the art". These aren't the aims of truth-seekers, they're the aims of power-seekers, those who seek to dominate.

And what would dominance look like in this space? A fundamentalist or dogmatic view of the subfield. Bucci's algorithm is the only algorithm to do X. Bucci's algorithm is the fastest algorithm to do X. Bucci's algorithm is the only algorithm producing state of the art performance at X. Don't bother reading or thinking about other algorithms, just use Bucci's algorithm. That's dogmatism.

Why are you trying to write your learning algorithm from scratch? Just use PyTorch or TensorFlow. Why are you trying to create a natural language generator? Just use ChatGPT. Nowadays, if you're not working on deep learning, you're not really doing machine learning (*). Etc. This is also dogmatism.

#Science #ComputerScience #LLM #EvolutionaryComputation

(*) I've heard a computer science professor express this.
Evolutionary Computation Bestiary

A bestiary of evolutionary, swarm and other metaphor-based algorithms


As the findings in my #GeneticProgramming research become more and more thorough, I have started to document them. The main problem in my runs is still lack of diversity.

PS: It could be nice if we could write papers in some dialect of #markdown

PS2: These writings are for myself, to remember where I am and where I'm going. No academic paper yet :)

#ML #MachineLearning #WakeWordDetection

#research #artificialIntelligence

#EvolutionaryComputation #dailynote

Getting back into working on #83: algorithmically evolving analog computing patches to produce HD electronic image sequences. Trying to channel the specific anarchistic potential of evolvable hardware into the production of images.
#evolutionarycomputation #analogcomputing #systemsthatmatter
Not that I have the free time to take on another project, but there's a part of me that wants to do a thorough exploration of argmax and write up what I find, if only as notes. Math-y and science-y people take it for granted; search engines prefer telling you about the numpy function of that name. But it turns out argmax has (what I think are) interesting subtleties.

Here's one. If you're given a function, you can treat argmax of that function as a set-valued function varying over all subsets of its domain, returning a subset--the argmaxima let's call them--of each subset. argmax x∈S f(x) is a subset of S, for any S that is a subset of the function f's domain. Another way to think of this is that argmax induces a 2-way partitioning of any such input set S into those elements that are in the argmax, and those that are not.
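In code, the set-valued reading of argmax is tiny; a sketch (the helper names are mine, not standard):

```python
def argmax_set(f, S):
    """All elements of S attaining the maximum of f on S:
    the 'argmaxima' of the subset S."""
    m = max(f(x) for x in S)
    return {x for x in S if f(x) == m}

def argmax_partition(f, S):
    """The induced 2-way partition of S: argmaxima vs. the rest."""
    winners = argmax_set(f, S)
    return winners, set(S) - winners
```

Note this is genuinely set-valued: on a subset where f ties at its maximum, every tied element is returned, unlike numpy's argmax, which arbitrarily reports one index.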

Now imagine you have some way of splitting any subset of some given set into two pieces, one piece containing the "preferred" elements and the other piece the rest, separating the wheat from the chaff if you will. It turns out that in a large variety of cases, given only a partitioning scheme like this, you can find a function for which the partitioning is argmax of that function. In fact you can say more: you can find a function whose codomain is (a subset of) some n-dimensional Euclidean space. You might have to relax the definition of argmax slightly (but not fatally) to make this work, but you frequently can (1). It's not obvious this should be true, because the partitioning scheme you started with could be anything at all (as long as it's deterministic--that bit's important). That's one thing that's interesting about this observation.
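To make this concrete in the simplest case, here is a sketch (names are mine) that recovers a numeric function from a partitioning scheme, assuming the scheme is induced by a total preorder on the underlying set; the general partial-order / ℝ^n case needs the decomposition machinery alluded to in footnote (1):

```python
def recover_objective(domain, preferred):
    """Given a deterministic splitter `preferred` (returning the
    preferred piece of any subset of `domain`), recover a numeric
    function g whose argmax over any subset reproduces the split.
    Assumes the splitter comes from a total preorder."""
    def weakly_prefers(x, y):
        # x is weakly preferred to y iff x survives the split of {x, y}
        return x in preferred({x, y})
    # g(x) = how many elements x is weakly preferred to: a rank function
    return {x: sum(weakly_prefers(x, y) for y in domain) for x in domain}
```

The recovered g is generally a different function from whatever produced the splitter, but its argmax agrees with the splitter on every subset; only two-element splits are ever consulted, which is what makes the "local" information sufficient here.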

Another, deeper reason this is interesting (to me) is that it connects two concepts that superficially look different, one being "local" and the other "global". This notion of partitioning subsets into preferred/not preferred pieces is sometimes called a "solution concept"; the notion shows up in game theory, but is more general than that. You can think of it as a local way of identifying what's good: if you have a solution concept, then given a set of things, you're able to say which are good, regardless of the status of other things you can't see (because they're not in the set you're considering). On the other hand, the notion of argmax of a function is global in nature: the function is globally defined, over its entire domain, and the argmax of it tells you the (arg)maxima over the entire domain.

In evolutionary computation and artificial life, which is where I'm coming from, such a function is often called an "objective" (or "multiobjective") function, sometimes a "fitness" function. One of the provocative conclusions of what I've said above for these fields is that as soon as you have a deterministic way of discerning "good" from "bad" stuff--aka a solution concept--you automatically have globally-defined objectives. They might be unintelligible, difficult to find, or not very interesting or useful for whatever you're doing, but they are there nevertheless: the math says so. The reason this is provocative is that every few years in the evolutionary computation or artificial life literature there pops up some new variation of "fitnessless" or "objective-free" algorithms that claim to find good stuff of one sort or another without the need to define objective function(s), and/or without the need to explicitly climb them (2). The result I'm alluding to here strongly suggests that this way of thinking lacks a certain incisiveness: if your algorithm has a deterministic solution concept, and the algorithm is finding good stuff according to that solution concept, then it absolutely is ascending objectives. It's just that you've chosen to ignore them (3).

Anyway, returning to our friend argmax, it looks like it has a kind of inverse: given only the "behavior" of argmax of a function f over a set of subsets, you're often able to derive a function g that would lead to that same behavior. In general g will not be the same as f, but it will be a sibling of sorts. In other words there's an adjoint functor or something of that flavor hiding here! This is almost surely not a novel observation, but I can say that in all my years of math and computer science classes I never learned this. Maybe I slept through that lecture!

#ComputerScience #math #argmax #SolutionConcepts #CoevolutionaryAlgorithms #CooptimizationAlgorithms #optimization #EvolutionaryComputation #EvolutionaryAlgorithms #GeneticAlgorithms #ArtificialLife #InformativeDimensions



(1) If you're familiar with my work on this stuff then the succinct statement is: partial order decomposition of the weak preference order induced by the solution concept, when possible, yields an embedding of weak preference into ℝ^n for some finite natural number n; the desired function can be read off from this (the proofs about when the solution concept coincides with argmax of this function have some subtleties but aren't especially deep or hard). I skipped this detail, but there's also a "more local" version of this observation, where the domain of applicability of weak preference is itself restricted to a subset, and the objectives found are restricted to that subdomain rather than fully global.

(2) The latest iteration of "open-endedness" has this quality; other variants include "novelty search" and "complexification".

(3) Which is fair of course--maybe these mystery objectives legitimately don't matter to whatever you're trying to accomplish. But in the interest of making progress at the level of ideas, I think it's important to be precise about one's commitments and premises, and to be aware of what constitutes an impossible premise.


I'm really enjoying my latest research project. This one's exploring how different spatial environments can lead to different evolutionary dynamics. Here we see an environment where it's harder to survive in the middle than at the edges (that is, survival there requires higher scores from the fitness function). We can watch the population evolve increasing fitness as it spreads into the interior space.

#EvolutionaryComputation #EvolutionaryAlgorithms #science #evolution