People keep coming back to this article (originally written in 2015). I've revised and expanded it with a few points I haven't shared publicly before. The suggestions for identifying low-quality content and improving content quality are still relevant today.

#google #quality #search #algorithms #eeat #seo #searchengineoptimization #webmarketing #digitalmarketing #panda

"How The Panda Algorithm Might Evaluate Your Site"

https://www.seo-theory.com/google-panda-signals/

It's amazing that people keep "discovering" (and writing journal papers and books about it) that GPTs/LLMs are just big "predictive text" machines. A GPT is, by definition, a program that performs sophisticated, conditional mimicry. It is mostly designed to fool gullible humans. With a vast amount of input data, industrial-scale mimicry can entertain and distract humans for hours. It's a parlour game.

A text generator might pass a Turing test, but that only means it can fool a human into believing the generator is alive or responsive when it isn't. Turing's "artificial" intelligence test was never about machine awareness or actual consciousness, so any trick that fools a human will do. It is about artifice, not real intelligence.

All GPT generator software uses lots of data to make its mimicry seem nuanced and comprehensive. A simpler program would produce responses that are too obviously plagiarised: it's not really impressive to ask "write me a love song" and get a Beatles song back as if it were an original answer. The trick in these text programs is to jumble up different information (always plagiarised) so that it seems original. Eventually this jumbling produces responses that the interacting person obviously cannot accept, and the program fails the Turing test. The less knowledgeable a person is, the less they notice when this happens.

An Atari chess program beats ChatGPT. The reason? A GPT mimics language, not good chess moves. It doesn't reason about chess. Expecting a program that stores less knowledge about chess positions and moves to beat one that stores more is foolish.

#ai #atari #chess #algorithms #turingtest #deception

Years ago at university, a professor annoyed me with his claim that "due to architecture limitations of current processors, the Towers of Hanoi can't be solved for more than 32 levels".

Challenge accepted. We built a tool that could compute any step of any tower of up to 32,000 levels in linear time. We only ran into the limitations of contemporary monitors: we were unable to visualise such a tower.
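The trick generalises nicely. Here is a sketch in Python (a hypothetical reconstruction, not the original tool): the peg holding each disk after t moves of the optimal solution can be computed directly, without simulating the moves, in time linear in the number of disks.

```python
def hanoi_state(n, t):
    """Peg (0, 1, or 2) of each disk after t moves of the optimal
    2**n - 1 move solution from peg 0 to peg 2. O(n) time per query."""
    src, aux, dst = 0, 1, 2
    pegs = {}
    for disk in range(n, 0, -1):
        half = 1 << (disk - 1)              # moves made before this disk moves
        if t < half:
            pegs[disk] = src                # disk hasn't moved yet
            src, aux, dst = src, dst, aux   # smaller disks are headed to old aux
        else:
            pegs[disk] = dst                # disk already reached its target
            t -= half
            src, aux, dst = aux, src, dst   # smaller disks are headed to old dst
    return pegs
```

For example, `hanoi_state(3, 7)` reports all three disks on peg 2, the final position of the 7-move solution.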

#HanoiTowers #Algorithms

"'From waste pickers in India to ride-hailing drivers in Nigeria, workers are resisting algorithmic control by organising protests, forming unions, and demanding AI transparency.' The risk is that the countries of the Global South become the testing ground for these surveillance technologies for the rest of the world."
Workplace surveillance is going international
#surveillancedemasse #algorithms #darkenlightenment
https://danslesalgorithmes.net/stream/la-surveillance-au-travail-sinternationalise/

recommendation #algorithms are so gross

Took a quick look at a camera and Amazon is already here with the "ohoho, looks like someone has an expensive new hobby" recommends

🎩✨ Ah, the joy of wading through a "modern" 2025 paper on "minimal perfect hashing"—because clearly, hashing from 2023 just wasn't perfect or minimal enough. 🤓🔍 Spoiler: It's basically a love letter to #algorithms, but only if you're fluent in Nerdish and have the stamina for 2506 pages of hash-tastic mumbo jumbo. 🧠💥
https://arxiv.org/abs/2506.06536 #minimalperfecthashing #techhumor #nerdculture #2025research #HackerNews #ngated
Modern Minimal Perfect Hashing: A Survey

Given a set $S$ of $n$ keys, a perfect hash function for $S$ maps the keys in $S$ to the first $m \geq n$ integers without collisions. It may return an arbitrary result for any key not in $S$ and is called minimal if $m = n$. The most important parameters are its space consumption, construction time, and query time. Years of research now enable modern perfect hash functions to be extremely fast to query, very space-efficient, and scale to billions of keys. Different approaches give different trade-offs between these aspects. For example, the smallest constructions get within 0.1% of the space lower bound of $\log_2(e)$ bits per key. Others are particularly fast to query, requiring only one memory access. Perfect hashing has many applications, for example to avoid collision resolution in static hash tables, and is used in databases, bioinformatics, and stringology. Since the last comprehensive survey in 1997, significant progress has been made. This survey covers the latest developments and provides a starting point for getting familiar with the topic. Additionally, our extensive experimental evaluation can serve as a guide to select a perfect hash function for use in applications.
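As a toy illustration of the idea (not any construction from the survey): brute-force searching for a hash seed that happens to map n keys to n distinct slots yields a minimal perfect hash function. The expected number of seeds to try grows roughly like e^n, which is why storing the winning seed costs about log2(e) ≈ 1.44 bits per key, the lower bound the abstract mentions. Keyed `blake2b` is just one convenient choice of hash family here.

```python
import hashlib

def slot(key: str, seed: int, n: int) -> int:
    """Map key -> [0, n) using a keyed hash; the seed selects the function."""
    digest = hashlib.blake2b(key.encode(), key=seed.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:8], "little") % n

def find_mphf_seed(keys):
    """Try seeds until every key lands in a distinct slot (minimal + perfect).
    Expected ~e^n attempts, so this only works for very small key sets."""
    n = len(keys)
    seed = 0
    while len({slot(k, seed, n) for k in keys}) != n:
        seed += 1
    return seed
```

With the seed in hand, `slot(k, seed, n)` is a bijection from the keys onto 0..n-1. Practical constructions first split the keys into small buckets so that each such search stays feasible.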


Has the #algoritmo that gives strength and body to the AIs been made universally accessible?

It really strikes me that so many multinationals each have their own #franconscibertein when supposedly there should only be one #ia

#software #inteligenciaartificial #algorithms #algoritmos

Readings shared June 8, 2025

The reading shared on Bluesky on 8 June 2025 is "The illusion of thinking: Understanding the strengths and limitations of reasoning models via the lens of problem complexity" by Parshin Shojaee et al.

Vestigium

This is interesting.

Listening to The Quanta Podcast (In Computers, Memory Is More Useful Than Time): https://play.prx.org/listen?ge=prx_11709_b2336f1f-74e3-4140-a44c-47a3470fd249&uf=https%3A%2F%2Fquantapodcast.quantamagazine.org%2F

One computer scientist’s “stunning” proof is the first progress in 50 years on one of the most famous questions in computer science.

This week's guest is Ben Brubaker; he recently published "For Algorithms, a Little Memory Outweighs a Lot of Time.”

#Podcast #Computing #Algorithms

The Quanta Podcast

This is the third episode of the weekly series The Quanta Podcast, hosted by Quanta Magazine editor in chief Samir Patel. (If you've been a fan of Quanta Science Podcast, it will continue as audio edition episodes in this same feed every other week.) Historical recording © Jack Copeland and Jason Long.

Gaussian integration is cool

A brief discussion of Gaussian quadrature and Chebyshev–Gauss quadrature
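For concreteness, a minimal Chebyshev–Gauss rule in Python (an illustrative sketch, not taken from the discussion): it approximates the integral of f(x)/√(1−x²) over [−1, 1] using the n Chebyshev roots as nodes. Every weight is simply π/n, and the rule is exact for polynomials of degree up to 2n−1.

```python
import math

def gauss_chebyshev(f, n):
    """Approximate the integral of f(x) / sqrt(1 - x^2) over [-1, 1].
    Nodes are the n roots of the Chebyshev polynomial T_n; all weights are pi/n."""
    return (math.pi / n) * sum(
        f(math.cos((2 * k - 1) * math.pi / (2 * n))) for k in range(1, n + 1)
    )
```

For example, the integral of x²/√(1−x²) over [−1, 1] is exactly π/2, and `gauss_chebyshev(lambda x: x * x, 2)` already recovers it, since x² has degree 2 ≤ 2n−1 = 3.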