Giacomo Miceli

@jamez
45 Followers
107 Following
293 Posts
Xennial, exogamous xenophile. Humanist nerd, recidivist startupper, antitribalist. Maker of things, coder for expressive purposes. Dreams in tongues, hears in colors.
Site: https://jamez.it

Some people are fine with letting a major part of our lives go forgotten: the type of people who delete an email after “taking care of it”. But the world needs memory, lest we forget the important lessons of our past. I will grant that there is value in “starting fresh” and that not all memories are worth the same, but given the blunt choice between remembering and forgetting, I would always pick remembering.

Despite the tireless work of archivists and historians, and despite major initiatives like the Internet Archive, a staggering amount of the Web disappears every day. In some cases the link to the past becomes so tenuous that even if an archived version exists, without a human remembering a now-nonexistent URL, the page is effectively lost.

Back in 2021, I was in Scottsdale and stumbled upon an art gallery downtown. I was very amused by the paintings of one specific artist the gallery represented, the humorous surrealist painter Bob Price. I was that close to buying one of his artworks! I didn’t, but I made a mental note to check on the gallery when I had more bandwidth. Fast forward to 2025: I went looking for the gallery’s website, but it no longer existed. It had simply vanished, and according to Google Maps the physical location had permanently closed. Now, with the Google listing gone, searching for the name of the gallery followed by the name of the artist yields zero results pointing to the dead website. Most incredible of all, searching for the artist and the gallery on the Wayback Machine returns only a long list of irrelevant results!

In a nutshell, without remembering the URL, it seems impossible to get back to those precious bits. And just to emphasize how tenuous the link to the past is: most photos from the archived gallery website were missing from the Wayback Machine. Thankfully, the original resources had been hosted on static.wixstatic.com, which was miraculously still serving them (despite the website being offline for the past ~3 years), and by writing a simple script I was able to retrieve them all.
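The actual script isn’t shown in the post, but the salvage can be sketched along these lines: scrape the wixstatic media URLs out of a saved copy of the archived page, then download each one. File names, the output directory, and the URL pattern here are my assumptions, not the author’s code.

```python
import os
import re
import urllib.request

def extract_wix_urls(html):
    """Pull static.wixstatic.com media URLs out of an archived HTML page.
    The character class is a simplification; real Wix URLs may contain
    other characters."""
    pattern = r"https://static\.wixstatic\.com/media/[\w./-]+"
    return sorted(set(re.findall(pattern, html)))

def salvage(archived_html_path, out_dir="salvaged"):
    """Download every wixstatic image referenced by a saved snapshot.
    `archived_html_path` is a local copy of the archived gallery page."""
    os.makedirs(out_dir, exist_ok=True)
    with open(archived_html_path, encoding="utf-8", errors="ignore") as f:
        urls = extract_wix_urls(f.read())
    for url in urls:
        # Wix appends transformation suffixes (/v1/fill/...); the path
        # segment right after /media/ is the original file name.
        name = url.split("/media/")[1].split("/")[0]
        urllib.request.urlretrieve(url, os.path.join(out_dir, name))
```

Because the originals were still live on Wix’s CDN, no Wayback Machine lookup is needed once the URLs are recovered.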

I don’t have an easy answer to this problem, but I will be thinking some more about it. In the meantime, I’ll leave here a dump of Bob Price’s work, which would otherwise be lost to the web!

Bob Price’s paintings

#macro #memory #rant

BOB PRICE | Archive

The project A Sign In Space reached an important milestone last year. A team of researchers was able to extract a cellular automaton and obtain a very interesting configuration, displaying what appears to be an amino acid diagram.

It might still be too early to discuss the ramifications of this conclusion, but it’s not too soon to play around with the results and imagine new, interesting visualizations of the concept. The image you see follows a simple premise: what if we took a snapshot of the cellular automaton’s configuration at each frame and turned the live cells into cubes in three-dimensional space? By adding one layer upon another like bricks, we construct these Pillars of Life. It would be fun to animate them!
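The stacking idea can be sketched in a few lines. The automaton extracted by the project isn’t reproduced here, so this sketch uses Conway’s Game of Life as a stand-in rule; the function names are mine.

```python
from collections import Counter
from itertools import product

def life_step(cells):
    """One generation of Conway's Game of Life on a set of live (x, y) cells."""
    neighbors = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx, dy in product((-1, 0, 1), repeat=2)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has 3 neighbors, or 2 and was alive.
    return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in cells)}

def pillars_of_life(cells, generations):
    """Stack each generation as a layer of unit cubes: every live cell at
    time t becomes a voxel (x, y, t), building the pillars bottom-up."""
    voxels = []
    for t in range(generations):
        voxels.extend((x, y, t) for x, y in sorted(cells))
        cells = life_step(cells)
    return voxels
```

The resulting (x, y, t) triples can be fed to any 3D tool that instances a cube per voxel.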

#macro #weekendProject

Building community around the open web at @cookies da Achada. Here’s to the next ones!

With @teclista and @gsantos_descolagem and @Sardera

I have just realized that Il Post released part of an interview I gave a few months ago on the topic of vibe coding (article in Italian). I have many problems with the term “vibe coding”, but I suspect the expression will gradually fade and it will go back to being just “coding”, the way wireless phones are now just phones and online banking is now just banking.

I’m adding here the full context of the interview. (Scroll down for translation)

Il vibe coding non è una moda, ma uno spostamento tettonico nel campo dell’Informatica.

Personalmente, ho cominciato a usare i LLM per scrivere codice nel 2022, quando GPT-3 era lo stato dell’arte. Quando è uscito GPT-4, circa 8 mesi dopo, ho notato un forte cambiamento nelle mie abitudini da programmatore. Anziché fare domande precise su come risolvere un problema, ho cominciato a fare domande più generiche, praticamente descrivendo quello che desideravo in output, di fatto dando al modello maggiore autonomia di risolvere il problema nel modo che preferiva. Il sentimento prevalente di quei giorni era un misto di esaltazione e panico. Alan Warburton ha fatto un ottimo lavoro nel descrivere il wonder panic nel suo documentario sugli effetti delle AI nel mondo degli artisti, te lo raccomando. Molto di quello che dice è applicabile agli sviluppatori.

Neanche un anno dopo, intorno a inizio 2024, ho notato che persone vicine a me che non erano sviluppatori, ma erano un pochino smanettoni o avevano una tendenza al pensiero analitico, avevano cominciato a usare i LLM nello stesso modo e stavano avendo successo al lavoro, accelerando i loro processi e rivelando inefficienze nel loro campo.

Fra i miei amici sviluppatori c’è una divisione che quella fra democratici e repubblicani a confronto è nulla. Da una parte ci sono quelli che come me hanno abbracciato questi strumenti e sono entusiasti di poter esplorare più idee con più facilità e più rapidamente. Dall’altra parte ci sono quelli che giurano di voler morire prima di usare un LLM (“just a giant autocomplete”, “just a stupid markov chain”, etc.) e sono convinti che un LLM non sostituirà mai il loro lavoro. Io sono convinto che nel giro di 5 anni al massimo si dovranno ricredere e trasformare/adattare la loro professione.

Arrivando alla tua seconda domanda: credo che il tuo futuro sia già il presente per molti! Allo stato attuale, usando LLM per scrivere un programma non molto modulare, si giunge presto a un punto in cui il modello comincia a perdersi pezzi e fare errori ed è necessario che qualcuno capisca quello che sta succedendo. Questo punto di rottura con Claude 3.7 è già molto alto, parliamo di decine di migliaia di righe di codice. Ma sono convinto che a breve questo limite sarà meno evidente e anche società con grandi codebase cominceranno a usare questi nuovi strumenti.

Ovviamente gli umani insisteranno per rimanere sul sedile del guidatore. Ovviamente vorremo accertarci che tutto quello prodotto da un modello sia vagliato da un esperto, specialmente quando il suo uso è critico. Ovviamente capire quello che sta succedendo è essenziale per noi. Un buon modello di esempio è il pilota automatico degli aerei. La maggior parte del volo è nelle mani del PA, ma per decollo e atterraggio, le fasi più critiche, gli umani insistono ad avere il controllo totale. Lo stesso avverrà per il codice scritto per fini commerciali.

Ci sarà sempre bisogno di qualcuno che capisce quello che sta succedendo dietro le quinte. Il numero di persone in grado di programmare l’intera stack stava già diminuendo (io conosco solo tre o quattro persone in grado di scrivere in qualche dialetto del linguaggio assembly, per esempio), e credo che i LLM renderanno questo fenomeno ancora più evidente. Il che non significa necessariamente meno lavoro per i programmatori (anche se è una possibilità), ma semplicemente uno spostamento di ruolo un gradino più alto. I coder diventano manager e fanno supervisione del lavoro dei LLM. Molte persone adorano scrivere codice, e quando le società decideranno che è più economico lasciare quel task alle macchine, quelle persone soffriranno.

Vibe coding isn’t a fad, but a tectonic shift in the field of computing.

Personally, I started using LLMs to write code in 2022, when GPT-3 was the state of the art. When GPT-4 came out about eight months later, I noticed a strong change in my programming habits. Instead of asking precise questions about how to solve a problem, I began asking more generic ones—basically describing what I wanted as output—effectively giving the model more autonomy to solve the problem however it preferred. The prevailing feeling in those days was a mix of exhilaration and panic. Alan Warburton did a great job describing this “wonder panic” in his documentary about AI’s effects on the world of artists—I recommend it. Much of what he says applies to developers.

Not even a year later, around early 2024, I noticed that people close to me who weren’t developers but were a bit tech-savvy or had an analytical bent had started using LLMs the same way and were finding success at work, speeding up their processes and revealing inefficiencies in their field.

Among my developer friends there’s a divide that makes the one between Democrats and Republicans look mild by comparison. On one side are those who, like me, have embraced these tools and are excited to explore more ideas more easily and more quickly. On the other side are those who swear they’d rather die than use an LLM (“just a giant autocomplete,” “just a stupid Markov chain,” etc.) and are convinced an LLM will never replace their job. I’m convinced that within at most five years they’ll have to change their minds and transform/adapt their profession.

Coming to your second question: I think your future is already the present for many! As things stand, when you use LLMs to write a not-very-modular program, you quickly hit a point where the model starts losing track of pieces and making mistakes, and someone has to understand what’s going on. With Claude 3.7 this breaking point is already very high—we’re talking tens of thousands of lines of code. But I’m convinced that soon this limit will be less apparent, and even companies with large codebases will start using these new tools.

Obviously, humans will insist on staying in the driver’s seat. Obviously, we’ll want to make sure everything produced by a model is vetted by an expert, especially when its use is critical. Obviously, understanding what’s going on is essential for us. A good analogy is the airplane autopilot. Most of the flight is in the autopilot’s hands, but for takeoff and landing—the most critical phases—humans insist on having full control. The same will happen for code written for commercial purposes.

There will always be a need for someone who understands what’s happening behind the scenes. The number of people capable of programming the entire stack was already shrinking (I personally know only three or four people who can write in some dialect of assembly language, for example), and I believe LLMs will make this phenomenon even more pronounced. That doesn’t necessarily mean less work for programmers (though it’s a possibility), but rather a shift in role one rung higher. Coders become managers, supervising the work of LLMs. Many people love writing code, and when companies decide it’s cheaper to leave that task to machines, those people will suffer.

#AI #macro #rant

Con l’intelligenza artificiale diventeremo tutti programmatori?

Anche i meno esperti possono scrivere codice col cosiddetto "vibe coding": i risultati però hanno dei limiti

Il Post

Let’s talk about AI art.

https://theoatmeal.com/comics/ai_art

The Last Economy

A Third Path for the Intelligence Age

The Last Economy

I salvaged a pen plotter from the early ’90s. The owner, a retired architect, was ready to free up some space in his home after decades of disuse. The unit was in fair condition and required hardly any maintenance: some cleaning, the tightening of a belt, and a slide rail that needed lubrication; after that, it was ready to go.

The bigger challenge was how to deal with the proprietary pens, which are long out of production. What you find on eBay is dried up, leaving DIY as the only option. I had decent luck refilling old pens using ink reservoirs from other pens or markers, and I have recently started designing my own adapters and prototyping them in PETG on a 3D printer, though with mixed results so far.

What is certain is that this plotter will travel many more miles with its new owner. As a megalomaniac, I’m loving plotting in A0 format.

Here’s the manual of the unit, to the best of my knowledge previously unavailable online.

https://jamez.it/blog/wp-content/uploads/2025/08/Calcomp_Pacesetter_2024_2036_manual.pdf

#machines #macro #penplotter

GitHub repo.
And by plotting, I mean both converting vector coordinates and tracing the result on a piece of paper. Because it all starts with a pen plotter and a love for mini planets.

It always amazes me when I seem to be the only person on the planet with a given problem. No matter what combination of keywords I typed into a search engine, I just couldn’t find the tool I needed. I wanted to create a mini planet working exclusively with vector art, specifically with SVG files. You can try breaking the problem down into smaller tasks, but you still won’t find the utilities you need. For my specific pipeline, it boils down to three tools:

  • a Blender script to batch-export a panorama composed of several freestyle SVG shots from a given viewpoint;
  • a script to remap the resulting SVGs together in a single equirectangular SVG;
  • a program that takes an equirectangular SVG and remaps the lines/paths in it to stereographic projection.

These tools have been missing for at least six years, so it comes as a shock to me that I am ultimately the one doing something about it. Why not sooner? I guess it’s a combination of hoping that “someone will do it for me” and finally having the time and resources to do it myself. This would hardly have been a “weekend project” before the advent of LLMs and coding agents, but now it is!

The first tool, cube_map_svg.py, is the most straightforward, but potentially the one that sets you up for success or failure downstream. Open your scene in Blender and position the camera where you want it, configured for a 90-degree FOV. Make sure the camera sensor is set to the same width and height; I usually leave it at 36 mm by 36 mm. Set the output resolution to 2048×2048, though any square resolution will do. In the script, configure the output directory and keep everything else as is. Don’t forget to enable Freestyle and Freestyle SVG export in your render panel, then run the script from Blender.
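The camera setup above can be expressed as a short Blender scene-configuration fragment. This is a sketch using Blender’s Python API (`bpy`), to be run inside Blender, not the actual contents of cube_map_svg.py; it assumes the Freestyle SVG exporter add-on is already enabled.

```python
import math
import bpy

scene = bpy.context.scene
cam = scene.camera.data

# 90-degree FOV: each shot covers exactly one cube-map face
cam.angle = math.radians(90)

# Square sensor so horizontal and vertical FOV match
cam.sensor_width = 36.0
cam.sensor_height = 36.0

# Square output; any square resolution preserves the 1:1 aspect
scene.render.resolution_x = 2048
scene.render.resolution_y = 2048

# Freestyle must be on for the SVG exporter add-on to produce output;
# the SVG export toggle itself lives in the add-on's render panel.
scene.render.use_freestyle = True
```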

reproj_svg_to_equirect.py takes as input the metadata file generated in your export directory and processes each SVG, reprojecting them all into a single equirectangular SVG file. Before this project was enhanced by coding agents, I followed the wrong approach and attempted to stitch the different SVGs together; it did not go well. GPT-5’s largest contribution to me so far was providing the math to project the cube map into equirectangular format.
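The core of that cube-map-to-equirectangular math can be sketched as follows: a point (u, v) on a 90-degree face maps to a direction on the unit sphere, and from there to longitude/latitude and an equirectangular pixel. Function names and sign conventions here are my own consistent choice, not the script’s; matching a real exporter’s camera orientation may require flipping axes.

```python
import math

def face_uv_to_lonlat(face, u, v):
    """Map (u, v) in [0, 1] on a 90-degree-FOV cube face to (lon, lat)
    in radians, under one arbitrary but self-consistent axis convention."""
    a = 2.0 * u - 1.0  # [-1, 1] across the face, left to right
    b = 2.0 * v - 1.0  # [-1, 1] across the face, bottom to top
    if face == 'front':   x, y, z = a, b, 1.0
    elif face == 'back':  x, y, z = -a, b, -1.0
    elif face == 'right': x, y, z = 1.0, b, -a
    elif face == 'left':  x, y, z = -1.0, b, a
    elif face == 'top':   x, y, z = a, 1.0, -b
    else:                 x, y, z = a, -1.0, b  # bottom
    lon = math.atan2(x, z)
    lat = math.atan2(y, math.hypot(x, z))
    return lon, lat

def lonlat_to_equirect(lon, lat, width, height):
    """Longitude/latitude to pixel coordinates in an equirectangular image."""
    px = (lon + math.pi) / (2.0 * math.pi) * width
    py = (math.pi / 2.0 - lat) / math.pi * height
    return px, py
```

Because the inputs are SVG paths rather than pixels, only the path vertices need remapping; adjacent faces meet seamlessly since a face edge (u = 1) lands at exactly 45 degrees of longitude.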

Once you have an equirectangular panorama, you can go crazy with reprojections. The only one I have implemented so far is the stereographic one: svg_stereographic.py performs those transformations, taking parameters such as horizontal camera panning and zoom level.
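For the curious, the “tiny planet” look comes from a stereographic projection from the zenith onto the plane tangent at the nadir. A minimal sketch of the per-point math, with pan and zoom parameters that are my assumptions rather than the script’s actual interface:

```python
import math

def equirect_to_tiny_planet(px, py, width, height, pan=0.0, zoom=1.0):
    """Map an equirectangular point to 'tiny planet' plane coordinates.
    `pan` rotates the view horizontally (radians); a larger `zoom`
    shrinks the planet."""
    lon = px / width * 2.0 * math.pi - math.pi + pan
    lat = math.pi / 2.0 - py / height * math.pi
    theta = math.pi / 2.0 + lat          # angular distance from the nadir
    r = 2.0 * math.tan(theta / 2.0) / zoom
    return r * math.sin(lon), r * math.cos(lon)
```

The bottom row of the panorama (the nadir) collapses to the center of the planet, the horizon becomes a circle of radius 2/zoom, and the zenith flies off to infinity, which is why the sky wraps around the edges of these images.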

Here are some early results using this stack:

Let the penplotting season begin…

#macro #penplotter #weekendProject

@nikazygusztav @[email protected] you're welcome. Don't forget to share your creations!

I think this music video majestically wraps up the creative zeitgeist of the second half of 2025. Source

#AI #Micro