New Ways to Corrupt LLMs: The wacky things statistical-correlation machines like LLMs do – and how they might get us killed
Every time I see a headline like this I’m reminded of the time I heard someone describe the modern state of AI research as equivalent to the practice of alchemy.
Long before anyone knew about atoms, molecules, atomic weights, or electron bonds, there were dudes who would just mix random chemicals together in an attempt to turn lead into gold, create the elixir of life, or whatever. Their methods were haphazard, their objectives impossible, and most probably poisoned themselves in the process, but those early stumbling steps eventually gave rise to the modern science of chemistry and all that came with it.
AI researchers are modern alchemists. They have no real idea how anything works, and their experiments end in disaster as often as not. There’s great potential but no clear path to it. We can only hope we make it out of the alchemy phase before society succumbs to the digital equivalent of mercury poisoning, because it’s just so fun to play with.
People confuse alchemy with transmutation.
This is historical revisionism. There was absolutely no such distinction at the height of alchemy.