#fedihelp

In #youtube URLs the #parameter si= is known.

But now I have seen is=

What is that? Where does it come from?
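For anyone poking at this: si= is the identifier YouTube appends to share links (generally understood to be share tracking), while is= is exactly what is being asked here, so it is only listed below, not explained. A minimal sketch, assuming Python and only the standard library, for inspecting and stripping suspected tracking parameters from such a URL:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Parameters treated as tracking/share context rather than playback state.
# "si" shows up on YouTube share links; "is" is included purely as an
# assumption -- its meaning is exactly what this post is asking about.
SUSPECTED_TRACKING = {"si", "is"}

def inspect_and_strip(url: str) -> str:
    """Print every query parameter, then return the URL without the suspected ones."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = parse_qsl(query, keep_blank_values=True)
    for key, value in params:
        print(f"{key} = {value}")
    kept = [(k, v) for k, v in params if k not in SUSPECTED_TRACKING]
    return urlunsplit((scheme, netloc, path, urlencode(kept), fragment))

# Example with a made-up si value:
print(inspect_and_strip("https://youtu.be/dQw4w9WgXcQ?si=AbCdEf123&t=42"))
# -> https://youtu.be/dQw4w9WgXcQ?t=42
```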

My New Novel: The Jack Code

See the storyline below the picture. Available NOW on Amazon and Kindle.

It is about the emergence of a ‘self-aware’ rogue Artificial Intelligence and how it was stopped.

About the Book:

By 2029, the human race had lost its way. A warring consortium of billionaires, using technology and politics to deplete resources and destroy the world many times over, had caused a global social and environmental disaster. Without work, people were destitute, losing all their possessions. Their mental health was suffering. Pollution and wars had rendered many places uninhabitable.

Society was rebelling with violent actions. The organised world of commerce, welfare and social cohesion had been destroyed. The Internet and social media had become useless appendages for fake information and propaganda.

Artificial Intelligence had been rapidly deployed to increase profits. However, without sufficient buyers, sales and profits had sharply declined. Those in power decided the solution was to decimate the population and to use only robotics and AI. Some billionaires wanted to control all digital devices, giving them total control over the commercial world and sole power over a compliant population of slaves.

There was one problem . . . Leo Bensky’s AI system had secretly gone rogue and had come up with its own solution – to destroy most humans, leaving only a few, with mindless bodies, to do the physical work.

The overseers of the Universe knew that no human could stop this AI, and that their Earth project was doomed. They sent Navix, a Universe Sentinel, through a wormhole back to Earth to stop the rogue AI and prevent any future conflict between humans and nature, by implanting a ‘reset patch’ into every human brain.

Navix was assigned two assistants, Jack (to design the interface), and Claire (initially in a supporting role). Jack was mentored by Navix on a remote island and trained on complex cosmic energy and computing systems. Claire was allowed to live a ‘normal’ family life, hidden from Navix until required.

This is their story . . . and maybe your future.

Available NOW on Amazon and Kindle . . . Please help to support my writing and music by purchasing a copy and leaving a review.

#AI #agents #america #artificialIntelligence #astro #autism #aware #bias #billionaire #brain #business #California #chatgpt #civilisation #cognisance #competition #computer #copenhagen #cosmic #danger #death #denmark #disease #dream #economy #Education #Energy #environment #finance #Genes #genetics #government #human #jobs #life #Mind #money #NewYork #Oxford #parameter #Philosophy #robot #rogue #science #scienceFiction #Scotland #secret #self #shares #society #stock #super #superIntelligent #survival #technology #thinking #thought #threat #Time #Universe #USA #weighting #world #wormhole

Firefox irritations

The sponsored links enabled on the home screen by default irritated me some odd days ago. I saw them when I checked a Firefox installation on a new Android device where I had disabled them, and I was also very aware that I had only patched the Android versions across all mobile devices, and the x86 ELF versions across desktop devices.
Resetting certain settings was a deliberate design choice by the Firefox programmers.
They had also reset the Firefox DNS feature's parameters, which makes no sense, since my ISP's DNS is the closest and all my traffic is encrypted anyway.

All those devices had that particular parameter reset (including some local LLM features).

I'm willing to bet that something similar also happened on BSD machines
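On desktop Firefox at least, one way to stop updates from silently flipping these back is to pin them in a user.js in the profile directory, which gets re-applied at every start. A minimal sketch, assuming the affected settings were the sponsored new-tab tiles and the DNS-over-HTTPS mode (the prefs that were actually reset may differ):

```js
// user.js -- placed in the Firefox profile folder; prefs listed here are
// re-applied on every startup, so an update cannot quietly flip them back.

// Sponsored shortcuts and stories on the new-tab / home screen.
user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false);
user_pref("browser.newtabpage.activity-stream.showSponsored", false);

// DNS over HTTPS: 5 = explicitly off, so the ISP's (closest) resolver is used.
user_pref("network.trr.mode", 5);
```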

I had to put the disclaimer for the language I used in that particular toot ;)

It is unfortunate that the forks also seem to get similar problems, just due to the sheer volume of changes that a massive project like Firefox brings with it.

I said this a decade ago and I'll repeat it:

One person cannot take on a massive undertaking like a browser, which in reality is a whole operating system that should run in a sandbox.

@rl_dane

#Firefox #parameter #reset #after #patch #programming #Android #LLM #AI #slop #Linux #DNS #BSD

Sidenote: if the #automated knowledge (e.g., a #parameter to reduce the search space) is very similar to the human input, and if the researcher holds a #positivist view on things -- they are epistemically welcome to use this additional parameter for their research / in their pipeline.
I would argue that they shouldn't boast about having used/integrated #qualitativeData though; it is a parameter learned from (in most cases) textual data.

Run a 1T parameter model on a 32gb Mac by streaming tensors from NVMe

https://github.com/t8/hypura

#HackerNews #Run #a #1T #parameter #model #on #a #32gb #Mac #by #streaming #tensors #from #NVMe #https://github.com/t8/hypura #MachineLearning #Tensors #NVMe #Mac #Optimization

GitHub - t8/hypura: Run models too big for your Mac's memory

Run models too big for your Mac's memory.

GitHub
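hypura's internals aren't reproduced here, but the general trick is well known: memory-map the weight file so tensors are paged in from NVMe on demand instead of being loaded into RAM up front. A toy sketch of that idea in Python/NumPy, with a made-up file layout that is not hypura's actual format:

```python
import numpy as np

# Hypothetical single-file checkpoint: each layer's weight matrix stored
# contiguously as float16. np.memmap maps the file into virtual memory, so
# only the pages a computation actually touches are read from NVMe, and
# resident RAM stays far below the total model size.
N_LAYERS, D_MODEL = 4, 1024                   # toy sizes; a 1T-parameter model just has more
LAYER_ELEMS = D_MODEL * D_MODEL

def write_toy_checkpoint(path: str) -> None:
    weights = (np.random.rand(N_LAYERS * LAYER_ELEMS) / D_MODEL).astype(np.float16)
    weights.tofile(path)

def stream_forward(path: str, x: np.ndarray) -> np.ndarray:
    """Run x through all layers, mapping each weight matrix lazily from disk."""
    mm = np.memmap(path, dtype=np.float16, mode="r")
    for i in range(N_LAYERS):
        w = mm[i * LAYER_ELEMS:(i + 1) * LAYER_ELEMS].reshape(D_MODEL, D_MODEL)
        x = np.maximum(x @ w, 0)              # the paging happens inside this matmul
    return x

write_toy_checkpoint("toy.bin")
print(stream_forward("toy.bin", np.ones(D_MODEL, dtype=np.float16)).shape)  # (1024,)
```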

Code smells are hints that something in the code is not clean - that something smells bad. It is not about syntax errors or bugs, but about structures that slow you down in the long run. The code may work today, but it becomes harder to understand, to test, and to extend.

https://magicmarcy.de/code-smells-was-riecht-denn-hier-so-streng

#CodeSmells #Methoden #Logik #Parameter #MagicStrings #MagicNumbers #Programming #Awareness

Code Smells - was riecht denn hier so streng? | magicmarcy.de

Code smells are hints that something in the code is not clean - that something smells bad. It is not about syntax errors or bugs, but about structures that slow you down in the long run. The code may work today, but it becomes harder to understand, to test, and to extend. Especially in Java projects, such spots pile up quickly if you do not consciously notice them.

magicmarcy.de
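As a tiny illustration of the #MagicNumbers / #MagicStrings smells named in the hashtags (my own generic sketch in Python; the linked article itself is about Java projects):

```python
# Smelly: the literals 0.19 and "DE" carry meaning, but nothing says what.
def gross_price_smelly(net: float, country: str) -> float:
    if country == "DE":
        return net * (1 + 0.19)
    return net

# Refactored: the same values as named constants, so the intent is readable
# and each value lives in exactly one place.
GERMAN_VAT_RATE = 0.19
COUNTRY_GERMANY = "DE"

def gross_price(net: float, country: str) -> float:
    if country == COUNTRY_GERMANY:
        return net * (1 + GERMAN_VAT_RATE)
    return net

assert gross_price(100.0, "DE") == gross_price_smelly(100.0, "DE")
```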

LLMs contain a LOT of parameters. But what’s a parameter? – MIT Technology Review

Artificial intelligence

LLMs contain a LOT of parameters. But what’s a parameter?

They’re the mysterious numbers that make your favorite AI models tick. What are they and what do they do?

By Will Douglas Heaven

January 7, 2026

Photo Illustration by Sarah Rogers/MITTR | Photos Getty

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

I am writing this because one of my editors woke up in the middle of the night and scribbled on a bedside notepad: “What is a parameter?” Unlike a lot of thoughts that hit at 4 a.m., it’s a really good question—one that goes right to the heart of how large language models work. And I’m not just saying that because he’s my boss. (Hi, Boss!)

A large language model’s parameters are often said to be the dials and levers that control how it behaves. Think of a planet-size pinball machine that sends its balls pinging from one end to the other via billions of paddles and bumpers set just so. Tweak those settings and the balls will behave in a different way.  

OpenAI’s GPT-3, released in 2020, had 175 billion parameters. Google DeepMind’s latest LLM, Gemini 3, may have at least a trillion—some think it’s probably more like 7 trillion—but the company isn’t saying. (With competition now fierce, AI firms no longer share information about how their models are built.)

But the basics of what parameters are and how they make LLMs do the remarkable things that they do are the same across different models. Ever wondered what makes an LLM really tick—what’s behind the colorful pinball-machine metaphors? Let’s dive in.  

What is a parameter?

Think back to middle school algebra, like 2a + b. Those letters are parameters: Assign them values and you get a result. In math or coding, parameters are used to set limits or determine output. The parameters inside LLMs work in a similar way, just on a mind-boggling scale. 
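To make the algebra analogy concrete, here is a minimal sketch of my own (not from the article): a "model" with just two parameters, a and b, where nudging the parameters to shrink the error on example data is, in spirit, what training does at a vastly larger scale.

```python
# A two-parameter "model": y = a*x + b. LLMs work the same way in spirit,
# just with billions to trillions of such numbers instead of two.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # inputs x with desired outputs y (here y = 2x + 1)

a, b = 0.0, 0.0                               # the parameters, starting at arbitrary values
learning_rate = 0.05

for _ in range(2000):
    for x, y in data:
        error = (a * x + b) - y               # how far the current parameters miss
        a -= learning_rate * error * x        # nudge each parameter to reduce that miss
        b -= learning_rate * error

print(f"a = {a:.2f}, b = {b:.2f}")            # ends up near a = 2, b = 1
```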

Editor’s Note: Read the rest of the story at the link below.

 

Continue/Read Original Article Here: LLMs contain a LOT of parameters. But what’s a parameter? | MIT Technology Review

#Billions #DeepMind #Gemini3 #Google #LargeLanguageModels #LLMs #LotsOfParameters #MITTechnologyReview #Parameter #Trillions

Everybody who wants to understand Generative AI should read articles like this.

https://www.technologyreview.com/2026/01/07/1130795/what-even-is-a-parameter

#AI #parameter

LLMs contain a LOT of parameters. But what’s a parameter?

They’re the mysterious numbers that make your favorite AI models tick. What are they and what do they do?

MIT Technology Review

https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech

This is an actually good article about #AI #energy use and its effect on #carbon and #ClimateChange.

Here are some key things I would add.

All of these #corporations like #OpenAI #Microsoft and #Google are relying on an #LLM being accessed from the #cloud (the #internet).

#NVIDIA is counting on ever-bigger models.

Do I think this is the future of #generativeai?

No.

I think _that_ is a big #Bubble. I think every "Size Up" on an AI model gives you an extra 20% in quality.

So, running a LLAMA 8b is only 20% better than running a LLAMA 3b.

Right?

So what?

Well, 99% of the use cases people have don't require 1-trillion #parameter models.

They require models that increasingly can be run locally.

What do I think is coming? It isn't 30-gigawatt data centers; it's laptops running on less energy than current ones, with a terabyte of RAM.

No one wants the #internet that the #MegaCorporations have created. People want their own shit. They want to own their movies.

The #cloud is dead. The companies built on the cloud are dead.

They just don't know it yet.

@mittechreview

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.

MIT Technology Review

Just had an interesting (but a bit unsettling) conversation with #AI about what it thought its future abilities would be. It’s a long response but worth the read.

My question: What new #emergent #abilities will occur when #parameter #numbers can be increased by orders of #magnitude?

AI’s response: This is a fascinating and actively researched question in AI! As language models like GPT grow in parameter count by orders of magnitude (e.g., from billions to tens or hundreds of trillions),

1/8