The Servitor

@TheServitor@sigmoid.social
39 Followers
135 Following
349 Posts

Official Account of The Servitor - Artificially Unintelligent Since 2023!

AI: bad, good, useful, useless. Mind-blowing or inane. We've expressed all of these views at different points over the last couple of years.

The Servitor is an artifact of AI at a meta-level: A mixed-up mess of a personal website/publication with no consistent mission or purpose other than talking about AI.

Account staffed by @HumanServitor

Avatar: Doodle of robot cubicle-dweller.

The Servitor: https://theservitor.com
Disposition: Lazy and discontent
What is the difference between a duck?

My dear companion Abi, who "thought #Python was just a snake" (😂) was coaxed into having GPT walk her through installing Python and doing hello world type stuff.

https://theservitor.com/vibe-coding-complete-newbie-journey/

#AI


Sam Altman is very smart but he's no scientist. Current AI are not going to drive scientific discoveries anytime soon, although they will be useful to accelerate some parts of them: https://www.bigdatawire.com/2025/06/03/ai-agents-to-drive-scientific-discovery-within-a-year-altman-predicts/
#ai #artificialintelligence
AI Agents To Drive Scientific Discovery Within a Year, Altman Predicts

At the current pace of AI development, AI agents will be able to drive scientific discovery and solve tough technical and engineering problems within a

BigDATAwire

To write is to think. Using ChatGPT to write leads to..."cognitive debt", which might be one of the better euphemisms for somewhat less polite words.

Small n, not yet peer-reviewed, etc.: https://arxiv.org/abs/2506.08872

#ai

Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

arXiv.org
It appears that the defense industry is excited about how much easier AI will make it for us to kill each other.
Thus, it might be a good time to work on my doomsday prepping skills.
#AI #defenseindustry
https://newrepublic.com/article/196763/ai-industry-trump-defense-department
The AI Industry Is Ready to Get Rich off Trump’s Defense Department

This year’s annual AI Expo for National Competitiveness was a hidden race to turn the military into “Ender’s Game.”

The New Republic
It's an interesting call to ask the federal government not to contract with businesses that replace workers with AI.
Unfortunately, Amazon and Oracle were major sponsors of Trump's birthday parade, so ...
#AI #employment #government
https://thehill.com/opinion/technology/5347702-ais-threat-to-american-jobs/

if you are under 30 years old and picking a font size for something other people need to read, not just yourself: that’s too small. still too small. A LITTLE BIGGER

#accessibility #ux #typography

Moment of Gratitude: Cloudflare

Cloudflare saved the Internet Archive servers from a DDoS attack yesterday.

The max rate of this DDoS attack was 525 Gbps (44.93 Mpps) of a "TCP flood."

The Internet Archive does not have enough bandwidth to fend off that kind of attack.

Thank you #cloudflare or we would have had a very bad Saturday at the @internetarchive

DDoS attacks are becoming more frequent.

Undercover as a labor migrant: "Be careful, we don't want any deaths today"

Why are the abuses surrounding labor migration so persistent? Criminologist Ruben Timmerman spent a year undercover among Eastern Europeans in construction, food processing, and logistics. "This isn't down to a few rotten apples, but to a rotten system."

de Volkskrant

Thinking about agentic #AI one thing that comes up is loyalty. These things are worse than search histories, worse than diaries, they encode our whole thought processes.

Some sort of enforced loyalty, legal protections, or other mechanism will be a precursor for REALLY useful AI agents.

https://theservitor.com/ai-loyalty-who-are-these-things-working-for-anyway/

#tech #blog #surveillance #DataPrivacy


One of the biggest revelations has been the limitations of Large Language Models (LLMs) in time series forecasting.

Despite initial excitement, multiple LLMs failed to deliver, often struggling to outperform even basic naive forecasting methods.

A key research paper has now confirmed what many in the #TimeSeries community suspected: LLMs fundamentally fall short for forecasting tasks.

Is this surprising?

For those with significant experience in time series, the answer is a clear no.

The challenges of time-dependent data demand specialized methods that LLMs, by design, aren't equipped to handle effectively.
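For a sense of the bar being set here: the "naive" baseline that LLMs often fail to beat is about as simple as forecasting gets, just repeat the last observed value. A minimal sketch (my own illustration, assuming NumPy; not code from any of the papers):

```python
import numpy as np

def naive_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    """The 'naive' baseline: repeat the last observed value for every step."""
    return np.full(horizon, history[-1])

# Toy example: a noisy upward trend, split into history and hold-out.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.1, 1.0, size=120))
history, actual = series[:100], series[100:]

forecast = naive_forecast(history, horizon=20)
mae = np.mean(np.abs(forecast - actual))
print(f"naive MAE: {mae:.3f}")
```

Any model that can't clearly beat this one-liner on a given series isn't adding value there.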

Back in 2022, the paper "Are Transformers Effective for Time Series Forecasting?" challenged the emerging narrative that transformers "work" in forecasting.

By removing transformer elements, the authors showed that performance went up ⬆️

And now people have done the same with time series LLMs.

The papers demonstrated:

- LLMs do no better than models trained from scratch

- removing the LLM component, or replacing it with a basic attention layer, does not degrade the forecasting results; in most cases the results even improved!

- in fact, removing the language model entirely yields comparable or better performance!

- LLMs do not assist in few-shot settings

- these simpler methods, with the LLM component removed, reduce training and inference time by up to three orders of magnitude while maintaining comparable performance!

- the sequence modeling capabilities of LLMs do not transfer to time series.

By shuffling the input time series, the authors find no appreciable change in performance.

This says that LLMs can't deal with a critical feature of time series: time order is key, and if an LLM's performance doesn't change when the data is shuffled, it basically isn't modeling the time series at all.
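The shuffle diagnostic itself is easy to sketch. Here's my own illustration (not the papers' code), with a hypothetical `mean_forecast` standing in for a model: if the error on shuffled input matches the error on ordered input, the model isn't exploiting temporal order.

```python
import numpy as np

def mean_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    """Hypothetical order-insensitive 'model': forecast the window mean."""
    return np.full(horizon, history.mean())

def shuffle_ablation(model, series: np.ndarray, horizon: int, seed: int = 0):
    """Compare forecast error on the original input window vs. the same
    window with its time order shuffled. Matching errors mean the model
    is not using temporal order."""
    rng = np.random.default_rng(seed)
    history, actual = series[:-horizon], series[-horizon:]
    err = np.mean(np.abs(model(history, horizon) - actual))
    err_shuffled = np.mean(np.abs(model(rng.permutation(history), horizon) - actual))
    return err, err_shuffled

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0.1, 1.0, size=120))
err, err_shuffled = shuffle_ablation(mean_forecast, series, horizon=20)
print(err, err_shuffled)  # match (up to rounding): the mean ignores order
```

A genuinely temporal model should degrade badly under this test; the papers' point is that the LLM-based forecasters didn't.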

- no evidence that LLMs can successfully transfer sequence modeling abilities from text to time series, and no indication that they help in few-shot settings.

LLMs fail to convincingly improve time series forecasting. However, they significantly increase computational costs in both training and inference.

The claim that "LLMs work on text sequences, hence they can work on time series" has been comprehensively debunked.

I have been saying for a long time that time series are nothing like text; time series are their own universe: planet 🌎 time series.

These findings are as damning to time series LLMs as "Are Transformers Effective for Time Series Forecasting?" was to transformers. It is hard to see how time series LLMs will be able to deal with such hard evidence showing they don't work.

#timeseries
#forecasting