David Frank

@bitinn@mastodon.gamedev.place
Trying my best to talk more about gamedev | currently Tech Artist and R&D at an AAA publisher.
比特客栈: https://bitinn.net
Marisa Club: https://marisa.club
Posted this elsewhere, but I feel it's a good message. #gamedev #indiegamedev

It is kinda interesting to see the Mastodon app ecosystem split into two groups:

a. those that follow hashtags (hence putting all posts from those hashtags directly into your main timeline);

b. those that pin hashtags (hence letting you switch between hashtags without polluting your main timeline);

The official implementation uses type A, so some apps take advantage of that API; but I never got used to this style of following content. I am very much in the type B category.
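
For reference, the two styles map onto different endpoints in Mastodon's REST API. A minimal sketch in Python (the instance URL and access token are placeholders):

```python
import requests

INSTANCE = "https://mastodon.example"  # placeholder instance
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}  # placeholder token

# Type A: follow a hashtag so its posts land in your home timeline
# (Mastodon 4.0+, POST /api/v1/tags/:id/follow).
requests.post(f"{INSTANCE}/api/v1/tags/gamedev/follow", headers=HEADERS)

# Type B: fetch a hashtag timeline on demand, leaving the home feed alone
# (GET /api/v1/timelines/tag/:hashtag).
posts = requests.get(
    f"{INSTANCE}/api/v1/timelines/tag/gamedev", headers=HEADERS
).json()
for post in posts:
    print(post["account"]["acct"], post["url"])
```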

Turning on the iOS Screen Distance alert feature has made me realize the default iOS font sizes are not great for viewing at 12 inches (30 cm).

We keep trying to solve the “if I had more money I would hire a guy to do it” problem with AI.

I think we should solve the problem by “redistributing money to those in need”.

But that would be impossible under market capitalism.

They would sooner invent cryptocurrency and crowdfunding than give up accumulated wealth.

So here we are.

So to wrap things up:

- Context engineering is where you have to manage an over-complicated series of LLM systems, because they are nowhere near a human agent and simple prompts are no longer sufficient as inputs (see the sketch after this list).

- Corporations value context engineering the same way they value other software engineering tasks: complex jobs to be done, and how to do them more efficiently (fast and cheap).

- Expert engineers are still crucial to command agentic LLMs; the barrier to entry isn’t any lower.
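
What that management looks like in miniature, as a sketch: every name, the section order, and the budget policy here are hypothetical, but the shape is the point; the “prompt” stops being a single string and becomes a curated, budgeted bundle of sources.

```python
# A sketch of context assembly, not any real framework. Someone has to
# curate, order, and truncate everything the model sees.

def build_context(task: str, docs: list[str], history: list[str],
                  tool_results: list[str], budget_chars: int = 12_000) -> str:
    """Pack sources into a fixed-size window, most important first."""
    sections = [
        ("system", "You are a careful assistant. Follow the spec exactly."),
        ("task", task),
        ("retrieved_docs", "\n---\n".join(docs)),
        ("tool_results", "\n".join(tool_results)),
        ("history", "\n".join(history[-5:])),  # keep only recent turns
    ]
    out: list[str] = []
    used = 0
    for name, body in sections:
        remaining = budget_chars - used
        if remaining <= 0:
            break  # budget exhausted; later sections are dropped entirely
        chunk = f"## {name}\n{body}\n\n"[:remaining]  # crude truncation
        out.append(chunk)
        used += len(chunk)
    return "".join(out)
```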

fin.

To ask the question more succinctly:

- Why aren’t billionaires using AI every day for everything, if they are already so powerful?

- Because they prefer to manage humans; even though AI promises to be trustworthy and contractually bound, it is no human.

Sure, they might use their AI products once or twice, and they probably see demos from various direct reports every week.

But they are not experts, and it is not their job.

An LLM is just that: another tool for doing your job, nothing more.

So once again in our software industry’s history, we are reducing the quality of our software for the sake of development speed and lower hiring costs.

It is a recurring theme.

It is the nature of our capitalist innovation.

LLMs are the Dreamweaver and Squarespace.

LLMs are the React Native and Electron.

It is all these promises of “everyone should code / automate” combined.

And then realizing that good engineers are still good engineers, and the rest are just fodder for the billionaires.

PS: I don’t see the appeal of providing context to LLMs in natural language; e.g. the Copilot experiments Microsoft ran on their own repos often read as cringe dialogues of a human begging the machine to do the right thing repeatedly, then giving up.

If we are assuming expert knowledge of the subject, give us better tools to manage these wild LLMs.
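
One hedged guess at what “better tools” could mean: context declared as structured, validatable data rather than prose. Everything here (the class, its fields, the checks) is hypothetical, not an existing tool.

```python
# Hypothetical sketch: a context spec a tool can validate up front, instead
# of prose a human has to re-negotiate with the model turn after turn.
from dataclasses import dataclass, field

@dataclass
class ContextSpec:
    goal: str                                         # one-line task statement
    files: list[str] = field(default_factory=list)    # exact paths, not prose
    constraints: list[str] = field(default_factory=list)
    forbidden: list[str] = field(default_factory=list)

    def validate(self) -> None:
        # A spec without a goal is just a prompt with extra steps.
        assert self.goal, "goal is required"

spec = ContextSpec(
    goal="Fix the null reference in the save loader",
    files=["src/save/loader.cs"],
    constraints=["no public API changes"],
    forbidden=["changing the serialization format"],
)
spec.validate()
```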

And I see why the industry is going agentic: to replace humans, specifically interns and juniors.

But an LLM is nowhere near the growth trajectory of a junior; it’s just cheaper.

Simon Willison (@simon@simonwillison.net)

I think "context engineering" is going to stick - unlike "prompt engineering" it has an inferred definition that's much closer to the intended meaning, which is to carefully and skillfully construct the right context to get great results from LLMs https://simonwillison.net/2025/Jun/27/context-engineering/


As someone who has to retrieve/fill in context among team members, I believe it is a skill, but it is a skill we should use with humans, not bots.

Automated bots (aka agentic LLMs) should collect their own context, or we are just glorifying the process of feeding machines supervised inputs.
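
To make the contrast concrete, a sketch under assumptions: `llm` stands in for any chat-completion call and the tools are placeholder callables; this is not a real agent framework.

```python
# Supervised context-feeding vs. an agent gathering its own context.
# `llm` and `tools` are hypothetical stand-ins.

def supervised_call(llm, pasted_context: str, task: str) -> str:
    """The human gathered the context; the model only consumes it."""
    return llm(f"{pasted_context}\n\nTask: {task}")

def agentic_call(llm, tools: dict, task: str, max_steps: int = 5) -> str:
    """The model collects its own context by choosing which tools to run."""
    gathered: list[str] = []
    for _ in range(max_steps):
        choice = llm(
            f"Task: {task}\nContext so far:\n{''.join(gathered)}\n"
            f"Name one tool to run from {sorted(tools)} or say DONE."
        ).strip()
        if choice == "DONE":
            break
        if choice in tools:
            gathered.append(tools[choice]() + "\n")  # agent-gathered context
    return llm(f"{''.join(gathered)}\nTask: {task}\nAnswer:")
```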

What this implies is clear:

1. The user must know the task better than the machine.

2. Doing so in natural language is actually a burden on users.

“Context engineering” is a reflection of LLM limits.