Ian Channing 🦈

@ianchanning
118 Followers
45 Following
618 Posts

JavaScript dev mostly, but pretty much everything. Worked for small stats, engineering, and now AI startups.

ex-Brit, now in Belgium.

Not a fan of hierarchies. Dad dancer.

Website: https://ianchanning.com
Blog: https://ianchanning.wordpress.com
Stack Overflow: https://stackoverflow.com/users/327074/icc97
GitHub: https://github.com/ianchanning

In the field of cybersecurity, a distinction is made between the "blue team" task of building a secure system, and the "red team" task of locating vulnerabilities in such systems. The blue team is more obviously necessary to create the desired product; but the red team is just as essential, given the damage that can result from deploying insecure systems.

These teams mirror each other; mathematicians would call them "dual". The output of a blue team is only as strong as its weakest link: a security system that consists of a strong component and a weak component (e.g., a house with a securely locked door, but an open window) will be insecure (and in fact worse, because the strong component may convey a false sense of security). Dually, the contributions to a red team can often be additive: a red team report that contains both a serious vulnerability and a more trivial one is more useful than a report that only contains the serious issue, as it is valuable to have the blue team address both vulnerabilities. (But excessive low-quality reports can dilute attention from critical issues.)

Because of this, unreliable contributors may be more useful in the "red team" side of a project than the "blue team" side, though the blue team can still accommodate such contributors provided that the red team is competent enough to catch almost all of the errors that the contributor to the blue team might make. Also, unreliable red team contributions only add value if they _augment_ the output of more reliable members of that team, rather than _replace_ that output, and if their output can be effectively filtered or triaged by more experienced red team members. (1/3)
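The weakest-link vs. additive contrast above can be put in toy arithmetic (my illustration, not from the quoted thread): blue-team strength composes like a minimum over components, while red-team findings compose roughly like a sum.

```javascript
// Toy model of the duality described above (illustrative only).
// Blue team: a system is only as strong as its weakest component.
function blueStrength(componentStrengths) {
  return Math.min(...componentStrengths);
}

// Red team: each reported vulnerability adds value on its own.
function redValue(findingValues) {
  return findingValues.reduce((sum, v) => sum + v, 0);
}

// A securely locked door (9) plus an open window (1):
blueStrength([9, 1]); // the open window dominates -> 1
redValue([9, 1]);     // both findings are worth reporting -> 10
```

The asymmetry is exactly why an unreliable contributor hurts the minimum far more than they hurt the sum.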

I think "context engineering" is going to stick - unlike "prompt engineering" it has an inferred definition that's much closer to the intended meaning, which is to carefully and skillfully construct the right context to get great results from LLMs https://simonwillison.net/2025/Jun/27/context-engineering/
Context engineering — "The term context engineering has recently started to gain traction as a better alternative to prompt engineering. I like it. I think this one may have sticking power. Here's an …" (Simon Willison's Weblog)
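In code terms (a hedged sketch of my own; the function and field names are illustrative, not from Simon's post), context engineering is less about one clever prompt string and more about deliberately assembling everything the model sees:

```javascript
// A minimal sketch of "context engineering": carefully assembling the
// pieces an LLM sees into one ordered context. All names are illustrative.
function buildContext({ systemPrompt, retrievedDocs, history, question }) {
  const docs = retrievedDocs
    .map((doc, i) => `[doc ${i + 1}] ${doc}`)
    .join("\n");
  return [
    systemPrompt,
    "Relevant documents:\n" + docs,
    ...history.map((turn) => `${turn.role}: ${turn.text}`),
    `user: ${question}`,
  ].join("\n\n");
}
```

The craft is in what goes into `retrievedDocs` and `history`, in what order, and what gets left out — not in wordsmithing the question.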

TDD Pro-Tip: If you don't want to run all of your microtests all of the time, there is something wrong with the tests or, more usually, the design of your code.

The steerability principle says "Tests & testability are first-class participants in design."

This principle is almost entirely ignored in the trade, even in test-centric shops.

Can't share the thread because OpenAI no likey:

10. “Sheila”

- Pros: Aussie energy. Feels like an AI that’ll get things done but also isn’t afraid to call you a muppet.

- Cons: Might make every response sound like it belongs in a pub argument.

9. “Brenda”

- Pros: Brenda isn’t here for your nonsense. Brenda has seen it all. Brenda will tell you the truth whether you like it or not.

- Cons: Might judge you for your life choices.

8. “Pam”

- Pros: Has “warm but slightly intimidating” energy. Your AI, but also the unofficial ruler of the office.

- Cons: You’ll feel like you have to apologize if you ignore her suggestions.

7. “Sue”

- Pros: One syllable, straight to the point, sounds like an AI that knows how to get things done.

- Cons: Might constantly remind you to “just be careful” before making major decisions.

6. “Debbie”

- Pros: Feels like an AI that’d offer life advice but in a “listen, love” way instead of a TED Talk.

- Cons: Could accidentally book you in for an aerobics class from the 80s.