as a toolmaker, there's an inherent tradeoff I encountered years ago when I had just started working at ChipFlow: what I was asked to do was essentially to develop Amaranth further as a way to de-skill the hardware design (RTL) field. I agreed because I don't really value the skill of knowing every one of the five hundred different ways in which SystemVerilog is out to fuck you over; I think we'd be better off with tooling that doesn't require you to spend years developing this skill, and that would be a lot friendlier to new RTL developers and to people for whom RTL isn't their primary area of work.

I also knew that ChipFlow was on the lookout for opportunities to shoehorn AI somewhere into the process. (at first this was limited to "test case generation", a frankly ill-conceived idea but one I could hold my nose at and accept; nowadays they've laid off everyone and gone all-Claude.) however, it was clear pretty early on that making hardware development more accessible to new people inherently means making it more accessible to new wielders of the wrong machine. benefiting everyone (who isn't a committed SystemVerilog developer) means benefiting everyone, right?

you can trace this trend in adjacent communities as well. Rust and TypeScript have rich type systems that generally help you write correct code, or bullshit your way towards something that looks more or less correct. I'm pretty sure that's part of the reason Microsoft spent so much money on TypeScript.

so today I find myself between a rock and a hard place: every incremental improvement I build into the tooling that makes the field more accessible to new people also lowers the barrier for people who just want to extract value from it, squeezing it like Juicero (quite poorly, but with an aggressively insulting amount of money behind it). so what do I do now?

(as a matter of fact, if you ask an LLM to write you some Amaranth, it usually just stops doing that after a few dozen lines and switches to SystemVerilog, so I don't think I have personally contributed to this; the above is more me questioning the structural factors we live with today)

@whitequark I've been staring down the same conundrum for a long time. I think the only answer is that we have to build solidarity, rebellion, and moral progress in the social, interpersonal domain rather than hoping technical improvements will be emancipatory forces

it's one reason I've been focusing more on education - a lot of the reason right-wing propaganda works is simply because people are ignorant of patterns that play out over history, and they take motivated liars at their word

@migratory to me, going out of my way to spend years working on making a (technological, in this case) field more accessible is very much in the interpersonal domain: it is motivated by social factors (empathy towards users of technology that causes misery and alienation) and aimed at changing social factors (participation in a field)
@whitequark Who knew that leaky abstractions are actually a useful measure against deskilling.
@riley sort of? baroque processes are reactionary in nature: they help the incumbent keep its position. if you like the incumbent this is useful. if you don't like the incumbent, like @xgranade didn't like the AI-fication of Calibre, then you get to spend months of your life fixing the plumbing that would otherwise wash out the foundations.

@whitequark

I used to do time at Google. Passed the interviews, and, in between engineering, got the training to administer them, and interviewed a bunch of new applicants before I left.

A running theme in their interviewing criteria, at least back then (it's been a while), was that they looked for an applicant's ability to shift between levels of abstraction.

In a recruitment context, this tends to be conceptualised as a matter of skill and knowledge, but it's actually also a matter of design, to a significant degree. When more effort is put into plugging abstraction leakage, fewer people have practical "everyday" reasons for moving across those tightly plugged boundaries, or get the experience of doing it, and, well, both de-skilling and baroquisation can set in as a result.

Maybe putting effort into well-designed abstraction leakages, rather than trying to abolish them, would be a useful and pro-social subthread in the work against enshittification. I'm also going to argue that literate programming is a useful tool for managing and understanding (some kinds of) well-designed abstraction leakages.


@whitequark Another perspective on this sort of thing is how many basic MUD and IF languages offer tools for linking up rooms to their immediate neighbours, but more realistic world-building would often also require offering some faraway view of the next area over; say, the forest that you can see over the river, or a valley. Many early data models of these kinds of languages really weren't very good at making such set-ups convenient. Even though the pattern being modelled is common in real life, it does not come up often enough in Thinking About Thinking kinds of discussions (and object-oriented programming classes), and so people designing (meta-)systems often tend to ignore it.


@riley @xgranade I think designing around a high-skill-specialization expectation has historically been harmful in this industry; consider how the expectation of needing to know C (a language notoriously lacking in guardrails and good tooling) to do systems programming has both directly contributed to the pervasive gatekeeping and also created a barrier to entry for people not willing to dedicate their life to navigating the social and technical aspects of it. it's pretty difficult for me to see how this could be turned around to be prosocial

@whitequark I'm thinking something like "An abstraction shall not leak without a good reason to", and considering it an important principle of good design that it is the end engineer (or end user) who gets to ultimately override what reasons are good enough, should upstream reasonings turn out to be problematic. Things like "Thou shalt not hamper logic probe access" would then inherently follow.


@whitequark Or I might be misunderstanding your argument. Would you like to elaborate on it?


@riley @xgranade I... don't think that's how things work? all abstractions leak: they take a wide set of possibilities and narrow it down to make it easier to reason about the things you care about, at the cost of making your life harder if you hit one of the things you've decided to leave aside.

@whitequark Yes, all abstractions leak.

But sometimes, people like to pretend, and/or make laws about pretending, that some don't, or mustn't, or "it's impossible to cross this abstraction boundary, so anybody who does it must be harshly punished" kind of thing. Likewise, some design cultures[1] like to build elaborate wrappers for hiding abstraction leakages, because of the simplistic notion that such leaks are bad design.

[1] Particularly the "enterprise software" school of thought, in what I've seen. But the idea can also be seen outside big corporate environments.


@riley @xgranade I think we're talking past each other; whatever culture you've encountered at Google sounds borderline traumatizing, but I avoided it by ghosting the recruiter, since the culture at the on-site interview location kinda creeped me out; so I don't have your context

@whitequark You might be overgeneralising; I only did Google for a year, and was a sort of infrastructure-consultant-for-statistics-support before that. We had numerous Big Data clients (in a time when twenty PCs was a Big Cluster), and, well, data warehouse systems tend to be places that need to talk to a lot of operative software (and make sense of the data that comes from there, but we had other people who tended to specialise in the data-shape-and-quality kind of problems).


@riley @xgranade that's still the kind of environment I've had basically no exposure to! I do embedded and programming language design almost exclusively

@whitequark Right. So, perhaps, different contexts.


@whitequark Wasn't the Juicero notable for the bags being pre-squeezed and the whole thing was a $600 show?

@whitequark

This is very close to where I parted ways with the FSF. There's always a tension between enabling people to create the desirable thing and enabling people to make the undesirable. Their view is that it should be very hard to make the undesirable thing, and slightly easier to make the desirable thing. My view is that you should make it so easy to make the desirable thing that people always have a choice and then, once the desirable thing exists, you can apply other pressures to get rid of the undesirable thing.

I don't think deskilling is the right framing for a lot of these things; it's about where you focus cognitive load. There's a line from the Stantec ZEBRA's manual (1956) that says that the 150-instruction limit is not a real problem because no one could possibly write a working program that complex. Small children write programs more complex than that now. That's not a loss to the world: the fact that you don't have to think about certain things means you can think about other things, such as good algorithm and data structure design.

There was research 20ish years ago comparing C and Java programs, which found that the Java programs tended to be more efficient for the same amount of developer effort: Java programmers would spend more time refining data-structure and algorithmic choices and improving entire complexity classes, whereas C programmers spent the time tracking down annoying bug classes that are impossible in Java and doing micro-optimisations. Of course, under time pressure, Java developers will simply ship the first thing that works and move onto new features rather than doing that optimisation. C programmers would take longer to get to the MVP level, and their poorly optimised code was often faster than poorly optimised Java.

I see LLMs as very different because they don't provide consistent abstractions. A programmer in a high-level language has a set of well-defined constraints on how their language is lowered to the target hardware and can reason about things, while allowing their run-time environment to make choices within those constraints. Vibe coding does not do this, it delegates thinking to a machine, which then generates code that is not working within a well-defined specification. This really is deskilling because it's not giving you a more abstract reasoning framework, it's removing your ability to reason.

Letting people accomplish more with less effort, in an environment where their requirements are finite, ends up shifting power to individuals, because it reduces the value of economies of scale.

@david_chisnall this is an interesting view, I'll have to think about it.
@[email protected]

To be honest, I think you are misrepresenting the #FSF's ethical position on the matter, which is perfectly aligned with your own: hence the freedom to use the software for any purpose, which is a strong requirement of any #FreeSoftware license.

@[email protected]
@giacomo @david_chisnall I think you'll find that using search to insert yourself uninvited into conversations with people you don't know is a poor way to promote your cause, whatever that is.

@giacomo @whitequark

I think you're misunderstanding my point. The FSF decides to promote the creation of Free Software (a goal I agree with) by creating complex licenses.

Developing software reusing software under any license requires understanding the license. The FSF's licenses are sufficiently complex that I have had multiple conversations with lawyers (including some with the FSF's lawyers) where they have not been able to tell me whether a specific use case is permitted. This places a burden on anyone developing Free Software using FSF-approved licenses, because there are a bunch of use cases that the FSF would regard as ethical, but where their licenses do not clearly permit the use.

It places a larger burden on people doing things that the FSF disapproves of. They have to come up with exciting loopholes. Unfortunately, it turns out that this isn't that hard and once you've found a loophole you can keep using it. The FSF responds with even more complex licenses.

EDIT: To be clear, the FSF and I have very similar goals. I just think that their strategy is completely counterproductive. Complex legal documents empower people who can afford expensive lawyers. We're increasingly seeing companies using AGPLv3 to control nominally-Free Software ecosystems.

@david_chisnall @whitequark "[...] the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise." - Edsger W. Dijkstra, 1972, "The Humble Programmer" https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340.html

@david_chisnall @whitequark i think to me this is the key part. a (digital) hdl provides a model that you can think about bits and gates in. a better hdl provides you with the better model.

llms on the other hand promise to let you avoid all the thinking. if i may philosophize a bit - llms promise to let you avoid expressing yourself in terms of mental labor. maybe some like this, but i know i don't. [cont'd]

@david_chisnall @whitequark one analogy i think is that it would be preposterous to think that learning and using modern algebra and category theory is deskilling to a mathematician.

maybe it's not a perfect match, but i don't think writing in a higher level language is a "deskilled" way to draw gates or write instruction bits. it produces a different *kind* of product, that is itself amenable to various different processes, some manual and some automatic. it's not just "bits with extra steps".

@dramforever @david_chisnall but Amaranth and SystemVerilog are more-or-less the same kind of product (Amaranth is slightly lower-level but that's not important here). it's just that using Amaranth takes, trivially, less skill than using SystemVerilog, for the same quality of result

@whitequark @david_chisnall i think i get what you mean. put to the extreme, the situation is: by making $thing and publishing it out for the public, you allow those without the skill to make $thing to still have a $thing.

i ... honestly i don't know. i don't have an answer. if anything, the past month i have been devastated by vibe code that satisfies "functioning" and nothing else like "understandable" (by *anyone*) or collaboration (instead it has thousands of loc to paper over problems)

@dramforever @david_chisnall well, sort of? it's more like this: to make $thing you need skill X plus skill Y. by making Amaranth I made it so that you no longer need skill Y. on the face of it I think this can be called "deskilling", the fact that I think skill Y is unnecessary not being relevant to the classification

@whitequark @david_chisnall i don't have a strong argument for this, but i still think there is a big and maybe fundamental difference between the audience that wants to do things, and the audience that wants to avoid doing things.

i guess i need to think about it more as well...

@whitequark @dramforever

There’s another aspect to this (which is why I get a bit obsessed by end-user programming as a concept): to build a thing today you need skills X, Y, and Z, but no one has the time to learn all three, so if Y can be factored out you can teach X to people who know Z and enable entirely new things that they actually care about.

@david_chisnall @dramforever Amaranth enabled a chemical engineer to build an electron microscope DAQ from scratch in 6mo.
@whitequark @david_chisnall @dramforever I am interested. May we hear more about this build?
GitHub - nanographs/Open-Beam-Interface


@whitequark @dramforever

Thinking more, I think the issue here is regarding 'skill' as a one-dimensional axis.

To do hardware design well, you need to think about critical-path lengths for combinatorial logic, you need to think about interactions between state machines, how and when to pipeline things, what software interfaces to expose, and so on. All of those are skills.

To do this in SystemVerilog, you also need to remember the baroque consensus semantics of a batshit insane (technical term) language. That's a completely unrelated skill, much closer to playing a complex board game with weird rules than to hardware design.

Amaranth deskills hardware design in the same way that tricycles deskill cycling. Riding a bicycle requires a bunch of things that riding a tricycle requires, plus good balance (in a very specific context that isn't very closely related to balancing in any other setting: people who can stand on one leg for a long time can't automatically balance on a bicycle, for example). The tricycle removes this as a requirement, but is it a key skill for being able to get around a radius of a few miles / tens of miles under your own power at a reasonable speed? I would argue that it's a largely orthogonal skill that, if you have it, lets you do the same thing with a slightly lighter machine. I certainly wouldn't consider someone who rides a tricycle to be less skilled in any meaningful sense. Also, tricycles are cool.

@david_chisnall @whitequark @dramforever
This is similar to the "quality of life vs making things casual" discussion in gaming.
In Counter-Strike precisely throwing grenades across the map is a significant part of the game.
In CSGO this was a bit random; for the trickiest throws you needed frame-perfect inputs as well as luck.

When CS2 released, Valve implemented a mechanic, wherein if your inputs are "close enough", the game just pretends that you did it perfectly, eliminating randomness.

Some still argue that this was deskilling, but the reality is that the actual skill has always been in finding possible lineups and recalling the ones strategically suited for the moment in the tense action of the game, not in pressing two buttons with an exactly 42 millisecond window between them.

@david_chisnall @whitequark
"Deskilling" feels like a labour term that I'm not sufficiently versed on, but perhaps "does it help companies treat workers as replaceable?" is a good litmus test, in which case Java may count (AIUI that was an explicit goal, and you can see it in the lack of goto/operator overloading, and in practice via all the consultancies). I'm not sure how to weigh that against "do workers prefer the new thing?".

I haven't used Amaranth or SystemVerilog, but my (little) experience of HDLs is that most of the skill is in your head (and transferable to other HDLs), not in the characters you type. Making it easier to learn the former without the latter seems closer to accessibility than de-skilling.

Similarly, I wouldn't count BASIC as de-skilling even though it demonstrably made it easier to write programs (but I'm not sure how much it was used commercially; maybe some companies used QBasic specifically so they could treat workers as replaceable).

@david_chisnall @whitequark My favourite assignment in Uni was a project where everyone had to implement a server. Requests would either modify its state or query its current status. There was a leaderboard with performance results.
The punchline is that we had to implement it both in Java and C++. (Actually anything JVM/CLR and native w/o GC)
Made me value Java much higher, the implementation was easy and I couldn't get more than 20% better throughput in Rust despite investing more time into it
@whitequark "Democratization" is a friendly-sounding euphemism for "commoditization", often in most negative possible form.

@whitequark I feel like the rise of LLMs is mostly just an acceleration of the trend that already existed.

There's long been the tension within FLOSS software, between the fact that you're making technology more accessible, and you're helping out greedy capitalists who just want to squeeze every ounce of profit out.

This new trend of LLMs has absolutely accelerated that, and I can definitely see why it causes people to pause and wonder what it is that they're doing and whether they want to support that.

But I still think it's good to provide better, accessible tooling for new people.

Shitty people are going to squeeze every ounce of profit and control out of everything no matter what you do. But in the meantime, Amaranth does make it possible for people who want to actually engage with their own brains to learn about hardware design, and I think that's still valuable, whether or not there are awful people out there exploiting it for personal profit.

@unlambda It's a bit different if you are (or were, in my case) on their payroll!

@whitequark As a fellow toolmaker, I feel you!

I know Microsoft has created a couple of drivers using my tools. Let's say it's 3 drivers and it saved a junior San Fran engineer 1 day of work each. Then by my rough estimation they'd have saved about $1500.

In the meantime, I have seen none of that value in return.

Idk what to think of it. I made the tool for people like me and I want them to have it for free. But yeah, then MS also gets it for free unless I do weird license things.

@diondokter I don't really mind that particular bit because my goal with OSS/OSHW is less "creating value" (that's on the agenda but it's more incidental) and more "terraforming", changing the rules by which the world works. I think this is a more interesting mindset to approach OSxx with because a lot of the systems we've been building in the last two decades are of such a high quality that no commercial entity would possibly purchase them (since it's not justifiable to build something like that for a business that would run just fine with a much shittier version of the same thing).

yes, under a different economic system, you could have (maybe?) captured some of that value. but under our current one, if Microsoft had to pay you $1500 they would've probably not used your tools at all (because the overhead of figuring out how to get you that money multiplies it severalfold and takes up valuable time of administrative and legal staff). my overall feeling about it, personally, is just "shrug"; I build tools for different reasons

@whitequark Yeah agreed. The fact that MS has used my tool didn't cost me anything either.

But like I said, I've been building it to help people like me and I think it's succeeding at that. And it generally makes me happy seeing people use it successfully.

> such a high quality that no commercial entity would possibly purchase them

lol yeah, seems paradoxical, but very likely true

@diondokter

> lol yeah, seems paradoxical, but very likely true

I didn't come up with that; it's a rephrasing of a very good post on the topic I've read and subsequently neglected to bookmark

@whitequark I think the processes of value-extraction under capitalism have - structural limitations? - which mean tools like Amaranth are unlikely to be used as part of a destructive and alienating hype bubble.

namely, tools contributing to hype bubbles requires not only that a given end is easier than before, but that it's _the easiest_ way to achieve hype at any given moment; RTL design is never the fastest way to a consumer demo, so Amaranth isn't going to be implicated?

@coral oh, without overtly violating my NDA, I'll just say that flashy demos were absolutely involved

@whitequark Ah, I'd missed the platform config demo. They are trying!

Re the degradation of the "open-guild" position of the RTL designer; I also don't know how to feel. Removing footguns is marginally deskilling, but it's such a social good that I can't object to it.

Other industries have unions to smooth the social consequences in changing labor values. Govs and LLMs are both tools that seek to bypass that, it's not unreasonable to form guild-like closed systems of practice in response.

@coral I fully expect the formation of closed-guild systems (and am already a part of some amount of them); but I also don't know what the future holds & this is definitely a potent way of sawing off the branch on which one sits, so I only admit those as a last resort (and a hedge against the failure of other responses), not the first response

@whitequark I'm not sure if you'll agree with me on this, but I think the ability to extract value out of doing bad things with bad software is largely a consequence of complete regulatory breakdown.

If they hadn't bamboozled and captured the regulatory systems that are supposed to govern data protection, vehicles-for-hire, product safety, directories of personal information, etc. into letting them do whatever they want as long as "with computers" is tacked on, there would have been a lot less value to extract, and the main applications of software/computing would be making legitimate business processes run more smoothly without needing to outsource control to predatory service providers (by having stuff be so easy that your on-staff folks can do it, à la Excel), not disrupting society.

@dalias I used to agree with this outright but I'm less sure these days. (I think regulatory capture is too much of a problem to really believe in regulation, in the world we live in.)
@whitequark Regulation is just the institutionalized outcome of collective action for the common good, and I think we have to believe in the latter if we ever want to get anywhere. I'm not going to say we have to institutionalize it, but we have to do something.
@dalias I'm saying I don't trust the institutions we have and can't prevent them from corrupting my efforts.

@whitequark I think PHP is a good example of what happens when it's, erm, "too easy" to build stuff: It lets incompetent people build something that appears to be much more than it is.

I don't want to gatekeep things, but I think there is a lot of value in making sure people have some sort of idea of what they're doing.

@mk I think PHP has a lot of value that we in the "more competent" fields haven't replicated yet, and that bothers me. it enabled (and enables) nonspecialists to participate in the web as authors, not consumers, in a way nothing else does. I don't look down on that

@whitequark that's a positive thing about it, and I think PHP was an entry point to swdev for many people.

My point of view is that it also let people build stuff that was way more complex than they understood, but they made it look good, so a lot of people trusted it and got compromised. It felt like there were years where each month came with at least one major PHP or PHP-based vulnerability.

We are likely going to see the same with genAI code.