Every single ACPI vs Device Tree argument needs to start with the observation that I can boot a modern Linux kernel on an arbitrary x86 board from 1998 and it will probably suspend and resume correctly, and I can't do that with an arbitrary Arm board from 2026.
Unfair to say 1998? OK, provide an alternative year where it becomes true for Arm.
@mjg59 is it fair to suggest that that might be more to do with 25 years of relentless hammering on and working around bugs, rather than actual availability of specs? (saying this as someone that didn't get a laptop to properly resume until about 2010).
@tmcfarlane Well sure, we've spent time becoming compatible with the platform vendors tested against, and Arm suffers from not having any sort of baseline. That doesn't really end up being an argument for DT.
@mjg59 certainly not :) I'm blissfully unaware of Device Tree.
@mjg59 arm has always been a "fire and forget" device business model, so nobody cared about standardizing. Just getting Windows to support ARM was a huge multi-year effort that resulted in Armv7 standardizing a few things like interrupts and timers. The closest thing to a reusable computer ecosystem we have on ARM is Raspberry Pi.
@cubeos @mjg59 Doesn't Windows on ARM mandate ACPI?
@jernej__s @mjg59 AFAIK that only came with Armv8 and the W11-based ARM64 devices where the OEM would have to provide this. Windows 10 IOT Core works on specific RPi devices (2 and 3B) and the boot process is "rather complex". It brings its own UEFI implementation in the boot image and that might even pretend to have some ACPI tables, but just to get the NT kernel up and running.
@mjg59 meh, suspend/resume broke on my laptop following a Fedora update about 2 years ago, so I'm not sure that is true (but no, it hasn't been quite enough of a pain point for me to debug it - hibernate still works)

@mjg59 to be fair, suspend/resume doesn't work on the desktop I built last year with good specs from that time... I'm sure the vendor (Gigabyte) will tell me that "it works on Windows", but that still doesn't make it work on Linux...

That said, Device Tree seems truly horrendous, from the work I had to do from far away on some ARM projects.

@mjg59 But is it true of a modern x86 fancy laptop?
@penguin42 @mjg59 i don't think i have had non-shite resume on my amd64 laptops with linux, yeah

@mjg59 "probably" is a strong word here. I still had my fair share of sleep/resume problems on both a Tuxedo computer and Lenovo computers.

ACPI is a mess by itself.

@mjg59 it is 2026 and I still sometimes have no WiFi after resume on a ThinkPad x230t running Debian 13 amd64.
@mjg59 Agreed - but TBH, I remember when laptops started migrating from APM to ACPI for suspend/hibernation and everyone complained about the bugs for years.
@mjg59 The arm folks have consistently refused to use device tree the way it's supposed to work since the beginning, treating the dtb like something unstable you're supposed to upgrade with the kernel rather than an immutable hardware description that should be baked in rom. This is their fault, not something intrinsic to device tree.
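To make the "immutable hardware description" idea concrete, here is a minimal sketch of what a board-level DT fragment looks like; the compatible string, bus label, address, and interrupt numbers below are all invented for illustration, not taken from any real board:

```dts
/* Hypothetical fragment: describes an RTC wired to an I2C bus.
 * Nothing here is Linux-specific - it's a passive statement of
 * what the hardware is and how it's connected, which is why it
 * could in principle live in ROM and never change. */
&i2c1 {
        status = "okay";

        rtc@68 {
                compatible = "example,rtc-1307";  /* made-up vendor,model */
                reg = <0x68>;                     /* I2C slave address */
                interrupt-parent = <&gpio2>;
                interrupts = <5 2>;               /* pin 5, falling edge */
        };
};
```

Treated this way, a DTB is data about the board, not a kernel config: any OS with a driver that matches the compatible string can use it unchanged.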
@dalias @mjg59 SPI Flash, the most expensive chip in the (embedded system) world. It increases the BOM cost by $1, unacceptable!
@dalias @mjg59 i want both x86 and arm boards to have a dtb burned into flash at manufacture time and kill acpi
@azonenberg @dalias @mjg59 that would work if they all made standardized parts, but without that, we'd need translation layers everywhere 😬
Why ACPI?

"Why does ACPI exist" - the greatest thread in the history of forums, locked by a moderator after 12,239 pages of heated debate, wait no let me start again.

Why does ACPI exist? In the beforetimes power management on x86 was done by jumping to an opaque BIOS entry point and hoping it wo…

Dreamwidth Studios

@mjg59 @azonenberg @dalias And I want the ARM world to move away from the insanity that is the current DTBs:

* They suck at describing the hardware; they are more of a Linux config file: Linux forks (vendor trees) will always have incompatible ones because upstream doesn't care about compatibility...
* Not only does every SoC need to be supported, every board needs its own broken DTB... which leads to board-specific brokenness even for SoC-provided features!
* Good mainline support never comes before the effective end of life of the device... Even with big companies working on it with significant resources!

By allowing DTBs to exist, vendors could start making SoCs very, very diverse without caring about the software ecosystem. This is nuts, and this insanity needs to be stopped by introducing discoverable buses and blocks, then adding per-vendor platform drivers as glue. Something akin to ACPI would then be introduced for board-specific customization.

You can keep the compatibles and the equivalent of a DTB to document whether blocks are user-visible or not (so that they are not exposed to the user otherwise). New SoCs should finally be mostly backwards compatible, allowing basic release-day upstream support and reducing the need for vendor trees, or the cost of mainlining.

@mupuf @mjg59 @azonenberg @dalias why should upstream care about compatibility with downstream vendor forks? Vendor forks are a nightmare, and often everything needed to support an SoC is just a hacked-together piece of something.

Also every board has some board-specific layouts and so we need separate dts, as dts describe hardware.

And comparing dts with ACPI is unfair.. like comparing apples to pears.

@austriancoder @mjg59 @azonenberg @dalias Right, I wasn't being clear here.

Upstream should care because if DTBs WERE a hardware description language, it could be standardized and would not depend on driver *implementations*. It could then be embedded by the vendor in the firmware and new boards could get release-day upstream support.

In the x86 world, we have essentially what I recommended: discoverability using PCI vendor:device IDs, then a "platform" driver called amdgpu/nouveau/whatever, and a vendor-specific board-description language in the video BIOS. And then no need for per-GPU DTBs.

IMO, hardware description is the vendor's job. Upstream should just take it and roll with it. The fact that it doesn't is IMO a failure of the model, and proof that describing the hardware like that is a fool's errand. Unfortunately, vendors have little incentive to keep maintaining their tree, so that leads to the terrible user experience: either use outdated hardware with good upstream support, or use outdated vendor trees on newer hardware. Everybody loses...

As for ACPI vs DTB comparison: they are meant to solve roughly the same problem, aren't they? Abstract away hardware differences from the OS PoV so that the same kernel binary can work on two boards.

@mupuf @austriancoder @azonenberg @dalias Not quite - DTB is supposed to provide a full description of the available hardware and the metadata required to drive it. ACPI can do that, but is really intended to provide an abstract interface to the underlying hardware such that the OS doesn't need to know what the available hardware is or how it's wired up.

@mjg59 @austriancoder @azonenberg @dalias indeed. I would call ACPI a pragmatic approach: describing hardware in a generic way is pretty much impossible, so let's separate concerns and let the firmware provide the scripts to run when wanting to do X.
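A sketch of what "the firmware provides the scripts to run" looks like in practice: ACPI ships small ASL methods the OS invokes at well-known names (here `_PS0`/`_PS3` for device power states), so the OS never learns which GPIO gates which regulator. Every name in this fragment (the device, the hardware ID, the GPIO region and bit) is invented for illustration:

```asl
DefinitionBlock ("", "SSDT", 2, "EXMPL", "POWER", 1)
{
    // Invented GPIO block: the firmware knows the address, the OS doesn't.
    OperationRegion (GPIO, SystemMemory, 0xFED80000, 0x100)
    Field (GPIO, DWordAcc, NoLock, Preserve)
    {
        Offset (0x14),
        GPO5, 1            // made-up regulator-enable bit
    }

    Device (SDC0)          // hypothetical SD controller
    {
        Name (_HID, "EXMP0001")   // made-up hardware ID

        Method (_PS0, 0, Serialized)   // enter D0: the "power on" script
        {
            GPO5 = One
        }

        Method (_PS3, 0, Serialized)   // enter D3: the "power off" script
        {
            GPO5 = Zero
        }
    }
}
```

This is the separation of concerns being described: the OS only knows "call `_PS0` to power the device on"; the board-specific wiring stays inside the method, which is why the same kernel binary can work across boards.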

This means it is hard to teach new tricks to old boards without the vendor providing firmware updates, but you gain a much more consistent user experience and release-day upstream support.

If we could have ACPI-like support for release-day support, then an open firmware + DTB-based solution with upstream later on, I guess it would be the best of both worlds and upstream Linux could be used at all times... We'll see how SystemReady turns out; maybe it could achieve 50% of this dream?

@mupuf @austriancoder @azonenberg @dalias In theory there's no reason people can't provide device-tree like data that passes chunks of ACPI functionality over to an actual driver - I don't think there's ever been enough need for someone to bother, but it's possible. What we see in the Qualcomm laptop case is people trying to solve it all with DT and us ending up with machines with missing functionality as a result.
@mupuf @mjg59 @azonenberg @dalias And if we were to do that, we would lose what made arm successful in the first place, and we would have the same "problem" with RISC-V. Rinse and repeat. Arm addresses a specific part of the market, if you make arm like x86, you'll end up with two arch competing for the same market, and some other filling the void arm left. Because that market isn't going away.

@mripard @mjg59 @azonenberg @dalias Really? I would have assumed your opinion on this topic to be pretty similar to mine :o

Why do you think DTBs have contributed to the success of ARM? From what I could tell, everyone is feeling the pain of every little board needing custom development, and by way more people than the x86 world... What am I missing here?

@mupuf @mjg59 @azonenberg @dalias To me it's more that DT is an imperfect solution (for the reasons you outlined) to a problem ACPI can't solve. And that problem is also the reason arm has been successful.

So I'm sure if we tried and had unlimited political power and technical resources, we could do better than DT. But we need something that you can use with a decent time to market, no license cost, easy to modify (including at runtime), very fast to boot, and that can deal with complex topologies (and I'm probably forgetting some).

@mjg59 beagle bone black support fell out of FreeBSD because TI kept changing the device tree bindings and no one wanted to fight that.

I can't actually think of a pro device tree argument, I am not pro acpi either
@mjg59 Counterpoint: I believe that has less to do with ACPI and more to do with the _massive_ amount of work Linux has done to add bug fixes or replacement code for most of those broken ACPI ABI interfaces, which is possible because the chipsets are few and well documented. Whereas in Arm or RISC-V it's all under multiple levels of NDA because it's IP blocks in a SoC. Power management is hard, and ACPI making an ABI targeting Windows doesn't help.

@edolnx @mjg59 Also, Intel and AMD contribute upstream in a way that most of the Arm vendors simply don’t.

If a vendor made upstream kernel support their top priority, they could make it happen.

@alwayscurious @mjg59 Absolutely - that support for the x86 ecosystem would have been impossible without the hard work by Intel and AMD (and earlier the various chipset vendors). The problem is that Arm/RISC-V based SoC parts have none of this standardized. The power management, NoC, clock management: all unique to nearly every chip. For most T1 customers it's not a priority, and if it is for them then the vendor kernel will do what's needed and they don't care beyond that.
@edolnx @mjg59 Why do the T1 customers not care about using an upstream kernel?
@alwayscurious In most of these cases, they are paying for an "embedded solution" - they have a specific use case and are paying for an application specific solution. So that's what ships as the Board Support Package (BSP) for that customer. The customer builds on top of that, and ship it. This is why a lot of OpenWRT network devices still run a 4.10 kernel - it's "good enough" and upstreaming costs money for no direct increase in sales.
@alwayscurious @mjg59 There is also the problem that a lot of these vendors _do want upstream support_ but the license agreements with the IP vendors prevent that, and they don't have the wallets or time to push back on the standard license/NDA to open source the driver (which is almost always completely forbidden)
@edolnx @mjg59 Why do the IP vendors insist on keeping things proprietary? Have any vendors decided to pay a third party to write a clean-room reverse engineered driver?
@alwayscurious They keep things proprietary to protect their business model and/or infosec guarantees. Most vendors don't pay for a clean-room reverse engineering effort, that's almost always OSS developers. The ones who are willing to pay will just spend the cash on legal to negotiate terms with the IP vendor to upstream OSS drivers, but that is _rare_ or through a consortium (like Linaro for Arm or RISCstar for RISC-V)
@edolnx How does keeping the interface docs proprietary help them? That seems like security theater to me, as the success of reverse engineering has shown.
@alwayscurious you are correct - it is security theater. It's not about the products, it's about protecting the value of the IP. A known-broken IP block is worthless; an IP block that is bound by NDA still has value because you can't tell anyone it is broken. This is also why you can't tell what IP blocks are used: companies don't want end users to know what blocks are in use, because end users are not covered by the NDA. It's all about protecting the business model.
@edolnx So they do it to prevent anyone from finding out that the hardware they designed has bugs?
@alwayscurious No, it's all to lock the end users out of the hardware to protect the business with the DMCA. Break a bootloader? Jail. Show an exploit on a device? Jail. These devices suck, but with the DMCA they suck and are extremely valuable to sell.
@alwayscurious I'm not trying to sound conspiratorial - these are the answers I get from hardware vendors when I ask these questions. It's all about protecting fragile business models and ensuring things that should be "commonly available" retain a "High Value for the Licensing Company"
@edolnx Does it really protect the business model, or is this a misunderstanding? I still don’t see how this increases the value to the licensing company. Just because I know the interface to something doesn’t mean I can replicate it.
@alwayscurious it protects the company by reducing competition. Building these IP Blocks is time consuming and expensive. Many groups in the EU are following a model like you propose, but in the US you simply can't get funding for that business model.
@edolnx How does hiding the interface make it harder to compete?
@alwayscurious another company would need to violate the DMCA to reverse engineer the software, which is a huge risk and liability. So no company does, thus limiting your competition
@edolnx Why does allowing the user to run their own code make the device less valuable? I really don’t understand.
@alwayscurious end users are not in the value chain. IP Vendor, SoC integrator, and Tier 1 Customer are the value chain. Everything else is side effects.
@edolnx Why does allowing the user to run their own code decrease the value, instead of not changing it?
@alwayscurious because many of these devices are tied to a SaaS, and using the device without the service lowers the value of the company that sold you the device. This is why Sony pulled Linux support from the PS3, for example.
@alwayscurious What you have stumbled upon is the entire industry shift from "Open Standards General Purpose Computing" to "Application Specific Computing", Before things like interface controllers had "no value". Now everything is proprietary and we (the end user ecosystem) are trying to unlock Application Specific Devices to be General Computation Devices
@edolnx Is there anything that can be done to reverse that shift?
@alwayscurious it's hard and extremely capital intensive. What OpenAI did to DRAM, Apple did to leading-edge wafer fabrication at TSMC years ago. The big players for whom money is no object only want their peers playing, and end users refuse slower systems built on older processes as they are too used to ever-faster speeds (and thus no software optimization, meaning slower hardware is exponentially slower).
@mjg59 TBF you can't do that with x86 laptops from the last 5 years either - but not because of the implied OS/FW interface problems, but because of completely bogus firmware.
@stefanct @mjg59 I was very tempted to say that this is mean, but then I remembered the laptops I've bought and repaired, and they either didn't wake up properly (Dell XPS), had sporadic problems waking up (Thinkpad) or couldn't go to sleep at all for the first four months of my ownership until a GPU firmware update hit (Framework 13).
so, yeah.
@funkylab my t14s gen3 amd at work has these suspend-related quirks: doesn't always suspend; randomly wakes up later (seconds to hours later!); on wakeup sometimes WLAN, DPs on the docking station, and audio are broken (independently, and without any pattern)... further sleep-wakeup cycles randomly fix those. and that's with 6.18 and the newest firmware for laptop and docking station on a 3+ years old system.
@stefanct @mjg59 What do you mean by “completely bogus firmware”?
@alwayscurious my favorite thing: laptop does cold boot successfully into boot loader if docking station is attached.