Tim

@tseufert
0 Followers
1 Following
8 Posts
An entity
@jonhendry @DJGummikuh @davidgerard Yes this. AI giants want Nvidia datacenter GPUs, which use HBM DRAM, not DDR5 or LPDDR5 like your desktop or laptop PC. Memory manufacturers had to shift production. Also note that semiconductor manufacturing has a lag time. Fab capacity is often quoted in WSPM, or wafer starts per month; "starts" is a load-bearing word because individual wafers may take multiple months to go all the way through every step from raw silicon to tested and packaged parts.
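The gap between "starts" and finished parts is just Little's law: work in progress equals start rate times cycle time. A toy calculation with made-up numbers (not real fab data) to show why a capacity bump takes months to show up as shipped chips:

```python
# Hypothetical figures for illustration only -- not any real fab's numbers.
wafer_starts_per_month = 100_000  # quoted capacity, in WSPM
cycle_time_months = 3             # raw silicon -> tested, packaged parts

# Little's law: WIP = arrival rate x time in system. Wafers started today
# don't become sellable parts until ~cycle_time_months later, and in the
# meantime this many wafers are in flight inside the fab:
wafers_in_flight = wafer_starts_per_month * cycle_time_months
print(wafers_in_flight)  # 300000
```

So even if a memory maker redirects starts toward HBM today, the output mix only shifts a quarter or so later.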

@davidgerard The replies claiming they're a kind of GPU are wrong. "AI" computation boils down to multiplying matrices, often at reduced precision. NPUs are simply gangs of CPUs or DSPs tailored to be very power and area efficient for this specific kind of number crunching.
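The whole workload really is that small conceptually; a sketch using NumPy, with float16 standing in for the reduced-precision formats (fp16/bf16/int8) NPUs favor:

```python
import numpy as np

# An NPU's core job, in miniature: multiply matrices at reduced precision.
# float16 here is a stand-in for whatever narrow format the hardware uses.
a = np.arange(6, dtype=np.float16).reshape(2, 3)
b = np.ones((3, 2), dtype=np.float16)

# This multiply-accumulate loop is what an NPU implements as fixed-function
# hardware, trading generality for power and area efficiency.
c = a @ b
print(c)        # [[ 3.  3.] [12. 12.]]
print(c.dtype)  # float16
```

Everything else on the chip (DMA engines, activation units, scheduling) exists to keep that multiply fed.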

The first mass-market NPU was probably Apple's Neural Engine, part of 2017's A11 Bionic chip. Its first application was an actually useful feature: Face ID biometric unlocking in iPhone X. I wish they'd stayed away from LLMs.

@davidgerard Had a bout of NAS desire recently myself and would've bought a Unifi UNAS Pro, but ran out of NAS interest before spending money. Upside: cheapest 7-drive NAS with 10G networking that I know of, and it's a well-established company. Downside: if you expect a NAS to have all the software features of a Synology, this isn't it. It's Unifi's first NAS, and they've chosen the path of rolling out features over time rather than trying to launch with everything.

@alexr Rosetta 2 only implements ring 3 x86, so it can't virtualize a whole OS. Probably never will, since AIUI the emulator derives a lot of its performance from assumptions made possible by this limitation.

It may stick around anyway, as there are already use cases beyond running x86 macOS binaries. Apple somewhat recently added a version of it for use inside Arm Linux VMs, enabling them to execute x86 Linux userspace binaries. It's also used by WINE.

@[email protected] No, it's clear what you're about: disregarding reality when it doesn't fit your prejudices.

on that note, launchd and SMF both shipped in 2005. The UNIX wars were over, Sun and Apple were trying to stay relevant in a world dominated by Microsoft. Classic UNIX init was holding both of them back, so they replaced it.

Because infighting etc, it took Linux longer to make its switch, and the results were arguably inferior. Motivations were the same though: classic init wasn't good enough.

@[email protected] It's tiring to argue with people who are very determined not to let outside information in. I'm probably not going to reply again if you keep doing that.

It was never about lock-in. Apple open sourced theirs; not sure what Solaris did. (Jumping to "lock-in" as an explanation is a thought-terminating cliche you would do well to rid yourself of.) The real reason is that sysvinit and friends such as BSD rc were 1970s/80s dinosaurs badly out of place in the modern world.

@[email protected] No, it isn't. You shouldn't take poor execution of an idea as proof that the idea is flawed.

There are many reasons why commercial UNIX-ish operating systems went in this direction first - not just Mac OS X, but also Solaris with SMF. systemd wasn't even the first attempt at bringing an init replacement similar to launchd and SMF to Linux, but it was the one which won.

@cstross So, you're using launchd, introduced in Mac OS X 10.4 Tiger. Its success at replacing init and rc scripts inspired Lennart P. to write some fanfic, AKA systemd.

Classic init and boot just didn't meet this century's requirements; they really did need to go. There were several other attempts at this for Linux (some due to NIH, some because L.P. and pals are very annoying to work with), but systemd actually managed to be the best of them, despite the warts. So it won.