Microsoft says “Prism” translation layer does for Arm PCs what Rosetta did for Macs
RISC-V is an open standard under an open and free license, which means it doesn’t require an expensive proprietary licensing fee. Effectively, it has the potential to create cheaper hardware: manufacturers can build designs with lower cost overhead, and whatever improvements they make upon those designs can be used for free by other manufacturers. It is also the necessary development bed upon which open source hardware can be created.
The RISC-V ISA is free and open with a permissive license for use by anyone in all types of implementations. Designers are free to develop proprietary or open source implementations for commercial or other exploitations as they see fit. RISC-V International encourages all implementations that are compliant to the specifications. […] There is no fee to use the RISC-V ISA. FAQ
The benefits, basically, are that it can provide an architecture designed for modern computing needs, one that can scale well into the future. That means high performance with low power consumption and heat.
The x86-64 model has been up against a wall for a while now, pumping out red-hot power hogs that don’t suit modern needs and don’t have much of a path forward, development-wise, compared to ARM.
Huh?
32-bit ARM and x86 were both from 1985…
It did take ARM a lot longer to make 64-bit work
S0 standby is borderline unusable on many PCs. On Apple Silicon Macs it’s damn near flawless.
My current laptop is probably the last machine to support S3 standby, and I do not look forward to replacing it and being forced back into a laptop that overheats and crashes in my backpack in less than 15 minutes. On my basic T14 it works OK for the most part, but my full-fat ThinkPad P1 with an i9, after more than a few minutes in S0 standby, sometimes uses more power than when it was fully on. Maybe Meteor Lake with its LP E-cores will fix this, but I doubt it.
There’s nothing stopping x86-64 processors from being power efficient. This article is pretty technical but gives a really good explanation of why that’s the case: chipsandcheese.com/…/why-x86-doesnt-need-to-die/
It’s just that traditionally Intel and AMD earn most of their money from the server and enterprise sectors where high performance is more important than super low power usage. And even with that, AMD’s Z1 Extreme also gets within striking distance of the M3 at a similar power draw. It also helps that Apple is generally one node ahead.
On the x86 architecture, RAM is used by the CPU, and the GPU pays a huge penalty when accessing main RAM. It therefore has its own onboard graphics memory.
On ARM this is unified, so the GPU and CPU can both access the same memory at the same penalty. This means a huge class of embarrassingly parallel problems can be solved quicker on this architecture.
It’s been a while since I’ve coded on the Xbox, but at least on the 360, the memory wasn’t really unified as such. You had 10 MB of EDRAM that formed your render target, and there were specialised functions to copy the EDRAM output to DRAM. So it was still separated; you could create buffers in main memory and access them in the shaders, but at some penalty.
It’s not that unified memory can’t be created, but it’s not the architecture of a PC, where peripheral cards communicate over the PCI bus, with great penalties to touch RAM.
Well, for the current-generation consoles, they’re both x86-64 CPUs with only a single set of GDDR6 memory shared across the CPU and GPU, so I’m not sure you have such a penalty anymore.
It’s not that unified memory can’t be created, but it’s not the architecture of a PC, where peripheral cards communicate over the PCI bus, with great penalties to touch RAM.
Are there any tests showing the difference in memory access of x86-64 CPUs with iGPUs compared to ARM chips?
Here is a great article on the topic. Basically, x86 spends a comparatively enormous amount of energy ensuring that its strong memory guarantees are not violated, even in cases where such violations would not affect program behavior. As it turns out, the majority of modern multithreaded programs only occasionally rely on these guarantees, and including special (expensive) instructions to provide these guarantees when necessary is still beneficial for performance/efficiency in the long run.
For additional context, the special sauce behind Apple’s Rosetta 2 is that the M family of SoCs actually implement an x86 memory model mode that is selectively enabled when executing dynamically translated multithreaded x86 programs.
I’m no expert, but I can tell you that Apple Silicon gave the new MacBooks insane battery life, and they run a lot cooler with less overheating. Intel really fucked up the processors in the 2015–2019 MacBooks, especially the higher-spec i7 and i9 variants; those things overheat constantly. All Intel did was take existing architectures and raise the clock speeds. Apple really exposed Intel’s laziness by releasing processors that were just as performant in quick tasks and that REALLY kicked Intel’s ass in sustained workloads: not because they were faster on paper, but simply because they didn’t have to thermal throttle after 2 minutes of work. Hell, the MacBook Air doesn’t even have any active cooling!
I’m not saying these Snapdragon chips will do exactly the same thing for Windows PCs; obviously we can’t say that for sure yet. But if they do, it will be fucking awesome for end users.
To the end user it doesn’t matter how it’s done, if it works.
Emulation is always slower and eats more battery. Microsoft’s laziness is proof they don’t care about that hardware, so you may just as well buy an iPad Pro instead.
Emulation is almost always slower and eats more battery.
FTFY. There have been some cases where emulation actually outperforms native execution, though these might be, “the exceptions that prove the rule.” For example, in the early days of World of Warcraft, it actually ran better on WINE on Linux than natively on Windows.
For example, in the early days of World of Warcraft, it actually ran better on WINE on Linux than natively on Windows.
WINE literally stands for “WINE Is Not an Emulator”.
To be fair, this is also a translation layer and not an emulator.
Prism is an x86 emulator for ARM. If you think that Prism is “a translation layer and not an emulator”, I refer you to the very first word of the second to last paragraph of the submitted article.
That’s assuming the writer knows what they’re talking about. Last line from the second paragraph:
Windows 11 has similar translation capabilities, and with the Windows 11 24H2 update, that app translation technology is getting a name: Prism.
And first line from the third paragraph.
Microsoft says that Prism isn’t just a new name for the same old translation technology.
That’s assuming the writer knows what they’re talking about.
Certainly more than you because Prism emulates an x86 CPU and WINE doesn’t, therefore the WINE comparison is still wrong.
Edit: Please prove the writer wrong.
It says only that it allows for running x86 code on ARM. This does not inherently require emulation, as demonstrated by Rosetta 2, which is a translation layer.
WINE doesn’t “translate” one CPU architecture to another CPU architecture either, so the WINE comparison is still wrong, no matter whether CPU translation is called emulation by you or not. WINE is a wrapper for API calls within the same CPU architecture. That’s it.
WINE doesn’t “translate” one CPU architecture to another CPU architecture
“Windows apps are mostly compiled for x86 and they won’t run on ARM with bare Wine”
What you linked is an effort to combine WINE with the QEMU x86 emulator, which is an emulator because it emulates CPU instructions. A hint that it’s an emulator is in the name “QEMU”, and here’s an actual quote from the wiki page you linked and clearly didn’t care to read: “Running Windows/x86 Applications: See Emulation”
EDIT: Let me also quote from the readme file of the Hangover project:
Hangover uses various emulators as DLLs (pick one that suits your needs, e.g. works for you) to only emulate the application you want to run instead of emulating a complete Wine installation.
This is a pretty interesting counterexample: eteknix.com/running-yuzu-on-switch-gives-you-bett…
But, as others have said, these are the exceptions that prove the rule.