Here’s your chance to own a decommissioned US government supercomputer

https://lemm.ee/post/30823399


Who wants to go in with me on this?
Let’s turn it into a Lemmy server.
Killing a fly with a cannon there, aren’t you?
Building a cannon to shoot for the moon
I call it future proofing.
Some people don’t understand exponential growth. We expect Lemmy to have 50 billion users by 2026 and we need to be ready for it.

Winner looking at his electricity bill:

I hope Matthew Broderick buys it.
And then programs it to play a nice game of chess
No, let’s play Thermonuclear War
*Global Thermonuclear War

Power consumption: 1.7 MW

I hope it stays decommissioned. We’re burning up the planet too fast already.

Pop up a solar farm and you are good to go, baby!
Yeah, you just need 10 MW+ of solar and like 40 MWh of batteries to power it 24/7
So just the $10 million solar farm? Easy peasy
Plus the batteries, unless you only want to run the thing while the sun is up
Yeah, I saw the 4 million BTU figure on another page; that’s a lot of heat output
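For the curious, the thread’s numbers roughly check out. A quick back-of-envelope in Python, assuming a ~20% solar capacity factor and that the “4 million BTU” figure is per hour (both assumptions are mine, not from the listing):

```python
# Back-of-envelope check of the power figures quoted above.
# Assumptions (mine): ~20% solar capacity factor, and that
# "4 million BTU" means BTU per hour of heat output.

DRAW_MW = 1.7                          # quoted power consumption

# Energy needed per day -> rough battery sizing for 24/7 operation
daily_mwh = DRAW_MW * 24               # 40.8 MWh/day, hence "~40 MWh of batteries"

# Nameplate solar needed at a ~20% capacity factor
CAPACITY_FACTOR = 0.20
solar_mw = DRAW_MW / CAPACITY_FACTOR   # 8.5 MW nameplate, hence "10 MW+"

# Heat output: 4 million BTU/hr in megawatts (1 BTU = 1055.06 J)
heat_mw = 4e6 * 1055.06 / 3600 / 1e6   # ~1.17 MW of heat to remove

print(daily_mwh, solar_mw, round(heat_mw, 2))
```

The ~1.17 MW heat figure is the same order as the 1.7 MW draw, which is what you’d expect: essentially every watt a computer draws leaves as heat.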
Can it run Doom?
No. But it will run NetBSD 😇
I’m guessing that it can run multiplayer Doom with ray tracing turned on.
It can run all the dooms…in parallel
It could with software ray tracing, but it doesn’t have any GPUs. The CPU cores aren’t especially fast either, they just have a lot of them.
I upvoted because of the obligatory joke, but there’s a map with like 100k enemies in it that I’d like to try out.
Seems cheap to start the bidding at $2,500, but the initial purchase price is probably the cheapest part once you factor in moving it, buying the needed cabling, and the electricity bills.
Whoever buys it will most likely just part it out and sell it on eBay.
It’s all Broadwell Xeons. Sure, there’s 8000 of 'em, but after you factor in purchase price, moving and storage costs, time spent parting out nodes, shipping costs, etc… I think you’d have a hard time breaking even, and for an end user you can get like 4x the FLOPS per socket at half the power consumption with current server CPUs.
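To put rough numbers on the FLOPS-per-socket point, here’s a sketch comparing theoretical peaks. The Broadwell part is Cheyenne’s reported Xeon E5-2697 v4 (18 cores @ 2.3 GHz base); the “current” chip is a hypothetical 64-core AVX-512 server CPU at 2.4 GHz, which is my illustrative assumption rather than any specific product:

```python
# Rough theoretical-peak comparison behind the "more FLOPS per socket"
# claim. The modern chip's core count and clock are illustrative
# assumptions, not a specific SKU.

def peak_gflops(cores, ghz, flops_per_cycle):
    """Theoretical peak double-precision GFLOPS for one socket."""
    return cores * ghz * flops_per_cycle

# Broadwell: AVX2 FMA = 2 FMA units x 4 doubles x 2 ops = 16 FLOPs/cycle
broadwell = peak_gflops(18, 2.3, 16)   # ~662 GFLOPS per socket

# AVX-512 FMA = 2 FMA units x 8 doubles x 2 ops = 32 FLOPs/cycle
modern = peak_gflops(64, 2.4, 32)      # ~4915 GFLOPS per socket

print(round(modern / broadwell, 1))    # ~7x at theoretical peak
```

Sustained AVX-512 clocks are lower than base, so the commenter’s more conservative “like 4x” for real workloads is plausible.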
I bet manpower costs are significant as well. How many people are needed to run this thing? You probably need engineers with an esoteric set of skills to put it back together and manage it which would not be cheap.
It may be running SLED, but just imagine all the specialized, tweaked af code running on top. They didn’t just pop in a LiveCD and click “Install”.
No, they probably had to pop the live CD into each node individually and click “Install”, then run a script on each one to join it to the cluster.
Of course. I was obviously referring to what it takes to operate it after that. Not to mention how complicated setting that whole mess up is.
Kind of. You would use a deployment node to manage the individual blades; they run really specialized software that is basically useless without the management nodes. It wouldn’t be difficult to spin it up (Terascale would have it ready to batch out jobs within a few hours), but you are going to need to engineer your building around it to even get that far.

Your foundation needs to support multiple tons of weight, be perfectly level, deliver megawatts of power, and remove megawatts of heat (it is water cooled, so you need infrastructure and cooling towers to handle that), and you need to be able to get it into the building to begin with.

I have worked on this system a few times; just moving it would literally cost upwards of 7 figures. The computer is pretty easy to use. It’s all of the supporting infrastructure that will need a literal team of engineers. I could (and have, kind of) spin the machine up to start crunching data within a day on my own. Fuck moving it, and double fuck re-cabling it. Literal miles of fiber in those racks.
I tried hard to oversimplify. Thanks for spoiling it.

They didn’t just pop in a LiveCD and click “Install”.

Obviously not. In 2017, they would have used a live USB thumbdrive instead of a CD.

Yup, most of these are just a lot of relatively normal hardware put together into one system.
There’s a reserve price
That I get, but I’m sure the reserve isn’t that high if the starting bid is $2,500. It just seems low for a computer that cost $30,000,000 in 2017.
Damn that’s crazy. When I was just out of college I built the touchscreen web app that promoted this thing in the lobby of UCAR. Looks like it’s still running for now: hpctv.ucar.edu

That is a really cool resume item, ngl

Do you mind me asking the languages/frameworks backing it? (e.g. JavaScript/Node)

Thanks! It was a Python backend that the data science team at UCAR built and a Vue.js front end.
Neat application. Looks fun :D
The specs seem to be just enough to run a Minecraft server that doesn’t freeze when one player explores new chunks.
No use, the Minecraft server is single-threaded. It won’t hit 20 TPS in an even slightly complex world no matter how much compute you throw at it.
Yeah, but you can put the whole world on a RAM disk.
Just might be powerful enough for SolidWorks not to crash
Not if you’re running Monte Carlo

This thing is basically the size of my apartment.

Which means I have room! How much?

IIRC the bid yesterday was $15k, reserve not met though.
Currently around $51k, reserve not yet met. Buyer is responsible for transportation, and cabling isn’t included, FYI.
TIL that Silicon Graphics still exists…
It doesn’t. It was bought by HPE.
How many FLOPS per watt does it get?
It’s kind of lame that they need to junk the entire apparatus after only a decade. I get that processor technology moves on apace, but we already know it does that, so why doesn’t a universal architecture exist where nodes can be added at will?

If you have too many “slow” nodes in a supercomputer, you’ll hit a performance ceiling where everything is bottlenecked by the speed of things that are not the CPU: memory, disk for swap, and the network for sending partial results across nodes for further partial computing.

Source: I’ve hung around too much with people doing PhDs on these kinds of problems.
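The ceiling being described is essentially Amdahl’s law: once the serial and communication overhead dominates, adding nodes stops helping. A minimal sketch, using an illustrative (not measured) 5% non-parallelizable fraction:

```python
# Amdahl's-law sketch of the scaling ceiling described above.
# The 5% serial/communication fraction is an illustrative
# assumption, not a figure for any real machine.

def speedup(nodes, serial_fraction):
    """Speedup over one node when a fixed fraction of the work
    cannot be parallelized (communication, I/O, coordination)."""
    return 1 / (serial_fraction + (1 - serial_fraction) / nodes)

for n in (10, 100, 1000, 10000):
    print(n, round(speedup(n, 0.05), 1))
# Even with 10,000 nodes, speedup is capped near 1/0.05 = 20x.
```

That hard cap is why supercomputer designs obsess over interconnects and memory bandwidth, not just raw core counts.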

I would imagine it’s very difficult to make a universal architecture, but if I have learnt anything about computers, it’s that the manufacturers of software and hardware deliberately create opaque and monolithic systems, e.g. phones. They cynically insert barriers to their reuse and redeployment. There’s no profit motive for corporations to make infinitely scalable computers. Short-sighted greed is a much more plausible explanation.

When you get to write and benchmark your own code you’ll see technology has limits and how they impact you.

You can have as many Raspberry Pis as you want, but you’ll get faster computation spending the same budget on Xeons with dozens of MB of cache, hundreds of GB of RAM, and gigabit network cards.

10 years from now these Xeons will be like Raspberry Pis compared to the best your money can buy.

All of those things have to fit in a building, not on a desk. The best supercomputers look like Google’s data centers, but their specific needs dictate several tweaks done by very smart people. Supercomputers are supposed to solve 1 problem with 1 set of data at a time, not 100 problems with 1,000,000 data sets/people profiles at a time, which are much easier to partition and assign to only a thousandth of your data center at a time.

It’s more of an operating cost issue. It’s almost decade-old hardware. It was efficient in its day, but compared to new hardware it just costs so much to run you would be better served investing in something with modern efficiency. It won’t be junked, it will be parted out. If you are someone that wants a cheap homelab with infiniband and shitloads of memory you could pick up a blade for a fraction of what it would otherwise cost. I fully expect it to turn into thousands of reasonably powerful servers for the prosumer and nerd markets instead of running as a monolithic cluster.
A decade is a lifetime in technology. Moore’s law had just ended when this was put together.

One of the reasons why I work in industrial controls. A good day is me sneaking in tech that came after the year 2000. Employment for life and I get to branch out to related stuff. Employer is paying me to take ME and chem-e classes now.

I don’t know why anyone would spend their life chasing the newest fad tech when you can pick a slow-moving one, master it, and master the ones around it. I’d much rather be the person who knows how the entire system works than the one who knows the last 8 programming languages/frameworks, only 1 of which is still relevant.

But hey, I’m glad there are people who choose that lifestyle; I like having a better cellphone every year.