Two Christmas worms of the 1980s: how trust in the network became a problem long before phishing

Holidays in IT often look the same, regardless of the decade: fewer people in the office, fewer infrastructure changes, less attention to the small stuff. Today we treat this as an obvious truth: long weekends are a time of elevated risk. But in the late 1980s that idea was not yet part of professional thinking (at least not in the practice of academic and corporate networks). The computer networks of that era seemed stable, almost "institutional". They were expensive and slow. Users often knew each other personally and saw the network as an extension of the scientific community, not as a potentially hostile environment. In that atmosphere, trust was not considered a vulnerability.

And it was in exactly this environment that some of the first widely noticed network incidents occurred, the ones that forced operators to start thinking in terms of "incident / response / procedures". Both happened during the holiday season. Both were disguised as harmless jokes. And both showed that even the most well-meaning systems can harm themselves.

The article draws on an IEEE Security & Privacy piece, a publication by Brian Heyman, and a report by the SPAN (NASA) security team.

https://habr.com/ru/companies/ostrovok/articles/983132/

#InfoSec #InternetHistory #ComputerWorms #SocialEngineering #Phishing #NetworkProtocols #DECnet #IncidentResponse #HumanFactor #CyberThreats


@landley @jschauma @ryanc @0xabad1dea yeah, the exhaustion problem would've been shoved back with a #64bit address space, or sufficiently delayed by a 40-bit one.

Unless we also hate #NAT and expect every device to have a unique static #IP (which is a #privacy nightmare at best, one that "#PrivacyExtensions" only barely fixed).

  • I mean they could've also gone the #DECnet approach and used the #EUI48 / #MAC-Address (or #EUI64) as the static addressing system, but that would've made #vendors, not #ISPs, the powerful forces of allocation. (Similar to how, technically, the #ICCID dictates #GSM / #4G / #5G access and not the #IMEI, unless places like Australia ban imported devices.)
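For context, MAC-derived addressing is exactly what classic IPv6 SLAAC did before privacy extensions: the interface identifier is built from the EUI-48 by the modified EUI-64 rule (RFC 4291, Appendix A). A minimal sketch of that derivation:

```python
def eui64_from_mac(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier that classic
    SLAAC embeds in an IPv6 address (RFC 4291, Appendix A)."""
    octets = [int(b, 16) for b in mac.split(":")]
    # Insert 0xFF 0xFE between the OUI and device halves of the MAC,
    # then flip the universal/local bit in the first octet.
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui[0] ^= 0x02
    return ":".join(f"{b:02x}" for b in eui)

print(eui64_from_mac("52:54:00:12:34:56"))  # → 50:54:00:ff:fe:12:34:56
```

Since the MAC is recoverable from the identifier, every network you join can track the device; that is the privacy problem RFC 4941 temporary addresses were meant to paper over.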

I guess using a #128bit address space was inspired by #ZFS doing the same earlier, as the folks who designed both wanted a solution that would clearly outlive them (way harder than COBOL has outlived Grace Hopper)...

If I was @BNetzA I would've mandated #DualStack and banned #CGNAT (or at least the use of CGNAT in #RFC1918 address spaces) as well as #DualStackLite!

Back in antediluvian telco networking days, a transcontinental dedicated telephone line would involve the regional Bell carrier, AT&T long lines, and whatever other regional carrier was active at the other end.

And when installing or during link outages, the inevitable finger-pointing would ensue.

Had an install delay and the New York Telephone telco claimed to be having trouble locating the AT&T Long Lines facility. AT&T tried not to sprain an eyebrow in the con-call and told NYT (or was it NYNEX by then?) to follow any of their over-a-million-lines leading into the AT&T NYC facility.

All this for maybe 1200 or 2400 baud point-to-point data links, too. Or for serial muxes. (Shudder.) Muxes were devices designed to pass several different serial links over one line, and they too often got stuck, locking up all of the multiplexed lines.

Network links have gotten immensely faster, and immensely more flexible, and muxes far less common with IP. Residential uplinks are now routinely massively faster than the early 10 MbE data center links, too.

Those 10 MbE links replaced DMF32, DMR, and other point-to-point data links between individual computers in the data center. The hassles with drilling ThickWire vampire taps aside, the reduction in both wiring needed and routing difficulties from moving to 10 MbE links was impressive.

Matt Blaze’s image of an AT&T Long Lines microwave link tower:

https://www.flickr.com/photos/mattblaze/51261791084

PS: sending DECnet async network links over some point-to-point serial lines would lock up connections when an ASCII XOFF character happened to get transmitted in the data. Fun times. Remember to turn off in-band flow-control signaling before lighting up async DDCMP. (Same for SLIP. DDCMP worked far better than SLIP, too.)
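The lockup happens because in-band software flow control treats the bytes 0x13 (XOFF) and 0x11 (XON) as control signals even when they occur inside binary payload. On a POSIX system the equivalent knob today is the termios `IXON`/`IXOFF` input flags; a minimal sketch, using a pseudo-terminal as a stand-in for a real serial port:

```python
import pty
import termios

# Open a pseudo-terminal pair as a stand-in for a real serial line.
master_fd, slave_fd = pty.openpty()

attrs = termios.tcgetattr(slave_fd)
# attrs[0] holds the input-mode flags (iflag); clearing IXON/IXOFF
# makes 0x11/0x13 pass through as ordinary data bytes instead of
# being interpreted as flow-control signals.
attrs[0] &= ~(termios.IXON | termios.IXOFF | termios.IXANY)
termios.tcsetattr(slave_fd, termios.TCSANOW, attrs)

iflag = termios.tcgetattr(slave_fd)[0]
assert iflag & (termios.IXON | termios.IXOFF) == 0
print("software flow control disabled")
```

With that cleared, binary protocols like DDCMP or SLIP can carry any byte value; flow control then has to be done out of band (RTS/CTS) or not at all.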

#RetroComputing #retrocomputing #ATT #telco #digitalequipmentcorporation #DECnet #ASCII #networking #notworking

AT&T Long Lines Oak Hill Tower


So I think I have #DECnet running.

I see lots of (UNKNOWN) packets in tcpdump between my klh10/pdp-10 #tops20 instance and the pydecnet router.

I guess I need to read some manuals now to figure out how to actually test it.
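Those (UNKNOWN) packets are plausibly just tcpdump lacking a decoder for the frames: DECnet Phase IV routing traffic rides on Ethernet with EtherType 0x6003. A small sketch of picking that EtherType out of a raw Ethernet II frame (the frame bytes here are synthetic):

```python
import struct

DECNET_ETHERTYPE = 0x6003  # DECnet Phase IV routing protocol

def ethertype(frame: bytes) -> int:
    """Return the EtherType of a raw Ethernet II frame.

    Bytes 12-13 of the header hold the EtherType, big-endian,
    right after the 6-byte destination and source MAC addresses.
    """
    (etype,) = struct.unpack_from("!H", frame, 12)
    return etype

# A synthetic frame: zeroed dst and src MACs, then the DECnet EtherType.
frame = bytes(6) + bytes(6) + struct.pack("!H", DECNET_ETHERTYPE)
print(hex(ethertype(frame)))  # → 0x6003
```

With tcpdump itself, a capture filter such as `ether proto 0x6003` should isolate the same traffic.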

I found the pyDECnet router here:
http://mim.stupi.net/pydecnet.htm

PyDECnet - A DECnet router written in Python

@feoh Sadly we have a tendency to ship units like that directly to recycling here. I did manage to get my hands on an incredibly sexy Wyse terminal once though.

Speaking of #VT100, I once wrote a pretty awesome terminal emulator for #DECnet under #DOS. Writing support for VT100 meant showing up at the local #DEC office and begging them for any sort of documentation. The woman at the reception thought I was mad, but managed to find an admin in the basement who was kind enough to provide some photocopies of the spec. I was 17 or 18. It was in the early 90s. Good times! :-)
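Most of what those photocopies would have documented are escape sequences of the form ESC [ parameters letter. Two of the best-known ones, sketched in Python (the helper names here are made up for illustration):

```python
ESC = "\x1b"

def cup(row: int, col: int) -> str:
    """VT100/ANSI Cursor Position (CUP): ESC [ row ; col H, 1-based."""
    return f"{ESC}[{row};{col}H"

def clear_screen() -> str:
    """Erase in Display, whole screen: ESC [ 2 J."""
    return f"{ESC}[2J"

# Clear the screen, then park the cursor at row 10, column 20.
print(repr(clear_screen() + cup(10, 20)))
```

An emulator runs this in reverse: it scans incoming bytes for ESC [, accumulates the numeric parameters, and dispatches on the final letter.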

@GenghisKen My benchmark for “ivory tower” command syntax was DEC EMA, the so-called Enterprise Management Architecture, with the only extant example being NCL.

NCL was the main command interface for what was variously known as DECnet Phase V, DECnet OSI, and DECnet Plus.

NCP was a syntactic joy, in comparison.

Back in that era, we had an official request to add EMA support into a DEC product under development, and the local manager decided to skip the work after my six-months-to-integrate-EMA estimate.

OSI was no joy to deal with, either.

#DEC #EMA #NCL #DECnet #OSI #DigitalEquipmentCorporation

The increases in computer hardware efficiency and density and improvements in related tooling is all quite remarkable.

In the mid 1980s, a herd of DEC VAXen (VAX-11/780-5, 2 × VAX-11/750, and a runt VAX-11/730 with the idiot, err, with the integrated disk controller IDC730 and an R80) were about a third of the ~3-ton AC chiller load, a #PRIME 2250 Rabbit (or ilk) was another third of the load, and an #IBM 4361 chest-freezer #mainframe was the remainder.

The VAXen were a little more than two ~10 meter rows of 19" racks, so a fair amount of the available floorspace.

With the tractor-feed printers, and with 19" racks of dresser-drawer-sized hard disk drives, and nine-track tape drives, a DEC PDC power distribution center—an ever-humming hydra built from a power transformer and arm-sized electrical cabling feeding power into the data center servers—and the rest of the common 1980s-era server accoutrements, and a pair of AC chillers, that whole data center was close to 15 meters square.

Running backups on the #DEC gear would wake up the second ~redundant chiller with the heat from the nine-track drives, and from the HDDs, too.

That two-drawer filing-cabinet-sized little PRIME box was just amazingly hot. And not "hot in a good way" hot. Just hot. Swelteringly hot. Got AC problems? Power down the Prime.

Where have we gotten to, some 35 years on?

A recent smartphone will outrun that whole herd of VAXen, has more memory and more storage and more cores, and quite possibly might outrun that whole data center. Though that IBM mainframe was fast for its era, and silly-fast at abending if the job setup wasn't just so.

Development tools and server operations tooling have all seen a massive increase in capabilities and features and scale and scope, too.

The Apple Xcode IDE is so far past pre-Y2K edit-compile-link-debug, and so far past an early tool such as LSEDIT, that there's really just no comparison. (But if edit-compile-link-debug works for you and yours, or if your critical apps are still based on #VAX, by all means, go for it. I won't makefile, err, won't make you change.)

We're also far past 1980s-era #DECnet, #SNA, and the rest, and the insecurity of the era. Sniffing DECnet, IP, or DECserver LAT traffic was trivial, too. (Some might make the case that attackers have no idea that DECnet even exists, or how to look for it, of course.)

Ah, well. Times change. The old stuff was good for its time, and can sometimes even still be useful, but IT and expectations and tooling have all moved on. And gotten immensely denser, and more power-efficient.

That I can hold that data center in my hand...

I have a habit of mirroring retrocomputing projects on my Github organization so that I can patch and integrate them. The most recent addition is PyDECnet, a user-mode #DECnet Phase IV stack that can be used to connect to #HECnet - and it is under relatively active development upstream.
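One quirk worth knowing when wiring a Phase IV stack to #HECnet: a Phase IV node sets its Ethernet MAC to AA-00-04-00 followed by its 16-bit DECnet address (area << 10 | node), little-endian. A sketch of that derivation (the helper name is made up for illustration):

```python
def phase4_mac(area: int, node: int) -> str:
    """Compute the Ethernet MAC a DECnet Phase IV station adopts.

    Phase IV packs the address as a 16-bit value, area in the top
    6 bits and node in the low 10, appended little-endian to the
    fixed AA-00-04-00 prefix.
    """
    addr = (area << 10) | node
    octets = [0xAA, 0x00, 0x04, 0x00, addr & 0xFF, (addr >> 8) & 0xFF]
    return "-".join(f"{b:02X}" for b in octets)

print(phase4_mac(1, 13))  # → AA-00-04-00-0D-04
```

That rewrite is also a quick sanity check: if the interface MAC doesn't start with AA-00-04-00 once the stack is up, the node isn't really on the DECnet LAN yet.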

#retrocomputing #openvms #vax
https://github.com/retroprom/pydecnet/tree/master/pydecnet

@scottmatter Once upon a time it was... I remember sending emails from DECnet to BITNET
#DECnet #bitnet

A memory about #email..

Around 1989 or so, I was in Massachusetts on a call with a colleague in #Washington, #DC. I was working at #DEC (Digital Equipment Corporation) on their internal #DECnet network, and I forget how he was connected (if I ever knew).

At any rate, he asked me to send him something and I did, and we continued speaking. We were both utterly astonished to hear the "feep" of its arrival in his inbox a few minutes later while we were still on the phone. Unheard-of speed.