Another new Let's Encrypt-secured website, another case of getting 82 hacking attempts in the first 30 seconds after it came online for the first time ever.

Do. Not. Kid. Yourself. "Oh, I'll delete the config files once I get it working." "Eh, I'm just some nobody. Who's going to care?" Yeah, I'll tell you who: people who watch for new sites to come online and then immediately slam them with hacking probes.

"I'll get to it soon" means you better do it within 3 seconds.

https://honeypot.net/2024/05/16/i-am-not.html


@tek Thanks for that. I too have received 25 requests for `/.git/config` in the last 24 hours alone.

Fortunately I have no files with any of the names listed in your post anywhere in my `/var/www` tree.

If you don't plan to serve it, don't put it anywhere from which it can be served.

@simon_brooke 💯. Assume that everything you upload will be found and downloaded and proceed appropriately.
@tek even if the hostname is gibberish, and DNS AXFR prohibited, most virtual hosts with https will happily divulge all hostnames the certificate applies to.
So yes. Get the security sorted and any framework configured before you deploy any meaningful or valid content.
Reminds me of the days of installing XP on an open internet connection. Compromised before first boot.

@zymurgic Heh, solid analogy. It was a race to download patches before getting pwned.

But it’s the cert transparency lists that are the real landing beacons for attackers.

@tek and lots of software has a setup wizard where you create the admin account...
@schnittchen Mmm-hmm. “Oh look, we have our own Wordpress server now!” “So do we.”

This got me thinking about setting up some subdomains whose sole purpose is building a list of those bots' IP addresses, and not publishing those subdomains anywhere other than in the TLS certificates.

I don’t even need a full HTTPS implementation running on those subdomains. All the server would need to do is receive the TLS ClientHello and parse the SNI.

@kasperd There are hundreds of millions of compromised IPs. I’ve tried something like that before but it became obvious I’d be blocking the whole Internet.

Blocking hundreds of millions of IP addresses doesn’t sound like a problem to my ears. Of course whether you want to do that or not may depend on the site you are hosting.

Of course with that many addresses you don’t want to list each IP address individually. Something a bit more thought through will be needed.

It would be nice if there weren’t that many abusive devices connected to the internet. But implementing systematic blocking of them might be part of what can incentivize the necessary cleanup.

@kasperd I had written a script to block an entire AS at a time. Pretty soon I started wondering if anyone would be left to access the site.
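For the curious, such a script amounts to roughly this shape (a sketch, not tek's actual code; fetching the prefix list for an AS, e.g. from a BGP table dump or a service like RIPEstat, is out of scope here):

```python
def nft_block_rules(asn: int, prefixes: list[str]) -> list[str]:
    """Emit nft commands that drop all traffic from one AS's prefixes.

    Uses an nftables interval set so that thousands of CIDR blocks
    stay cheap to match. Assumes an existing `inet filter` table with
    an `input` chain; adjust to taste.
    """
    set_name = f"as{asn}_block"
    return [
        f"nft add set inet filter {set_name} "
        f"'{{ type ipv4_addr; flags interval; }}'",
        f"nft add element inet filter {set_name} "
        f"'{{ {', '.join(prefixes)} }}'",
        f"nft add rule inet filter input ip saddr @{set_name} drop",
    ]
```

Interval sets are the part that makes this workable at "hundreds of millions of addresses" scale: the kernel matches against ranges, not individual IPs.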

@tek I set up the firewall first, then use this to check it. This also allows me to check that logging is working; I should see all those probes in my logs. (The Tekniquelly-inclined can use nmap if they wish.)

https://www.grc.com/shieldsup.htm

@tek I don't understand the point of your statement.

Do you advocate for LetsEncrypt? Are you against it? Do you recommend any particular action?
I'm kind of a layman on this topic.

@ankhZero @tek when you get a let's encrypt certificate, the associated domain names are published in a certificate transparency log. the original post is saying that attackers are monitoring those and trying to exploit newly set up servers before they're hardened
@mica @ankhZero Yep, that’s right. Let’s Encrypt is a fantastic service which I’ve donated money to. They’re making the Internet better. However, there are some annoying side effects inherent in the whole system, not because Let’s Encrypt did anything wrong, but because if you take the good, the bad is kind of unavoidable.
I don't think it's an endorsement of LetsEncrypt, just a statement that hackers have godlike powers to immediately divine when a new website exists and attack it within seconds. Which might seem to be the case, but what's really going on is DNS root servers are either controlled by, or selling out to website burglars. No other way for them to know who to attack so quickly.

So it's not LetsEncrypt's fault per se (though they could very well be selling out too).

CC: @[email protected]
@cy It’s all about the transparency logs. They basically say “here’s a new hostname and we’ve verified that it exists and is running a web server”. Those logs exist for good reasons, but they mean there’s zero obscurity anymore.

@cy @tek @ankhZero

Godlike powers to immediately divine
No other way for them to know

It's not DNS. It's Certificate Transparency logs.

You're right that it's not Let's Encrypt's fault, though. Any certificate provider worth their salt will be publishing public Certificate Transparency logs: https://certificate.transparency.dev/

That means that the instant you receive a certificate, anyone monitoring those logs will know about your website. Unfortunately, malicious actors are amongst those monitoring.
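To see how easy the monitoring is: crt.sh, for example, exposes CT data as JSON, where each entry's `name_value` field carries newline-separated names from the certificate. A sketch of pulling hostnames out of such a batch (field name assumed from crt.sh's output; verify against the live API):

```python
def hostnames_from_ct_entries(entries: list[dict]) -> set[str]:
    """Collect every DNS name seen in a batch of CT log entries.

    Assumes crt.sh-style JSON records, where `name_value` holds one
    or more newline-separated names from the cert's SAN list.
    """
    names: set[str] = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")  # normalize wildcards
            if name:
                names.add(name)
    return names
```

Point that at a polling loop (or a streaming CT feed) and you've recreated the attackers' "landing beacon" in an afternoon, which is exactly the problem.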


...what really?

No, seriously... really?

(checks the website)
Who watches the watchers?
CT depends on independent, reliable logs because it is a distributed ecosystem. Built using Merkle trees, logs are publicly verifiable, append-only, and tamper-proof.
Oh my god that is a stunningly bad idea. Both from a security and a privacy perspective. Is it just me? This is obviously The Worst Thing You Can Do, right?

CC: @[email protected] @[email protected]
@cy @ankhZero I think it’s good on balance. It keeps anyone from issuing a cert for www.google.com without being noticed. And we’re going to get those malicious requests in minutes anyway.
The malicious requests would not come within minutes, because there's nothing else that publicly announces when a new website exists, and if there is we need to cut that shit out too.

I can bet there are millions of computers currently compromised because they had a security leak when they were setting up their website. And I don't care who issues google.com. Our browser should warn us when google's key changes, and then people would fix that shit fast, without any requirement that everyone report their vulnerability to every malicious organization in the world.

CC: @[email protected]

@cy @ankhZero Those are legit arguments. I understand them and the concerns behind them. Counterarguments:

1. The attacks will come in moments anyway. Consider things like Shodan that continually scan the entire IPv4 space.
2. It's not just google.com. yourownsite.com could be spoofed, and the main tools preventing that, HSTS and cert preloading, are fraught with peril. Need to update to a new cert? Don't screw up the HSTS or no visitors can come to your site for the next 13 years!
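For a concrete sense of the 13-year figure: HSTS is just a response header with a `max-age` in seconds, and 13 years is about 409,968,000 seconds. A toy parser (simplified; real RFC 6797 parsing also handles optional quoted values):

```python
def parse_hsts(header: str) -> dict:
    """Parse a Strict-Transport-Security header value.

    Once a browser has seen e.g. max-age=409968000 (~13 years), it
    refuses plain-HTTP and untrusted-cert connections to that host
    until the timer runs out -- which is the footgun: ship a huge
    max-age by mistake and you can't take it back from past visitors.
    """
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy
```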

continually scan the entire IPv4 space.
That takes a LONG time, considering it's at least 4 billion packets per scan, routed all around the world. Anyway that's an argument for IPv6, not for "Certificate Transparency Logs".

It's also an argument for erasing the funds of rich fucks so they can't afford to continually barrage the Internet with their scans.
yourownsite.com could be spoofed
Not if nobody has heard about it! And what if it is spoofed? Do I call up the "Certificate Transparency Logs Police" and tell them that the record published to that log isn't legit? How do I prove that?
Don't screw up the HSTS or no visitors can come to your site for the next 13 years!
HTTP Strict Transport Security
https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

That just uh... tells browsers to change http to https. The only way no one could visit is if you removed SSL and only served stuff over HTTP. The article says "If the security of the connection cannot be ensured (e.g. the server's TLS certificate is not trusted), the user agent must terminate the connection and should not allow the user to access the web application" but that's just true for SSL in general, and it's stupid because your browser shouldn't decide what you are or are not allowed to do. It's not your mom!

So, if you mess up the HSTS, it... still connects via SSL and everything's fine. And if your certificate's expired, everyone's browser throws a hissy fit, regardless of that HSTS thing. Am I reading that right?

CC: @[email protected]

@cy @ankhZero My bad. I meant the cert pinning thing that was a fad a little while back, and fortunately seems to have been abandoned. But yeah, that’s exactly the idea. If you see that Tiny Registrar has issued an invalid cert, there are ways to report it. That’s critical feedback that must exist.

I think the general takeaway is that we’ve collectively ripped off the “obscurity” band-aid. That’s just not a thing anymore. Before you put anything on the ‘net, make sure it’s secure.

I shouldn't have my site reported to burglars before I can determine my registrar issued a bad certificate. I just go to mysite.com, and it's not my site. And "we've ripped off the security band-aid. That's just not a thing anymore," isn't exactly a resoundingly good idea.

Plus again, what's more likely: that a registrar will risk getting caught issuing bogus certificates, or that people will actually lose control of their machines, creating a botnet crisis of unprecedented scale?
@cy In practice, rogue registrars were frequently busted creating fake certs on behalf of governments and other spies. Now browsers generally won't use root certs from CAs which don't publish their certificate logs. It's bad if your or my server gets pwned. It's horrible if gmail.com gets compromised and a malicious actor can gather credentials from millions of users at once. That's the kind of thing the logs are meant to protect against. And that's not just hypothetical, sadly.
So, when a rogue registrar creates fake certs on behalf of an evil government, and publishes the logs of those certs, who's going to determine they're illegit, again? All the millions of users getting their credentials stolen will have their browsers check and see "oh it's in the log" and not warn them. Someone spots it and raises an alarm, but all those millions of users are already heavily censored and never hear about it. They go to https://mozilla.org to update their browsers to the latest secure version and...

For that matter, how are browsers supposed to check this certificate log, without that getting intercepted? They try to get it from https://certificatelog.com and…

And why is everyone using gmail?!

Anyway like I said, the browser could warn people when a website's key has changed. No public log of all websites needed.
@tek Reminded me about how connecting a fresh Windows XP directly to the internet gets it infected before it finishes downloading security updates. (IIRC that was even before it went EOL, feeling a bit old now)
@tek I assume there are real time feeds out there of new domain registration and also DNS lookups and the bad actors are just running tools that constantly try and retry til they come online (and after). Blech. Sad really.
@tek Oh lol reading the replies it turns out Let's Encrypt is publishing it which yeah makes sense. I just assumed everything else also has legit (or not) ways to stream possible new targets.
@tek All that said: yeah. Nothing went on my Hetzner box that wasn't intended to be public while I got nginx and Let's Encrypt working. Terrifyingly, I was doing it while suffering frequent small seizures in 2023.

@r343l That's smart operations.

And wow, yeah, that would add a whole dimension that I couldn't even imagine.

@tek In retrospect it's absurd. When I was done flipping DNS (it had been managed by commercial wordpress before), I literally had my first seizure that was noticeable to family (because I went non-responsive and afterward spoke gibberish). But apparently decades of training will let you do tasks "successfully" under absurd conditions. Not recommended. Later I had to check I didn't eff it up. Luckily I take complete notes of every command/change as I go as a regular practice.

@r343l My family is so accustomed to me being non-responsive and uttering gibberish. 😀

Half of my blog is me publishing the notes I took for some complicated thing that I'd never remember if I didn't write it down.

@r343l The ways are legit, just regrettable. The transparency lists are important to have. It's just a bummer that the bad guys can weaponize them like that.

I've heard arguments that we shouldn't have the transparency lists because they aid attackers, but to me they all sound like "if only attackers didn't know we existed, we'd be perfectly safe!", which is just flat-out wrong.

@tek to this point I could see an argument for a shortish delay but yeah it's an illusion. I assume there are attackers that are trying just random IP addresses (that aren't known to be well hardened or even have a server at them yet) at intervals.
@r343l Domains, I'm certain. I don't think that's true for DNS, not least because I ran my own DNS for years and I'm certain I didn't publish new names to any list. I see queries for common hostnames like @ and www, but you can make up any random string of ASCII and start getting hits on it the moment you get an SSL cert for it.
@tek has always been crazy to me that there's web services that are configured with a default admin user with a publicly known password
@tek or even worse, an initial setup step through a web browser with no authentication
Years ago, a student put a Raspberry Pi with a default account and password on a public IP address with very little filtering. It took 23 seconds after bootup before someone logged in unauthorized via ssh. And that IP most likely had hosted some other system before, so it was just random IP scanning.
Now, as any https site's existence is revealed via transparency logs (not only LE), it is immediate.
Have access control in place (a firewall or server config) if you need to make changes before going fully live.
@tek
@tek What about an ipv4 server without a dns name, or ipv6 with letsencrypt? I.e., is it CT or just ipv4 sweeps?
@dascandy @tek especially when you have a certificate from any authority you will get scanned. CAs have to provide a public log of every certificate they sign. This log is constantly scanned by malicious actors.

@tek people always monitor "new". New often means inexperienced. Which means easy to hack.

You see it at companies too. New employees get notably extra phishing in their first few weeks. And that's even easier to track. They just look at places like LinkedIn to see who changed their status and figure out the employee email syntax.

@tek

Really helpful that /.well-known/ means you can't just blanket-block /. in a URL in many cases.

@tek yep, i was very surprised by that the other day. exact same query too
@Puffin It's kind of shocking the first time, isn't it?
@tek i mean i knew it was going to happen, since i look at the logs on my server, but less than 3s was the real surprise

@tek I once had to install Windows Server and didn't have the latest version. So I installed the version I had on DVD, then hooked it up to the internet to download and install the latest service pack.

It was hacked before it could do so.

We provide managed secure hosting services for SMEs. Most people we speak to say "we don't need you; what's wrong with the $5 public VPS our web guy uses?" They don't have a clue what is lurking out there...

@wanwizard "If you think you can do it securely for cheaper, you're more than welcome to give it a shot. Here's my business card. See you next month!"
@tek Ha! Security is like insurance. It is expensive and a cost, until you've needed it and didn't have it.
@wanwizard Story of my career, right there! 😂😭😂
@tek People tracking CT logs and doing weird shit with their new targets is exactly why I decided a long time ago to only get wildcard certs. Sure, you can guess there's a "www" among those, but the others, good luck with a dictionary.
The background radiation is bad enough as it is already, I don't need to advertise my presence even more.. gives me at least some breathing space to test my own shit before others do.
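For what it's worth, the reason the wildcard trick helps: a CT log entry for `*.example.com` only confirms the parent domain; the actual subdomain labels stay unguessed. A sketch of RFC 6125-style name matching, simplified (the wildcard must be the entire left-most label, and it matches exactly one label):

```python
def matches_san(hostname: str, san: str) -> bool:
    """Check a hostname against one SAN entry, wildcard-aware.

    Simplified RFC 6125 behavior: "*.example.com" matches
    "www.example.com" but neither "example.com" nor
    "a.b.example.com". Anything an attacker sees in the CT log
    is the SAN, not the hostnames it will eventually cover.
    """
    if san.startswith("*."):
        base = san[2:]
        labels = hostname.split(".")
        return len(labels) >= 2 and ".".join(labels[1:]) == base
    return hostname == san
```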

@WooShell I get both sides of this. One hand: obscurity isn’t security. Other: but I don’t need to take out a billboard.

I think the CT logs are good on balance. It slightly changes the timeline but not by a whole lot.

@tek what, no /wp-login.php? are they even trying
@tek Funny thing is it ain't even people, it's just bots that hammer any new site that pops up with predefined scripts, running nmap's ssh-brute and some WordPress exploits.