For #CyberSecurityAwarenessMonth, I'd like to start with a basic assumption we often seem to overlook:

If you don't need the data, don't keep it. Or put another way: you can't lose what you don't have.

Cheap (virtually unlimited) storage encourages us all (people and organizations) to keep lots of sensitive data we don't need - and there are plenty of examples of that coming back to bite people in sensitive places.

For #CyberSecurityAwarenessMonth, let's also try to dispel an old, out-of-date idea about security programs:

The idea that "we don't have any data/IP/PII/PHI/etc. worth stealing, so we aren't a target."

If you learn nothing else from the epidemic of #ransomware, please learn that cybercrime is about extortion - by whatever means works. Taking data hostage is merely one of several attack types. If you depend on a particular system, particular data, or particular capabilities that could be disrupted by a malicious actor, you must recognize the risk of those and protect them accordingly.

For #CyberSecurityAwarenessMonth, I'd like to bring up a trend that concerns me. I'll call it the #ShinyObject trend. Security programs need to be built on fundamentals - on a solid base that is flexible and thoroughly prepared to address threats. A #CyberResilient security program is focused on reducing risk. But the Shiny Object program is focused on "the threat of the day" (not even the threat of the week or month anymore; they come too fast).

Even worse, when organizational leaders expect a new answer from their security team every time a new headline appears on the "CEO Doomsday Weekly" website and ask "are we protected from <insert shiny object>?", we show that:

1. Leadership doesn't have faith that our program is resilient
2. We haven't helped them understand our program's resilience

I think this problem exists in our personal lives as well: we focus on "oh no, I better patch today because of <insert shiny object>" vs. "I'm doing my routine patching as part of how I keep my systems safe." (assuming we patch at all)

Don't fall into this trap.

For #CyberSecurityAwarenessMonth, let's recognize the point of having an awareness month for cybersecurity in the first place:

It isn't to inform everyone of all the stuff your company's cybersecurity team is doing, but to help everyone understand how cybersecurity impacts them, both personally and professionally.

This is our industry's far less catchy #SmokeyTheBear opportunity, since we don't have a government organization with a marketing budget supporting us.

At least, that's how I see it, and how I intend to use it.

CISA and NSA have released their "Top 10" list of cybersecurity misconfigurations and I'm assured they aren't from the home office in Wahoo, Nebraska. (disconcertingly, they also didn't provide them in a 10-to-1 countdown, which is just, well, sad)

1. Default configurations of software and applications
2. Improper separation of user/administrator privilege
3. Insufficient internal network monitoring
4. Lack of network segmentation
5. Poor patch management
6. Bypass of system access controls
7. Weak or misconfigured multifactor authentication (MFA) methods
8. Insufficient access control lists (ACLs) on network shares and services
9. Poor credential hygiene
10. Unrestricted code execution

https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-278a

#CyberSecurityAwarenessMonth

For #CyberSecurityAwarenessMonth, let's remember that a strong security program isn't just about technology.

One attack vector we continue to see succeed against organizations is the corruption of credential-reset processes - both password and multi-factor authentication resets. This attack vector is not about technology; it is about process and people.

This is one example - and perhaps one of the most impactful. Give those processes a thoughtful review, and then make sure your people understand why they need to adhere to them.

For #CyberSecurityAwarenessMonth, let's talk about what #MultiFactorAuthentication (MFA) is and is not.

There are three recognized factors of authentication:

  • Something you know (like a password or PIN)
  • Something you have (like a hardware token or software certificate)
  • Something you "are" (like a fingerprint, DNA sample, or facial recognition)

Multifactor authentication is any time two or three of these factors are used together for authentication, such as:

  • password and token code
  • password and face scan
  • token and PIN
  • token and fingerprint and PIN

What MFA is NOT is using multiple examples from a single authentication category:

  • password and a second password
  • fingerprint and face scan
  • token and certificate

Why aren't these considered MFA? Because MFA's core purpose is to protect against the compromise of an entire type of authentication, not simply the compromise of one password or one certificate. A malicious actor stealing a password database is one example of exactly why two different passwords are not considered a viable MFA solution.
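To make the "distinct categories" rule concrete, here's a minimal Python sketch. The factor-to-category mapping is my own illustrative list, not any standard taxonomy:

```python
# Map each authentication factor to its category. This mapping is
# illustrative only - real systems recognize many more factor types.
FACTOR_CATEGORIES = {
    "password": "know",
    "pin": "know",
    "hardware_token": "have",
    "software_certificate": "have",
    "fingerprint": "are",
    "face_scan": "are",
}

def is_mfa(factors):
    """True only if the factors span at least two distinct categories."""
    categories = {FACTOR_CATEGORIES[f] for f in factors}
    return len(categories) >= 2

print(is_mfa(["password", "hardware_token"]))  # True: know + have
print(is_mfa(["password", "pin"]))             # False: both "know"
print(is_mfa(["fingerprint", "face_scan"]))    # False: both "are"
```

Note that the count is over *categories*, not factors - which is exactly why two passwords never qualify.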

For #CyberSecurityAwarenessMonth, let's talk about the relatively new concept of the #Passkey as an authentication method.

A passkey is a modern replacement for a password that solves for key issues with passwords:

  • You don't have to make up a password for a site or service that utilizes passkeys
  • You don't then have to remember those passwords either

Domain experts consider passkeys more secure than passwords for the simple reason that people don't generally create good passwords, and passkeys are proving (thus far) to be harder for phishing and other credential-stealing attacks to defeat.

You'll have to configure the use of passkeys for every website/service/app that you will use them on, but the benefits seem to be real, simplifying login and improving security.

The FIDO Alliance (https://fidoalliance.org), which develops open and free authentication standards like UAF, U2F, and FIDO2 to help reduce the world's reliance on passwords, has proposed a logo for use on sites that accept passkey access.

As we continue to talk about #CyberSecurityAwarenessMonth, let's pivot to something you can do for yourself - AND your business.

Modern #PasswordManagers are fantastic tools that help solve for one of the biggest issues we have in cybersecurity today: bad and reused passwords.

A quality password manager these days has the following features:

  • End-to-end encryption, with the encryption key "hidden" from the password manager company
  • Support for passkeys - which should help us transition to a "less-password" future (not to be confused with a "passwordless" future)
  • Support for MFA solutions
  • The ability to create shared groups of passwords, so you can share passwords to things like your news subscriptions with your household, but keep your bank passwords private - this is even more powerful in a business setting
  • Multi-platform, multi-OS, and multi-browser support - even Linux for those of us who like that sort of thing

I'm not here to endorse any one password manager - you can find plenty of reviews on them yourself. But for heaven's sake, if you're not using one today, why not?

Closing out week two of #CyberSecurityAwarenessMonth and it feels like a good time to bring up one of the most effective things individuals and organizations alike can do to improve their cybersecurity posture: patch, patch, and patch again.

Sounds basic? So is washing your hands to help prevent the spread of germs. But like hand washing, some of us take patching more seriously than others.

Unlike hand washing, there's no need for hot water and soap. So just patch. Regularly. Often. Make it a routine.

As we approach the halfway mark for #CyberSecurityAwarenessMonth, I'd like to talk about tech debt. Just like there is a fine line between a classic car and a rustbucket, so, too, there is a fine line between a reliable piece of software and an unsupported, unpatched, gaping hole in your cybersecurity defenses.

This isn't just true at the office, this is true at home too. As an example, Android versions 10 and older are no longer supported with security updates. Yet more than 25% of worldwide Android deployments are still on versions 10 or earlier.

Does your SmartTV still get security updates?
Is your WiFi router's plastic turning yellowish?
Are you one of the people propping up that 3% market-share of Windows 7?

Maybe it is time to give up on that "project car" in the front yard and rid yourself of some of that tech debt.

Continuing on with #CyberSecurityAwarenessMonth, I want to highlight the importance of making good #risk decisions.

At a basic level, risk calculations have to take into account three factors:

  • likelihood of the negative event happening
  • how severe the event would be
  • how your countermeasures will impact either of those other factors

We, as humans, tend to exaggerate the likelihood of "big, scary" events, and we prepare heavily to mitigate risk from them. Let's call these the "zombie apocalypse" events that people over-prepare for. By contrast, we tend to underestimate the potential impact of "routine" events. I'd point out that far more people in the US have lost their homes in the past 100 years to fires caused by careless cigarette use than to a zombie apocalypse.

As we think about risk from a cyber security perspective, it is important to remain aware of this bias, and to actively resist it.
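The three factors above can be combined in a toy calculation. The 0-1 scales and the sample numbers here are made up purely for illustration:

```python
# A toy residual-risk calculation from the three risk factors discussed
# above. Scales and sample values are illustrative assumptions only.
def residual_risk(likelihood, impact, mitigation=0.0):
    """likelihood and impact on a 0-1 scale; mitigation is the fraction
    of risk your countermeasures remove (also 0-1)."""
    return likelihood * impact * (1.0 - mitigation)

# "Zombie apocalypse": vanishingly unlikely, catastrophic impact.
zombie = residual_risk(likelihood=0.0001, impact=1.0)

# Careless cigarette fire: far more likely, still severe.
fire = residual_risk(likelihood=0.01, impact=0.8)

# The "routine" event carries far more residual risk than the scary one.
print(fire > zombie)  # True
```

Plug in your own honest estimates and the bias usually becomes obvious: the mundane event wins.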

Continuing this thread for #CyberSecurityAwarenessMonth, I'd like to highlight how choices in software can make big differences in the amount of data that is slurped up about you by big companies. If we accept that privacy is a part of cybersecurity, then these are important tips:

  • Third party cookies are on by default in most browsers. Currently, these are one of the most insidious tracking tools that impact web browsing. Privacy focused browsers like #Firefox and #DuckDuckGo are rare exceptions that stop this.

  • Nearly every app you install from a "major company" tries to read data off of your phone, including a shocking amount of data that doesn't seem to have any actual relationship to the service the application is supposed to perform. I generally recommend uninstalling apps you only use occasionally, or installing a tool like the #DuckDuckGo mobile app that helps block those queries and transmissions (especially for those of us on Android, which lags behind Apple iOS in privacy protections - which isn't to say iOS is perfect here).

  • You may also want to understand what your car is doing with your personal data. The #Mozilla foundation has been researching car privacy, and their findings are disturbing. https://foundation.mozilla.org/en/privacynotincluded/articles/its-official-cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/

Addressing these - if you don't want to have to be a privacy guru - will require political pressure, even laws. Sorry, even cyber security can't avoid politics.

With less than two weeks to go for #CyberSecurityAwarenessMonth, let's hit on a simple concept that is difficult to act on because of our human nature:

Do not click the link!

No, don't do it.

OK, so I've provided this advice, and if you scroll up even one or two posts, I've shared links for you to click, haven't I? Wow, what a jerk I am for giving you advice and then also daring you to ignore it. You're right. The advice has to be smarter than that. Clicking any link has cyber security risk. Connecting to the Internet has cyber security risk. So let's talk about not clicking the high risk links.

Recognizing those high-risk links can be difficult, but here are some suggestions for identifying them:

  • They came from a text message from a number you don't already know. (When I start getting texts from UPS, FedEx, etc., I create a contact for them so I know this is the number they use to text me.)
  • They try to convince you that you need to click the link URGENTLY or something you want to happen won't - you won't get that delivery you're waiting for (which delivery? Who knows, but at least one of them is probably important, maybe it's that one) or some other urgent need.
  • They demand information from you and insist you click the link to provide it.

Whenever you suspect that the link came from a malicious source, always browse to the "real" website for that service/company and use your account, a tracking number, etc. to attempt to get to the information through that site, without clicking the link.

Unfortunately, you're going to have to use your judgment on these things. We haven't found a way to prevent malicious links as a whole. So keep your wits about you, think before you act, and do your best.
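The heuristics above can even be sketched as a simple triage function. The keyword list and "known senders" set here are my own hypothetical examples, not a real phishing filter:

```python
import re

# A toy link-triage sketch based on the heuristics above: unknown sender,
# urgency language, and a link together form a classic phishing pattern.
# The word list and contact set are illustrative assumptions only.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "act now", "final notice"}
KNOWN_SENDERS = {"+15551234567"}  # numbers you've saved as contacts (hypothetical)

def looks_high_risk(message: str, sender: str) -> bool:
    text = message.lower()
    unknown_sender = sender not in KNOWN_SENDERS
    urgent = any(word in text for word in URGENCY_WORDS)
    has_link = bool(re.search(r"https?://", text))
    return has_link and unknown_sender and urgent

print(looks_high_risk(
    "URGENT: your package is suspended, click https://bit.ly/x",
    "+15550000000"))  # True
print(looks_high_risk(
    "Your package arrives today: https://ups.com/track",
    "+15551234567"))  # False
```

No filter like this is complete - it's just the same judgment call you'd make in your head, written down.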

We're in the home stretch of #CyberSecurityAwarenessMonth. Seems like a good time to talk about the "castle wall" security mentality.

Just like real castles belong to a time long ago in our history, so, too, does this sort of thinking in cybersecurity. Gunpowder, cannon balls, paratroopers, and many more offensive improvements in warfare made the castle wall and moat of centuries gone by obsolete.

So, too, have advances in attacks against organizations. Additionally, your organization can't hide behind those castle walls and expect to be productive: your people work from outside the castle, and your customers and partners all live outside of your castle as well.

Make sure your cybersecurity program is built with those realities in mind. Be ready to detect and respond when a malicious actor gets past your border controls. Understand when that trusted person has either had their identity compromised or has chosen to become a threat actor. Segment and segregate your infrastructure. Recognize that your walls are just one security control, and every security control must be watched, maintained, and adjusted continuously.

We have just over a week left for #CyberSecurityAwarenessMonth. Seems like a great time to talk about compliance.

In the decades I've spent in this business, I've never seen compliance used successfully to push a broad security program. Here's what happens instead:

  • Organizations argue scope down to the barest possible limits so as to limit what they are responsible for
  • Security programs are then built to that barest possible scope, not for the organization as a whole
  • Organizations then shop for an auditor who will validate their interpretation - vs. one who will spur them to improvements

In short, compliance initiatives are seen by organizations as impediments to conducting business, and therefore deserving only of the smallest amount of attention and money that will satisfy the auditors. Your arguments for the need for and value of cyber security need to have relevance to the business/organization you're trying to protect, and compliance barely registers.

I'll readily admit, that as a wide-eyed, innocent security practitioner for whom security was a goal in and of itself, it took me way too long to learn this lesson for myself.

For today's #CyberSecurityAwarenessMonth post, I want to talk about the difficulties that need to be addressed with public threat data.

As an MDR company, we at @Deepwatch know that simply ingesting unfiltered public threat feed data into a SIEM as searches is a great way to run up your false positive rate by orders of magnitude. We also know that malicious actors have the same access to those threat feeds that you do, and they watch to see if their tactics are being reported on so they can pivot (when possible).

Because of these drawbacks, here are some suggestions to maximize your value from threat feed data:

  • Curate the data before you use it for searching. Picking out higher quality feed data, and validating it has relevance in your environment will help you minimize the busy work of false positives.
  • To aid in curating that data, consider industry-specific feeds and commercially available feeds. These may not solve your curation problem, but they can help reduce it.
  • Recognize that by the time you get publicly available data, it may be stale. Search as far back in your security data/logs as you reasonably can for matches on these IOCs, not just the past 24 hours or just "going forward."

Even with these drawbacks, external data about threats is critical to supporting a #CyberResilient organization.
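As a rough illustration of the curation idea, here's a minimal Python sketch that keeps only IOCs that are reasonably fresh and come from feeds you've scored as higher quality. The field names, age cutoff, and quality scale are all assumptions for the example:

```python
from datetime import datetime, timedelta, timezone

# Illustrative curation thresholds - tune these to your environment.
MAX_AGE = timedelta(days=30)
MIN_FEED_QUALITY = 3  # on a hypothetical 1-5 feed-quality scale

def curate(iocs, now=None):
    """Keep only IOCs that are fresh and from higher-quality feeds."""
    now = now or datetime.now(timezone.utc)
    return [
        ioc for ioc in iocs
        if now - ioc["first_seen"] <= MAX_AGE
        and ioc["feed_quality"] >= MIN_FEED_QUALITY
    ]

now = datetime(2023, 10, 20, tzinfo=timezone.utc)
feed = [
    {"indicator": "203.0.113.7", "first_seen": now - timedelta(days=2), "feed_quality": 4},
    {"indicator": "198.51.100.9", "first_seen": now - timedelta(days=90), "feed_quality": 5},  # stale
    {"indicator": "192.0.2.1", "first_seen": now - timedelta(days=1), "feed_quality": 1},  # low quality
]
print([i["indicator"] for i in curate(feed, now)])  # ['203.0.113.7']
```

Real curation involves much more (relevance to your environment, deduplication, confidence scoring), but even a coarse filter like this cuts false-positive busy work.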

Today's #CyberSecurityAwarenessMonth topic is choosing the highest priority log sources for your #SecOps needs. Unfortunately, there is no "one size fits all" answer here; every organization has different needs and priorities. However, most organizations will see significant value with these data sources as a starting point:

  • Active Directory (or other central auth system)
  • Cloud Native Infrastructure
  • Endpoint Detection and Response
  • Firewall
  • Multi-Factor Authentication
  • Web Proxy (or related solution)

This list gets you coverage for a wide variety of key data relevant to almost any cybersecurity incident. Add other data sources as appropriate in your environment.

For the last Thursday of #CyberSecurityAwarenessMonth, I'd like to ask a question:

As a #CyberSecurity professional, how many weeks of training do you take per year?

I ask because I know we have an ever increasing number of tools to manage, we're being asked to be more fully integrated into understanding #BusinessRisk, and the environments we are protecting continue to get more and more complex. I want to see how we're training ourselves to address these issues.

  • 3 or more weeks yearly - 0%
  • 2 weeks per year - 0%
  • 1 week each year - 0%
  • I watched a webinar once - 100%

As #CyberSecurityAwarenessMonth winds down to the final few days, I'd like to bring up a few #OnlinePrivacy thoughts - after all, privacy and security ride in the same cart, even if they aren't exactly the same. Sadly, online communications these days don't make it easy to understand your privacy options, or protections. So here are a few concepts to help guide you.

  • If you post it online, it never goes away. - This is primarily about social media solutions, but is a general guideline to remember anyway. So when you post something, know that it will be there in 10, 20, 30, 40 years. Even deleting a post later doesn't really mean it goes away, unless you live somewhere covered by GDPR data protections, and even then things are iffy.

  • Stop sending private data via 'Internet postcards.' - SMS/MMS (texting) and email (along with many instant messaging solutions) are effectively the same as postcards through the regular mail. The post office can read your postcard, and so can the local postal carrier - would you put your SSN, credit card number, or other private data on a postcard? I certainly hope not. These online solutions are no different: your cellular provider (and your recipient's provider) can read your SMS messages. Every email forwarding system between your mailbox and your recipient's can read your email. And every free email solution I'm aware of (Gmail included) is mining your inbox for advertising data to share your preferences and interests with advertisers - algorithms, and possibly even AI, are intentionally reading all those postcards.

  • Do not confuse "will not share" with "cannot share" your data. - If an online solution tells you they won't share your data, that means they have the ability to see and read your data, but they promise not to use it beyond whatever they feel they can do with it short of sharing it out. That's a choice they are making. It's also a promise that your data will be protected in a breach situation. (We know what an empty promise that can be these days.) However, when a company tells you they cannot share your data because even they can't read it, you've got something going. @signalapp is one of this group of organizations. They, and their peers (of which there are disappointingly few), are not promising they'll "choose" to respect your privacy subject to the whims of their C-suite; they're promising they can't violate your privacy with regard to the content of your messages, because they can't read the payloads anyway (and neither can anyone but your intended recipients, to within the best of everyone's abilities). So read privacy statements carefully, and pick good communications solutions to protect your privacy.

Following these three guidelines won't guarantee your privacy online, but they will help significantly.

In this penultimate #CyberSecurityAwarenessMonth 2023 post, a thought about all those restrictive company rules like #AcceptableUsePolicy, #DataAccessPolicy and restrictions on accessing organizational data or systems.

No matter if you agree with them or not, these policies represent your organization's decisions on how to reduce risk and protect the organization's cyber assets. In fact, they are implicitly stating that if you follow these rules and the company is somehow breached through activities that are allowed in these policies, you aren't personally to blame.

When you violate these policies by doing things like copying company data to your personal computer or other, similar actions, you're making a risk decision on behalf of your organization - a decision you're not authorized to make. You're saying: "I'm making a better risk decision about my organization's cyber security and cyber resilience than it made with those policies." (Unless you're the author of those policies, then you're saying something even worse)

Accepting risk is a big deal. You get to accept risk for yourself all day, every day. Accepting it for your organization can be the difference between the company existing another day or folding completely, with all the legal and financial ramifications that go along with that. The responsibility of those decisions goes well beyond "but I like using my Mac" or "well, I didn't have my work computer with me but I needed to do <insert task>."

If you have a legitimate reason to need an exception to these policies, please always submit a request to the appropriate resource (in writing - always give yourself a "paper" trail) so that those who are responsible for making risk decisions for the company can do their jobs. You never know, they may approve your request.

Rounding out #CyberSecurityAwarenessMonth, I'm going to recycle an article I wrote about a year ago on #Misinformation's impact on cybersecurity. I reread it as I was planning this post, and realized it is still as valid as when I wrote it. https://www.linkedin.com/pulse/cybersecurity-misinformation-security-problem-bill-bernard-cissp/

The only place this didn't age well was my attempt to keep politics out. Sadly, #Politics and #CyberSecurity intersect at the point of funding for CISA. That raises the question, "what is it about CISA that is causing some in Congress to want to defund it?" (I'll leave it to you to find the news articles on that.) While I try to refrain from day-to-day politics conversations here (as I have both strong opinions and a strong need to not alienate would-be business partners), this one needs to be brought up, in context, as a direct example of this intersection, and of the need to not let political motives supersede our attempts at improving cybersecurity in the US as a whole.
