Theia Institute: Non-Profit Think Tank

@theia@infosec.exchange
18 Followers
20 Following
49 Posts
A registered 501(c)(3) non-profit focused on new approaches to #cybersecurity, #AIgovernance, #AIethics, and other technology trends.
The institute's mission is to provide cutting-edge thought leadership that reframes traditional security leadership for the 21st century by addressing cybersecurity, enterprise risk management, AI governance & ethics, and related topics. In addition, the institute is also tasked with providing practical security solutions within a modern business context.
Verification Link: https://gist.githubusercontent.com/todd-a-jacobs/205611c1e2195f37117f1f58ec8b549d/raw/fb9a22f04d04978baaee511f9452b590255a49fa/@theia@infosec.exchange.html
Web Site: https://www.theiathinktank.com
LinkedIn Page: https://www.linkedin.com/company/98059988
LinkedIn Community: https://www.linkedin.com/groups/12856278

I will be moderating an executive round table via Zoom from 3:00-4:30pm US/Eastern tomorrow for The Ortus Club. The topics are ones I’m always passionate about: #cybersecurity & #businessresilience.

This is a peer-driven round table. No one’s pitching anything. The goal is to bring a broad spectrum of industry luminaries together to share their experiences, insights, and collectively brainstorm about ways to future-proof our security strategies.

The round table is open to IT & cybersecurity leaders in North America. Space is limited, but there are still a few no-cost seats remaining for the #thoughtleaders in my extended network. You can sign up at the link below, but the clock is ticking.

No matter how well-attended these events are, it’s always more fun with a friendly face or two in the crowd. I hope yours will be one of them, and look forward to seeing you there!

https://www.linkedin.com/posts/todd-a-jacobs_why-legacy-cybersecurity-is-putting-your-activity-7310488468139200512-nodq

Why Legacy Cybersecurity is Putting Your Business at Risk—And How to Build… | Todd A. Jacobs

There's too much talk about centralized #AI, and not nearly enough about #edgeAI. The real future will mirror the historical shift from centralized mainframes to decentralized personal computers: lower-cost, lower-power, distributed computing at commodity prices.

#DuckDuckGo is now offering free, #anonymized access to a number of fast #AI #chatbots that won't train on your data. You currently don't get all the premium models and features of paid services, but you do get access to privacy-promoting, anonymized versions of smaller models like GPT-4o mini from #OpenAI and open-source #MoE (mixture-of-experts) models like Mixtral 8x7B.

Of course, for truly sensitive or classified data you should never use online services at all. Anything online carries heightened risks of human error; deliberate malfeasance; corporate espionage; legal, illegal, or extra-legal warrants; and network wiretapping. I personally trust DuckDuckGo's no-logging policies and presume their anonymization techniques are sound, but those of us in #cybersecurity know the practical limitations of such measures.

For any situation where those measures are insufficient, you'll need to run your own instance of a suitable model on a local AI engine. However, that's not really the #threatmodel for the average user looking to get basic things done. Great use cases include finding quick answers that traditional search engines aren't good at, or performing common AI tasks like summarizing or improving textual information.

The AI service provides the typical user with essential AI capabilities for free. It also takes steps to prevent for-profit entities with privacy-damaging #TOS from training on your data at whim. DuckDuckGo's approach seems perfectly suited to these basic use cases.

I laud DuckDuckGo for their ongoing commitment to privacy, and for offering this valuable addition to the AI ecosystem.

https://duckduckgo.com/chat


The entire institute, including our board, panelists, and members, is extremely proud of this news about one of our founders, Dr. Lisa Palmer. Way to go, Lisa!

She's not currently on Mastodon, but you can find out more about her work and her many impressive projects at:

  • https://www.drlisa.ai
  • https://x.com/palmerlisac
  • Unfortunately, LinkedIn doesn't allow deep links directly to Dr. Palmer's feed. However, for more information, you can find her original post linked from our LI company page instead. https://www.linkedin.com/feed/update/urn:li:activity:7248061550496772098


This is an #AIethics and #AIgovernance issue that the institute has already taken a firm stand against. We hope you'll join with the institute and many of its members in sharing how to opt out of this new #datamining practice on #LinkedIn.

Thanks to Dr. Lisa Palmer for the top post referenced in the link below, and to @todd_a_jacobs for his recent posts raising awareness of the related privacy, legal, and copyright issues we must all grapple with as #AI embeds itself deeper into our daily lives.

Please share your own opinions and experiences with us in the comments. Theia is a non-profit think tank that thrives on interaction; we are *not* a megaphone and really do want to hear from you!

1. Our post with comments
https://www.linkedin.com/feed/update/urn:li:activity:7242964789608550402/

2. Dr. Palmer's original post
https://www.linkedin.com/feed/update/urn:li:activity:7242951902429151233/


#Shellprogramming skills are pretty portable between #Linux, #BSD, and #macOS, but some of the underpinnings of macOS are non-standard. It helps to remind yourself that macOS is not a standard #BSD #Unix variant; Apple's #Darwin based systems do a lot of embrace-and-extend under the hood. Here's a practical example that comes up often in the enterprise.

Most #Linux systems export the current user's login name as the LOGNAME environment variable (often by sourcing /etc/profile) and may also export the user's preferred shell (set by an application or the user) as the SHELL environment variable. The canonical way to look up the user's default login shell on most Unix-like systems is to parse /etc/passwd or another NSS database with the getent utility, e.g. getent passwd "$LOGNAME" | cut -d: -f7.

There are other ways to do this on Linux too, but macOS doesn't ship this common userspace utility. Instead, Darwin relies on Open Directory (see opendirectoryd(8)) for storing and accessing user account records, which requires other tools to retrieve the information. You can query a user's login shell on Darwin like so:

# directly from the Open Directory service, local or remote
dscl . -read "/Users/$(id -un)" UserShell | awk '/^UserShell:/ {print $2}'

# from the directory service's cache on the local system
dscacheutil -q user -a uid "$(id -u)" | awk '/^shell:/ {print $2}'

Be aware that there are other ways to do this, too, but old-school utilities like whoami have been deprecated in favor of id -un, and finger as implemented on most systems (e.g. via [x]inetd, or reading various #dotfiles from users' directories locally or over the network) is considered a security risk.

In containers, especially with non-standard shells, or with centralized #IAM using #LDAP or #ActiveDirectory, you may have to match the local #userID to a remote #LDIF record before grepping for the data you need. In addition, nsswitch.conf, PAM modules, NIS+, or other less-common data sources may need to be consulted, and each will generally have specific utilities for looking up stored or cached information equivalent to what's normally provided in the seventh field of each user's passwd entry on standard Linux and Unix systems.

As always, your mileage may vary based on use case or implementation details. On the plus side, problems are rarely insoluble when you know where to dig for a solution!
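Since the Linux and Darwin lookups differ only in which tool you consult, they can be wrapped in a single function. Here's a minimal, hedged sketch; the function name get_login_shell is my own, and it assumes getent on Linux/BSD, dscl on Darwin, and falls back to the SHELL environment variable when neither tool is available.

```shell
# Portable login-shell lookup: pick the tool the platform actually provides.
get_login_shell() {
  user="$(id -un)"
  if command -v getent >/dev/null 2>&1; then
    # Linux/BSD: the shell is the seventh colon-delimited passwd field
    getent passwd "$user" | cut -d: -f7
  elif command -v dscl >/dev/null 2>&1; then
    # Darwin: ask Open Directory for the UserShell attribute
    dscl . -read "/Users/$user" UserShell | awk '/^UserShell:/ {print $2}'
  else
    # Last resort: trust the environment, defaulting to /bin/sh
    printf '%s\n' "${SHELL:-/bin/sh}"
  fi
}

get_login_shell
```

Whatever branch runs, the result should be an absolute path like /bin/zsh or /bin/bash; as noted above, centralized IAM setups may still need their own directory-specific queries.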

#Cybersecurity and #encryption are complex. The institute's educational mission aims to bridge the practical business and personal impact of technology with the hidden complexities that make secure, trustworthy systems seem unattainable.

https://infosec.exchange/@todd_a_jacobs/113120037188348160

Dr. Todd A. Jacobs (@todd_a_jacobs@infosec.exchange)

Attached: 1 image

Topics like #cybersecurity and #encryption are difficult to talk about plainly because they *are* complex. While it's usefully reductionist to tell users that HTTPS is more secure than unencrypted HTTP, it can also lead to oversimplification (and thus a lack of adequate #infosec funding) when designing and implementing #securitycontrols. Consider the following excerpt I recently shared in one of the LinkedIn communities when trying to explain why a URL or TCP/IP socket *by itself* doesn't create a secure connection.

---

The "HTTPS" in a URL is a URI *scheme* that is interpreted by the browser as an instruction to establish a TLS connection over which the HTTP protocol can be negotiated. The actual TCP/IP transport-layer handshake, the TLS and HTTP protocol negotiations, and the encrypted payload communications between client and server are all handled in other layers.

## Useful References

Hypertext, URIs, and Schemes
: https://www.rfc-editor.org/rfc/rfc9110#section-4.2.2
: https://www.rfc-editor.org/rfc/rfc8820#name-uri-schemes
: https://en.wikipedia.org/wiki/List_of_URI_schemes

TLS (sometimes still referred to as "SSL" for historical reasons)
: https://www.rfc-editor.org/rfc/rfc8446
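To underline the point that the scheme is just the leading token of the URI, interpreted by the client rather than enforcing anything itself, here's a tiny illustrative snippet (the example URL is arbitrary):

```shell
# The scheme is simply the text before "://"; nothing about the string
# itself opens a socket or negotiates TLS -- the client does that.
url='https://www.example.com/some/path'
scheme="${url%%://*}"   # strip everything from "://" onward
echo "$scheme"
```

This prints "https", and that's all the URL contributes: a hint that the browser (or curl, or any other client) should perform the TLS handshake before speaking HTTP.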

According to #Yubico, it took six months for a firmware vulnerability that allows cloning of #YubiKeys through a flaw in their #EllipticCurveCryptography implementation to be resolved and responsibly revealed to the public. That's not the problem.

The real problem is there will always be another unpatched vulnerability just around the corner. That's why we need new ways of framing what #cybersecurity should look like in today's modern enterprise. Old-school #defenseindepth still has a place, but businesses must find new ways to reduce the amount of sensitive data that's at risk in a #databreach when all layers of defense are inevitably pierced.

https://www.yubico.com/support/security-advisories/ysa-2024-03/

Security Advisory YSA-2024-03: Infineon ECDSA Private Key Recovery

Published: 2024-09-03. Tracking ID: YSA-2024-03. CVE: In Process. CVSS Severity: 4.9. A vulnerability was discovered in Infineon's cryptographic library, which is used in the YubiKey 5 Series and Security Key Series with firmware prior to 5.7.0, and in YubiHSM 2 with firmware prior to 2.4.0.

TL;DR: #AI & #ML data aggregators break the law when they consume data unlawfully, whether or not the output of their systems is considered fair use. Let's talk about the underlying data, not just the tools!

Most big AI & ML data aggregators aren't sharing their data openly in compliance with #FOSS or open-content licenses, or adhering to the #TOS of data they scrape from less litigious content creators. They are trying to avoid legal liability by signing "content deals" with big media companies, but this just papers over the fact that commercializing data in violation of the copyright holders' license terms, terms of service, or contracts of adhesion is unlawful. Hiding that data inside proprietary databases and models is simply one of the ways these companies are attempting to dodge lawsuits and liability.

Copyright theft hurts independent writers, bloggers, educators, and journalists more than it hurts media moguls. Signing content licensing agreements with the likes of GitHub, Hachette Book Group, or the New York Times is all well and good, but this typically doesn't compensate the actual content producers or bring the unlawful aggregation into compliance. These sorts of deals insult every #opensource and #opencontent creator. #DRM isn't the answer, and neither is paying off big media. #OpenAI and others should either pay for that data directly, make the data publicly available under the share-alike clauses common to most open source/content licenses, or exclude it altogether.

Many years ago, Canada took an interesting approach that wasn't based on chasing individual "pirates." Instead, it taxed storage media like burnable CDs. The main flaw in that approach was that it still mostly benefited large companies, which could collect meaningful amounts from the tax, rather than small, independent artists. Nevertheless, it was (and is) a better model for society & the creative commons than paying "protection money" to big media conglomerates while continuing to build for-profit business models that violate the copyrights and licensing terms of anyone who isn't deemed a significant legal threat.

As a society, we can and must do better. If our copyright laws (including commercial software licenses and terms of service) are outdated and no longer serve society, then they should be updated or fixed. However, deliberate and open theft should never be permitted to become business-as-usual. Whether or not the output of such systems is sufficiently transformative to escape the label of theft or plagiarism, almost all of today's commercial options rely on very large data sets filled with material that belongs to others who weren't compensated for their work, and most copyright owners lack sufficient visibility to even determine whether their work was unlawfully used.

Two of our founding members, Dr. Lisa Palmer and @todd_a_jacobs, were recently interviewed for a @FastCompany article about #GenerationZ and #AI in the workplace. Thank you both for sharing your insights on this timely topic.

Stevens, E. (2024, June 4). "Five things Gen Z should do to prepare for AI in the workplace." *Fast Company*. Retrieved from https://www.fastcompany.com/91134109/ai-job-tips-for-gen-z
