Todd A. Jacobs | Pragmatic Cybersecurity

65 Followers
90 Following
173 Posts
Executive Director, Theia Institute ✪ Chief Information Technology Officer, CodeGnome Consulting ✪ AI Governance ✪ Cybersecurity ✪ Strategic Policy ✪ Board & C-Suite Advisories ✪ Keynote Speaker ✪ Panel Facilitator
Verification: https://gist.githubusercontent.com/todd-a-jacobs/280f046b804df6da00ce260eb8db7024/raw/41ca3ef349d71f2c8073c610b3b6c28c9557c933/infosec_exchange_verification.html
Theia Institute™ Think Tank: https://www.theiathinktank.com/
LinkedIn, Personal: https://www.linkedin.com/in/todd-a-jacobs/
LinkedIn, Company Page: https://www.linkedin.com/company/theia-institute-think-tank
CodeGnome Consulting: https://www.codegnome.com/

Define Your AI Use Cases Before Your Metrics

Companies need to start reframing #AI #metrics like utilization rates of specific AI systems (which they often think of as #COTS tools anyway) around #use_cases, not around a one-to-many tech solution for every problem domain. There would be far fewer corporate implementation failures if companies thought of AI systems as "hammers and screwdrivers" suited to particular tasks rather than as Swiss Army knives generically suited to an arbitrary or ill-defined set of objectives.

Celebrating New Credential

I'm celebrating a new credential. I'm also celebrating the people who made it possible.

I'm proud to have received my Theia Institute Founder's Badge yesterday. It represents two years of work with some truly brilliant and inspiring people, all of whom are not only "Emerging Technology Thought Leaders" but also deserving of the title of "Visionary Founder."

Sharing Credit with Others

While I still work for Theia Institute, I don't consider this my honor. The real honor is in standing on the shoulders of giants like my friends and colleagues there, including (in LinkedIn's pseudo-alphabetical order): Barak Engel, Daniel Kinon, Doug Shannon, Lisa Palmer, Jim Desmond, and Q. Wade Billings.

A lot of credit also goes to donors, business leaders, conference organizers, educators, journalists, and others who not only believed in Theia's mission, but have actively supported us over the years. That list would be too long for this post, but they each deserve their day in the sun too. I hope everyone who took part realizes the real honor is theirs.

The new #IBM_Granite 4.0 #Micro_AI model is now available for #beta_testing. First impressions: it's decent for its intended use case, but unsuitable for novices because it requires pre- and post-processing to avoid silly typo-induced hallucinations about imaginary products like "IBM Branite" even at Q8_0. Here's an easy fix for power users:

---
system_prompt:
  coherence:
    preprocess: [autocorrect_input, fix_spelling]
    postprocess: check_coherence
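The autocorrect_input step can be sketched in a few lines of Python. This is a hypothetical illustration, not part of Granite or its tooling: the KNOWN_TOKENS allowlist and the 0.8 matching threshold are my own assumptions. It uses difflib from the standard library to snap near-miss tokens back to known spellings before a prompt reaches the model:

```python
import difflib

# Hypothetical allowlist of tokens the model should see spelled correctly.
KNOWN_TOKENS = ["Granite", "Micro", "IBM"]

def autocorrect_input(prompt: str, cutoff: float = 0.8) -> str:
    """Snap near-miss tokens (e.g. 'Branite') back to known spellings."""
    corrected = []
    for word in prompt.split():
        # get_close_matches returns the best fuzzy match above the cutoff,
        # or an empty list if nothing is close enough.
        match = difflib.get_close_matches(word, KNOWN_TOKENS, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

print(autocorrect_input("Tell me about IBM Branite 4.0"))
# → Tell me about IBM Granite 4.0
```

At a 0.8 cutoff, "Branite" is close enough to "Granite" to be corrected, while ordinary words fall well below the threshold and pass through untouched.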

https://huggingface.co/ibm-granite/granite-4.0-micro

I boosted several posts about this already, but since people keep asking if I've seen it…

MITRE has announced that its funding for the Common Vulnerabilities and Exposures (CVE) program and related programs, including the Common Weakness Enumeration Program, will expire on April 16. The CVE database is critical for anyone doing vulnerability management or security research, and for a whole lot of other uses. There isn't really anyone else left who does this, and it's typically been work that is paid for and supported by the US government, which is a major consumer of this information, btw.

I reached out to MITRE, and they confirmed it is for real. Here is the contract, which is through the Department of Homeland Security, and has been renewed annually on the 16th or 17th of April.

https://www.usaspending.gov/award/CONT_AWD_70RCSJ23FR0000015_7001_70RSAT20D00000001_7001

MITRE's CVE database is likely going offline tomorrow. They have told me that for now, historical CVE records will be available at GitHub, https://github.com/CVEProject
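Those historical records use the CVE Record Format (JSON 5.x), so they are straightforward to work with offline. Here is a minimal, illustrative Python sketch of pulling the ID and English-language summary out of one; the sample record below is abbreviated and invented by me for demonstration, not a real published CVE:

```python
import json

# Abbreviated sample in the CVE Record Format (JSON 5.x) layout.
# The values are illustrative, not a real published record.
sample = """
{
  "dataType": "CVE_RECORD",
  "dataVersion": "5.0",
  "cveMetadata": {"cveId": "CVE-2025-0001", "state": "PUBLISHED"},
  "containers": {
    "cna": {
      "descriptions": [
        {"lang": "en", "value": "Example buffer overflow in example_parse()."}
      ]
    }
  }
}
"""

record = json.loads(sample)
cve_id = record["cveMetadata"]["cveId"]
# Records can carry descriptions in multiple languages; take the English one.
summary = next(
    d["value"]
    for d in record["containers"]["cna"]["descriptions"]
    if d["lang"] == "en"
)
print(cve_id, "-", summary)
```

The same traversal works against the per-CVE JSON files mirrored in the CVEProject GitHub organization once they are cloned locally.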

Yosry Barsoum, vice president and director at MITRE's Center for Securing the Homeland, said:

“On Wednesday, April 16, 2025, funding for MITRE to develop, operate, and modernize the Common Vulnerabilities and Exposures (CVE®) Program and related programs, such as the Common Weakness Enumeration (CWE™) Program, will expire. The government continues to make considerable efforts to support MITRE’s role in the program and MITRE remains committed to CVE as a global resource.”

@loleg This is interesting. I haven’t had the opportunity yet to think through all the ramifications, but I’m glad that some countries are attempting the difficult balancing act here. It matters less whether they succeed than that they try.

I’m against neutering AI systems in a fruitless attempt to make them harmless, but I also think risk appetite is always a balancing act. The real failure is when the only risks are to others. That’s the legal, social, and financial path of highest profit, but it’s also not genuine capitalism.

Real capitalism requires people to risk loss in order to gain, but currently the technology industry in the US offloads all the risk to consumers and taxpayers. Every company is now “too big to fail,” but individual consumers, artists, publishers, et al. don’t get bailed out or spared systemic risk.

Collectively, we can do better. I’m interested to see which of these many initiatives do better by doing things better.

The Artificial Intelligence and Data Act (AIDA) – Companion document

Table of contents Introduction Canada and the global artificial intelligence (AI) landscape Why now is the time for a responsible AI framework in Canada Canada's approach and consultation timeline How the Artificial Intelligence and Data act will work High-impact AI systems: considerations and sys

@Zarkonnen That’s a fair point. I’m deliberately using the term the way many people outside of IP law think of it, precisely because it has been used this way by large corporations.

It seems hypocritical of big tech to call it “theft” when individuals do it, but only “infringement” when they do it themselves. While “infringement” is the correct legal term, it sounds far too clean and clinical for the deliberate pillaging of the commons or of private individuals, and far less accusatory than the language that would be used if the roles were reversed.

If a private individual helped themselves to Meta’s non-licensed source code, would their press release call it infringement, fair use, a right to repair or make backups, or would they just call it “theft?”

This is the fundamental quandary of accepting asymmetrical terms of engagement. Why should a corporation—legally a person under US law—be afforded more gray area or benefit of the doubt than an actual human being? That’s the real ethical and legal question. Allowing the infringing party to frame it otherwise is a strategically unsound position from which to argue an injustice.

@elementary tl;dr I support your objectives, and kudos on the goal, but I think you should monitor this new policy for unexpected negative outcomes. I take about 9k characters to explain why, but I’m not criticizing your intent.

While I take a much more pragmatic stance on #aicoding, this was previously a long-running issue of contention on the #StackExchange network that was never effectively resolved outside of a few clearly egregious cases.

The net-net is that when it comes to certain parts of software—think of the SCO copyright trials over header files from a few decades back—in many cases, obvious code will be, well…obvious. That “the simplest thing that could possibly work” was produced by an AI instead of a person is difficult to prove using existing tools, and false accusations of plagiarism have been a huge problem that has caused a number of people real #reputationalharm over the last couple of years.

That said, I don’t disagree with the stance that #vibecoding is not worth the pixels that it takes up on a screen. From a more pragmatic standpoint, though, it may be more useful to address the underlying principle that #plagiarism is unacceptable from a community standards or copyright perspective rather than making it a tool-specific policy issue.

I’m a firm believer that people have the right to run their community projects in whatever way best serves their community members. I’m only pointing out the pragmatic issues of setting forth a policy where the likelihood of false positives is quite high, and the level of pragmatic enforceability may be quite low. That is something that could lead to reputational harm to people and the project, or to community in-fighting down the road, when the real policy you’re promoting (as I understand it) is just a fundamental expectation of “original human contributions” to the project.

Because I work in #riskmanagement and #cybersecurity, I see this a lot; it comes up more often than you might think. Again, I fully support your objectives, but I wanted to offer an alternative viewpoint that your project might want to revisit down the road if the current policy doesn’t achieve the results you’re hoping for.

In the meantime, I certainly wish you every possible success! You’re taking a #thoughtleadership stance on an important #AIgovernance policy issue that is important to society and to #FOSS right now. I think that’s terrific!

@Catawu @briankrebs I’m not really interested in their frame of reference or what they think about the people impacted. That’s not because I don’t care, but because I think it's irrelevant to the deeper underlying issues.

I’m actually more interested in the extent to which this situation may violate #HIPAA and other #patientprivacy laws. Part of the functional challenge in what is currently going on at the federal level is that many privacy and #healthcare safeguards such as HIPAA are a complex mixture of laws passed by Congress and regulations defined by the executive branch to implement those laws.

I am not a lawyer, but I do deal with #privacyregulations and #regulatorycompliance issues professionally. To the extent that the administration is arguing that they have constitutional authority to make changes to the implementations developed and overseen by the executive branch itself, the extent of what is being done seems unprecedented but may not be illegal per se. I am not qualified to make that determination, but I think it's the foundational question that needs to be asked.

On the other hand, the parts of HIPAA and other federally-enacted laws regarding #healthcare and privacy are in fact laws established within our country’s constitutional framework. The executive branch can’t simply wish clearly-established laws into the cornfield. Unfortunately, many laws leave a great deal of the implementation details—whether unintentionally or through deliberate delegation—to the executive branch, the states, or various regulatory agencies. In turn, many of those regulators also operate to one extent or another under the executive branch, and that further complicates the picture.

Many federal laws leave a great deal of wiggle room for interpretation to the executive and judicial branches, whether or not by design, but congressionally-enacted laws and protections provided by the Constitution itself cannot simply be ignored. While there's definitely a difference, separating a "law" from the "regulations" that implement that law isn't necessarily a simple exercise.

The real challenge is that our republic was designed as a Venn diagram of overlapping roles, responsibilities, and authority that were meant to operate in a state of carefully-balanced tension. The republic's framework has never been tested this broadly within my lifetime, if ever. Even though how our three branches of government should work is material covered in any decent high school civics class, the complexity of statutory vs. regulatory authority requires legal and Constitutional scholarship that is more than the average citizen can bring to bear on the matter. I'd like to think I understand these issues better than most—and I certainly have my own personal and professional instincts about what's right and wrong—but I wouldn't dream of claiming to understand all the nuances involved.

Professionally, I am taking a deliberately apolitical approach to what is a very legitimate set of questions about constitutional authority. Likewise, my apolitical but professional experience tells me that there is entirely too much gray area around the constitutional and legal topics to determine with certainty what is legal as opposed to what is moral or ethical. In my professional experience, what is right and what is lawful aren't always the same.

Unless society as a whole is willing to revisit some of the underlying assumptions collectively made over the past several hundred years about the differences between legislative laws and the administrative regulations that implement them, this problem is unlikely to go away anytime soon. In fact, it is likely to spread to other areas with similar gray areas. As an argument by analogy, the current legal mess around #copyright and #LLM training may be similar in terms of being pure sophistry where the term "fair use" is clearly being used in an intellectually dishonest way, but apparently it's far enough into the gray to pass legal muster right now. Decades or centuries of legislative layering has led to a legal framework that never envisioned modern realities. Revisiting and revising centuries of legal accretion would require a strong moral compass, a great deal of political courage, and in-depth analysis by legal and constitutional scholars (among others) in order to address the very real institutional unraveling we're observing.

Sadly, in a society that frequently classifies expertise as “elitism,” such a brutally honest conversation is unlikely to happen soon. A broad reconsideration of how our republic was designed to function and a hard look at how it actually functions would require high levels of both personal and political courage. It's even less likely to be rapidly prioritized without sufficiently clear political self-interest from a majority of those with the remaining authority to materially affect the outcome.

What I’ve said may strike some as political opinion rather than strictly analytical observation. However, my statements are deliberately based on well-established sociological and psychological norms rather than current politics. I feel confident in asserting that the likelihood of Congress or the Supreme Court—much less the general public—addressing these things effectively in the near term is essentially zero. For any elected or appointed official acting alone, the risk of asserting constitutional prerogatives vastly exceeds both the collective will of their respective institutions and the already-ceded institutional powers required to do so effectively.

I will be moderating an executive round table via Zoom from 3:00-4:30pm US/Eastern tomorrow for The Ortus Club. The topics are ones I’m always passionate about: #cybersecurity & #businessresilience.

This is a peer-driven round table. No one’s pitching anything. The goal is to bring a broad spectrum of industry luminaries together to share their experiences, insights, and collectively brainstorm about ways to future-proof our security strategies.

The round table is open to IT & cybersecurity leaders in North America. Space is limited, but there are still a few no-cost seats remaining for the #thoughtleaders in my extended network. You can sign up at the link below, but the clock is ticking.

No matter how well-attended these events are, it’s always more fun with a friendly face or two in the crowd. I hope yours will be one of them, and look forward to seeing you there!

https://www.linkedin.com/posts/todd-a-jacobs_why-legacy-cybersecurity-is-putting-your-activity-7310488468139200512-nodq

Why Legacy Cybersecurity is Putting Your Business at Risk—And How to Build… | Todd A. Jacobs
