Harley Geiger

307 Followers
153 Following
44 Posts
Simple country cyberlawyer.

🚨 New #CIRCIA regulations are coming! Businesses linked to critical infrastructure sectors will soon need to report #cybersecurity incidents & ransomware payments to CISA. Comments open until June 3, 2024.

cc: @HarleyGeiger @venable
https://www.jdsupra.com/legalnews/circia-cyber-incident-reporting-for-3028680/

CIRCIA: Cyber Incident Reporting for Practically Everyone? | JD Supra

A sweeping array of businesses are another step closer to requirements to report cybersecurity incidents and ransomware payments to the federal...

JD Supra

The AI red teaming community tends to use terms borrowed from security, but in a way that risks legal confusion. As AI is embedded into regulated sectors, the need for more precise terminology is growing. Two examples:

1) “Safety.” The AI red teaming community often broadly refers to testing for non-security harms as “safety” testing. This includes testing for bias, discrimination, synthetic content, copyright infringement, and more. While there should be a generic way to refer to non-security risks for AI, the term “safety” is already used throughout security standards and regulations to generally mean physical safety. See, for example, CIRCIA.

2) “Vulnerability.” The AI red teaming community often refers to algorithmic flaws that cause non-security harms as “vulnerabilities.” As with “safety,” the term “vulnerability” is used throughout security regulations and often carries legal obligations for regulated entities. See, for example, the vulnerability scanning and management requirements in the NYDFS Cybersecurity Regulation.

Neither policymakers nor the AI/ML community intends to apply legal obligations and standards for “safety” and “vulnerabilities” to non-security AI risks. Adopting more specific terms can help avoid confusion and the time wasted explaining how “vulnerability” has different meanings in AI versus cybersecurity.

Two suggestions: “algorithmic flaw” rather than “vulnerability.” And, drawing from the NIST AI Risk Management Framework, “trustworthiness” rather than “safety.”

#AI

MUST READ: This morning, a powerful letter signed by 27 leading security experts was delivered to the UK Home Office, concerning proposed amendments to the Investigatory Powers Act (IPA) that would be counterproductive to #cybersecurity.
https://cdt.org/insights/open-letter-from-security-experts-voices-concerns-over-the-proposed-changes-to-uk-investigatory-powers-acts-notices-regime/
Open Letter from Security Experts Voices Concerns Over the Proposed Changes to UK Investigatory Powers Act’s Notices Regime

The proposed amendments to the UK’s Investigatory Powers Act (IPA) have prompted a powerful open letter addressed to the UK Home Secretary from security experts united in their commitment to a secure, reliable, and inclusive internet. Their profound concerns highlight the detrimental impacts on digital security and privacy that these changes would have, namely that […]

Center for Democracy and Technology

Shout out to the Security Research Legal Defense Fund for helping us go public about our train research! We're honored to have been their first grantees.

Without their financial assistance, we would've had to crowdfund our legal bills or, even worse, stay quiet about the locks we've found in Impuls trains.

If you're facing legal threats (or even anticipate the possibility of such threats) as the result of security research, we definitely recommend reaching out to them.

https://www.securityresearchlegaldefensefund.org/

Security Research Legal Defense Fund

We aim to help fund legal representation for persons who face legal issues due to good faith security research and vulnerability disclosure, in cases that would advance cybersecurity for the public interest.

A raccoon is behind the massive power outage tonight in downtown #Toronto that is currently affecting 7,000 customers.

https://toronto.ctvnews.ca/raccoon-behind-downtown-toronto-power-outage-affecting-thousands-hydro-one-1.6752553

Raccoon behind downtown Toronto power outage affecting thousands: Hydro One

A raccoon is behind the massive power outage in downtown Toronto that is currently affecting 7,000 customers.

CTV News Toronto
Last #shmoocon next year. 😢

Another edition of #infosec #followfriday is here! Here are some great new accounts I’ve discovered…

@larsborn
@ollie_whitehouse
@ntkramer
@blacktop
@grimmware
@Anthony_Kraudelt
@faker
@HarleyGeiger
@jimmyblake

#Discoverability is a little trickier on Mastodon, so I find this is a great way to help out the community by mass-boosting accounts. I encourage others to boost #intro posts or share lists like this when they can. Cheers!

The Hacking Policy Council filed comments with the Copyright Office in support of ethical AI hacking. The comments urge the Copyright Office to establish an exemption to Section 1201 of the DMCA for independent good faith testing of AI systems for bias, discrimination, & harmful output.

https://www.copyright.gov/1201/2024/comments/Class%204%20-%20Initial%20Comments%20-%20Hacking%20Policy%20Council.pdf

By identifying and disclosing algorithmic flaws so that they can be corrected, AI alignment research and AI red teaming are beneficial practices that help ensure the trustworthiness and fairness of generative AI systems. However, DMCA § 1201 prohibits bypassing access controls to software without permission of the copyright owner, which can restrict independent AI alignment research.

DMCA § 1201 already has an exemption that protects good faith security testing, which has proven to be beneficial to the overall security of the tech ecosystem. But, as the Hacking Policy Council detailed in a recent white paper, AI systems are tested for a variety of potential harms - not just security.

So, the Hacking Policy Council urged the Copyright Office to establish an exemption under DMCA § 1201 to protect research where the bias or misalignment under investigation may not directly affect security or safety (for example, research demonstrating flaws that cause a generative AI system to engage in racial or gender discrimination, or to produce synthetic child sexual abuse material). The Hacking Policy Council suggested adapting language from the existing security research exemption and Executive Order 14110.

Extending protections for AI research and red teaming under DMCA Section 1201 not only fosters responsible development, but also promotes transparency, accountability, and trust. By addressing potential legal gaps and uncertainties, we can establish frameworks that improve and preserve #AI alignment, ultimately safeguarding both technological advancements and societal interests.

The Hacking Policy Council released a white paper calling for clarity and legal protections for AI red teaming. https://lnkd.in/e4UMv9AM

While organizations may be familiar with red teaming to test software for security, “AI red teaming” has a broader scope, testing AI systems for flaws that include security vulnerabilities, bias, discrimination, and other harmful or undesirable outputs - as demonstrated by the recent Biden Administration Executive Order on trust in AI: https://lnkd.in/et7v-yCB

AI red teaming, when performed in good faith, aims to identify and disclose misalignment in AI systems so that it can be corrected, thereby helping ensure the trustworthiness of the system. To encourage information sharing about AI misalignment and enable independent AI red teaming, the Hacking Policy Council recommends:

1) Develop consistent alignment goals for AI red teaming. Governments should work with the private sector to develop consistent goals for AI alignment in the context of AI red teaming. This will enable AI red teaming to test for dissonance with those goals.

2) Protect information sharing for AI alignment purposes. Governments should ensure that legal frameworks facilitating security information sharing are adapted to encourage and protect information sharing about harmful, discriminatory, or undesirable outputs in AI systems.

3) Prepare to receive misalignment disclosures. Organizations should prepare to accept disclosures from independent AI red teamers. This may require adapting security vulnerability disclosure programs and handling processes to accommodate disclosures of harmful, discriminatory, or undesirable outputs in AI systems.

4) Clarify legal protections for independent AI red teaming. Governments should ensure legal protections for independent security research extend to independent #AI red teaming performed in good faith.


“A train manufactured by a Polish company suddenly broke down during maintenance. The experts were helpless – the train was fine, it just wouldn’t run. In a desperate last gasp, the Dragon Sector team was called in to help, and its members found wonders the train engineers had never dreamed of.”

https://badcyber.com/dieselgate-but-for-trains-some-heavyweight-hardware-hacking/

Dieselgate, but for trains – some heavyweight hardware hacking – BadCyber