US Copyright Office states common AI research does not violate DMCA

The United States Copyright Office has clarified the legal rules for trustworthiness research and red teaming of artificial intelligence (AI) under the Digital Millennium Copyright Act (DMCA), Section 1201, stating that common AI research techniques do not constitute a violation. The statement follows repeated requests for clarification by the Hacking Policy Council.

“DMCA is and has often been used in the past to clap back at security researchers, in a way that has often created a chilling effect and discouraged research in the first place,” says Casey Ellis, Founder and Advisor at Bugcrowd. “This ruling means that security and safety researchers operating in good faith are able to use the different techniques called out in the Register’s Recommendation without fear of reprisal under DMCA. The bottom line implication is that good-faith security research against LLMs, while not explicitly granted an exemption, has been explicitly called out as ‘not violating DMCA’ in the recommendations around the ruling.”

The clarification gives AI trustworthiness researchers legal grounds to defend against threatened lawsuits over the use of common techniques in their research. However, other trustworthiness research techniques may still be at risk of violating DMCA Section 1201.

https://www.securitymagazine.com/articles/101233-us-copyright-office-states-common-ai-research-does-not-violate-dmca