Researchers warn court ruling could have a chilling effect on adversarial machine learning

Image: The U.S. Supreme Court building in Washington, D.C. (Credit: Daderot)

A cross-disciplinary team of machine learning, security, policy, and law experts says inconsistent court interpretations of an anti-hacking law are having a chilling effect on adversarial machine learning research and cybersecurity. At issue is a portion of the Computer Fraud and Abuse Act (CFAA), and a ruling that settles how that portion is interpreted could shape the future of cybersecurity and adversarial machine learning.

If the U.S. Supreme Court takes up an appeal based on the CFAA next year, the researchers predict the court will ultimately choose a narrow definition of the clause related to "exceed authorized access" rather than siding with circuit courts that have adopted a broad definition of the law. One circuit court ruling on the subject concluded that a broad view would turn millions of people into unsuspecting criminals.

“If we are correct and the Supreme Court follows the Ninth Circuit’s narrow construction, this will have important implications for adversarial ML research. In fact, we believe that this will lead to better security outcomes in the long term,” the researchers’ report reads. “With a more narrow construction of the CFAA, ML security researchers will be less likely chilled from conducting tests and other exploratory work on ML systems, again leading to better security in the long term.”

Roughly half of the circuit courts around the country have ruled on the provision, reaching a 4-3 split. Courts adopting the broader interpretation find that "exceeding authorized access" can cover improper use of information a person was otherwise permitted to access, such as a breach of terms of service or another agreement. Under the narrow view, only accessing information a person is not entitled to obtain at all constitutes a CFAA violation; breaching terms of service alone does not.

The analysis was carried out by a team of researchers from Microsoft, Harvard Law School, Harvard’s Berkman Klein Center for Internet and Society, and the University of Toronto’s Citizen Lab. The paper, titled “Legal Risks of Adversarial Machine Learning Research,” was accepted for publication and presented today at the Law and Machine Learning workshop at the International Conference on Machine Learning (ICML).

Adversarial machine learning has been used, for example, to fool Cylance antivirus software into labeling malicious code as benign and to make Tesla's self-driving cars steer into oncoming traffic. It has also been used to make images shared online unidentifiable to facial recognition systems. In March, the U.S. Computer Emergency Readiness Team (CERT) issued a vulnerability note warning that adversarial machine learning can be used to attack models trained using gradient descent.
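
To make the gradient-based attacks referenced in the CERT note concrete, below is a minimal sketch of the fast gradient sign method (FGSM) against a generic PyTorch image classifier. The `model`, `images`, `labels`, and `epsilon` names are illustrative placeholders, not code from the researchers' paper or the CERT note.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Return adversarially perturbed copies of `images` (FGSM, one step)."""
    # Track gradients with respect to the input pixels, not the model weights.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge each pixel by epsilon in the direction that increases the loss,
    # then clamp back to the valid [0, 1] image range.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

In practice, a researcher would re-run the classifier on the returned batch and measure how often its predictions flip, even though the perturbation is typically imperceptible to a human.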

The researchers found that virtually every form of known adversarial machine learning could be construed as violating CFAA provisions. They say the CFAA is most relevant to adversarial machine learning researchers because of sections 1030(a)(2)(C) and 1030(a)(5). Specifically in question are provisions defining what activity counts as exceeding authorized access to a "protected computer" or as causing damage to a "protected computer" by "knowingly" transmitting a "program, information, code, or command."

The U.S. Supreme Court has not yet decided what cases it will hear in the 2021 term, but researchers believe the Supreme Court could take up Van Buren v. United States, a case involving a police officer who allegedly attempted to illegally sell data obtained from a database. Each new term of the U.S. Supreme Court begins the first Monday of October.

The researchers are unequivocal in dismissing terms of service as a deterrent to anyone whose real interest is carrying out criminal activity. "Contractual measures provide little proactive protection against adversarial attacks, while deterring legitimate researchers from either testing systems or reporting results. However, the actors most likely to be deterred are machine learning researchers who would pay attention to terms of service and may be chilled from research due to fear of CFAA liabilities," the paper reads. "On this angle of view, expansive terms of service may be a legalistic form of security theater: performative, providing little actual security protection, while actually chilling practices that may lead to better security."

Artificial intelligence is playing an increasing role in cybersecurity, but many security professionals fear that hackers will begin to use more AI in attacks. Read VentureBeat’s special issue on security and AI for more information.

In other work presented this week at ICML, MIT researchers found systematic flaws in the annotation pipeline for the popular ImageNet data set, while OpenAI used ImageNet to train its GPT-2 language model to classify and generate images.
