June 3, 2022

The U.S. Department of Justice (DOJ) recently revised its policy on charging violations of the Computer Fraud and Abuse Act (CFAA), a 1986 law that remains the primary statute by which federal prosecutors pursue cybercrime cases. The new guidelines state that prosecutors should avoid charging security researchers who operate in “good faith” when finding and reporting vulnerabilities. But legal experts continue to advise researchers to proceed with caution, noting that the new guidelines can’t be used as a defense in court, nor are they any kind of shield against civil lawsuits.

In a statement about the changes, Deputy Attorney General Lisa O. Monaco said the DOJ “has never been interested in prosecuting good-faith computer security research as a crime,” and that the new guidelines “promote cybersecurity by providing clarity for good-faith security researchers who root out vulnerabilities for the common good.”

What constitutes “good faith security research?” The DOJ’s new policy (PDF) borrows language from a Library of Congress rulemaking (PDF) on the Digital Millennium Copyright Act (DMCA), a similarly controversial law that criminalizes production and dissemination of technologies or services designed to circumvent measures that control access to copyrighted works. According to the government, good faith security research means:

“…accessing a computer solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability, where such activity is carried out in a manner designed to avoid any harm to individuals or the public, and where the information derived from the activity is used primarily to promote the security or safety of the class of devices, machines, or online services to which the accessed computer belongs, or those who use such devices, machines, or online services.”

“Security research not conducted in good faith — for example, for the purpose of discovering security holes in devices, machines, or services in order to extort the owners of such devices, machines, or services — might be called ‘research,’ but is not in good faith.”

The new DOJ policy comes in response to a Supreme Court ruling last year in Van Buren v. United States (PDF), a case involving a former police sergeant in Florida who was convicted of CFAA violations after a friend paid him to use police resources to look up information on a private citizen.

But in an opinion authored by Justice Amy Coney Barrett, the Supreme Court held that the CFAA does not apply to a person who obtains electronic information that they are otherwise authorized to access and then misuses that information.

Orin Kerr, a law professor at the University of California, Berkeley, said the DOJ’s updated policy was expected given the Supreme Court ruling in the Van Buren case. Kerr noted that while the new policy says one measure of “good faith” involves researchers taking steps to prevent harm to third parties, what exactly those steps might constitute is another matter.

“The DOJ is making clear they’re not going to prosecute good faith security researchers, but be really careful before you rely on that,” Kerr said. “First, because you could still get sued [civilly, by the party to whom the vulnerability is being reported], but also the line as to what is legitimate security research and what isn’t is still murky.”

Kerr said the new policy also gives CFAA defendants nothing new they can raise in their own defense.

“A lawyer for the defendant can make the pitch that something is good faith security research, but it’s not enforceable,” Kerr said. “Meaning, if the DOJ does bring a CFAA charge, the defendant can’t move to dismiss it on the grounds that it’s good faith security research.”

Kerr added that he can’t think of a CFAA case where this policy would have made a substantive difference.

“I don’t think the DOJ is giving up much, but there’s a lot of hacking that could be covered under good faith security research that they’re saying they won’t prosecute, and it will be interesting to see what happens there,” he said.

The new policy also clarifies other types of potential CFAA violations that are not to be charged. Most of these involve violations of a technology provider’s terms of service, and here the DOJ says that violating “an access restriction contained in a term of service” is not by itself “sufficient to warrant federal criminal charges.” Some examples include:

-Embellishing an online dating profile contrary to the terms of service of the dating website;
-Creating fictional accounts on hiring, housing, or rental websites;
-Using a pseudonym on a social networking site that prohibits them;
-Checking sports scores or paying bills at work.

ANALYSIS

Kerr’s warning about the dangers that security researchers face from civil litigation is well-founded. KrebsOnSecurity regularly hears from security researchers seeking advice on how to handle reporting a security vulnerability or data exposure. In most of these cases, the researcher isn’t worried that the government is going to come after them: It’s that they’re going to get sued by the company responsible for the security vulnerability or data leak.

Often these conversations center around the researcher’s desire to weigh the rewards of gaining recognition for their discoveries against the risk of being targeted with costly civil lawsuits. And almost just as often, the source of the researcher’s unease is that they recognize they might have taken their discovery just a tad too far.

Here’s a common example: A researcher finds a vulnerability in a website that allows them to individually retrieve every customer record in a database. But instead of simply polling a few records that could be used as a proof-of-concept and shared with the vulnerable website, the researcher decides to download every single file on the server.
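
To make the distinction concrete, here is a minimal, purely hypothetical sketch of the restrained approach: sample a handful of records to confirm the exposure, note the result, and stop. The endpoint URL, record IDs, and output below are my own illustrative assumptions, not any real site’s API.

```python
# Hypothetical illustration only: a limited-scope proof-of-concept that samples
# a few record IDs instead of enumerating the entire database.
import requests

BASE_URL = "https://vulnerable.example/api/customers/{record_id}"  # assumed endpoint
SAMPLE_IDS = [1001, 1002, 1003]  # a few records are enough to demonstrate the flaw


def limited_proof_of_concept():
    """Check whether a handful of sampled records are reachable, and nothing more."""
    reachable = []
    for record_id in SAMPLE_IDS:
        resp = requests.get(BASE_URL.format(record_id=record_id), timeout=10)
        if resp.status_code == 200:
            # Note only that the record was exposed; do not retain its contents.
            reachable.append(record_id)
    return reachable


if __name__ == "__main__":
    hits = limited_proof_of_concept()
    print(f"{len(hits)} of {len(SAMPLE_IDS)} sampled records were accessible.")
```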

Not infrequently, there is an added worry: at some point the researcher suspected that their automated activities might have caused stability or uptime issues with the services they were testing. Here, the researcher is usually reluctant to approach the vulnerable website or vendor because they worry their activities may already have been flagged internally as some sort of external cyberattack.

What do I take away from these conversations? Some of the most trusted and feared security researchers in the industry today gained that esteem not by constantly taking things to extremes and skirting the law, but rather by publicly exercising restraint in the use of their powers and knowledge — and by being effective at communicating their findings in a way that maximizes the help and minimizes the potential harm.

If you believe you’ve discovered a security vulnerability or data exposure, try to consider first how you might defend your actions to the vulnerable website or vendor before embarking on any automated or semi-automated activity that the organization might reasonably misconstrue as a cyberattack. In other words, try as best you can to minimize the potential harm to the vulnerable site or vendor in question, and don’t go further than you need to prove your point.


31 thoughts on “What Counts as ‘Good Faith Security Research?’”

  1. rip

    Remember Randal Schwartz (https://en.wikipedia.org/wiki/Randal_L._Schwartz). He was prosecuted by the State of Oregon for doing penetration testing on the systems of Intel (for whom he worked). I took some of the courses he held to try to cover his legal costs.
    ——
    In July 1995, Schwartz was prosecuted in the case of State of Oregon vs. Randal Schwartz, which dealt with compromised computer security during his time as a system administrator for Intel. In the process of performing penetration testing, he cracked a number of passwords on Intel’s systems.[14][15] Schwartz was originally convicted on three felony counts, with one reduced to a misdemeanor, but on February 1, 2007, his arrest and conviction records were sealed through an official expungement, and he is legally no longer a felon.[16][17]

  2. rip

    I uncovered multiple flaws in an operating system used by the DOD (WWMCCS) while under a DOD contract and prepared a report listing the flaws. The report was classified higher than my Top Secret clearance and never made it to the target audience. The last time I looked, several of those flaws were still in existence.

  3. mealy

    One assumes that depends on how good of a lawyer you can afford.

  4. JamminJ

    “Kerr added that he can’t think of a CFAA case where this policy would have made a substantive difference.”

    Yes, of course. When the policy makes a substantive difference, there is no case to hear about. Prosecutors won’t even bother with it.

  5. JamminJ

    “And almost just as often, the source of the researcher’s unease is that they recognize they might have taken their discovery just a tad too far.”

    This is very true. What I have seen is that many independent researchers (bug hunters) have a problem with knowing the proper audience.
    They should be writing their PoCs and reports solely for the vulnerable company’s security team, with only just enough to prove the vulnerability to them. The security team would have enough insight to be able to recognize the vulnerability with just a limited scope example.
    However, many researchers think they have to prove the vulnerability to the entire security community. Sometimes that is true for situations after the disclosure period and when the flaw is ignored for a long time. But too often, I see them writing for a broad audience by going too far.
    Perhaps the ego takes over, and they are looking to boost their own reputations and don’t care as much about limiting harm or improving security. When that happens, they cross a line that “good faith” may not cover.

  6. Billy Jack

    I found a bit of a security issue on one site in April and am waiting for them to fix it. It doesn’t put their site at any risk but it can raise the potential for their users to encounter issues that may not be obvious.

    If someone sends you an e-mail with a PGP key attached, the site asks if you want to trust the PGP key, but it assumes the key belongs to the sender without displaying it and only shows the first six characters of the fingerprint. So if you send an e-mail with your public key attached and their reply includes that public key, the site thinks the public key is that of the sender. If you accept the public key, then any e-mail you send to them will be encrypted with your own public key instead of the PGP public key for the recipient.

    Note that the site stores the PGP public key for the contact in the contact list record for that correspondent, and the site will use that public key for the recipient when sending them an e-mail even if the e-mail address for the key does not match the e-mail address for the correspondent.

    About a week ago, my nephew reinstalled the OS on his laptop without copying over his PGP keys. He called me to ask how to recover the key, and he ended up creating a new key. Then he replied to an older e-mail from me to send me his key. Instead of including his public key as an attachment, the reply included my public key that had been attached to the e-mail I sent. It is possible that his reply included his key as well (I didn’t look), but when I clicked on the trust-key dialog, I noticed (because of the testing in April) that the portion of the fingerprint that was shown matched my key. If I had not already been aware of the problem, I would have just accepted the key as his, since I was expecting him to send an e-mail with his public key at that time.

    Seeing the e-mail address from the PGP public key block would help. And we really do need the entire fingerprint even if the e-mail address is displayed.

  7. G.Scott H.

    The policy snippet from the LoC does not clarify much either. The good-faith portion refers to a flaw or vulnerability, while the not-in-good-faith portion calls them security holes. Both imply intent, but the wording of the actions seems to leave the definition of any particular act up to either the accuser or the prosecutor.

    The LoC does not have a good track record on the DMCA in my view. They were considering whether carrier unlocking of cellphones would qualify as an exemption to the DMCA. Carrier locking of a cellphone has absolutely nothing to do with copyright; it should not have needed a DMCA exemption in the first place. There are plenty of other examples of not understanding the technology these laws are supposed to cover.

    1. Tyler

      As much as I want to be optimistic that this is a restoration of sane policy by some well-intentioned people in D.C., I do not think we are going to fully comprehend what the DOJ considers ‘good faith security research’ until we see either an attempted prosecution or ‘press leaks’ related to such. Merely having criminal accusations against one can cause a profound loss of one’s employment and other liberties.

      Yes. The circular logic of DMCA rulemaking is silly in itself: ‘Here, you can bypass these specific security controls on these specific devices, but you have to make the tools yourself, cannot ask for outside help, and do not tell anyone!’

      Makes me wonder if a book about reverse-engineering certain ‘controls’ would be legal like the PGP manual and source-code books back in the 1990s.

      1. mealy

        “Anything outside the EULA is a crime” is the actual possible paradigm, so you’re right.
        Fight. Vote. Wallet. Feet. #ProbablyFutile

        1. Tyler

          What an EULA contains and what is actually enforceable is a different question entirely.

          In the past there have been cases stating that a prohibition of certain user-rights in the EULA does not override those rights in certain circumstances.

          1. mealy

            True, but they WANT to make anything outside of the EULA a crime. That’s their goal.
            Most people aren’t going to put up much resistance against a well-funded legal team.

  8. Justin Shafer

    Doesn’t matter to me. Public FTP is and has always been public.

    1. bobby

      Hey, if you thought this was bad, try sending a FOIA request regarding immigration. There’s a non-zero chance of the responding agency simply trolling you by sending you entirely redacted pages and that’s it, 20 months after you submit the initial request.

  9. Sam in Mellen

    Look no further than the Governor of Missouri threatening to prosecute a reporter for simply left-clicking on a state website and finding a vulnerability. Education might be costly, but ignorance is much more costly.

  10. bobby bonilla

    Another good time to remind folks that the impetus for the CFAA’s passage was literally Matthew Broderick’s 1980s classic WarGames. The vague and overly broad language is in part another manifestation of how willingly Congress delegates authority to agencies that are able to interpret vague statutes and implement them as they see fit. Thank you, Matthew Broderick.

    1. timeless

      While we’re crediting movies and actors, don’t forget Trading Places’s contribution of the Eddie Murphy Rule:

      > The Dodd-Frank Wall Street Reform and Consumer Protection Act was passed in 2010.
      > You can find Section 746 on pages 364 and 365 of the law’s 849 pages. Section 746 has been called the Eddie Murphy Rule.

      [1] https://econlife.com/2019/12/the-eddie-murphy-rule/

  11. Vb

    Seeking payment for security research is called “extortion”. Until that changes, I won’t be notifying websites or vendors of any bugs that take more than zero effort to discover. Unless there is a bug bounty, there is too much risk of lawsuit or prosecution in reporting bugs.

    1. an_n

      Wouldn’t that depend on how you “seek” it? It’s not extortion to ask for something. It’s extortion to demand payment and threaten xyz if they don’t pay, but you can also report bugs anonymously if that ticks the box. Doing work unsolicited for someone you don’t have a relationship with isn’t a good strategy for being compensated. If your sole motivation is money, there are probably better ways for you to get it than security altruism outside of specified bug bounties, that’s true. If you’re doing it for a living, you pick your targets.

  12. Clausewitz4.0

    Use contracts. No liability to you, in case of prosecution. The client is supposed to get all needed authorizations to conduct the pentest.

  15. Ioseph

    And not a single trace of protection for the right-to-repair folks. Not that any large corporation would ever stoop so low as to use the legal process to harass anyone. It’s only IP protection…LOL

  16. Mahhn

    “If you believe you’ve discovered a security vulnerability or data exposure”
    If I found a security issue outside of my work scope, I might ignore it, maybe tell others about it, but there is no chance I would inform the vendor – much too dangerous. At the end of the day I want to go home, not be in the news.

      1. Mahhn

        nope, I don’t believe anyone is anonymous, just hiding for the moment. If someone wants to find you, they will. In the case of a crime they will gladly take a scapegoat in place of a criminal. I’d rather watch the world burn than take a chance of being falsely accused for it. Been down the other road, lost all respect for authority a long time ago – more corruption at the top than the bottom for sure – but eh, that’s where the money is.

  17. Blanche Dubois

    Re. “-Embellishing an online dating profile contrary to the terms of service of the dating website;
    -Creating fictional accounts on hiring, housing, or rental websites;
    -Using a pseudonym on a social networking site that prohibits them;”
    are apparently CFAA OK.

    Gosh, Lori Drew and her “Josh Evans” scam was ahead of its time.
    Cost?
    Only the suicide of Lori’s “target”, a 13 y.o. girl, Lori’s next door neighbor.
    Google Lori Drew to refresh the memory cells.

  18. ninjaturtle

    There used to be a saying posted on the wall of our IT office: “Don’t open the can of worms unless you’re fishing for trouble.”

  19. tomcat

    Stumbled on this …
    Please don’t sue, will have to sell on Darknet to fund attorneys, will ruin it for Yu & me.
