SolarWinds CISO: Know your adversary, what they want, watch everything

Interview
Nov 04, 2021 | 11 mins
Advanced Persistent Threats, Cyberattacks, Malware

The compromise of SolarWinds' Orion software changed the company's approach to security. Tim Brown shares some hard-won advice for how CISOs and software vendors should prepare for supply chain attacks.

[Photo: Tim Brown, CISO of SolarWinds. Credit: SolarWinds]

Late last year, a group believed to be Russia’s Cozy Bear (APT29) successfully compromised SolarWinds’ Orion update software, turning it into a delivery vehicle for malware. Nearly 100 customers of the popular network monitoring tool were affected, including government entities and cybersecurity company FireEye.

The attacker was able to gain access to SolarWinds’ IT infrastructure to produce trojanized updates to the Orion software. FireEye, which first discovered this software supply chain attack, said it required meticulous planning and manual interaction by the attackers.

While researchers consider the attack noteworthy, so is SolarWinds’ response. The company quickly brought in capable outside help not only to address the immediate crisis, but also to review its security operations and craft a strategy to better guard against future software supply chain attacks. SolarWinds has openly communicated its knowledge of the incident and the steps it is taking to improve its security posture.

CSO spoke with Tim Brown, SolarWinds CISO and vice president of security, about how this incident has changed the company’s approach to security. Brown is responsible for both product and internal security.

How has your role changed since the attack?

Prior to the attack, I didn’t necessarily call myself a CISO because I was focused on both security operations and product security/strategy. My goal has always been a mix of product as well as operations. When you have a product development environment, it’s important that you have that mix. We do take on the security aspects of operations, but our primary delivery is products, so it’s very important that our security team is tied into both sides of that.

Once the dust settled from the actual incident, what was your process for deciding how to move forward?

The first step, during the investigation, was bringing in CrowdStrike to do macro inspections of everything. They worked with us for about five months, really digging into all the details of every workstation, every server.

At the same time, we brought in the KPMG forensics team because we needed a little bit of a different skill set. We needed somebody to focus on the engineering and development environments, and then do micro inspections.

To gain efficiency, we had CrowdStrike focus on macro, KPMG focus on micro. We met with them daily for the first month or two and got a punch list of, “Hey, these are things you should look at.”

One of the things that came out of the investigation was that we needed better visibility across the entire environment. Prior to the incident, I ran my own SOC, a very, very reasonable SOC with good coverage. Workstations and servers now have CrowdStrike Falcon for monitoring.

SecureWorks then takes my AWS information, my firewall information, my Azure information, Microsoft 365, and all my workstation and server information from CrowdStrike. Then my SOC becomes a third SOC. Visibility has gone from one SOC to three, 7×24. That visibility has worked extremely well for us to be able to see everything.

Another change was a full-time red team. Red teaming was part time prior to the incident. Full-time red teaming allowed us to take on a couple of roles. One is internal red teaming of the infrastructure, testing all the controls that we put in place and making sure that they were sufficient and that our SOC was catching the right things.

We’re on a regular cadence to internally pen-test every solution and then we externally pen-test as well. That gives us a complementary approach. It also hooks up closely with the engineering environment, which was doing their own internal security testing.

[We’re] tripling up on security testing by multiple parties: external, internal with my team, and internal to the development team.

How has the security mindset of the company changed for your team and the business in general?

People [tell] me they always have issues trying to get developers to change and to think about security. With this incident, our engineering community was very upset. Somebody broke into their house and changed their environment. Anything that we want to have happen, [the engineering community is] right on top of that. They are very, very in tune with security in general.

One of our pillars for secure by design is people—creating a culture of security. This is an ongoing journey. We do training, we encourage reporting and we involve all employees.

From [our] executive leadership, I don’t think Sudhakar [Ramakrishna], our CEO, goes through an all-hands meeting without talking about security. Security is talked about at all levels.

The other part of it is from the sales team and the sales mindset. What’s top of mind right now for our customers is security. So, it becomes not just an internal thing. It’s also a business enabler.

Software companies, even outside the security industry, want to talk about their security features now.

Absolutely. We see more difficult, more detailed questions being asked by our customers about our security processes. I think that’s good. It allows them to push security companies and product companies in the right direction, and it educates them on what they need to look at.

What guidance or tools are you giving customers to help mitigate supply chain threats?

We have had secure configuration information in many different places. Post incident, we pulled it together into one document and one area to say, “Here’s the way to implement in a secure fashion.”

Especially with the on-premises solutions, it’s a partnership. We need them to be able to take the right actions, and we need them to be able to configure in the right way. We don’t always have insight into how they’ve configured it. In some cases, they’re completely air-gapped and they don’t talk to us. They simply take the product and install it. It’s important that they realize how to appropriately, securely configure, monitor and manage the product.

Are you providing more visibility into your ecosystem and the services that you’re using?

We’ll expose what tools we use. We will tell them, “Hey, we use Checkmarx to do static code analysis. We use WhiteSource to look at open source.”

We will talk more about our SDL [secure development lifecycle] process and the safeguards that we put in place in the environment. And truthfully, like most vendors prior to the incident, we were pretty vague, because it’s an on-premises product. Do they really care how we protect and build? Now everybody does, which is a good step forward. I talk to other CISOs, and they see that the questionnaires are getting harder, the information that they’re requesting is getting more detailed. They’re requesting more openness. It’s good for the industry.
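
To make the idea of SDL safeguards concrete, here is a minimal sketch of the kind of gate a build pipeline can put around scanner output. The JSON report shape, file name, and thresholds below are assumptions for illustration only; they are not the output formats of Checkmarx or WhiteSource, and not SolarWinds’ actual pipeline.

```python
# Generic CI gate: fail the build when a scanner's findings exceed an agreed
# severity policy. The report format here is invented for illustration and
# does not correspond to any specific vendor's output.
import json
import sys
from pathlib import Path

# Maximum allowed findings per severity -- example policy values only.
THRESHOLDS = {"critical": 0, "high": 0, "medium": 10}


def gate(report_path: str) -> int:
    # Expected (hypothetical) shape: [{"id": "...", "severity": "high"}, ...]
    findings = json.loads(Path(report_path).read_text())
    counts: dict = {}
    for finding in findings:
        sev = str(finding.get("severity", "unknown")).lower()
        counts[sev] = counts.get(sev, 0) + 1

    failed = False
    for sev, limit in THRESHOLDS.items():
        if counts.get(sev, 0) > limit:
            print(f"FAIL: {counts[sev]} {sev} findings exceed the limit of {limit}")
            failed = True
    print(f"Summary: {counts}")
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```

A gate like this is what turns “we run static analysis” into a safeguard a customer questionnaire can meaningfully ask about: a scan whose failures cannot be silently ignored.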

You mentioned some things that you’re working on that aren’t implemented yet, like the least-privileged access model for products and internal auditing. Do you have a timeline for those?

Internal auditing, which will be an internal audit of everything from a line of code all the way down to product, is looking like Q1-ish 2022. The least-privileged model for products has already started with documentation and initial implementation.

That’s a start. We’ve already made changes regarding agents and other things to help give people ideas of how you should configure [if you] collect data from this agent. We’ve done things like making it so that the alert system runs under a different account, and you can specify that account with the appropriate privileges.

The next step is integration with privilege management systems so that we don’t necessarily need passwords inside our product. We can get them out of an approved password management system. Those things all start looking toward how we make it so that we have the minimal privileges necessary but can still perform the functions that we’re doing.
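
As a rough illustration of what that integration looks like in practice, the sketch below pulls a service-account credential from a secrets manager at runtime instead of storing a password in the product’s configuration. HashiCorp Vault and its hvac client are used here only as a generic stand-in, and the secret path and key names are hypothetical; the interview does not say which privilege management systems SolarWinds is integrating with.

```python
# Illustrative only: fetch the alerting service account's credential from a
# secrets manager at runtime rather than keeping a password in a config file.
# Vault/hvac is a stand-in; the path and key names below are made up.
import os

import hvac  # pip install hvac


def get_alerting_credentials() -> tuple:
    """Return (username, password) for the least-privileged alerting account."""
    client = hvac.Client(
        url=os.environ["VAULT_ADDR"],      # e.g., https://vault.example.com
        token=os.environ["VAULT_TOKEN"],   # short-lived token issued to the service
    )
    secret = client.secrets.kv.v2.read_secret_version(path="orion/alerting-svc")
    data = secret["data"]["data"]
    return data["username"], data["password"]


if __name__ == "__main__":
    user, _password = get_alerting_credentials()
    print(f"Alerting subsystem will run as: {user}")
```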

And that would help with customers who might not have as stringent access controls as they should?

Exactly. It would just put some safeguards for those customers at the right level. During the incident, we had an Orion Assistance Program with our partners. Our partners would both help people with upgrades and help validate configuration to make sure they were appropriate.

What should the software industry be doing to better protect everyone from supply chain attacks?

First, make sure their own house is in order. Make sure that they are prepared for that level of adversary coming into their own environment. Then have a plan prepared if it does occur. Continue to practice your incident response process.

Second, for the customers, you should make your products so that they are more resilient to inappropriate configuration, more resilient to attack in general. Whether that is guides on how to configure, whether that is tools, whether that is configuration help, it all comes down to helping customers configure appropriately inside their environment to be more resilient.

From the industry perspective, if we look at how [President Biden’s] executive orders are spelled out, it’s more transparency. It’s looking at software bills of materials, it’s looking at all the components that you’re utilizing within products and making them more public. [It’s] understanding and providing information on your development frameworks and your development cycles and what those look like.

That’s going in the right direction from a transparency perspective. The software industry should embrace that reality: not just do the basics, but help make it so that the frameworks and information that we expose really do help to secure the environment and make it more resilient to attack.

What’s the most important task other CISOs should be doing at companies that are likely to be targeted by this kind of an adversary?

One of the first lessons that everybody should be aware of is the level of the threat actor. The nation-state is not some movie prop. It’s a very real threat actor that is patient, extremely thoughtful, on a mission, and very quiet in the environment. All of those things make them hard to discover and hard to combat, and that is the adversary we faced. [Those same models] will start shifting to organized crime.

If you don’t understand what [adversaries] would be after, start there. Start by understanding your environment, start by understanding what they would be after, start by getting that visibility into your environment so that you’re watching everything at every moment, and make sure that you do have broad ranges of visibility across the environment.

Make sure that you have the safeguards installed in your environment. From a development perspective, [make sure] that you’re managing vulnerabilities, ones that you know about, ones that third parties know about, and that you have a process in place to be able to manage them appropriately.

One of the lessons is, no matter how much you practice [incident response], it’s going to be different. When something of this level happens, you just need to be ready with your processes and procedures. We were there until two in the morning every morning for two weeks, simply because there’s just so much to do.

Have the right people on speed dial; you can’t do everything yourself. When you get something at this level, bring in folks who have done it before. From a messaging perspective, from a response perspective, from an investigation perspective, all of those things require having skilled people involved who have been through it before.

About a year before the incident, we put a process in place that every security bug, whether it’s reported externally, by our tools, or somewhere else, [becomes] a Jira ticket, just like regular bugs, but it gets a security tag and a CVSS score. My security team monitors those. If they don’t meet our internal SLA for resolution, they go through our RAF (risk assessment form) process, where I have to sign off on the risk and the head of engineering signs off on the risk. That raises how you deal with vulnerabilities in the product to an appropriate level for making decisions on whether something gets fixed and how it gets fixed.
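
A workflow like that is straightforward to automate. The sketch below, using the Python jira client, flags unresolved security-tagged bugs that have exceeded their SLA so they can be routed into a sign-off process. The label name, CVSS custom-field ID, and SLA windows are assumptions for the example, not SolarWinds’ actual configuration.

```python
# Sketch: find security-tagged Jira bugs that have blown their resolution SLA.
# The label, custom field ID, and SLA windows are illustrative assumptions.
from datetime import datetime, timezone

from jira import JIRA  # pip install jira

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}  # example policy
CVSS_FIELD = "customfield_12345"  # hypothetical custom field holding the CVSS score


def severity(cvss: float) -> str:
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "high"
    if cvss >= 4.0:
        return "medium"
    return "low"


def overdue_security_bugs(client: JIRA) -> list:
    """Return (key, cvss, age_in_days) for unresolved security bugs past SLA."""
    issues = client.search_issues(
        "labels = security AND resolution = Unresolved ORDER BY created ASC"
    )
    now = datetime.now(timezone.utc)
    overdue = []
    for issue in issues:
        cvss = float(getattr(issue.fields, CVSS_FIELD, 0) or 0)
        created = datetime.strptime(issue.fields.created, "%Y-%m-%dT%H:%M:%S.%f%z")
        age_days = (now - created).days
        if age_days > SLA_DAYS[severity(cvss)]:
            overdue.append((issue.key, cvss, age_days))
    return overdue


if __name__ == "__main__":
    client = JIRA(server="https://jira.example.com", basic_auth=("bot-user", "api-token"))
    for key, cvss, age in overdue_security_bugs(client):
        print(f"{key}: CVSS {cvss}, open {age} days -- route through risk sign-off (RAF)")
```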

Have processes in place that make sure you are moving forward on the vulnerability front, because it won’t necessarily be a threat actor coming into your environment and changing code like they did in ours. It could be a threat actor discovering zero-days in your products and being able to take advantage of those. So, make sure you have coverage in both of those areas.