
June 30, 2022 · 8m read

Security through Humility — a conversation with Gordon Chaffee

Material Team 

@material_sec 

Material CEO Ryan Noon recently chatted with well-known technologist Gordon Chaffee. Most recently Gordon led security and privacy teams at Google and Google Cloud. Prior to Google, Gordon started a cybersecurity firm and was a key early employee (and SVP Engineering) over a ten year stint at Riverbed Technology.

In this interview Ryan and Gordon discussed the art of leadership, how to design security and engineering organizations, and important trends in security.

What led you to security? Give me a quick rundown.

In the 1990s, I started reading Phrack and classic writeups like “Smashing The Stack For Fun And Profit.” I found it fascinating to see how people looked for ways to break systems.

I joined a super early startup called Valicert (a couple of people in a bedroom looking to raise funding) working on digital certificate validation. This was shortly after the early SSL specs had been written and implemented in Netscape. The Chief Scientist was Paul Kocher, one of the primary authors of the SSL 3.0 spec. There I began learning and using SSLeay (subsequently renamed OpenSSL). In trying to make sure our service would be robust, I would take valid client requests sent to our server and mutate them in various ways (a technique later known as “fuzzing”). At the time, I found and fixed 33 bugs in SSLeay and sent the patches to Eric Young, the maintainer. As I understood it, this was the most extensive set of bugs he’d ever received at once, and it showed me the power of using scalable approaches for finding security problems.
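The mutation approach Gordon describes, taking a known-good request and randomly corrupting copies of it before replaying them, can be sketched roughly like this (a minimal illustration with hypothetical names, not his original tooling):

```python
import random

def mutate(data, n_mutations=4, seed=None):
    """Return a copy of a valid input with a few random byte-level mutations."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_mutations):
        roll = rng.random()
        pos = rng.randrange(len(buf)) if buf else 0
        if roll < 0.4 and buf:
            buf[pos] ^= 1 << rng.randrange(8)    # flip a random bit
        elif roll < 0.7:
            buf.insert(pos, rng.randrange(256))  # insert a random byte
        elif buf:
            del buf[pos:]                        # truncate at a random point
    return bytes(buf)

# A captured, well-formed request serves as the seed input; each mutated
# copy is sent to the target, which is watched for crashes or unexpected
# error paths.
valid_request = b"\x16\x03\x00\x00\x2a" + b"A" * 42  # placeholder handshake bytes
for i in range(1000):
    fuzzed = mutate(valid_request, seed=i)
    # send_to_server(fuzzed)  # hypothetical transport; observe target behavior
```

Seeding each mutation makes a crashing input reproducible, which matters when you want to turn a crash into a patch, as Gordon did with SSLeay.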

I only stayed with the company for about a year, and then I didn’t work in security again for almost 20 years. However, I did continue to follow topics in security. In particular, I liked to read the reports from threat researchers at companies like McAfee or Symantec that reverse-engineered malware and explained how those systems worked.

In 2014, about six months after Docker launched, I co-founded a container security company, Defend7. We saw a need to change security from a host and VM-centric world to one where containers were first-class citizens and services were scaled up and down dynamically. We raised a VC round, but we were early and lost our way as there was minimal container adoption in the industry then.

I then joined Google’s central Security and Privacy team as a Director of Engineering. Most of the team were long-time security practitioners, but I brought a different perspective from my history of working at companies selling to enterprises. At that time, Google struggled to understand enterprise businesses’ needs, so I helped bridge the gap. I spent four years at Google, where I was responsible for the software teams building major parts of Google’s data protection infrastructure, detection and response systems, and Google Cloud security and privacy services. The first public service I was heavily involved in launching was Google Cloud Access Transparency, which leveraged Google’s data protection capabilities to generate audit logs for customers when Google employees accessed Google Cloud customers’ data. When I decided to leave, I was leading a software development organization of ~600 people.

What best practices have you learned around designing organizations and culture?

The culture you instill is fundamental: humility, respect, and trust are its foundation. I think the book The Five Dysfunctions of a Team captures this well. Culture starts with a foundation of trust, which enables different perspectives to be openly discussed. This leads to shared understanding, and I believe better solutions come from shared understanding. Then comes the decision. Everyone needs to commit to the decision, even if they have a different perspective, and be accountable for their part in executing it. I like the notion of “strong opinions, weakly held” since I want people to have starting perspectives but be willing to change their opinions as they learn about different aspects of problems.

Of course, I need to have competent leaders in my teams since I’m critically dependent on them for everything we do. So my job is to get the right people, figure out how to get the most from them, and ensure a safe, supportive environment.

I like the notion of “strong opinions, weakly held” since I want people to have starting perspectives but be willing to change their opinions as they learn about different aspects of problems.

Gordon Chaffee

What recommendations do you have for those looking to develop their security leadership skills?

Join an environment that thinks about security well. For example, Microsoft, Google, Stripe, Netflix, Snap, and Square (Block) are companies that take security seriously, and they have excellent leaders from whom to learn.

Read the writings of thought leaders like Phil Venables, the current Google Cloud CISO and former long-time head of security at Goldman Sachs. I love his writing because he does an excellent job framing complex issues.

Working effectively with others and understanding the business is extremely important in security leadership. Know where the company gets its value so you can shape security around the benefits and risks to the organization.


Who do you see as some of the best leaders in this industry?

I’ve been fortunate to work closely with some amazing security leaders. A few of my favorites include:

Adam Stubblefield: Adam co-leads Stripe security and is a former distinguished engineer at Google. He is one of the world’s best security thinkers and has a fantastic ability to relate to and motivate people. He developed engineering security thought leaders who could balance business needs with security and privacy risks. He always asked excellent questions to help people understand the gaps in their thinking that guided them toward good decisions rather than telling them the answers. He cared deeply about structured paths for developing people.

Niels Provos (@NielsProvos): Until recently, Niels was head of security at Stripe and was the first distinguished engineer in security at Google. He started the Safe Browsing project and led Google security and privacy infrastructure. He always calmly and objectively approached complex challenges and took a profoundly principled approach to getting security right.

Lea Kissner (@LeaKissner): Lea currently heads security and privacy at Twitter. We worked together at Google, and I felt that they were the best personification of privacy at Google. Lea had a deeply nuanced understanding of the different risks and how people reacted to systems. I learned a lot from Lea: it isn’t enough to be secure; you need to give users visual indications that make them understand or feel secure. For example, a secure messaging system may choose not to transmit messages when there is a malicious adversary in the middle, but the indications of failed message delivery will commonly cause a user to switch to an insecure messaging system because they think the secure system is failing.

Phil Venables (@philvenables): Phil joined Google after I left, and Google’s CISO, Royal Hansen, had previously learned from him. He is a fantastic security thinker, which he captures in writings on his website at philvenables.com. One example is where he talks about the importance of good UX in security. We commonly blame “human error” when something goes wrong in a system that requires a human to understand and distill far too much information to make a decision. However, the real problem was likely a poorly designed system that set humans up for failure; given the design, the humans often performed far better than could reasonably be expected.

Heather Adkins (@argvee): Heather’s been at Google for over 20 years, and she was a Senior Eng Director leading the detection and response team at Google. She did a lot of work outside Google on election security, and she was one of the authors of Building Secure, Reliable Systems. She is one of the best communicators about security, and she captures the emotions around issues to help drive action. Her constant worrying about what might go wrong made me feel comfortable that Google wouldn’t be blind to things going wrong.

What are some of the most positive developments you've seen in security in the last few years?

I have a few thoughts here:

  1. The expansion of security key usage. Given how common phishing has been for so long, we needed a much better way to keep people safe than running employees through training exercises. In enterprise environments, security keys can be deployed very effectively, with both a meaningful improvement to security and a good user experience. For example, Google's enterprise implementation is excellent. On the consumer side, security keys aren't usable on most websites, and some sites have odd implementations, such as only allowing one security key to be registered. In addition, almost all sites offer a fallback mechanism that effectively negates the strength of security keys. I'm happy with the progress of the FIDO Alliance and the industry alignment across Microsoft, Apple, and Google on the FIDO standards, and I hope most financial sites will come to support security keys.
  2. The shared security model. I like how developers can make choices that give them a starting point for security. For example, they can make it the cloud provider's responsibility to patch the operating system and application stack, and focus more on the value they provide to their customers. Of course, this doesn't work for all services, but offerings like serverless enable offloading of security and scaling to the providers. It does make the team more dependent on the provider, so the provider must handle security well and make the service reliable.
  3. Confidential computing. This is important because it lets cloud users reduce the cloud provider's presence in the threat model, and ideally eliminate it. This started with Intel SGX, but SGX was very limited: it required largely rewriting applications, or at minimum recompiling them against special libraries and tooling. AMD took an alternate approach with its Secure Encrypted Virtualization (SEV), which has improved over time in newer chips with SEV-SNP to protect against a variety of hypervisor attacks, while Intel TDX provides similar functionality. All the major cloud providers now offer some confidential computing capabilities for VMs, although I don't believe those extend to higher-level services such as cloud-managed databases.

    I like confidential computing for removing the cloud provider from the security boundary. In the past, if a company had its own data centers, its legal team would deal with governmental legal requests. Without confidential computing, governments can go to the cloud providers to access data, and the cloud providers aren't the ones best positioned to push back. Plus, it gets messy around jurisdiction: any government may try to gain access to any data held by a cloud provider. So from my POV, it would be good to see alignment again between governments and the entities associated with those governments, such as the citizens, companies, and organizations of a particular country.

What are some worrisome trends?

  1. Supply chain attacks like SolarWinds and now Okta. Most companies have adopted many SaaS services, so they end up with significant exposure if vendors are compromised. This is again top of mind with the compromise of Okta’s support subcontractor. I think it would be naive to assume that most SaaS vendors handle security well.
  2. Ransomware. This continues to worry me because of the real-world impacts of attacks. But it also brings me optimism because it helps put a price on bad security practices and helps companies justify more significant investments.
  3. The increasing complexity of our infrastructure. AWS provides over 200 services. There are a plethora of SaaS services in use. In addition, any established company has a historical set of systems that typically need maintenance. How can teams realistically understand the risks of using and connecting all these services?

If you could change one thing about the security industry, what would it be?

Much more emphasis on getting the basics right, such as secure-by-construction (credit to Christoph Kern) and defense in depth when building infrastructure. Assume something will go wrong with the first layer of defense and still protect the assets. This is analogous to what Material does by assuming that your Google or Microsoft account has been compromised.

Perhaps a little dated now, but stop claiming ML/AI solves every problem. ML/AI is particularly beneficial when applied to issues like fraud and some kinds of abuse, but it has shown much less apparent benefit in detecting threats. In particular, when a security analyst can’t understand why an alert was triggered, it undermines confidence in the system.

ML/AI is particularly beneficial when applied to issues like fraud and some kinds of abuse, but it has shown much less apparent benefit in detecting threats.

Any specific security resources you recommend?

I read and listen to many books, so there are a ton I could recommend. The first is How to Win Friends and Influence People. I’ve found most of its advice helps make progress with people and teams. In particular, being an effective listener and understanding the other person’s perspective make everything easier. For example, one of the most significant ways I added value at Google was effectively engaging with teams that hadn’t had very good relationships with the security and privacy org. They felt we were asking them for things without listening to their challenges and frustrations.

Other great books include Crucial Conversations, The Five Dysfunctions of a Team, Debugging Teams, Sapiens, and Range: Why Generalists Triumph in a Specialized World.

Being an effective listener and understanding the other person’s perspective make everything easier.