As GitHub’s first chief security officer, Mike Hanley has the mammoth task of overseeing all things security across the organisation, from corporate IT security and compliance to the security of its platform, which millions of developers rely on to collaborate and build software each day.
His appointment earlier this year also comes at a time when security threats are growing amid the pandemic, including the sophisticated SolarWinds attack, in which nation-state actors allegedly subverted the software build environment to inject malicious code.
Under his charge, GitHub has been driving efforts to increase adoption of security best practices among developers and organisations, building on the company’s developer-first approach to security.
In a wide-ranging interview with Computer Weekly, Hanley shares his views on what the cyber threat landscape means for developers, what it takes to build secure software and his priorities for the year ahead.
How has the pandemic changed the developer landscape with the growing demands on IT and remote work?
Hanley: At a high level, there have been a couple of interesting trends that are reshaping the developer landscape. First, obviously, with Covid-19 and all of us working from home, you can no longer walk up to somebody’s desk to see what they’re working on or talk to them about how you’re implementing a feature. We’ve completely changed that situation now, so we’re more reliant than ever on remote collaboration tools to get our jobs done, stay connected to our teams and coordinate development.
That means the platform and tools we use have to be able to support increased traffic, with additional features and functionality to make sure the experience of collaborating on a platform is even better than what you could get in person. The nice thing is that GitHub was designed for highly remote and distributed teams. If you think about the open-source community, it was always distributed, and you always have people working together from around the globe to build software together.
We were well-prepared in terms of getting infrastructure to support those changes that are happening in the ecosystem around us. We believe that with more development moving to the cloud, it’s imperative to be able to access online talent across the globe. GitHub is very much leaning into how we can facilitate the idea that development is moving to the cloud with things like Codespaces, but also getting additional features and capabilities baked into the product.
We’ve also seen some high-profile sophisticated attacks like the SolarWinds incident. What is your sense of those sorts of attacks and how should developers prepare to address them? Some solutions have been proposed, such as requesting a software bill of materials and enabling reproducible builds.
Hanley: I think a lot of the projects that you just mentioned are super important to building trust into the broader software ecosystem, which has been shaken by an unprecedented volume of supply chain attacks.
At the same time, while we might continue to invest and work together on those projects with the broader community ecosystem – GitHub, for example, recently became a premier sponsor of the Open Source Security Foundation – developers have a lot of things that they can use to protect the ecosystem.
One example is turning on multi-factor authentication (MFA) for their GitHub account, and we’ve done a lot of work to ensure account security on GitHub.com and that the ways people interact with its features are secure. To your point, attackers are quite sophisticated and often have large teams doing those attacks. Together, they are looking for easy places to get in that will help prevent attribution. If you leave the front door to your house unlocked and you don’t have cameras around, it’s pretty easy for somebody to just use the front door or do whatever they need to do to get in, like putting a ladder against the side of the house or cutting a hole in the roof.
Likewise, for accounts that don’t have MFA in place, I can password-spray or phish the account holder to get right in the front door of the house. That makes it much harder for me to be attributed. Adversaries, being rational actors, are just trying to get the job done, and they will search for easy ways to blend in with things like that, which is unfortunately all too common.
So, really helping to drive the basics of things like good account security and hygiene makes it substantially harder for attackers to take over somebody’s account. By extension, if you’re working in an organisation, whether it’s an open source project or a corporate organisation, requiring the participants or maintainers to use MFA to contribute to the projects is another table-stakes security measure that you can deploy.
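For organisations on GitHub, requiring MFA can be driven through the REST API as well as the web UI. The sketch below builds (but deliberately does not send) such an API request in Python; the `two_factor_requirement_enabled` field and endpoint shape are my reading of GitHub's "Update an organization" endpoint, so check the current API documentation before relying on it, and note that the org name and token here are placeholders.

```python
import json
import urllib.request


def build_require_2fa_request(org: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a GitHub REST API request that would
    require all members of `org` to have two-factor authentication
    enabled. Assumes the `two_factor_requirement_enabled` field on the
    "Update an organization" endpoint -- verify against current docs.
    """
    body = json.dumps({"two_factor_requirement_enabled": True}).encode()
    return urllib.request.Request(
        url=f"https://api.github.com/orgs/{org}",
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )


# Placeholder org and token for illustration only.
req = build_require_2fa_request("example-org", "ghp_example_token")
print(req.get_method(), req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) would need a token with admin rights on the organisation.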
One of the things that has been talked about a lot in the past few years has been this idea of DevSecOps. What are your thoughts on that? There are some who believe that security should be part of DevOps anyway and then there’s the other group which feels that having security in the term emphasises the need to have security teams as part of DevOps teams.
Hanley: I think you correctly pointed out that it’s a philosophical debate in terms of which one people subscribe to. Regardless of which route people take, what I think is true is that whether you call it DevSecOps, or DevOps with a separate security function or some other software development lifecycle with a security model in place, I always go back to the security hierarchy of needs pyramid published by Forrester several years ago.
At the bottom of the pyramid, there are three things which I think are most important. One is your security strategy in terms of what you are trying to protect and how, and the second is developing a talent pool to execute that strategy. The third is around practices and policies.
Near the top of the pyramid, there are all kinds of things that people sometimes chase too quickly before they have a strategy in place. Through training and acquiring the best talent to execute that strategy, you can adapt to any model, whether you believe in DevSecOps or some other software development model that you subscribe to.
Do you see security teams remaining separate from a lot of development teams?
Hanley: Yes, I do, and it varies by organisation. I personally find that a lot of the best security work is happening in engineering teams. My personal philosophy, and how we do it inside GitHub, is that while I have a large, well-resourced team, our focus is on delivering great outcomes together with our partners in the engineering teams through a security partner model.
And we embed with teams to make sure they can focus on building things. We are there to support every phase of the design lifecycle and the security requirements that need to be baked in, from testing, deploying and securely operating a service. This might seem like a DevSecOps model by some definitions, but my approach is about the security team enabling engineering teams to be security superheroes.
Could you share more about the automation-related initiatives that GitHub is driving to ease the pressure on development and security teams?
Hanley: The example most people are familiar with is secret scanning, which addresses a common developer pattern where out of convenience, or because you’re in the middle of your work, the easiest way to get the job done is to just commit the secret into code. Sometimes that’s intentional, sometimes it’s not, but it is unambiguously an insecure practice. GitHub has that built right into the flow so we can flag that for you. We work with dozens of partners to get those secrets invalidated, protecting you from that mistake if you do that in a public repository, for example.
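At its core, the detection side of secret scanning is pattern matching against known credential formats. The toy sketch below shows the idea in Python; the two patterns are illustrative approximations of common token shapes, whereas the real service maintains a much larger set of provider-specific patterns with its partners and handles invalidation, which this sketch does not.

```python
import re

# Illustrative patterns only -- real secret scanning covers many
# provider-specific credential formats maintained with partners.
SECRET_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
}


def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs for anything in `text`
    that looks like a committed credential."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits


# A fabricated diff containing a fake AWS-style access key ID.
diff = 'AWS_KEY = "AKIAABCDEFGHIJKLMNOP"\nurl = "https://example.com"'
print(scan_for_secrets(diff))
```

Running a check like this in a pre-commit hook catches the secret before it ever reaches a remote repository, which is cheaper than invalidating it afterwards.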
Another example closer to developers who do the day-to-day writing of code is our code scanning technology built on CodeQL. We work with the open source community, industry and research community to bring in queries that add value to code scanning to our customers and open source projects that use it.
We’re helping to prevent people from ever shipping a software defect in the first place because we learn from the corpus of known bad patterns, bug bounty submissions or things that people have reported to us. We’re heavily focused on making sure we continue to grow the language support for that product and increase the density of queries that are baked into each language we support.
The goal is to allow developers to focus on the things that they want to do, which is to build features and capabilities. We help to make the security part of that as easy as possible, so it’s sort of giving them the security superhero cape – even if they don’t have a deep expertise and background in the field.
Attacks against OT systems
With the significant increase in the number of attacks against operational technology (OT) systems, are your customers who operate OT systems using GitHub in any way to address the challenges they are facing?
Hanley: I’ve actually spoken to some customers in the past few weeks who work with embedded systems and OT. The general nature of some of those conversations is that in a lot of cases, the constraints are really around the hardware lifecycle that they have to deal with.
For example, you might have a plant floor device that’s got a 25-year lifespan, and so you are heavily limited by the capabilities that got shipped to you 10 to 15 years ago.
There are some unique challenges associated with the hardware model, but that’s just a generalised comment about some of the challenges that exist there. The software and firmware development for those can happen on GitHub, just like any other software project. But I do think the OT space specifically has many challenges associated with hardware lifecycles.
Priorities moving forward
What are some of your priorities for the next year or so?
Hanley: GitHub is a key player in helping to maintain the trust and security of the open source ecosystem. As you know, most open source software lives on GitHub and it’s a great privilege for us to be able to host these awesome projects and communities.
That also comes with great responsibility, as we have to make sure we protect the platform, create a safe space for people to work on projects and make it easier for developers to be secure. It can be difficult for developers who don’t have a security background to meet security outcomes and objectives that they care about. So, the workflows and tools we provide on GitHub make those capabilities broadly available to customers and open source developers – and doing it in a way that’s in context and in line with the way they’re developing software matters greatly.
We’re also focused on how we can help people to adopt better security practices. We recently deprecated password authentication for Git operations, but we’re also looking for ways to drive up adoption of MFA on GitHub.com accounts and give organisations more visibility into the security posture of their repositories.
We are also looking to help people better trace down their dependencies, and provide better data in the GitHub Advisory Database so that people can make more informed decisions about their exposure to a vulnerable dependency. Those are the types of things we want to continue to focus and double down on because they will have a substantial impact on the entire ecosystem when we do those things right.
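Conceptually, checking dependencies against advisory data comes down to comparing each pinned version with the version range an advisory covers. The Python sketch below illustrates that matching step with made-up package names and advisory entries; in practice the advisory data would come from a source such as the GitHub Advisory Database rather than a hard-coded table.

```python
# Toy sketch: flag pinned dependencies that fall below the fixed
# version named in an advisory. The entries below are fabricated
# for illustration; real data would come from an advisory database.
ADVISORIES = {
    "examplelib": (1, 4, 2),  # hypothetical: fixed in 1.4.2
    "otherlib": (2, 0, 1),    # hypothetical: fixed in 2.0.1
}


def parse_requirements(text: str) -> dict[str, tuple[int, ...]]:
    """Parse simple 'name==X.Y.Z' pins from a requirements-style file."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, _, version = line.partition("==")
            deps[name] = tuple(int(p) for p in version.split("."))
    return deps


def vulnerable(deps: dict[str, tuple[int, ...]]) -> list[str]:
    """Return names of pinned packages still below the fixed version."""
    return [name for name, ver in deps.items()
            if name in ADVISORIES and ver < ADVISORIES[name]]


reqs = "examplelib==1.3.0\notherlib==2.0.1\n"
print(vulnerable(parse_requirements(reqs)))  # -> ['examplelib']
```

Tuple comparison gives a simple version ordering here; real version schemes (pre-releases, epochs) need a proper version-parsing library.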