Source: www.hackerone.com – Author: Jobert Abma.
Imagine being part of an organization whose security team manages risk by saying “no” to change in an era of cloud migrations, remote-first work, and increased dependence on our digital presence. People develop an aversion to working with security teams that block them from getting their work done and don’t offer solutions. The result is people making decisions in silos, which often leads to more security risk. This model belongs to the past.
Instead, security teams are turning into enablement teams. They help develop solutions that enable others to get their work done without introducing risks that go unnoticed. They’re transitioning from ambulance chasers to strategically and tactically placing warning signs that reduce accidents altogether. This transition means planning is becoming increasingly important, but teams lack the data insights to predict where risks are most likely to occur and fail to articulate why working on something matters to the business. The result is frustrated engineers and architects, because priorities don’t seem to align with technical debt, incidents, and escalations. So how do we change this?
Data-driven security
We have to become data-driven. To do so, we must have multiple data sources that guide where and what to focus on. Some may say that your annual penetration test is the only input you need, but that doesn’t align with how we develop software. Many organizations ship code multiple times per day, so continuously monitoring, measuring, and iterating is key. Data-driven security teams embrace failure and figure out how to respond instead of believing they won’t make mistakes. They avoid working on the never-ending list of hypothetical risks. We get there by figuring out which problems are most prominent for your organization, and getting there takes time.
We have an atypical source of threat intelligence at HackerOne. Our business revolves around giving organizations access to a community of hackers to reduce cyber risk by hacking them in the name of defending them. With over 200,000 vulnerabilities found by hackers, we have the most robust database of vulnerability trends and industry benchmarks on the planet. These reports represent real-world security weaknesses found by friendly hackers who can think like attackers.
Today at AWS re:Invent, I explored this data. Specifically, I dove into common vulnerabilities in AWS systems, including Server-Side Request Forgery (SSRF), which is worsening in severity; Improper Access Control and Information Disclosure vulnerabilities caused by dangling DNS records; and misconfigured IAM policies. You can read more about the three most common here: HackerOne Joins AWS Marketplace as Cloud Vulnerabilities Rise. In this blog, I’ll explore how to use this vulnerability data to help better defend against common threats and dive into best practices many companies have adopted to mitigate these and other vulnerabilities.
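The post itself doesn’t prescribe specific fixes here, but one widely adopted hardening step against SSRF in AWS environments is requiring IMDSv2, so a forged server-side request can no longer read instance credentials from the metadata service with a plain GET. Below is a minimal sketch with boto3; the region and instance ID are placeholder assumptions, not values from the post.

```python
import boto3

# Require session-oriented IMDSv2 tokens on an EC2 instance so that a plain
# GET issued through an SSRF'd application can no longer reach instance
# credentials at http://169.254.169.254/.
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    HttpTokens="required",             # reject token-less IMDSv1 requests
    HttpPutResponseHopLimit=1,         # keep metadata access on-host only
    HttpEndpoint="enabled",
)
print(response["InstanceMetadataOptions"])
```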
Configuration as code, infrastructure as code
Snowflaking, which tends to happen in fast-growing organizations or organizations without specialized infrastructure teams, often leads to unforeseen security vulnerabilities. To avoid this and scale a team’s development efforts, more and more development teams are adopting configuration as code and infrastructure as code using AWS CloudFormation templates and Terraform. AWS CloudFormation, for example, helps model a collection of resources and manage them throughout their lifecycles by treating infrastructure as code.
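The post names CloudFormation templates and Terraform rather than any one tool’s syntax; as an illustration in Python (my substitution), the AWS CDK, which synthesizes CloudFormation, can express a hardened S3 bucket as code. The stack and bucket names below are placeholders.

```python
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StorageStack(Stack):
    """Example stack: one S3 bucket with security defaults baked in."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self, "AuditLogs",                                   # placeholder name
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,  # never public
            encryption=s3.BucketEncryption.S3_MANAGED,           # encrypted at rest
            enforce_ssl=True,                                    # TLS-only access
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,                 # don't delete data on teardown
        )

app = App()
StorageStack(app, "StorageStack")
app.synth()  # emits a CloudFormation template for deployment
```

Because the definition lives in code, the same encrypted, non-public bucket is reproduced identically in every environment instead of being hand-configured per account, which is exactly the snowflaking this section warns against.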
Although this is a best practice by itself, I’d like to go one step deeper here: over time, you’ll learn more about general pitfalls in your code, but will also learn about issues specific to your architecture. For each of those pitfalls, do a root cause analysis and ask how to prevent the same mistake from happening again. Many of these problems can be prevented using static code analysis tools, like TFLint and AWS CloudFormation Linter. Automating the detection of these will help you reduce risk and scale onboarding of new engineers.
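To make that automation concrete, here is a hypothetical pre-merge check that runs the two linters mentioned above and fails the build on any finding; the repository paths and the assumption that both tools are installed are mine, not the post’s.

```python
import glob
import subprocess
import sys

# Hypothetical pre-merge check for the linters mentioned above. Assumes
# cfn-lint and tflint are on PATH and that the paths below match your
# repository layout.

def run(cmd: list[str], cwd: str | None = None) -> bool:
    print("Running:", " ".join(cmd), f"(in {cwd})" if cwd else "")
    return subprocess.run(cmd, cwd=cwd).returncode == 0

def main() -> int:
    ok = True
    templates = glob.glob("templates/**/*.yaml", recursive=True)
    if templates:
        ok &= run(["cfn-lint", *templates])       # AWS CloudFormation Linter
    ok &= run(["tflint"], cwd="infrastructure")   # TFLint against Terraform code
    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())
```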
Principle of Least Privilege
This goes without saying, but where possible, shift security left and start off doing the right thing. It’s much harder to rectify security risks after the fact.
One good example is starting out with multiple accounts to logically separate systems from each other. This adds some complexity, but it greatly benefits security. When configuring a resource, start from the most restrictive policy and open up from there. In the name of security, it is better to gradually open up a policy over time based on the organization’s needs than to have to restrict it after the fact. Both are components of the Principle of Least Privilege, which, simply put, means that a user should be given only the privileges needed to complete their tasks. If a user doesn’t need access to something, they should not be able to get it. By separating systems from the start and building in the most stringent restrictions at the beginning, it’s easier to set a precedent of non-access than to default to unrestricted access.
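To make “start from the most restrictive policy” concrete, here is a minimal sketch, using boto3, of an IAM policy scoped to reading a single S3 prefix and nothing else; the bucket, prefix, and policy names are placeholders rather than anything the post prescribes.

```python
import json
import boto3

# A deliberately narrow policy: read-only access to one prefix of one
# bucket, nothing else. Bucket, prefix, and policy name are placeholders.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportsPrefixOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/reports/*",
        }
    ],
}

iam = boto3.client("iam")
response = iam.create_policy(
    PolicyName="read-reports-prefix-only",
    PolicyDocument=json.dumps(POLICY),
    Description="Least-privilege example: read a single S3 prefix",
)
print(response["Policy"]["Arn"])

# Widen the policy deliberately as real needs emerge; never the reverse.
```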
Log and monitor everything all the time
AWS provides several services for monitoring itself: CloudTrail for account governance, Config to audit configurations, Security Hub to give insight into your security posture and create a combined view of alerts from different places, and GuardDuty for threat detection. All of them come highly recommended. Many organizations use them in conjunction with platforms like Splunk, Sumo Logic, or their own ELK stack to do additional alerting and monitoring. This is useful for early warning signs, but also for incident response when something does happen and you need to get to the bottom of it.
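As a small illustration of consuming those services programmatically, here is a sketch that pulls active, high-severity findings out of Security Hub, assuming Security Hub is already enabled and aggregating sources like GuardDuty in this account and region.

```python
import boto3

# Sketch: list recent high-severity findings from Security Hub, which can
# aggregate GuardDuty, Config, and other sources into one view. Assumes
# Security Hub is already enabled in this account and region.
securityhub = boto3.client("securityhub", region_name="us-east-1")  # region is an assumption

response = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=10,
)

for finding in response["Findings"]:
    print(finding["Severity"]["Label"], finding["Title"])
```

From here, findings can be forwarded into whatever alerting pipeline (Splunk, Sumo Logic, ELK) the organization already uses.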
IAM profiles over credentials
Identity and Access Management (IAM) profiles and AWS’s Security Token Service (better known as STS) allow security and development teams to seamlessly integrate systems without ever configuring static keys or credentials. Relying on these short-lived credentials instead of static keys reduces the chances of exposing secrets to unauthorized parties. It also cuts the time infrastructure and security teams spend on things like periodic or incidental key rotation and enables you to better scale deployments.
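Here is a minimal sketch of that pattern with boto3: exchange an IAM role for short-lived STS credentials instead of embedding static access keys. The role ARN and session name are placeholders.

```python
import boto3

# Sketch: obtain short-lived credentials via STS rather than storing
# long-lived access keys. The role ARN below is a placeholder.
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/deploy-role",  # placeholder
    RoleSessionName="deploy-session",
    DurationSeconds=900,  # shortest allowed lifetime
)

creds = assumed["Credentials"]  # expire automatically; nothing to rotate
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])

# On EC2, ECS, or Lambda with a role attached, even this is unnecessary:
# boto3.client("s3") picks up temporary credentials from the instance
# profile or execution role automatically.
```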
Hackers empower your teams
Collecting data about which mistakes you’ve made is critical when prioritizing security projects. With the ever-increasing speed of releases and deployments, we need a better model to keep up. Collecting this data means you should also continuously learn from your mistakes and enable the teams you support to move fast without breaking things. Always ask the question: how can I prevent this mistake going forward? Mistakes will happen (that’s why all of us security practitioners are here), so instead of fretting over what happened, focus on continuous improvement.
Pieces of data can come from many places: penetration tests, automated testing, training, and more. At HackerOne, we believe hackers are the missing piece in this puzzle. Their vulnerability intelligence is unlike any other because they can think like attackers. Hackers mitigate cyber risk by searching for, finding, and safely reporting real-world security weaknesses for organizations across all industries and attack surfaces before criminals have the chance to exploit them. Hackers empower you to become a data-driven security team that thrives in a world of ambiguity and change; a team that doesn’t block, but enables.