
How do you not make the same mistakes as you did before cloud?


One of the things that every customer struggles with, to a greater or lesser extent, is how to deliver the same security, risk and compliance outcomes as they did in their on-premises environment.

The way that people approach this challenge goes a long way to determining their success. As organisations progress on their cloud transformation journey, they typically try to apply their existing security processes to the workloads they are deploying. This is partially successful in the early stages for a couple of reasons. However, it does not set people up for long-term success.

The reasons that this is, or at least seems to be, successful are as follows. The first significant workload will usually have a dedicated project team and heightened organisational focus, which means there are plenty of people working to drag it across the line. Secondly, at this point in the transformation journey a limited number of AWS services are being used for a single workload, which keeps the security, risk and controls approach similar to the traditional engagement organisations are used to.

One of the unintended consequences of a successful significant cloud workload is the assumption that ‘cloud is solved’. While demonstrating what is possible with this workload, it is important to focus on what the structure that enables the next one, ten or hundred workloads looks like. People should focus on understanding where that first project was dragged over the line, and on building something that enables the velocity the organisation will expect for the next thing.

As organisations accelerate into the hundreds of workloads, their ability to scale the human processes will be tested. The way to fix this is to change the mental model, use technology to support cultural change, and evolve the way that different parts of the organisation engage with each other. It may initially feel that the security and governance parts of the organisation are losing control. In fact, it allows these groups to implement rigorous mechanisms that improve consistency and raise the bar for security. The one thing to be aware of is that this requires support from the non-security leaders in the organisation. The other thing to consider is that stopping forward momentum until this is properly ‘solved’ will not work: if the business cannot access modern ways of building and releasing applications through the structured processes, then shadow IT is a likely consequence.

What do you do?

So, what are the actions organisations can take to do this, and who should own them? There are three key things to focus on. Some are organisational, some are technological and some are cultural. All of them relate to each other in some way.

Firstly, the mental model for security in the cloud needs to evolve. The main human effort needs to move into work that has consequences for multiple downstream consumers. One assessment for one workload is not a scalable use of people’s time. If a person can perform an assessment of a platform, service or technology that enables many people to self-service security outcomes, that is a much better use of their time.

Secondly, automate all the things. This is both a technical and a cultural directive. If there are checks that need to be performed, these should be codified so they run programmatically and automatically. The checks should run as early as is practically possible in the dev, test, prod cycle. The cultural component of this is that consistency of technology choice becomes hugely important. Getting away from snowflakes or ‘my application is different’ means that the architectural patterns and subsequent checks are consistent. This doesn’t mean that people can’t make technology or design choices that affect the applications they build; they should make them within the boundaries of understood patterns. Application teams should be able to spend most of their time on things that affect their applications and consume the rest of the capability. The focus of the organisation should be to help all teams ‘ship securely’. Automated validation of configuration with fast feedback to the people who own the application is a key part of that capability.
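As a minimal sketch of what a codified check with fast feedback might look like, consider a rule that runs early in a pipeline against a declared resource configuration. The resource shape, field names and rules here are illustrative assumptions, not tied to any specific tool.

```python
# Illustrative sketch: a codified security check that can run in CI
# before deployment, giving fast feedback to the application team.
# The configuration shape and rule names are assumptions for this example.

def check_storage_config(config: dict) -> list[str]:
    """Return a list of human-readable findings; empty means the check passed."""
    findings = []
    name = config.get("name", "<unnamed>")
    # Rule 1: encryption at rest must be enabled.
    if not config.get("encryption_at_rest", False):
        findings.append(f"{name}: encryption at rest is disabled")
    # Rule 2: public access must be blocked (default to failing closed).
    if config.get("public_access", True):
        findings.append(f"{name}: public access is not blocked")
    return findings

# Fast feedback: fail the build before anything reaches an environment.
bucket = {"name": "app-logs", "encryption_at_rest": True, "public_access": False}
print(check_storage_config(bucket))  # an empty list: the check passed
```

Checks like this are cheap to run on every commit, which is what makes early, automated validation practical for both application code and infrastructure code.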

Finally, organising around product teams, specifically teams that provide capability for others to consume, allows people to focus on the things they should. Security is everybody’s responsibility, but a platform that delivers foundational components is the best way to organise in support of that. The platform team should operate transparently and with a mechanism for receiving feature requests that are assessed in the context of business priority. There are three key things to get started with:

  • Federation of access into the AWS platform for users
  • Automated AWS account provisioning
  • Consumable logging and monitoring services
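To make the second item concrete, here is a sketch of the intake side of automated account provisioning: validating a request against organisational conventions before any AWS API is called. The field names, naming convention and baseline components are assumptions for illustration; in practice the approved request would typically flow on to something like AWS Organizations’ CreateAccount.

```python
# Illustrative sketch: validate an account-provisioning request against
# organisational conventions. Names and conventions are assumptions;
# the real provisioning step (e.g. AWS Organizations CreateAccount)
# would only run once validation passes.
import re

# Baseline capabilities every new account must be wired into.
REQUIRED_BASELINE = {"logging", "monitoring", "federated_access"}

def validate_account_request(request: dict) -> list[str]:
    """Return a list of validation errors; empty means the request is acceptable."""
    errors = []
    # Convention (assumed): kebab-case name ending in an environment suffix.
    if not re.fullmatch(r"[a-z0-9-]+-(dev|test|prod)", request.get("name", "")):
        errors.append("account name must be kebab-case and end in -dev/-test/-prod")
    missing = REQUIRED_BASELINE - set(request.get("baseline", []))
    if missing:
        errors.append(f"missing baseline components: {sorted(missing)}")
    return errors

request = {"name": "payments-dev",
           "baseline": ["logging", "monitoring", "federated_access"]}
print(validate_account_request(request))  # an empty list: ready to provision
```

Encoding the conventions in code like this is what lets the platform team offer provisioning as a self-service capability rather than a ticket queue.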

Working at the right level

One thing that customers struggle with is the concept of service governance, or service whitelisting. It is very easy to get sucked into a ‘but what about this edge case?’ mindset when application teams want to use new services. The key things here are to ensure that the architecture function in an organisation does an initial (lightweight) validation that the service looks useful and that there is not already a way of solving the particular problem, and to understand the appropriate level of service governance. The requirement is to quickly enable a service for consumption while ensuring that the organisation is protected. This means providing recommendations for additional protective or detective controls that can be implemented to allow safe usage. The requirement is not to assess every single edge case or potential threat related to a service. There should already be platform, organisational or process controls for some things; for example, code written by a human in dev should be validated before it is deployed. This validation should be as automated as possible and apply to infrastructure code as well as application code.

The other thing to keep in mind is that the platform should enforce the principle of ‘keep the humans away from the data’. Ideally, interactive access should exist only in the dev environment; any other environment should be deployed by pipelines that are well understood and consistently applied. This may take some adjustment from the business and need updates to applications. Exempting teams from this, and not driving a culture of automation and repeatability, will result in problems down the track.
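A lightweight service-governance decision can itself be codified. The sketch below, with an illustrative allowlist and control notes that are assumptions for this example, returns either the recommended controls for an enabled service or a pointer back to the platform team’s feature-request mechanism, rather than attempting a full per-edge-case assessment.

```python
# Illustrative sketch: lightweight service governance as an allowlist with
# recommended controls per service. The control notes are assumptions for
# this example, not a complete control set for either service.

ALLOWED_SERVICES = {
    "s3": ["block public access at the account level",
           "require TLS in bucket policies"],
    "lambda": ["restrict execution roles to least privilege"],
}

def assess_service(service: str) -> tuple[bool, list[str]]:
    """Return (allowed, guidance) for a requested service."""
    controls = ALLOWED_SERVICES.get(service)
    if controls is None:
        # Not yet assessed: route the request to the platform team's
        # feature-request mechanism instead of blocking indefinitely.
        return False, ["raise a feature request with the platform team"]
    return True, controls

allowed, controls = assess_service("s3")
print(allowed, controls)
```

In practice this kind of allowlist is often enforced organisation-wide with mechanisms such as service control policies, with the recommended controls published alongside it so teams can self-serve safely.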

Have a look at the talk GRC304 for more detail on the mental model, plus some technology examples for how to deliver this outcome. Everything should be focused on enabling teams to ship securely. Every time.