The title might be the biggest “duh” statement ever, but I continue to be surprised at how many technology/cyber professionals miss this. They feel it’s all about the “network” and the “infrastructure”. We can’t really blame them, as there is a good chance these professionals started their careers “on premises” and carried the same understanding and habits with them when they shifted to the cloud.

We cannot use the same thinking in the cloud that we used on prem, because data doesn’t reside within any one domain of control. It spans numerous boundaries: it could be sitting locally on an endpoint, on a server in the local data center, or in a SaaS solution in the cloud. In the SaaS case, the data is sitting on a cloud provider’s network somewhere in the world. Unless you build data residency into your architecture or spell out that requirement in the service level agreement, it could be anywhere. With SaaS, that data sits outside your direct purview of protection, yet you still have a responsibility to protect it wherever it resides.
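
Where you do control part of the estate (infrastructure or platform services rather than pure SaaS), you can at least verify where the data lives. Below is a minimal sketch of that kind of check, assuming an AWS environment with the boto3 SDK and credentials already configured; other providers expose equivalent location APIs.

```python
# Minimal sketch: report which region each S3 bucket's data actually resides in.
# Assumes AWS credentials are already configured for boto3.
import boto3

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # us-east-1 is reported as a null LocationConstraint by the API.
    region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
    print(f"{name}: {region}")
```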

Cloud providers are quick to tell you they are responsible for the protection of the cloud and you, as the customer, are responsible for protection in the cloud. This statement kills me because the “devil is in the details”. Companies are terrible at patching their own on-premises systems, let alone keeping track of the hundreds of VMs they might have in any one cloud provider. In a future blog I will discuss my frustration when technology companies make you “turn the security feature on” rather than telling you “we turned it on, and here are the risks to your data if you turn it off”.
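
To put some numbers behind the “keeping track of hundreds of VMs” problem, here is a minimal sketch of a running-instance inventory across regions. It assumes an AWS environment with the boto3 SDK and credentials already configured; the same idea applies to any provider that exposes an inventory API.

```python
# Minimal sketch: count running EC2 instances in every region.
# Assumes AWS credentials are already configured for boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

total = 0
for region in regions:
    regional = boto3.client("ec2", region_name=region)
    count = 0
    for page in regional.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            count += len(reservation["Instances"])
    if count:
        print(f"{region}: {count} running instances")
    total += count

print(f"Total running instances: {total}")
```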

When we design our topology with a network mentality, we implement solutions originally built to keep people out of the network (or in it), not solutions focused on who might be accessing data in either domain. We need to focus first on data identification so we can figure out how and when to protect it.

In the cloud there must be a renewed focus on data protection and on the security of the applications accessing, moving, managing, or otherwise touching that data. To do this we have to rewire our brains a bit. On prem, we didn’t care about the data as long as it was sitting inside the perimeter of our control. Anyone on the inside was trusted and anyone outside was not. Easy as pie!

It’s not so easy in the cloud age. We need an “assume compromise” and “zero trust” mentality 100% of the time. In my past blogs I have mentioned the importance of due care and due diligence, of implementing multi-factor authentication (MFA), and of picking a security framework. These are the basics, and once you have them in place you can focus on a more holistic ($2 word) data protection architecture. Here are some items to consider in your data protection journey:

  1. The first step is understanding that your data journey is going to be just that: a journey. With the growth of cloud computing, processing capability, and data creation, you should be prepared for multiple petabytes of data, or even exabytes. Think “data ocean” vs. “data lake”[1] and eat the elephant one byte at a time.
  2. Organize a company-wide data risk and threat management team that works across the organization to identify the most critical data and make recommendations and decisions on how best to protect it. The team should include representatives from every department.
  3. Pick a tool that gives you visibility across your whole environment. Consider cloud-based tools with connectors to on-premises tools so you get a full view of everything you have, whether it sits on prem, in the cloud, or in a hybrid multi-cloud setup. This can be a managed service or one of the newer cloud SaaS companies providing these services.
  4. Run a report and then sit down with the management team described above to discuss its output. Develop discussion points to help the executive team understand why protecting this data is important and what the analytics say is most important. The two might be similar, but they are often very different; the most used system and the most important system are not necessarily the same. By this point the organization should have a good handle on where its data is traveling and sitting and which applications are being used to work with it.
  5. Take the data and the input from management and build out the organization’s risk tolerance dashboard showing these systems and their accompanying data. It should capture how critical each application or system is to the ongoing business: if a critical system goes down or data is lost, how long would it take to recover? How long would it take to rebuild? (A minimal sketch of the kind of record such a dashboard could be built on follows this list.)
  6. Run a worst-case-scenario exercise with your IT department and security team. Once they have a good handle on the main issues, invite the leadership and/or business leaders in to conduct a tabletop exercise. This is where you really get to see how decisions would be made and to identify the response gaps those decisions might leave.
  7. Rinse and repeat as often as you can, continuously fine-tuning and working down the list of known issues.
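
Referring back to item 5, the dashboard does not have to start as anything fancy. Here is a minimal sketch of the kind of risk-register record it could be built on; the field names and example values are hypothetical, not a prescribed schema.

```python
# Minimal sketch of a risk-register entry that a risk tolerance
# dashboard could be built on. Field names and values are illustrative.
from dataclasses import dataclass

@dataclass
class SystemRiskEntry:
    name: str                  # application or system name
    data_classification: str   # e.g. "public", "internal", "regulated"
    business_criticality: int  # 1 (low) to 5 (business-stopping)
    recovery_time_hours: int   # estimated time to recover from an outage
    rebuild_time_days: int     # estimated time to rebuild from scratch
    owner: str                 # accountable department or executive

register = [
    SystemRiskEntry("Billing platform", "regulated", 5, 4, 30, "Finance"),
    SystemRiskEntry("Marketing site", "public", 2, 24, 5, "Marketing"),
]

# Sort so the most critical, slowest-to-recover systems surface first.
for entry in sorted(register, key=lambda e: (-e.business_criticality, -e.recovery_time_hours)):
    print(f"{entry.name}: criticality {entry.business_criticality}, "
          f"recover in ~{entry.recovery_time_hours}h, owner {entry.owner}")
```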

Bottom line: companies need to identify a framework, take inventory of their data (both critical and non-critical), and implement a system to monitor across the whole of the company’s environment, including on prem, cloud, and in many cases multi-cloud. Run analytics and build out your risk management strategy and reporting structure. Bring in the leadership early and often to review as you go, making sure everyone knows their role in the process. Finally, don’t be afraid of what the process shows. It’s going to be ugly at times, but this is how we get better: identify the issues and work a plan to address them.

About the author

Shawn Anderson has an extensive background in cybersecurity, beginning his career while serving in the US Marine Corps. He played a significant role as one of the original agents in the cybercrime unit of the Naval Criminal Investigative Service.

Throughout his career, Anderson has held various positions, including Security Analyst, Systems Engineer, Director of Security, Security Advisor, and twice as a Chief Information Security Officer (CISO). His CISO roles involved leading security initiatives for a large defense contractor’s intelligence business and an energy company specializing in transporting “environmentally friendly materials”.

Beyond his professional achievements, Anderson is recognized for his expertise in the field of cybersecurity. He is a sought-after speaker, writer, and industry expert, providing valuable insights to both C-Suite executives and boards of directors.

Currently, Anderson serves as the Chief Technology Officer (CTO) for Boston Meridian Partners. In this role, he evaluates emerging technologies, collaborates with major security providers to devise cybersecurity strategies, and delivers technology insights to the private equity and venture capital community.

Overall, Shawn Anderson’s career journey showcases a wealth of experience in cybersecurity and leadership roles, making him a respected and influential figure in the industry.


[1] James Dixon, “Data Lakes Revisited,” James Dixon’s Blog (wordpress.com)
