Cloud, Assurance, Forensics, Engineering

Category: Artificial Intelligence

Observations from RSAC2024 – A Security Roadmap for AI

Most of us have fully recovered from our very busy week at this year’s RSA Conference. The massive cyber security event which takes place in San Francisco with over 60k of my closest cybersecurity friends. As most of us already figured would be the topic de jour, there were very few if any in attendance, who were not talking about GenAI. Specifically, the impacts it is and will have on our industry and the rest of the world as we know it.

I have written about Artificial Intelligence (AI) in the past and how it’s going to be the integration of GenAi and different other solutions which will truly cause significant disruption. GenAI and the combination of other technologies such as robotics, medical, oil and gas exploration, retail delivery, fast food experience, and even tier 1 and 2 security operations center functions. This all sounds really cool and fascinates me with the massive potential GenAI has to impact the world.

Boston Meridian Partners, the company I work at, hosts a reception on Sunday evening each year prior to the conference. We host this meeting for numerous startups and friends from the private equity and venture capital world as well as many C suite executives with interest in cyber security. Our goal the past few years has been to get some top-notch speakers to share their wisdom with the crowd and this year’s speakers did not disappoint.

We had Chris Krebs from SentinelOne, Brian Finch from Pillsbury Winthrop Shaw Pittman LLP, and Kate Kuehn from WTI who shared key points on regulatory issues (Note: Thankfully we have the EU who have established many key requirements for the world to follow as our own US government has been slow to pass any legislation with real teeth). They also spent time talking about risk and the importance of collaboration and coordination. While we discussed many key investor topics around GenAI it couldn’t have been a better way to set the stage for the RSA Conference and our very full week of over 150 meetings from across the community. 1

I took away quite a few pointers as I met with startups, CEOs, speakers at numerous events, and in general discussion around a good craft beer or cocktail in the evenings. Here are some take aways from and things to ponder as we push GenAI initiatives in our companies and industries we support.

  1. As mentioned above, collaboration and coordination are key to success. It might seem like a no brainer but many of us are hardheaded and like to “go it alone” which can be a big mistake. It’s imperative we work closely with industry partners, government agencies, and relevant councils to manage AI-related risks and incidents. Fostering this collaboration will enhance GenAI security across the collective.
  2. Risk – I have spoken on this, written about it, and will shout it from the highest mountain as long as I have air in my lungs; “It’s about the data”. It’s super critical to conduct thorough risk assessments specific to GenAI deployments and focus on the data risk. It’s being sucked like a vacuum into these Large Language Models (LLMs) with little to no understanding where the data is going or how it is being used. It is critical for CIO’s and CISO’s to identify potential vulnerabilities, threats, and attack vectors related to AI technologies.
  3. Zero Trust and/or Secure by Design – We use the term “it’s easier to bake it in than spread it on like peanut butter” but often we find companies doing this very thing. Prioritize security from the outset. Ensure those GenAI systems are designed with zero trust (we trust nothing and no one without verification) and with security in mind, incorporating Multi-Factor Authentication, encryption, and access controls.
  4. Supply Chain and 3rd party security – Extending security considerations throughout the entire GenAI supply chain is now a must do these days. One cannot assume the suppliers are doing the right thing or have you in their best interest. They should, but it’s up to you to verify and set up the appropriate controls and service level agreements. This goes back to the “collaborate” discussion above and ensuring safe and responsible use of GenAI.
  5. Finally, we have the geek moment and have to allow technology and or the “hunters” to red team. This should be performed regularly as GenAI exercises and tabletops with the executive team’s involvement. By simulating attacks organizations can identify weaknesses and improve defenses. Since it’s often illegal to go on the offensive against adversaries we must have strong defenses in place.

Overall, it was another amazing week in San Francisco, and I enjoyed meeting so many innovative companies on the show floor. While GenAI is still in its infancy it has quickly become a show of force from all thing’s cybersecurity. GenAI will speed up our ability to do our jobs (but also the adversaries) but we have to be strategic and work faster through the traditional “blocking and tackling” abyss we so often fall into. Teamwork makes the dreamwork!

If you missed us at RSA, I along with the team at Boston Meridian Partners will be at Blackhat, Las Vegas this coming August so please reach out to us via our webpage and LinkedIn below.

www.bostonmeridian.com

Boston Meridan LinkedIn Page <- Follow this company!

Learn More: CISA Roadmap FAQs, CISA AI Roadmap, Cam Sivesind article on “cisa-roadmap-for-ai”, Grayson Milbourne – Forbes Article on “Small Business Roadmap for AI”

About the author

Shawn Anderson2 has an extensive background in cybersecurity, beginning his career while serving in the US Marine Corps. He played a significant role as one of the original agents in the cybercrime unit of the Naval Criminal Investigative Service.

Throughout his career, Anderson has held various positions, including Security Analyst, Systems Engineer, Director of Security, Security Advisor, and twice as a Chief Information Security Officer (CISO). His CISO roles involved leading security initiatives for a large defense contractor’s intelligence business and an energy company specializing in transporting environmentally friendly materials.

Beyond his professional achievements, Anderson is recognized for his expertise in the field of cybersecurity. He is a sought-after speaker, writer, and industry expert, providing valuable insights to both C-Suite executives and boards of directors.

Currently, Anderson serves as the Chief Technology Officer (CTO) for Boston Meridian Partners. In this role, he evaluates emerging technologies, collaborates with major security providers to devise cybersecurity strategies, and delivers technology insights to the private equity and venture capital community.

Overall, Shawn Anderson’s career journey showcases a wealth of experience in cybersecurity and leadership roles, making him a respected and influential figure in the industry.

  1. https://www.linkedin.com/in/christopherckrebs/
    https://www.linkedin.com/in/brianfinch-cybersecurity/
    https://www.linkedin.com/in/katekuehn/
    ↩︎
  2. www.linkedin.com/in/shawnanderson/ ↩︎

Be vewwwy quiet….The AI Robots are hunting us….

Well, at least they might if we don’t plan appropriately. This blog will explore the world of cybersecurity in AI and the potential of this technology as it advances.

In our ever-connected world, we entrust AI with a plethora of information about our lives, from our daily routines to our most personal records. While this technology offers incredible benefits, it also raises important questions about privacy, security, and control. In this blog post, we’ll explore the impact of AI on our lives, drawing inspiration from a recent miniseries and delving into the crucial role of AI and Machine Learning (ML) in the realm of cybersecurity.

AI in “Class of 09”:

Recently, I watched a fascinating miniseries on Hulu called “Class of 09,” which revolves around an FBI class of 2009. This series delves into AI, taking us through the past, present, and future, offering a unique perspective on technology’s evolution and its effects on society. The central story arc is centered around an AI system that starts as a tool to assist agents but eventually turns into a formidable weapon to identify and confront wrongdoers. As the AI becomes increasingly sentient, it begins to view humans as threats, much like the dystopian scenario depicted in “iRobot.”

The massive amount of data – is an ongoing issue.

Back in the ’90s, the technology world was grappling with the idea of a 1-gigabyte hard drive as a significant storage solution (if we only knew!). Fast forward to today, and we find ourselves in the era of “zetta and yottabyte” (1021 – 1024 data storage, where the scale of information is staggering. To put it in perspective, envision a stack of 8 1/2 by 11-inch papers stacked as high as the Washington Monument – that’s roughly the equivalent of 1 gigabyte of data. Now, multiply that by millions to billions, and you’ll grasp the immense volume of data in the cloud.

Not only is the amount of data and the proliferation of AI an issue, but we also have cyber adversaries operating with ruthless determination, driven by motives that often disregard feelings, morals, and laws. They seek data, money, fame, or other objectives, and they stop at nothing to achieve their goals. In this high-stakes game, we, as defenders of cybersecurity, must act proactively and swiftly.

The Ethics of AI:

This storyline raises an important question: how do we ensure that AI systems are used responsibly and ethically, rather than targeting individuals based on mere suspicion? As AI advances rapidly, we need to implement checks and balances to ensure fairness and control. The line between progress and potential chaos is thin, and we must tread carefully.

Rigorous Security Practices:

To effectively combat threats, rigorous identity practices are essential. Verifying the identity of users and devices is a fundamental step in safeguarding data and systems. Implementing strong identity practices can help prevent unauthorized access and potential breaches.

Security frameworks work for cybersecurity and as I’ve stated in past blogs, “just pick a framework”. You don’t have to be picky, but you should consider one for your particular set of requirements. For some the CIS Benchmark (Formerly Sans top 20) might work, others NIST, CoBIT, or something from ISO. AI should not be any different and you should find a framework for it and around the Large Language Models (LLMs) you will be working with.

As AI and ML continue to evolve, it’s vital to establish a security framework for large language models. These deep learning algorithms are becoming integral in various applications, but their potential misuse can pose significant risks. A structured framework can ensure responsible use and mitigate potential security concerns.

There is a very promising future of AI, if only we used it as a tool in the toolbox. A really fast, smart, and innovative tool but one none the less. The thing about tools is they have to have a purpose and some are complex enough you should learn how to use them properly, so you don’t hurt yourself or others. Despite the massive data challenges, AI holds immense potential for enhancing our lives.

Exciting developments are underway in fields like autonomous vehicles, aerial imaging using drones, robotic surgical systems, exoskeletons, collaborative robots, automated farming, smart home devices, virtual assistants, virtual reality, and space exploration. The future of AI and robotics is indeed bright, limited only by our imagination.

Conclusion:

While AI has the power to transform our lives for the better, it also demands our vigilance and ethical considerations. As we navigate this AI-powered world, it’s crucial to strike a balance between innovation and responsibility. The cybersecurity landscape is evolving, and AI is at the forefront, empowering professionals to safeguard our digital realm. What are your thoughts on this? Follow my page for more insights into the exciting world of AI and technology.

Top 10 AI books you might be interested in

About the author

Shawn Anderson has an extensive background in cybersecurity, beginning his career while serving in the US Marine Corps. He played a significant role as one of the original agents in the cybercrime unit of the Naval Criminal Investigative Service.

Throughout his career, Shawn has held various positions, including Security Analyst, Systems Engineer, Director of Security, Security Advisor, and twice as a Chief Information Security Officer (CISO). His CISO roles involved leading security initiatives for a large defense contractor’s intelligence business and an energy company specializing in transporting environmentally friendly materials.

Beyond his professional achievements, Shawn is recognized for his expertise in the field of cybersecurity. He is a sought-after speaker, writer, and industry expert, providing valuable insights to both C-Suite executives and boards of directors.

Currently, Shawn serves as the Chief Technology Officer (CTO) for Boston Meridian Partners. In this role, he evaluates emerging technologies, collaborates with major security providers to devise cybersecurity strategies, and delivers technology insights to the private equity and venture capital community.

Overall, Shawn Anderson’s career journey showcases a wealth of experience in cybersecurity and leadership roles, making him a respected and influential figure in the industry.