SecureStack Organizations is now available

The SecureStack team has been working on this release for months: it introduces several new concepts that required us to rearchitect many parts of the platform.  Most importantly, it delivers several new pieces of functionality.

Role-based access control

The first of these new features is our new Organizations functionality which allows you to create a SecureStack Organization and share access to that Org with your teammates, partners, and compliance bodies.  Initially, the Organization functionality will allow the creation of users with 3 roles:  Owner, Admin, and User.  However, over the next two months, we will introduce even more granularity around access controls.  As an example, we want you to be able to share just the compliance report for one application with an auditor or partner and not the whole application.


Software bill of materials (SBOM)

We’ve been working on this feature for a long time, and it is incredibly important to us.   The reason it’s so important is that we don’t believe that the SBOM tools that exist today are giving customers what they really need, which is a comprehensive “ingredients list” of their software.  Instead, existing tools focus on showing customers *some* of the open-source libraries that they are using in their applications.  Unfortunately, that’s like showing someone a wheel when they really want to see the whole car:  it’s just a small part of a larger, more complex thing.


Continuous Compliance

SecureStack is the first platform to provide its customers with real-time compliance reporting on their software development lifecycle (SDLC).  Every time your CI/CD pipeline is run we take a snapshot of the application’s compliance exposure.  SecureStack can map your SDLC to ISO27001, SOC2, CIS and the Australian ISM.  


SecureStack provides security coverage across the whole of your SDLC

Our platform helps you protect your most valuable asset:  Your source code.

SecureStack is easy to use as it’s a SaaS-based platform so you can be up and running in less than 3 minutes with complete coverage.


If you like what you see, book a demo!


Paul McCarty

Founder of SecureStack

DevSecOps evangelist, entrepreneur, father of 3 and snowboarder

Forbes Top 20 Cyber Startups to Watch in 2021!


LastPass hacked and source code stolen

LastPass was hacked and their source code was stolen.  What can we learn from this?

LastPass just sent an email to its customers saying that it has been hacked.  Again.  LastPass has publicly disclosed that it’s been hacked at least 4 times: in 2011, 2015, 2021, and now in 2022.  In addition, there have been a number of other security events where security researchers identified vulnerabilities in the LastPass platform that could have led to data loss or password compromise.  You can see the list of historical security incidents here.  This time, the attack vector is different than in previous incidents, so let’s drill into what happened.

What we know so far:

  • LastPass alerted its customers yesterday that there had been a security incident and that they had hired an incident response company.  We don’t know which IR firm they’ve hired.
  • The attack happened two weeks ago and targeted its development environment.
  • LastPass is saying that a single developer’s account was compromised and that source code and some “proprietary LastPass technical information” was accessed.
  • LastPass says that no passwords or vault material was accessed.
You can find the LastPass official blog post here:

Other than that, there are no other details, which is a bit concerning.  But, let’s take this at face value right now.   According to the LastPass press release no passwords appear to have been stolen or accessed.  This is a good thing!  Unfortunately, this type of attack is potentially more dangerous than a data breach where a set of point-in-time passwords was stolen.  Let me explain why.

How could this be worse than my passwords being stolen?

This attack appears to have targeted the LastPass development environment where the company writes, stores, and manages its source code.  It’s essentially where all the intellectual property for a software-based company is stored.  And the reality is that most companies are software-based now, with most of their revenue and growth tied to their software-based systems.  This centralized storage of a company’s intellectual crown jewels is a very sexy target for criminals.  Unfortunately, as we’ll see below, this core IP is not as protected as other parts of the IT infrastructure.


Developers are the best place to target if you want to own someone

Software engineers have access to the source code for the platforms they work on.  For LastPass, that access probably includes how password encryption is performed, when MFA challenges are triggered, and how LastPass detects anomalies.  These are core fundamentals of how the LastPass Vault works.

So imagine if you had access to that source code and you could change one small thing that sent all the passwords to a remote API endpoint you owned?  Or worse, maybe you sabotaged the encryption algorithm to give yourself a backdoor so you could easily decrypt any master passwords stored with LastPass. Or more realistically, if you stole the source code and didn’t have the opportunity to add anything malicious to it, you would still have access to how core LastPass functions worked because you had the source code.

All of that is speculation, and hopefully, that’s not what’s happened with LastPass this time.  But the reality is that access to source code is one of the most sensitive things you can get, and yet we don’t generally protect it very well.

Why are developers so easily targeted?

Most source code management (SCM) platforms use someone’s work email for the username that they use to interact with the SCM platform.  This is common practice as it’s the easiest way to manage a large number of devs instead of letting them choose their own usernames.  However, this means that if you know a developer’s email you have half of their login credentials already.  This is made worse by the fact that many orgs don’t sync their SCM logins with Active Directory so the management of those SCM creds is a manual process.  Also, the use of password controls mandated by AD is ignored by SaaS providers like Bitbucket, GitLab, and GitHub as they are managed independently.

Unfortunately, this common practice means that developers are constantly targeted by automated phishing campaigns, which can be very effective because the real login pages are easy to copy.  These campaigns often use domains that look similar to GitHub’s but are in fact malicious.


Okay, if it’s so easy to phish developers, what other security controls can we use?

The reality is that developers don’t typically use any security controls other than a username and password, which we’ve just shown are under constant attack.  The version control system most developers use, Git, has several security functions built into it.  Unfortunately, most development teams don’t take advantage of these security features, so if you can successfully phish them, you are in.  I can’t say this is how the LastPass incident went down, but it’s certainly possible.

Contrary to what you might have heard, security is relatively easy to add to a developer’s workflow

There are many security controls that software engineers can use to secure how they write, test and deploy code.  Even better, many of them are simple, easy to implement, and are automatable.  Why then do so few devs actually use these security controls?  In my experience, it’s a cultural thing.  They aren’t incentivized as software engineers to prioritize security, so they don’t.  We need to change this culture and instead help devs understand that if they use these security controls, we are all better off.

Earlier this year I wrote and published the DevSecOps Playbook, which is an open-source step-by-step guide to implementing DevSecOps for any type and size company.

You can find the Playbook at:

The DevSecOps Playbook contains 58 controls and is very detailed and prescriptive.  If you are looking for some simpler tips you can use to protect your software engineers from what happened to LastPass, we’ve got you covered!

6 easy things to increase source code security

Enable MFA for SCM

Bitbucket, GitHub, and GitLab all support the use of MFA for logins.   This simple security control means that there’s more than just a crappy password between your source code and attackers.  Even better, if you use GitHub or Bitbucket or GitLab for SSO to other platforms the requirement to use MFA at login will propagate across automatically, providing security there too.

The obvious limitation with this method is that you only get an MFA challenge when you log in, so it doesn’t provide any security when you are actually interacting with code (pulls, fetches, clones, etc).  That’s why the next option is particularly important.

Use SSH keys to interact with SCM

There are two protocols supported when you interact with source code in your SCM.  The default is HTTPS, which is encrypted but doesn’t offer any security beyond your username and password.  The other option is to use SSH for transport, which requires SSH keys as it’s PKI-based.  This is significant as you can change, expire, and update your keys based on company requirements.

My team has gone even further, as we use an app that manages SSH private keys behind an MFA challenge.    This means that each time we push code we get a challenge on our mobile devices.  Combining this control with the first one (MFA) adds a layer of security that is very powerful.  You can see how to add SSH keys to your GitHub account here.
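If you’ve never set this up, generating a key and switching a repo over to SSH transport takes about a minute.  A minimal sketch (the email address, org, and repo names are placeholders):

```shell
# Generate a modern Ed25519 keypair (use a real passphrase, or better, a
# hardware-backed key; -N "" is only for this illustration).
KEYDIR="$(mktemp -d)"
ssh-keygen -t ed25519 -N "" -f "$KEYDIR/id_ed25519" -C "dev@example.com" -q

# Upload the PUBLIC half to your SCM provider (on GitHub: Settings ->
# SSH and GPG keys -> New SSH key), i.e. the contents of:
cat "$KEYDIR/id_ed25519.pub"

# Switch an existing clone from HTTPS to SSH transport (placeholder repo):
#   git remote set-url origin git@github.com:your-org/your-repo.git
# Then verify authentication:
#   ssh -T git@github.com
```

The private half never leaves your machine; only the `.pub` file is uploaded.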

Signed commits

When an engineer makes an important change to source code they will “commit” that code to version control so it’s protected.  Unfortunately, by default git just keeps track of the author via the author’s email address, which can easily be faked.  If you want to verify that the person claiming to be Linus Torvalds really is Linus Torvalds, you need to sign your commits with GPG.  It takes 60 seconds to set up and can then be easily automated so that every commit is signed.  Find out how to sign commits here.
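Assuming you already have a GPG keypair, the setup looks something like this (the key id below is a placeholder; use the one shown by `gpg --list-secret-keys --keyid-format=long`):

```shell
# Tell git which key to sign with, and sign every commit by default.
git config --global user.signingkey 3AA5C34371567BD2   # placeholder key id
git config --global commit.gpgsign true

# Plain `git commit` now produces signed commits; verify the latest one with:
#   git log --show-signature -1
```

With `commit.gpgsign` set, no one on the team has to remember to pass `-S` on every commit.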

Scan your source code for vulnerabilities

While this control wouldn’t have stopped the LastPass hack, it’s still a super important security control.  Good engineering teams will use software composition analysis (SCA), secret scanning, and static analysis (SAST) code scanning solutions in their local environments.  It’s super important to scan source code for issues when it goes through your CI/CD pipelines, but it’s important to do it on your developers’ laptops too.  Git provides an automated way to do this when you commit code, called a pre-commit git hook.   Unfortunately, a lot of engineering teams are NOT running security tooling on their laptops, so there is no way to stop security issues or malicious code from getting into the centralized source control system.  Yes, you can eventually remove it there too, but it’s a lot harder than stopping it before it gets there.  I like the analogy of someone poisoning a city’s water supply:  it’s a lot easier to stop them from pouring the poison into the lake than it is to take the poison OUT of the lake.
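A pre-commit hook is just an executable file at `.git/hooks/pre-commit`; if it exits non-zero, the commit is aborted.  Here’s a minimal sketch that blocks a couple of obvious secret patterns; a real team would invoke a dedicated scanner (e.g. gitleaks or trufflehog) here instead of a grep:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- runs before every commit; a non-zero exit aborts it.
# Minimal illustration: block obvious secret patterns in the staged changes.
if git diff --cached -U0 | grep -qE 'BEGIN (RSA|OPENSSH|EC) PRIVATE KEY|AKIA[0-9A-Z]{16}'; then
    echo "pre-commit: possible secret in staged changes, commit aborted" >&2
    exit 1
fi
exit 0
```

Save it as `.git/hooks/pre-commit` and mark it executable with `chmod +x`.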

Sync your SCM provider to AD

One of the common themes I see is that enterprise orgs will use a cloud-based SCM like GitHub or Bitbucket, but not connect their active directory to it (usually because of cost).  Because they aren’t linked, the users in the SCM platform must be managed separately.  This causes challenges as users have to manage two separate sets of credentials, one for AD and one for SCM.  It also means that accounts are often out of date with people still having accounts in SCM when they no longer work for the company.  Onboarding, off-boarding, password resets, and everything else is more complicated in this scenario.

Security training for your software engineers

Because of the unique security challenges presented to developers, they need specialized security training.  That’s why we use partners that teach how to create more secure code, but also how to secure the systems they use to manage the code.  Reach out to me if you would like an intro to one of our partners that does this.




How to enforce HTTPS on your web application

Enforcing HTTPS is a lot harder than most people make it seem

So, lemme break this down into the 7 (yes, 7!) different things you’re gonna need to configure to actually serve your web application exclusively over HTTPS:

  1. Create the unencrypted “origin” service.
  2. Create a load balancer, CDN, or other service to answer inbound encrypted requests.  If you are going to do this all on one server, it’s harder to get right.
  3. Create a valid certificate for the encrypted endpoint, whether that’s on an origin server, load balancer, CDN, etc.
  4. Redirect requests from port 80 to 443, preferably at the load balancer.  If you do this on a server, make sure you are redirecting all inbound HTTP to HTTPS.
  5. If you are using a load balancer or CDN, make sure you firewall the origin server or service so that clients can’t connect directly to it and bypass the control.
  6. Check that all links (especially external links!) in your app or website are using HTTPS.
  7. Enable HSTS, which will *enforce* the “https://” part, meaning that if you haven’t ticked the box in #5 above you will get an HTTP error.

See, not so easy!  Unfortunately, many security frameworks gloss over this complexity so the people actually implementing the control aren’t aware of how complex it is to “serve HTTPS exclusively”.
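To make steps 1–5 and 7 concrete, here’s a minimal sketch of what this can look like on an nginx load balancer.  Hostnames, paths, and the origin address are placeholders, and the certificates are assumed to already exist:

```nginx
# Step 4: redirect every plain-HTTP request to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

# Steps 2 and 3: terminate TLS here with a valid certificate
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Step 7: HSTS -- browsers will refuse plain HTTP for this host from now on
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        # Step 1: the unencrypted origin, which (step 5) should be
        # firewalled so only this load balancer can reach it.
        proxy_pass http://10.0.0.10:8080;
    }
}
```

Step 6 (auditing links) still has to happen in your app or site itself; no server config can fix hard-coded http:// links.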








Risky Business

Risky Business Podcast

In this edition of Snake Oilers we’ll be hearing from Google Security — Anton Chuvakin is appearing on their behalf to talk more about how switching to its cloud-native SIEM actually makes sense now. Paul McCarty from SecureStack will be along to talk through their latest stuff, and it’s interesting actually because they’re doing software composition analysis that includes a lot more information than just what code is going into an application — what services is the application using? Which APIs? They’ve also built some really nice compliance tools where you can do a single scan and see how you measure up against various regimes…

You can find the original podcast at the site here:




Australian ISM – Guidelines for Secure Development

The Australian Cyber Security Centre (ACSC) is the arm of the government that provides guidance on how to improve cybersecurity in Australia.  As part of this mandate, they have been producing a document called the “Information Security Manual” (ISM) since 2017.

You can think of the ISM as a single document that helps Australian businesses and government know how to address cybersecurity challenges.  In reality, the ISM is a collection of different guideline documents that each focus on a specific area of IT.  Some of the existing guidelines address things like system hardening, database management, network management, and using cryptography, among others.  These guideline documents in aggregate can be thought of as “the ISM” and can be used to increase an organization’s cybersecurity maturity, which benefits both the organization itself and Australian society.

What’s in the Guidelines for Secure Development section of the Australian ISM?

In December of 2021, the ACSC released the latest version of the ISM which for the first time included a Guideline for Secure Development.  This document lays out a framework for building and maintaining secure software development processes.  It is a total of 21 controls and is more prescriptive than what we typically see from other frameworks like APRA.

You can find the Guidelines for Secure Development here:

The format for this blog post

I wanted to write this blog post to help Australian orgs know about this new compliance requirement from the ACSC.  The new Guidelines for Secure Development document is split into two sections which we’ll address separately below.
Those two sections are:  Application development and web development.  

We’ll break both of those two sections down into their individual sub-sections and the controls that exist at each one of those stages. At the end, I talk about how you can assess and implement the controls in the ISM.

From this point on I’ll refer to the Guidelines for Secure Development as “GSD” for brevity’s sake.

Okay, let’s dig in!

Section 1: Application Development

This section of the GSD is applicable to all forms of software development, including client/server, web, and mobile.  Special emphasis should therefore be placed on this section in any assessment you make using the GSD.


Within this top level section there are 6 sub-sections:

  • Development environments
  • Secure software design
  • Software bill of materials
  • Secure programming practices
  • Software testing
  • Vulnerability disclosure program

Development, Testing, and Production Environments

Segregating development, testing, pre-production, and production environments into discrete, separate workspaces is one of the core principles of secure software design.  This segmentation can stop accidental issues and malicious attacks from spreading from one environment to another.  Software engineers are limited to dev and testing environments so that bad code or third-party issues can’t be added to production directly.

There are 4 controls in this sub-section that deal with environment segmentation, the scope for development changes, data segregation, and user access.  If you can’t answer yes to all 4 of these, please stop what you are doing and go address this now!

Secure Software Design and Development

This sub-section deals with the identification of software development risk during the design and development stages.  This sub-section has two controls:  One for “secure design principles” and the second for threat modeling.

I feel like this section is under-baked and needs some love.  What are “secure-by-design practices”?  I would have loved this section to be more prescriptive.  Maybe in a future version we can add things like application baselines, secure code training, and application ownership labels.



Software Bill of Materials (SBOM)

This section only has one control and it’s all about SBOMs.  SBOM stands for “Software Bill of Materials”, and the reason it’s so important is that it delivers something we never had before:  a complete “recipe” of what is in an application.  An SBOM is a single source of truth for all software dependencies, frameworks, libraries, resources, and services that went into making a specific software solution.   Most definitions of SBOM agree on the above, but some go further and say that any known vulnerabilities and cloud-based services should also be included.  To me this makes sense:  an SBOM should be an end-to-end description of the application, but it should also list its deficiencies and liabilities.  If one of the components used to build an application has a known vulnerability, that should be codified in the SBOM.
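To make that concrete, here is a minimal sketch of a component plus a known vulnerability in CycloneDX, one of the two common SBOM formats (alongside SPDX).  The package and CVE are just illustrative:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {
      "type": "library",
      "name": "lodash",
      "version": "4.17.20",
      "purl": "pkg:npm/lodash@4.17.20"
    }
  ],
  "vulnerabilities": [
    {
      "id": "CVE-2021-23337",
      "affects": [ { "ref": "pkg:npm/lodash@4.17.20" } ]
    }
  ]
}
```

A real SBOM would list every dependency, transitive dependencies included, which is exactly why generating one by hand is impractical and tooling matters.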

SBOM is a huge topic right now as the US government mandated several important software-related security controls last year.  One of those was the requirement for all companies looking to sell to the US government to provide an SBOM.  This mandate extends to any company that is part of a software dependency or supply chain, even if it doesn’t itself wish to sell to the US government.

Unfortunately, SBOM hasn’t delivered on its promise yet as very few organisations are actually creating SBOMs when they build software.  If you want to know more about SBOM please check out our blog post on them here:

Application testing and maintenance

There are two controls in this section.  The first deals with testing software applications, both internally, as well as externally.   The second talks about software engineers needing to resolve issues found in their applications.  This is an important part of the document and makes no bones about the engineer’s responsibilities.


Even though there are only two controls here, the description specifically calls out static analysis (SAST), dynamic analysis (DAST), web vulnerability scanning, and software composition analysis (SCA) requirements.  It also calls out penetration testing and mentions testing “prior to their initial release and following any maintenance activities”.  To me, this sounds like automated tests during continuous integration and deployment (CI/CD).

So that should really be 6 controls minimum.  I expect this to be fleshed out in the next version of the GSD.


Vulnerability Disclosure Program

There are actually four controls in this sub-section.  The first three are somewhat redundant, switching between the terms “policy”, “program”, and “processes”, which might confuse people.  Luckily, the last control is straightforward and requires that orgs use a security.txt file to advertise their VDP information.

I think we can simplify this section in this way:

  • Are security researchers able to come to your website and find how to contact you if they’ve found a security issue?
  • Have you partnered with a platform to allow security researchers to bring security bugs they find to you?
  • Do you have a set of documents that describe your security policies?  And can your employees find it?
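For the security.txt control, a minimal file (served at /.well-known/security.txt, per RFC 9116) looks something like this, with placeholder values:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the two required fields; everything else is optional but helps researchers find the right door to knock on.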



Section 2: Web Application Development

This section of the GSD is applicable to applications available on the web that users interact with primarily via a web browser.  This section should be followed carefully if you are building web apps.


Within this top level section there are 7 sub-sections:
  • Open Web Application Security Project
  • Web Application Frameworks
  • Web Application Interactions
  • Web Application Input Handling
  • Web Application Output Encoding
  • Web Browser-Based Security Controls
  • Web Application Event Logging

Open Web Application Security Project

OWASP (the Open Web Application Security Project) is an organization that encourages application security through its community and projects like the Zed Attack Proxy (ZAP) and the purposefully vulnerable Juice Shop.

This section has only one control and it explicitly states that orgs should be following the Application Security Verification Standard (ASVS) when building web applications.



While I am personally a fan of the OWASP ASVS I am a little confused about why one security framework, the ISM, is referencing another, the OWASP ASVS?  Isn’t the point of the ISM to be a standalone security framework?  If so, why are we then nesting another framework within the ISM?

Web Application Frameworks

This section has one control and emphasizes the need to use existing “robust” web frameworks.  I think the main point here is to use off-the-shelf components to provide session management, input handling, and cryptographic operations.

Web frameworks like Angular, React, and Laravel are awesome and save development teams time.  Unfortunately, these frameworks sometimes come with built-in dependency issues; the react-scripts npm package is a great example, as it pulls in hundreds of transitive dependencies.  Many of those transitive dependencies are out of date and insecure, as is often the case with JavaScript packages on npm.   So, while we use components that save us time and offer security benefits, we have to make sure we aren’t causing ourselves future trouble by using the wrong frameworks.

Web Application Interactions

This section has one control and it’s pretty specific:  All web application content is offered exclusively using HTTPS.  That sounds pretty straightforward, right?

Unfortunately, enforcing encrypted HTTP traffic is more complicated than many people think and requires multiple controls and functions to be aligned.  Engineers need to make sure that HTTP redirects to HTTPS, that HSTS is enabled, and that SSL/TLS is terminated in a secure environment.

I wrote a blog post about enforcing HTTPS which you can read here:

Web application input handling

This section also has one control: Validation or sanitisation is performed on all input handled by web applications.  That sounds relatively straightforward but is fairly difficult to do and requires using multiple controls and functions.

Input validation requires equal parts developer training, testing of the source code, and testing of the running web application.  That’s three different sets of tooling needed to achieve this requirement.
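The core principle is allowlisting: accept only the characters you expect and reject everything else, rather than trying to enumerate “bad” input.  A minimal sketch in shell (validate_username is an illustrative helper, not part of any framework):

```shell
# Allowlist validation: accept only the characters you expect and reject
# everything else (the opposite of trying to blocklist "bad" input).
validate_username() {
    case "$1" in
        "")               return 1 ;;  # reject empty input
        *[!A-Za-z0-9_-]*) return 1 ;;  # reject any char outside the allowlist
        *)                return 0 ;;
    esac
}

validate_username "alice_01"      && echo "alice_01: ok"
validate_username "alice';drop--" || echo "alice';drop--: rejected"
```

The same allowlist idea carries over directly to the validation libraries in whatever web framework you use.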


Web Application Output Encoding

This section has one control, which is:  Output encoding is performed on all output produced by web applications.  This is a necessary requirement, as un-encoded output can cause serious issues when special characters are interpreted as markup or code rather than as data.
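For HTML contexts, output encoding boils down to replacing five significant characters with entities.  A minimal sketch (in practice, use your framework’s built-in encoder rather than rolling your own):

```shell
# Minimal HTML output encoding. The ampersand must be replaced FIRST,
# otherwise the '&' in the other entities would be double-encoded.
html_escape() {
    sed -e 's/&/\&amp;/g'  \
        -e 's/</\&lt;/g'   \
        -e 's/>/\&gt;/g'   \
        -e 's/"/\&quot;/g' \
        -e "s/'/\&#39;/g"
}

printf '%s' '<script>alert(1)</script>' | html_escape
# -> &lt;script&gt;alert(1)&lt;/script&gt;
```

Encoded this way, user-supplied input renders as text in the browser instead of executing as markup.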



Web browser-based security controls

While this section has only one control, it speaks to the need to address browser-based attacks like cross-site scripting, CSRF, and clickjacking.   Much of a modern web application’s JavaScript runs entirely in the user’s browser, where traditional server-side security controls can’t help.  This is why a new generation of controls was born, most of which are delivered as HTTP response headers.  Content Security Policy (CSP) is the best and most powerful of these, but unfortunately most websites do not use it.
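As a sketch, such response headers might look like the following; the CSP values are illustrative and have to be tuned to the scripts and assets your app actually loads:

```http
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; frame-ancestors 'none'
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Referrer-Policy: strict-origin-when-cross-origin
```

Here `frame-ancestors 'none'` and `X-Frame-Options: DENY` both address clickjacking; the CSP directive is the modern control, and the older header is kept for legacy browsers.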


Web application event logging

The final sub-section has two controls associated with it.  The first says that all access attempts and errors need to be logged.  The second stipulates that all logs are stored centrally, in a separate location.

Unfortunately, we see less web server and application logging than we used to.  In the era of the public cloud, many engineering teams misunderstand services like AWS’s CloudWatch and CloudTrail, which log events at the cloud layer, NOT at the application layer.  To be very clear:  enabling CloudWatch and CloudTrail is NOT an effective application logging solution.



How do we assess and implement these controls?

Okay, so now that we’ve laid out all 21 controls in the new Guidelines for Secure Development document, where do we go from here?   Well, part of the challenge of this new ISM document is that it spans the whole software development lifecycle (SDLC).  It talks about things the developer needs to do (local software testing) and it talks about segregating deployment environments.   It talks about things that happen at the beginning of the lifecycle, and things that happen at the end.  It talks about how to build your web applications and it talks about how your customers should be protected while using those applications in a browser.

Unified ISM compliance coverage for the SDLC?

All of these disparate controls focusing on different parts of the SDLC mean that there’s a broad surface area to assess and quantify.  This is one of the reasons that, when we were building SecureStack, we intentionally integrated with the multiple platforms our customers use.  Unified coverage for the SDLC means integrating with source code management providers like GitHub, Bitbucket, and GitLab.  It also means integrating with continuous integration, deployment, and build platforms.  And it definitely means integrating with the public cloud providers like AWS, Azure, and GCP.  But finally, it also means you need continuous awareness of the web application at the heart of all this.


How can SecureStack help you assess your ISM compliance?

The SecureStack platform can help you assess and quantify your ISM GSD compliance with our SaaS platform.  We help you integrate your source code platform, CI/CD processes, build environments and your public cloud providers, and we do it all in less than 5 minutes.

That’s right!  You can assess your entire software development lifecycle in less than 5 minutes with SecureStack.  Check out the video to see how!




Automate responses to security questionnaires!

Are you sick of filling out security questionnaires to meet some compliance or audit objectives?  Most of the software engineers we talk to that have to fill these pesky forms out hate the process.  Many of them say to us that they really want something that could automate responses to security questionnaires.

I had to fill out my first security questionnaire back in 2002. The questionnaire came in the form of a large Excel spreadsheet and I remember looking at it and thinking at the time, “Man, that’s a LOT of questions!!” I wondered how they would know if I was telling the truth or not? I could just put anything down and how would they know if I was being honest? Would they come onsite and audit us?

Fast forward to 2022. We still use spreadsheets for our security questionnaires. They pretty much ask the same questions they did 20 years ago, and they are still being used by the same people. It is a hallmark of #infosec that you get told to fill out the questionnaire and it’s just something we all do.

Here’s the thing:  Security questionnaires aren’t that good at determining risk

  • Security questionnaires are lengthy and take a long time to fill out. The industry doesn’t trust them very much but continues to use them for lack of anything better. Only 14% of organizations surveyed said they are highly confident that security questionnaires represent true security profiles.
  • Finding the right people to talk to and getting responses from them is a real challenge.  This is especially true if you are trying to ascertain compliance around software development processes and you need to talk to software engineers.  Engineers are busy delivering software and often actively hostile to audit or compliance processes.  Software engineers will sometimes simply not respond to queries, which leaves the party responsible for collecting the data in the lurch.
  • Questionnaires are usually not very accurate. The person filling it out usually doesn’t have the answers and often makes up answers or guesses, neither of which help. When asked, most risk professionals admit they have very little confidence in security questionnaires.
  • Security questionnaires are a single point-in-time snapshot. Even if somehow, the original data captured in the questionnaire was accurate, they don’t get updated which means that you can’t really trust them for very long.

Developers are usually not aligned with security or compliance objectives

As mentioned above, security questionnaires aren’t great at determining risk in normal circumstances, but it’s even harder when you are trying to audit your software development processes.  Gathering data from software engineers can be really tough.  They are often resistant, and sometimes downright hostile to the idea that their development processes need to meet some compliance requirements.

Software engineers’ metric of success is not usually security or compliance-related.  Instead, it’s how fast they can deliver new features.  So, this explains in part why developers are often so against security teams asking them questions about their security protocols.  


How do you audit the CI/CD process?

Auditing the CI/CD processes to identify whether there are security or compliance gaps can be difficult.  Often, even gaining the right access can be a problem.  Engineering teams are often the ones with administrative access to the source code management (SCM) and CI/CD platforms, so getting them to provide access can be challenging.

Wouldn’t it be better if we could somehow automate the responses to these security questionnaires?

SecureStack automates responses to security questionnaires which saves your team time and money!

