In this episode of AppSec Builders, Jb is joined by security expert, John Steven, to discuss his BSIMM study findings, the fundamental shifts in AppSec, software-defined security governance, and much more.
Through his firm Aedify, John advises innovative security product firms as well as maturing security initiatives. John leads one such firm, ZeroNorth, as CTO. For two decades, John led technical direction at Cigital, where he rose to the position of co-CTO. He founded spin-off Codiscope as CTO in 2015. When both Cigital and Codiscope were acquired by Synopsys in 2016, John transitioned to the role of Senior Director of Security Technology and Applied Research. His expertise runs the gamut of software security—from managing security initiatives, to cloud security, to threat modeling and security architecture, to static analysis, as well as risk-based security orchestration and testing. John is keenly interested in software-defined security governance at the cadence of modern development. As a trusted adviser to security executives, he uses his unparalleled experience to build, measure, and mature security programs. He co-authors the BSIMM study and serves as co-editor of the Building Security In department of IEEE Security & Privacy magazine. John is regularly invited to speak and keynote.
[00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec hosted by JB Aviat.
Jb Aviat: [00:00:14] So welcome to this episode of AppSec Builders. Today I'm proud to interview John Steven. John is the founding principal at Aedify, where he advises product security firms. Before that, John, you led ZeroNorth as CTO, and before that you were co-CTO at Cigital. Welcome, John.
John Steven: [00:00:36] Hello, how are you? Thanks for having me.
Jb Aviat: [00:00:38] I'm great, thanks for joining. So John, another thing that you've done is that you co-authored BSIMM, so could you let us know what it is and how it can be a useful tool to AppSec builders?
John Steven: [00:00:50] Yeah, it's worth clarifying because it's frequently misunderstood. The BSIMM is the Building Security In Maturity Model, an observational study. Over a period of 11 years we've studied over two hundred firms and asked the question: what do you actually do to build your security initiative and to secure your software? It doesn't prescribe what to do, but you can use it to look at what firms within your vertical, or that look similar to you in terms of maturity, are doing with their time and money, and decide whether or not you want to replicate those behaviours or cut your own.
Jb Aviat: [00:01:29] So you are interviewing CISOs, application security practitioners, developers: every actor of the security game.
John Steven: [00:01:38] Yes. Historically, the list has looked like what you described. What was interesting to us about the last two years of the study is that when we began talking with the CISO, they'd say, oh, you need to talk to the VP of Cloud on this, or actually you need to talk to the SREs and to delivery, or to the VP of engineering. The people we had to talk to fundamentally changed over the last two years. And that was a key finding that we wrote about this year: that the people doing the work of security were shifting from the security group to the engineering, digital transformation and cloud groups.
John Steven: [00:02:20] And that's a big deal, right, because there's been these phrases that we've held dear for 10 years or more. You know, building security in is something that we've said for two decades. A colleague and I argue as to who said shift left first, and we've narrowed it down to around November of 2001. It was a long time ago. The other thing we say is that security is everybody's responsibility. Every developer, every engineer, every operator needs to think about it. And we've been harping on those things forever. What we see now that engineers, now that SREs, now that operators are taking a really first-class role in security, is that people are taking that 'security is everybody's responsibility' to heart. And in fact, who makes up a security initiative has now changed. And that's a really big deal.
Jb Aviat: [00:03:08] Yes, it is. And so a trend that we have seen over the past two years is that QA testing moved from dedicated teams into the hands of developers: they are now writing their own tests, monitoring their own deploys, rolling back if necessary. And what you describe about security is following the same trend, right? The teams are now starting to own security by themselves.
John Steven: [00:03:35] Yeah, and we see what we call engineering-led security initiatives, where engineers are not only acting as security champions and participants in a program, but the owners of practice areas and the drivers of the program. So it's not uncommon in some organisations, particularly ISPs, that are more mature, for them to have a Product Security Lead or a Chief Architect who has full purview and responsibility for security and for those people to do the things that you'd expect the security group to do prior. Pick defect discovery tools, tune those tools and drive to a secure coding standard, you know, generate and administer a training program associated with those standards and those tools, you know, and build security blueprints and so on and so forth.
Jb Aviat: [00:04:22] And so you mentioned shift left. What I understand is that you are not advocating shift left anymore, due to this change in the industry: now security is meant to be done by the people who are actually conceiving and building things.
John Steven: [00:04:41] With the benefit of time, anything will look wrong, I suppose. So, you know, when we talked about shift left, we were thinking about all of those organisations that used spiral or iterative development, or even worse, waterfall. And, you know, we would talk about, look, we can pentest your software, you can apply testing to your software, but wouldn't it be better if you moved earlier in the lifecycle and found those bugs as you were developing them, so that they were easier to remediate? That was the basis of shift left, and everybody cited the Rational study, and it's cheaper to fix things earlier, and yada, yada. You can see why that's a valuable precept. But think about how orchestration platforms and software delivery have changed over the last five to seven years. We're using Kubernetes; we've changed the way virtualization happens. We're layering on top of Kubernetes things like Istio. More and more of the way we deliver software is becoming code: the whole infrastructure-as-code movement, and the whole delivery and pipeline orchestration movements. What that means is that more and more of the stovepipes between build, test, deliver and operate are being broken down, so that a DevOps engineer can shepherd a greater percentage of the software lifecycle in self-service mode. I don't have to hand something over a wall to you; I can walk it further down the lifecycle pipeline myself. And even the bridge between dev and prod is becoming a softer wall than it has historically been.
John Steven: [00:06:26] Cloud, open source: all of these are self-service technology stacks that allow you, again, further control over a larger percentage of the lifecycle. And so what that means is that code is creeping right in the lifecycle. When you use Kubernetes or Istio configuration files, when you use infrastructure-as-code or cloud service provider configuration, what you're doing is driving that code right in the lifecycle and saying more and more of the way I build, package, deliver and operate is going to be software defined. So more accurate than shift left is maybe shift to where the code is. And what we're seeing is that the code is shifting right. You know, my keynote at the BSIMM conference two years ago was shift right to do security everywhere. And it was extremely aggravating to the attendees, because it came after two decades of moving to the left and trying to get closer to design and requirements. I mean, Laurie Williams out of North Carolina has published a study that says that as much as 10 or 11 percent of your code may be infrastructure-as-code, and that 30 percent of the churn, month to month, in your code bases is that code. So there's data-based evidence that says that code is moving right.
John Steven: [00:07:42] And so we must move right with it if we want to get earlier. And so this is really never rest: your security initiative needs to follow the trends in technology and respond with the same principles. Get earlier, re-evaluate how those principles apply to the new tech stack. Does that make sense?
Jb Aviat: [00:08:04] That's fascinating. So shift left, shift right, or shift to where the code is. But there is not only code, right? At some point we need to go beyond that, because as careful as you are when you design a system, when you write your code, there may still be vulnerabilities left, or flaws of any kind. So monitoring the code is not enough.
John Steven: [00:08:28] That's right. So when you talk about shift left or shift everywhere, you're talking about proactive, building-security-in telemetry. The capability you're trying to build for your organization is to deliver better software with fewer security flaws. But to your point, it turns out, and I know this is going to shock everybody, you're not going to deliver perfect software. Not the first time. Not the tenth time. I think we can all conclude that software will have flaws in it. And so some organizations are saying: rather than infinitely iterating my security practices, taking on cost and taking on complexity, maybe I listen to those people in my organization who are focusing on speed of delivery and agility, and apply some of that same concept to my security initiative. What if, instead of slowing people down to build better software, I participate in their desire to deliver software faster and build resiliency into my security capability? And that speaks to what you're saying. You not only have to proactively find defects and fix them, you have to observe potentially malicious or vulnerable behavior and do something that will make you resilient against that exposure.
John Steven: [00:09:48] So people are saying, if I can combine a building-security-in capability with a resiliency capability, I'm going to have a much more robust security program. And instead of my costs becoming infinite on the building-security-in side, I'll have a balanced approach: I will do the best job I can to deliver, and I will have a very confident ability to respond when I get risk telemetry based on behavior and operations, and I'll pick where I'm going to solve a problem. Because, you know, when we cite the Rational model, we're sort of oversimplifying. There are definitely problems that are cheaper to find in their production shadow than in requirements. And so having resilience, where you can redeploy in 30 minutes based on what you observe, is terrific. And what this has driven for security initiatives is a technology challenge: how do I combine my building-security-in telemetry that comes from legacy tools, like static analysis, dynamic analysis and composition analysis tools, with my observational tools that are post-deployment, operational?
John Steven: [00:10:58] And most importantly, how do I inventory the bits and bytes that are running and map them to the bits and bytes that created them, both in terms of the artifacts that go into creating them and the pipelines that create them? How do I tie together the people responsible for operating these pieces of the infrastructure and the people developing and delivering those same pieces of infrastructure? And then how do I know what my code looks like in my service mesh, in my network, and what identity looks like? And so there's been a whole set of technologies, and this is a space you guys play in during the day, where firms are trying to help organizations understand how to tie those disparate pieces of telemetry together so they can see the full picture and then choose how they're going to respond to risk. Are they going to look at operational data? Are they going to look at data from the software development lifecycle? Or are they going to combine pieces of telemetry and take an action based on that?
Jb Aviat: [00:12:00] Yes. So I'm aware of tools that do that to the left, on the code part; I'm sure you have GitHub or GitLab in mind when we mention gathering data at the code level or at the CI level. Those companies, GitHub and GitLab, are every day more and more like security vendors, because they offer more and more amazing security features and are very well placed to do so. But on the other hand, for the monitoring, the runtime, the production path, do you see tools that manage to aggregate that information from the left path and from the right path?
John Steven: [00:12:35] So GitHub and GitLab are definitely the 800-pound gorillas in this space. Both of them, in my opinion, are doing a great job defining the bones of a security framework for these engineering-led initiatives. They're saying: you need defect discovery capabilities, so we'll help you plug those things into your pipeline, we'll route the vulnerability or defect data to the right developers, we'll track those change requests and track DORA metrics like time to fix. They're doing a great job of that platform and the scaffolding. Those platforms coalesce around code, right? They're SCM platforms. So they're always going to do a better job on the builder side. Some of them are introducing features that speak to proto-operational stuff, like security research in and out, like Bugcrowd: security advisories go out, crowdsourced defect or vulnerability data comes in. They're not credible yet, in my opinion, on the operations and monitoring telemetry side. You know, obviously there's a bunch of vendors that handle that. There are vendors that handle it from an aggregate telemetry perspective, the ZeroNorths of the world; there are people that handle it from the testing perspective, like IAST vendors.
John Steven: [00:13:46] There are certainly people like Sqreen that handle it from the RASP perspective, on the instrumentation and protection side. You know, what I have said about that, to investors and to buyers, is that a lot of these technologies that are doing the aggregation, that act as sort of competitive peers to GitHub and GitLab on the aggregation side, are pretty early days. Right. And the challenge with that is: how many security tools are there? There's massive fragmentation on a legacy stack. Gartner thinks you need 10 to 30 tools on the cloud stack to get a clear picture of your posture, and that's essentially just one cloud. There isn't any sort of de facto standard reporting format for these vendors to use in aggregation. Right now, people are spending a lot of time either using Series A or Series B maturity technologies and making progress in that space, or building their own. And in fact, in an adjunct study that I did, which was not published, a compendium to the BSIMM called the DevSecOps study, we found that a third of BSIMM firms have built their own aggregator and have tried to plug it into their particular Frankenstein tech stack of GitHub or whatever. And shockingly, those firms that have done that have spent on average eight to ten million dollars building defect and vulnerability management slash aggregation technologies. So this is a really interesting space; it's sort of at the nexus and fulcrum of your capability to provide resilience. But it's a space right now where you have to pick a vendor. You could build your own, though I don't think that's the right move; you have to pick a vendor and kind of co-evolve with them.
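The aggregation problem John describes, no standard reporting format, every tool with its own schema, can be sketched in a few lines. This is a minimal illustration, not any vendor's design; the field names (`checker`, `priority`, `attack_type`, `risk`) are invented stand-ins for two incompatible tool formats:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One normalized finding, independent of the tool that produced it."""
    tool: str
    rule: str
    severity: str  # "high" | "medium" | "low"
    location: str

# Each adapter translates one tool's private schema into the common record.
def from_sast(raw: dict) -> Finding:
    sev = {1: "low", 2: "medium", 3: "high"}[raw["priority"]]
    return Finding("sast", raw["checker"], sev, f"{raw['file']}:{raw['line']}")

def from_rasp(raw: dict) -> Finding:
    return Finding("rasp", raw["attack_type"], raw["risk"].lower(), raw["endpoint"])

def aggregate(sast_raw: list, rasp_raw: list) -> list:
    """Normalize both feeds, de-duplicate, and rank by severity."""
    findings = [from_sast(r) for r in sast_raw] + [from_rasp(r) for r in rasp_raw]
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(set(findings), key=lambda f: (order[f.severity], f.location))
```

The real eight-to-ten-million-dollar cost lives in the adapters: one per tool, each chasing a schema that keeps changing.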
Jb Aviat: [00:15:28] Sometimes you have new vendors. We did Y Combinator in 2018, and we interviewed around 150 different companies during our batch. We had an idea for a feature, and we found that so many companies had built it already that we had a doubt: is everyone building that in-house? And that's how we pushed our playbook capability. It was so widespread, and it feels so far away from the core business of those companies, that it was really hard for us to believe that so many companies had built it already. So I perfectly understand what you mean. Every time we, as a security vendor, are exploring a new feature, we find customers that have built it already. No matter the complexity, you have specific needs that are not addressed today in the industry, and a lot of companies are building their own. So, a third: that goes with that. Intuitively, I would say the same thing.
John Steven: [00:16:26] At Synopsys, one of the things that I did was sort of manage the acquisitions of the portfolio a bit. And one of the spaces I looked at pretty aggressively was RASP. And of course, at the time, three years ago, RASP and IAST were more intertwined than they are today. And still vendors intertwine those notions. One of the things I'll say about this DevOps dichotomy is that you do have a specific and hard choice to make, and you see people struggle when they try to hedge it; it does fall on one side or the other. When you build telemetry technology, and you guys probably know this, you have a choice as to whether to make it fast and robust, or whether to make it thorough and give you good telemetry on provenance and other things that help you debug. In other words, you can build a developer tool that provides visibility, or you can provide an operator tool that hardens and provides audit capability. It's extremely hard to build something that does all of those well. You have to pick: fast and hard, or slow and visible. And trends like IAST or similar that try to drive, you know, sort of visibility into production, I think are challenged. One, they don't end up fulfilling their promises in terms of not changing the way the code executes and really providing a high-fidelity picture. But also, as this resilience movement gains ground with security initiatives, people are saying: OK, I've admitted that I'm going to deliver software with security defects in it; what I need is to harden and to have visibility to respond. And so for years, when I look at organizations that try to fall on one side or the other, or hedge their bets, I have said it's important for people to provide themselves a hardening and resilience play. Don't try to move development into production, you know, and instrument your production environment for rich developer feedback.
Test, then deploy and harden. And so I hope that the RASP market takes off more than it has historically, because I think it's been sort of weighed down by the challenges of trying to serve IAST and RASP together, if that makes sense.
Jb Aviat: [00:18:46] Yes, makes a lot of sense. We definitely see interest going up on RASP. I think the technologies are becoming a bit more mature.
John Steven: [00:18:55] Yeah,
Jb Aviat: [00:18:56] More and more people are using it in production every day. When I think three years ago people were saying, "no, no RASP, doesn't work in production, it's not possible," I think we have seen a real switch in the mindset from this point of view.
John Steven: [00:19:09] So I agree. And let's talk about that a bit more. People thought to themselves, look, I can't deliver known-buggy software; I have compliance problems with that. When we studied what defects get discovered and what defects get fixed over time, in the population of firms whose security initiatives we've studied, what we found was that it varies depending on the maturity of the organization. But, you know, potentially a majority of the findings you find today, you found not because they didn't exist yesterday, but because your ability to detect them improved. So two things could happen, right? A developer could sit down and write a defect into the code yesterday, or your security tool could get better, or you could add a new tool and find something that was there for a while that you didn't know how to find before. And when I say that a potential majority of the findings you find today will be because of an increase in detective power, not because the developer introduced a new error, that means those defects were there, you know, from when you put Spring Boot version 1.0 in your application eons ago. 'What's that? Where do I have that?' It didn't store passwords correctly, and you may have found that a year ago, but it had been there for three years. So you might be deploying every 30 minutes, right, continuous delivery, but again, that defect is long-standing. And I think people are coming to grips with that reality, and they're realizing that it drives them to exercise the resilience muscle more than the proactive muscle. So they're saying: look, I'm going to have long-standing bugs, I'm going to have bugs the developers introduce on a daily basis, I'm going to come to grips with the fact that my production environment is going to be flawed, and I'm going to work on being as responsive to new telemetry on those flaws as I can be.
And if you have prophylactic technology in place, like a RASP, you can respond very quickly by introducing some patch or remediation to that scenario that doesn't force you to re-execute your pipeline. But increasingly, if you're a mature DevOps organization, you can execute that pipeline very quickly and deliver by the end of half a day. So that's a very different mechanism, from a security workflow perspective, than we were talking about three years ago. What we were doing five years ago, this wasn't even imaginable.
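John's point, that many findings appear because detection improved rather than because the code changed, can be shown with a toy dependency scanner where the only thing that varies between two runs is the advisory database. The package names and the advisory entry are hypothetical, chosen to echo the Spring Boot anecdote above:

```python
def scan(deps: dict, advisories: dict) -> list:
    """Return (package, version, advisory) for every dependency with a known issue."""
    return [(pkg, ver, advisories[(pkg, ver)])
            for pkg, ver in deps.items()
            if (pkg, ver) in advisories]

# The codebase has not changed between the two scans.
deps = {"spring-boot": "1.0", "some-other-lib": "2.3"}

# Yesterday's advisory database knew nothing about spring-boot 1.0...
advisories_yesterday = {}

# ...today an entry exists, so a three-year-old defect is "found" today.
advisories_today = {("spring-boot", "1.0"): "stores passwords incorrectly"}
```

The defect surfaces in today's scan purely through increased detective power, which is exactly why a resilience capability matters even with continuous delivery.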
Jb Aviat: [00:21:40] Yes, I completely agree. When you mention new security defects being discovered, you can have several reasons for those defects: one of them is static analysis, one of them is, I don't know, just security research; you have newly vulnerable versions; it could be a bug bounty, it could be a lot of different things. So I agree: when you have a mature, modern organization that can deploy very easily, it's easy to get rid of one of those defects. But don't you think that a lot of those bug findings will actually come from production, and from observability or security tools?
John Steven: [00:22:19] I don't have any data that tells me this many will come from production and this many will come from building security in. But what I can say definitively is that regardless of where they're found, more of them will be live in production than won't. Right? I mean, you only build greenfield code once; you manage brownfield code forever. And as that code base grows, and it always grows, it never shrinks, those defects are going to be in production. So I would say there's probably an asymptote where it's more likely over time, as the code base gets larger, that you'll find those things post-production than not.
Jb Aviat: [00:22:56] Yes, definitely. I agree.
John Steven: [00:22:59] And I would maintain, you know, again, things are shifting right as to where you're going to find things. Static analysis became very hard on source code, because source code is very irregular. But infrastructure-as-code and cloud service provider configuration is highly regular: very parseable YAML, Terraform files, Kubernetes configs, Helm charts. Those things end up defining the way that services connect and the way that your dev frameworks are configured. So whole swaths of vulnerability types: are you encrypting traffic east to west? How are you storing passwords? Do you apply input validation? Do you apply authentication or authorization prior to dispatch? Do you output encode? Whole swaths of those vulnerabilities are best visible in reading those config files. You can scan those things on check-in and complain that the developer didn't configure it correctly, but you don't know what that configuration means. Is that the configuration for dev, unit test, staging, system test? The only source of truth is what comes out of the artifact and is pushed to production or wherever. So you're going to want to look at the production config, because it may have no relationship to what the development configurations are. So even if you do check for that stuff in dev, and that's your preference, you have to repeat those checks in the production environment. And then you see that in the vendor space, right? People are going to drift detection and guardrails, because they're saying: look, we're all about moving left, and we're all about giving the developer better telemetry earlier, but at the end of the day, we have to make sure the production environment, one, is deployed in the hardened and compliant form, and two, it stays that way. Because what we fear is an operator logging in and making a change based on a need at the moment and not undoing it, or malice, or whatever.
So you know, I think a BSIMM activity that we added in BSIMM11 was frankly converting a lot of these proactive checks to guardrails and drift detection. In other words, moving development controls to production. That was the longest 'yes' I could come up with. Sorry.
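The guardrails-and-drift-detection idea John describes boils down to diffing the deployed configuration against an approved baseline, and running that same diff both at check-in and against production. A minimal sketch; the setting names here are invented for illustration:

```python
def drift(baseline: dict, deployed: dict) -> dict:
    """Return every setting whose deployed value no longer matches the approved baseline."""
    return {
        key: {"expected": baseline[key], "deployed": deployed.get(key)}
        for key in baseline
        if deployed.get(key) != baseline[key]
    }

# Approved, pre-audited configuration, checked in alongside the infrastructure code.
baseline = {"tls_east_west": True, "public_ingress": False, "password_hash": "bcrypt"}

# What is actually running, after an operator flipped a flag "just for a moment".
running = {"tls_east_west": True, "public_ingress": True, "password_hash": "bcrypt"}
```

Running `drift(baseline, running)` flags `public_ingress` as the one setting that has wandered off the guardrail, which is the production-side repeat of the check-in scan.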
Jb Aviat: [00:25:12] No, no, it's perfect. All right. Could we talk a bit about blueprints, John? I know you have a lot of opinions on how the SDLC is moving from gates to more real-time and asynchronous controls. And I think your vision is to generalize what we call the paved roads, or the guardrails, into what you call blueprints. Would you mind sharing about that?
John Steven: [00:25:38] Well, yeah. I mean, I have a two-year-old and a six-year-old, so there's a lot of 'no' in my world, just like there is in security. And, you know, how many times do you want to say no to somebody? How many times do you want to use static or dynamic analysis to point out that someone's made a mistake? Cross-site scripting is the perfect example. I mean, almost any line of code that you write out to the Web could be vulnerable to, you know, some sort of injection attack. Any query you're going to make, any ORM you use: there's just a large surface area in which the developer can get a defect. If you change your perspective and say, well, I'm going to use a contextually aware output encoding library, and if the developer commits to incorporating that, then I don't have to check for this whole class of vulnerabilities; I only have to check for them disabling the library, going off the guardrail, going unsafe. Parameterized queries, and so on and so forth. Secure-by-default framework configurations that handle your password storage and encryption settings correctly. Being able to crush whole classes of defect or vulnerability by having pre-audited something that, you know, works, and then releasing it, allows your security initiative to fork and say: there's a fast lane.
John Steven: [00:26:56] And if you want to run holding fireworks on roller skates with scissors, that's cool, because we've got pads and goggles and we've got all the right controls on you, and you can go nuts over there. If you don't have those protections in place, then maybe we're going to have to hold your hand and walk. That's a way to get around the cultural friction of, you know, development wants to deliver faster and continuously, and security wants to slow them down. You can go as fast as development wants to go if you've already done, and cached the result of, the security audits. And so to me, blueprints and reference architectures are the code and design equivalents of that, respectively. And in an environment where you're pulling containers from registries, where you're checking in infrastructure-as-code and sharing it in copy-paste format, the blueprints and the reference architectures are themselves becoming code. Blueprints and reference architectures are something I've been talking about for two decades. It stunk when you were talking about it in the 90s, because you were talking about a Word doc, and all your developers were on a Solaris box or a Linux box and they didn't have the Word doc. They weren't going to go over to that other laptop that hadn't been powered up in years. Now, with that stuff being something you can pre-populate in a repository, or put in a registry and have it be the default image being pulled down, it's actually tractable. And that's what excites me so much about the shift in technology and the shift in culture to engineering-led security initiatives, because now we have the tools to do all the things we've been talking about, as code.
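The fast lane John describes rests on pre-audited, secure-by-default building blocks. A small sketch using only Python's standard library; the `find_users` and `render_row` helpers are hypothetical, not from any real framework. Parameterized queries take SQL injection off the table by construction, and contextual output encoding does the same for cross-site scripting:

```python
import html
import sqlite3

def find_users(conn: sqlite3.Connection, name: str) -> list:
    # Parameterized query: the driver binds `name` as data, never as SQL,
    # so this whole class of injection is prevented by construction.
    return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

def render_row(name: str) -> str:
    # Contextually aware output encoding before the value reaches an HTML page.
    return f"<li>{html.escape(name)}</li>"

# A throwaway in-memory database to exercise the helpers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
```

With helpers like these as the paved road, the audit question shrinks from "is every query and every write-out safe?" to "did anyone bypass the helpers?"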
John Steven: [00:28:36] And so I have probably six or seven examples of where CISO with whom I worked building security initiatives, went to GitHub or somewhere similar, and there are now individual contributors. Was it because they were terrible security executives and they are now demoted? No. It's because they felt the best way to get leverage and to have a bigger impact was to go build this infrastructure, secure infrastructure or security as code or governance as code into an organization that was highly automated.
Jb Aviat: [00:29:08] I agree. Join security vendors, please.
John Steven: [00:29:10] Yeah, and so this blueprints thing, this reference architecture thing: if you're listening and you go, yeah, I tried that and it didn't work, no one used them, kick that idea back up off the shelf, because it's an idea whose time has come. More and more of our pipelines, our governance, our infrastructure and orchestration is code. And so you can implement those blueprints as code.
Jb Aviat: [00:29:31] Interesting. So history tends to repeat. It took like 20 years for the web to go from PHP to Ruby on Rails, right? Don't you think something similar might happen with, I don't know, Terraform? Today the way we write Terraform is still immature if we compare it to the level of sophistication we have in web frameworks, from a configuration or security-model standpoint. Aren't we super far away from that? And do you think we already have the ability, from a technical standpoint, to build realistic blueprints in all of those areas that are starting to be expressed as code?
John Steven: [00:30:14] It's still a ton of work, and yeah, it's messy. I mean, watching the Terraform sausage get made is... messy, especially if you go off script. These technologies work really well in one vertical. Right? If you're using EC2 instances and well-published Docker images and, you know, services that are well trodden, and Terraform providers that are super well maintained, everything is hunky-dory. But when I tried to convert my own setup, I have 22U, 11 in the house and 11 in my garage as a backup, and tried to terraform that, or if you're a bigger firm trying to terraform your data center providers, which aren't as well maintained, they don't work as well. There are a lot of rough edges to these technologies. There's a lot of automation and convention and policy that isn't there yet.
John Steven: [00:31:03] It's real work. So yeah, you still have to do the job, and the job is hard. The difference is, once you do the job now, you can check it in and scale it across the organization. Compare that with before, when you had to pentest and help somebody with remediation of an application: you had to replicate that job on all three hundred to three thousand apps. So the difference is, when you do the job now, you can replicate it across applications. The initial work still sucks; I mean, it's still a Vietnam to get that file to work. But once you get it to work, you can reuse it. And that's the value. And that's great; we should like that. Because when it becomes too easy, then our job will simply be like business analysts': we'll give the requirements, the operator will just bang out that config, and there'll be nothing fun for us to do.
Jb Aviat: [00:31:48] And we will live in a secure world and our mission will be completed.
John Steven: [00:31:53] Yeah, I won't see it. Too old. No, I mean, the other thing I will say is that I started in security in 1998. Back then it was possible to be an expert in dynamic analysis, static analysis, and development. It's not conceivable to be an expert in every practice area anymore. The pace of change in each practice area has increased, and in development, so has the proliferation of different technologies. I mean, you could list 30 orchestration technologies, you know, the different clouds, the different API stacks. I mean, COVID slowed Amazon down, and they were still delivering a new API like every three and a half weeks, you know. So there is a tremendous amount of work to do to cover this space. And that fragmentation increases the messiness that you referred to when trying to implement your solution in any of them. But I'm concerned about the pace of change, because the pace of change has increased exponentially. And also, all of the things we're using now are self-service by developers and operators, which means that security groups aren't necessarily aware of what their engineering teams are delivering and using, meaning they don't know what they have to go learn. So as a security professional in today's world, your job is visibility into what those tech stack adoptions are. And you have to figure out a way to strap a rocket to yourself, to come up to speed with those things and provide those blueprints and value fast enough. And that's, unsurprisingly, why engineers are taking leadership roles. The best security executives are not doing the impossible task of keeping up with that learning. They're enabling, deputising, and fostering the engineers to self-service the security, acting as mentors on their security journey as they're developing or adopting the new development technologies. And if you read sort of the front matter of the BSIMM 11, that's really what we talked about. Your job is to get off the damn field. You're just going to get run over.
You know, coach the players, the engineers and operators, to get the job done with their much greater strength and technical skill.
Jb Aviat: [00:34:01] Have you seen organizations that you felt were especially successful at that?
John Steven: [00:34:06] I think there are about 20 organizations that we've observed that reflect real delivery on that promise. We don't tend to rank-order, label, and name them, but their names would not surprise you.
Jb Aviat: [00:34:18] Yeah, I guess. And so maybe some things they have in common: are they all pretty young organizations rather than legacy ones?
John Steven: [00:34:28] No, actually, they're not vertically aligned. There are ISVs, there's health care, there's finance. They're not new or old. I mean, you know, you've got Netflix, which is very new, and Amazon, which is relatively old, or Intel or EMC, which are very old, and they have at times similar commitments to continuous delivery. There are minor differences in the way that, say, somebody who delivers hardware differs from someone like a Twitter, who delivers one software platform. But that's more about how they deliver their business value than other things. I would say that founding members of the BSIMM like EMC and Microsoft, that have taken a very iterative approach to how they think about product and software delivery and have since day one had security mandates that make security everybody's responsibility, are great organizations to look at in terms of what's possible. They're not great organizations to look at in terms of replication. Microsoft has like one QA person for every three developers. They had a security mandate from the CEO and a month-long stop to deliver security training. No one's going to spend as much money as organizations like that do. But on the other hand, there are messages you can take away from these organizations: having a product security person whose full-time job is security in each line of business or each team, having them drive the selection of tools and the tuning of those tools and the selection of runtime protection, so that they get something that works for them as opposed to a grenade thrown over the wall at them. You know, there are sort of activities that you can see coalesce around those engineering-led organizations, where security becomes critical path to delivery, becomes delivery's responsibility, and is taken seriously.
And security acts as a subject matter expert, mentor, or a secretary to record what they're doing and report it to the board and to compliance and audit. Those are, to me, good practices to follow. But you have to commit to the cost of them.
Jb Aviat: [00:36:26] Of course, thanks a lot John, that was a really insightful, interesting conversation. What I appreciate, above all, is that when talking with security experts about the future of security, it can often be very pessimistic. And when I'm listening to you, I really appreciate the fact that you are stressing everything that is enabling a very, very positive change in the industry. And I feel glad to have been able to share that with everyone. Thank you. Thank you so much.
John Steven: [00:36:55] I have been miserable about security since my involvement in it. And you know, people who've known me my whole career really know that. But what really strikes me right now, and why I left the company I built for 20 years, is that these new technologies are in fact making it possible for us to do the things we talked about wanting to do. And just, you know, when we went from C and C++ and Solaris boxes to the Web, a lot of us sort of looked at it like, that's a nice fad, come back when you're serious. That was, of course, the wrong side of history, right? Then mobile happened and we didn't really deal with the mobile challenge as quickly as we should have. Cloud has come, and the things that it's brought really provide us an opportunity. We can get on this bus or this train or whatever, and we can be a part of this continuous delivery movement and make it secure, or we can be left behind. This is really an interesting inflection point. And for people who have risen in their careers into management or bureaucracy, and who miss those elements of being technical and generating those impactful wins and then scaling those wins across the product team and organization, the tech stack is now allowing you to do that as a security professional. And so it really is an exciting time if you want to reinvent yourself as a security-aware developer. And these things only happen once a decade or so. So to me, it's a great inflection point, and it's worth talking about, and it's a call to action.
Jb Aviat: [00:38:30] Thank you. Thank you so much, John. I wish you an amazing day.
John Steven: [00:38:34] You as well.
Jb Aviat: [00:38:35] And thanks a lot for sharing your thoughts with us.
John Steven: [00:38:39] Yeah, you too. Thank you for the opportunity. I really enjoyed it.
[00:38:42] Thanks for listening to this episode of AppSec Builders. You can find all the resources discussed during this show on www.AppSecbuilders.com. Be sure to subscribe to our podcast to get updates on our upcoming episodes.