Episode 3

Complex System Security: a CISO Perspective with Emilio Escobar

Published on: 16th December, 2020

In this episode of AppSec Builders, I'm joined by DataDog CISO, Emilio Escobar. Emilio's extensive experience at Hulu and Sony Interactive and his contributions to Ettercap all provide a unique perspective on team maturity, managing complex systems across enterprise, leadership insights, security ownership, and becoming the CISO of a public company.

Follow Emilio on Twitter and Linkedin at the below links:

https://twitter.com/eaescob?lang=en

https://www.linkedin.com/in/emilioesc/

Resources

Ettercap:

Book Recs:

Episode 3 Transcript

Jb: [00:00:02] Welcome to AppSec Builders, the podcast for Practitioners Building Modern AppSec hosted by JB Aviat.

Jb: [00:00:14] Welcome to the third episode of AppSec Builders. Today I'm proud to welcome Emilio Escobar, who's CISO at DataDog. Welcome and good morning, Emilio.


Emilio: [00:00:24] Good morning. Excited to be here. Thanks for having me.


Jb: [00:00:24] Thanks a lot for joining us. So you recently joined DataDog as CISO, but you have broad experience as a security leader: at DataDog today, but before that at Hulu and Sony. And I think you're also a maintainer of a famous tool for security geeks like us, which is Ettercap, right?


Emilio: [00:00:48] Yeah, that is correct. I'm one of the three main maintainers of it, and we've been doing it for about nine years already.


Jb: [00:00:56] Do you want to share a bit about what Ettercap is? I used it regularly in pentests. That's an amazing tool.


Emilio: [00:01:02] Sure. Ettercap has been around for a long, long time, I think since 2006, and it had slowly died down around maybe 2008, 2009. But it is a man-in-the-middle attack tool. It's leveraged by a lot of pentesters for doing man-in-the-middle attacks against their customers and trying to obtain credentials for services like SSH, Telnet and what have you. How I got started with it was that when I worked at Accuvant Labs, I was a pentester, and one of my colleagues was using it, or trying to use it, for an engagement that he was working on. And he was running into some bugs. And he reached out to me and asked me if I knew how to code in C. I said yes. And he's like, I'll give you five hundred dollars for each of these two bugs that I'm running into if you solve them. So looking at the code, I was able to fix the issues that he was running into. I never got that thousand dollars back. But what that started was the conversation between him and I, this is Eric Milam, who I believe is at BlackBerry now, about, hey, should we actually resume the support for Ettercap? We wanted it to work well on macOS. We wanted IPv6 support. We wanted all these new features that it wasn't supporting. And we reached out to ALoR and NaGA, the original authors, and they were gracious enough to allow us to run with it as long as we kept it open source. Right. And that was the commitment that we gave them. So fast forward nine years, we've shipped a few versions. Now I'm less involved in the coding because I just don't have the time for it, but I'm surrounded by two people who are active. So feel free to check it out on GitHub and submit pull requests, issues, or use it and give us feedback.


Jb: [00:02:51] Amazing. Yes, great tool, I used it a lot. So after being a pentester, you went to Sony and Hulu. So two companies in the entertainment world.


Emilio: [00:03:34] Yeah. Yeah. So I actually met PlayStation during my consulting days, right, for some engagements that we did with them, and a few years later they reached out to me and said, hey, we're looking to grow the team, we're looking to grow the application and product security side of the house. So I joined as employee number two for that discipline. And we were able to grow it to a pretty significant team. We were able to build capabilities also out of the Tokyo office and out of Europe. So it was a pretty good program. The team is still growing, still active. And it was a lot of fun. But it was the first time that I was on the receiving end of attacks from groups like Lizard Squad and Anonymous. Right. So PlayStation is a big target, and things like fraud and fame were a lot of the factors that we had to go solve for. So a really interesting set of challenges that gaming faces, right. Uptime is everything. And we have a very opinionated customer base. Right. Like, gamers care and they will let you know pretty quickly, I guess.


Jb: [00:04:38] And yes, Sony has been through a couple of important leaks. Were you in the company when that happened? It must be insane to live that from the inside.


Emilio: [00:04:46] I wasn't part of PlayStation during their big outage. I supported them as a consultant; I joined after as an employee. And Sony Pictures, they're a separate entity, right. So we collaborate, but for something like what happened to them, it's a thanks-but-no-thanks kind of approach from them. Right. And rightfully so. And I think they had the right support from the FBI and everyone else involved in their investigation. So we only supported them from a building-a-discipline-and-a-practice standpoint, but otherwise it was: step out of the way and let us do what we do, because they have a pretty good team there as well.


Jb: [00:05:16] Yes, OK, interesting. And so then it was Hulu. When we first met, Emilio, you were at Hulu, and I guess that there you had very distributed architectures, right. Would you mind sharing a bit about the context at Hulu?


Emilio: [00:05:32] Yeah, certainly. So, yes, I joined Hulu to grow and build a security practice there, with a very heavy emphasis on product development. So SDLC security: how do we enable velocity? Time to market is everything, obviously, for a streaming platform. When I joined Hulu, we were working on the live TV product, so uptime became even more of a concern. Right. With video-on-demand, if you can't watch a video now, you might try in an hour. But live TV, if it's the Super Bowl or the World Cup or what have you, you want to watch it when it happens and not sometime later, unless you purposely record it because you can't watch it when it's live. So uptime was a big concern. So joining Hulu, I discovered the complexity of the architecture, right. It was a complete microservice environment. At PlayStation, they were working towards microservices and segmenting things into smaller types of workloads. Hulu had that built. So dealing with that complexity was something that I wasn't faced with at PlayStation. So it just required a different approach to security, right. Everything was automated. Hulu had a platform-as-a-service framework built by Hulu, which was really interesting, where developers, with a git push, can push to production and the containers will get built out and everything. So I thought all the right things were in place. We just had to get security in them to make sure that things were done appropriately. But we had to rethink the whole legacy approach to security, of being a gate, doing code reviews and, you know, how do you do static analysis? How do you do dependency scans and all those things? Because, you know, a developer can git push any time, and they were doing over three hundred deploys a day to production. Right. So it was a lot to catch up to.


Jb: [00:07:14] And could you give us some numbers so we can see the scale of that, like how many developers, applications, repositories, if you have that in mind?


Emilio: [00:07:23] Yeah, yeah. If I remember correctly, and I'm sure it has changed since, but I think that towards the end of my Hulu tenure we had over 600 developers and I believe the number was around twenty-three hundred microservices. Now, whether that's the right number or not, that's a separate conversation. Right. But that was what we were dealing with, and languages and frameworks were all over the place, right. So we wanted developers to be creative and effective in whatever language they felt the most comfortable with. So we had to support JavaScript, Python, Golang, I believe we had some Scala and Node.js and what have you. So it wasn't a centrally standardized environment where everyone was coding Java


Emilio: [00:08:05] and using the Spring framework and all those things where you can get a little bit more commodity out of them, so we had to scramble a little bit.


Jb: [00:08:12] So, I understand, and as a CTO, it's a tough balance to give a lot of autonomy to people, but you also need to keep a certain degree of consistency in your deployments.


Jb: [00:08:23] So I'm curious to understand: OK, a lot of different languages, but I guess this also means a lot of different frameworks, a lot of different coding styles and practices, right? That's a nightmare for a security owner.


Emilio: [00:08:37] Yes, it is. Yeah, so I think, you know, we had to rely on the developers being strong at what they're good at, right, at coding, so we had to leverage that partnership. You know, all these frameworks obviously have different attack surfaces, right. So we had to find ways to put security in place in a manner that wasn't disruptive, that didn't impact production, that was easily adoptable, right. So starting with the "why", making security the default, right. I always tell teams that if you have a developer choosing between defaults and security, the default is always going to win. So why not make security the default? So we had to chip away at that mindset and approach, right. We had to leverage as much of CI/CD as we could, do things as infrastructure as code, leverage security controls that you can load through a library or through infrastructure as code or some sort of automation. So a lot of self-serve: we wanted developers and teams to serve themselves security, and we had to build paved roads to have that enabled for them. But then on the back end, to your point of how do you maintain some level of consistency and priority towards quality and security, we made big strides and efforts into tying security in as a quality entity.
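To make the "security as the default" idea concrete, here is a minimal sketch of what a paved-road helper library could look like, assuming a Python shop; the class and constant names are illustrative, not Hulu's actual tooling:

```python
# Hypothetical "paved road" helper: an HTTP client whose defaults are the secure
# choice, so developers get security without having to opt in.
# SecureHttpClient and DEFAULT_TIMEOUT are illustrative names, not Hulu tooling.
import requests

DEFAULT_TIMEOUT = 5  # seconds; avoids calls that hang forever and mask outages


class SecureHttpClient:
    """Thin wrapper baking in HTTPS-only URLs, TLS verification, and timeouts."""

    def __init__(self, service_name: str):
        self.session = requests.Session()
        self.session.headers["User-Agent"] = f"{service_name}-paved-road-client"

    def get(self, url: str, **kwargs):
        if not url.startswith("https://"):
            raise ValueError(f"Plaintext HTTP is not allowed: {url}")
        kwargs.setdefault("timeout", DEFAULT_TIMEOUT)  # safe default timeout
        kwargs.setdefault("verify", True)              # TLS verification stays on
        return self.session.get(url, **kwargs)


if __name__ == "__main__":
    client = SecureHttpClient("checkout-service")
    print(client.get("https://example.com/").status_code)
```

The point of a helper like this is that the easy path and the secure path are the same path: developers never have to remember to turn verification or timeouts on.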


Emilio: [00:09:53] Right. A lot of times you see security and quality being two separate worlds, and they want to approach, using different processes and different language, what I consider to be the same problem, right. If I'm a consumer of a service, whether it's a functional bug or a security bug, it still impacts my experience, right. So I united them, to the point that we were reporting security issues to the executives and stakeholders as part of the quality conversation, right. And we used the same language, as in escaped defects, recurrent defects, and tracked those, because we wanted to leverage that already established interruption process QA had with developers for security concerns as well. And that got us a lot of wins, where we're not just saying, hey, we want to do this because of security; it's, here's a quality element to it that everyone cares about. As a developer, you don't want to be the reason why there's a bug in production that people complain about on Reddit or whatever. You have pride in the work that you do. So I think leveraging that helped us a lot with security.


Jb: [00:10:55] Super interesting. But I guess when you have a bug, it could be impacting the customer experience, like, I don't know, they can't start a movie, or it could be a security issue. In the end you want both to be fixed, but the available developer time is still limited. How did you prioritize security versus quality? I guess you still have to make that call somehow?


Emilio: [00:11:17] Right, yeah, and that's exactly why I thought combining those two problems into the same conversation helped, because then we can actually do the trade-off conversations in one forum, versus having silos for security or quality issues and not being able to combine the two of them. So, yes, we had to be very pragmatic: if it's a security issue, how easy is it to exploit? How likely is it to be exploited? What's the impact of exploitation? Right. And Hulu being very strict about the quality of the product, even if it was a security issue that would lead to a bad experience for a consumer, whether they couldn't start a movie or a show, they couldn't save something to DVR, or whatever core functionality the product has, we would still treat it as equally important as a functional issue, right. So how the bug manifests itself became less important than the impact of the bug on consumers, right. So that put, again, security and quality in the same conversation, and then we would have the trade-off talks. If it was a functional bug that was being seen by 68 percent of the consumer base and a security bug that was only being presented to 3 percent of the consumer base, then that was a no-brainer, right. We would choose the functional bug over the security bug. So that's where pragmatism comes into play.
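As a rough illustration of that single-queue, impact-first trade-off, a small sketch with made-up fields and numbers could rank functional and security defects on the same scale:

```python
# Illustrative only: rank functional and security defects in one backlog by
# consumer impact. The fields, weights, and example numbers are made up.
from dataclasses import dataclass


@dataclass
class Defect:
    title: str
    kind: str                  # "functional" or "security"
    affected_users_pct: float  # share of the consumer base hitting the issue
    exploitability: float      # 0..1 likelihood, only meaningful for security bugs

    def impact_score(self) -> float:
        # A security bug's reach is discounted by how likely exploitation is,
        # so a hard-to-exploit 3% issue ranks below an outage seen by 68% of users.
        likelihood = self.exploitability if self.kind == "security" else 1.0
        return self.affected_users_pct * likelihood


backlog = [
    Defect("Live TV playback fails", "functional", 68.0, 0.0),
    Defect("IDOR on profile endpoint", "security", 3.0, 0.6),
]

for d in sorted(backlog, key=Defect.impact_score, reverse=True):
    print(f"{d.impact_score():6.1f}  {d.kind:10}  {d.title}")
```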


Jb: [00:12:36] Right, makes sense, makes sense. And so with such a large distributed architecture, so you have a lot of simple small pieces, but the overall complexity is insane, I guess. How did you manage to cope with that? Did anyone have like a holistic vision of the system? How did you, like, enumerate two thousand services?


Emilio: [00:12:57] Yeah, yeah. It was definitely a lot of tribal knowledge for sure. And that was a problem, right. Because, well, I think one thing is also to admit to the fact that security will never have the same level of understanding and visibility as the developers have of their own software and services. So this goes back to the mindset of why security is there, right. Security is there to help developers write secure code and secure and stable services. But if you spend energy on security being able to see and understand one hundred percent of what's there, then I think you're burning a lot of candles on that side that maybe is not going to drive a lot of results. It's good to have an understanding, but is it good to have one hundred percent understanding? I don't think so, because you can rely on the developer community of your company to give you that understanding and empower them to make those decisions. Just measure what security looks like for them. Right. So one example is around abuse of services. One of the things that we did was empower development teams to be able to block what they thought was malicious traffic. And the reason for that was that the security team was getting paged, let's say, at 4:00 in the morning, because some IPs were hitting a few services pretty hard. Right. And the question that we were always getting from developers is: is this a security concern or not? Is this attack traffic or not? And it always put us in a weird position, because we don't necessarily know how the service gets called. Like, yes, we have an idea, but we don't know it better than the developers who built that service, right?


Emilio: [00:14:32] So we would always turn the question around to them and say, hey, based on the use cases that you've built into the service and what you see for, what, P99 or normal patterns look like for you, what do you think? Right. And the answer would always come back: yeah, this looks like they're trying something weird that is not part of the normal flow. So the question then was, do you block them, versus we block them for you. So we actually built those capabilities for them. And one of the team members on the Hulu security team built a service, because now we had to deal with the erroneous blocking of somebody who is a human doing something that was just a mistake. So my team built a service called "IsitblockedbytheWAF.hulu.com" that customer service could access internally and say, hey, this person is complaining, here's a description of what they were trying to do, are they actually being blocked, and then they can actually unblock from there. So we enabled the unblocking part as well. But ultimately, what that led to was teams making more informed decisions for the things that they fully own, and therefore reducing the need for security to know one hundred percent of everything that's happening, because that's just unrealistic for a dynamic environment like the microservice cloud environment that Hulu is, and so is DataDog. So we're not here to cover all the ground. We're here to make sure that people can cover their own ground.
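For illustration only, a self-serve block-check and unblock service along these lines might look something like the following Flask sketch; the endpoints, the in-memory blocklist, and the field names are hypothetical, since Hulu's internal implementation is not public:

```python
# Hypothetical sketch of a self-serve "am I blocked?" service for customer support.
# Endpoint paths and the in-memory blocklist are invented; a real version would
# talk to the WAF's API or a shared datastore and enforce authentication.
from flask import Flask, jsonify, request

app = Flask(__name__)

blocked_ips = {"203.0.113.7", "198.51.100.22"}  # stand-in for the WAF blocklist


@app.get("/is-blocked/<ip>")
def is_blocked(ip: str):
    # Customer support checks whether a complaining user's IP is currently blocked.
    return jsonify({"ip": ip, "blocked": ip in blocked_ips})


@app.post("/unblock/<ip>")
def unblock(ip: str):
    # Once support confirms the traffic was an honest mistake, they can unblock it.
    body = request.get_json(silent=True) or {}
    blocked_ips.discard(ip)
    return jsonify({"ip": ip, "blocked": False, "reason": body.get("reason", "unspecified")})


if __name__ == "__main__":
    app.run(port=8080)
```

The design point is that the unblock path is just as self-serve as the block path, so a mistaken block does not require paging the security team.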


Jb: [00:15:53] Super interesting! And I guess as security teams, we are always looking to get a stronger connection to the developers and to the other teams. So giving them the power and ownership, choosing who to block, is amazing in that sense. But as we see it, I guess the teams were already owning the operations of the service, the availability, the performance, etc., right?


Emilio: [00:16:15] Yes.


Jb: [00:16:16] So you already need a pretty distributed model to make that work?


Emilio: [00:16:19] Yes, absolutely. Yes. That only works if your company has the philosophy of an "if you build it, you own it" type of mindset. Right. If the developers are just there to write code, and they push it and some other team is then responsible for the operational aspects of the service and uptime, then again, you're just creating silos of knowledge. I don't see how a developer can be a successful software engineer if the performance aspects of whatever that developer is working on are abstracted from them. How do you optimize your code? How do you add the right caching layers? How do you write the right optimizations to your service if you don't have the visibility or control over that? So that's why I love that mindset of you build it, you own it.


Emilio: [00:17:07] It does create a little bit of a security nightmare, because then, yes, you do have engineers who require access to that production environment, you require engineers to have that level of access. So instead of focusing on removing that access, it's about providing the right access. Right. So one example: a lot of times developers need access to production databases or production configuration files. And the reason for it is that there's no other tooling exposing the configuration settings of their service to them, for them to be able to make changes if they need to in a safe way. So instead of saying, no, you don't need access and you shouldn't have access because you're a developer, it's more about, how do we actually enable you to do what you need to do in a way that's safe? Right. So this goes back to the whole premise of why security is there: to allow people to do their job without that fear or threat. Right. So one way of doing it is, I understand you need to do this because you're the one getting woken up at 2:00 in the morning when your service goes down. How about we give you the right tools that enable you to do that, where maybe you don't have to get woken up, where a tool just does it for you, and it's been tested and proven so much that we're confident you don't have to get woken up when it does something. Right. So it's just a different mindset. Right. But you have to have that philosophy across the organization, otherwise it's not going to work.


Jb: [00:18:23] Yes. And so did you trust all the teams to react equally to security alerts or issues? 


Emilio: [00:18:33] I think this depends on the maturity of the team. Right. I haven't met a team that absolutely does not care about security. However, I have met teams that have other concerns that are not security at the moment, or teams that don't understand what security is there to do, how we can help and what it means for them. So clarifying does go a long way. But you have to start with trust, right? I think the lack of trust is a poison in organizations and on teams. Right. If security doesn't trust developers, then how do you expect them to trust security? Trust goes both ways. And I've seen a lot of security teams sort of not trust developers, but then complain about security not having a voice at the table. Right. And it's like, well, obviously, because if you don't establish a culture of trust, they're not going to trust you. Right. So why expect trust when you're not giving it? So, yes, we had to start with a level of trust, but we also had to understand where those teams were from a maturity standpoint. Right. Like, if you work with a team that is doing no regression testing, no unit testing, most likely they're not going to do any security testing. Right. So help them fix their stuff, and then security will gain a benefit from it. It might be a longer-term investment than you want, but in the end you get the full investment rewarded back to you. Versus: you got security done, great, but have you actually helped the organization move forward? And the answer will be no.


Jb: [00:19:53] I love the concept of, yes, different levels of maturity. And yes, obviously, the first priority is to have a service that works. And once you're there, you can then invest in different areas like security. In such a large environment, was that something that was conscious? Like, maybe you knew about the maturity of the teams? Did you have like a spreadsheet, or was it more like, we give everyone the same test, but some are more successful than others?


Emilio: [00:20:21] Yeah, I think it's more of the latter. Right. I don't think we had a structure around tracking where maturity was. I prefer for those things to be gained through conversations and relationships versus labeling. Right. And it's just our job to make sure that while we're building relationships with external stakeholders and teams, we can call those things out, and we build a system where we feel comfortable calling those things out. And most likely they are aware of where their deficiencies are, and if you open that door, they will come to you with that. Right. And they will say, hey, I understand where you're coming from, but I'm worried about X. And that is the gift, the token, that security is given, that you have to leverage and say, OK, let me help you with X so then we can work on Y, right. And if a team approaches it with, X is not my problem, get back to me when that's solved, then that door gets closed for you. Right. So no, we didn't have a rigid way of doing it, but we sort of understood, based on the conversations, answers and questions that we would get, where teams were. But it wasn't an official thing, right, because I don't like to label teams that way. It was just more: be situationally aware of how much handholding you need to give teams, and be accepting of that.


Jb: [00:21:37] So at the scale of 600 developers, how do you spread the word? That's enforcing something that is cultural, right? Like, you own the security aspects of your production, of your code. How did the word spread out to all those teams?


Emilio: [00:21:52] I don't think I have a silver bullet for doing that, right. I think you have to find what works for your organization. Like, for example, at Hulu, you know, I always pushed the teams to have brown bag sessions to get in front of the engineers. We would talk at Engineering All Hands, we would find any forum that we could to make security visible and to be outspoken about the partnership model.


Emilio: [00:22:14] Right. We're here to help. So leverage whatever the company has. Right. So, you know, at Hulu they have this "Hudos" program, which is a kudos program. So I always made sure that we gave people kudos for helping and working with us. Right. One of the ideas that I stole from somebody was buying dumpster fire candles, right, and giving people a dumpster fire candle as they solved a dumpster fire. Or using budget for security backpacks, t-shirts, what have you. People care about being recognized, so just leverage that. So I think a combination of those things was what got the word out. I mean, I don't think we were perfect by the time that I left, but comparing where the team was at my initial time at Hulu to where we ended up, there was big progress there. But it was just about being humble and being open. Right. And not being afraid of telling a developer team, hey, I don't understand this, can you help me? Right. So that way we're also showing weakness. So I don't have a silver bullet, but leverage whatever your organization has, use rewards programs, use brown bags and demos, get in front of people, because you will find out that people care more about security than you think.


Jb: [00:23:24] I love this conclusion. I agree, if you give them the right interlocutor and the right mindset to communicate, yes, developers and any engineer actually care about a thing well done, and security is obviously a part of that definition of well done. And one thing that I find also impressive is, on such a large-scale system, the tooling to monitor, to examine, I don't know, to look for attacks, to check traces from one service to another.


Jb: [00:23:55] It's something that is still in progress to date. Like, we don't have a good unified way, so things are arising, like OpenTelemetry. I guess, working at DataDog, you are very aware of that, but this is changing fast, and so we can see the end of the tunnel. But did you have compelling tooling to help you with that, to debug security incidents happening in deep microservices, for instance?


Emilio: [00:24:20] Yeah, I mean, this is something that DataDog is also involved in solving, right. Like you mentioned, being part of the OpenTelemetry group and contributing to that. I agree with you. I don't think it's a problem that is fully solved, right. And we have been making strides on solving it and helping customers solve it at DataDog. At Hulu, I don't think we had a good answer to that. So we leaned a lot more on the teams coming to us with whatever they were seeing, right. We had a good understanding of where the controls were applied, where the blind spots were, but then you're making business decisions as to how deep down you add security controls. So we had to rely also on the visibility and observability of the services produced, right. We worked with developers on, hey, you should log these transactions that are erroneous, so that way we can follow them up at some point, where somebody will see them and react to them. So it wasn't necessarily with security controls; we had to take a different approach of leveraging observability and telemetry, and relying on products like DataDog to be able to centralize that and build tracking, monitoring and alerting for it.


Emilio: [00:25:34] So, yes, we had to get really creative about the signals that we were looking for and what alerting to do. But at Hulu we found out that developers were really in tune with how their services were performing, and they would immediately alert us if they saw something weird happening within, like, the last ten minutes, right. So they were on it. So observability there was pretty crucial for us and it helped us a lot. That way we were not required to put the same level of controls on a service that sits close to the Internet and handles sensitive data as on the service that is probably the most far removed from everything in the world, that might handle, I don't know, beacons from clients, right, but is six layers removed from the world. So that allowed us to be cost conscious, but also friction conscious, about what are the battles that we want to go fight and where we want to focus energy. Right. We have finite resources dealing with infinite attacks. Right. So we have to be very specific about where we pay attention.
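A hedged sketch of that observability-first idea: rather than a dedicated security control on every service, flag callers whose volume deviates sharply from the norm and hand the verdict to the owning team. The log format, thresholds, and function name below are invented for illustration:

```python
# Invented example of an observability-driven signal: flag callers whose request
# volume is far above the typical caller, then let the owning team decide whether
# it is abuse. The log format and thresholds are illustrative, not Hulu's.
from collections import Counter


def anomalous_callers(access_log_lines, factor=10, min_requests=100):
    """Return (ip, count) pairs whose volume exceeds factor x the median caller."""
    counts = Counter(line.split()[0] for line in access_log_lines if line.strip())
    if not counts:
        return []
    per_caller = sorted(counts.values())
    median = per_caller[len(per_caller) // 2]
    threshold = max(min_requests, factor * median)
    return [(ip, n) for ip, n in counts.most_common() if n >= threshold]


if __name__ == "__main__":
    sample = (
        ["10.0.0.1 GET /api/watchlist"] * 5
        + ["10.0.0.2 GET /api/profiles"] * 7
        + ["203.0.113.9 GET /api/login"] * 500
    )
    for ip, n in anomalous_callers(sample):
        print(f"unusual volume from {ip}: {n} requests")
```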


Jb: [00:26:39] Yes. Thank you. So you mentioned, yeah, DataDog. And so when did you start at DataDog, Emilio?


Emilio: [00:26:45] Yes, I joined DataDog in September this year. So it's early, three months in, and it's been an amazing journey so far. It's a great organization, but I'm still, you know, drinking from the firehose, as they say, and learning a lot. There's a lot of ground to cover. So I'm excited about the challenge and the opportunity to learn a different world. Right. I come from the consumer world, so joining a B2B enterprise company is a different world, but with similar engineering challenges to what a consumer company goes through. Right. So I'm familiar with those. But getting exposed to customers and the customer demands and asks and rights within their contracts is a new world that I've been getting exposed to.


Jb: [00:27:31] Yes. I'm a great admirer of DataDog, I really love the company and the products. At Sqreen we use DataDog for our internal systems monitoring. The pace of the company is impressive: new features being released every, I don't know exactly how often, but I feel like I get a newsletter weekly with amazing announcements in it, and I'm avid of that as a geek at heart as well. And so, yes, that's a really high-paced company. I guess Hulu was also pretty modern and very high-paced, with everything that you described. But that makes for fast-moving environments, and so basically, the heart of what you need to secure as a CISO is really moving, moving fast. How are you coping with that?


Emilio: [00:28:20] Well, that's assuming that I'm coping with it. I'm just kidding. No, I think, yes, DataDog is interesting.


Emilio: [00:28:26] Right? Because the joke that I have is that it is an enterprise company that wants to move at consumer-company speeds, again going back to time to market, and the teams are very, very active. To your point, the product teams and the engineering teams are extremely high energy and pushing on all cylinders. So I think it's following the same philosophy, right. And this is the interesting challenge: how do I apply the philosophy that I just mentioned to an enterprise company, and how do we get there? One really exciting thing to see after joining DataDog and being exposed to it is how much people care internally about the quality and the security of the product. Not that Hulu didn't care. Right. But there's that element when you work for an enterprise product company where you have to have the security right. I think the difference between a consumer and an enterprise company is that consumers sort of expect security to be done; enterprises demand security to be done, and they put it in contracts for you. Right. So showing and seeing how people internally actually care about satisfying customers and helping customers is great. But then on top of it is, how do we actually help customers get better at what they do? Right. So I think that's something exciting about DataDog that I wasn't seeing on the consumer side. Consumer is: you watch the shows that you want to watch, we give you the content we think you want to watch. But that doesn't mean we're going to make you a better TV watcher. Right. That's not a discipline that I think you want to improve on. Here, we're helping companies go through cloud transition or expansion, or just a more DevOps approach, and we're helping them along their journey. So I find that really cool and exciting.


Jb: [00:30:05] And doing that also means that the company is sharing so many projects: you have in-app agents, you have server agents, you have cloud watchers, integrations, etc. And so that's also a lot of open source. Did you find the open source dimension a different thing to deal with? I can imagine more open source means more likelihood of leaking secrets, receiving security issues from nonstandard channels. Any challenges related to that?


Emilio: [00:30:36] Yeah, yeah. So yes, the ones that you mentioned. It does create a new window, or a different type of window, of reflection of what things on the inside are. Right. So one of the good things about open source, but it could also be an Achilles heel if you're not careful about it, is that you will care a lot about the quality of the code that you're open sourcing, because now obviously you have unlimited pairs of eyes looking at it. But if you're not conscious of that and you actually open source things that are low quality, then that might give a threat actor indications of how things are internally as well. So you have to be conscious of that. So that's a challenge. It does present opportunities for mistakes, right, of keys and sensitive values being exposed through open source, because, say, we have something that was closed source within DataDog and we decided to open source it, and we didn't do a full review to make sure nothing was there. So we've had those concerns. Right. And it's way more than I've dealt with career-wise in the past. So it's quite a challenge. But again, going back to people caring, that means that we have more than just a security team looking for those things.
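As an illustration of the kind of pre-open-sourcing review Emilio describes, a simple secret scan could look like the sketch below; the patterns are a tiny, non-exhaustive sample, and a real review would rely on dedicated secret-scanning tooling rather than a script like this:

```python
# Illustrative pre-open-sourcing check: look for strings that resemble credentials
# before flipping a repository public. The patterns are a small, non-exhaustive
# sample; a real review would use dedicated secret-scanning tooling.
import os
import re
import sys

SECRET_PATTERNS = {
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded api key": re.compile(r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9/+=]{16,}['\"]", re.I),
}


def scan_repo(root="."):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        if ".git" in dirpath.split(os.sep):
            continue  # skip git internals
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        for label, pattern in SECRET_PATTERNS.items():
                            if pattern.search(line):
                                findings.append((path, lineno, label))
            except OSError:
                continue
    return findings


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, lineno, label in scan_repo(root):
        print(f"{path}:{lineno}: possible {label}")
```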


Emilio: [00:31:44] Right. So that's a benefit, where we have engineers who care, and they will do their own process of making sure that the code is of the quality standard that DataDog has, which is pretty high, and making sure that we're not leaking things there that we don't want people to see. And then, how do we receive security signals from multiple channels? This goes back to how we actually enable the teams to handle those signals and approach us when they need help, versus maybe trying to get security in the way of everything. Right. We have a pretty large scope to cover as a business. So, again, we have to make sure we prioritize where we spend the energy. Right. So if we have an open source repository and we leverage, let's say, GitHub's disclosure settings, where you can define what those are, then we work out internally what that process needs to look like, or leverage what we already have existing. Right. So just reusing a lot of what works for us.


Jb: [00:32:40] All right. You mentioned that's a large scope. And so, yes, how many people are at DataDog today, Emilio?


Emilio: [00:32:51] Last I checked, we're about twenty-five to twenty-seven hundred, and we've been growing pretty massively. You know, we manage billions, if not trillions, of observability transactions for our customers, and we have thousands of Kubernetes workloads running, so the scale is pretty massive. But we have a significant amount of energy and talent focused on working on that.


Jb: [00:33:14] So one thing that I really find impressive is you coming into a new company, and so you're here, you need to really understand what's happening. And you mentioned prioritization. So what was your personal strategy, as an individual, but also as a C-level individual in that company? How did you manage to cope with that, to understand the context and to get your mental model of DataDog security?


Emilio: [00:33:39] Yeah, and it's still something I'm working on, right. I think that the environment is so large, and I don't even know if I will be able to get to the point where I can say "I know where all the skeletons in the closet are" or know everything about everything. And maybe I shouldn't. Right. So my personal strategy behind it is: my first three months have been nothing but meeting with people, understanding, asking them what their priorities are, what do they care about, where do we want to go? Being a CISO, meeting with the business executives and the leadership team and the Board and understanding what they care about, right, and how we can help. Understanding a little bit of the past, in the sense that I care less about the decisions that are already made, because those are made decisions, you can't do anything about them. I care more about understanding why a decision was made, what were the constraints, what was the environment that led us to those decisions. That way we can understand how we improve from those decisions.


Emilio: [00:34:36] I come in as a new person, you know, "I would have done it differently," but I don't judge past decisions. Right. I think decisions were made because they were the right decisions to be made at a certain time. So I removed that mindset from my system and just care more about how we move forward. So it's been meeting everyone, getting an understanding of where we're going. You show up with an open mind and people will come to you with their problems, right, and that's been the case. So taking copious notes and trying to understand, but then going to the teams, right. I joined a team here, an organization that already had priorities defined for the quarter and was working on the next quarter, so I decided to not disrupt any of that and let the flow go where it's going, and then slowly introduce whatever changes I think are necessary for the benefit of the company. But how do we introduce change when I don't know where the organization is going? So I focused more on that first.


Jb: [00:35:29] Of course, and so I guess that means meeting with all kinds of stakeholders, because part of that is also understanding where the company is going.


Jb: [00:35:37] From a strategic standpoint, you want to be really tied to the business and to whatever value the company is willing to provide to the customers, according to their roadmap. So I guess that's also ensuring that you have a perfect alignment with everyone, at any level of the company.


Emilio: [00:35:54] Yeah, yeah, right. I wouldn't necessarily say it's a perfect alignment, but I think it's workable enough. Right. And I'm a believer in disagree and commit. Right. So like I mentioned, there's already a flow of things happening. So if I disagree with any of them, well, we were already committed to working on them. Right. So I wasn't going to disrupt any of that. But it's getting enough alignment to understand what the "now, soon, and later" concerns are, and then really doing some strict prioritization personally of what are the things I need to work on now versus in the future. And then, you know, coming in new to an organization, there's nothing wrong with relying on people to guide you and help you. Right. So DataDog does a great job of assigning you a buddy and a mentor, and then meeting people. Right. So I just don't focus on leaders only. There are influencers in organizations who are individual contributors. Right.


Emilio: [00:36:43] So try to meet the people who either have been here for a while or are respected enough that they can influence change. And those are the partners that you want, to build community in the organization, because you're not going to do it by yourself. Right. And you need a lot of help. So I wouldn't say I've gotten perfect alignment, but at least alignment enough that we can continue moving forward with what our priorities are, and then slowly introduce whatever new philosophy I think we need to start thinking of. At different levels, right: at the tactical technology level, the process level, the business level. You know, I think a CISO is a business stakeholder, right. So being part of the business and having an opinion about sales leads and marketing and all the areas of the business that we care about, not just being the security person in the room, because then that means they're only getting two percent of the value out of you when they want one hundred percent. Right. So understanding how those departments work and what they care about, and having an opinion and helping them, is valuable as a CISO.


Jb: [00:37:42] When I was a pentester, we had a recurring joke. That was like fifteen years ago, so things have changed for the best. But the joke was: oh, look at the CISO.


Jb: [00:37:52] That's the person who's easy to find in a company: the one who will always eat alone at lunch. And that was kind of true back in the day. And for the best, things have changed a lot. So it's really heartwarming listening to you describe your CISO function like that. And I guess this takes a lot of leadership, right, to identify those right individuals and to bring them with you.

Emilio: [00:38:18] Yeah, absolutely. And I think the transition that you're seeing is from CISOs that maybe drove change through authority to now, where you're going to see CISOs driving change through influence. Right. And my feedback and my advice to anyone who tells me they want to be on the CISO path is: learn, right. Security expertise is, in my opinion, always going to be important for the CISO. But if you have the right team, then you let them handle the security aspects. I think a CISO should worry more about philosophy and psychology: how to communicate, how to motivate people, how to influence people. Right. So I'm a book reader, and I keep recommending books to every single person I meet. You can ask my new team how many times I've said I'm going to be throwing books at you. But, you know, one of the great books that I think is still valuable today, even though it was written a long time ago, is "How to Win Friends and Influence People". So reading books like that, understanding how you put yourself in the other person's shoes. Right. There's a book that Jennifer Tejada from PagerDuty recommends, "Grit". Right. You need to read about how you work with people who've gone through challenges, how you find them, and how you empathize with them. So I would advise any security person or any technical person who wants to go into a leadership role to focus more on that side of the house versus being the go-to person for technical knowledge, because people are not going to measure you that way. The higher you go up the leadership route, the more they want out of you, and for a CISO it's not "tell me about security," it's "tell me about the business." Right. And you can't get that unless you know how to talk to people.

Jb: [00:39:52] So the books you mentioned, they are all about the human element and the psychological element. Of course, that's critical in any leadership position. But you also mentioned the business, right? You are in a public company, and the business does not flow by itself, so how did you learn about that?

Emilio: [00:40:12] Yes, that's very true. Right, joining a public company, you have to learn about terminologies that maybe you weren't exposed to before, right? So I'll give kudos to Joe Sullivan from Cloudflare.

Emilio: [00:40:23] He recommended a book that I was a little bit skeptical about, ended up reading, and enjoyed thoroughly. It's called "How Finance Works", right? It talks about how you actually value companies, how you evaluate companies, and describes things like the balance sheet. Right. Like what viability is long term, short term. So if you're like me, who had to take accounting classes in college but then fell asleep through most of them, I would highly recommend you read it. You know, I follow venture capital firms, right, not because I'm interested in being an investor myself, but more for how they actually find value in the companies that they invest in. What are the things that they look for? What does growth actually mean? Right. Do you optimize for growth, stability, hypergrowth, some sensible growth? Right. So I love hearing these perspectives and learning from these people. So, yes, absolutely, there is an element of the business that you have to learn as a public company, of what you can and can't say, what it means. And leverage board members: if you're part of a public company, partner with board members who are there to help you. And there's nothing wrong with showing weakness, right? They hired you for a reason, and the reason wasn't because you had 100 percent of the answers; it's because they think you can contribute to what's already there. So understand your limitations and leverage people who can complement you.

Jb: [00:41:42] That may be a great ending note.

Jb: [00:42:43] All right, Emilio thank you very much for your time today. I really appreciate having you here.

Jb: [00:42:47] Thank you very much for your time and I wish you the best of luck as a CISO of a successful and fast growing company.

Emilio: [00:42:55] Thank you. No, thanks for having me. This was really interesting and fun, thank you. Good questions, I loved them.

Jb: [00:43:01] Thanks for listening to this episode of AppSec Builders. You can find all the resources discussed during this show on www.AppSecBuilder.com. Be sure to subscribe to our podcast to get updates on our upcoming episodes.


Subscribe to the AppSec Builders Podcast

Receive updates on new guests, episode resources, reminders for new episodes, and much more. To subscribe you can also send an email to appsecbuilders@sqreen.com with the subject “subscribe”.

About the Podcast

AppSec Builders
The podcast for practitioners building modern AppSec.
AppSec Builders features practical and actionable conversations with application security experts and practitioners. Topics range from understanding and solving classes of vulnerability, to building protections that efficiently scale with your business, to core best practices that strengthen your security posture. AppSec Builders is hosted by Jb Aviat, CTO and co-founder at Sqreen and former Apple Red Team member, where he was a reverse engineer, pentester, and developer.

Contact us at appsecbuilders@sqreen.com

About your host


Jean-Baptiste Aviat

Jean-Baptiste Aviat is the CTO and co-founder of Sqreen. He spent half a decade hunting security bugs at Apple, helping developers fix them, and developing protections used by millions of devices.

Prior to Apple, Jb was a full-stack, white-hat hacker for a consulting company, developing numerous security tools in whatever language he needed to hack into.