In this episode of AppSec Builders, I'm joined by New Relic Principal Engineer and AWS Serverless Hero, Erica Windisch. Erica has decades of experience building developer and operational tooling for serverless applications. We discuss all things serverless, including why you should care about serverless security, designing app security when migrating to a serverless environment, how to scale your app security with serverless, and much more.
Erica is a Principal Engineer at New Relic and previously a founder at IOpipe. Erica has extensive experience in building developer and operational tooling for serverless applications. Erica also has more than 17 years of experience designing and building cloud infrastructure management solutions. She was an early and longtime contributor to OpenStack and a maintainer of the Docker project.
Follow Erica on Twitter and Linkedin at the below links:
Transcript for Serverless Security with Erica Windisch
[00:00:02] Welcome to AppSec Builders, the podcast for Practitioners Building Modern AppSec hosted by JB Aviat.
Jb Aviat: [00:00:14] Welcome to this episode of AppSec Builders. Today I'm proud to receive Erica Windisch; we will discuss serverless and serverless security. Welcome, Erica.
Erica Windisch: [00:00:24] Hi.
Jb Aviat: [00:00:26] So Erica, you are an architect and principal engineer at New Relic, you are also an AWS Serverless Hero, previously you were a founder at IOpipe, and before that you were a security engineer at Docker. Right?
Erica Windisch: [00:00:41] Ah correct yeah.
Jb Aviat: [00:00:42] So thank you so much for joining us today, Erica. I'm really excited to have you as a guest today.
Erica Windisch: [00:00:50] Thank you for having me.
Jb Aviat: [00:00:51] So, Erica, as an AWS Serverless Hero, I guess you know almost everything and you are very, very aware of what's happening in the serverless world. Before we dive into some AWS specifics, maybe you could remind us what serverless is and how it differs from the traditional world, especially from a security standpoint?
Erica Windisch: [00:01:14] Absolutely. So, I mean, my background, it's not just Docker, it's building OpenStack. It's building web hosting services. And, you know, this is an evolving ecosystem that, I mean, in the 2000s was, you know, as simple or as hard as taking your content and uploading it to a remote server and running your application, to as complex as running your own servers. Right. And these, of course, are options that are available to you now. But increasingly, developers are moving towards DevOps. They're using containers. They are finding that CI/CD and deployments and all of these things are useful tools for their organizations to move quickly, versus operating physical machines as pets, as we would call it, versus cattle, which as a vegan is probably not the best metaphor. But, you know, over this time, we've been increasingly going higher level and operating and deploying and building at higher-level layers. And serverless is that highest layer, in a sense, where rather than building a microservice, shipping a service that runs in a container on a VM on a host that you have to manage and operate, even if that's part of a larger Kubernetes cluster...
Erica Windisch: [00:02:33] Instead, you just take your application and you give it to your cloud provider, and your cloud provider runs it for you. There's a lot of advantages to this, largely that the platform is fully managed for you to a large degree. You know, you don't have to maintain operating system patches. You don't have to maintain kernels. You don't have to do anything other than operate your application. And really, the biggest disadvantages to this are that you do lose control of managing some of these pieces. But for most users, there's a benefit and a gain to not having to operate components that are not mission critical. Or, I mean, arguably they're mission critical, because your applications are not going to run without a kernel of some sort. However, that kernel can be tuned, it can be optimized, it can be hardened, and it can be done by Amazon rather than having to make that your problem, because you and your organization often may not have the expertise or the time to invest in having the same level of security that Amazon can provide out of the box.
Jb Aviat: [00:03:36] Yes. So that's the ability for users to focus more on what they know best, their business logic, rather than their infrastructure, rather than their server configuration. So from this point of view, you focus on what you do best, and the cloud provider handles what it knows best. Right. So that's a lot of advantages from a security standpoint, because, as you said, everything that is maintenance, like security updates, et cetera, is delegated to the cloud provider, and it's not your responsibility anymore. So is that like the best thing, from a security standpoint, migrating to serverless?
Erica Windisch: [00:04:14] So I will add an additional caveat here, which is that, I mean, serverless is a concept. There are multiple products that provide serverless capabilities, AWS Lambda being one of the most popular, S3 arguably being one of the first serverless products, and many users are already using S3. So from a certain perspective, you are already using serverless services. And S3 has minimal attack vectors, but there are also large attack vectors, potentially: you could leave your buckets open.
Erica Windisch: [00:04:46] I think that actually just today there's big news about this app called Parler, this alternative to Facebook run by right-wing conservatives. And what happened there is that they left S3 buckets open, apparently, and they were in the middle of a shutdown as well, and their services were compromised. And one of the things that happened there is a misconfiguration of their applications. They rely a lot on other serverless services such as Okta, which they were apparently running a free trial of, and they were removed from that service, and then they were in a situation where people were compromising their services because they didn't have many services available. Now, this is a particular case where they were denied for acceptable use policies, for what I consider pretty reasonable reasons of being denied service. But the point kind of stands, in a way, that here is a company that was relying a lot on some of these serverless services, and they found themselves still at the mercy of security vulnerabilities despite doing that. And in some ways, it opened them up more to being disconnected, having Twilio disconnect them, having all these other point solutions that were arguably serverless services shutting them down, because they relied heavily on platforms which they were no longer allowed to use.
Jb Aviat: [00:06:06] So your point is that using serverless puts you at the mercy of the solution provider?
Erica Windisch: [00:06:11] No, not necessarily. No, actually, that's not the point I'm trying to make, so much as: they were hacked before they were shut down, before they were removed from some of these services. They were using serverless services and they still got hacked. Right. So the point is more that serverless itself doesn't ultimately protect you from application-level compromises. Right? It does protect you from some of the infrastructure-level compromises. It doesn't stop other attack vectors. Yes, it is true, it doesn't protect you from being bad people and getting yourself kicked off of services. But it also shows that you can use some of these services that are supposed to provide you third-party security controls, and they can still fail you.
Erica Windisch: [00:06:53] Yes, I guess it's multiple points. Obviously, they made a lot of really critical mistakes, both technologically as well as politically.
Jb Aviat: [00:07:03] So basically using serverless is not perfect. You can still make configuration mistakes, security mistakes, at various places in the stack. You mentioned also application security, which, yes, is not prevented by the fact that you are using serverless, because the code you are running is very similar to what you were writing in a regular application.
Erica Windisch: [00:07:26] Exactly. You're still building applications. So application security is still essential, right? If you're relying on something like Okta or Auth0, it's very easy to misconfigure those and to use them incorrectly. You know, it's possible to have Twilio set up and not have two-factor working correctly, or not have it verify phone numbers. Apparently, you can have S3 and you can leave your buckets open. Right. And that is a large part of my point.
Jb Aviat: [00:07:53] Yes, absolutely. One of the opportunities I would see with serverless is that usually you are starting, sometimes from scratch, or at least you need a new CI, you need a lot of new things when you are moving to serverless. So that's also a chance for you to use infrastructure as code, to use higher-level deployment frameworks, for instance. And so that could be a place where you can bake in some security controls, to maybe review your Terraform files or your CloudFormation files to ensure that you don't have such issues. Are you familiar with such practices, Erica?
Erica Windisch: [00:08:29] Yeah, there are definitely companies. A lot of the larger companies actually use their own custom serverless application frameworks where they bake in a lot of these constraints and security controls for everybody, for everybody that is using that framework. I do see that to be a pretty common use case, especially, again, at larger companies. But even with the smaller companies, I think that CI/CD is a place where you can then slip in some configuration, whether that's, you know, serverless configuration or even if it's potentially Kubernetes. I don't think it's strictly related to serverless. I think that with serverless, you have a lot more control over your application via configuration, right? Just because, I mean, there's less infrastructure. So I guess it goes both ways, right? You have less control and more control. Right. Like, of all the knobs that you can turn in configuration, arguably there's fewer of them, but they're more applicable to your application specifically, rather than knobs that are specific to infrastructure. Like, you're not turning knobs that control your IO in general. Although on Lambda, you can control how much memory you get, which does control how much IO you get and how much CPU you get. But that becomes more of a billing function. It says, how much am I willing to pay for the service and how much performance am I going to get out of what I'm paying for? But I think that's a little bit different from the level of control over whether you are running a certain VM or a different operating system, a different kernel, things like that, which are out of your control with serverless applications.
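The kind of CI/CD security control described here can be as simple as a script that scans an infrastructure-as-code template before deployment. This is a minimal, illustrative sketch (the template shape follows the CloudFormation resource format; the function name is hypothetical) that flags IAM roles or policies granting wildcard actions:

```python
def find_wildcard_actions(template: dict) -> list[str]:
    """Return logical IDs of IAM roles/policies whose statements grant
    '*' or 'service:*' actions, e.g. for failing a CI pipeline step."""
    offenders = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") not in ("AWS::IAM::Role", "AWS::IAM::Policy"):
            continue
        props = resource.get("Properties", {})
        policies = list(props.get("Policies", []))
        if "PolicyDocument" in props:
            policies.append(props)
        for policy in policies:
            for stmt in policy.get("PolicyDocument", {}).get("Statement", []):
                actions = stmt.get("Action", [])
                if isinstance(actions, str):
                    actions = [actions]
                if "*" in actions or any(a.endswith(":*") for a in actions):
                    offenders.append(logical_id)
    return offenders
```

A CI job could run a check like this against every stack and reject merges that introduce overly broad permissions; real-world equivalents of this idea exist as dedicated linters, but the core logic is this small.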
Jb Aviat: [00:09:58] Yeah. And so to me, I'm actually not sure that serverless means less ops. And as you said, it's a different kind of control, because if you are a developer, before, you were doing zero ops; all the orchestration you were doing was, I don't know, at the API or microservice level, maybe the application level. If you move towards serverless, you might suddenly start to use things such as Step Functions that will orchestrate how your functions are communicating together. And so this is ops that a developer starts doing that they weren't doing previously. So that's also something that is kind of new.
Erica Windisch: [00:10:33] I think that moving away from infrastructure operations to application operations... I think that not operating the hardware gives you more time to focus on operating your application: making sure your application's working, getting your application tests to work, building out more functionality in your application. All of this means that you're using your tools more for application support rather than for infrastructure support.
Jb Aviat: [00:10:58] Yes, I agree. And if you look at, you know, there is the typical Venn diagram where you see security, operations and developers. And so to me, if we consider serverless, the things are getting more intricate, because you have actually a very different kind of ops when you are moving to serverless. And so some of the things that could have been previously the responsibility of the operations could now be falling into the hands of the developers. So, for instance, who is responsible for defining the privileges that a given function should have in terms of IAM and cloud permissions? Is it the developer, who knows exactly what the function does and is writing, I don't know, one function or several functions per day? Or the ops, who are actually not aware of the business logic? I don't know if you see something similar.
Erica Windisch: [00:11:48] Yeah, I see a lot of organizations creating roles and policies organizationally and providing those to the developers, and developers that need to use these policies configure things this way. And for a lot of organizations that works. It does create some challenges around the CI/CD platform. And it can create barriers sometimes, because if you want to deploy serverless applications and nobody has yet deployed or built your serverless role, or has authorized that for use, for Lambda in particular, if they don't create the necessary roles for Lambda and they don't allow you to create those functions with the right roles and permissions, it becomes a barrier towards adoption within your organization. That said, there's advantages to locking down things like that organizationally. And I think that a balance has to be struck between, you know, enabling innovation in your company and this top-down, operations-level security that happens, again, in a lot of companies. And it's a balance. It's not necessarily an easy balance to strike. I think that a lot of organizations are very set in their ways because they're not expecting serverless. It is more and more common. Like, I know at New Relic it's something that more and more teams are looking at using, but it's still something that is challenging to use as well, just because you need to have your CI/CD system set up correctly, you need to have team members who are familiar with learning and building things serverlessly. It is a different paradigm, and it presents challenges, especially, again, for the larger organization, or depending on how you structure your operations.
Jb Aviat: [00:13:28] Yes, there is a balance between security and usability, so it's not a new thing. Obviously, from a security standpoint, you would think that the principle of least privilege is super important and that's something that you should apply to your Lambdas, but probably not to the point of having like one IAM role per serverless function, because I guess that makes the whole thing super hard to scale, and I don't think IAM is a good way to manage like hundreds of roles for your serverless deployment.
Erica Windisch: [00:13:57] Yeah, I think it becomes challenging, though, because a lot of serverless applications do not have really great input validation. That, of course, does vary according to each language and according to each developer. But most of the code written for serverless, or Lambda in particular, is Node and Python, and these are dynamic languages. They are not statically typed. Minimal input validation is often given for these functions. So, you know, having open IAM permissions does also potentially mean having invalid input passed to these functions, which does mean that you probably want better input validation, depending on how open your IAM permissions are. I mean, there is a good argument, which is that you should have good input validation and strict IAM, but we also live in the real world and we recognize that doesn't always happen.
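To make the point concrete, here is a minimal sketch of explicit input validation at the top of a Python Lambda handler. The field names (`order_id`, `quantity`) are hypothetical; the pattern is simply to reject malformed events before any business logic runs:

```python
def validate_order_event(event: dict) -> dict:
    """Raise ValueError for events that don't match the expected shape,
    instead of letting untyped input flow into business logic."""
    if not isinstance(event, dict):
        raise ValueError("event must be an object")
    order_id = event.get("order_id")
    if not isinstance(order_id, str) or not order_id:
        raise ValueError("order_id must be a non-empty string")
    quantity = event.get("quantity")
    # bool is a subclass of int in Python, so exclude it explicitly
    if isinstance(quantity, bool) or not isinstance(quantity, int) or quantity <= 0:
        raise ValueError("quantity must be a positive integer")
    return {"order_id": order_id, "quantity": quantity}

def handler(event, context):
    order = validate_order_event(event)
    # ...business logic operates only on validated fields...
    return {"status": "accepted", "order_id": order["order_id"]}
```

In Node, a schema validator or TypeScript types at the boundary plays the same role; the key is that the check happens at the entry point, regardless of how the function was invoked.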
Jb Aviat: [00:14:47] Yeah, too much complexity is also an enemy of decent security. But that's a good point you are touching on, because of the scale that you have when you deploy serverless: instead of managing one code base, you are managing maybe ten or fifty code bases. And so there is a difference in terms of scale that you didn't have previously.
Erica Windisch: [00:15:09] So, you know, I would say that serverless enables you to build scalable applications, and what is good about this is that rather than your application falling over, it will scale, and it will also charge you. So it does open up some potential for denial-of-service-style attacks. Serverless tends to be very inexpensive, so it's not usually a large bill, but it is possible to force a serverless application to scale. Right, almost like a denial-of-service attack, but instead of denying the service, it's a denial of wallet, because you're putting so many resources through that you're just racking up their bill; the service is going to scale, it's going to support your requests, it's just going to keep charging more and more. S3 has the same problem. Right.
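The denial-of-wallet arithmetic is easy to sketch. This back-of-envelope calculator uses default rates that mirror commonly published Lambda pricing for us-east-1, but treat the exact figures as illustrative assumptions, not authoritative prices:

```python
def lambda_cost_usd(invocations: int, duration_ms: int, memory_mb: int,
                    price_per_request: float = 0.20 / 1_000_000,
                    price_per_gb_second: float = 0.0000166667) -> float:
    """Rough Lambda bill for a burst of invocations:
    request charges plus GB-seconds of compute."""
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    return invocations * price_per_request + gb_seconds * price_per_gb_second

# 10 million hostile invocations at 500 ms / 1024 MB each: small per request,
# but the attacker, not you, decides the volume.
burst_cost = lambda_cost_usd(10_000_000, 500, 1024)
```

Caps such as reserved concurrency and billing alarms are the usual mitigations: they trade the "infinite scale" property for a bounded worst-case bill.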
Jb Aviat: [00:15:57] Denial of wallet issue. I like it.
Erica Windisch: [00:16:00] Yeah, but I did forget the original question.
Jb Aviat: [00:16:04] So it was about the scale. And I think a challenge such as, I don't know, vulnerable dependencies, for instance, is tractable when you have a few code bases. But if you multiply those code bases by 20 or 50, that's much harder to track at that scale.
Erica Windisch: [00:16:20] So I think the challenge for me is not necessarily the code bases, but the deployments, because each serverless function is a deployment of code, and each of those deployments is an immutable artifact of that code and a snapshot in time. If you are building your application and you don't have good CI/CD, that code could be out of sync with what is in Git. You might have code or applications that are working well for you. And here's, I think, a big difference between traditional applications and serverless: if you have a microservice that was serving, say, 15 REST endpoints and you replace it with 15 serverless functions serving one REST endpoint each, you now have 15 deployed services. And if one of those REST endpoints doesn't need updates in a year, it might fall behind the other code bases just because it's not getting those updates. So what some organizations do is they force deployments. You know, they might do minor releases and deploy every function simultaneously so that they're getting continuous deployments, they're getting continuous updates. So I don't think that it's an impossible problem to solve. I think that it just means that you need to have continuous deployment. And I think that deployments need to be more continuous than the code updating by itself. Like, it's better to deploy with no code changes than to let it sit for a year, right, is what I'm getting at. Like, you don't want that deployment sitting for a year just because that particular function hasn't changed. And in traditional environments, very often that, you know, less-used function will get redeployed over and over again because it's all part of a larger package. And because it's no longer part of that larger package, it doesn't get the updates frequently, unless you design for it and you plan for that function to get those updates even when you're not making code changes.
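One way to operationalize "redeploy even unchanged functions" is a scheduled check that flags functions whose last deployment is older than some threshold, so the pipeline can force a rebuild. A minimal sketch, assuming you already have each function's last-deployed timestamp from your deployment tooling:

```python
from datetime import datetime, timedelta, timezone

def stale_functions(last_deployed: dict[str, datetime],
                    max_age_days: int = 30) -> list[str]:
    """Names of functions whose most recent deployment is older than
    max_age_days, sorted for stable output in CI logs."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return sorted(name for name, ts in last_deployed.items() if ts < cutoff)
```

A nightly job could feed this from the deployment history and trigger a no-change redeploy for every name it returns, so each artifact keeps picking up managed-runtime and dependency updates.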
Jb Aviat: [00:18:16] Yes, correct. Correct. And one solution to that problem is runtime monitoring, rather than GitHub or code-level monitoring, because that's obviously always up to date, and not depending on the deployment.
Erica Windisch: [00:18:31] Now, that's an excellent point. It's observing what's actually running is really important for knowing what your security posture is currently, because what is in code is not what's running on your servers.
Jb Aviat: [00:18:44] Yes, you can ask SolarWinds about that. So to me, one big difference with serverless as well is you tend to have much more input that is coming not from an HTTP request, but from a stream, from any kind of cloud mechanism, a notification, asynchronously. And you mentioned input validation earlier. But I think that's also a domain where developers tend to trust the thing that is not directly coming from the user, and where you can actually have a lot of injections and similar issues.
Erica Windisch: [00:19:21] Yeah, I think that it's very common for a lot of serverless developers to trust that the input that they're getting to their serverless function is in the format and, you know, the types and everything else that they're expecting. That really should not be assumed. But at the same time, we are talking about languages like Node and Python that to a large degree are not typed. Things like TypeScript can help, general input validation can help. It's definitely something that you should be aware of going into writing these serverless applications. Or if you're not writing them, if you're an operator, if you're a security architect at the company that's building these functions, it's something to look at.
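The same distrust applies to non-HTTP event sources. As a sketch, this validates records from an S3 notification event (the nested `Records`/`s3`/`bucket`/`object` shape follows the documented S3 event format; the bucket allow-list is a hypothetical application-level control):

```python
ALLOWED_BUCKETS = {"my-app-uploads"}  # hypothetical allow-list for this function

def extract_s3_objects(event: dict) -> list[tuple[str, str]]:
    """Pull (bucket, key) pairs out of an S3 notification event,
    rejecting records that are malformed or reference unexpected buckets."""
    objects = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if not bucket or not key:
            raise ValueError("malformed S3 record")
        if bucket not in ALLOWED_BUCKETS:
            raise ValueError(f"unexpected bucket: {bucket}")
        objects.append((bucket, key))
    return objects
```

Even though the event came "from AWS" rather than from a browser, the object key is still attacker-influenced data (anyone who can write to the bucket chose it), so it deserves the same scrutiny as a request body.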
Jb Aviat: [00:19:59] I agree. And so the fact that we're moving from, I would say, the traditional model, where we are hosting the application on a server or in a Docker container, towards something where you are just running code somewhere, is a big change for a lot of security tooling as well, because what you're using for monitoring, intrusion detection and prevention on a server cannot easily be applied to the serverless world. Do you see such transitions? People trying to find new solutions? Are you aware of existing solutions on that front?
Erica Windisch: [00:20:37] I've definitely seen new application models come out because they're serverless, because they are containerized. Serverless is kind of a version of containers. I guess one thing that both of those have done is make it easier to run resource-intensive tasks for a wider number of users. Right. Because serverless is allocating you a dedicated compute resource to handle your request, it does mean that you have more dedicated compute power for that request than you do outside of a serverless environment, because you are isolated per request and you have a certain amount of CPU allocated and storage allocated. So I would say that there are more dedicated resources, and that allows you to do... I mean, there are people that do machine learning inference inside of Lambda functions, and for certain applications that's excellent. It's mostly something you're going to look at running with something like TensorFlow Lite rather than a full TensorFlow. But for handling individual requests and doing machine learning against those requests, serverless is an excellent fit, just because of these dedicated CPU cores, these dedicated resources for those requests. I think that building that out to run on containers is significantly more complex. Certainly not impossible, people are doing it, they do quite a bit of it. But I do think that it is more complex to run those sorts of applications on servers, and arguably less secure as well. If you're doing, again, machine learning inference, you are running that inside of a sandbox. A dedicated compute runtime seems to be, for me, the ideal compute environment.
Jb Aviat: [00:22:20] Ok, I understand. And for a security team that would be transitioning, and that is operating server-based security tools, what do you see them using on serverless? Do you see alternatives to security tools, and new ways to monitor or detect attacks?
Erica Windisch: [00:22:39] I've seen some interesting new tools come out. I think that for the most part, the existing tools that are probably already in your toolchain might be the best solution, just because you want to minimize the changes for your developers, you want to minimize the complexity, the variation from what your users are already familiar with. On Lambda, you can now ship and run Docker containers, or OCI images, which is really powerful. It does mean that you operate and control more of your stack than you do if you don't use Docker containers. So there is more of a security footprint to operate and monitor and observe, but it's still less than full containers, right? It's a hybrid. So a lot of those container tools and container tooling will work for you. New Relic has a serverless product, Snyk has support for serverless products. A lot of the security companies, I think, are supporting serverless these days in one way or another, to varying degrees.
Jb Aviat: [00:23:33] Yes. But still, if you are using, I don't know, syscall-based tools such as Sysdig, for instance, that's something that becomes hard or impossible to use in most serverless deployments, at least that I'm aware of. For instance, most of the cloud vendor solutions, I don't know, like Detective or whatever; but AWS WAF, for instance, is very hard to plug in in most deployments, because the input is not coming from a web source in most cases.
Erica Windisch: [00:24:03] Mm hmm. Yeah. Gosh, I feel like there's a lot there. Yeah, not everything comes from the web, which is true. You know, a lot of serverless applications are invoked from things other than the web, which I think is generally more secure, though not necessarily strictly more secure. But I think that there's a lot of web-based attacks that you certainly avoid by not being a web-based Lambda. I feel like there was something else, though, that you were getting at here, and I didn't quite pick it up.
Jb Aviat: [00:24:31] Yes. So one of the big differences... So, yes, you cannot do that, some of the typical security tools do not apply here. But I think there are also a lot of security tools that you don't need anymore.
Erica Windisch: [00:24:42] Oh, I see what you're saying. Using Amazon WAF as your firewall, for instance. I mean, I think, again, we go back to input validation, you still have to validate your inputs. But it's true that in theory, a lot of these things are provided out of the box for you. You know, S3 has a lot of controls, for instance, out of the box. Again, it comes down to configuring those correctly, because just because it's serverless doesn't mean it's configured correctly out of the box.
Jb Aviat: [00:25:09] Yeah, absolutely, absolutely. So a big added value of serverless deployments, where I think you are stripping away a good part of the ops, is when you are operating microservices, for instance. When you think of putting zero trust at the application transport layer in microservices, where you would want to authenticate one application or one service talking to another service, have cryptography on that, etc., a lot of those controls that you need to manually build in microservices now come out of the box with AWS, because they already have certificates everywhere, and you already have a way to define the granular roles and IAM permissions that will help you restrict what a given serverless function can do. And that's an amazing improvement to security.
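As a concrete illustration of those per-function granular permissions, here is a sketch of generating a least-privilege IAM policy document scoped to a single resource. The action names are real DynamoDB read actions; the idea of generating one such document per function, rather than a shared broad role, is the pattern being discussed:

```python
import json

def read_only_table_policy(table_arn: str) -> str:
    """IAM policy JSON granting only the read actions one function needs,
    scoped to a single table instead of 'Resource': '*'."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": table_arn,
        }],
    }
    return json.dumps(policy)
```

Attaching a document like this to a function's execution role means a compromise of that function's code can only read that one table, which is exactly the blast-radius reduction zero trust is after.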
Erica Windisch: [00:26:02] I think that's absolutely true, as long as you can manage your IAM policies. And I think that that's a really, really big caveat.
Jb Aviat: [00:26:10] Yes. Yes. Are you aware of tools that help manage such policies at scale?
Erica Windisch: [00:26:15] I would love to be aware of more tools for this. I'm sure that some exist. I'm not super tied into that space. I would love to hear recommendations from your listeners.
Jb Aviat: [00:26:24] So I don't think I have a great recommendation to manage such permissions at scale. I know that some companies are doing a very good job of it, but that's something that we would ask the listeners, I believe.
Erica Windisch: [00:26:35] I do know that there are some tools such as Cloud Custodian that do allow you to scan your Amazon deployment for security configurations and then apply them as policies, which could be used in this capacity. But it doesn't directly solve the problem. It's just a way to apply policies to your account. You still have to write the policies, you still have to manage those policies, you still need to... you know, it's a stepping stone towards a solution like that.
Jb Aviat: [00:27:01] Yes, I think using a lot of infrastructure as code and software-defined policies can really be helpful in ensuring that you are doing a good job. But that's only one part of the problem. And so in terms of security, there is one class of vulnerabilities that we didn't discuss. For instance, how do we handle vulnerabilities such as Meltdown or Spectre? What do you think of those in the context of serverless, Erica?
Erica Windisch: [00:27:27] Yeah, so I think that when I was at Docker, something I looked at, and this is back in 2014, was mitigations for Meltdown and Spectre before they were known.
Jb Aviat: [00:27:37] Yes, I was about to say, it was 2018, I believe, so 2014 was before they were well known. That works.
Erica Windisch: [00:27:44] Yeah. Well, so what we did know at the time was that there was an attack called Flush+Reload, and Flush+Reload was a predecessor to Spectre that allowed information to be leaked out of the L3 caches. You know, I looked at ways we could get around this, and what I learned was largely the same things that we learned out of Spectre and Meltdown, which was that these are fatal CPU problems. We can't really work around them. We came up with some workarounds, and then we found workarounds around those workarounds. And to be quite honest, we kind of gave up because there weren't any really clear solutions. And the really clear solution was that Intel needed to fix the CPUs, which of course didn't happen until the problem became gigantic. It was already a big problem. But what we did recognize was that a hybrid solution would probably be the best solution, that is, a hybrid between containers and VMs, which is what Firecracker does. And I think that Amazon's Firecracker, which is what powers Lambda, is probably the best solution that we have for Meltdown and Spectre, besides, of course, you know, replacing CPUs.
Jb Aviat: [00:28:53] And are you aware of any research on that side? Because, like, you get no guarantee from the cloud vendors on what kind of isolation you have when your actual Lambda is running.
Erica Windisch: [00:29:05] Yeah, so it's actually open source. Firecracker is open source. You can go to GitHub and you can download it, you can run it yourself. It is a container runtime that is backed by VMs but still has access to namespaces in the host kernel. So it is a true hybrid solution between VMs and containers. And I think that that's probably what anybody should be using if they're looking at mitigating Spectre today.
Jb Aviat: [00:29:31] Ok, super, super interesting, thanks, Erica. So now in terms of best practices, if I wanted to start a serverless project today, and I'm looking for really top-notch practices to ensure that I have the cleanest setup possible, what would you recommend, or what do you see happening today?
Erica Windisch: [00:29:49] I would say there's two main pathways I would recommend. One is to choose a framework such as a serverless framework and build around that and deploy with that. And then the other side is to build around containers and support around containers. And I think there's a lot of really strong cases for building around containers except for security, because you do have a wider security boundary around those containers than you do around application code that you ship and you don't have to maintain an operating system because as soon as you ship a container, you're shipping an operating system. When you're shipping just the application, you're relying on a pre-existing operating system that will be maintained and operated for you. And getting all of those updates automatically out of the box on a daily or hourly basis. Whereas when you move the containers, you become responsible for the updates yourself, which will necessarily be a delayed schedule compared to what Amazon is doing for the fully managed environment. But I think there's a lot of advances towards using containers from a developer perspective, both from developer productivity as well as security, because even though you do have a wider security footprint now, because your boundaries been expanded, you also have more tools to help you. Right. So I think that you have to make an educated decision around these two choices. And I'm kind of afraid that if you choose to not do the container route, that you're going to have users that want to build with containers anyway and you're going to support both. So it might be better to just support containers on top of Lambda, because I feel like you're going to have to support them regardless, whereas the non-container version is probably optional. You can probably get away with not supporting that and just supporting containers. 
I think it'd be harder to support traditional serverless applications without supporting containers at all at this point, now that Amazon supports it.
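The container route Erica describes maps to Lambda's container image support: you package the function as an image built on an AWS-published base image, so the OS layers now ship inside your artifact. A minimal sketch, assuming a hypothetical `app.py` with a function named `handler`:

```dockerfile
# AWS-published base image for the Python 3.9 Lambda runtime
FROM public.ecr.aws/lambda/python:3.9

# Install application dependencies into the image
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the function code and point Lambda at its handler
COPY app.py .
CMD ["app.handler"]
```

Because the OS packages live inside this image, picking up security updates means rebuilding and redeploying it — the delayed patch schedule Erica contrasts with the fully managed environment.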
Jb Aviat: [00:31:42] Yeah, and it also probably makes it much easier if you want to transition to another cloud vendor, test locally, debug locally, etc.
Erica Windisch: [00:31:51] Exactly. Exactly.
Jb Aviat: [00:31:53] So you are an AWS serverless hero, so I'm assuming that you know everything about AWS and serverless, but what do you know about the other cloud vendors and their efforts on serverless, and maybe the good and bad things that you see everywhere?
Erica Windisch: [00:32:07] Yeah, I know quite a bit about the other providers too. Azure in particular, I think, is really popular with C# support, so I think a lot of the C# users run on top of Azure; Node and Python are less popular there. And as far as I know, I don't think they support container images. Everything runs on top of Windows, so it's a very different environment. You can run your Node.js applications, and as long as they just do the Node.js things and use the libraries that ship with Node, they'll work, because Node is basically built on top of Chrome's V8 components.
Jb Aviat: [00:32:44] So, yeah, the Azure way to build things. So I think that's pretty clear. And so do you see specific advantages?
Erica Windisch: [00:32:52] Yeah. So I guess what I was getting at was that it's running on Windows, so a lot of the applications will just work until they don't, because you're doing something Windows-specific. But that works both ways. Sometimes you need something that requires the Windows API and you need a place to run it, and Azure becomes a really obvious place to run things that require the .NET SDK, things that require Windows APIs, things that require a specific Windows-compiled binary or library. It becomes essential, right? And so that becomes, obviously, the best place. I think it can also make sense to run on Azure if you have particular business contracts that demand that you run on Azure. I think Google is kind of interesting because they went down a path of Google Cloud Functions but also Google Cloud Run, and Google Cloud Run has some similarities with containers for Lambda. I don't think it's a bad platform to choose. At IOpipe, we actually ran on top of Google and then migrated to Lambda, and that was, well, years ago. So I don't think it would be fair to say that the decisions we made then would be the decisions we would make now. But at the time, four or five years ago, it was just not as developed a product. I think that almost goes without saying, because Google and Azure were behind Lambda. I would argue they're still behind Lambda, but they're definitely further along than they were when I decided not to use them.
Jb Aviat: [00:34:11] So, yes, something that caught my attention recently on Lambda was the release of extensions. That means that if you are a software agent vendor, you can rely on extensions, and I know New Relic is using that; we are currently bringing Sqreen onto serverless thanks to that. And I don't think that's something that is available on other cloud vendors, for instance. So that's one kind of discrepancy where using external tooling is much easier today on AWS than it is with other vendors.
Erica Windisch: [00:34:44] I think there's some truth to that. The Lambda extensions are really great for providers like New Relic to say: here's your Lambda function that you've written, and you want to add New Relic to it; now you add an extension and kind of click a button, and you have observability for your Lambda, whereas before extensions it was more of an ordeal to instrument your code. Azure has something called bindings, which provide some similarity but aren't quite the same thing, and it's something that we have actually had discussions about with all the providers, Google, Amazon, Azure: how do we create a better experience for our users? And they're all a work in progress. Amazon's Extensions API has things that are still in progress, and Microsoft and Google have limitations in what they provide as well. I wouldn't say that any of them is strongly behind the others. I think that they're just very different.
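For concreteness, an external Lambda extension is just a separate process that talks HTTP to the Extensions API exposed at `AWS_LAMBDA_RUNTIME_API`: it registers once, then long-polls for `INVOKE` and `SHUTDOWN` events. A minimal sketch in Python — the endpoint paths come from the published `2020-01-01` Extensions API, while the extension name and helper functions are hypothetical:

```python
import json
import os
import urllib.request

# Hypothetical extension name, for illustration only.
EXTENSION_NAME = "demo-extension"

def register_url(runtime_api: str) -> str:
    """Build the registration endpoint of the Extensions API."""
    return f"http://{runtime_api}/2020-01-01/extension/register"

def next_event_url(runtime_api: str) -> str:
    """Build the long-poll endpoint an extension uses to receive events."""
    return f"http://{runtime_api}/2020-01-01/extension/event/next"

def register(runtime_api: str) -> str:
    """Register for INVOKE and SHUTDOWN events; returns the extension id.

    Only works inside a real Lambda execution environment, where
    AWS_LAMBDA_RUNTIME_API points at the local Extensions API endpoint.
    """
    req = urllib.request.Request(
        register_url(runtime_api),
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={"Lambda-Extension-Name": EXTENSION_NAME},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Lambda-Extension-Identifier"]
```

After registering, a real extension would loop on `next_event_url`, flushing telemetry on each event — which is roughly what observability agents layered on extensions do.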
Jb Aviat: [00:35:39] Yes, they tend to be very different. I hope we'll see some convergence on that, as we've seen with containers, for instance, or in the open Lambda initiatives, I don't know yet. So my next question, Erica, would be about traceability. That's a big question that is starting to have a lot of answers, and OpenTelemetry is becoming the de facto reference, with all vendors working on it. Do you see that working well in the world of serverless? What is the best you can do today as a serverless developer to have good traceability on your architecture?
Erica Windisch: [00:36:15] Yeah, so I think that right now the products available from companies like New Relic, as well as our competitors, are a really great place to start. OpenTracing is something that is, you know, kind of pending. Vendors such as ourselves are working on it, Amazon's working on it, our competitors are working on it, and it's coming along really quickly. I really don't want to make promises as to what will be provided when and where; I know I probably can't say too many details.
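While the vendor pieces settle, the common ground for tracing across functions is the W3C Trace Context format that OpenTelemetry propagates: a `traceparent` header carrying a 16-byte trace id and an 8-byte parent span id. A minimal sketch of building that header in Python — the helper name is hypothetical; the format follows the Trace Context spec:

```python
import os
from typing import Optional

def make_traceparent(trace_id: Optional[bytes] = None,
                     span_id: Optional[bytes] = None) -> str:
    """Build a W3C Trace Context `traceparent` header value (version 00).

    Format: version-traceid-parentid-flags, all lowercase hex.
    """
    trace_id = trace_id or os.urandom(16)  # 16-byte trace id
    span_id = span_id or os.urandom(8)     # 8-byte parent span id
    return f"00-{trace_id.hex()}-{span_id.hex()}-01"  # 01 = sampled flag
```

Passing this value along with each invocation — for example as an HTTP header or an SQS message attribute — is what lets a tracing backend stitch one request across several functions.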
Jb Aviat: [00:36:47] OK, OK. All right, Erica, I think we've covered several things. Is there anything that you'd like to add or share? Any question or subject you are dying to get into?
Erica Windisch: [00:37:00] I don't think so. I don't have anything to plug or anything, so.
Jb Aviat: [00:37:03] OK, well, I think that's good. I think we had a good overview; we went through many different topics. All right. Thank you very much, Erica.
Erica Windisch: [00:37:13] Thank you.
Jb Aviat: [00:37:14] I had a great time talking to you about everything serverless and everything security-related. Thank you so much for joining us today, and I wish you a great day.
Erica Windisch: [00:37:25] Thank you. You too.
[00:37:29] Thanks for listening to this episode of AppSec Builders. You can find all the resources discussed during this show on AppSecBuilders.com. Be sure to subscribe to our podcast to get updates on our upcoming episodes.