DevXPod

The inner feedback loop w/ Waldemar Hummer & Oleg Šelajev

Chris & Pauline Season 2 Episode 5


Waldemar Hummer:

I think in general the cloud has brought us this unbelievably scalable system that you can deploy any kind of workload to, with extremely high scalability and performance for production workloads, but at the same time it's also brought quite a bit of complexity

Oleg Šelajev:

Eventually one person starts using it, then everyone's using it, people see value, and the team increases the velocity of development. Breakages just naturally occur, people start breaking each other's code, and then people higher up in management start to notice as well, and they're like "how come they can deploy without breaking stuff and you cannot?"

Chris Weichel:

Welcome. You're listening to the DevX podcast, the show where product and engineering leaders share their thoughts and insights on developer experience. I'm Christian Weichel, joined by my co-host Pauline Narvas.

Pauline Narvas:

Welcome to the podcast, Waldemar and Oleg! This is going to be such an exciting episode; it's the first time we're going to do this, two guests from two different companies. So we've got Waldemar from LocalStack and Oleg from AtomicJar. This is gonna be so exciting, thank you so much, both of you, for joining us today. We're really excited to have you on board. How are you both?

Waldemar:

Doing fantastic. Thanks so much for having us.

Oleg:

Yeah. Pretty excited about this episode. Thank you.

Pauline Narvas:

Yeah, we're really gonna get into this. So, for those who've never heard of either of you, can you tell us a bit about yourself and what you do?

Waldemar:

My name's Waldemar, I'm the CTO of LocalStack. Originally from Vienna, Austria, I got my PhD in Computer Science from the University of Vienna back in 2010 to 2014 ish, and then I worked for different companies, the first one being Atlassian. So I joined Atlassian in 2016. That's also where the first lines of code of LocalStack were initially written. We had this use case back then: I was joining the data engineering team, and we had all our pipelines and stuff running in AWS, and we wanted an easier way to run our CI builds and also local development. From that point on, we managed to open source the project and it was growing over the years, more as a side project to be honest. I've been involved in different jobs in the meantime, and for the last one and a half years or so we've actually been building a company and a growing team around LocalStack, and we have quite a nice, growing international customer base. So I'm really looking forward to having this chat with you all today.

Pauline Narvas:

That's awesome. Can I quickly say I absolutely love Vienna? I traveled there as part of one of my Interrailing adventures a few years ago and it's one of my favorite places, actually. Oleg, how about you?

Oleg:

Yeah, I cannot boast about being in a beautiful place like Vienna, I'm joining you from Tartu, Estonia. I bet you haven't been to Tartu yet, Pauline.

Pauline Narvas:

No, I haven't. But I've heard about Estonia before and apparently it's nice. I've heard good things.

Oleg:

It's very nice. It's a very small country. We are very proud. We have excellent metrics, like the most unicorn startups per capita, which I think is a metric Estonia invented, that's why we're so good at it, but it doesn't matter. Estonia is great. I love Estonia. If you have a chance, come in summer, because the rest of the year is less cool. So I work on the developer relations team at AtomicJar, which is a small startup recently started by the maintainers of the TestContainers Java library, and we work on simplifying and enhancing integration testing for developers. There are TestContainers libraries in all popular programming languages. When I joined, there was already a big community of users and technologies that kind of work together with TestContainers, so that was a very interesting place to be, and before that I was at Oracle, believe it or not, working on GraalVM, which is a sort of JVM runtime thing with very cool implications in the Java world. In general, my background is mostly Java, but I dabble nowadays in Go and a little bit of Python, which I think is cool. So that's mostly me.

Chris:

Awesome. So happy to have both of you on the show. As Pauline said, it is indeed the first time we have two wonderful people in the same episode. So what brings both of you together? The reason the four of us are here is that both of your backgrounds, and both of the products that you represent here, directly touch on the relationship between developer experience and the infrastructure that's necessary to write the software that we do. Can you say a few words on what you see as the impact of infrastructure complexity on developer experience, and hence productivity?

Waldemar:

Yeah, that's a great question. So I think in general, the cloud has brought us this unbelievably scalable system that you can deploy any kind of workload to, with extremely high scalability and performance for production workloads. But at the same time, it's also brought quite a bit of complexity. If you look at what previously was a local operating system with all your processes running and all the interprocess communication, storage, and so on, that's now basically transferred to the cloud, specifically with things like serverless architectures, where you have all sorts of different, for example, Lambda functions in AWS that are reacting to certain events coming from queues and so on. So it's generally lots of moving parts, which give developers a lot of flexibility to build scalable applications. At the same time, it introduces quite a bit of complexity, especially when you think about these quick iterations that you wanna have, what we sometimes refer to as the inner dev cycle, where you actually wanna be able to quickly make changes locally, review them, debug them, see the impact, make another change. These things are becoming a bit harder in the cloud these days, because you basically have this notion that your code is always running remotely, and there are a few approaches, also from the cloud providers, to how this is being tackled. But the one that LocalStack is following, and we're putting all our eggs in that basket, is emulation. So we provide emulated local APIs, and it allows us to do quite a few amazing things, and I think we're actually pretty similar there to what TestContainers is doing, and maybe that's the segue to the TestContainers side.

Oleg:

Thank you. Yeah. So from the TestContainers point of view: TestContainers is a tool, right? The libraries integrate with Docker as the runtime, as an environment to run the services you need, on one end, and on the other end with your programming language or frameworks, both test and application frameworks, to simplify creating those ephemeral environments that you need either locally or for testing purposes. Because while we say that the inner loop is all about introducing local changes and being able to quickly verify and see whether what you're doing actually makes sense, or are you stepping on other people's toes in the team, are you breaking everything else while you're fixing your typo in a label somewhere, there is a need to have this reproducibility across different environments, between the team members, and one of the most important team members is your CI environment, right? You do want to have a way to run things locally, and it could be a cloud IDE or it could be a local IDE, or it could be your mainframe server that pretends to be a cloud IDE, but you want to run them locally and in a similar way run them in CI, because without this reproducibility you will get mismatched expectations, you will get test failures. You'll not know whether what you're doing is actually beneficial or is it gonna break everything. There needs to be this kind of compatibility between those, the same way as there needs to be compatibility between your CI and your actual production environment. If there are differences, how can you trust one? So there is definitely this synergy of messaging between what TestContainers simplifies and what LocalStack allows you to do.
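
To make the "ephemeral environment" idea concrete, here is a minimal sketch using the TestContainers Java library, assuming a local Docker runtime and the PostgreSQL module on the classpath (the image tag and class name are illustrative):

```java
import org.testcontainers.containers.PostgreSQLContainer;

class EphemeralDatabaseSketch {
    public static void main(String[] args) {
        // Spins up a throwaway PostgreSQL container; TestContainers talks to the
        // Docker daemon and removes the container again when the block ends.
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine")) {
            postgres.start();
            // Connection details for this one-off instance; the same code works
            // on a laptop, in a cloud IDE, or in CI, which is the reproducibility point.
            System.out.println(postgres.getJdbcUrl());
            System.out.println(postgres.getUsername() + " / " + postgres.getPassword());
        }
    }
}
```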

Chris:

So, what I heard was that the key challenge that infrastructure complexity poses for developers today is the reproducibility that you have across different environments, the place where you develop vs. your CI vs. production, and the complexity of the infrastructure itself, i.e. replicating the services and getting access to the services that you need to develop. How do those factors affect the inner loop? What challenges and what opportunities do they provide as we write our code?

Waldemar:

Yeah, that's a great point. So one of the classic examples that we sometimes use is: we focus a lot on these kinds of serverless workloads, for example Lambda functions, or also containerized applications, that you can run entirely locally with LocalStack, and one of the benefits that local execution brings to the table is that you can literally have your code in your IDE, and by the way, this can also be, as Oleg mentioned, a remote IDE like Gitpod, for example. You have your code there, and you can simply mount the code directly into an environment that really resembles or replicates what the real Lambda environment looks like. What this basically gives you is a containerized sandbox environment where your application code is executing and has access to all the APIs, all the services that it would usually talk to in a real cloud environment. In terms of the inner dev loop, that means no need to redeploy your changes, right? Maybe some of you have noticed this popular XKCD comic where people are just playing some fun games because "my code is compiling and I have to wait a couple of minutes, that's why we can play some games." These days it's almost like "my code is deploying to the cloud, so I can grab a coffee in the meantime and wait for it to deploy." So the bad news is there are fewer coffee breaks, because you actually get more productive and have less wait time. But on a serious note, I think it just really enables people to have less idle time and quicker dev cycles in terms of how they interact with their application and infrastructure.

Oleg:

Yeah, that's a great point. Maybe 10 or 12 years back in the Java ecosystem, for example, there were those big application servers, like the cloud nowadays, right? You could deploy different applications into the same server, it would expose some sort of admin console, and your development cycle would be "you change things, you package them, you throw it into the application server, it picks it up, it starts it." There was this product called JRebel which would remove all that. It was a Java agent, which is very cool, and it would rewrite how your JVM works and then map runtime classes into your workspace, so you just do the changes locally and then they'd be picked up by the server. That was excellent, and people loved it very much. And then for a few years we had a good middle ground where our applications became smaller and we didn't need application servers anymore. They would start fast, they would work locally. You might need a local instance of a database or maybe your message broker somewhere, but then we swung so far the other way that we came out on the other side, because now all those services are available in the cloud, but now you need to move the code to the cloud. So it's incredible what LocalStack is doing, bringing the cloud to your development environment, which is great.

Waldemar:

Exactly, yeah. I think that's a great way to think about it, that local has different sorts of notions, so it doesn't have to be on your local machine, but could also be in a CI environment or in a remote IDE, for example. And one of the other nice things that, almost as a side effect, this emulation gives us is the ability to just be more playful in terms of how the services and APIs are being provided. Maybe I can just briefly touch upon this notion that there are different shades of gray when we talk about emulation of services, right? You have this basic kind of mocking where you maybe just provide the very simple CRUD operations, create, read, update, delete, on resources. Then you might have some emulation which really replicates pretty much the majority of the service, all the way to very high fidelity emulation where you spin up entire EKS clusters or Elasticsearch clusters and so on, and I think along these shades of gray there might be different use cases for different users. And I think it's also interesting for us to explore more and more what the typical use cases are that people have: what level of fidelity do you need from the emulation to become productive with your particular use case? Just one example: if you work on, let's say, a Kubernetes control plane, you're probably just interested in having an emulated way that pods or clusters are being provisioned, but if you're actually interested in the application side, then you wanna have the full thing and be really able to deploy pods into it and so on. So there might be some different shades of gray. Not sure if that's also something that you experience, Oleg and Chris, in your day to day?

Chris:

Yeah, as you were speaking, I was immediately reminded of Telepresence, which is a tool that lets you replace deployments and pods within a Kubernetes cluster with a process that runs "locally", and it makes that process believe that it runs within a container. It will give it the same environment variables, including secrets. It will provide files, ConfigMaps, et cetera, that are mounted using SSHFS, but it runs "locally", and so you can debug this and you get a faster turnaround than needing to turn this into an OCI image, deploying it again, waiting for services to cycle. The other thing is, you mentioned "locally", which is very interesting. It happens more often than not nowadays that I say locally and obviously mean within a Gitpod workspace, which causes a good amount of confusion in a lot of conversations.

Oleg:

I think there is a term that you want to use, and it's a new one. Personally, I don't like the word because it rubs me the wrong way, but somebody said "remocal", which is tools that are remote but feel local. I'm on the edge whether I'm using that word myself. It's okay as a word, but it's exactly the right mental image, right? It's like with Gitpod: when I'm using Gitpod, I'm using it with the JetBrains integration, because I cannot imagine living without my proper Java IDE, and it works by running the backend in the Gitpod environment, but my IDE runs locally and just connects there, and there's the integration that makes it like a rich client on my machine while the bulk of the computation is happening in the cloud environment, so it's a very remocal experience, and-

Chris:

I might just steal that word.

Pauline Narvas:

I dunno how I feel about it.

Chris:

Yeah.

Pauline Narvas:

So it's an interesting word.

Oleg:

It does sound a bit like "moist", right? It's just a weird word.

Chris:

My initial connotation was that it has a phonetic similarity to "muggle", as in Harry Potter, and I'm not sure I wanna be associated with that.

Pauline Narvas:

That's exactly what it's like.

Oleg:

But eventually I think most of the tools for developers, or at least the ones that we are gonna use daily, will start to get more and more features of this remocal quality, right? Because you don't get a better developer experience than with local tools: it's fast, it's responsive, it can work without your connection. Whether that is something that actually happens or not, people very often are like, "oh my God, how can it work without internet?" And you're like, have you been without internet in the last year? No.

Chris:

Right? Never. Unless you've been on a German train.

Oleg:

I mean, I'm sure there is a decent hotspot; the world is gonna get covered in 5G and then whatever G is coming next, so it's not a problem. And then it does give you the scalability. It does give you resources. It does give you pricing models that are more predictable and easier for teams to handle, because do you want to buy an expensive MacBook just to learn that next month there is an Apple event and now you're suddenly not running the best hardware, right? Probably not.

Chris:

That hybrid approach, the remocal that we're seeing on the development side. Waldemar, you also hinted at that when it comes to bringing this infrastructure complexity down and closer to the developer, that there is a spectrum of choices, a spectrum of emulation. Could you speak about how one would navigate that spectrum? How do I choose how much fidelity I need, and how much permeability is there? Like, how much can I walk along this spectrum, or do I need to make fixed choices?

Waldemar:

Yeah, that's a great point. One thing that we are seeing is that as soon as teams start adopting things like, for example, LocalStack or also TestContainers or other tools for local development, it also influences the way the repositories are structured and the processes are structured, making things a bit more geared towards architectures like microservices, where things are a bit more like units of deployment, units that you can actually test easily. In the ideal case, you would just have the ability to run an entire service, let's say a Lambda function and a DynamoDB table, and if you have the components locally, the entire thing. But then once you get to the integration points it's getting interesting, right? Those are some of the things that we are exploring, also with our customers, where part of your stack may be running in the real cloud, and we have different mechanisms to deal with this. For example proxying, where you basically have some kind of local representation that allows you to speak, in a proxied way, with the real resources in the cloud, so you don't have the full replica locally but access the ones that you're actually interested in, for example a DynamoDB table or an S3 bucket, where you just basically pull down the files that you're interested in. The other aspect, if you want to go in the reverse direction, is that we are now working on tooling where you can point at an existing AWS environment, an account, and then actually mirror or copy most of the resources into your local environment, so you then have a representative copy to work with locally. So there are a couple of things that we are now exploring, and I think it's really interesting because we are, in a way, also redefining the way people think about cloud apps; these boundaries between local, remote, and hybrid are blurring a bit. And one more aspect I would like to mention here is state management. I guess we'll talk about this more in the episode, but having a representation of the state within your container allows us to do quite sophisticated things: persist the entire state, share it with team members, bring up the exact same application on a different machine, and so on. So, a bit of a longer answer to your question: I think it really depends on the use case, but it's a very exciting space to explore, actually, these boundaries and how they're blurring.

Oleg:

To generalize the sentiment: I don't think there is a right answer. There is a correct answer, though, and the correct answer is that you want your tools to be flexible enough, and you want to give as much power to developers as you can. So whether this is copying your real environments, or whether this is selecting a number of things, or configuration through an API, or automatically parsing, I dunno, Kubernetes YAML files to recreate a copy, right? The ecosystem is so diverse. Everyone works on similar applications; well, if you squint your eyes enough, it's a service that makes a couple of HTTP queries, sends files here and there, uses a few endpoints in your cloud environment, and then parses data and converts JSONs to XMLs, right? And at the end, it's all there to serve cat videos or your bank account statements. But at the same time, everyone approaches that very differently, and not just in their current technological choices, but in their journey: what the teams have, how the teams were doing things before. That influences very much how teams are doing things currently, because they have seen certain problems and then they appreciate certain parts of the solution. So the real answer is you want to give full flexibility to developers and make it easy to do the right thing that will work for 80% of the people.

Chris:

I love that, we've come full circle. In the first episode of this season, we had Anton Drukh, the former VP of Engineering at Snyk, and he defined developer experience as making it easy to do the right thing, a sentence that I've heard myself say many times since then. On the point you just made, that the past behavior of teams greatly influences how teams will develop and the choices they make: say I'm part of a developer experience team at a medium to large size company, and I see that it's hard to do the right thing. It's hard to write good tests, it's hard to work against realistic infrastructure. How have you seen tools like LocalStack or TestContainers being introduced? Who introduces them? What's the journey that people go through?

Waldemar:

Yeah, that's a great point. And I think you touched upon a very important aspect, which is reproducible environments, also to get team members up to speed really quickly. I think it's imperative in today's collaboration settings that you have a standardized set of tools that people can start using to get productive: obviously your IDE, obviously your test environment, but then also the infrastructure that you use to run your application and your tests. What we see with LocalStack is that the largest set of our users is definitely using it for these local development cycles, basically running your application and using it for development and testing. But there's also a learning component to it: just using it as a sandbox environment that allows you, without having to spend any AWS credits, to get to grips with how these APIs really work. How can I make an S3 request? Very easy getting-started things for working with the cloud provider. And I think the way that it simplifies infrastructure management is one of the greatest points that we're seeing, because it's not only the cost aspect and the time aspect; increasingly, companies and organizations are investing a lot of money and resources into providing developer experience. These DevEx teams whose sole purpose is to make a seamless developer experience, and oftentimes this would also involve things like cleaning up accounts that have some leftover resources, creating all sorts of DevOps automations and things. And if you have an environment that you can basically just, we call it a "throwaway environment", spin up and tear down, then you don't have to worry about anything anymore. That can be an accelerator for local development and also team collaboration. Oleg, do you have similar experiences in that area?

Oleg:

Yeah, a little bit, but TestContainers is a library, right? Or, well, it's a collection of libraries, but for any particular language it's a library, and we see the usage of it spread through organizations very naturally. Somebody knows that they want to use TestContainers, or they have problems, or they're like, "This is the new code base. You don't have any tests. How can you even work on this?" Look, once you experience the good way of doing things, it's very hard to then go back to square wheels and be like, oh, how do I do things here? So people naturally become champions of TestContainers, and the good thing is that since it's a library, once it's in the project, everyone starts using it. So there is a certain virality component to TestContainers, and we try very hard to do a good job of being a very good citizen for both the open source community and the particular integration project, so we have a rich ecosystem of modules, like a module for LocalStack or a module for Kafka or a module for MongoDB. Once you start using it within the project, it's very easy to then show how writing integration tests, the tests that you can rely on for evolving your application, becomes cheaper in terms of time and effort. It is also confined to the code, so it doesn't rub people the wrong way, right? There are no manual steps that you need to do that some people might not like. It sits in a very convenient integration spot within the ecosystem, and then eventually one person starts using it, everyone's using it, people see value, and the team increases the velocity of development. Breakages just naturally occur, people start breaking each other's code, and then people higher up in management start to notice as well, and they're like, "how come they can deploy without breaking stuff, and you cannot?" So the knowledge spreads and then everyone becomes a happy user of TestContainers, and we all live in a better world with more reliable software. Just kidding. Well, we could, but we are not there yet. So yeah, developer experience is paramount; it's essential, right? I feel like developers have so much on their plate, and very often teams have to struggle through things, and if you start your day opening Jira, right? Waldemar, no offense to Atlassian or anything, or any other slow task tracker, you just feel down. And if you start your day from your IDE, running things, then you, as a developer, feel empowered. So yeah, I think TestContainers is in a very lucky spot there.
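
As a small illustration of one of those modules (Kafka here), a sketch assuming the TestContainers Kafka module and a Docker runtime are available; the image tag is only an example:

```java
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

class KafkaModuleSketch {
    public static void main(String[] args) {
        // One line of setup instead of a hand-maintained local Kafka installation.
        try (KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"))) {
            kafka.start();
            // Hand this to the producer/consumer configuration in your integration test.
            System.out.println("bootstrap servers: " + kafka.getBootstrapServers());
        }
    }
}
```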

Chris:

Yeah. Listening to this from the perspective of a developer experience team: if I were to bring this to the teams that I serve, I could certainly see the benefit it would provide for them. For LocalStack specifically, the first question I would expect to get is "how do you know this is compatible?" With TestContainers, you're probably gonna use, I don't know, Postgres, the Docker image that is out there, so it's the real deal, so to speak. With LocalStack, it's exactly the point that it's not the real thing in all its complexity, so how do you guarantee compatibility?

Waldemar:

Yeah, that's a great point. It's one of the things that keeps us busy at LocalStack, and there are different ways we approach this today, with much more systematic approaches than we had in the past. We've invested quite a bit into developing tooling for, first of all, what we call parity testing. We have a fairly comprehensive integration test suite where essentially we run our tests first against real AWS, we even have a way to record all responses, and then we basically run the tests against LocalStack and compare the results to make sure that there are no discrepancies. In our integration tests, we really tend to have a very strong focus on this parity aspect. The second thing that we're increasingly leveraging is other open source tools. For example, Terraform has a very comprehensive test suite for the AWS provider: it's something like 600,000 lines of Go code, and usually this will run against AWS, and we simply point it at LocalStack and run these tests to see how our APIs are performing. The nice thing about these Terraform tests is that they're probably the most accurate semantic representation of AWS APIs out there. We have the API specs, which is one thing, but the other thing is how sequences of API calls behave, and Terraform happens to be very specific about making sure that all the schemas are correct, all the requests and responses have the proper fields and so on, so this gives us a lot of insight into improving the parity. So generally speaking, there's a lot of push in that direction. We're also increasingly tracking metrics, and we're actually publishing them in our documentation so that people are really aware of what's available and what's not. That's certainly one of the areas where we invest a lot, this whole parity aspect of our APIs.

Oleg:

This is brilliant, Waldemar. I hadn't thought about that, but if I thought for many hours in a concrete mindset about how to do this, maybe I would come up with that. It is an absolutely brilliant idea. This is how people test their runtime implementations for more conventional runtimes like the JVM, and this is a little bit how we test our Docker environments for TestContainers. TestContainers works with any compatible Docker environment, and we just run our test suite. It is a very comprehensive user of the Docker API, so we can run it against Docker Desktop, or against the Docker that is exposed in a minikube local Kubernetes environment, which exposes a Docker API, and we can run our tests against that Docker implementation and be pretty sure that it is very compatible, because we cover a lot of corner cases that are not usually covered by the normal "docker pull, docker run, maybe expose a port or mount something". I hadn't thought about that in terms of LocalStack compatibility testing, but it is an absolutely brilliant idea. Love it.

Waldemar:

Yeah, it's great. I definitely recommend our blog at LocalStack.cloud/blog. We recently published an article about this parity testing; our team has put out a really nice blog post that I can recommend to all the listeners to take a look at, explaining exactly how we do things like code generation from the API specifications, how we do this parity testing, snapshot testing and so on.

Chris:

Super impressive. So if I recommend this to my team, I can be very certain that what we are programming against actually behaves like the real thing once it gets deployed to AWS. That's awesome. Now, cloud-native and AWS is a very common set of APIs that folks write code against; another one is the elephant in the room that is Kubernetes. A lot of applications today run on Kubernetes, whether it makes sense or not. How would I be using TestContainers in a Kubernetes world?

Oleg:

There are two questions there, maybe three actually, but there are two main questions. One is that a lot of services run in Kubernetes. For example, many teams have CI environments that run in Kubernetes, and very often that doesn't provide Docker, because there are different sets of APIs, and, well, the Docker API is actually very convenient specifically for integration testing, because it allows you to control exactly the whole lifecycle, the cleanup, and how you expose things. So it's very convenient for us to rely on the Docker runtime. But if your CI environment doesn't run with Docker exposed, then you are in trouble; it's a little bit awkward. So currently there are different solutions, and CI environments also try to provide some sort of Docker daemon somewhere; it could be a different node, or it could be a different VM altogether. TestContainers works well with remote Docker daemons as well. Or, well, to shamelessly plug the product, AtomicJar tries to resolve this part of the problem by developing Testcontainers Cloud, which is sort of a backend where your containers can run without actually requiring local Docker. It's in a private beta, so it's not very accessible to everyone yet, but you can drop me a line or Google it and sign up for the waitlist if you really experience this problem, and you kinda get the best of both worlds: your CI and the containers running elsewhere in the cloud. So that's one part. The other part is when you develop against Kubernetes; you develop, let's say, a Kubernetes operator or some Kubernetes software. You need a Kubernetes cluster to test things, and you want it to be an actual Kubernetes implementation. For TestContainers, at least TestContainers Java has the K3s module, so you get an abstraction in your code where you can say "I would like a Kubernetes cluster, and then I would like my local process to act as an operator," and then you can write normal integration tests and it'll integrate with this Kubernetes cluster the same way as if it were running somewhere else, and you can apply your YAML files and all those lovely things. So in that regard, it's no different from having a service dependency on Kafka or LocalStack.
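
For readers curious what that K3s module looks like in code, here is a minimal sketch assuming the TestContainers k3s module and a Docker runtime; the image tag is illustrative:

```java
import org.testcontainers.k3s.K3sContainer;
import org.testcontainers.utility.DockerImageName;

class KubernetesInDockerSketch {
    public static void main(String[] args) {
        // Starts a single-node K3s (lightweight Kubernetes) cluster inside a Docker container.
        try (K3sContainer k3s = new K3sContainer(DockerImageName.parse("rancher/k3s:v1.21.3-k3s1"))) {
            k3s.start();
            // The module hands back a kubeconfig for the ephemeral cluster; point any
            // Kubernetes client, or the operator under test, at this configuration.
            String kubeConfigYaml = k3s.getKubeConfigYaml();
            System.out.println(kubeConfigYaml);
        }
    }
}
```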

Chris:

One of the things that Kubernetes does, and I'm sure I'm gonna be hunted down for this, but the more I think about it, there is a lot of similarity between what WebSphere was and what Kubernetes is today, in terms of the services it offers and the things it does. One of them is service discovery. So say I have this application running that discovers its services and pulls its authentication through Kubernetes, and I wanted to write integration tests against that, but not have the infrastructure actually running in a cluster; would TestContainers help me there?

Oleg:

Yeah, absolutely. You just say "give me a Kubernetes cluster", like a new K3s container, and you will get the K3s container running in a Docker container, but with a Kubernetes cluster within, and then you can do whatever you want: you can put your service discovery or your secrets in there, or your role-based authentication, and then you can deploy your application in there and interact with it like a normal Kubernetes cluster. Which is interesting, because you get to the other side of the problem: having reliable and trustworthy integration tests with real environments means you need to spin up those environments. So you need to wait a few seconds before it all finds itself and becomes available to you, and it gets better all the time, but you might want to avoid doing that for every individual unit test, right? So the pattern for that, and I'm pretty sure it's the same with teams who are using LocalStack with TestContainers, is: unless your test puts everything in this edge case where everything gets broken and you want to test "what will happen if I break my cloud?", your tests normally work with a single instance of those complex services you depend on. So you'll have one LocalStack and all your tests run against that, or you'll have one Kubernetes cluster managed by TestContainers and all the tests run through that, which is great, because you have the flexibility to do that.

Chris:

I'm just now thinking about different scenarios in which TestContainers and LocalStack are used together or separately. What's the most out-of-the-ordinary, unexpected kind of application that you've seen of either of these pieces of technology?

Waldemar:

That's an interesting question. The canonical integration for TestContainers and LocalStack is that, maybe from your Java program or somewhere, you say "yeah, I would like to have a LocalStack container," and you basically just spin that up with TestContainers. All the configuration is taken care of for you, because LocalStack is also a highly configurable system, so there are all sorts of mount points, port configurations, other environment variables and so on, and it's good to have a kind of container abstraction in your programming language that takes care of this. So that's the vanilla integration. Some of the more exotic cases that we see are, for example, some users trying to mimic production workloads with LocalStack. It's something that we usually don't recommend doing, simply because it's not really built for production workloads, but in some cases, if there's an offline environment, totally shielded off from the internet, then some development teams just prefer using the tooling that comes with the AWS ecosystem and leveraging that against something like LocalStack. Other than that, I've seen a few interesting cases: for example, we've been running LocalStack on a Raspberry Pi, just as a hackathon within the team, which was a fun experience, but I don't think it's a very common use case. You can basically spin up your container wherever you want. That's also the nice thing about this abstraction, and coming back to Kubernetes, what they really did a great job at is providing this platform, these abstraction levels that allow you to build a lot of additional functionality in layers on top of it, and the community aspect of it is amazing. If you look at the CNCF landscape, for example, all the projects that exist there. I think the cloud providers currently are still obviously providing their proprietary APIs, but they're still lagging a bit behind this open ecosystem of tools you can build upon, and I think it's great to have something like, for example, LocalStack or TestContainers, which gives you a bit more flexibility in terms of how you integrate with different workloads, be it on a Raspberry Pi, be it on your local machine, in the CI environment and so on; just this portability aspect of bringing your environment with you to wherever you want to use it.
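
As a sketch of that "vanilla integration", here is roughly what the TestContainers LocalStack module looks like in Java; the image tag and the choice of S3 are illustrative:

```java
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.localstack.LocalStackContainer.Service;
import org.testcontainers.utility.DockerImageName;

class LocalStackModuleSketch {
    public static void main(String[] args) {
        // The module wires up ports, environment variables and configuration for you.
        try (LocalStackContainer localstack =
                     new LocalStackContainer(DockerImageName.parse("localstack/localstack:1.4"))
                             .withServices(Service.S3)) {
            localstack.start();
            // Everything an AWS SDK client needs to talk to the emulated service:
            System.out.println("endpoint: " + localstack.getEndpointOverride(Service.S3));
            System.out.println("region:   " + localstack.getRegion());
            System.out.println("keys:     " + localstack.getAccessKey() + " / " + localstack.getSecretKey());
        }
    }
}
```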

Oleg:

Yeah. I don't think I've seen a lot of unusual setups or interesting cases with the integration between TestContainers and LocalStack, but one of the cooler things that at least the frameworks in the Java world are doing is built on the TestContainers API. There is nothing specifically about tests in it, right? It's just a programmatic API to manage Docker containers, and it's exposed so you can run it however you want. It integrates with your application frameworks, or you can write the same wrapper around it yourself and do whatever you want; the sky is the limit. So what the frameworks are doing is repurposing TestContainers for local development environment enhancement. You clone your project and imagine it requires an entire technical stack to run: you need the database, Kafka, maybe your cloud. I haven't seen this with LocalStack specifically, but I think it would be super cool to explore whether that would work and what needs to be tweaked to make it work. So you pull your project, you have your code, but you don't have your setup. You run your project and the framework, or this wrapper, this piece of code that you wrote, sees that you want to access a database, it sees the database driver, or that you want to use AWS endpoints or something, and there is nothing configured, right? You don't actually know where the database is. So one thing that you can do is drop everything and say, oh, error, no database, I cannot run. But what the frameworks in the Java world are doing nowadays is they're like, huh, I need a database, but I don't have one configured. I'm gonna provide one with TestContainers, because I know how to spin up Postgres: I see your driver, you requested a Postgres JDBC driver, here's your Postgres, and I know that at the end of the lifecycle of the application I can clean that up nicely. And that is so, so cool, because you can just get your project, you can clone it and you run it. You don't need to have any setup, you will never forget to run your Docker, you don't need any complex Docker Compose YAML files to describe everything. The code does that for you, so it works beautifully for developer experience, especially for the new people on the team. So I think it would be super cool to check whether that thing can work with LocalStack. Imagine you are developing something and you're like, "oh, here's my Lambda," and you can just run it, and it creates the LocalStack instance for you, it configures it, it drops your code in and says "here's your role, here's your Lambda," and there is an S3 bucket somewhere and everything. It's all available for you out of the box without any configuration. That would be super cool, I think.
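
A concrete, minimal example of that "nothing configured, the library provides the database" behavior is TestContainers' special JDBC URL scheme. Assuming the TestContainers PostgreSQL module is on the classpath, the tc: prefix makes the library start, and later clean up, a throwaway PostgreSQL container on first connection (the database name is illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

class TcJdbcUrlSketch {
    public static void main(String[] args) throws SQLException {
        // No host, no port, nothing installed locally: the "tc:" segment tells the
        // TestContainers JDBC driver to spin up postgres:15 in Docker on demand.
        String url = "jdbc:tc:postgresql:15:///devdb";
        try (Connection conn = DriverManager.getConnection(url);
             ResultSet rs = conn.createStatement().executeQuery("SELECT version()")) {
            rs.next();
            System.out.println(rs.getString(1)); // the container is cleaned up afterwards
        }
    }
}
```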

Waldemar:

Yeah. That's a brilliant idea, and I think the common interface could be, for example, the AWS SDK. So assume you have your Java program, and you make some SDK call, and then literally, just by figuring out "I'm executing in a local environment; ah, okay, so therefore I'm just gonna spin up the container now," it allows me to talk to the local environment.

Oleg:

If you have your SDK, but you don't have, say, credentials specified for the real thing, or endpoints for the real thing, or your, I dunno, custom hosted LocalStack instance somewhere, you can just be like, "oh, here's one with TestContainers." I think that'd be a great idea to explore.
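
As a sketch of the "SDK as the common interface" idea: with the AWS SDK for Java v2, the only things a client needs in order to talk to a local emulator instead of the real cloud are an endpoint override and dummy credentials. The localhost:4566 edge port is LocalStack's default; the concrete values here are illustrative and could just as well come from a TestContainers-managed instance:

```java
import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

class LocalEndpointSketch {
    public static void main(String[] args) {
        // Identical application code as against real AWS; only the endpoint and
        // credentials differ, so the switch could be made automatically whenever
        // no real credentials are configured.
        try (S3Client s3 = S3Client.builder()
                .endpointOverride(URI.create("http://localhost:4566"))
                .region(Region.US_EAST_1)
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test", "test")))
                .build()) {
            s3.listBuckets().buckets().forEach(b -> System.out.println(b.name()));
        }
    }
}
```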

Waldemar:

Absolutely. That's fantastic. The other thing I find really interesting is this whole notion of state management; I briefly touched upon it before. By default, what happens in LocalStack is that when you start up the container you create a bunch of resources, and then you tear everything down again; it's a throwaway environment, basically. Now, in some cases you may want to restore the state, to have persistent state with your instance, so when you restart it, it brings up the same resources again. We approach this in two different ways: one is just persistence, which basically means all your API calls are persisted and continuously stored to disk, and the second piece is what we call Cloud Pods, which is actually a new concept that we've recently published as part of our V1 release. It basically allows you to take a snapshot of the running instance, so you take a memory snapshot and you can store it, extract it out from the instance, and interact with it almost like with Git objects: you can push them to a server, somebody else can pull them down, so you're really managing the state of your entire cloud application as a shareable unit. You can push the state, pull it down again, and I'd be curious to hear how this works both in a TestContainers context, but also, I guess, in a Gitpod context: managing the state and how we can push forward this idea of having both the APIs, the functionality, plus the state as well.

Chris:

So for Gitpod, it's a home run. It fits right into the story, like throwaway infrastructure and ephemeral dev environments, two names, same idea. Having your test infrastructure, your state, as code, rather than some fixture that lives in some external system with a big line of red tape around it and a sign that says "don't touch it", and inevitably someone will, and then it goes down and no one knows how to restore it back into a state where your tests will pass again; that's a big no-go. Being able to spin up that environment from code, as part of your test, as part of your Gitpod workspace, makes a ton of sense to me. I love that idea.

Waldemar:

Yeah, that's awesome. That's also what sets us apart from the cloud providers, because there everything is behind these heavyweight APIs and it's all like a black box; you can't really access what's happening behind the Lambda service or other services. But having this sort of representation as an in-memory artifact on your local machine, you can actually snapshot it, right? You can store the state, you can restore it afterwards. So it gives you a lot of flexibility to rethink the way state is actually managed for applications and managed services in general.

Oleg:

I think I agree with the general sentiment that it is really interesting. It's definitely something that TestContainers users can benefit from, because if you can externalize and serialize state, that will both speed up your tests and also provide you with a mechanism to pinpoint issues exactly, right? You can guard against regressions for a particular state by saving it and literally saying, "oh, I want my application to work with that state at all times." We will never break in the same way again, which is a wonderful feature of those higher level tests, right? The reason TestContainers is very often what allows people to get more confidence in their tests is that you don't run pieces of code in isolation, right? You spin up the application and dependencies that are as true to the real things as possible, and then you verify functional requirements. Like, "what does the application do when this is my input? What does my application do when this is my input and my Kafka is not reachable through the network, or when it is reachable, or with this state? Do my database migrations actually work with the current state of the data?" Those are the actual real questions. Very few projects break because people reverse strings in the wrong way or make algorithmic errors. It's all about "oh, I had this corner case in my data, like somebody was missing the surname, or I imagined that there are 24 time zones but now it's a crazy complex domain," and that's where the issues are coming from. So externalizing and saving the state is something that can bring a lot of confidence in development.

Waldemar:

That's a great example that you mentioned there. I think especially if you think about these event-driven applications, right? A lot of times your application state might depend on, let's say, messages that are in a queue, which then get dispatched to certain services, and maybe even the ordering of the messages matters for the particular logic that's being executed. One of the brilliant things about managing the state is: now assume you have a CI build, right? You have a CI pipeline, you're running your tests with, let's say, TestContainers and LocalStack, and now you have a red build and you're trying to figure out, "okay, what's going wrong? Some test is failing. How can I replicate it?" The beauty of having the state management is that you can actually take a snapshot of what you had there, pull it down to your local machine, replicate it, and then debug into the different resources that you had there, your queue, your Lambda function and so on. I think some CI providers are providing the opposite model, where you can SSH into an instance; for example, GitHub Actions allows you to SSH into a running container, into a running CI build, where you can then do the debugging, which is also nice and cool. But if you can reverse that and really pull the state down to your local machine, that's a lot more powerful, even for debugging purposes.

Chris:

That reminds me of a concept that I haven't really seen much of in terms of availability: time-traveling debuggers, which basically let you step back through time as execution happened. That would be useful.

Pauline Narvas:

Amazing. Well, that was a lot to take in. To be honest, I was listening actively like, "wow, there's so much information here." But we've actually reached the end of the podcast now, and what we usually do to finish off is ask our guests about one thing they'd like to shout out that they've learned about recently, or just want to share. This could be a learning or someone that's impacted you; it could be tech or non-tech related. We'll go around the room. You know what, let's start with you, Chris.

Chris:

So my shoutout this week is actually another podcast, and it's 'The Art of Accomplishment'. If you're looking into self-discovery, if you're looking into becoming more aware of how you act around others, if you want to become more empathetic and have more wonder about the world, learn more than you know, this is a good place to start. It helped me a lot to better understand how I react and how I act, how I show up. So, 'The Art of Accomplishment'.

Pauline Narvas:

I'm gonna cheat a little bit and also say +1 to Chris's recommendation there. I actually specifically wanna call out an episode from 'The Art of Accomplishment' called Embracing Intensity, from the Emotion series (series two), and that episode was just one of those things that I'm still processing, but it blew my mind, and since then I've been sharing it with everyone I know, 'cause I think it's really important that everyone takes a step back and focuses on the emotional part of their day to day work and just remembers that we're all human, especially in tech. So yeah, highly recommend that episode. Oleg, let's go with you. What's your recommendation?

Oleg:

So, surprisingly, I have a conference presentation later this week, and it's a little bit unusual for me because it's a Python conference, Python Estonia, a local conference. I'm very happy to support Estonian entrepreneurship and the IT scene, but what I learned this week while preparing the presentation is that Python is very different from Java. For Python developers, the general problems we see are all the same, right? We need an IDE that works; we need tests that we can trust; we need an application framework; we need access to data storage and cloud things. But there are still things that different languages implement differently, and if you approach it with an open mind, instead of being like, "oh my God, how can this language be so silly, why are they doing it this way?", you can expand your horizons quite a bit. Back in university, of course, I had my share of, I don't know, Haskell, and I looked at Prolog, and I was horrible at those things. You had to think about them differently, and I couldn't. I was young and not very sophisticated. But now I've got a new appreciation for looking at problems from a different community and ecosystem point of view. And I would totally recommend to everyone: pick something that is equally popular but comes with a different paradigm. If you are a Java developer, look at, I dunno, Go, look at maybe Rust or maybe Python, right? You don't have to go all the way into Haskell, or something academic, or something super hip and popular like Julia, or something domain-specific for machine learning; just something popular, to see how people do things, and try to do simple things yourself. That will expand your horizons, and then you will come back to your preferred stack and either you will see ways to improve things, or you will have a greater appreciation for what you have, which is a win-win in my book.

Pauline Narvas:

Oh, thank you so much for that, Oleg. There's loads to take from that recommendation! I will definitely add a summary in the show notes as well. And last but not least, Waldemar, can you tell us your recommendation?

Waldemar:

Yeah, so I guess my shoutout definitely goes out to our community. I really wanna express my kudos and gratitude to all the people who are creating pull requests, issue reports, everything; we're very much open source driven, with a large follower base on GitHub. I would also like to point everybody to our V1 release, version 1.0 of LocalStack, which we released a couple of weeks ago. It's a big achievement, the culmination of years of work, and especially in the last year and a half the team has been working incredibly hard to get this over the line. And yeah, also kudos and a shoutout to you, Pauline and Chris, for organizing this. Thanks so much, and it's been great having this chat with you today.

Chris:

Congratulations on your 1.0, that is a milestone indeed! And thanks so much for being on this show. It's been wonderful.

Pauline Narvas:

Thank you. Before we go, could you both tell us where people can find you and how they can get started with LocalStack and AtomicJar?

Waldemar:

Absolutely. So for us the best starting point is LocalStack.cloud; that's our domain where you find all the links to the documentation, the different product tiers that we offer from a product perspective, and then obviously our GitHub repository, which is also linked from the website and where you'll find ways to contribute. If you're interested in contributing, then please get in touch with us via our Slack channel; we have a very active community on Slack. Please just reach out anytime and we can see where you can contribute to the project. So that's the best way to start.

Oleg:

Nice. For AtomicJar and TestContainers, we have a bunch of websites where you can find information: if you want the AtomicJar-specific one, it's AtomicJar.com, but for TestContainers I think TestContainers.org is the place to learn about the TestContainers Java implementation, and then there are other websites for the implementations in other languages. I think the easiest for those would be either to Google 'TestContainers' and your language, or to go through GitHub, github.com/testcontainers, where you will see the repositories for TestContainers Node, TestContainers Go, TestContainers .NET, and those link to the corresponding documentation. We will get our ducks in a row and dot our i's eventually and make it all super presentable and nice, but currently I think going through GitHub is maybe the easiest route, especially if you come from a non-Java background. It's not because those implementations are somehow subpar or anything like that, it's just that our background personally was Java first, and we didn't get to sort it all out yet.

Pauline Narvas:

Amazing. Thank you so much both again, for being part of this episode. I'm so excited to share this with the world. Thank you again, and we hope to collaborate with you again in the future.

Oleg:

Thank you. Thanks so much for having us.

Waldemar:

Thank you.

Pauline Narvas:

Thank you for listening to this episode of DevX pod. Want to continue the conversation about developer experience? Head over to our community Discord at gitpod.io/chat. We have a dedicated channel there for us to talk all about DevEx.

Chris:

To make sure you don't miss the next episode, follow us on your favorite podcast platform or subscribe to our newsletter, DevEx Digest. You can also find out more about Gitpod at Gitpod.io. Start a new workspace and tell us about your developer experience. See you in the next episode.
