A while ago I came across an AWS lambda whose code had previously existed as an endpoint within a microservice running in ECS (Docker). I needed to dig into the code to understand why an external cache was behaving strangely - but I can save that for another post.
The focus of this post is that what may have made sense in one environment doesn't necessarily hold true for AWS Lambdas.
Lambdas are intended to be short-lived and are charged at a fine level of time granularity, so the most cost-efficient way to run is to start quickly, perform the desired action and then stop.
Some code that I came across in this particular lambda involved setting up a large in-memory cache capable of holding millions of items. This would take a little time to set up, and based on my knowledge of the data involved it would not achieve a good hit rate for a typical batch of data being processed.
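To make the issue concrete, here is a minimal sketch of the shape of that code - the class name, record type and sizes are illustrative, not the actual implementation. Everything is loaded before the first invocation can be served, so every cold start pays the full load cost even though a single batch only touches a small fraction of the entries.

```java
import java.util.HashMap;
import java.util.Map;

public class BatchHandler {

    // Illustrative value type; the real code cached domain records.
    record CachedRecord(String key, String payload) {}

    private final Map<String, CachedRecord> localCache = new HashMap<>();

    public BatchHandler() {
        // Stand-in for a loader that pulled millions of rows into memory
        // at initialisation time, before any batch could be processed.
        for (int i = 0; i < 3_000_000; i++) {
            String key = "record-" + i;
            localCache.put(key, new CachedRecord(key, "payload-" + i));
        }
    }

    public String lookup(String key) {
        CachedRecord hit = localCache.get(key);
        return hit != null ? hit.payload() : null;
    }
}
```

In a long-lived ECS service that load cost is paid once and amortised over many requests; in a lambda it is paid on every cold start, and the memory is held for the lifetime of each execution environment.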
Another aspect of the code that had been carried over was the use of the external cache. At initialisation time a cache client was set up to enable checking for the existence of records in Redis. The client was only being closed by a shutdown hook of the Java application - which AWS Lambdas never run. This resulted in client connections being kept around even after the lambda had completed, eventually causing the underlying Docker container to run out of file handles and to be destroyed mid-processing.
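The shape of that problem looks roughly like the sketch below, assuming a Jedis-style Redis client - the host, port and key prefix are placeholders rather than the real configuration.

```java
import redis.clients.jedis.Jedis;

public class CacheCheckHandler {

    // Created at initialisation time, once per cold start.
    private final Jedis redis = new Jedis("redis-host", 6379);

    public CacheCheckHandler() {
        // Carried over from the ECS service: the only place the client is
        // ever closed. Lambda freezes and eventually discards the execution
        // environment without running JVM shutdown hooks, so this never
        // fires and the connection stays open after processing completes.
        Runtime.getRuntime().addShutdownHook(new Thread(redis::close));
    }

    public boolean recordExists(String recordId) {
        return redis.exists("record:" + recordId);
    }
}
```

A straightforward fix is to close the client explicitly once the batch has been processed (for example in a finally block at the end of the handler), or to deliberately reuse a single client for the lifetime of the execution environment, rather than relying on a shutdown hook that will never run.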