Thursday, 7 November 2019

Moving to lambdas - not just a "lift and shift"

A while ago I came across an AWS lambda that had previously existed as an endpoint within a microservice running in ECS (Docker).  I needed to dig into the code to try to understand why an external cache was behaving strangely - but I can save that for another post.

The focus of this post is that what made sense in one environment doesn't necessarily hold true for AWS lambdas.

Lambdas are intended to be short-lived and are billed at a fine-grained time granularity, so the most cost-efficient approach is to start quickly, perform the desired action and then stop.
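To make that concrete, here is a back-of-the-envelope sketch of how slow start-up translates into cost. Lambda billing scales with memory multiplied by duration (GB-seconds); the numbers below are illustrative assumptions, not current AWS rates.

```java
// Illustrative sketch: lambda cost scales with memory x duration,
// so time spent on initialisation is billed just like real work.
public class LambdaCostSketch {
    // GB-seconds for a single invocation (memory in MB, duration in seconds).
    static double gbSeconds(int memoryMb, double durationSeconds) {
        return (memoryMb / 1024.0) * durationSeconds;
    }

    public static void main(String[] args) {
        // A handler that spends 5s on heavy initialisation before 0.5s of
        // useful work bills 11x the GB-seconds of one that starts immediately.
        double slowStart = gbSeconds(1024, 5.5);
        double fastStart = gbSeconds(1024, 0.5);
        System.out.println(slowStart / fastStart); // prints 11.0
    }
}
```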

Some code that I came across in this particular lambda involved setting up a large in-memory cache capable of holding millions of items.  This took a little time to build, and based on my knowledge of the data involved it would not achieve a good hit rate for a typical batch of data being processed.
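The shape of the problem looked something like the following sketch. The class and data here are hypothetical stand-ins, not the actual service code; in the real lambda the cache was loaded from a data store.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the anti-pattern: an eager in-memory cache that
// paid off in a long-lived ECS service, but in a lambda the population
// cost is incurred on every cold start and rarely recovered.
public class EagerCacheHandler {
    private final Map<Long, String> cache = new HashMap<>();

    public EagerCacheHandler(int entries) {
        // Simulates the warm-up cost; the real cache held millions of items.
        for (long i = 0; i < entries; i++) {
            cache.put(i, "record-" + i);
        }
    }

    public String handle(long key) {
        // A typical batch touches only a tiny fraction of the entries,
        // so most of the warm-up work is wasted.
        return cache.getOrDefault(key, "miss");
    }

    public static void main(String[] args) {
        EagerCacheHandler handler = new EagerCacheHandler(1_000_000);
        System.out.println(handler.handle(42L)); // prints record-42
        System.out.println(handler.handle(-1L)); // prints miss
    }
}
```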

Another aspect of the code that had been carried over was the use of the external cache.  At initialisation time a cache client was set up to check for the existence of records in Redis.  The client was only closed by a shutdown hook of the Java application - which AWS lambdas never reliably reach.  This resulted in client connections being kept open even after the lambda had completed, so the underlying Docker container ran out of file handles and had to be destroyed mid-processing.
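One way to avoid the leak is to tie the client's lifetime to the invocation rather than to a shutdown hook. The sketch below uses a fake client in place of a real Redis library so it is self-contained; whether you close per invocation (as here) or keep one client for reuse across warm invocations is a trade-off against connection overhead.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: release the connection before the invocation ends
// via try-with-resources, rather than relying on a JVM shutdown hook
// that a lambda container may never run.
public class LeakFreeHandler {
    static final AtomicInteger OPEN_CONNECTIONS = new AtomicInteger();

    // Stand-in for a real Redis client.
    static class FakeCacheClient implements AutoCloseable {
        FakeCacheClient() { OPEN_CONNECTIONS.incrementAndGet(); }
        boolean exists(String key) { return key != null && !key.isEmpty(); }
        @Override public void close() { OPEN_CONNECTIONS.decrementAndGet(); }
    }

    public boolean handle(String key) {
        // The connection is closed deterministically here - no shutdown
        // hook, and no file handles left behind in the container.
        try (FakeCacheClient client = new FakeCacheClient()) {
            return client.exists(key);
        }
    }

    public static void main(String[] args) {
        LeakFreeHandler handler = new LeakFreeHandler();
        handler.handle("order-123");
        System.out.println(OPEN_CONNECTIONS.get()); // prints 0 - nothing leaked
    }
}
```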

