The Data Monster Under My Bed: Migrating COBOL to the Cloud – Part 6: Putting the Docker Image on a Diet

If you remember the last article, we started exploring variables. If you have been following along, you have also noticed that the image keeps getting bigger: around 405 MB at this point.

While that might be acceptable for development, shipping the generated binary to production means putting in the extra effort to stay lean (for faster transfers, faster startup, less storage and other factors).

So here are two simple tips to consider for delivering an image while keeping it lean:

Tip No 1: Multi-stage builds.

  • By splitting the build into several stages you can easily separate the build environment from the execution one. Docker's official documentation has a good article about that, found here. We will also explore better production-grade containers and orchestration in the future, but for now this will be fine.

Here we separate the build stage from the deliverable image, as sketched below.
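To make the idea concrete, here is a minimal sketch of such a multi-stage Dockerfile. I am assuming GnuCOBOL (cobc) as the compiler and plain Debian base images; the package name, file names and paths are placeholders, not necessarily the ones used in this series' repository:

```dockerfile
# ---- Stage 1: build environment (compiler and build tools) ----
# Base image and package name are assumptions; use whatever GnuCOBOL
# setup was installed in the earlier parts of this series.
FROM debian:bookworm-slim AS build
RUN apt-get update \
    && apt-get install -y --no-install-recommends gnucobol \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /src
COPY hello.cbl .
# -x tells cobc to produce a standalone executable
RUN cobc -x -o hello hello.cbl

# ---- Stage 2: deliverable image (only what is needed at runtime) ----
FROM debian:bookworm-slim
WORKDIR /app
# Only the compiled binary crosses over; the compiler and build tools stay behind
COPY --from=build /src/hello /app/hello
# Note: the GnuCOBOL runtime library is still missing here, see Tip No 2
CMD ["./hello"]
```

The compiler, apt caches and source code never reach the final image; combined with a small enough base image in the last stage, this is where the size drop reported below comes from.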

Tip No 2: You already have the library, why not reuse it in the deliverable image

When copying from stage to stage, you can also copy the library that is already in place (in this case the COBOL runtime) into the deliverable image (admittedly this is a gray area, as most probably you would keep it in an artifact repository, but that is maybe a topic for another article).
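Again a sketch, under the same assumptions as above: the library path and soname shown here are placeholders. Inside the build stage you can run `ldd hello` to see exactly which shared libraries the binary needs, and then copy the GnuCOBOL runtime over instead of installing the whole compiler in the final image:

```dockerfile
# In the final (deliverable) stage, copy the COBOL runtime library that the
# build stage already has. The exact path and version are assumptions;
# check them with `ldd hello` inside the build stage.
COPY --from=build /usr/lib/x86_64-linux-gnu/libcob.so* /usr/lib/x86_64-linux-gnu/
```

Whether this counts as clean practice is, as noted above, a gray area; a proper artifact repository or a base image that already ships the runtime would be the more formal route.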

What is the outcome?

So around 15 MB, which I think is a clear improvement over the previous 405 MB.

The full codebase and the Docker build for this article are available for my patrons on PATREON.

TO BE CONTINUED …