Alberto Cubeddu
1 min read · May 26, 2021


I have to be honest: I wrote this article very quickly and didn't include every scenario :) I agree with you 100%, all the literature about microservices covers the "best/easy case scenario", but the reality is very different!

As an engineer I love to have everything technically perfect; as a head of development I MUST balance perfection, usability and speed to market.

Regarding the write/read split, my first suggestion would be to use a read replica instance that can be scaled horizontally (for read traffic). Depending on the configuration, you can even dedicate one of the read replicas to a specific scenario! (We did that in the past, when we were not yet on AWS and didn't have the ability to spin up an EMR cluster or a Redshift database.)
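To make that concrete, here's a minimal sketch of what the read/write split can look like in application code, assuming a Postgres primary plus one replica and using SQLAlchemy; the hostnames, credentials and table are placeholders for illustration, not what we actually run:

```python
# Sketch of read/write splitting: writes go to the primary,
# heavy read queries go to a horizontally scaled read replica.
# Hostnames/credentials/table are illustrative placeholders.
from sqlalchemy import create_engine, text

primary = create_engine("postgresql+psycopg2://app:secret@primary.example.com/appdb")
replica = create_engine("postgresql+psycopg2://app:secret@replica-1.example.com/appdb")

def save_order(order_id: int, total: float) -> None:
    """Write path: inserts always hit the primary."""
    with primary.begin() as conn:
        conn.execute(
            text("INSERT INTO orders (id, total) VALUES (:id, :total)"),
            {"id": order_id, "total": total},
        )

def monthly_revenue():
    """Read path: reporting queries are routed to the replica."""
    with replica.connect() as conn:
        rows = conn.execute(
            text(
                "SELECT date_trunc('month', created_at) AS month, sum(total) "
                "FROM orders GROUP BY 1 ORDER BY 1"
            )
        )
        return rows.fetchall()
```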

Regarding the analytics part, I would strongly suggest using an ETL tool to transform all your data into Parquet files and then using EMR + Spark!

We use AWS DMS (Database Migration Service) to convert the SQL data to Parquet, and after that we use AWS EMR Studio to read the Parquet files and run all the analytics queries + ML/AI :)
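As a rough illustration of the analytics side, here's a small PySpark sketch that reads the Parquet files DMS has already landed in S3 and runs a typical aggregation; the bucket path and column names are made up for the example:

```python
# Minimal PySpark sketch for EMR: read DMS-produced Parquet and aggregate.
# Bucket path and columns are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("analytics-on-parquet").getOrCreate()

# Read the Parquet files that DMS wrote for the "orders" table.
orders = spark.read.parquet("s3://my-dms-output-bucket/appdb/orders/")

# A typical analytics query: monthly revenue per country.
monthly_revenue = (
    orders
    .withColumn("month", F.date_trunc("month", F.col("created_at")))
    .groupBy("month", "country")
    .agg(F.sum("total").alias("revenue"))
    .orderBy("month")
)

monthly_revenue.show(truncate=False)
```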

You really unlock the power of your data when you're able to use that kind of technology; moreover, it's very cheap: a decent cluster is around $1.20/hour.
