For context, we’ve built a customised manual implementation on PostgreSQL for a POC. In the last 30 days we have recorded around 8,300 events – for which the cost seems a bit excessive. We’ve since shut down this implementation, but is there anything you’d suggest we do to bring the cost down?
For 8300 events this looks very expensive and is comparable to the cost of running a production pipeline in GCP (at several million events / month).
At those volumes I’d expect Cloud SQL to cost less than $30 per month even with a high-availability configuration, and Compute Engine to be less than $100. (Generally BigQuery is used for event storage rather than Postgres, and at this volume you would be within its free tier.)
21 TB of ingress and egress on the load balancer at a volume of 10 million events implies each event is roughly 2 MB. That isn’t impossible, but it is significantly larger than a typical event in such a pipeline (~2-5 kB).
Cloud Storage is also sitting at 10 TB, which is similarly significant (roughly 1 MB per event), and this data can be compressed, which would bring that figure down further.
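If it helps, here’s a quick back-of-envelope sketch in plain Python showing how those per-event sizes fall out. The 10 million events/month figure and the traffic totals are the assumptions discussed above, not measured values:

```python
# Back-of-envelope check of the per-event sizes implied by the figures above.
# Assumptions: ~10 million events/month, 21 TB of load balancer traffic,
# 10 TB stored in Cloud Storage (all taken from this thread, not measured).

TB = 1024 ** 4   # bytes in a tebibyte
MB = 1024 ** 2
KB = 1024

events_per_month = 10_000_000

lb_traffic_bytes = 21 * TB   # load balancer ingress + egress
gcs_stored_bytes = 10 * TB   # Cloud Storage footprint

lb_per_event = lb_traffic_bytes / events_per_month
gcs_per_event = gcs_stored_bytes / events_per_month

print(f"Load balancer: ~{lb_per_event / MB:.1f} MB per event")    # ~2.2 MB
print(f"Cloud Storage: ~{gcs_per_event / MB:.1f} MB per event")   # ~1.0 MB

# Compare against a typical pipeline event of ~2-5 kB (taking ~3 kB here):
typical_event_bytes = 3 * KB
print(f"Implied event is ~{lb_per_event / typical_event_bytes:.0f}x a typical event")
```

Running that makes the mismatch obvious: the implied event size is hundreds of times larger than a typical 2-5 kB event, which suggests the traffic and storage figures, rather than the event count, are where the cost is coming from.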