I tried searching our forum for this subject but didn't find this exact scenario and problem/solution, so I'm posting here. (If it has already been reported, please point me to it; I'd really appreciate it.)
I'm using the AWS real-time pipeline in my project, and it's working very well for my purposes. I recently added more event sources to my collector, and my event volume grew from 3M/day to 5M/day. I started seeing event peaks where my postgresql-loader triggered the autoscaling group (70% CPU rule).
But when I check the monitoring details for the EC2 instances, I can see that only one is working and the new one added by scaling is idle. From the autoscaling monitoring's point of view the base metric looks fine: the first instance runs at 96% CPU and the second at 0-1%, which averages out to under 50%, so it keeps my 2 instances running, but only one is doing the job at maximum CPU; they don't share the work.
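To make the numbers concrete, here is a quick sketch of why the group neither scales out further nor scales back in (CPU figures are the ones from my monitoring; the scale-in threshold is an assumption, since it depends on the policy):

```python
# One busy instance and one idle instance, as seen in the EC2 monitoring.
busy_cpu = 96.0   # instance actually processing events
idle_cpu = 1.0    # freshly scaled instance sitting idle

# The autoscaling rule evaluates the *average* CPU across the group,
# not the per-instance CPU.
group_average = (busy_cpu + idle_cpu) / 2
print(group_average)  # 48.5

# Below the 70% rule, so no further scale-out is triggered...
print(group_average < 70)   # True
# ...and (assuming a typical scale-in threshold around 40%) not low
# enough to terminate the idle instance either, so both keep running.
print(group_average > 40)   # True
```

So the group-level metric looks healthy even though one instance is pinned at 96% and the other does nothing.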
I'm trying to figure out where my configuration is wrong, or whether I need to do something more. As I said, I'm using the real-time pipeline provided by the quickstart repo, and my scaling groups and configs were deployed with the base Terraform plan. I changed the instance type for the postgresql-loader to t3.small and my DB to t3.medium to fit my use case, and my plan is to run 1 instance during most of the day and 2 when CPU usage requires scaling. Is there any other variable or config I should look at? It seems that when the new instance is 'ready', it doesn't start working like the other one.
Thanks in advance,