No worries, thanks for the detailed reply!
So we do have a problem with some data loss at the collector, which is why I'm asking. The error we get is the following:
```
[pool-4-thread-1] ERROR com.snowplowanalytics.snowplow.collectors.scalastream.sinks.GooglePubSubSink - Publishing message to good failed with code GrpcStatusCode{transportCode=DEADLINE_EXCEEDED}: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9.999950601s. [remote_addr=pubsub.googleapis.com/IP_ADDRESS]
```
I suspect this has to do with the backoff policy being set with too low a limit (currently 10s) in the application.config file for the collector; the part of the config I mean is sketched below. Would you say this is the issue: that batch loads can build up an overwhelmingly large queue in a short moment, so the machines cannot autoscale in time for that spike?
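For reference, this is roughly the shape of the sink configuration I'm referring to. It's a minimal sketch assuming the google-pub-sub sink of the Scala Stream Collector; the key names and values here are from memory rather than our production file, and the exact keys/defaults can differ between collector versions, so the reference config for your version is the authority.

```hocon
# Hypothetical excerpt of the collector's HOCON config (names/values assumed,
# not copied from our real file) -- only the PubSub sink and backoff part shown.
collector {
  streams {
    sink {
      enabled = google-pub-sub
      googleProjectId = "my-project"   # placeholder project id

      # Retry/backoff for publishing to PubSub, values in milliseconds.
      # totalBackoff = 10000 is the ~10s limit I suspect is too low for batch spikes.
      backoffPolicy {
        minBackoff   = 1000
        maxBackoff   = 1000
        totalBackoff = 10000
        multiplier   = 2
      }
    }
  }
}
```

If that totalBackoff is indeed what maps to the ~10s deadline in the error, raising it (and/or scaling the collector ahead of known batch loads) is what I'd try first, but I'd appreciate confirmation before changing it.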