Collector - Error Publishing to Sink

Hey,

I have my collector set up on GCP's Cloud Run service, and the Pub/Sub topics and subscriptions are also set up. When I track events to the collector, some events pass and some fail with the following error:

```
ERROR com.snowplowanalytics.snowplow.collectors.scalastream.sinks.GooglePubSubSink - Publishing message to snowplowcollectortestinggood failed with code GrpcStatusCode{transportCode=DEADLINE_EXCEEDED}: io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: deadline exceeded after 9.399391451s. [buffered_nanos=9799576094, waiting_for_connection] This error is retryable: true.
```

Could this be related to the configuration of the collector?

Thank you!

Hi @siv, it looks like this is the same issue that was discussed recently in this thread. It happens because the collector is trying to publish Pub/Sub messages more quickly than they can be delivered to Pub/Sub.

One good solution is to increase `streams.sink.backoffPolicy.totalBackoff` to a larger value in your configuration file. You could try using 9223372036854, which is the largest allowable value and will become the new default in a future version of the collector.
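For reference, here is a minimal sketch of where that setting lives in the collector's HOCON configuration file. The exact set of sibling keys (`minBackoff`, `maxBackoff`, etc.) can vary between collector versions, so treat this as illustrative rather than exact:

```hocon
collector {
  streams {
    sink {
      # Pub/Sub retry settings (sibling field names assumed; check your version's reference config)
      backoffPolicy {
        minBackoff = 1000             # initial backoff in milliseconds (assumed default)
        maxBackoff = 1000             # cap on a single backoff interval, in ms (assumed default)
        totalBackoff = 9223372036854  # total time to keep retrying, in ms (largest allowable value)
      }
    }
  }
}
```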

If your collector continues to have problems publishing messages, then you might need to scale horizontally, i.e. run more instances of the collector. You can sink messages to Pub/Sub more quickly if the HTTP requests are distributed across more collectors; one way to do that on Cloud Run is sketched below.
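Since you are on Cloud Run, one way to run more instances is to raise the instance limits on the service. A generic sketch, where the service name `snowplow-collector` is a placeholder for whatever you named your deployment:

```bash
# Keep at least 2 collector instances warm, and allow scaling up to 10 under load
gcloud run services update snowplow-collector \
  --min-instances=2 \
  --max-instances=10
```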

I should also point out that the collector was never really designed to run in Cloud Run; it was designed to be a long-running microservice. You might well get it to work OK on Cloud Run, but I honestly have no idea what problems you might face in production. Good luck!

Hi @istreeter,

I have increased the `maxBackoff` and `totalBackoff` periods, and it seems to be working fine now.

Thank you!