Scala Stream Collector and Kinesis Shards


A couple of questions for whoever has some insight:

  1. While load testing with Avalanche, I sometimes see the stream collector output the following error:

    Record failed with error code [ProvisionedThroughputExceededException] and message [Rate exceeded for shard shardId-X in stream snowplow-events-good-qa under account X.]

I understand that in this instance it means I need more shards on the stream. However, what happens to the event records that could not be put onto the stream? Are they retried by the collector, or are they permanently lost?

  2. Is there a recommended way to autoscale the number of shards on a stream? Is there a particular metric that should drive the scaling, and a recommended way to actually perform it?



  1. Records that fail to sink to Kinesis are retried with a linear backoff, starting from a minimum backoff and capped at a maximum backoff, so they are not permanently lost by the collector.

  2. For autoscaling, there are tools that make use of CloudWatch metrics to provide a service similar to auto scaling groups for Kinesis streams.
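To make point 1 concrete, here is a minimal sketch of how a linear backoff bounded by a minimum and maximum behaves. This is not the collector's actual implementation; `minBackoffMs`, `maxBackoffMs`, and `stepMs` are hypothetical values standing in for the collector's backoff configuration:

```scala
object BackoffDemo {
  // Hypothetical values mirroring a minBackoff/maxBackoff style configuration
  val minBackoffMs = 500L
  val maxBackoffMs = 10000L
  val stepMs       = 500L

  // Linear backoff: the delay grows by a fixed step on each retry attempt,
  // but never exceeds the configured maximum
  def backoff(attempt: Int): Long =
    math.min(minBackoffMs + attempt.toLong * stepMs, maxBackoffMs)

  def main(args: Array[String]): Unit =
    (0 to 5).foreach(a => println(s"attempt $a -> ${backoff(a)} ms"))
}
```

The key property is the cap: after enough failed attempts the delay stops growing at `maxBackoffMs`, so a prolonged throughput problem does not push retry intervals out indefinitely.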
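On point 2, a scaling decision ultimately comes down to comparing observed throughput against the per-shard write limits (1 MiB/s and 1,000 records/s per shard). The sketch below is a hypothetical helper, not part of any Snowplow component, that computes the shard count needed for a peak throughput such as you might read from the CloudWatch `IncomingBytes` and `IncomingRecords` metrics:

```scala
object ShardPlanner {
  // Kinesis write limits per shard: 1 MiB/s and 1,000 records/s
  val bytesPerShardPerSec   = 1024L * 1024L
  val recordsPerShardPerSec = 1000L

  // Shards needed to absorb a peak write rate, whichever limit binds first
  def requiredShards(peakBytesPerSec: Long, peakRecordsPerSec: Long): Int = {
    val byBytes   = math.ceil(peakBytesPerSec.toDouble / bytesPerShardPerSec).toInt
    val byRecords = math.ceil(peakRecordsPerSec.toDouble / recordsPerShardPerSec).toInt
    math.max(1, math.max(byBytes, byRecords))
  }
}
```

Once you have a target, the resharding itself can be done with the `UpdateShardCount` API (for example `aws kinesis update-shard-count --scaling-type UNIFORM_SCALING`); in practice you would also want headroom above the computed minimum to absorb bursts.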

Hope that helps!