Any tips for speeding up Micro in testing?

We are using Snowplow Micro in our CI environment. We have it running in a Docker container clustered with our application. Our tests exercise the UI, which sends events to Micro using the Browser Tracker.

To get the tests to pass consistently, we have to add about 700ms of sleep or retries between an event being fired from the UI and it showing up in Micro’s queue.

At first, we had Micro configured to use our staging Iglu resolver. Thinking that maybe this extra HTTP call was increasing the lag time, we switched to embedding the schema in the Micro container. However, this only shaved off about 100ms of the wait time.

I am wondering if anyone has any other tips for reducing the lag time between an event being fired and the event showing up in Micro’s queue?

Sending an event at Micro’s boot can help get Micro “warm”. I’m not sure if you’re using one long-lived Micro instance or recreating it each run, but that might help a little.

Other than that, Micro by design isn’t instant. It has both the Collector and Enrich (including Validation) running, so it can take a little while to process your events.
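
If it helps, the warm-up can be as simple as hitting Micro’s collector endpoint once after the container starts. A rough sketch in Ruby (the host, port and tracker parameters here are placeholders; Micro listens on 9090 by default, and even if this minimal payload fails validation it still exercises the Collector and Enrich path):

require "net/http"

# One-off warm-up request to Micro's collector pixel endpoint ("/i").
# localhost:9090 is Micro's default; adjust host/port to your Docker setup.
# The query string is a minimal page-view payload, just enough to push
# something through the pipeline.
micro = URI("http://localhost:9090/i?e=pv&p=web&tv=warmup")
Net::HTTP.get_response(micro)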

Thanks, @PaulBoocock. I am not sending an event at boot currently. I will try that.

Having the tests take a little longer in exchange for validation of events is definitely a worthwhile trade-off. I am just trying to minimize the time it adds to our test suite :slight_smile:

One thing we’ve started doing in some of our Snowplow Micro test suites is to run our validation requests to Micro in a loop, and keep looping until the event we’re expecting lands. You can hit /micro/good reasonably aggressively in your tests, and as soon as the event is ready you’ll find it. This chipped a few seconds off in places, rather than always waiting for the worst case of 5 seconds (when we’re testing some slow old browser on a terrible VM).

Some very rough pseudocode:

event = undefined
loop every 500 milliseconds, for up to 5 seconds
    events = GET /micro/good/
    if events contains expected_event_type
        event = events[expected_event_type]
        break

if event is undefined
    fail
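
If it’s useful, here is roughly how that could look as runnable Ruby (since your test suite is Capybara based). The Micro host and port, and the "eventType" field in the /micro/good response, are assumptions to check against your Micro version:

require "net/http"
require "json"

# Poll /micro/good until an event of the expected type shows up, or give up.
def wait_for_event(expected_type, timeout: 5, interval: 0.5)
  deadline = Time.now + timeout
  loop do
    # "eventType" is an assumption about the /micro/good payload shape;
    # check the JSON your Micro version actually returns.
    good = JSON.parse(Net::HTTP.get(URI("http://localhost:9090/micro/good")))
    event = good.find { |e| e["eventType"] == expected_type }
    return event if event
    raise "no #{expected_type} event within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

wait_for_event("page_view")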

Yes. We are using Capybara for our UI tests and it has a mechanism to recheck an expectation until it passes or the max wait time expires. I am planning on using this as a subsequent optimization.
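
For anyone else landing here, one way to express that recheck behaviour is a small helper that retries a block of expectations until it passes or the wait time runs out. A sketch assuming RSpec and Capybara (the eventually name and the micro_event_types helper in the usage line are made up):

# Re-run a block of RSpec expectations until it passes or the wait time
# expires, instead of using a fixed sleep.
def eventually(wait: Capybara.default_max_wait_time)
  deadline = Time.now + wait
  begin
    yield
  rescue RSpec::Expectations::ExpectationNotMetError
    raise if Time.now > deadline
    sleep 0.1
    retry
  end
end

# Usage:
# eventually { expect(micro_event_types).to include("page_view") }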

Thanks again for the tips :+1: