We’re tremendously excited to release Snowplow RDB Loader 1.2.0. This release brings significant improvements to the monitoring subsystem, including new webhook-based monitoring and reworked issue severity, as well as the ability to automigrate tables when a column’s maximum length has been increased.
Webhook-based monitoring
Since we released 1.0.0, many users have brought up the suboptimal monitoring approach of the new RDB Loader: in particular, it was quite easy to miss an error in stdout that required human intervention to retry loading a folder. With the new webhook-based monitoring you can configure RDB Loader to send all alerts to a specified HTTP endpoint. Every payload is a self-describing JSON conforming to iglu:com.snowplowanalytics.monitoring.batch/alert/jsonschema/1-0-0, indicating which folder caused the alert and providing the details necessary to resolve the problem.
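To illustrate what receiving these alerts looks like, here is a minimal sketch of an HTTP endpoint that accepts and logs them. It assumes only that the payload is a self-describing JSON with a `schema` and `data` envelope; the specific fields inside `data` (such as the folder) are not spelled out here, so the code simply prints whatever arrives. The port and handler names are illustrative, not part of RDB Loader itself.

```python
# Minimal sketch of a webhook endpoint for RDB Loader alerts.
# Assumes the payload is a self-describing JSON of the form
# {"schema": "iglu:com.snowplowanalytics.monitoring.batch/alert/jsonschema/1-0-0", "data": {...}}.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Log the schema URI and the alert details for later inspection
        print("Received alert:", payload.get("schema"))
        print("Details:", json.dumps(payload.get("data", {}), indent=2))
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    # Point RDB Loader's webhook monitoring at http://<host>:8080/
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```

In practice you would forward these alerts to your alerting system of choice (PagerDuty, Slack, etc.) rather than printing them.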
Unloaded/unprocessed folder monitoring
Archive monitoring is our next step in improving the monitoring subsystem. Previously, if the Loader was down for some time while the S3DistCp and/or Shredder jobs kept running, users could end up with multiple folders left unprocessed (in the enriched archive) or unloaded (in the shredded archive), with nothing explicitly indicating it. It was possible to write a custom monitoring script, but this check now exists in the Loader itself: you can configure it to inspect the archives at a specified frequency and find out about data missing in Redshift.
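Conceptually, the check is the same one such a custom script would have performed: list the run folders present in the archive, subtract the folders known to be loaded, and alert on the difference. The sketch below illustrates that idea; the bucket, prefix and the source of the loaded set are placeholders (in RDB Loader the loaded set comes from its manifest in Redshift), and this is not the Loader's actual implementation.

```python
# Conceptual sketch of the check that the new folder monitoring automates:
# folders present in the archive but absent from the loaded set are suspicious.
# Bucket/prefix names and the "loaded" set below are placeholders.
import boto3


def list_run_folders(bucket: str, prefix: str) -> set[str]:
    """Return the set of run folders (common prefixes) under an archive prefix."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    folders: set[str] = set()
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter="/"):
        for cp in page.get("CommonPrefixes", []):
            folders.add(cp["Prefix"])
    return folders


def find_unloaded(archive_folders: set[str], loaded_folders: set[str]) -> set[str]:
    """Folders that exist in the archive but were never loaded into Redshift."""
    return archive_folders - loaded_folders


if __name__ == "__main__":
    archived = list_run_folders("my-shredded-bucket", "archive/shredded/")
    loaded = {"archive/shredded/run=2021-07-01-00-00-00/"}  # placeholder for the manifest
    print("Unloaded folders:", find_unloaded(archived, loaded))
```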
VARCHAR migration
Since R32, RDB Loader has supported automigrations: it can create tables and migrate them when a column is added. However, another very common migration case had never been implemented: an increase of maxLength. Previously, users either had to widen VARCHAR columns manually to reflect the change, or data was trimmed during loading. Starting with 1.2.0, the Loader performs this migration automatically.
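For context, the manual step this replaces is a simple DDL statement widening the column, roughly as sketched below. The table, column, new length and connection string are all placeholders, and psycopg2 is used only for illustration; 1.2.0 issues the equivalent change for you when it detects the increased maxLength.

```python
# Sketch of the manual fix that 1.2.0 now performs automatically:
# widening a VARCHAR column after the schema's maxLength was increased.
# Table, column and connection details below are placeholders.
import psycopg2

DDL = """
ALTER TABLE atomic.com_acme_my_event_1
    ALTER COLUMN my_field TYPE VARCHAR(4096);
"""


def widen_column(dsn: str) -> None:
    """Run the ALTER TABLE against Redshift so longer values are no longer trimmed."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)
        conn.commit()


if __name__ == "__main__":
    widen_column("host=my-cluster.example.redshift.amazonaws.com dbname=snowplow user=loader password=...")
```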
For the 1.2.0 Upgrade Guide, please refer to our docs website: