Our continuous delivery failed us this morning. We're currently rebuilding the artifacts one by one; I'll post here once they're all up. Sorry about the inconvenience.
Hi @tclass - are you referring to Hadoop Event Recovery? No, that remains a Scalding-based application, and of course the underlying Snowplow data formats have not changed in this release.
Any recommendations about AWS instance sizes with the new internals? We've been using c3.8xlarges, but with Spark being more memory-intensive, are the r3s a better fit now? Is instance storage still a requirement (i.e. c3/r3 vs c4/r4)?
Hi @rbolkey - the c3.8xlarges should be fine, but let us know how you go.
Instance storage isn't a hard requirement, but if your instances don't have it (e.g. c4/r4) you will need to attach EBS volumes, because we still use HDFS on the cluster. We'll remove that usage of HDFS in a future release.
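If you happen to be launching the EMR cluster yourself rather than through EmrEtlRunner, here is a minimal sketch of what attaching EBS volumes to the core nodes could look like with boto3, so HDFS has somewhere to live on an instance family without instance storage. The cluster name, instance types, counts and volume sizes are purely illustrative assumptions, not recommendations:

```python
# Illustrative sketch only: attach gp2 EBS volumes to the CORE instance group
# (which hosts HDFS DataNodes) when using an instance family without
# instance storage, e.g. r4. All names, sizes and counts are assumptions.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="snowplow-etl",          # hypothetical cluster name
    ReleaseLabel="emr-5.5.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {
                "Name": "Master",
                "InstanceRole": "MASTER",
                "InstanceType": "m4.large",
                "InstanceCount": 1,
            },
            {
                "Name": "Core",
                "InstanceRole": "CORE",
                "InstanceType": "r4.8xlarge",  # no instance storage, so...
                "InstanceCount": 3,
                "EbsConfiguration": {          # ...attach EBS for HDFS
                    "EbsOptimized": True,
                    "EbsBlockDeviceConfigs": [
                        {
                            "VolumeSpecification": {
                                "VolumeType": "gp2",
                                "SizeInGB": 100,
                            },
                            "VolumesPerInstance": 1,
                        }
                    ],
                },
            },
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
        "TerminationProtected": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```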