Hello, I’m trying to get EmrEtlRunner R90 going and am running into an issue.
I believe the configuration is correct, because it validates and the process starts. I see “Staging raw logs…” printed, but a short while later the console just prints “Killed” and the process exits.
On the command line I am passing --config, --resolver, --enrichments, and --targets.
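For reference, the full invocation is roughly along these lines (file and directory names are placeholders for my actual paths; the exact subcommand/flag spelling is whatever `--help` shows on this release):

```
./snowplow-emr-etl-runner run \
  --config config.yml \
  --resolver iglu_resolver.json \
  --enrichments enrichments/ \
  --targets targets/
```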
I see this problem when executing the JAR directly from the command line. The OS is Debian and the JRE is 8 (OpenJDK Runtime Environment, build 1.8.0_141-8u141-b15-1~deb9u1-b15). The EC2 instance is a t2.small.
We previously experimented with a much older Snowplow release, but I’m essentially setting this up fresh.
Hi @anton, I’ve likewise been trying R91 without much success. It just sits there and hangs forever with no output.
I tried using JRE 7 instead of 8, but then it dies immediately with “Killed”. I’m also having trouble finding the expected JRE version in the documentation. Can you confirm that I should be using JRE 8?
I also tried increasing the EC2 instance to a t2.medium.
I previously had R87 working fine, and I’ve double- and triple-checked my config against the current example; it looks okay.
When I run the lint command on the resolver, it reports the resolver as valid. When I try to lint the enrichments, it hangs.
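For completeness, the lint invocations I’m using are roughly the following (going from memory of the `--help` output, so the exact flag names may differ on your build):

```
./snowplow-emr-etl-runner lint resolver --resolver iglu_resolver.json
./snowplow-emr-etl-runner lint enrichments --resolver iglu_resolver.json --enrichments enrichments/
```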
OK, this is getting strange. I’m going to try to reproduce it. Just to confirm: this is a clean t2.small with Debian Stretch and OpenJDK 1.8, right?
UPD: sorry, I missed part of your message before the last one. It should work on both JRE 7 and 8; we’ve tested on both, but recommend 8. Regarding R91 hanging: unlike previous versions, EmrEtlRunner doesn’t output anything while staging files, because S3DistCp now handles that step. Can you confirm the EMR cluster didn’t bootstrap for R91?
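If you’re not sure whether the cluster even started, the AWS CLI can tell you (assuming it’s configured for the same account and region that EmrEtlRunner targets):

```
# clusters currently starting, bootstrapping, running or waiting
aws emr list-clusters --active

# clusters that terminated with errors (e.g. a failed bootstrap)
aws emr list-clusters --failed
```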
I’m running a t2.medium currently. It’s Debian Stretch with OpenJDK 1.8.
I’m trying to run it in a Docker container (based on this image), which I’m now thinking is probably related. However, when I run the same built image on my local machine, the runner starts fine.
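I’m wondering whether something on the EC2 side is putting a memory cap on the container. Purely for illustration (the image name is made up), a capped container would be started like this:

```
# hypothetical image name; the point is the --memory cap, which makes the
# kernel OOM-kill the JVM as soon as the container crosses 512 MB
docker run --rm --memory 512m snowplow-emr-etl-runner-image
```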
On the EC2 instance, the oom-killer kills it and the EMR cluster never starts.
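For anyone hitting the same thing: when the oom-killer is responsible, the host’s kernel log has the tell-tale lines (exact wording varies by kernel version):

```
# look for the kernel's OOM messages around the time the process died
dmesg | grep -iE 'out of memory|killed process'
```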
I figured it out, @anton. I had something misconfigured on AWS related to the memory available to the Docker container. I thought I had checked that earlier and ruled it out, but apparently I was mistaken.