[shred] spark: Shred Enriched Events - Failures

I am not sure how to debug this, but I am constantly getting a failure in the [shred] spark: Shred Enriched Events step.

Stderr File

Warning: Skip remote jar s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar.
20/02/13 14:55:51 INFO RMProxy: Connecting to ResourceManager at ip-172-31-28-165.ec2.internal/172.31.28.165:8032
20/02/13 14:55:51 INFO Client: Requesting a new application from cluster with 1 NodeManagers
20/02/13 14:55:51 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (117760 MB per container)
20/02/13 14:55:51 INFO Client: Will allocate AM container, with 23552 MB memory including 3072 MB overhead
20/02/13 14:55:51 INFO Client: Setting up container launch context for our AM
20/02/13 14:55:51 INFO Client: Setting up the launch environment for our AM container
20/02/13 14:55:51 INFO Client: Preparing resources for our AM container
20/02/13 14:55:53 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
20/02/13 14:55:55 INFO Client: Uploading resource file:/mnt/tmp/spark-26ff3b0b-bdec-43b5-aaf7-deac9df86a5c/__spark_libs__7630671433302422462.zip → hdfs://ip-172-31-28-165.ec2.internal:8020/user/hadoop/.sparkStaging/application_1581605582645_0002/__spark_libs__7630671433302422462.zip
20/02/13 14:55:58 WARN RoleMappings: Found no mappings configured with 'fs.s3.authorization.roleMapping', credentials resolution may not work as expected
20/02/13 14:55:58 INFO Client: Uploading resource s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar → hdfs://ip-172-31-28-165.ec2.internal:8020/user/hadoop/.sparkStaging/application_1581605582645_0002/snowplow-rdb-shredder-0.13.1.jar
20/02/13 14:55:58 INFO S3NativeFileSystem: Opening 's3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar' for reading
20/02/13 14:56:00 INFO Client: Uploading resource file:/mnt/tmp/spark-26ff3b0b-bdec-43b5-aaf7-deac9df86a5c/__spark_conf__1077437495166959388.zip → hdfs://ip-172-31-28-165.ec2.internal:8020/user/hadoop/.sparkStaging/application_1581605582645_0002/spark_conf.zip
20/02/13 14:56:00 INFO SecurityManager: Changing view acls to: hadoop
20/02/13 14:56:00 INFO SecurityManager: Changing modify acls to: hadoop
20/02/13 14:56:00 INFO SecurityManager: Changing view acls groups to:
20/02/13 14:56:00 INFO SecurityManager: Changing modify acls groups to:
20/02/13 14:56:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set()
20/02/13 14:56:00 INFO Client: Submitting application application_1581605582645_0002 to ResourceManager
20/02/13 14:56:00 INFO YarnClientImpl: Submitted application application_1581605582645_0002
20/02/13 14:56:01 INFO Client: Application report for application_1581605582645_0002 (state: ACCEPTED)
20/02/13 14:56:01 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1581605760694
final status: UNDEFINED
tracking URL: http://ip-172-31-28-165.ec2.internal:20888/proxy/application_1581605582645_0002/
user: hadoop
20/02/13 14:56:02 INFO Client: Application report for application_1581605582645_0002 (state: ACCEPTED)
20/02/13 14:56:03 INFO Client: Application report for application_1581605582645_0002 (state: ACCEPTED)
20/02/13 14:56:04 INFO Client: Application report for application_1581605582645_0002 (state: ACCEPTED)
20/02/13 14:56:05 INFO Client: Application report for application_1581605582645_0002 (state: ACCEPTED)
20/02/13 14:56:06 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:06 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 172.31.27.72
ApplicationMaster RPC port: 0
queue: default
start time: 1581605760694
final status: UNDEFINED
tracking URL: http://ip-172-31-28-165.ec2.internal:20888/proxy/application_1581605582645_0002/
user: hadoop
20/02/13 14:56:07 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:08 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:09 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:10 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:11 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:12 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:13 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:14 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:15 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:16 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:17 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:18 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:19 INFO Client: Application report for application_1581605582645_0002 (state: RUNNING)
20/02/13 14:56:20 INFO Client: Application report for application_1581605582645_0002 (state: FINISHED)
20/02/13 14:56:20 INFO Client:
client token: N/A
diagnostics: User class threw exception: org.apache.spark.SparkException: Job aborted.
ApplicationMaster host: 172.31.27.72
ApplicationMaster RPC port: 0
queue: default
start time: 1581605760694
final status: FAILED
tracking URL: http://ip-172-31-28-165.ec2.internal:20888/proxy/application_1581605582645_0002/
user: hadoop
Exception in thread "main" org.apache.spark.SparkException: Application application_1581605582645_0002 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1104)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1150)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
20/02/13 14:56:20 INFO ShutdownHookManager: Shutdown hook called
20/02/13 14:56:20 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-26ff3b0b-bdec-43b5-aaf7-deac9df86a5c
Command exiting with ret '1'
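
The stderr above only shows the generic YARN wrapper ("Job aborted" / "finished with failed status"); the actual exception thrown by the shred job ends up in the YARN container logs. A minimal way to pull them, assuming you can SSH to the EMR master node (the application ID is taken from the report above):

    # Aggregate all container logs for the failed shredder application
    yarn logs -applicationId application_1581605582645_0002 > shred-app.log

    # Find the first real failure behind the generic "Job aborted" message
    grep -nE "Caused by:|Exception" shred-app.log | head -20

If the cluster has already terminated, the same container logs should also be archived under the EMR log bucket configured in config.yml (s3://rr-snowplow-emr-logs).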

Controller logs

2020-02-13T14:55:47.135Z INFO Ensure step 2 jar file command-runner.jar
2020-02-13T14:55:47.135Z INFO StepRunner: Created Runner for step 2
INFO startExec ‘hadoop jar /var/lib/aws/emr/step-runner/hadoop-jars/command-runner.jar spark-submit --class com.snowplowanalytics.snowplow.storage.spark.ShredJob --master yarn --deploy-mode cluster s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar --iglu-config ewogICJzY2hlbWEiOiAiaWdsdTpjb20uc25vd3Bsb3dhbmFseXRpY3MuaWdsdS9yZXNvbHZlci1jb25maWcvanNvbnNjaGVtYS8xLTAtMCIsCiAgImRhdGEiOiB7CiAgICAiY2FjaGVTaXplIjogNTAwLAogICAgInJlcG9zaXRvcmllcyI6IFsKICAgICAgewogICAgICAgICJuYW1lIjogIklnbHUgQ2VudHJhbCIsCiAgICAgICAgInByaW9yaXR5IjogMCwKICAgICAgICAidmVuZG9yUHJlZml4ZXMiOiBbCiAgICAgICAgICAiY29tLnNub3dwbG93YW5hbHl0aWNzIgogICAgICAgIF0sCiAgICAgICAgImNvbm5lY3Rpb24iOiB7CiAgICAgICAgICAiaHR0cCI6IHsKICAgICAgICAgICAgInVyaSI6ICJodHRwOi8vaWdsdWNlbnRyYWwuY29tIgogICAgICAgICAgfQogICAgICAgIH0KICAgICAgfSwKICAgICAgewogICAgICAgICJuYW1lIjogIklnbHUgQ2VudHJhbCAtIE1pcnJvciAwMSIsCiAgICAgICAgInByaW9yaXR5IjogMSwKICAgICAgICAidmVuZG9yUHJlZml4ZXMiOiBbCiAgICAgICAgICAiY29tLnNub3dwbG93YW5hbHl0aWNzIgogICAgICAgIF0sCiAgICAgICAgImNvbm5lY3Rpb24iOiB7CiAgICAgICAgICAiaHR0cCI6IHsKICAgICAgICAgICAgInVyaSI6ICJodHRwOi8vbWlycm9yMDEuaWdsdWNlbnRyYWwuY29tIgogICAgICAgICAgfQogICAgICAgIH0KICAgICAgfSwKICAgICAgewogICAgICAgICJuYW1lIjogIlJSIGN1c3RvbSIsCiAgICAgICAgInByaW9yaXR5IjogMCwKICAgICAgICAidmVuZG9yUHJlZml4ZXMiOiBbCiAgICAgICAgICAiY29tLnJhbmRhbGxyZWlsbHkiLAogICAgICAgICAgImNvbS5yaWdkaWdiaSIKICAgICAgICBdLAogICAgICAgICJjb25uZWN0aW9uIjogewogICAgICAgICAgImh0dHAiOiB7CiAgICAgICAgICAgICJ1cmkiOiAiaHR0cHM6Ly9zMy5hbWF6b25hd3MuY29tL3JyLXNub3dwbG93LWNsb3VkZnJvbnQtaWdsdS1jZW50cmFsIgogICAgICAgICAgfQogICAgICAgIH0KICAgICAgfQogICAgXQogIH0KfQo= --input-folder hdfs:///local/snowplow/enriched-events/* --output-folder hdfs:///local/snowplow/shredded-events/ --bad-folder s3://rr-snowplow-dev/shredded/bad/run=2020-02-13-14-49-50/’
INFO Environment:
PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/sbin:/opt/aws/bin
LESS_TERMCAP_md=[01;38;5;208m
LESS_TERMCAP_me=[0m
HISTCONTROL=ignoredups
LESS_TERMCAP_mb=[01;31m
AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
UPSTART_JOB=rc
LESS_TERMCAP_se=[0m
HISTSIZE=1000
HADOOP_ROOT_LOGGER=INFO,DRFA
JAVA_HOME=/etc/alternatives/jre
AWS_DEFAULT_REGION=us-east-1
AWS_ELB_HOME=/opt/aws/apitools/elb
LESS_TERMCAP_us=[04;38;5;111m
EC2_HOME=/opt/aws/apitools/ec2
TERM=linux
XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
runlevel=3
LANG=en_US.UTF-8
AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
MAIL=/var/spool/mail/hadoop
LESS_TERMCAP_ue=[0m
LOGNAME=hadoop
PWD=/
LANGSH_SOURCED=1
HADOOP_CLIENT_OPTS=-Djava.io.tmpdir=/mnt/var/lib/hadoop/steps/s-3CYRBLP9JR60Y/tmp
_=/etc/alternatives/jre/bin/java
CONSOLETYPE=serial
RUNLEVEL=3
LESSOPEN=||/usr/bin/lesspipe.sh %s
previous=N
UPSTART_EVENTS=runlevel
AWS_PATH=/opt/aws
USER=hadoop
UPSTART_INSTANCE=
PREVLEVEL=N
HADOOP_LOGFILE=syslog
PYTHON_INSTALL_LAYOUT=amzn
HOSTNAME=ip-172-31-28-165
NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
HADOOP_LOG_DIR=/mnt/var/log/hadoop/steps/s-3CYRBLP9JR60Y
EC2_AMITOOL_HOME=/opt/aws/amitools/ec2
SHLVL=5
HOME=/home/hadoop
HADOOP_IDENT_STRING=hadoop
INFO redirectOutput to /mnt/var/log/hadoop/steps/s-3CYRBLP9JR60Y/stdout
INFO redirectError to /mnt/var/log/hadoop/steps/s-3CYRBLP9JR60Y/stderr
INFO Working dir /mnt/var/lib/hadoop/steps/s-3CYRBLP9JR60Y
INFO ProcessRunner started child process 8756 :
hadoop 8756 4005 0 14:55 ? 00:00:00 bash /usr/lib/hadoop/bin/hadoop jar /var/lib/aws/emr/step-runner/hadoop-jars/command-runner.jar spark-submit --class com.snowplowanalytics.snowplow.storage.spark.ShredJob --master yarn --deploy-mode cluster s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar --iglu-config ewogICJzY2hlbWEiOiAiaWdsdTpjb20uc25vd3Bsb3dhbmFseXRpY3MuaWdsdS9yZXNvbHZlci1jb25maWcvanNvbnNjaGVtYS8xLTAtMCIsCiAgImRhdGEiOiB7CiAgICAiY2FjaGVTaXplIjogNTAwLAogICAgInJlcG9zaXRvcmllcyI6IFsKICAgICAgewogICAgICAgICJuYW1lIjogIklnbHUgQ2VudHJhbCIsCiAgICAgICAgInByaW9yaXR5IjogMCwKICAgICAgICAidmVuZG9yUHJlZml4ZXMiOiBbCiAgICAgICAgICAiY29tLnNub3dwbG93YW5hbHl0aWNzIgogICAgICAgIF0sCiAgICAgICAgImNvbm5lY3Rpb24iOiB7CiAgICAgICAgICAiaHR0cCI6IHsKICAgICAgICAgICAgInVyaSI6ICJodHRwOi8vaWdsdWNlbnRyYWwuY29tIgogICAgICAgICAgfQogICAgICAgIH0KICAgICAgfSwKICAgICAgewogICAgICAgICJuYW1lIjogIklnbHUgQ2VudHJhbCAtIE1pcnJvciAwMSIsCiAgICAgICAgInByaW9yaXR5IjogMSwKICAgICAgICAidmVuZG9yUHJlZml4ZXMiOiBbCiAgICAgICAgICAiY29tLnNub3dwbG93YW5hbHl0aWNzIgogICAgICAgIF0sCiAgICAgICAgImNvbm5lY3Rpb24iOiB7CiAgICAgICAgICAiaHR0cCI6IHsKICAgICAgICAgICAgInVyaSI6ICJodHRwOi8vbWlycm9yMDEuaWdsdWNlbnRyYWwuY29tIgogICAgICAgICAgfQogICAgICAgIH0KICAgICAgfSwKICAgICAgewogICAgICAgICJuYW1lIjogIlJSIGN1c3RvbSIsCiAgICAgICAgInByaW9yaXR5IjogMCwKICAgICAgICAidmVuZG9yUHJlZml4ZXMiOiBbCiAgICAgICAgICAiY29tLnJhbmRhbGxyZWlsbHkiLAogICAgICAgICAgImNvbS5yaWdkaWdiaSIKICAgICAgICBdLAogICAgICAgICJjb25uZWN0aW9uIjogewogICAgICAgICAgImh0dHAiOiB7CiAgICAgICAgICAgICJ1cmkiOiAiaHR0cHM6Ly9zMy5hbWF6b25hd3MuY29tL3JyLXNub3dwbG93LWNsb3VkZnJvbnQtaWdsdS1jZW50cmFsIgogICAgICAgICAgfQogICAgICAgIH0KICAgICAgfQogICAgXQogIH0KfQo= --input-folder hdfs:///local/snowplow/enriched-events/* --output-folder hdfs:///local/snowplow/shredded-events/ --bad-folder s3://rr-snowplow-dev/shredded/bad/run=2020-02-13-14-49-50/
2020-02-13T14:55:47.237Z INFO HadoopJarStepRunner.Runner: startRun() called for s-3CYRBLP9JR60Y Child Pid: 8756
INFO Synchronously wait child process to complete : hadoop jar /var/lib/aws/emr/step-runner/hadoop-…
INFO waitProcessCompletion ended with exit code 1 : hadoop jar /var/lib/aws/emr/step-runner/hadoop-…
INFO total process run time: 34 seconds
2020-02-13T14:56:21.362Z INFO Step created jobs:
2020-02-13T14:56:21.362Z WARN Step failed with exitCode 1 and took 34 seconds
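
One quick sanity check on the step command above: the --iglu-config argument is just the Iglu resolver JSON, base64-encoded, so it can be decoded to confirm which repositories the shredder resolves schemas against. A small sketch, assuming the base64 string has been pasted into a file called iglu-config.b64 (a hypothetical name):

    # Decode the resolver configuration passed to the shredder
    base64 --decode iglu-config.b64 > resolver.json

    # Pretty-print it to check the repository list and priorities
    python -m json.tool resolver.json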

Config

aws:
  access_key_id: xxxxxxxxxxxxx # User snowplow-pipeline-operator
  secret_access_key: xxxxxxxxxxxxxxxxxxxxx
  s3:
    region: us-east-1
    buckets:
      assets: s3://snowplow-hosted-assets # DO NOT CHANGE unless you are hosting the jarfiles etc yourself in your own bucket
      jsonpath_assets: s3://rr-snowplow-cloudfront-iglu-central/jsonpaths/
      log: s3://rr-snowplow-emr-logs
      encrypted: false # Whether the buckets below are encrypted using server side encryption (SSE-S3)
      #raw:
      #  in: # This is a YAML array of one or more in buckets - you MUST use hyphens before each entry in the array, as below
      #    - s3://rr-snowplow-dev/raw/good # production
      #    - s3://rr-snowplow-dev/raw/bad # production
      #  processing: s3://rr-snowplow-dev/raw/processing
      #  archive: s3://rr-snowplow-dev/raw/archive # e.g. s3://my-archive-bucket/raw
      enriched:
        stream: s3://rr-snowplow-enriched-stream-bucket/stream
        good: s3://rr-snowplow-dev/enriched/good # e.g. s3://my-out-bucket/enriched/good
        bad: s3://rr-snowplow-dev/enriched/bad # e.g. s3://my-out-bucket/enriched/bad
        errors: # Leave blank unless :continue_on_unexpected_error: set to true below
        archive: s3://rr-snowplow-dev/enriched/archive # Where to archive enriched events to, e.g. s3://my-archive-bucket/enriched
      shredded:
        good: s3://rr-snowplow-dev/shredded/good # e.g. s3://my-out-bucket/shredded/good
        bad: s3://rr-snowplow-dev/shredded/bad # e.g. s3://my-out-bucket/shredded/bad
        errors: # Leave blank unless :continue_on_unexpected_error: set to true below
        archive: s3://rr-snowplow-dev/shredded/archive # Where to archive shredded events to, e.g. s3://my-archive-bucket/shredded
    consolidate_shredded_output: false # Whether to combine files when copying from hdfs to s3
  emr:
    ami_version: 5.9.0
    region: us-east-1 # Always set this
    jobflow_role: EMR_EC2_DefaultRole # Created using $ aws emr create-default-roles
    service_role: EMR_DefaultRole # Created using $ aws emr create-default-roles
    placement: # us-east-1c # Set this if not running in VPC. Leave blank otherwise
    ec2_subnet_id: #subnet-910418f6 # Set this if running in VPC. Leave blank otherwise
    ec2_key_name: rr_snowplow_dev
    security_configuration: # Specify your EMR security configuration if needed. Leave blank otherwise
    bootstrap: # Set this to specify custom bootstrap actions. Leave empty otherwise
    software:
      hbase: # Optional. To launch on cluster, provide version, "0.92.0", keep quotes. Leave empty otherwise.
      lingual: # Optional. To launch on cluster, provide version, "1.1", keep quotes. Leave empty otherwise.
    # Adjust your Hadoop cluster below
    jobflow:
      job_name: Snowplow etl dev # Give your job a name
      master_instance_type: m4.large
      core_instance_count: 1
      core_instance_type: r4.4xlarge
      core_instance_ebs: # Optional. Attach an EBS volume to each core instance.
        volume_size: 40 # Gigabytes
        volume_type: "gp2"
        volume_iops: 400 # Optional. Will only be used if volume_type is "io1"
        ebs_optimized: # false # Optional. Will default to true
      task_instance_count: 0 # Increase to use spot instances
      task_instance_type: m1.medium
      task_instance_bid: # In USD. Adjust bid, or leave blank for non-spot-priced (i.e. on-demand) task instances
    bootstrap_failure_tries: 3 # Number of times to attempt the job in the event of bootstrap failures
    configuration:
      yarn-site:
        yarn.nodemanager.vmem-check-enabled: "false"
        yarn.nodemanager.resource.memory-mb: "117760"
        yarn.scheduler.maximum-allocation-mb: "117760"
        yarn.resourcemanager.am.max-attempts: "1"
      spark:
        maximizeResourceAllocation: "false"
      spark-defaults:
        spark.dynamicAllocation.enabled: "false"
        spark.executor.instances: "4"
        spark.yarn.executor.memoryOverhead: "3072"
        spark.executor.memory: "20G"
        spark.executor.cores: "3"
        spark.yarn.driver.memoryOverhead: "3072"
        spark.driver.memory: "20G"
        spark.driver.cores: "3"
        spark.default.parallelism: "48"
    additional_info: # Optional JSON string for selecting additional features
collectors:
  format: thrift
enrich:
  versions:
    spark_enrich: 1.17.0 # Version of the Spark Enrichment process
  continue_on_unexpected_error: false # Set to 'true' (and set :out_errors: above) if you don't want any exceptions thrown from ETL
  output_compression: GZIP # Compression only supported with Redshift, set to NONE if you have Postgres targets. Allowed formats: NONE, GZIP
storage:
  versions:
    rdb_loader: 0.14.0
    rdb_shredder: 0.13.1 # Version of the Spark Shredding process
    hadoop_elasticsearch: 0.1.0 # Version of the Hadoop to Elasticsearch copying process
monitoring:
  tags: {'Name': 'snowplow-etl-dev'} # Name-value pairs describing this job
  logging:
    level: DEBUG # You can optionally switch to INFO for production

One thing I noticed in your config.yml is that you have your master_instance_type set to m4.large and your core_instance_type set to r4.4xlarge. @ihor recommends r4 instance types due to the memory-intensive nature of EMR. Maybe set your master_instance_type to r4.4xlarge to see if that makes a difference.

Hi @datawise, I have uploaded the config that I am using. What would be the right instance type for master_instance_type? Any suggestions from your side?
Do I also need to change task_instance_type? It is currently m1.medium.

@Tejas_Behra, the master node can remain m4.large - it performs a managerial role and does not need the computing power of the core instances.

What is the total size of the files in the enriched:good bucket? There is a rough correlation between the volume of events (based on the log/file sizes) and the EMR cluster specs.
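
For reference, one way to get that total, assuming the AWS CLI is configured for the account:

    # Total object count and size of the enriched "good" prefix
    aws s3 ls --recursive --human-readable --summarize s3://rr-snowplow-dev/enriched/good/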

Hi @ihor, I am using a stream-based collector and enricher (Kinesis). Attaching a screenshot of the enriched/good S3 bucket.

Two files were generated:

  1. rr-snowplow-enriched-good-delivery-stream-3-2020-02-13-14-17-02-57c359bf-60cd-42b6-9fdd-51d6c8b0f95b.gz of size 1.6 KB
  2. rr-snowplow-enriched-good-delivery-stream-3-2020-02-13-14-11-59-737a8418-f683-4a05-9329-6911c6119dc9.gz of size 1.8 KB

@Tejas_Behra,

  1. Your files' location appears invalid - you have further partitioning after the run=... folder. The enriched files are expected to sit directly in the run=... folder (a quick check is sketched below this list).
  2. You do not need such a big EMR cluster for this volume. The default specs are enough - m4.large nodes for both master and core, and the Spark config as in here. I also advise having consolidate_shredded_output set to false with such a low volume.
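
A quick way to verify the first point, assuming the AWS CLI is configured (run=<timestamp> stands in for an actual run folder name):

    # List the enriched "good" location recursively; the .gz files should sit
    # directly under run=<timestamp>/, with no extra sub-folders in between
    aws s3 ls --recursive s3://rr-snowplow-dev/enriched/good/ | head -50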

Thanks @ihor, my files' location was invalid; after fixing that, everything worked.