Shred stage failure on EMR ETL Runner upgrade

Hi, I am trying to upgrade EMR ETL Runner to version 1.0.4 but am hitting the following failure at the shred stage. Can anyone help me understand what exactly is going wrong here?

21/08/06 15:54:46 WARN DependencyUtils: Skip remote jar s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.16.0.jar.
21/08/06 15:54:47 INFO RMProxy: Connecting to ResourceManager at ip-10-0-1-92.ec2.internal/10.0.1.92:8032
21/08/06 15:54:47 INFO Client: Requesting a new application from cluster with 2 NodeManagers
21/08/06 15:54:47 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (117760 MB per container)
21/08/06 15:54:47 INFO Client: Will allocate AM container, with 23552 MB memory including 3072 MB overhead
21/08/06 15:54:47 INFO Client: Setting up container launch context for our AM
21/08/06 15:54:47 INFO Client: Setting up the launch environment for our AM container
21/08/06 15:54:47 INFO Client: Preparing resources for our AM container
21/08/06 15:54:47 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
21/08/06 15:54:51 INFO Client: Uploading resource file:/mnt/tmp/spark-fef4f9fe-7bfc-4934-abb5-e745d8a96770/__spark_libs__1741538148949239420.zip -> hdfs://ip-10-0-1-92.ec2.internal:8020/user/hadoop/.sparkStaging/application_1628265125386_0002/__spark_libs__1741538148949239420.zip
21/08/06 15:54:55 INFO Client: Uploading resource s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.16.0.jar -> hdfs://ip-10-0-1-92.ec2.internal:8020/user/hadoop/.sparkStaging/application_1628265125386_0002/snowplow-rdb-shredder-0.16.0.jar
21/08/06 15:54:56 INFO S3NativeFileSystem: Opening 's3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.16.0.jar' for reading
21/08/06 15:54:58 INFO Client: Uploading resource file:/mnt/tmp/spark-fef4f9fe-7bfc-4934-abb5-e745d8a96770/__spark_conf__711079088507742684.zip -> hdfs://ip-10-0-1-92.ec2.internal:8020/user/hadoop/.sparkStaging/application_1628265125386_0002/__spark_conf__.zip
21/08/06 15:54:58 INFO SecurityManager: Changing view acls to: hadoop
21/08/06 15:54:58 INFO SecurityManager: Changing modify acls to: hadoop
21/08/06 15:54:58 INFO SecurityManager: Changing view acls groups to: 
21/08/06 15:54:58 INFO SecurityManager: Changing modify acls groups to: 
21/08/06 15:54:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
21/08/06 15:55:00 INFO Client: Submitting application application_1628265125386_0002 to ResourceManager
21/08/06 15:55:00 INFO YarnClientImpl: Submitted application application_1628265125386_0002
21/08/06 15:55:01 INFO Client: Application report for application_1628265125386_0002 (state: ACCEPTED)
21/08/06 15:55:01 INFO Client: 
	 client token: N/A
	 diagnostics: AM container is launched, waiting for AM container to Register with RM
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1628265300794
	 final status: UNDEFINED
	 tracking URL: http://ip-10-0-1-92.ec2.internal:20888/proxy/application_1628265125386_0002/
	 user: hadoop
21/08/06 15:55:02 INFO Client: Application report for application_1628265125386_0002 (state: ACCEPTED)
21/08/06 15:55:03 INFO Client: Application report for application_1628265125386_0002 (state: ACCEPTED)
21/08/06 15:55:04 INFO Client: Application report for application_1628265125386_0002 (state: ACCEPTED)
21/08/06 15:55:05 INFO Client: Application report for application_1628265125386_0002 (state: ACCEPTED)
21/08/06 15:55:06 INFO Client: Application report for application_1628265125386_0002 (state: FAILED)
21/08/06 15:55:06 INFO Client: 
	 client token: N/A
	 diagnostics: Application application_1628265125386_0002 failed 1 times due to AM Container for appattempt_1628265125386_0002_000001 exited with  exitCode: 13
Failing this attempt.Diagnostics: Exception from container-launch.
Container id: container_1628265125386_0002_01_000001
Exit code: 13
Stack trace: ExitCodeException exitCode=13: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
	at org.apache.hadoop.util.Shell.run(Shell.java:869)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:235)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 13
For more detailed output, check the application tracking page: http://ip-10-0-1-92.ec2.internal:8088/cluster/app/application_1628265125386_0002 Then click on links to logs of each attempt.
. Failing the application.
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1628265300794
	 final status: FAILED
	 tracking URL: http://ip-10-0-1-92.ec2.internal:8088/cluster/app/application_1628265125386_0002
	 user: hadoop
21/08/06 15:55:06 ERROR Client: Application diagnostics message: Application application_1628265125386_0002 failed 1 times due to AM Container for appattempt_1628265125386_0002_000001 exited with  exitCode: 13
Failing this attempt.Diagnostics: Exception from container-launch.
Container id: container_1628265125386_0002_01_000001
Exit code: 13
Stack trace: ExitCodeException exitCode=13: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
	at org.apache.hadoop.util.Shell.run(Shell.java:869)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:235)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:83)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 13
For more detailed output, check the application tracking page: http://ip-10-0-1-92.ec2.internal:8088/cluster/app/application_1628265125386_0002 Then click on links to logs of each attempt.
. Failing the application.
Exception in thread "main" org.apache.spark.SparkException: Application application_1628265125386_0002 finished with failed status
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1149)
	at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1526)
	at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
	at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
	at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
	at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
	at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/08/06 15:55:06 INFO ShutdownHookManager: Shutdown hook called
21/08/06 15:55:06 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-caf76558-4e65-405d-9754-c4b198c1915d
21/08/06 15:55:06 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-fef4f9fe-7bfc-4934-abb5-e745d8a96770
Command exiting with ret '1'

EMR ETL Config

aws:
  # Credentials can be hardcoded or set in environment variables
  access_key_id: <%= ENV['AWS_SNOWPLOW_ACCESS_KEY'] %>  # User snowplow-pipeline-operator
  secret_access_key: <%= ENV['AWS_SNOWPLOW_SECRET_KEY'] %>
  s3:
    region: <%= ENV['RR_SNOWPLOW_REGION'] %>
    buckets:
      assets: s3://snowplow-hosted-assets # DO NOT CHANGE unless you are hosting the jarfiles etc yourself in your own bucket
      jsonpath_assets: s3://rr-snowplow-cloudfront-iglu-central/jsonpaths/
      log: s3://rr-snowplow-events/emr_logs
      encrypted: false # Whether the buckets below are encrypted using server side encryption (SSE-S3)
      #raw:
      #in: # This is a YAML array of one or more in buckets - you MUST use hyphens before each entry in the array, as below
      #  - s3://rr-snowplow-dev/raw/good  # production
      #  - s3://rr-snowplow-dev/raw/bad  # production
      #processing: s3://rr-snowplow-dev/raw/processing
      #archive: s3://rr-snowplow-dev/raw/archive    # e.g. s3://my-archive-bucket/raw
      enriched:
        stream: s3://rr-snowplow-enriched-stream
        good: s3://rr-snowplow-events/enriched/good      # e.g. s3://my-out-bucket/enriched/good
        bad: s3://rr-snowplow-events/enriched/bad       # e.g. s3://my-out-bucket/enriched/bad
        errors:      # Leave blank unless :continue_on_unexpected_error: set to true below
        archive: s3://rr-snowplow-events/enriched/archive    # Where to archive enriched events to, e.g. s3://my-archive-bucket/enriched
      shredded:
        good: s3://rr-snowplow-events/shredded/good       # e.g. s3://my-out-bucket/shredded/good
        bad: s3://rr-snowplow-events/shredded/bad        # e.g. s3://my-out-bucket/shredded/bad
        errors:      # Leave blank unless :continue_on_unexpected_error: set to true below
        archive: s3://rr-snowplow-events/shredded/archive    # Where to archive shredded events to, e.g. s3://my-archive-bucket/shredded
    consolidate_shredded_output: false # Whether to combine files when copying from hdfs to s3
  emr:
    ami_version: 5.29.0
    region: <%= ENV['RR_SNOWPLOW_REGION'] %>          # Always set this
    jobflow_role: EMR_EC2_DefaultRole # Created using $ aws emr create-default-roles
    service_role: EMR_DefaultRole     # Created using $ aws emr create-default-roles
    placement: # us-east-1c     # Set this if not running in VPC. Leave blank otherwise
    ec2_subnet_id: <%= ENV['RR_SNOWPLOW_PUBLIC_SUBNET'] %>  #subnet-910418f6 # Set this if running in VPC. Leave blank otherwise
    ec2_key_name:
    security_configuration:  # Specify your EMR security configuration if needed. Leave blank otherwise
    bootstrap: []           # Set this to specify custom bootstrap actions. Leave empty otherwise
    software:
      hbase:                # Optional. To launch on cluster, provide version, "0.92.0", keep quotes. Leave empty otherwise.
      lingual:              # Optional. To launch on cluster, provide version, "1.1", keep quotes. Leave empty otherwise.
    # Adjust your Hadoop cluster below
    jobflow:
      job_name: Snow2plow EMR ETL # Give your job a name
      master_instance_type: m4.large
      core_instance_count: 2
      #core_instance_type: r4.2xlarge
      core_instance_type: m4.xlarge
      core_instance_ebs:    # Optional. Attach an EBS volume to each core instance.
        volume_size: 100    # Gigabytes
        volume_type: "gp2"
        volume_iops: 400    # Optional. Will only be used if volume_type is "io1"
        ebs_optimized:  # false # Optional. Will default to true
      task_instance_count: 0 # Increase to use spot instances
      task_instance_type: m1.medium
      task_instance_bid:  # In USD. Adjust bid, or leave blank for non-spot-priced (i.e. on-demand) task instances
    bootstrap_failure_tries: 3 # Number of times to attempt the job in the event of bootstrap failures
    configuration:
      yarn-site:
        yarn.nodemanager.vmem-check-enabled: "false"
        yarn.nodemanager.resource.memory-mb: "117760"
        yarn.resourcemanager.am.max-attempts: "1"
      spark:
        maximizeResourceAllocation: "false"
      spark-defaults:
        spark.dynamicAllocation.enabled: "false"
        spark.executor.instances: "4"
        spark.executor.memoryOverhead: "3072"
        spark.executor.memory: "20G"
        spark.executor.cores: "3"
        spark.driver.memoryOverhead: "3072"
        spark.driver.memory: "20G"
        spark.driver.cores: "3"
        spark.default.parallelism: "48"
    additional_info:        # Optional JSON string for selecting additional features
collectors:
  # For example: 'clj-tomcat' for the Clojure Collector, 'thrift' for Thrift records, 'tsv/com.amazon.aws.cloudfront/wd_access_log' for Cloudfront access logs or 'ndjson/urbanairship.connect/v1' for UrbanAirship Connect events
  # N.B. Use 'cloudfront' for Snowplow Cloudfront Collector logs, 'tsv/com.amazon.aws.cloudfront/wd_access_log' for website access logs on cloudfront
  format: thrift
enrich:
  versions:
    spark_enrich: 1.19.0 # Version of the Spark Enrichment process
  continue_on_unexpected_error: false # Set to 'true' (and set :out_errors: above) if you don't want any exceptions thrown from ETL
  output_compression: GZIP # Compression only supported with Redshift, set to NONE if you have Postgres targets. Allowed formats: NONE, GZIP
storage:
  versions:
    rdb_loader: 0.17.0
    rdb_shredder: 0.16.0        # Version of the Spark Shredding process
    hadoop_elasticsearch: 0.1.0 # Version of the Hadoop to Elasticsearch copying process
monitoring:
  tags: {'Name': 'rr-snowplow-emr-etl-runner'} # Name-value pairs describing this job
  logging:
    level: DEBUG # You can optionally switch to INFO for production
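
For reference, the Spark memory settings above line up exactly with the numbers reported in the EMR log at the top of this post (this is just the arithmetic behind those figures, not a diagnosis):

# spark.driver.memory 20G = 20480 MB, plus spark.driver.memoryOverhead 3072 MB
#   = 23552 MB -> matches "Will allocate AM container, with 23552 MB memory
#     including 3072 MB overhead" in the log
# spark.executor.memory 20G + spark.executor.memoryOverhead 3072 MB
#   = 23552 MB per executor
# 4 executors + 1 driver = 5 x 23552 MB = 117760 MB
#   -> matches yarn.nodemanager.resource.memory-mb: "117760"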

@Tejas_Behra, perhaps there is an incompatibility somewhere. You might need to check your configuration file, in particular the AMI version, the enrich version, and the shredder and loader versions. You should have something like this:

aws:
  emr:
    ami_version: "6.1.0"
    . . .
enrich:
  versions:
    spark_enrich: "1.18.0"
  . . .
storage:
  versions:
    rdb_loader: "0.18.1"
    rdb_shredder: "0.18.1"

@ihor, I have posted the EMR ETL Runner config. I am using the same config as provided in the sample config here - emr-etl-runner/config.yml.sample at master · snowplow/emr-etl-runner · GitHub
Can you provide me with the latest versions of ami, spark_enrich, rdb_loader and rdb_shredder that I can use in production?

Hey @Tejas_Behra, my previous comment does contain the latest versions you can use with EER 1.0.4.

Do note that we will no longer maintain EER as we are moving to the streamed versions of RDB Loader and Shredder. EmrEtlRunner is becoming obsolete.

Thanks @ihor, can you please provide me with documentation links for the streamed versions of RDB Loader and Shredder?

Note that RDB Shredder is not streamed yet (the streamed one is rather experimental, if you want to try it). Thus, the latest architecture suitable for production is the streamed RDB Loader with the batched RDB Shredder (as per the diagram - the 2nd link). They communicate via SQS messages and run independently of each other. The introduction (or rather revival) of the manifest table ensures the data is not loaded twice.
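
At a very high level the setup looks like the outline below. Note that these field names are only illustrative placeholders, not the actual options of the shredder and loader configuration files; please refer to the docs for the real schemas:

# Illustrative outline only -- placeholder names, not real config keys.
rdb_shredder:                      # batch Spark job, still submitted to EMR per run
  input:  s3://<enriched archive>
  output: s3://<shredded output>
  queue:  <shared SQS queue>       # publishes one "folder is shredded" message per run
rdb_loader:                        # long-running app, consumes the queue
  queue:  <same SQS queue>         # on each message, loads that folder into Redshift
  # the manifest table in Redshift records every loaded folder, so a folder
  # already present in the manifest is never loaded twice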

aws:
  emr:
    ami_version: "6.1.0"
    . . .
enrich:
  versions:
    spark_enrich: "1.18.0"
  . . .
storage:
  versions:
    rdb_loader: "0.18.1"
    rdb_shredder: "0.18.1"

This configuration didn't work.

Hi @ihor, I have rolled back to EER 119 but am unable to get past the shredding stage.
I have tried increasing the core instance count but that didn't work.
Here are the logs -

Warning: Skip remote jar s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar.
21/08/06 21:19:19 INFO RMProxy: Connecting to ResourceManager at ip-10-0-1-143.ec2.internal/10.0.1.143:8032
21/08/06 21:19:19 INFO Client: Requesting a new application from cluster with 5 NodeManagers
21/08/06 21:19:19 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)
21/08/06 21:19:19 INFO Client: Will allocate AM container, with 6143 MB memory including 558 MB overhead
21/08/06 21:19:19 INFO Client: Setting up container launch context for our AM
21/08/06 21:19:19 INFO Client: Setting up the launch environment for our AM container
21/08/06 21:19:19 INFO Client: Preparing resources for our AM container
21/08/06 21:19:21 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
21/08/06 21:19:23 INFO Client: Uploading resource file:/mnt/tmp/spark-3b5801c9-e5b5-415f-a539-129cae43e1ab/__spark_libs__5776985583832220695.zip -> hdfs://ip-10-0-1-143.ec2.internal:8020/user/hadoop/.sparkStaging/application_1628284601675_0002/__spark_libs__5776985583832220695.zip
21/08/06 21:19:26 WARN RoleMappings: Found no mappings configured with 'fs.s3.authorization.roleMapping', credentials resolution may not work as expected
21/08/06 21:19:27 INFO Client: Uploading resource s3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar -> hdfs://ip-10-0-1-143.ec2.internal:8020/user/hadoop/.sparkStaging/application_1628284601675_0002/snowplow-rdb-shredder-0.13.1.jar
21/08/06 21:19:27 INFO S3NativeFileSystem: Opening 's3://snowplow-hosted-assets-us-east-1/4-storage/rdb-shredder/snowplow-rdb-shredder-0.13.1.jar' for reading
21/08/06 21:19:29 INFO Client: Uploading resource file:/mnt/tmp/spark-3b5801c9-e5b5-415f-a539-129cae43e1ab/__spark_conf__7466515130660854392.zip -> hdfs://ip-10-0-1-143.ec2.internal:8020/user/hadoop/.sparkStaging/application_1628284601675_0002/__spark_conf__.zip
21/08/06 21:19:29 INFO SecurityManager: Changing view acls to: hadoop
21/08/06 21:19:29 INFO SecurityManager: Changing modify acls to: hadoop
21/08/06 21:19:29 INFO SecurityManager: Changing view acls groups to: 
21/08/06 21:19:29 INFO SecurityManager: Changing modify acls groups to: 
21/08/06 21:19:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
21/08/06 21:19:29 INFO Client: Submitting application application_1628284601675_0002 to ResourceManager
21/08/06 21:19:29 INFO YarnClientImpl: Submitted application application_1628284601675_0002
21/08/06 21:19:30 INFO Client: Application report for application_1628284601675_0002 (state: ACCEPTED)
21/08/06 21:19:30 INFO Client: 
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1628284769574
	 final status: UNDEFINED
	 tracking URL: http://ip-10-0-1-143.ec2.internal:20888/proxy/application_1628284601675_0002/
	 user: hadoop
21/08/06 21:19:31 INFO Client: Application report for application_1628284601675_0002 (state: ACCEPTED)
21/08/06 21:19:32 INFO Client: Application report for application_1628284601675_0002 (state: ACCEPTED)
21/08/06 21:19:33 INFO Client: Application report for application_1628284601675_0002 (state: ACCEPTED)
21/08/06 21:19:34 INFO Client: Application report for application_1628284601675_0002 (state: ACCEPTED)
21/08/06 21:19:35 INFO Client: Application report for application_1628284601675_0002 (state: ACCEPTED)
21/08/06 21:19:36 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:36 INFO Client: 
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 10.0.1.133
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1628284769574
	 final status: UNDEFINED
	 tracking URL: http://ip-10-0-1-143.ec2.internal:20888/proxy/application_1628284601675_0002/
	 user: hadoop
21/08/06 21:19:37 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:38 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:39 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:40 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:41 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:42 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:43 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:44 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:45 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:46 INFO Client: Application report for application_1628284601675_0002 (state: RUNNING)
21/08/06 21:19:47 INFO Client: Application report for application_1628284601675_0002 (state: FINISHED)
21/08/06 21:19:47 INFO Client: 
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: 10.0.1.133
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1628284769574
	 final status: FAILED
	 tracking URL: http://ip-10-0-1-143.ec2.internal:20888/proxy/application_1628284601675_0002/
	 user: hadoop
Exception in thread "main" org.apache.spark.SparkException: Application application_1628284601675_0002 finished with failed status
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1104)
	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1150)
	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
21/08/06 21:19:47 INFO ShutdownHookManager: Shutdown hook called
21/08/06 21:19:47 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-3b5801c9-e5b5-415f-a539-129cae43e1ab
Command exiting with ret '1'