```
Value guarded in: Snowplow::EmrEtlRunner::Cli::load_config
With Contract: Maybe, String, Bool => Maybe
At: uri:classloader:/emr-etl-runner/lib/snowplow-emr-etl-runner/cli.rb:211
```

My config is:
```yaml
aws:
  # Credentials can be hardcoded or set in environment variables
  access_key_id: abc
  secret_access_key: def
  s3:
    region: us-east-1
    buckets:
      assets: s3://snowplow-assets
      log: s3n://snowplow-etl/logs/
      raw:
        in:
          - "s3n://snowplow-events/"
        processing: s3n://snowplow-etl/processing/
        archive: s3://snowplow-archive/raw
      enriched:
        good: s3://snowplow-data/enriched/good
        bad: s3://snowplow-data/enriched/bad
        errors: s3://snowplow-data/enriched/errors
        archive: s3://snowplow-data/enriched/archive
      shredded:
        good: s3://snowplow-data/shredded/good
        bad: s3://snowplow-data/shredded/bad
        errors: s3://snowplow-data/shredded/errors
  emr:
    job_name: Snowplow ETL # Give your job a name
    ami_version: 5.9.0 # Don't change this
    region: us-east-1 # Always set this
    jobflow_role: EMR_EC2_DefaultRole # Created using $ aws emr create-default-roles
    service_role: EMR_DefaultRole # Created using $ aws emr create-default-roles
    placement: # Set this if not running in VPC. Leave blank otherwise
    ec2_subnet_id: subnet-082251e665ce25c17 # Set this if running in VPC. Leave blank otherwise
    ec2_key_name: nmj
    bootstrap: [] # Set this to specify custom bootstrap actions. Leave empty otherwise
    software:
      hbase: "0.92.0" # Optional. To launch on cluster, provide version, "0.92.0", keep quotes. Leave empty otherwise.
      lingual: "1.1" # Optional. To launch on cluster, provide version, "1.1", keep quotes. Leave empty otherwise.
    # Adjust your Spark cluster below
    jobflow:
      master_instance_type: m1.medium
      core_instance_count: 2
      core_instance_type: m1.medium
      core_instance_ebs: # Optional. Attach an EBS volume to each core instance.
        volume_size: 100 # Gigabytes
        volume_type: "gp2"
        volume_iops: 400 # Optional. Will only be used if volume_type is "io1"
        ebs_optimized: false # Optional. Will default to true
      task_instance_count: 0 # Increase to use spot instances
      task_instance_type: m1.medium
      task_instance_bid: 0.015 # In USD. Adjust bid, or leave blank for non-spot-priced (i.e. on-demand) task instances
    bootstrap_failure_tries: 3 # Number of times to attempt the job in the event of bootstrap failures
    additional_info: # Optional JSON string for selecting additional features
collectors:
  format: cloudfront # Or 'clj-tomcat' for the Clojure Collector, or 'thrift' for Thrift records, or 'tsv/com.amazon.aws.cloudfront/wd_access_log' for Cloudfront access logs
enrich:
  versions:
    spark_enrich: 1.10.0 # Version of the Spark Enrichment process
  continue_on_unexpected_error: false # Set to 'true' (and set out_errors: above) if you don't want any exceptions thrown from ETL
  output_compression: NONE # Compression only supported with Redshift, set to NONE if you have Postgres targets. Allowed formats: NONE, GZIP
storage:
  versions:
    rdb_shredder: 0.13.0 # Version of the Relational Database Shredding process
    rdb_loader: 0.14.0 # Version of the Relational Database Loader app
    hadoop_elasticsearch: 0.1.0 # Version of the Hadoop to Elasticsearch copying process
monitoring:
  tags: {} # Name-value pairs describing this job
  logging:
    level: DEBUG # You can optionally switch to INFO for production
  snowplow:
    method: get
    app_id: ID-ap001
    collector: klk.cloudfront.net # e.g. d3rkrsqld9gmqf.cloudfront.net
```
@Haseeb717, could you please provide the formatted config (between the triple ticks)? Indentation is very important when troubleshooting the loading/parsing of the configuration file. Also, which version of EmrEtlRunner are you using? Configuration files have different structures from version to version.

@Haseeb717, it does appear you are using the configuration format of an older EmrEtlRunner. Could you please check your config against this version? You will find a few discrepancies; once corrected, the configuration should be accepted.

As a hint, `job_name` is in the wrong place, and some properties are missing, such as `encrypted` and `consolidate_shredded_output`.
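Roughly, the parts that need to move or be added look like this — a minimal sketch using the bucket names from your config, so please cross-check against the sample config file shipped with your release for the exact layout:

```yaml
aws:
  s3:
    buckets:
      encrypted: false # added key: whether the buckets use server-side encryption (SSE-S3)
      shredded:
        archive: s3://snowplow-data/shredded/archive # added key: where to archive shredded events
    consolidate_shredded_output: false # added key: whether to combine files when copying from HDFS to S3
  emr:
    jobflow:
      job_name: Snowplow ETL # moved: now lives under jobflow, not directly under emr
```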
Hi @ihor,

Thank you for your reply. Yeah, I updated my config to match the link you sent exactly:
```yaml
aws:
  # Credentials can be hardcoded or set in environment variables
  access_key_id: ID
  secret_access_key: Key
  s3:
    region: us-east-1
    buckets:
      assets: s3://snowplow-assets # DO NOT CHANGE unless you are hosting the jarfiles etc yourself in your own bucket
      jsonpath_assets: # If you have defined your own JSON Schemas, add the s3:// path to your own JSON Path files in your own bucket here
      log: s3n://snowplow-etl/logs/
      encrypted: false # Whether the buckets below are encrypted using server side encryption (SSE-S3)
      raw:
        in: # This is a YAML array of one or more in buckets - you MUST use hyphens before each entry in the array, as below
          - "s3n://snowplow-events/" # e.g. s3://my-old-collector-bucket
        processing: s3n://snowplow-etl/processing/
        archive: s3://snowplow-archive/raw
      enriched:
        good: s3://snowplow-data/enriched/good
        bad: s3://snowplow-data/enriched/bad
        errors: s3://snowplow-data/enriched/errors
        archive: s3://snowplow-data/enriched/archive
      shredded:
        good: s3://snowplow-data/shredded/good
        bad: s3://snowplow-data/shredded/bad
        errors: s3://snowplow-data/shredded/errors
        archive: s3://snowplow-data/shredded/archive # Where to archive shredded events to, e.g. s3://my-archive-bucket/shredded
    consolidate_shredded_output: false # Whether to combine files when copying from hdfs to s3
  emr:
    ami_version: 5.9.0 # Don't change this
    region: us-east-1 # Always set this
    jobflow_role: EMR_EC2_DefaultRole # Created using $ aws emr create-default-roles
    service_role: EMR_DefaultRole # Created using $ aws emr create-default-roles
    placement: # Set this if not running in VPC. Leave blank otherwise
    ec2_subnet_id: "subnet-082251e665ce25c17" # Set this if running in VPC. Leave blank otherwise
    ec2_key_name: "tolga"
    security_configuration: # Specify your EMR security configuration if needed. Leave blank otherwise
    bootstrap: [] # Set this to specify custom bootstrap actions. Leave empty otherwise
    software:
      hbase: "0.92.0" # Optional. To launch on cluster, provide version, "0.92.0", keep quotes. Leave empty otherwise.
      lingual: "1.1" # Optional. To launch on cluster, provide version, "1.1", keep quotes. Leave empty otherwise.
    # Adjust your Spark cluster below
    jobflow:
      job_name: Snowplow ETL # Give your job a name
      master_instance_type: m1.medium
      core_instance_count: 2
      core_instance_type: m1.medium
      core_instance_bid: 0.015
      core_instance_ebs: # Optional. Attach an EBS volume to each core instance.
        volume_size: 100 # Gigabytes
        volume_type: "gp2"
        volume_iops: 400 # Optional. Will only be used if volume_type is "io1"
        ebs_optimized: false # Optional. Will default to true
      task_instance_count: 0 # Increase to use spot instances
      task_instance_type: m1.medium
      task_instance_bid: 0.015 # In USD. Adjust bid, or leave blank for non-spot-priced (i.e. on-demand) task instances
    bootstrap_failure_tries: 3 # Number of times to attempt the job in the event of bootstrap failures
    configuration:
      yarn-site:
        yarn.resourcemanager.am.max-attempts: "1"
      spark:
        maximizeResourceAllocation: "true"
    additional_info: # Optional JSON string for selecting additional features
collectors:
  format: cloudfront # For example: 'clj-tomcat' for the Clojure Collector, 'thrift' for Thrift records, 'tsv/com.amazon.aws.cloudfront/wd_access_log' for Cloudfront access logs or 'ndjson/urbanairship.connect/v1' for UrbanAirship Connect events
enrich:
  versions:
    spark_enrich: 1.18.0 # Version of the Spark Enrichment process
  continue_on_unexpected_error: false # Set to 'true' (and set :out_errors: above) if you don't want any exceptions thrown from ETL
  output_compression: NONE # Compression only supported with Redshift, set to NONE if you have Postgres targets. Allowed formats: NONE, GZIP
storage:
  versions:
    rdb_loader: 0.14.0
    rdb_shredder: 0.13.1 # Version of the Spark Shredding process
    hadoop_elasticsearch: 0.1.0 # Version of the Hadoop to Elasticsearch copying process
monitoring:
  tags: {} # Name-value pairs describing this job
  logging:
    level: DEBUG # You can optionally switch to INFO for production
  snowplow:
    method: get
    protocol: http
    port: 80
    app_id: ID # e.g. snowplow
    collector: d3qhcvxu0vb.cloudfront.net # e.g. d3rkrsqld9gmqf.cloudfront.net
```
But now I get another error:

```
ERROR: org.jruby.embed.EvalFailedException: (ArgumentError) AWS EMR API Error (AccessDeniedException):
```