Dear team,
We are evaluating Snowplow as our new tracking and analytics system, following the setup guide on the wiki and the Lambda architecture thread on this forum.
I am hitting an error when running StorageLoader to load data into Redshift: it says the tables com_snowplowanalytics_monitoring_kinesis_app_heartbeat_1 and com_amazon_aws_ec2_instance_identity_document_1 do not exist. I found that there used to be SQL files on GitHub to create them, but they appear to have been deleted.
What are these tables used for, and how can I create them or find the SQL templates?
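For reference, below is my guess at what the missing DDL might look like. It is only a sketch based on the common layout I see in other shredded-type tables (the schema_*/root_*/ref_* metadata columns followed by the schema's own properties); the compression encodings and the omitted event-specific columns are assumptions on my part, not the official definition:

```sql
-- Sketch only: my guess at the missing app_heartbeat table, following the
-- column layout shared by other Snowplow shredded-type tables.
CREATE TABLE IF NOT EXISTS atomic.com_snowplowanalytics_monitoring_kinesis_app_heartbeat_1 (
    -- Schema of this type
    schema_vendor  varchar(128)  encode runlength not null,
    schema_name    varchar(128)  encode runlength not null,
    schema_format  varchar(128)  encode runlength not null,
    schema_version varchar(128)  encode runlength not null,
    -- Parentage of this type
    root_id        char(36)      encode raw       not null,
    root_tstamp    timestamp     encode lzo       not null,
    ref_root       varchar(255)  encode runlength not null,
    ref_tree       varchar(1500) encode runlength not null,
    ref_parent     varchar(255)  encode runlength not null
    -- ...columns for the properties of app_heartbeat 1-0-0 would go here,
    -- but I don't have the JSON Schema definition to hand
)
DISTSTYLE KEY
DISTKEY (root_id)
SORTKEY (root_tstamp);
```

If there is an official template for these tables, or a way to generate the DDL from the Iglu schemas, I would much rather use that than guess.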
First attempt (with `maxerror: 5`):

`./snowplow-storage-loader --config emr.yml --skip analyze`
```
Loading Snowplow events and shredded types into Snowplow Redshift (Redshift cluster)...
Unexpected error: Java::Default::PSQLException error executing COPY statements: BEGIN;
COPY atomic.events FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/atomic-events' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' REGION AS 'us-west-2' DELIMITER '\t' MAXERROR 5 EMPTYASNULL FILLRECORD TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_amazon_aws_ec2_instance_identity_document_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.amazon.aws.ec2/instance_identity_document/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.amazon.aws.ec2/instance_identity_document_1.json' REGION AS 'us-west-2' MAXERROR 5 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_snowplowanalytics_monitoring_kinesis_app_heartbeat_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.snowplowanalytics.monitoring.kinesis/app_heartbeat/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.snowplowanalytics.monitoring.kinesis/app_heartbeat_1.json' REGION AS 'us-west-2' MAXERROR 5 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_snowplowanalytics_monitoring_kinesis_app_initialized_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.snowplowanalytics.monitoring.kinesis/app_initialized/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.snowplowanalytics.monitoring.kinesis/app_initialized_1.json' REGION AS 'us-west-2' MAXERROR 5 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_snowplowanalytics_monitoring_kinesis_app_shutdown_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.snowplowanalytics.monitoring.kinesis/app_shutdown/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.snowplowanalytics.monitoring.kinesis/app_shutdown_1.json' REGION AS 'us-west-2' MAXERROR 5 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COMMIT;: ERROR: Cannot COPY into nonexistent table com_amazon_aws_ec2_instance_identity_document_1
/opt/snowplow-storage-loader!/storage-loader/lib/snowplow-storage-loader/redshift_loader.rb:88:in `load_events_and_shredded_types'
file:/opt/snowplow-storage-loader!/storage-loader/bin/snowplow-storage-loader:54:in `(root)'
org/jruby/RubyArray.java:1613:in `each'
file:/opt/snowplow-storage-loader!/storage-loader/bin/snowplow-storage-loader:51:in `(root)'
org/jruby/RubyKernel.java:1091:in `load'
file:/opt/snowplow-storage-loader!/META-INF/main.rb:1:in `(root)'
org/jruby/RubyKernel.java:1072:in `require'
file:/opt/snowplow-storage-loader!/META-INF/main.rb:1:in `(root)'
/tmp/jruby2766016113231930523extract/jruby-stdlib-1.7.20.1.jar!/META-INF/jruby.home/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:1:in `(root)'
```
Second attempt, after raising `maxerror` to 10 in the config (same failure, just reported for a different table):

`./snowplow-storage-loader --config emr.yml --skip analyze`
```
Loading Snowplow events and shredded types into Snowplow Redshift (Redshift cluster)...
Unexpected error: Java::Default::PSQLException error executing COPY statements: BEGIN;
COPY atomic.events FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/atomic-events' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' REGION AS 'us-west-2' DELIMITER '\t' MAXERROR 10 EMPTYASNULL FILLRECORD TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_amazon_aws_ec2_instance_identity_document_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.amazon.aws.ec2/instance_identity_document/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.amazon.aws.ec2/instance_identity_document_1.json' REGION AS 'us-west-2' MAXERROR 10 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_snowplowanalytics_monitoring_kinesis_app_heartbeat_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.snowplowanalytics.monitoring.kinesis/app_heartbeat/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.snowplowanalytics.monitoring.kinesis/app_heartbeat_1.json' REGION AS 'us-west-2' MAXERROR 10 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_snowplowanalytics_monitoring_kinesis_app_initialized_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.snowplowanalytics.monitoring.kinesis/app_initialized/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.snowplowanalytics.monitoring.kinesis/app_initialized_1.json' REGION AS 'us-west-2' MAXERROR 10 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COPY atomic.com_snowplowanalytics_monitoring_kinesis_app_shutdown_1 FROM 's3://snowplows3/emr_shredded_good/run=2016-11-09-03-21-56/com.snowplowanalytics.monitoring.kinesis/app_shutdown/jsonschema/1-' CREDENTIALS 'aws_access_key_id=xxxxxxxxxxxxxxxxxxxx;aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' JSON AS 's3://snowplow-hosted-assets-us-west-2/4-storage/redshift-storage/jsonpaths/com.snowplowanalytics.monitoring.kinesis/app_shutdown_1.json' REGION AS 'us-west-2' MAXERROR 10 TRUNCATECOLUMNS TIMEFORMAT 'auto' ACCEPTINVCHARS ;
COMMIT;: ERROR: Cannot COPY into nonexistent table com_snowplowanalytics_monitoring_kinesis_app_heartbeat_1
/opt/snowplow-storage-loader!/storage-loader/lib/snowplow-storage-loader/redshift_loader.rb:88:in `load_events_and_shredded_types'
file:/opt/snowplow-storage-loader!/storage-loader/bin/snowplow-storage-loader:54:in `(root)'
```
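To double-check that this was not a search_path or permissions issue on my side, I listed the tables that exist in the atomic schema with a plain information_schema query (sketch below); the shredded-type tables named in the COPY statements are indeed not there.

```sql
-- List every table currently present in the atomic schema
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'atomic'
ORDER BY table_name;
```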
My YAML config for EmrEtlRunner and StorageLoader:
```yaml
aws:
  # Credentials can be hardcoded or set in environment variables
  access_key_id: AxxxxxxxQ
  secret_access_key: dxxxxxN
  s3:
    region: us-west-2
    buckets:
      assets: s3://snowplow-hosted-assets # DO NOT CHANGE unless you are hosting the jarfiles etc yourself in your own bucket
      jsonpath_assets: # If you have defined your own JSON Schemas, add the s3:// path to your own JSON Path files in your own bucket here
      log: s3://snowplows3/logs
      raw:
        in: # Multiple in buckets are permitted
          - s3://snowplows3/ # e.g. s3://my-in-bucket
        processing: s3://snowplows3/emr_raw_processing
        archive: s3://snowplows3/emr_raw_archive # e.g. s3://my-archive-bucket/raw
      enriched:
        good: s3://snowplows3/emr_enriched_good # e.g. s3://my-out-bucket/enriched/good
        bad: s3://snowplows3/emr_enriched_bad # e.g. s3://my-out-bucket/enriched/bad
        errors: s3://snowplows3/emr_enriched_error # Leave blank unless :continue_on_unexpected_error: set to true below
        archive: s3://snowplows3/emr_enriched_archive # Where to archive enriched events to, e.g. s3://my-archive-bucket/enriched
      shredded:
        good: s3://snowplows3/emr_shredded_good # e.g. s3://my-out-bucket/shredded/good
        bad: s3://snowplows3/emr_shredded_bad # e.g. s3://my-out-bucket/shredded/bad
        errors: s3://snowplows3/emr_shredded_error # Leave blank unless :continue_on_unexpected_error: set to true below
        archive: s3://snowplows3/emr_shredded_archive # Where to archive shredded events to, e.g. s3://my-archive-bucket/shredded
  emr:
    ami_version: 4.5.0
    region: us-west-2 # Always set this
    jobflow_role: EMR_EC2_DefaultRole # Created using $ aws emr create-default-roles
    service_role: EMR_DefaultRole # Created using $ aws emr create-default-roles
    placement: # Set this if not running in VPC. Leave blank otherwise
    ec2_subnet_id: # Set this if running in VPC. Leave blank otherwise
    ec2_key_name: trungnt
    bootstrap: [] # Set this to specify custom bootstrap actions. Leave empty otherwise
    software:
      hbase: # Optional. To launch on cluster, provide version, "0.92.0", keep quotes. Leave empty otherwise.
      lingual: # Optional. To launch on cluster, provide version, "1.1", keep quotes. Leave empty otherwise.
    # Adjust your Hadoop cluster below
    jobflow:
      master_instance_type: m1.large
      core_instance_count: 3
      core_instance_type: m1.large
      task_instance_count: 0 # Increase to use spot instances
      task_instance_type: m1.large
      task_instance_bid: 0.05 # In USD. Adjust bid, or leave blank for non-spot-priced (i.e. on-demand) task instances
    bootstrap_failure_tries: 3 # Number of times to attempt the job in the event of bootstrap failures
    additional_info: # Optional JSON string for selecting additional features
collectors:
  format: thrift # For example: 'clj-tomcat' for the Clojure Collector, 'thrift' for Thrift records, 'tsv/com.amazon.aws.cloudfront/wd_access_log' for Cloudfront access logs or 'ndjson/urbanairship.connect/v1' for UrbanAirship Connect events
enrich:
  job_name: Snowplow # Give your job a name
  versions:
    hadoop_enrich: 1.8.0 # Version of the Hadoop Enrichment process
    hadoop_shred: 0.9.0 # Version of the Hadoop Shredding process
    hadoop_elasticsearch: 0.1.0 # Version of the Hadoop to Elasticsearch copying process
  continue_on_unexpected_error: false # Set to 'true' (and set :out_errors: above) if you don't want any exceptions thrown from ETL
  output_compression: NONE # Compression only supported with Redshift, set to NONE if you have Postgres targets. Allowed formats: NONE, GZIP
storage:
  download:
    folder: # Postgres-only config option. Where to store the downloaded files. Leave blank for Redshift
  targets:
    - name: "Snowplow Redshift"
      type: redshift
      host: xxxx.redshift.amazonaws.com # The endpoint as shown in the Redshift console
      database: snowplow # Name of database
      port: 5439 # Default Redshift port
      ssl_mode: disable # One of disable (default), require, verify-ca or verify-full
      table: atomic.events
      username: admin
      password: RxxxxxV
      maxerror: 10 # Stop loading on first error, or increase to permit more load errors
      comprows: 200000 # Default for a 1 XL node cluster. Not used unless --include compupdate specified
monitoring:
  tags: {} # Name-value pairs describing this job
  logging:
    level: DEBUG # You can optionally switch to INFO for production
```