Loading data from S3 to Redshift after EmrEtlRunner

OK, so as described in the previous post, I had an issue running EmrEtlRunner. The EMR jobs fail with:

java.io.FileNotFoundException: File does not exist: hdfs:/local/snowplow/shredded-events
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:517)

Shredding is failing with "File does not exist: hdfs:/local/snowplow/shredded-events"

The Spark job step that is failing is the copy (using Amazon's S3DistCp utility) of shredded JSONs from your EMR cluster's HDFS file system back to Amazon S3, ready for loading into Redshift. Due to an unfortunate attribute of S3DistCp, it will fail if no files were output for shredding. Possible reasons for this:

You are not generating any custom contexts or unstructured events, and have not enabled link click tracking. Solution: run EmrEtlRunner with --skip shred. Remove this --skip as/when you know that you do have JSONs to shred.

I used --skip shred, but it looks like I actually need shredding.
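For context, this is roughly how I pass the skip flag to EmrEtlRunner; the config/resolver paths are placeholders rather than my real ones, and depending on the release the run subcommand may not be needed:

# Placeholder paths; --skip shred is the flag I drop once there are JSONs to shred
./snowplow-emr-etl-runner run \
  --config config/config.yml \
  --resolver config/iglu_resolver.json \
  --skip shred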

Here is my tracker configuration:

;(function(p,l,o,w,i,n,g){if(!p[i]){p.GlobalSnowplowNamespace=p.GlobalSnowplowNamespace||[];
p.GlobalSnowplowNamespace.push(i);p[i]=function(){(p[i].q=p[i].q||[]).push(arguments)
};p[i].q=p[i].q||[];n=l.createElement(o);g=l.getElementsByTagName(o)[0];n.async=1;
n.src=w;g.parentNode.insertBefore(n,g)}}(window,document,"script","//d1fc8wv8zag5ca.cloudfront.net/2.9.2/sp.js","snowplow"));

window.snowplow('newTracker', 'cf', 'd2z6pco7ayls3z.cloudfront.net', { // Initialise a tracker - point to cloudfront that serves S3 bucket w/ pixel
  appId: 'web',
  cookieDomain: null,
  gaCookies: true
});

window.snowplow('enableActivityTracking', 30, 10);            // page pings: 30s minimum visit length, 10s heartbeat
window.snowplow('enableLinkClickTracking', null, true, true); // link_click unstructured events on all links
window.snowplow('trackPageView');

Since the tracker has link click tracking enabled, I would expect link_click unstructured events that need shredding. What is the best way to troubleshoot why no files are being produced for shredding?

Thanks
Oleg.