EmrEtlRunner Failing on enrich (can't find schema in repository?)

The snowplow-emr-etl-runner started failing this morning with the error below. The config files worked yesterday but not today. Why would enrich be looking for the iglu:com.snowplowanalytics.dataflowrunner/ClusterConfig/avro/1-1- schema? I didn't think enrich used it at all (only the Snowflake Transformer and Loader do). Any help would be great.

19/05/30 14:34:01 INFO Client: 
	 client token: N/A
	 diagnostics: User class threw exception: com.snowplowanalytics.snowplow.enrich.common.FatalEtlError: EtlError Errors:
  - error: NonEmptyList(error: Could not find schema with key iglu:com.snowplowanalytics.dataflowrunner/ClusterConfig/avro/1-1-0 in any repository, tried:
    level: "error"
    repositories: ["Iglu Client Embedded [embedded]","Iglu Central - GCP Mirror [HTTP]","Iglu Central [HTTP]"]
, error: "record" is not a valid primitive type (valid values are: [array, boolean, integer, null, number, object, string])
    level: "error"
    domain: "syntax"
    schema: {"loadingURI":"#","pointer":""}
    keyword: "type"
    found: "record"
    valid: ["array","boolean","integer","null","number","object","string"]
, error: "record" is not a valid primitive type (valid values are: [array, boolean, integer, null, number, object, string])
    level: "error"
    domain: "syntax"
    schema: {"loadingURI":"#","pointer":""}
    keyword: "type"
    found: "record"
    valid: ["array","boolean","integer","null","number","object","string"]
)
    level: "error"

	 ApplicationMaster host: 172.31.32.44
	 ApplicationMaster RPC port: 0
	 queue: default
	 start time: 1559226801285
	 final status: FAILED
	 tracking URL: http://ip-172-31-42-22.ec2.internal:20888/proxy/application_1559226502281_0003/
	 user: hadoop
Exception in thread "main" org.apache.spark.SparkException: Application application_1559226502281_0003 finished with failed status
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1104)
	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1150)
	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/05/30 14:34:01 INFO ShutdownHookManager: Shutdown hook called
19/05/30 14:34:01 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-e25ec817-07eb-40e6-9370-9c4417c92e57
Command exiting with ret '1'

Resolved! My bad, it was just a typo:

It turns out that if you put a space between the directory and the file name in the --enrichments argument, you get odd behavior (see the command below: ./config/ enrichments should have no space). With the stray space, --enrichments pointed at ./config/ rather than ./config/enrichments, so enrich apparently picked up every file in ./config/ as an enrichment config, including the Dataflow Runner cluster config, and then tried (and failed) to resolve its iglu:com.snowplowanalytics.dataflowrunner/ClusterConfig/avro/1-1-0 schema. Thanks all!

./snowplow-emr-etl-runner run --config ./config/config.yml --resolver ./config/resolver.conf --enrichments ./config/ enrichments
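For anyone hitting a similar error: the stray space matters because the shell treats whitespace as an argument separator, so ./config/ enrichments becomes two separate tokens and --enrichments only receives ./config/. A minimal sketch of that tokenization, using Python's shlex purely for illustration:

```python
import shlex

# With the stray space, the intended path splits into two arguments,
# so --enrichments only sees the directory "./config/":
bad = shlex.split("--enrichments ./config/ enrichments")

# Without the space, the path stays intact:
good = shlex.split("--enrichments ./config/enrichments")

print(bad)   # ['--enrichments', './config/', 'enrichments']
print(good)  # ['--enrichments', './config/enrichments']
```

Removing the space gives the intended --enrichments ./config/enrichments, after which the run succeeded.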