Snowplow Mini Initial Setup

Hey guys,

Is there any known problem with the latest Snowplow Mini image on the AWS stack? I followed the requirements, creating the role, policy, security group and log group beforehand, and the instance launched with no errors, but none of the containers came up. When I try to bring them up with sudo docker-compose up -d, I get an HTTP timeout. Should I do something else?

Creating network "snowplow_default" with the default driver
Creating elasticsearch ...
Creating cadvisor ...
Creating nsqlookupd ...
Creating postgres ...
Creating swagger-ui ...

ERROR: for cadvisor UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for elasticsearch UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for nsqlookupd UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for swagger-ui UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for postgres UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for cadvisor UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for elasticsearch UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for nsqlookupd UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for swagger UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

ERROR: for postgres UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Hi, I solved this problem by executing:
sudo COMPOSE_HTTP_TIMEOUT=300 docker-compose up -d

Recreating elasticsearch ... done
Recreating swagger-ui ... done
Recreating cadvisor ... done
Recreating nsqlookupd ... done
Recreating postgres ... done
Creating nsqd ... done
Creating nsqadmin ... done
Creating iglu-server ... done
Creating elasticsearch-loader-bad ... done
Creating kibana ... done
Creating elasticsearch-loader-good ... done
Creating scala-stream-collector-nsq ... done
Creating stream-enrich-nsq ... done
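
If the timeout keeps recurring, the variable can also be set persistently in a .env file next to docker-compose.yml, which Compose reads automatically; passing it inline after sudo is needed otherwise, since sudo strips the caller's environment by default. A minimal sketch (300 is just the value that worked for me):

# .env, placed in the same directory as docker-compose.yml
COMPOSE_HTTP_TIMEOUT=300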

but after that a problem occurred accessing CloudWatch:
ERROR: for cadvisor Cannot start service cadvisor: failed to initialize logging driver: failed to create Cloudwatch log stream: RequestError: send request failed
caused by: Post "https://logs.us-east-1.amazonaws.com/": dial tcp 52.46.146.57:443: i/o timeout
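
That error comes from the awslogs logging driver configured in docker-compose.yml. For reference, the kind of block involved looks roughly like this (the service shown and the group name are examples; check your own compose file):

services:
  cadvisor:
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1
        awslogs-group: snowplow-mini   # example name, not necessarily yours
        awslogs-stream: cadvisor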

I commented out the logging section in the YAML file just to bring the environment up, but even so the errors kept appearing. Two containers now show errors:

stream-enrich-nsq
An error occured: {"error":"ResolutionError","lookupHistory":[{"repository":"Iglu Central","errors":[{"error":"RepoFailure","message":"connect timed out"}],"attempts":1,"lastAttempt":"2022-11-21T14:12:22.183Z"},{"repository":"Iglu Central - GCP Mirror","errors":[{"error":"RepoFailure","message":"connect timed out"}],"attempts":1,"lastAttempt":"2022-11-21T14:12:24.193Z"},{"repository":"Iglu Client Embedded","errors":[{"error":"NotFound"}],"attempts":1,"lastAttempt":"2022-11-21T14:12:24.196Z"},{"repository":"Iglu Server","errors":[{"error":"NotFound"}],"attempts":1,"lastAttempt":"2022-11-21T14:12:20.161Z"}]}
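
Iglu Central and its GCP mirror both time out on connect, which suggests the instance has no outbound internet access rather than a schema problem. A quick check from the box (the schema path is just a well-known example hosted on Iglu Central):

# does the instance reach Iglu Central at all? (10-second timeout)
curl -m 10 -I http://iglucentral.com/schemas/com.snowplowanalytics.snowplow/payload_data/jsonschema/1-0-4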

elasticsearch:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 8589934592 bytes for committing reserved memory.
# An error report file with more information is saved as:
# logs/hs_err_pid1.log
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000600000000, 8589934592, 0) failed; error='Not enough space' (errno=12)

I am using a large instance with 2 vCPUs and 8 GB of memory. 8589934592 bytes is exactly 8 GB, so the Elasticsearch JVM appears to be trying to reserve the whole instance's RAM as heap.
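
A possible workaround is capping the Elasticsearch JVM heap below the instance's RAM. A minimal sketch, assuming the service uses the official Elasticsearch image (which honours ES_JAVA_OPTS); 4g is just an example value:

services:
  elasticsearch:
    environment:
      # keep the heap well under the instance's 8 GB so the mmap can succeed
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g"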


Hi @nando_roz
Thanks for updating your thread - really helpful.
Cheers,
Eddie