I was trying to run the Snowplow collector on Cloud Run using the provided Docker image, but the attempt failed because I wasn't able to pass the config file to the container through the gcloud command. Is there a way to get around this?
I tried another approach: I built an Alpine Java Docker image containing the provided jar file and the config file, and that worked.
The contents of the Dockerfile are as follows.
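(The Dockerfile contents didn't make it into the post. For anyone following along, an Alpine-based image along those lines might look roughly like this — the base image tag, jar name, and config file name below are placeholders, not from the original post:)

```dockerfile
# Sketch only: base image, jar name, and config name are assumptions
FROM eclipse-temurin:11-jre-alpine

WORKDIR /opt/snowplow

# The collector jar from the Snowplow release, plus the HOCON config
COPY snowplow-stream-collector-google-pubsub-2.3.1.jar ./collector.jar
COPY application.config ./application.config

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "collector.jar", "--config", "application.config"]
```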
The command is:
gcloud run deploy snowplow-collector --project project-name --region region --allow-unauthenticated --image gcr-image-url --port 8080 --args "--config application.config"
I also tried keeping the config file in a GCS bucket:
gcloud run deploy snowplow-collector --project project-name --region region --allow-unauthenticated --image gcr-image-url --port 8080 --args "--config link-to-config-file-in-gcs"
The issue is that you're referring to a local file path inside the Docker container, but no such file exists there. You need to mount a volume containing the file so the collector can access it.
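On newer gcloud releases, Cloud Run can mount a Cloud Storage bucket as a volume, which lets the stock collector image read the config without rebuilding anything. A sketch along those lines (bucket, service, and path names are placeholders, and the --add-volume flags require a reasonably recent gcloud):

```shell
# Mount a GCS bucket holding the config into the container at /var/config,
# then point the collector at the mounted file. Names are placeholders.
gcloud run deploy snowplow-collector \
  --image gcr-image-url \
  --region region \
  --allow-unauthenticated \
  --port 8080 \
  --add-volume name=config,type=cloud-storage,bucket=my-config-bucket \
  --add-volume-mount volume=config,mount-path=/var/config \
  --args="--config,/var/config/application.config"
```

Note that --args takes a comma-separated list, and values beginning with a dash need the --args=… form.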
It's up to you, but personally I would prefer to use the original Docker image directly rather than rebuilding one every time the config or the collector version changes.
I used the following approach, which I feel gives an easy, code-driven deploy process. There are obviously multiple ways, but this way all the code lives in the repo and no manual intervention is needed.
Below is the Dockerfile that works for building the Cloud Run image, which is then pushed to Container Registry.
# Start from the official collector image so the binary stays in sync with the release
FROM snowplow/scala-stream-collector-pubsub:2.3.1
USER root

# Working directory for the collector
RUN mkdir -p /var/snowplow/collector/
ENV _SNOWPLOWPATH=/var/snowplow
WORKDIR $_SNOWPLOWPATH

# Bake the HOCON config from the repo's collector/ directory into the image
COPY collector/ /var/config/snowplow/collector/

EXPOSE 80 443 8080

WORKDIR /opt/docker/bin/
# Launch the collector with the baked-in config
ENTRYPOINT ["/opt/docker/bin/snowplow-stream-collector", "--config", "/var/config/snowplow/collector/config.collector.pubsub.hocon"]
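For reference, a config.collector.pubsub.hocon for the 2.x collector generally looks along these lines. This is only a minimal sketch — topic and project names are placeholders, and real deployments usually also tune cookie, paths, and sink backoff settings, so consult the collector's reference config for the full option set:

```hocon
collector {
  interface = "0.0.0.0"
  port = 8080

  streams {
    good = "raw-good"    # Pub/Sub topic for good events (placeholder)
    bad  = "raw-bad"     # Pub/Sub topic for bad rows (placeholder)

    sink {
      enabled = google-pub-sub
      googleProjectId = "my-project"   # placeholder GCP project id
    }
  }
}
```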