Errors in GCP Quick Start Guide

Hi! I am trying to install the GCP Quick Start default configuration, but when I apply the Terraform I get the following error:

The instance group, the instance, and the instance template are created, but the health check is failing.

If I look at the log of the startup script, I see the following error:

Please let me know any ideas of things to check.

Thanks,
Angela

The first error - Terraform not being able to connect to googleapis.com - sounds like your network connection may have dropped out. If you run terraform plan, does it still have steps to complete?

Hi! Thanks for your reply. "terraform plan" worked fine at the beginning. I've tried 3 times with new projects because I thought it was my network. Maybe I should increase the timeouts in the main.tf file?
If I run "terraform plan" now, it stays like this:

This is odd - creating an instance group manager shouldn't generally take this long, and Terraform shouldn't take particularly long to refresh the state, assuming the resource exists. Does the sp-iglu-server-grp resource exist in the project?
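
On the timeouts idea: most resources in the Google provider accept a timeouts block, so you can raise them without changing anything else. A minimal sketch, assuming the group manager in the module looks roughly like this (names here are illustrative, not the actual quickstart code):

    resource "google_compute_instance_group_manager" "iglu_server_grp" {
      name               = "${var.prefix}-iglu-server-grp"
      base_instance_name = "${var.prefix}-iglu-server"
      zone               = var.zone

      version {
        instance_template = google_compute_instance_template.iglu_server.id
      }

      # Bump these if resource creation keeps timing out on a flaky connection
      timeouts {
        create = "30m"
        update = "30m"
        delete = "30m"
      }
    }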

Maybe double check that you aren't hitting any quotas in your project as well? They are listed under IAM & Admin > Quotas in the Google Cloud console.

I too have found errors in the GCP Quick Start Guide:
Within the https://docs.snowplow.io/docs/getting-started-on-snowplow-open-source/quick-start-gcp/#setting-up-your-pipeline section it mentions

Once you have cloned the quickstart-examples repository, you will need to navigate to the pipeline directory to update the input variables in terraform.tfvars.

However, this file does not exist within the quickstart-examples repository. There are, however, the following two files:

  • postgres.terraform.tfvars
  • bigquery.terraform.tfvars

Can someone please advise on how to continue with the tutorial?

You first have to decide where you want the data to end up: Postgres or BigQuery.

Based on that, edit the corresponding file.

You can then run terraform apply --var-file=[file], replacing [file] with the tfvars file you want to use. You can ignore the variable prompts you get after that.
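
For example, a filled-in postgres.terraform.tfvars might look roughly like this. Only prefix comes up later in this thread; the other variable names are assumptions for illustration, so follow the comments in the actual file:

    # Illustrative values only - the real file documents each variable inline
    prefix = "mycompany"            # used to name all pipeline resources

    # The names below are assumptions for illustration:
    project_id = "my-gcp-project"
    region     = "europe-west2"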

Thanks for flagging @Evan_Giordanella - we should have the docs cleaned up today so it's a lot clearer what needs to be done here!

@josh - another thing. The storage bucket that gets generated for the pipeline often causes an issue: it just uses the individual prefix for naming, but bucket names need to be globally unique, so everyone sticking with the default "sp" prefix will run into a collision.

I ran into this as well. I had to change the name in main.tf.
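
For anyone hitting the same collision: GCS bucket names are global across all of Google Cloud, so the rename just has to make the name unique world-wide. A rough sketch of the kind of change involved (resource name illustrative, not the actual quickstart code):

    resource "google_storage_bucket" "pipeline" {
      # Bucket names are globally unique across all GCP projects,
      # so include something specific to you, not just the "sp" prefix
      name     = "mycompany-sp-pipeline-bucket"
      location = var.region
    }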

Hey @Timo_Dechau @Evan_Giordanella, so the quickstart was designed with the idea that each user would define their own “prefix” and “s3_bucket_name”.

I can see two options here but would love some feedback from users!

  1. Set these to empty values by default and validate that they are non-empty at apply time, which forces each user to set a value that should be unique?
  2. Remove the ability to name these at all and instead use a random seed for entropy on first apply, so that the prefix and bucket names always end up unique? (Rough sketches of both below.)
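
To make the trade-off concrete, here are rough sketches of both options (illustrative Terraform only, not the actual quickstart code; the validation block needs Terraform 0.13+):

    # Option 1: empty default plus a validation rule, forcing a unique value
    variable "prefix" {
      type    = string
      default = ""

      validation {
        condition     = length(var.prefix) > 0
        error_message = "Please set a unique prefix for your pipeline resources."
      }
    }

    # Option 2: random entropy generated on first apply, stored in state
    resource "random_id" "bucket_suffix" {
      byte_length = 4
    }

    resource "google_storage_bucket" "pipeline" {
      name     = "${var.prefix}-pipeline-${random_id.bucket_suffix.hex}"
      location = var.region
    }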

Thoughts or other potential solves?

I personally like the idea of the prefix on resources - but in my mind it was to show which infrastructure belonged to Snowplow, hence why I left it as sp at first. I would have called out a variable for an S3 / GCP bucket name in addition to the prefix (assuming that bucket is the only item in the config that has to be globally unique).

@SteveB would you be up for testing out this PR: GCS bucket entropy by jbeemster · Pull Request #50 · snowplow/quickstart-examples · GitHub

It adds the ability to define the bucket name independently, as well as allowing re-use of an existing bucket if you do not want that handled by the quickstart.
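
For anyone reading along before testing the PR, the usual Terraform pattern for "create the bucket unless one already exists" looks roughly like this (a sketch of the pattern, not the PR's actual code):

    variable "bucket_name" {
      description = "Name of the pipeline bucket (must be globally unique)"
      type        = string
    }

    variable "bucket_create" {
      description = "Create the bucket, or re-use an existing one with this name"
      type        = bool
      default     = true
    }

    # Only created when bucket_create = true; otherwise the module
    # simply refers to the existing bucket by var.bucket_name
    resource "google_storage_bucket" "pipeline" {
      count    = var.bucket_create ? 1 : 0
      name     = var.bucket_name
      location = var.region
    }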