If Terraform isn't able to connect to googleapis.com, the first error suggests your network connection may have dropped out. If you run terraform plan, does it still have steps to complete?
Hi! Thanks for your reply. "terraform plan" worked fine at the beginning. I've tried three times with new projects because I thought it was my network. Maybe I should increase the timeouts in the main.tf file?
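Something like this, maybe? (Just a sketch of what I mean - assuming the instance group manager were defined directly in my own configuration rather than inside the quickstart module; the 30m values are placeholders.)

```hcl
resource "google_compute_instance_group_manager" "iglu_server_grp" {
  # ... existing arguments unchanged ...

  # Give Terraform longer to wait before declaring the operation failed
  timeouts {
    create = "30m"
    update = "30m"
    delete = "30m"
  }
}
```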
If I run "terraform plan" now, it just hangs like this:
This is odd - creating an instance group manager shouldn't generally take this long, and Terraform shouldn't take long to refresh the state of a resource that already exists. Does the sp-iglu-server-grp resource exist in the project?
Maybe double check that you aren't hitting any quotas in your project as well? You can see current usage under IAM & Admin → Quotas in the Google Cloud console.
Once you have cloned the quickstart-examples repository, you will need to navigate to the pipeline directory to update the input variables in terraform.tfvars.
However, this file does not exist within the quickstart-examples repository. Instead, there are the following two files:
postgres.terraform.tfvars
bigquery.terraform.tfvars
Can someone please advise on how to continue with the tutorial?
You first have to decide where you want the data to end up: Postgres or BigQuery. Based on that, edit the corresponding file.
You can then run terraform apply --var-file=[file], replacing [file] with the tfvars file you want to use - e.g. terraform apply --var-file=postgres.terraform.tfvars. You can ignore the variable prompts you get after that.
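The edit itself is just filling in values in whichever file you picked, something like this (illustrative values only - the variable names here are examples, so keep the ones already present in the file):

```hcl
# postgres.terraform.tfvars - example values, not defaults
prefix         = "mycompany"                 # short prefix applied to resource names
s3_bucket_name = "mycompany-snowplow-data"   # must be globally unique
```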
@josh - another thing. The storage bucket that gets generated for the pipeline often causes an issue: it only uses the prefix for its name, but bucket names need to be globally unique (so everyone sticking with the default "sp" prefix will run into a collision).
Hey @Timo_Dechau, @Evan_Giordanella - the quickstart was designed with the idea that each user would define their own "prefix" and "s3_bucket_name".
I can see two options here but would love some feedback from users!
1. Set these to empty values by default and validate that they are non-empty at apply time, which would force each user to set a value that should be unique?
2. Remove the ability to name them at all and instead use a random seed for entropy on first apply, so that the prefix and bucket names always end up unique?

Rough sketches of both options below.
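For option 1, a minimal sketch of what the validation could look like (the exact condition and wording are assumptions, not the quickstart's actual code):

```hcl
variable "s3_bucket_name" {
  description = "Pipeline bucket name; must be globally unique"
  type        = string
  default     = ""

  validation {
    condition     = length(var.s3_bucket_name) > 0
    error_message = "s3_bucket_name must be set to a non-empty, globally unique value."
  }
}
```

And for option 2, something along the lines of the random provider (resource and local names are hypothetical):

```hcl
# Generated once on first apply and kept stable in state afterwards
resource "random_id" "bucket_suffix" {
  byte_length = 4
}

locals {
  # e.g. "sp-pipeline-a1b2c3d4"
  bucket_name = "${var.prefix}-pipeline-${random_id.bucket_suffix.hex}"
}
```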
I personally like the idea of the prefix on resources - but in my mind it was to show which infrastructure belonged to Snowplow, hence why I left it as sp at first. I would call out a variable for an S3 / GCP bucket name in addition to the prefix [assuming that bucket is the only item in the config that has to be globally unique].
That adds the ability to define the bucket name independently, as well as allowing re-use of an existing bucket if you do not want the quickstart to handle it.
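A rough sketch of that shape (variable names here are hypothetical, not what the quickstart actually uses):

```hcl
variable "prefix" {
  description = "Prefix applied to the names of resources created by the quickstart"
  type        = string
  default     = "sp"
}

variable "bucket_name" {
  description = "Globally unique bucket name, set independently of prefix"
  type        = string
}

variable "create_bucket" {
  description = "Set to false to re-use an existing bucket instead of creating a new one"
  type        = bool
  default     = true
}
```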