I’ve gone through the Snowplow Quickstart for a pipeline hosted on AWS, where much of our computing stack lives. Our actual Snowflake instance, however, is hosted on Azure, and the Quickstart configuration variables seem to default to AWS URLs. Is there a way to tell the Snowflake loader to query our actual instance, which is something like
“account_name.region.azure.snowflakecomputing.com” instead of “account_name.region.aws.snowflakecomputing.com”?
We don’t support deployment of the Snowplow pipeline itself on Azure just yet. You can have an Azure-based Snowflake or Databricks target, but the pipeline must run on AWS or GCP.
Ok, I think we’re talking about the same thing – our Snowplow pipeline is now successfully deployed to AWS, and that’s where we want it.
Our Snowflake database is hosted on Microsoft Azure, and we want the pipeline to load processed events into that database.
@jpiekos it sounds like there is a way to get the Snowplow pipeline talking to a Snowflake database on Azure?
@scelerat could you share the input variables you have currently supplied to the Snowflake Loader module? Specifically, it should be possible to configure what you need with these two inputs: terraform-aws-snowflake-loader-ec2/variables.tf at master · snowplow-devops/terraform-aws-snowflake-loader-ec2 · GitHub
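For anyone following along, a minimal sketch of what those module inputs might look like for an Azure-hosted Snowflake target. The exact variable names should be checked against the linked `variables.tf`; the ones below (and all values) are illustrative assumptions, not a verified configuration:

```hcl
# Illustrative sketch only -- confirm variable names against the
# module's variables.tf before use.
module "snowflake_loader" {
  source = "snowplow-devops/snowflake-loader-ec2/aws"

  # Your Snowflake account identifier (assumed name/value).
  snowflake_account = "account_name"

  # The Snowflake region string. For Azure-hosted Snowflake this uses
  # Azure-style region names (e.g. "east-us-2"), which is how the
  # loader knows to build an ...azure.snowflakecomputing.com URL.
  snowflake_region = "east-us-2"

  # ... remaining loader inputs (warehouse, database, credentials, etc.)
}
```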
It’s based on the region, so if you use “australia-east” as an example, the loader will automatically know that this is for Azure.
That was it – I had the region wrong: it was supposed to be ‘east-us-2’ instead of ‘us-east-2’. Changing that changed the URL the Snowflake loader tries to reach.
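To spell out the fix for future readers (the variable name here is assumed from the module discussed above, so verify it against your own `variables.tf`):

```hcl
# Snowflake's Azure regions are named differently from AWS regions:
# AWS uses "us-east-2", while the equivalent Azure region in Snowflake
# is written "east-us-2". The region string determines which cloud's
# snowflakecomputing.com hostname the loader targets.

# Before (resolved to an AWS URL):
# snowflake_region = "us-east-2"

# After (resolves to an Azure URL):
snowflake_region = "east-us-2"
```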