Is there a way to use the Storage Write API with the BigQuery Loader?

Looking at the open-source code, the loader appears to use the ‘tabledata.insertAll’ API when writing to BigQuery, which I presume is the legacy GCP streaming API (Use the legacy streaming API | BigQuery | Google Cloud).

So, is there a way to switch the Snowplow BigQuery Loader (Releases · snowplow-incubator/snowplow-bigquery-loader · GitHub) to the ‘BigQuery Storage Write API’ rather than the legacy tabledata.insertAll API?
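
For reference, this is roughly the legacy call I mean. A minimal sketch using the BigQuery Java client; the dataset, table, and row values are placeholders, not the loader's actual code:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;
import com.google.cloud.bigquery.TableId;

import java.util.Map;

public class LegacyStreamingInsert {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Placeholder dataset/table names.
    TableId tableId = TableId.of("my_dataset", "events");

    // tabledata.insertAll is exposed as insertAll() in the Java client;
    // each row is a simple column -> value map.
    InsertAllRequest request =
        InsertAllRequest.newBuilder(tableId)
            .addRow(Map.of("event_id", "e1", "app_id", "my-app"))
            .build();

    InsertAllResponse response = bigquery.insertAll(request);
    if (response.hasErrors()) {
      System.err.println("Insert errors: " + response.getInsertErrors());
    }
  }
}
```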
If anyone knows, please respond!

Hi @buzzz, not at the moment, but we might be looking into it next year. Another option would be to use the Lake Loader, storing the events in GCS in Iceberg format and creating an external table in BigQuery. Not real-time, but does not incur the costs of the Streaming Inserts API either.
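
For comparison, appending rows through the Storage Write API's default stream looks roughly like this with the Java client. This is only a minimal sketch with placeholder project/dataset/table and field names, not anything taken from the loader:

```java
import com.google.cloud.bigquery.storage.v1.BigQueryWriteClient;
import com.google.cloud.bigquery.storage.v1.JsonStreamWriter;
import com.google.cloud.bigquery.storage.v1.TableName;
import org.json.JSONArray;
import org.json.JSONObject;

public class StorageWriteAppend {
  public static void main(String[] args) throws Exception {
    // Placeholder project/dataset/table names.
    TableName table = TableName.of("my-project", "my_dataset", "events");

    try (BigQueryWriteClient client = BigQueryWriteClient.create();
         // Passing the table name targets its default stream,
         // which gives at-least-once delivery similar to streaming inserts.
         JsonStreamWriter writer =
             JsonStreamWriter.newBuilder(table.toString(), client).build()) {

      JSONArray rows = new JSONArray();
      rows.put(new JSONObject().put("event_id", "e1").put("app_id", "my-app"));

      // append() returns an ApiFuture; get() blocks until the server acks.
      writer.append(rows).get();
      System.out.println("Append acknowledged");
    }
  }
}
```

Writes to the default stream are billed under Storage Write API pricing rather than streaming-inserts pricing, which is one reason this switch gets requested.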

Thanks 🙂