Creating a new custom event and routing it to a new designated table

Hi all

I’m doing a POC and have deployed the Snowplow infrastructure on GCP using the quick start and Terraform.
Now I want to create a custom event with 2 sample attributes and send it to a new table in BigQuery.
I have added the new schema to the Iglu registry and created the table in BQ,
but I’m not sure where to define the new table that will ingest this specific custom event.

Furthermore, I have a few general questions:

  1. I saw that no bucket was created in storage to hold the raw event files (only the bucket "sp-bq-loader-dead-letter-XXX", which holds no data at all after a bad request was sent). Did I miss some Terraform configuration?
  2. What is the recommended way to trace the sent events in the collector instances? I’m not sure how to see the collector’s logs. Do I need a different machine for that?

Thanks in advance!

There won’t be a new table: the data will land in the events table in a new column - it’s all handled automatically by the loader. Once you send data (provided it passes validation), it’ll land in that column.
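For example, a minimal sketch with the JS/browser tracker (the collector URL, schema URI and attribute names here are made up for illustration): once an event against, say, `iglu:com.acme/my_custom_event/jsonschema/1-0-0` passes validation, the loader adds a column like `unstruct_event_com_acme_my_custom_event_1` to the events table and loads the data there.

```ts
import { newTracker, trackSelfDescribingEvent } from '@snowplow/browser-tracker';

// Hypothetical names throughout - swap in your own collector URL and schema URI.
newTracker('sp1', 'https://collector.your-domain.com', { appId: 'my-poc' });

// A self-describing (custom) event with two sample attributes.
// After the first valid event reaches the loader, the data lands in a new column
// on the events table, named something like: unstruct_event_com_acme_my_custom_event_1
trackSelfDescribingEvent({
  event: {
    schema: 'iglu:com.acme/my_custom_event/jsonschema/1-0-0',
    data: {
      attribute_a: 'hello',
      attribute_b: 42,
    },
  },
});
```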

I saw that no bucket was created in storage to hold the raw event files
(only the bucket "sp-bq-loader-dead-letter-XXX", which holds no data at all after a bad request was sent)

It’s all handled in-stream on GCP, so there’s no raw file storage. The dead letter bucket is for events that have passed validation but been corrupted in some way such that the loader can’t get them into the table even after retrying. It’s a very rare scenario - the bucket exists mainly for legacy reasons, since the first iterations of the loaders had more chance of hitting it. It’s still possible, but very rare.

What is the recommended way to trace the sent events in the collector instances? I’m not sure how to see the collector’s logs. Do I need a different machine for that?

The collector does have logs, but off the top of my head I’m not sure exactly where to look for them (hopefully someone who does can help). But you wouldn’t normally look in the collector logs to find where your events are - they’ll go to either the good or the bad stream; the good ones will end up in BQ and the bad ones in storage, and you can query them from there.
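If you do want to peek at the bad stream directly, one option is to attach a subscription to the bad Pub/Sub topic and pull from it. A rough sketch with the Google Cloud Pub/Sub Node client - the project ID and subscription name are assumptions, use whatever your Terraform stack actually created:

```ts
import { PubSub } from '@google-cloud/pubsub';

// Hypothetical project ID and subscription name - adjust to your deployment.
const pubsub = new PubSub({ projectId: 'my-snowplow-project' });
const subscription = pubsub.subscription('bad-rows-debug-sub');

// Each message is a failed event (self-describing JSON) that explains
// why it failed validation or enrichment.
subscription.on('message', (message) => {
  console.log(message.data.toString());
  message.ack();
});
```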

Typically, the best process/easiest flow is to use Snowplow Micro to test and debug the tracking code, then once you’ve confirmed everything works as expected, migrate that to prod. It’s a much faster feedback loop for the parts that need it. :slight_smile:
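For completeness, a rough sketch of that local loop (the Docker tag and app ID are assumptions; Micro’s default port is 9090, and for custom schemas you may also need to point Micro at your own Iglu registry):

```ts
// Run Micro locally, e.g.:  docker run -p 9090:9090 snowplow/snowplow-micro:latest
// Then point the tracker at Micro instead of the production collector:
import { newTracker } from '@snowplow/browser-tracker';

newTracker('micro', 'http://localhost:9090', { appId: 'my-poc-local' });

// After sending some test events, inspect what Micro received:
//   curl http://localhost:9090/micro/good
//   curl http://localhost:9090/micro/bad
```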


Ohh, thank you for the detailed reply!

  1. So all I need to do to configure a new custom event is to publish a new schema file to Iglu, or are there more actions required?
  2. Are there mandatory fields in the schema, or can I send only my own fields?

  • So all I need to do to configure a new custom event is to publish a new schema file to Iglu, or are there more actions required?

You need to push the schema, then send valid data against it. The first valid event that hits the loader is what triggers column creation. It might take a few minutes for that to happen.
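For instance, a minimal sketch of such a schema file (the vendor, name and attribute names are made up - adjust them to your own). It’s a self-describing JSON Schema, and the `vendor`/`name`/`version` in `self` are what determine the Iglu URI you track against:

```json
{
  "$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
  "description": "Sample custom event with two attributes",
  "self": {
    "vendor": "com.acme",
    "name": "my_custom_event",
    "format": "jsonschema",
    "version": "1-0-0"
  },
  "type": "object",
  "properties": {
    "attribute_a": {
      "type": "string",
      "maxLength": 255
    },
    "attribute_b": {
      "type": "integer",
      "minimum": 0,
      "maximum": 100000
    }
  },
  "required": ["attribute_a"],
  "additionalProperties": false
}
```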

  • Are there mandatory fields in the schema, or can I send only my own fields?

The schema is what determines what’s mandatory or not: if you set something to ‘required’ in the schema, the data won’t pass validation without it. This page might be helpful.
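As a concrete illustration against the hypothetical schema sketched above, where `attribute_a` is listed under `required`:

```ts
import { trackSelfDescribingEvent } from '@snowplow/browser-tracker';

// This event would fail validation (missing the required "attribute_a")
// and go to the bad stream instead of BigQuery:
trackSelfDescribingEvent({
  event: {
    schema: 'iglu:com.acme/my_custom_event/jsonschema/1-0-0',
    data: { attribute_b: 42 },
  },
});
```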

Aside from what you define in the schema, there’s nothing you’re required to populate - but the trackers will automatically populate some fields, for example event_id, or session_id for the JS tracker. Those are handled automatically too - all you need to do is call the track method [like the example on this page](https://docs.snowplow.io/docs/understanding-tracking-design/out-of-the-box-vs-custom-events-and-entities/).
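One extra note, related to that page: if it fits your model better, the same custom data can also be attached as an entity/context on an out-of-the-box event rather than being its own event (assuming the hypothetical com.acme schema above is designed as an entity). Fields like event_id are still filled in by the tracker either way:

```ts
import { trackPageView } from '@snowplow/browser-tracker';

// Attach the custom data as an entity on a standard page view.
// The loader would put it in a repeated column named something like
// contexts_com_acme_my_custom_event_1 (derived from the schema URI).
trackPageView({
  context: [
    {
      schema: 'iglu:com.acme/my_custom_event/jsonschema/1-0-0',
      data: { attribute_a: 'hello', attribute_b: 42 },
    },
  ],
});
```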