Storing the unstructured events in Redshift

Hello Snowplow community,

My current pipeline for page_view events looks like this:
JavaScript Tracker -> Kinesis Collector -> Kinesis Enricher -> Lambda Function -> S3
I use Athena on top of the S3 files to query the data.

Now, for unstructured events, I followed this link. As per the link, we need to use a Redshift cluster/DB to store the events.

All our data analysis happens in BigQuery. For all the data in S3 (other than the data collected by Snowplow), we have jobs that sync it to BigQuery. We want to reuse those existing jobs to move the Snowplow data from S3 to BigQuery so that our data analysis team can continue working with the new data captured using Snowplow.

I have a couple of questions related to this.

  1. Is it mandatory to use Redshift for unstructured events?
  2. Can the unstructured events be stored in S3? If yes, how can I link event_id and root_id, as is done with
    FOREIGN KEY (root_id) REFERENCES atomic.events(event_id)?
  3. If no, do I need to use Redshift for the structured (page_view) events as well, so that I can link unstructured and structured events using a FOREIGN KEY constraint?

If there is any other way to improve the pipeline, let me know. The end goal is to move the data to BigQuery, as we want a hybrid (both AWS and GCP) cloud solution.

Thanks,
Raghav

@raghavn, why do you keep mentioning Redshift if you load the data into BigQuery? Unstructured events can be stored in BigQuery, Google Cloud Storage, Snowflake DB, S3 or Redshift, depending on your architecture and circumstances.

Redshift is different from the rest in that unstructured events are stored in dedicated child tables. All the other storage targets keep the unstructured event in a dedicated field/column of the atomic events table/record.
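
For example, in those targets the unstruct_event field ends up holding a self-describing JSON wrapper, roughly like this (the com.acme vendor and the field values here are only illustrative):

    {
      "schema": "iglu:com.snowplowanalytics.snowplow/unstruct_event/jsonschema/1-0-0",
      "data": {
        "schema": "iglu:com.acme/button_click/jsonschema/1-0-0",
        "data": {
          "state": "Logged Out",
          "currentPage": "myHome"
        }
      }
    }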

Can the unstructured events be stored in S3?

Sure. If you are running your pipeline in AWS you can have your events in enriched and shredded format. Both could be queried with Athena. The shredded format is typically used when you also need to load data to Redshift. Otherwise, we would recommend either using Athena or Snowplow Analytics SDKs to work on enriched data.

Hello @ihor,

Thanks for the reply.

As per your response, the unstructured events are stored in a dedicated field/column of the atomic events table/record. If that is the case, do I need to create my own schema to track the unstructured events? I presume I do not have to create SQL and JSON schemas as I am not using Redshift or Postgres. The unstructured value will be stored as a JSON field.

We created our custom schema and uploaded it to S3. In the JavaScript tracker, I used the custom schema name ("schema": "example/unstructured_events/jsonschema/1-0-0"). In the enrichment, I supplied my custom JSON file.

The actual path of the schema file is
https://example-schemas.s3.region.amazonaws.com/schemas/example/unstructured_events/jsonschema/1-0-0

In the JavaScript tracker and the enricher JSON file, we have given iglu:example/unstructured_events/jsonschema/1-0-0.

We are getting the error below in S3.

"errors":[{"level":"error","message":"error: Could not find schema with key iglu:example/unstructured_events/jsonschema/1-0-0 in any repository, tried:\n level: \"error\"\n repositories: [\"Iglu Central - GCP Mirror [HTTP]\",\"Iglu Central [HTTP]\",\"Iglu Client Embedded [embedded]\"]\n"}],"failure_tstamp":"2020-01-20T13:46:22.833Z"}

My questions related to this are:
1. Do we need to give the actual path of the JSON schema in the tracker and in the enricher JSON file?
2. Do we need to make the schema folder public?

If you could provide an example of creating a custom schema, that would be really useful.

We have gone through the blogs and technical documentation of Iglu and Snowplow, but we are still struggling to understand how the pipeline works for the custom schemas.

I went through this link too, but it was not much help.

Thanks,
@raghavn

I presume I do not have to create SQL and JSON schemas as I am not using Redshift or Postgres. The unstructured value will be stored as a JSON field.

If you’re not loading to Redshift, you don’t need the SQL DDL or JSONPaths files. You still need the JSON schema itself, though.
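
For reference, a minimal self-describing JSON schema looks something like this (the vendor, name and fields here are placeholders you'd swap for your own):

    {
      "$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
      "description": "Schema for an example custom event",
      "self": {
        "vendor": "com.example",
        "name": "example_event",
        "format": "jsonschema",
        "version": "1-0-0"
      },
      "type": "object",
      "properties": {
        "page": { "type": "string" },
        "state": { "type": "string" }
      },
      "required": ["page"],
      "additionalProperties": false
    }

In a static Iglu repository it would then live under schemas/com.example/example_event/jsonschema/1-0-0.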

1. Do we need to give the actual path of the JSON schema in the tracker and in the enricher JSON file?

I’m not sure I follow exactly the question here, but for each custom event you track, you need to supply the iglu path to the schema. The format of the example you’ve provided looks correct (iglu:example/unstructured_events/jsonschema/1-0-0).

As for the enrichment step of the pipeline, on that side of things you just need to make sure your iglu resolver configuration contains the location of your schema repository (in addition to Iglu Central, which is what is used for standard events).
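
As a rough sketch, that just means adding an extra entry to the repositories array of your resolver configuration alongside Iglu Central (the name, vendor prefix, bucket and region below are placeholders):

    {
      "name": "My Company's Iglu Repo",
      "priority": 5,
      "vendorPrefixes": [ "com.example" ],
      "connection": {
        "http": {
          "uri": "https://<bucketname>.s3.<region>.amazonaws.com"
        }
      }
    }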

2. Do we need to make the schema folder public?

If you’re using S3 to host your schemas, then yes; per-S3-bucket authentication isn’t possible at present. If you require a private repository, then Iglu Server allows you to configure private, API-key-based access.

Best,

Hello @Colm, @ihor,

I have done the steps below so far.

  1. In the JavaScript tracker:

     if (eventType === "trackPageView") {
       window.snowplow(eventType);
     } else {
       window.snowplow("trackUnstructEvent", {
         schema:
           "iglu:com.snowplowanalytics.snowplow/unstruct_event/jsonschema/1-0-0",
         data
       });
     }
    
  2. In the resolver.js of the enrichment server, for resolving the Iglu schema:

     {
       "schema": "iglu:com.snowplowanalytics.iglu/resolver-config/jsonschema/1-0-1",
       "data": {
         "cacheSize": 1000,
         "repositories": [
           {
             "name": "Iglu Central",
             "priority": 0,
             "vendorPrefixes": [ "com.snowplowanalytics" ],
             "connection": {
               "http": {
                 "uri": "http://iglucentral.com"
               }
             }
           },
           {
             "name": "Schemas for Custom Entities",
             "priority": 2,
             "vendorPrefixes": [ "mycompany" ],
             "connection": {
               "http": {
                 "uri": "https://<bucketname>.s3.<region>.amazonaws.com"
               }
             }
           }
         ]
       }
     }
    
  3. The actual schema is stored in the S3 bucket, which is public.

https://<bucketname>.s3.<region>.amazonaws.com/schemas/mycompany/unstructured_events/jsonschema/1-0-0

Note: I haven’t used com.mycompany as the vendor. I have just used mycompany.

{
    "$schema": "http://iglucentral.com/schemas/com.snowplowanalytics.self-desc/schema/jsonschema/1-0-0#",
    "description": "unstructured events tracking",
    "self": {
        "vendor": "mycompany",
        "name": "unstructured_events",
        "format": "jsonschema",
        "version": "1-0-0"
    },
    "type": "object",
    "properties": {
        "State": {
            "type": "string"
        },
        "Current_Page": {
            "type": "string"
        },
        "URL": {
            "type": "string"
        },
        "Referrer_Page": {
            "type": "string"
        },
        "Referrer_URL": {
            "type": "string"
        },
        "Page_Modal": {
            "type": "string"
        },
        "FA": {
            "type": "string"
        },
        <many more non-mandatory fields>
    }
}
  4. In enrichments/unstruct_event.json (this contains only the mandatory fields; there are some other non-mandatory fields as well):

     {
         "schema": "iglu:mycompany/unstructured_events/jsonschema/1-0-0",
         "data": {
             "event": "Page Clicks",
             "State": "Logged Out",
             "Current_Page": "myHome",
             "URL": "<some url>/",
             "Referrer_Page": "NA",
             "Referrer_URL": "NA",
             "Page_Modal": "Home",
             "Section": "State",
             "Type": "abcd"
         }
     }
    

When I run the command below, I get errors.

java -jar snowplow-stream-enrich-kinesis-0.22.0.jar --config config.hocon --resolver file:resolver.js --enrichments file:enrichments

Errors are

NonEmptyList(error: NonEmptyList(error: JSON path name not found
    level: "error"
)
    level: "error"
)

Can you please tell me whether what I have done is correct or not? I believe I have missed something somewhere.

Thanks,
Raghav

Hey @raghavn,

You’re pointing tracking to "iglu:com.snowplowanalytics.snowplow/unstruct_event/jsonschema/1-0-0", but your schema is at iglu:mycompany/unstructured_events/jsonschema/1-0-0.

Tracking unstructured events has nothing to do with enrichments. The enrichments JSON files are configuration files for the enrich process of the pipeline.
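
For context, an enrichment configuration file looks something like the IP anonymisation enrichment below (double-check the exact schema version on Iglu Central rather than copying this verbatim):

    {
      "schema": "iglu:com.snowplowanalytics.snowplow/anon_ip/jsonschema/1-0-0",
      "data": {
        "name": "anon_ip",
        "vendor": "com.snowplowanalytics.snowplow",
        "enabled": true,
        "parameters": {
          "anonOctets": 2
        }
      }
    }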

All you need to do to track a custom event is upload the schema to Iglu, then point your tracking to that schema.
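
For example, with the schema you've created and hosted, the tracking call would point straight at it, something like this (the data values here are just placeholders):

    // schema URI taken from your repository above; field values are illustrative
    window.snowplow("trackUnstructEvent", {
      schema: "iglu:mycompany/unstructured_events/jsonschema/1-0-0",
      data: {
        State: "Logged Out",
        Current_Page: "myHome"
      }
    });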

I’m unsure what the behaviour of the pipeline is if you have an enrichment configuration file which doesn’t correspond to an actual enrichment, so the best thing to do is remove it, in case it causes issues.

I recommend spinning up a Snowplow Mini instance to give yourself a faster feedback loop on the process of setting up custom events - we normally use Mini to test and debug schemas and tracking before taking it to prod.

Finally - I would recommend designing your events to be one schema per event type rather than a single schema to cover all of them.
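
For example (hypothetical names), instead of a single mycompany/unstructured_events schema with an event field, you'd end up with separate, smaller schemas such as:

    iglu:mycompany/page_click/jsonschema/1-0-0
    iglu:mycompany/modal_open/jsonschema/1-0-0

Each schema then only describes the fields relevant to that particular event.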

Best,

Hello @Colm,

Thanks for the updates.
It worked. I realized that there is no need to keep any JSON for the unstructured schema in the enrichments folder.
Meanwhile, I will use Snowplow Mini for testing purposes.

About designing one schema per event type, I do agree with you. I will write a separate schema for each event type.

I really appreciate your help.

Thanks,
@raghavn

Glad to hear you got it working, and hope I’ve helped you get your head around it all!

Hello @colm,

I could successfully import the data from S3 to BigQuery using a BigQuery transfer.
When I use the custom unstruct schema, the event is stored in the unstruct_events column as JSON. Is there any way to flatten the unstruct events in BigQuery?

Our entire analysis platform is on BigQuery.

The current pipeline looks like this:
JavaScript Tracker -> Kinesis Collector -> Kinesis Enricher -> Lambda Function -> S3 -> BigQuery

Thanks,
Raghav

So, we have a version of the pipeline which runs natively on GCP and loads to BigQuery as standard. If the rest of your analytics stack is there, it makes most sense to use that.

We also have a native S3 loader, so you don’t need your own Lambda to do that.

Best,