Given a set of logs from AWS, GCP, and Azure, plus system/service logs, I would like to convert them to a knowledge graph.
Are there any best practices for doing this in Snowplow?
Are you able to share some examples and the sort of queries you would be interested in running / analysing?
For structured log data, the first step (from a Snowplow perspective) would be to decompose each log record into entities, e.g. actor / system / microservice / component / info level, and likely to define a number of events for any callouts you'd want beyond a generic log event. There's a sketch of that decomposition below.
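For illustration, here is a minimal Python sketch of that decomposition step. The Iglu schema URIs, field names, and log format are hypothetical stand-ins for whatever your own schemas would define, not Snowplow-provided ones:

```python
import json

# Hypothetical Iglu schema URIs -- substitute your own vendor/name/version.
ACTOR_SCHEMA = "iglu:com.example/actor/jsonschema/1-0-0"
SYSTEM_SCHEMA = "iglu:com.example/system/jsonschema/1-0-0"
LOG_EVENT_SCHEMA = "iglu:com.example/log_event/jsonschema/1-0-0"

def decompose_log(raw: dict) -> dict:
    """Split one structured log record into a self-describing
    event plus attached entities (contexts)."""
    entities = [
        {"schema": ACTOR_SCHEMA,
         "data": {"id": raw.get("user_arn") or raw.get("principal")}},
        {"schema": SYSTEM_SCHEMA,
         "data": {"cloud": raw.get("provider"),   # aws / gcp / azure
                  "service": raw.get("service"),
                  "component": raw.get("component")}},
    ]
    # A generic log event; callouts (auth failure, timeout, ...) would
    # get their own event schemas instead of this catch-all.
    event = {"schema": LOG_EVENT_SCHEMA,
             "data": {"level": raw.get("level", "INFO"),
                      "message": raw.get("message")}}
    return {"event": event, "contexts": entities}

# Example: a made-up AWS-flavoured log record.
record = {"provider": "aws", "service": "lambda", "component": "auth",
          "principal": "arn:aws:iam::123456789012:user/alice",
          "level": "ERROR", "message": "token validation failed"}
print(json.dumps(decompose_log(record), indent=2))
```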
Is this something you'd want to query in your data warehouse? You can do some graph operations in a data warehouse (e.g. via recursive CTEs), but for other operations (pathfinding, in-degree / out-degree, etc.) you may want to model this in a dedicated graph database / triple store; there's a sketch of those operations below.
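As a rough illustration of the operations a warehouse makes awkward, here is a sketch using the networkx Python library over edges you might export from the warehouse. The node and edge labels are invented for the example:

```python
import networkx as nx

# Hypothetical edges exported from the warehouse:
# (actor or service) -> (service or resource), labelled with the event type.
edges = [
    ("user:alice", "svc:auth", {"event": "login"}),
    ("svc:auth", "svc:tokens", {"event": "issue_token"}),
    ("svc:tokens", "db:sessions", {"event": "write"}),
    ("user:bob", "svc:auth", {"event": "login"}),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# In-degree / out-degree: which components receive or emit the most calls.
print(dict(G.in_degree()))
print(dict(G.out_degree()))

# Pathfinding: how does activity from a given actor reach a datastore?
print(nx.shortest_path(G, "user:alice", "db:sessions"))
```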