Sometimes the context is encoded by the Snowplow tracker into a very large string (more than 7,600 characters), and as a result I get error 413 (Request Entity Too Large).
Why is the context encoded into such a large string?
How can we solve this? Is there a workaround?
We are running Snowplow version 80, using the CloudFront collector and the JavaScript tracker (version 2.6.0).
The function that I call in JavaScript is:
snowplow_260('trackStructEvent', eventCategory, eventAction, eventLabel, eventProperty, eventValue,
  [{
    schema: schemaName,
    data: eventContexts
  }]);
The eventContexts that I send can look like this: '{"accountId":48850,"holdingId":5218981,"ivId":"F000003ZBI","alternativeIvId":"FOUSA04B9A","recommendation_type":"best_match","sessionId":"9089EF3963B5F9A66CDD7FEA3E72C8C9","uuid":"2e11ad0e-8585-4252-bc65-606a64a4accb"}'
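For reference, a context of this size should not come anywhere near 7,600 characters once encoded: the JavaScript tracker wraps custom contexts in a self-describing envelope and base64-encodes the JSON into the cx query parameter, which inflates it by roughly a third. A minimal sizing sketch (the inner schema URI and field values are illustrative stand-ins, and btoa approximates the tracker's URL-safe base64):

// Rough sizing sketch, not the tracker's internal code.
// The envelope schema is the standard Snowplow contexts wrapper;
// base64 encoding inflates the JSON by roughly 4/3.
var payload = JSON.stringify({
  schema: 'iglu:com.snowplowanalytics.snowplow/contexts/jsonschema/1-0-1',
  data: [{
    schema: 'iglu:com.acme/recommendation_context/jsonschema/1-0-0', // hypothetical schema URI
    data: { accountId: 48850, holdingId: 5218981, ivId: 'F000003ZBI' } // truncated example context
  }]
});
console.log('raw:', payload.length, 'encoded:', btoa(payload).length);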
Hi @eltsafan - are you sending your events using GET or POST, and what are your emitter settings?
Are you confident that your individual event contexts are consistently of a similar short length to the one in your example, {"accountId":48850,"holdingId":5218981,"ivId":"F000003ZBI","alternativeIvId":"FOUSA04B9A","recommendation_type":"best_match","sessionId":"9089EF3963B5F9A66CDD7FEA3E72C8C9","uuid":"2e11ad0e-8585-4252-bc65-606a64a4accb"}, which is just 219 characters?
Thanks - do you have an example of a <500-character context that is encoded to 7,600+ characters that you could share? Both the original context and the huge encoded version?
I found that in one specific case our context was very large, and that is what caused the error.
When I looked at the failing request, though, all the contexts were <500 characters.
The reason I thought the error occurred even though the contexts were of normal size is that I wasn't aware that Snowplow retries failed requests (by saving its outbound queue to local storage).
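If anyone hits this again, one way to confirm that queued retries are the culprit is to inspect the tracker's local-storage queue directly. A rough debugging sketch (the exact key names vary by tracker namespace and version, but I'd assume they begin with snowplowOutQueue):

// Debugging sketch: list the tracker's queued (unsent) events in localStorage.
// Assumption: queue keys start with 'snowplowOutQueue'; the suffix depends on
// the tracker namespace and emitter method.
Object.keys(window.localStorage)
  .filter(function (key) { return key.indexOf('snowplowOutQueue') === 0; })
  .forEach(function (key) {
    console.log(key, '->', window.localStorage.getItem(key).length, 'characters queued');
  });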
Hi @eltsafan - ah great, thanks for clarifying. If you are going to be consistently sending large events/contexts to Snowplow, consider switching from GET to POST in your tracker configuration.
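Note that the CloudFront collector only logs GET requests, so switching to POST would also mean moving to a collector that accepts request bodies (such as the Clojure collector or the Scala Stream collector). A minimal sketch of what the tracker initialisation could look like with POST enabled - the tracker name, endpoint, and appId are placeholders; post and bufferSize are the relevant argmap options in the 2.x JavaScript tracker:

// Illustrative initialisation; 'cf' and the endpoint are placeholders.
snowplow_260('newTracker', 'cf', 'collector.example.com', {
  appId: 'my-app',   // hypothetical app id
  post: true,        // send events in POST bodies instead of GET query strings
  bufferSize: 1      // flush after every event; raise it to batch more events per request
});

With POST, large contexts travel in the request body rather than the URL, so the 413 from an over-long query string no longer applies.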