Querystring is empty: no raw event to process - upgrading collector to latest version

We've been scratching our heads over this for days. We tried upgrading our production collector to the latest version; all of the configs are the same except for the new options you added. For the JavaScript tracker, events come through fine. For the Ruby tracker, I'm not getting them at all, and I get the error below only for the Ruby/backend events.

Any pointers on where I can look?
It's very similar to this problem here:

But I have those lines in my config. I'll send the config a little later.

record_hash | d5fcb960c2b0fd6ccb24d78b2566ae2a59adbe6a
record_length | 867
error_count | 1
first_error_level | error
first_error_message | Querystring is empty: no raw event to process
first_error_message_length | 45
collector_payload | CollectorPayload(userAgent='Ruby', ipAddress='xxx', hostname='sp.generalassemb.ly', body=None, collector='ssc-0.14.0-kinesis', timestamp=1537501133781, path='/i', refererUri=None, networkUserId='b52aaf6f-2c2a-49b2-bc86-4d3560f4abdd', querystring=None, contentType=None, encoding='UTF-8', schema='iglu:com.snowplowanalytics.snowplow/CollectorPayload/thrift/1-0-0', headers=['Connection: upgrade', 'Host: xxx', 'X-Real-Ip: xxx', 'X-Forwarded-For: xxx,', 'Accept: /', 'Accept-Encoding: gzip, deflate;q=0.6, identity;q=0.3', 'User-Agent: Ruby', 'X-Forwarded-Port: 443', 'X-Forwarded-Proto: https', 'Timeout-Access: '])
collector_payload_length | 687

That's odd: for a GET request to the collector you should have something in the querystring of the Thrift-serialised record. Do you know the full URL the Ruby tracker constructed before sending the network request?
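One way to check is to reconstruct the GET URL yourself and confirm the query component is non-empty before it ever leaves your app. A minimal sketch using only Ruby's stdlib `uri` library; the endpoint and event parameters here are placeholders, not your real values:

```ruby
require 'uri'

# Placeholder event parameters -- substitute what your tracker actually sends.
params = {
  'e'   => 'pv',                       # event type: page view
  'url' => 'https://example.com/home', # page URL
  'p'   => 'srv',                      # platform
}

# Build the GET URL the tracker would request against a hypothetical collector.
uri = URI::HTTPS.build(
  host:  'sp.example.com',
  path:  '/i',
  query: URI.encode_www_form(params)
)

puts uri        # the full GET URL
puts uri.query  # this is what should land in the collector's querystring field
```

If `uri.query` is populated here but the bad row still shows `querystring=None`, the query string is being dropped somewhere between the tracker and the collector.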

Do you use a load balancer or CloudFront? It looks like the query string is not being passed through to the collector instance (the default CloudFront configuration for GET requests does not forward query strings). If you use a CloudFront distribution in front of an ELB/ALB, make sure it does not cache anything ;-)
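To make the failure mode concrete: if an intermediary drops the query component of the URL, the collector sees exactly the `querystring=None` in the bad row above. A small illustration with a placeholder URL:

```ruby
require 'uri'

# What the tracker sends (placeholder host and params).
full = URI('https://sp.example.com/i?e=pv&p=srv')

# If an intermediary with query-string forwarding disabled strips the
# query component, the collector receives the bare path instead.
stripped = full.dup
stripped.query = nil

puts full.query              # "e=pv&p=srv"
puts stripped.query.inspect  # nil -- matches querystring=None in the bad row
```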

@mike and @grzegorzewald I think I know what it is: I had a syntax error in this section. I'm going to deploy to dev again today and test before trying this in prod again. I think having this section fubar would mess things up. I'll update in a few hours. Thanks,

# Akka has a variety of possible configuration options defined at
# http://doc.akka.io/docs/akka/current/scala/general/configuration.html
akka {
  loglevel = DEBUG # 'OFF' for no logging, 'DEBUG' for all logging.
  loggers = ["akka.event.slf4j.Slf4jLogger"]

  # akka-http is the server the Stream collector uses and has configurable options defined at
  # http://doc.akka.io/docs/akka-http/current/scala/http/configuration.html
  http.server {
    # To obtain the hostname in the collector, the 'remote-address' header
    # should be set. By default, this is disabled, and enabling it
    # adds the 'Remote-Address' header to every request automatically.
    remote-address-header = on

    raw-request-uri-header = on

    # Define the maximum request length (the default is 2048)
    parsing {
      max-uri-length = 32768
      uri-parsing-mode = relaxed
    }
  }
}

We do use an Amazon load balancer, but we've been using it for almost two years now with no changes. I think it's the collector config that's the problem; I'll work on that today.

Closing this case out: it was the akka {} section. Once fixed, we are good. Sorry for the alarm.
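For anyone who hits the same thing: a syntax error like an unclosed brace in the `akka {}` section is easy to catch before deploying. A crude first-pass sketch that just counts braces in a HOCON-style config; it ignores strings and comments, so treat it as a sanity check only, and the truncated sample below is illustrative:

```ruby
# Returns :balanced or :unbalanced for a config snippet's curly braces.
# Crude check -- does not understand quoted strings or comments.
def brace_balance(text)
  depth = 0
  text.each_char do |c|
    depth += 1 if c == '{'
    depth -= 1 if c == '}'
    return :unbalanced if depth < 0
  end
  depth.zero? ? :balanced : :unbalanced
end

# A truncated akka section, as might result from a botched edit.
sample = <<~CONF
  akka {
    http.server {
      parsing {
        max-uri-length = 32768
CONF

puts brace_balance(sample)  # prints "unbalanced"
```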