Connecting the React Native tracker to the Kafka stream collector

Hello Snowplow Community,
I have dockerized micro running locally on port 9090. I am able to send events from react-native and the micro recieves them. However, when I am running the kafka stream collector locally at 9090 and when I try to send an event from react-native tracker, I am getting this error in my console from expo react-native client:

error:  TypeError: Network request failed
    at D:\main\temp\app\node_modules\whatwg-fetch\dist\fetch.umd.js:535:18
    at D:\main\temp\app\node_modules\react-native\Libraries\Core\Timers\JSTimers.js:214:18
    at _callTimer (D:\main\temp\app\node_modules\react-native\Libraries\Core\Timers\JSTimers.js:112:7)
    at Object.callTimers (D:\main\temp\app\node_modules\react-native\Libraries\Core\Timers\JSTimers.js:357:7)
    at MessageQueue.__callFunction (D:\main\temp\app\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:423:27)
    at D:\main\temp\app\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:113:12
    at MessageQueue.__guard (D:\main\temp\app\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:367:9)
    at MessageQueue.callFunctionReturnFlushedQueue (D:\main\temp\app\node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:112:10)
    at debuggerWorker.aca173c4.js:4:902

I have tried to change the CORS settings inside my stream-collector config but that didn’t help. I suspect that the cause is an unsuccessful TCP handshake.

The code that I am using to send out events from my react-native app:

import { createTracker } from '@snowplow/react-native-tracker';

const tracker = createTracker(
  'myTracker',
  {
    endpoint: 'http://192.168.213.68:9090',
  },
  {
    trackerConfig: {
      appId: 'ReactNativeTracker',
    },
  },
);

tracker.trackSelfDescribingEvent({
  schema: 'iglu:com.company/raw_events_public/jsonschema/1-0-0',
  data: trackerData,
});

In the code, trackerData is just a JS object.
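In case it helps, trackerData is shaped roughly like the sketch below. The field names here are made up for illustration; the real ones depend on the com.company/raw_events_public 1-0-0 schema definition:

```javascript
// Hypothetical payload; the actual fields are defined by the
// raw_events_public 1-0-0 schema, not by this example.
const trackerData = {
  eventName: 'button_click',   // hypothetical field
  screen: 'home',              // hypothetical field
  timestamp: Date.now(),       // ms since epoch; must be JSON-serializable
};
```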

If anyone has run into the same problem or has a solution, please let us know. Thank you for your time.

Here’s the configuration file for Kafka stream collector:

collector {
  interface = "0.0.0.0"
  port = 9090

  p3p {
    policyRef = "/w3c/p3p.xml"
    CP = "NOI DSP COR NID PSA OUR IND COM NAV STA"
  }

  crossDomain {
    enabled = false
    domains = ["*"]
    secure = false
  }

  cookie {
    enabled = true
    expiration = 365 days
    name = gaCollectorCookie
    domain = kijanowski.eu
  }

  doNotTrackCookie {
    enabled = false
    name = todoCookieName
    value = todoCookieValue
  }

  cookieBounce {
    enabled = false
    name = "n3pc"
    fallbackNetworkUserId = "00000000-0000-4000-A000-000000000000"
    forwardedProtocolHeader = "X-Forwarded-Proto"
  }

  redirectMacro {
    enabled = false
    placeholder = "[TOKEN]"
  }

  rootResponse {
    enabled = false
    statusCode = 302
    headers = {
      Location = "https://127.0.0.1/",
      X-Custom = "something"
    }
    body = "302, redirecting"
  }

  cors {
    accessControlMaxAge = 5 seconds
  }

  prometheusMetrics {
    enabled = true
    durationBucketsInSeconds = [0.1, 3, 10]
  }

  streams {
    good = ga-success
    bad = ga-fail
    useIpAddressAsPartitionKey = false
    sink {
      enabled = kafka
      brokers = "0.0.0.0:9092"
      retries = 0

      producerConf {
        acks = all
      }
    }

    buffer {
      byteLimit = 100000 
      recordLimit = 1 # Not supported by Kafka; will be ignored
      timeLimit = 1 # -> Kafka's linger.ms
    }
  }
}

akka {
  loglevel = DEBUG # 'OFF' for no logging, 'DEBUG' for all logging.
  loggers = ["akka.event.slf4j.Slf4jLogger"]

  http.server {
    remote-address-header = on

    raw-request-uri-header = on

    parsing {
      max-uri-length = 32768
      uri-parsing-mode = relaxed
    }
  }
}

Hi @Gurankit_Pal_Singh, could you please share details of how you are running the Kafka collector? For example, do you launch it via Docker? If so, could you share the Docker command line you run?

The error suggests that your tracker cannot reach your collector on the network. Your collector config and tracking code look OK, so I guess it’s a problem with how you’re deploying the collector.

Hey @istreeter,
I run Snowplow’s JAR file (downloaded from here). The exact command used to run the collector is:
java -jar snowplow-stream-collector-kafka-2.8.2.jar --config .\configs\stream-collector.conf

The stream-collector.conf is identical to the configuration posted above.

Hi @Gurankit_Pal_Singh , I don’t know exactly what’s wrong but I’ll share a couple of ideas.

When you run the collector, do you see a message like REST interface bound to /0.0.0.0:9090? Are there any errors in the collector’s log output?

A good first step is to confirm the collector is open for HTTP requests. What happens if you visit http://0.0.0.0:9090/health in your browser? If it says “OK” then your collector is healthy.

Next… what happens if you visit http://192.168.213.68:9090/health in your browser? This is the address you have configured in your tracker code, but there’s a chance this address does not resolve to your collector.

If the last step doesn’t work, try changing your tracker code to something like:

  endpoint: `http://127.0.0.1:9090`
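
You could also run the same health check from inside the app itself, since the phone or emulator may see the network differently from your laptop’s browser. A minimal sketch, assuming your endpoint value from the tracker config:

```javascript
// Assumed endpoint, taken from the tracker config in this thread.
const COLLECTOR_ENDPOINT = 'http://192.168.213.68:9090';

// Build the /health URL, stripping any trailing slash first.
function healthUrl(endpoint) {
  return endpoint.replace(/\/+$/, '') + '/health';
}

// Fetch the collector's health endpoint and log the result.
async function checkCollector(endpoint) {
  const res = await fetch(healthUrl(endpoint));
  const body = await res.text();
  console.log(`collector health: ${res.status} ${body}`);
  return res.ok;
}

// Call this from the app (e.g. inside a useEffect) before tracking:
// checkCollector(COLLECTOR_ENDPOINT)
//   .catch(err => console.log('collector unreachable:', err.message));
```

If this logs “collector unreachable” while the browser check succeeds, the problem is between the device and your machine rather than in the collector itself.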

Hey @istreeter, yes, I see that output. More specifically, this is what my collector logs:

[main] WARN org.apache.kafka.clients.ClientUtils - Couldn't resolve server another.host:9092 from bootstrap.servers as DNS resolution failed for another.host
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.2.1
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 55783d3133a5a49a
[kafka-producer-network-thread | producer-2] INFO org.apache.kafka.clients.Metadata - Cluster ID: AEZRKq5oQO6x0Ro3nmIqTw
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - Cluster ID: AEZRKq5oQO6x0Ro3nmIqTw
[scala-stream-collector-akka.actor.default-dispatcher-5] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
[main] INFO com.snowplowanalytics.snowplow.collectors.scalastream.telemetry.TelemetryAkkaService - Telemetry enabled
[scala-stream-collector-akka.actor.default-dispatcher-7] INFO com.snowplowanalytics.snowplow.collectors.scalastream.KafkaCollector$ - REST interface bound to /[0:0:0:0:0:0:0:0]:9090
[scala-stream-collector-akka.actor.default-dispatcher-7] INFO com.snowplowanalytics.snowplow.collectors.scalastream.KafkaCollector$ - Setting health endpoint to healthy

When I visit http://0.0.0.0:9090/health in my browser, it is unable to find the page. However, visiting http://192.168.213.68:9090/health (or, more generally, http://<my_current_ip>:9090/health) returns “OK”.

Changing the endpoint to http://127.0.0.1:9090 again leads to the “network request failed” error in my Expo app when I try to send data to the collector. Interestingly, when I visit http://127.0.0.1:9090/health in my browser, it shows “OK”, which is weird. Are there any other solutions I can try?