Snowbridge Confluent Kafka Configs

Hello Folks,

I am trying to use Snowbridge to replicate data from the enriched Kafka topic to a Pub/Sub target topic for the loader.

Error:

time="2023-10-26T12:54:47Z" level=info msg="Failed to read SASL handshake header : EOF\n" brokers="************:9092" source=kafka topic=qb-enrich-good-topic version=2.7.0
time="2023-10-26T12:54:47Z" level=info msg="Closed connection to broker ************:9092\n" brokers="************:9092" source=kafka topic=qb-enrich-good-topic version=2.7.0
time="2023-10-26T12:54:47Z" level=info msg="Error while sending ApiVersionsRequest to broker ************:9092: EOF\n" brokers="************:9092" source=kafka topic=qb-enrich-good-topic version=2.7.0
time="2023-10-26T12:54:47Z" level=info msg="client/metadata got error from broker -1 while fetching metadata: EOF\n" brokers="************:9092" source=kafka topic=qb-enrich-good-topic version=2.7.0
time="2023-10-26T12:54:47Z" level=info msg="client/metadata no available broker to send metadata request to" brokers="************:9092" source=kafka topic=qb-enrich-good-topic version=2.7.0
time="2023-10-26T12:54:47Z" level=info msg="client/brokers resurrecting 1 dead seed brokers" brokers="************:9092" source=kafka topic=qb-enrich-good-topic version=2.7.0
time="2023-10-26T12:54:47Z" level=info msg="Closing Client" brokers="************:9092" source=kafka topic=qb-enrich-good-topic version=2.7.0
time="2023-10-26T12:54:47Z" level=error msg="Failed to create Kafka client: kafka: client has run out of available brokers to talk to: EOF" error="Failed to create Kafka client: kafka: client has run out of available brokers to talk to: EOF"

My current configs:

source {
  use "kafka" {
    # Kafka broker connection string
    brokers = "*******.gcp.confluent.cloud:9092"

    # Kafka topic name
    topic_name = "qb-enrich-good-topic"

    concurrent_writes = 15
    assignor = "sticky"

    # The Kafka version
    target_version = "2.7.0"

    # Kafka consumer group name
    consumer_name = "snowplow-stream-replicator"

    # Kafka offset configuration: -1 stands for read all new messages, -2 stands for the oldest offset that is still available on the broker
    offsets_initial = -2

    # Whether to enable SASL support (default: false)
    enable_sasl = true

    # SASL auth
    sasl_username = "abcdedfdfdf"
    sasl_password = env.SASL_PASSWORD

    # The SASL algorithm to use: "sha512" or "sha256" (default: "sha512")
    sasl_algorithm = "sha256"

    # Whether to skip verifying the SSL certificate chain (default: false)
    skip_verify_tls = true
  }
}

This is the enrichment output config, which works perfectly fine:

"good": {
        "type": "Kafka"
        "topicName": "qb-enrich-good-topic"
        "bootstrapServers": "****.gcp.confluent.cloud:9092"
        "producerConf": {
          "acks": "all"
          "ssl.endpoint.identification.algorithm": "https"
          "sasl.mechanism": "PLAIN"
          "security.protocol": "SASL_SSL"
          "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"********\" password=\"************\";"
        }
        "partitionKey": ""
        "headers": []
      }

At first glance, I can think of a few things to look at here:

Firstly, a quirk of the implementation of the Snowbridge config is that TLS will only be enabled when cert_file and key_file are provided with valid SSL certs (ca_file is optional, though I'm not sure what difference providing it makes here). I had a weird case before where I had to provide them in order to get the SASL configuration to work. It's a bit ugly, but I resolved that by generating the files locally and providing them, along the lines of the sketch below.
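For context, that means adding something like the following to the kafka source block (the paths here are just placeholders for wherever you mount the files, not values from the docs):

    # The optional certificate file for client authentication
    cert_file = "/path/to/client.crt"

    # The optional key file for client authentication
    key_file = "/path/to/client.key"

    # The optional certificate authority file for TLS client authentication
    ca_file = "/path/to/ca.crt"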

Another thing to look at is "sasl.mechanism": "PLAIN" in the enrich config. The comment in the Snowbridge config above is actually incorrect: "sha512", "sha256" and "plaintext" are all acceptable values for sasl_algorithm.
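So, assuming the enrich config is the source of truth here, the Snowbridge equivalent of "sasl.mechanism": "PLAIN" would be:

    sasl_algorithm = "plaintext"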

Finally, I have had instances in the past where I got this exact error and it turned out to be a networking issue: the machine running Snowbridge didn't have network access to the Kafka cluster. It's worth double-checking that aspect of things.

Those are the parts I’d look into debugging first anyway!


Thank you @Colm. The Collector and Enrich pods are running on the same GKE cluster as the Snowbridge pods, so I don't think it's due to networking.

Coming to sasl.mechanism: PLAIN, I think you are suggesting that sasl_algorithm be set to "plaintext". I can do that and retry.

My feeling is that it could be due to the cert and key files. I will check on that part as well.

Now the error message is different. I have verified that the files are mounted at the paths mentioned below.

Configs:

      # The optional certificate file for client authentication
      cert_file = "/snowbridge/config/snowbridge.crt"

      # The optional key file for client authentication
      key_file = "/snowbridge/config/snowbridge.key"
time="2023-10-26T15:43:57Z" level=error msg="open : no such file or directory" error="open : no such file or directory"

Thank you @Colm, the suggested approach worked.

The crt, key and ca-crt files must all be provided for the Kafka connection to work.
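In case it helps anyone else, the relevant part of my working source config now looks roughly like this. The cert and key paths are the ones from my pod above; the ca_file path and the username are placeholders, so adjust to wherever you mount your files:

    enable_sasl = true
    sasl_username = "********"
    sasl_password = env.SASL_PASSWORD

    # Matches "sasl.mechanism": "PLAIN" in the enrich config
    sasl_algorithm = "plaintext"

    # All three files had to be provided before the connection worked
    cert_file = "/snowbridge/config/snowbridge.crt"
    key_file  = "/snowbridge/config/snowbridge.key"
    ca_file   = "/snowbridge/config/ca.crt"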

I am marking this as closed.
