Unable to connect to the Snowplow collector on port 443 with SSL enabled

I am trying to enable SSL at the collector level and run it on port 443, but I am unable to do so. The collector runs perfectly fine on a plain HTTP port (80 or any other), but not on port 443 with SSL enabled. I followed the instructions in the official Snowplow collector documentation but am still missing something. Below is the configuration I am using for the collector.

# Copyright (c) 2013-2020 Snowplow Analytics Ltd. All rights reserved.
#
# This program is licensed to you under the Apache License Version 2.0, and
# you may not use this file except in compliance with the Apache License
# Version 2.0.  You may obtain a copy of the Apache License Version 2.0 at
# http://www.apache.org/licenses/LICENSE-2.0.
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the Apache License Version 2.0 is distributed on an "AS
# IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.  See the Apache License Version 2.0 for the specific language
# governing permissions and limitations there under.

# This file (config.hocon.sample) contains a template with
# configuration options for the Scala Stream Collector.
#
# To use, copy this to 'application.conf' and modify the configuration options.

# 'collector' contains configuration options for the main Scala collector.
collector {
# The collector runs as a web service specified on the following interface and port.
interface = "0.0.0.0"
port = 80

# optional SSL/TLS configuration
ssl {
  enable = true
  # whether to redirect HTTP to HTTPS
  redirect = true
  port = 443
}

paths {
  # "/com.acme/track" = "/com.snowplowanalytics.snowplow/tp2"
  # "/com.acme/redirect" = "/r/tp2"
  # "/com.acme/iglu" = "/com.snowplowanalytics.iglu/v1"
}

# Configure the P3P policy header.
p3p {
  policyRef = "/w3c/p3p.xml"
  CP = "NOI DSP COR NID PSA OUR IND COM NAV STA"
}

# Cross domain policy configuration.
# If "enabled" is set to "false", the collector will respond with a 404 to the /crossdomain.xml
# route.
crossDomain {
  enabled = false
  # Domains that are granted access, *.acme.com will match http://acme.com and http://sub.acme.com
  domains = [ "*" ]
  # Whether to only grant access to HTTPS or both HTTPS and HTTP sources
  secure = true
}

# The collector returns a cookie to clients for user identification
# with the following domain and expiration.
cookie {
  enabled = true
  expiration = "365 days" # e.g. "365 days"
  # Network cookie name
  name = sp
  domains = ["devopslearn.online" # e.g. "domain.com" -> any origin domain ending with this will be 
  matched and domain.com will be returned
    # ... more domains
 ]
# ... more domains
# If specified, the fallback domain will be used if none of the Origin header hosts matches the list of
# cookie domains configured above. (For example, if there is no Origin header.)
fallbackDomain = "devopslearn.online"
secure = true
httpOnly = false
# The sameSite is optional. You can choose to not specify the attribute, or you can use `Strict`,
# `Lax` or `None` to limit the cookie sent context.
#   Strict: the cookie will only be sent along with "same-site" requests.
#   Lax: the cookie will be sent with same-site requests, and with cross-site top-level navigation.
#   None: the cookie will be sent with same-site and cross-site requests.
sameSite = "None"
}

# If you have a do not track cookie in place, the Scala Stream Collector can respect it by
# completely bypassing the processing of an incoming request carrying this cookie, the collector
# will simply reply by a 200 saying "do not track".
# The cookie name and value must match the configuration below, where the names of the cookies
# must match entirely and the value could be a regular expression.
doNotTrackCookie {
  enabled = false
  name = dnt
  value = "[Tt][Rr][Uu][Ee]"
}

# When enabled and the cookie specified above is missing, performs a redirect to itself to check
# if third-party cookies are blocked using the specified name. If they are indeed blocked,
# fallbackNetworkId is used instead of generating a new random one.
cookieBounce {
  enabled = false
  # The name of the request parameter which will be used on redirects checking that third-party
  # cookies work.
  name = "n3pc"
  name = ${?COLLECTOR_COOKIE_BOUNCE_NAME}
  # Network user id to fallback to when third-party cookies are blocked.
  fallbackNetworkUserId = "00000000-0000-4000-A000-000000000000"
  # Optionally, specify the name of the header containing the originating protocol for use in the
  # bounce redirect location. Use this if behind a load balancer that performs SSL termination.
  # The value of this header must be http or https. Example, if behind an AWS Classic ELB.
  forwardedProtocolHeader = "X-Forwarded-Proto"
}

# When enabled, redirect prefix `r/` will be enabled and its query parameters resolved.
# Otherwise the request prefixed with `r/` will be dropped with `404 Not Found`
# Custom redirects configured in `paths` can still be used.
enableDefaultRedirect = false

# When enabled, the redirect url passed via the `u` query parameter is scanned for a placeholder
# token. All instances of that token are replaced with the network ID. If the placeholder isn't
# specified, the default value is `${SP_NUID}`.
redirectMacro {
  enabled = false
  # Optional custom placeholder token (defaults to the literal `${SP_NUID}`)
  placeholder = "[TOKEN]"
}

# Customize response handling for requests for the root path ("/").
# Useful if you need to redirect to web content or privacy policies regarding the use of this collector.
rootResponse {
  enabled = false
  statusCode = 302
  # Optional, defaults to empty map
  headers = {
    Location = "https://127.0.0.1/",
    X-Custom = "something"
  }
  # Optional, defaults to empty string
  body = "302, redirecting"
}

# Configuration related to CORS preflight requests
cors {
  # The Access-Control-Max-Age response header indicates how long the results of a preflight
  # request can be cached. -1 seconds disables the cache. Chromium max is 10m, Firefox is 24h.
  accessControlMaxAge = 5 seconds
}

# Configuration of prometheus http metrics
prometheusMetrics {
  # If metrics are enabled then all requests will be logged as prometheus metrics
  # and '/metrics' endpoint will return the report about the requests
  enabled = false
  # Custom buckets for http_request_duration_seconds_bucket duration metric
  #durationBucketsInSeconds = [0.1, 3, 10]
}

streams {
  # Events which have successfully been collected will be stored in the good stream/topic
  good = test-raw-good

  # Events that are too big (w.r.t Kinesis 1MB limit) will be stored in the bad stream/topic
  bad = test-raw-bad

# Whether to use the incoming event's ip as the partition key for the good stream/topic
# Note: Nsq does not make use of partition key.
useIpAddressAsPartitionKey = false

# Enable the chosen sink by uncommenting the appropriate configuration
sink {
  # Choose between kinesis, google-pub-sub, kafka, nsq, or stdout.
  # To use stdout, comment or remove everything in the "collector.streams.sink" section except
  # "enabled" which should be set to "stdout".
  enabled = googlepubsub

  # Or Google Pubsub
  googleProjectId = test-learn-gcp
  ## Minimum, maximum and total backoff periods, in milliseconds,
  ## and the multiplier between two backoffs
  backoffPolicy {
    minBackoff = 1000
    maxBackoff = 5000
    totalBackoff = 10000 # must be >= 10000
    multiplier = 2
  }
}

# Incoming events are stored in a buffer before being sent to Kinesis/Kafka.
# Note: Buffering is not supported by NSQ.
# The buffer is emptied whenever:
# - the number of stored records reaches record-limit or
# - the combined size of the stored records reaches byte-limit or
# - the time in milliseconds since the buffer was last emptied reaches time-limit
buffer {
  byteLimit = 1
  recordLimit = 1 # Not supported by Kafka; will be ignored
  timeLimit = 1
}
}

}

# Akka has a variety of possible configuration options defined at
# http://doc.akka.io/docs/akka/current/scala/general/configuration.html
akka {
 loglevel = OFF # 'OFF' for no logging, 'DEBUG' for all logging.
 loggers = ["akka.event.slf4j.Slf4jLogger"]

# akka-http is the server the Stream collector uses and has configurable options defined at
# http://doc.akka.io/docs/akka-http/current/scala/http/configuration.html
http.server {
  # To obtain the hostname in the collector, the 'remote-address' header
  # should be set. By default, this is disabled, and enabling it
  # adds the 'Remote-Address' header to every request automatically.
  remote-address-header = on

  raw-request-uri-header = on

  # Define the maximum request length (the default is 2048)
  parsing {
    max-uri-length = 32768
    uri-parsing-mode = relaxed
  }
}

# By default setting `collector.ssl` relies on JSSE (Java Secure Socket
# Extension) to enable secure communication.
# To override the default settings set the following section as per
# https://lightbend.github.io/ssl-config/ExampleSSLConfig.html
 ssl-config {
   debug = {
     ssl = true
   }
   keyManager = {
     stores = [
       {type = "PKCS12", classpath = false, path = "/root/certificate/collector.p12", password = "" }
     ]
   }
   loose {
     disableHostnameVerification = true
   }
 }
}
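
For reference, this is roughly how I start the collector; the jar file name and config path below are placeholders for my actual ones:

java -jar snowplow-stream-collector-google-pubsub-<version>.jar --config /path/to/application.conf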

If I comment out port 80, the collector throws an error on that line, and if I leave it uncommented, the collector starts running on port 80 only.
Please help me figure out how to configure an HTTPS-based collector.

What sort of networking configuration do you have set up? Generally it's pretty common to terminate SSL at the load balancer, so you can avoid having to override the default SSL settings in the collector configuration.

@mike Thanks for your reply. Our infra is hosted in GCP, and yes, we do have a load balancer with SSL termination happening on it, but the internal traffic (LB to collector instance) is not encrypted, so I would like to terminate SSL on the instance as well.

What load balancing service are you using? Depending on the service and your networking configuration, GCP may encrypt this traffic for you automatically (e.g., HTTP(S) load balancing => VMs in a VPC).

You can of course use SSL in addition to this, but you'll need to make sure the keyManager is configured with the certificate that you've installed on your backend instances (note that Google won't validate this cert, though).
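
As a rough sketch (the file names and the password here are placeholders, not values from your setup), you can bundle the instance's certificate and private key into the PKCS12 store that your keyManager points at:

openssl pkcs12 -export -in collector.crt -inkey collector.key -name collector -out /root/certificate/collector.p12 -passout pass:changeme

Whatever password you use for -passout has to match the password field in the keyManager stores entry of your config.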

The issue is resolved. The problem was with the version of the Snowplow collector I was using; once I switched to the latest collector jar, the problem went away.
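
For anyone who finds this later: once the collector is up, a quick way to check from the instance itself that it is actually serving HTTPS (the -k flag skips certificate verification, useful with a self-signed cert) is something like:

curl -k https://localhost:443/health

A healthy collector should answer with a 200 OK.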