Setting up multiple environments

Morning all, I would like to separate my clusters into OLAP and OLTP, and as a result I need two separate agents in my config, preferably with a single HQ. Please help me with the config below. I have questions:

  1. Is this the correct way to create two agents/environments/clusters in the config?
name: lenses-6-preview

# This docker compose offers a preview of the upcoming Lenses 6 release (Panoptes).
# You can start it like this:
#     ACCEPT_EULA=true docker compose up
#
# To enable persistence, uncomment the data volumes for PostgreSQL and demo Kafka.
# To clean up persistent data, run 'docker compose down -v'.

services:
  # MQTT broker (Eclipse Mosquitto)
  mqtt5:
    image: eclipse-mosquitto
    container_name: mqtt5
    ports:
      - "1883:1883" # default mqtt port
      - "9001:9001" # default mqtt port for websockets
    volumes:
      - ./mosquitto/config:/mosquitto/config:rw
      - ./mosquitto/data:/mosquitto/data:rw
      - ./mosquitto/log:/mosquitto/log:rw
    restart: unless-stopped

  # HQ is the control plane of Lenses and where users connect
  lenses-hq:
    image: lensting/lenses-hq:6-preview
    command: /app/config.yaml
    ports:
      - 9991:9991
    depends_on:
      postgres:
        condition: service_healthy
      create-configs:
        condition: service_completed_successfully
    healthcheck:
      test: ["CMD", "lenses-hq", "is-up", "lenses-hq:9991"]
      interval: 10s
      timeout: 3s
      retries: 5
      start_period: 5s
    volumes:
      - lenses-hq-volume:/app

  # First agent container using the first set of configuration files and its own database.
  lenses-agent:
    image: lensting/lenses-agent:6-preview
    environment:
      DEMO_HQ_URL: http://lenses-hq:9991
      DEMO_HQ_USER: admin
      DEMO_HQ_PASSWORD: admin
      DEMO_AGENTKEY_PATH: /mnt/settings/DEMO_AGENTKEY
      LENSES_HEAP_OPTS: -Xmx1536m -Xms512m
    depends_on:
      postgres:
        condition: service_healthy
      lenses-hq:
        condition: service_healthy
      create-configs:
        condition: service_completed_successfully
    volumes:
      - lenses-agent-volume:/mnt/settings

  # Second agent container using a duplicate configuration with a different agent key and its own database.
  lenses-agent2:
    image: lensting/lenses-agent:6-preview
    environment:
      DEMO_HQ_URL: http://lenses-hq:9991
      DEMO_HQ_USER: admin
      DEMO_HQ_PASSWORD: admin
      DEMO_AGENTKEY_PATH: /mnt/settings/DEMO_AGENTKEY
      LENSES_HEAP_OPTS: -Xmx1536m -Xms512m
    depends_on:
      postgres:
        condition: service_healthy
      lenses-hq:
        condition: service_healthy
      create-configs:
        condition: service_completed_successfully
    volumes:
      - lenses-agent2-volume:/mnt/settings

  # PostgreSQL is required for both HQ and Agents to store their configuration and data.
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: lenses
      POSTGRES_PASSWORD: lenses
    depends_on:
      create-configs:
        condition: service_completed_successfully
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U lenses"]
      interval: 5s
      timeout: 5s
      retries: 5
    volumes:
      - postgres-volume:/docker-entrypoint-initdb.d

  # Demo Kafka service (with Schema Registry, Connect, etc.)
  demo-kafka:
    image: lensesio/fast-data-dev:3.9.0
    hostname: demo-kafka
    environment:
      ADV_HOST: demo-kafka
      RUNNING_SAMPLEDATA: 1
      RUNTESTS: 0
      KAFKA_LISTENERS: PLAINTEXT://:9092,DOCKERCOMPOSE://:19092,CONTROLLER://:16062
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://demo-kafka:9092,DOCKERCOMPOSE://demo-kafka:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: >
        DOCKERCOMPOSE:PLAINTEXT,CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,
        SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
      DISABLE: debezium-mongodb,debezium-mysql,debezium-sqlserver,debezium-jdbc
    ports:
      - 9092:9092
      - 8081:8081
    volumes:
      - ./snowflake-kafka-connector-3.1.0.jar:/connectors/snowflake.jar

  # This service creates the required configuration files for Lenses HQ, Agents, and PostgreSQL.
  create-configs:
    image: busybox
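    # printenv writes each of the environment variables below (each one a
    # complete config file) into the shared volumes the other services mount.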
    command: >
      sh -c '
        printenv hq.config.yaml > /hq/config.yaml;
        printenv agent.lenses.conf > /agent/lenses.conf;
        printenv agent.provisioning.yaml > /agent/provisioning.yaml;
        printenv postgres.init.sql > /postgres/init.sql;
        printenv agent2.lenses.conf > /agent2/lenses.conf;
        printenv agent2.provisioning.yaml > /agent2/provisioning.yaml;
      '
    environment:
      hq.config.yaml: |
        http:
          address: :9991
          secureSessionCookies: false
        auth:
          administrators:
            - admin
          users:
            - username: admin
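              # bcrypt hash; '$$' escapes '$' so docker compose does not interpolate it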
              password: $$2a$$10$$DPQYpxj4Y2iTWeuF1n.ItewXnbYXh5/E9lQwDJ/cI/.gBboW2Hodm
        agents:
          address: :10000
        database:
          host: postgres:5432
          username: lenses
          password: lenses
          database: hq
        logger:
          mode: text
        license:
          key: license_key_2SFZ0BesCNu6NFv0-EOSIvY22ChSzNWXa5nSds2l4z3y7aBgRPKCVnaeMlS57hHNVboR2kKaQ8Mtv1LFt0MPBBACGhDT5If8PmTraUM5xXLz4MYv
          acceptEULA: ${ACCEPT_EULA}
      agent.lenses.conf: |
        lenses {
          storage.postgres.host = "postgres"
          storage.postgres.port = 5432
          storage.postgres.database = "agent1"
          storage.postgres.username = "lenses"
          storage.postgres.password = "lenses"
          provisioning.path = "/mnt/settings"
          sql.state.dir = "/data/lsql-state-dir"
          secret.file = "/data/security.conf"
          storage.directory = "/data/lenses"
          connectors.info = [
            {
              class.name  = "com.snowflake.kafka.connector.SnowflakeSinkConnector"
              name        = "Snowflake Kafka Connector"
              sink        = true
              description = "Writes Kafka data into Snowflake for analytics."
              author      = "Snowflake"
            }
          ]
        }
      agent.provisioning.yaml: |
        lensesHq:
          - configuration:
              agentKey:
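                # '$$' escapes '$': compose passes the literal ${LENSESHQ_AGENT_KEY}
                # through, to be resolved when the agent starts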
                value: $${LENSESHQ_AGENT_KEY}
              port:
                value: 10000
              server:
                value: lenses-hq
            name: lenses-hq
            tags: ['hq']
            version: 1
        kafka:
          - name: kafka
            version: 1
            tags: ['kafka', 'dev']
            configuration:
              metricsType:
                value: JMX
              metricsPort:
                value: 9581
              kafkaBootstrapServers:
                value: [PLAINTEXT://demo-kafka:19092]
              protocol:
                value: PLAINTEXT
        confluentSchemaRegistry:
          - name: schema-registry
            version: 1
            tags: ['dev']
            configuration:
              schemaRegistryUrls:
                value: [http://demo-kafka:8081]
              metricsType:
                value: JMX
              metricsPort:
                value: 9582
        connect:
          - name: dev
            version: 1
            tags: ['dev']
            configuration:
              workers:
                value: [http://demo-kafka:8083]
              aes256Key:
                value: 0123456789abcdef0123456789abcdef
              metricsType:
                value: JMX
              metricsPort:
                value: 9584
      agent2.lenses.conf: |
        lenses {
          storage.postgres.host = "postgres"
          storage.postgres.port = 5432
          storage.postgres.database = "agent2"
          storage.postgres.username = "lenses"
          storage.postgres.password = "lenses"
          provisioning.path = "/mnt/settings"
          sql.state.dir = "/data/lsql-state-dir"
          secret.file = "/data/security.conf"
          storage.directory = "/data/lenses"
          connectors.info = [
            {
              class.name  = "com.snowflake.kafka.connector.SnowflakeSinkConnector"
              name        = "Snowflake Kafka Connector"
              sink        = true
              description = "Writes Kafka data into Snowflake for analytics."
              author      = "Snowflake"
            }
          ]
        }
      agent2.provisioning.yaml: |
        lensesHq:
          - configuration:
              agentKey:
                value: "agent_key_dF0bQf7trcH4dPne_lg2WfnchylzaCZEJL0ZbdHgTKjVVGc7I"
              port:
                value: 10000
              server:
                value: lenses-hq
            name: lenses-hq
            tags: ['hq']
            version: 1
        kafka:
          - name: kafka
            version: 1
            tags: ['kafka', 'dev']
            configuration:
              metricsType:
                value: JMX
              metricsPort:
                value: 9581
              kafkaBootstrapServers:
                value: [PLAINTEXT://demo-kafka:19092]
              protocol:
                value: PLAINTEXT
        confluentSchemaRegistry:
          - name: schema-registry
            version: 1
            tags: ['dev']
            configuration:
              schemaRegistryUrls:
                value: [http://demo-kafka:8081]
              metricsType:
                value: JMX
              metricsPort:
                value: 9582
        connect:
          - name: dev
            version: 1
            tags: ['dev']
            configuration:
              workers:
                value: [http://demo-kafka:8083]
              aes256Key:
                value: 0123456789abcdef0123456789abcdef
              metricsType:
                value: JMX
              metricsPort:
                value: 9584
      postgres.init.sql: |
        CREATE DATABASE hq;
        CREATE DATABASE agent1;
        CREATE DATABASE agent2;
        CREATE DATABASE metabaseappdb;
    volumes:
      - lenses-hq-volume:/hq
      - postgres-volume:/postgres
      - lenses-agent-volume:/agent
      - lenses-agent2-volume:/agent2


volumes:
  lenses-hq-volume:
  lenses-agent-volume:
  lenses-agent2-volume:
  postgres-volume:
  postgres-data-volume:
  kafka-data-volume:
  urandom:

networks:
  default:
    name: lenses

Hello Socrates,

From a quick look, without running your docker-compose file, it seems correct. If there are any problems running it, they are likely caused by typos or wrong names here and there, but the structure and approach are correct.

For the second agent, there is a nitpick: you set both an agentKey in agent2.provisioning.yaml and the DEMO_* environment variables. Since these overlap, you should choose one of the two.
In a production environment, you would first set up the HQ, then issue agent keys and use them to configure your agents. You would never use the DEMO_* variables.
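
As a sketch of that production-style setup for your second agent (the same service as in your file, just with the demo settings removed, so registration relies only on the key already in agent2.provisioning.yaml):

  lenses-agent2:
    image: lensting/lenses-agent:6-preview
    environment:
      # no DEMO_* variables: the agent authenticates with the key
      # in agent2.provisioning.yaml, issued by HQ beforehand
      LENSES_HEAP_OPTS: -Xmx1536m -Xms512m
    depends_on:
      postgres:
        condition: service_healthy
      lenses-hq:
        condition: service_healthy
      create-configs:
        condition: service_completed_successfully
    volumes:
      - lenses-agent2-volume:/mnt/settings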

The DEMO_* variables were added as a feature to the agent Docker image only for demoing the community edition. They make registering an agent with the HQ seamless, so someone running the docker-compose will have an agent automatically registered and running. They also help when bringing the docker-compose up and down, since that wipes HQ’s database and new agent keys are needed each time. But they were never meant for production.
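
If you would rather stay demo-style for both agents, the fix is the mirror image: keep the DEMO_* variables and replace the hard-coded key in agent2.provisioning.yaml with the runtime placeholder, exactly as agent.provisioning.yaml already does:

              agentKey:
                # filled in at agent startup from the key obtained via the DEMO_* flow
                value: $${LENSESHQ_AGENT_KEY}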

If you tried your docker-compose and it failed, paste the logs and we may be able to help. We also have a docker-compose that we use internally for now that creates two agents and two demo Kafkas. If you are interested, I can search for it and share it with you.

Hey Socrates,

I’ve attached a docker compose with two functioning Kafkas and two agents for you. It also has the data generator turned off, so you can use them both for development. Let me know if you have more questions. You can download it here.
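
For orientation, the rough shape of it is a second fast-data-dev service next to the first, with the second agent's provisioning pointing at it. A minimal sketch (the demo-kafka2 name is illustrative, not taken from the attached file; the remaining settings mirror demo-kafka):

  demo-kafka2:
    image: lensesio/fast-data-dev:3.9.0
    hostname: demo-kafka2
    environment:
      ADV_HOST: demo-kafka2
      RUNNING_SAMPLEDATA: 0   # 0 switches the running data generator off
      RUNTESTS: 0
      KAFKA_LISTENERS: PLAINTEXT://:9092,DOCKERCOMPOSE://:19092,CONTROLLER://:16062
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://demo-kafka2:9092,DOCKERCOMPOSE://demo-kafka2:19092

and in agent2.provisioning.yaml the bootstrap servers then become [PLAINTEXT://demo-kafka2:19092].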

Thanks

Drew
