Error publishing a Kafka message
Hi everyone, I'm getting an error when publishing a message from Go. It seems to be related to host.docker.internal:172.17.0.1.
producer := kafka.NewKafkaProducer()
kafka.Publish("olá", "readtest", producer)
I added the line 127.0.0.1 host.docker.internal to my /etc/hosts, but I'm still getting a timeout. @wesleywillians @argentinaluiz
%3|1655827555.464|FAIL|rdkafka#producer-1| [thrd:host.docker.internal:9094/bootstrap]: host.docker.internal:9094/bootstrap: Connect to ipv4#172.17.0.1:9094 failed: Connection timed out (after 130166ms in state CONNECT)
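For context, here is a minimal sketch of what a producer wrapper like kafka.NewKafkaProducer / kafka.Publish usually looks like with the confluent-kafka-go client; the KafkaBootstrapServers variable name is an assumption, not necessarily what the course code uses. The point is that whatever ends up in bootstrap.servers is the address librdkafka is trying to reach, host.docker.internal:9094 in the log above:

package kafka

import (
	"log"
	"os"

	ckafka "github.com/confluentinc/confluent-kafka-go/kafka"
)

// NewKafkaProducer builds a producer pointed at the broker address taken
// from the environment (e.g. host.docker.internal:9094, the OUTSIDE listener).
func NewKafkaProducer() *ckafka.Producer {
	configMap := &ckafka.ConfigMap{
		"bootstrap.servers": os.Getenv("KafkaBootstrapServers"), // assumption: set via .env
	}
	producer, err := ckafka.NewProducer(configMap)
	if err != nil {
		log.Fatal(err.Error())
	}
	return producer
}

// Publish sends one message to the given topic, letting librdkafka pick the partition.
func Publish(msg string, topic string, producer *ckafka.Producer) error {
	message := &ckafka.Message{
		TopicPartition: ckafka.TopicPartition{Topic: &topic, Partition: ckafka.PartitionAny},
		Value:          []byte(msg),
	}
	return producer.Produce(message, nil)
}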
I'm getting this error too. OS: Windows 10 (WSL 2)
I'm getting the same error as well.
Hi everyone, how's it going?
Did you add this to the Golang application's docker-compose.yaml?
https://github.com/codeedu/imersao8/blob/main/simulator/docker-compose.yaml#L9-L10
For host.docker.internal to work, this configuration is required.
I managed to get it working here. Even with the host.docker.internal line added to /etc/hosts, it only worked after I put the Go project on the same Docker network as Kafka.
docker-compose.yaml

version: "3"
services:
  app:
    networks:
      - default
    build: .
    container_name: simulator
    volumes:
      - .:/go/src
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

networks:
  default:
    external:
      name: kafka_default
Without this network configuration, nothing I tried worked here on macOS.
@argentinaluiz I had already added this snippet:
extra_hosts:
  - "host.docker.internal:172.17.0.1"
@fabianosanttana You're right, with that it works now, thanks! 👍
I got it working using these tips on Windows 11 with WSL2. Thank you!
I also noticed I had forgotten to change the hosts file. In my case there was already a "host.docker.internal" entry with a different IP, added by Docker Desktop. I left my line with the localhost IP at the end of the hosts file, so I'm probably overriding that entry.
After making this change to hosts and adding the network (I had already applied the Dockerfile command Luiz pointed out), I had to take the containers down and recreate them (docker-compose down in both terminals, the Go one and the Kafka one).
When I brought up the Go container (simulator), it failed the first time, complaining that it couldn't create the external network, but running it again worked, and docker network ls shows the new network is there.
EDIT:
I just forgot to mention that all the host.docker.internal:172.17.0.1 entries were left exactly like that.
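One thing that may help anyone still debugging the hosts-file side of this: you can check which IP the Go process actually resolves for host.docker.internal from inside the simulator container. This is just a quick debugging sketch, not part of the course code:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Shows whether the extra_hosts / etc/hosts entry (e.g. 172.17.0.1) is the
	// one being picked up inside the container.
	addrs, err := net.LookupHost("host.docker.internal")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("host.docker.internal resolves to:", addrs)
}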
Great, everyone, glad you managed to move forward!
If you have any questions, we're here to help.
Wow, I'm running into the same error. I've already added the host entry to /etc/hosts and I keep getting:
%3|1676066051.462|FAIL|rdkafka#producer-1| [thrd:host.docker.internal:172/bootstrap]: host.docker.internal:172/bootstrap: Connect to ipv4#127.0.0.1:172 failed: Connection refused (after 0ms in state CONNECT, 30 identical error(s) suppressed)
The docker-compose.yml config:
version: "3"
services:
  app:
    networks:
      - default
    build: .
    container_name: simulator
    volumes:
      - .:/go/src
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

networks:
  default:
    external:
      name: kafka_default
The kafka/docker-compose.yaml config:
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9094:9094"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENERS: INTERNAL://:9092,OUTSIDE://:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://host.docker.internal:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

  kafka-topics-generator:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - kafka
    command: >
      bash -c
      "sleep 5s &&
      kafka-topics --create --topic=route.new-direction --if-not-exists --bootstrap-server=kafka:9092 &&
      kafka-topics --create --topic=route.new-position --if-not-exists --bootstrap-server=kafka:9092"

  control-center:
    image: confluentinc/cp-enterprise-control-center:6.0.1
    hostname: control-center
    depends_on:
      - kafka
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'kafka:9092'
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_CONNECT_CLUSTER: http://kafka-connect:8083
      PORT: 9021
    extra_hosts:
      - "host.docker.internal:172.17.0.1"
I ran docker-compose down and then up again on the container, and I still can't get it to work.
*I forgot to mention that I'm using Ubuntu.
I managed to solve it. It was the .env file that had the wrong host value. I pointed it at 9094 and it worked.
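For anyone else who ends up with a Connection refused on a strange port (the :172 in the log above), the value the app reads from .env needs to be the full OUTSIDE listener address, host and port. A minimal sanity-check sketch, assuming the variable is called KafkaBootstrapServers (that name is a guess; use whatever your .env defines):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Expected .env entry (variable name is an assumption):
	//   KafkaBootstrapServers=host.docker.internal:9094
	addr := os.Getenv("KafkaBootstrapServers")
	if !strings.HasSuffix(addr, ":9094") {
		fmt.Printf("bootstrap address %q does not point at the OUTSIDE listener (port 9094)\n", addr)
	}
}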
Good evening, everyone.
@argentinaluiz @moreiralud
I ran into this same error when publishing a message to Kafka.
%4|1680225536.133|FAIL|rdkafka#producer-1| [thrd:host.docker.internal:9094/bootstrap]: host.docker.internal:9094/bootstrap: Connection setup timed out in state CONNECT (after 30038ms in state CONNECT)
I'm using WSL 2 on Windows 10.
I added 127.0.0.1 host.docker.internal to my /etc/hosts. On Windows, I also added 127.0.0.1 host.docker.internal to C:\Windows\system32\drivers\etc\hosts.
- kafka/docker-compose.yaml
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: localhost:22888:23888
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9094:9094"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENERS: INTERNAL://:9092,OUTSIDE://:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,OUTSIDE://host.docker.internal:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_BOOTSTRAP_SERVERS: "PLAINTEXT://host.docker.internal:9094"
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

  kafka-topics-generator:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - kafka
    command: >
      bash -c
      "sleep 5s &&
      kafka-topics --create --topic=payments --if-not-exists --bootstrap-server=kafka:9092"

  control-center:
    image: confluentinc/cp-enterprise-control-center:6.0.1
    container_name: kafka-control-center
    hostname: control-center
    depends_on:
      - kafka
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'kafka:9092'
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_CONNECT_CLUSTER: http://kafka-connect:8083
      PORT: 9021
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

  kafka-connect:
    image: confluentinc/cp-kafka-connect-base:6.0.0
    container_name: kafka-connect
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:9092"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_LOG4J_APPENDER_STDOUT_LAYOUT_CONVERSIONPATTERN: "[%d] %p %X{connector.context}%m (%c:%L)%n"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      # Optional settings to include to support Confluent Control Center
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      # CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      # ---------------
      CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components,/data/connect-jars
    # If you want to use the Confluent Hub installer to d/l component, but make them available
    # when running this offline, spin up the stack once and then run:
    #   docker cp kafka-connect:/usr/share/confluent-hub-components ./data/connect-jars
    volumes:
      - $PWD/data:/data
    # In the command section, $ are replaced with $$ to avoid the error 'Invalid interpolation format for "command" option'
    command:
      - bash
      - -c
      - |
        echo "Installing Connector"
        confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:10.0.1
        #
        echo "Launching Kafka Connect worker"
        /etc/confluent/docker/run &
        #
        sleep infinity
    extra_hosts:
      - "host.docker.internal:172.17.0.1"
application/docker-compose.yaml
version: "3.8"
services:
  app:
    container_name: appbank
    build: .
    ports:
      - "50052:50051"
    volumes:
      - .:/go/src/
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

  db:
    build: .docker/postgres
    container_name: dbbank
    tty: true
    volumes:
      - ./.docker/dbdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=codebank
    ports:
      - "5432:5432"
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

  pgadmin:
    image: dpage/pgadmin4:6.21
    container_name: clientpg
    tty: true
    environment:
      - [email protected]
      - PGADMIN_DEFAULT_PASSWORD=123456
    ports:
      - "9000:80"
    depends_on:
      - db
    extra_hosts:
      - "host.docker.internal:172.17.0.1"
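Since the error above is a connection setup timeout rather than a DNS failure, it may be worth checking from inside the application container whether the broker's OUTSIDE listener is reachable at all before involving Kafka. A small debugging sketch (plain TCP dial, nothing Kafka-specific):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try to reach the advertised OUTSIDE listener from inside the container.
	conn, err := net.DialTimeout("tcp", "host.docker.internal:9094", 5*time.Second)
	if err != nil {
		fmt.Println("broker not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("TCP connection to host.docker.internal:9094 succeeded")
}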