Build fails when importing an external package with Golang and Docker

I can't get this simple Confluent Kafka example to build with Docker. There is probably some trick with the Go path or a special build argument that I can't find; I have tried all the default Go folder layouts with no success.


Dockerfile


FROM golang:alpine AS builder

# Set necessary environment variables needed for our image
ENV GO111MODULE=on \
    CGO_ENABLED=0 \
    GOOS=linux \
    GOARCH=amd64

ADD . /go/app

# Install librdkafka
RUN apk add librdkafka-dev pkgconf

# Move to working directory /go/app
WORKDIR /go/app

# Copy and download dependencies using go mod
COPY go.mod .
RUN go mod download

# Copy the code into the container
COPY . .

# Build the application
RUN go build -o main .

# Run tests
RUN go test ./... -v

# Move to /dist directory as the place for the resulting binary
WORKDIR /dist

# Copy the binary from the build stage
RUN cp /go/app/main .

############################
# STEP 2 build a small image
############################
FROM scratch

COPY --from=builder /dist/main /

# Command to run the executable
ENTRYPOINT ["/main"]
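For reference, the `COPY go.mod .` step above expects a module file at the project root. A minimal sketch of what such a file might look like — the module path and the pinned version here are hypothetical, not taken from the question:

```
// go.mod — hypothetical module path and version, for illustration only
module example.com/producer-example

go 1.13

require github.com/confluentinc/confluent-kafka-go v1.4.0
```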

Error


./producer_example.go:37:12: undefined: kafka.NewProducer
./producer_example.go:37:31: undefined: kafka.ConfigMap
./producer_example.go:48:28: undefined: kafka.Event
./producer_example.go:51:19: undefined: kafka.Message
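For what it's worth, `undefined` errors like these are the typical symptom of building confluent-kafka-go with cgo disabled: the package wraps librdkafka through cgo, so with `CGO_ENABLED=0` its cgo-backed source files are excluded from the build and symbols such as `kafka.NewProducer` are never compiled. A minimal sketch of the Dockerfile lines that avoid this on Alpine (assuming the `musl` build tag supported by newer releases of the bindings):

```
# Enable cgo so the librdkafka bindings are actually compiled,
# and install a C toolchain plus the librdkafka headers.
ENV CGO_ENABLED=1
RUN apk add --no-cache gcc musl-dev librdkafka-dev pkgconf

# The musl tag selects the musl-compatible variant of the bindings on Alpine.
RUN go build -tags musl -o main .
```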


1 Answer

米脂

Edit: I can confirm that building with the musl build tag works:

FROM golang:alpine as build

WORKDIR /go/src/app

# Set necessary environment variables needed for our image
ENV GOOS=linux GOARCH=amd64

COPY . .

RUN apk update && apk add gcc librdkafka-dev openssl-libs-static zlib-static zstd-libs libsasl librdkafka-static lz4-dev lz4-static zstd-static libc-dev musl-dev

RUN go build -tags musl -ldflags '-w -extldflags "-static"' -o main

FROM scratch

COPY --from=build /go/src/app/main /

# Command to run the executable
ENTRYPOINT ["/main"]

This works together with the test setup described below.

Well, version 1.4.0 of github.com/confluentinc/confluent-kafka-go/kafka seems to be generally incompatible with the current state of alpine 3.11, at least. Furthermore, despite my best efforts, I was not able to build a statically compiled binary suitable for FROM scratch. However, I was able to get your code running against a current version of Kafka. The image is a bit bigger, but I guess working and a bit bigger is better than elegant and not working.

1. Downgrade to confluent-kafka-go@v1.1.0

This is as simple as:

$ go get -u -v github.com/confluentinc/confluent-kafka-go@v1.1.0

2. Modify your Dockerfile

You were missing some build dependencies to begin with. Obviously, we also need a runtime dependency, since we are no longer using FROM scratch. Note that I also tried to simplify the Dockerfile, and left in jwilder/dockerize so that I do not have to time my test setup:

FROM golang:alpine as build

# The default location is /go/src
WORKDIR /go/src/app

ENV GOOS=linux \
    GOARCH=amd64

# We simply copy everything to /go/src/app
COPY . .

# Add the required build libraries
RUN apk update && apk add gcc librdkafka-dev zstd-libs libsasl lz4-dev libc-dev musl-dev

# Run the build
RUN go build -o main

FROM alpine

# We use dockerize to make sure the kafka server is up and running before the command starts.
ENV DOCKERIZE_VERSION v0.6.1
ENV KAFKA kafka

# Add dockerize
RUN apk --no-cache upgrade && apk --no-cache --virtual .get add curl \
 && curl -L -O https://github.com/jwilder/dockerize/releases/download/${DOCKERIZE_VERSION}/dockerize-linux-amd64-${DOCKERIZE_VERSION}.tar.gz \
 && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
 && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
 && apk del .get \
 # Add the runtime dependency.
 && apk add --no-cache librdkafka

# Fetch the binary
COPY --from=build /go/src/app/main /

# Wait for kafka to come up, only then start /main
ENTRYPOINT ["sh","-c","/usr/local/bin/dockerize -wait tcp://${KAFKA}:9092 /main kafka test"]

3. Test it

I created a docker-compose.yaml to check that everything works:

version: "3.7"
services:
  zookeeper:
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
    volumes:
      - 'kafka_data:/bitnami'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
  server:
    image: fals/kafka-main
    build: .
    command: "kafka test"
volumes:
  zookeeper_data:
  kafka_data:

You can check that the setup works with:

$ docker-compose build && docker-compose up -d && docker-compose logs -f server
[...]
server_1     | 2020/04/18 18:37:33 Problem with dial: dial tcp 172.24.0.4:9092: connect: connection refused. Sleeping 1s
server_1     | 2020/04/18 18:37:34 Connected to tcp://kafka:9092
server_1     | Created Producer rdkafka#producer-1
server_1     | Delivered message to topic test [0] at offset 0
server_1     | 2020/04/18 18:37:36 Command finished successfully.
kfka_server_1 exited with code 0
