Kubernetes(K8S) Kafka – Strimzi ver

Deploying Kafka on K8S is a somewhat involved process.
This post therefore walks through clustering Kafka with Strimzi, an operator tool.
(It is based on a configuration that worked for me after some searching around, so it may not work as-is on other versions or environments.)

1. Install Strimzi on K8S

Create a namespace named kafka with the command below.

kubectl create namespace kafka

Once the namespace exists, install Strimzi with the command below.
The -n flag installs it into the kafka namespace.

kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka

Check that the installation succeeded.

kubectl get pod -n kafka
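The install manifests normally create a Deployment named strimzi-cluster-operator; assuming that default name, you can also wait for the operator to become available before continuing:

kubectl wait deployment/strimzi-cluster-operator --for=condition=Available --timeout=120s -n kafka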

2. Install the Kafka cluster registered with Strimzi

Install Strimzi's example Kafka cluster with the command below.

kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka 
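The example manifest defines a Kafka resource named my-cluster. As a quick check (assuming that name), you can wait for the cluster to report Ready:

kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka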

3. Set up persistent storage

Strimzi's Kafka uses PVCs (PersistentVolumeClaims).
The PVC names follow the pattern below (the 0 in the broker name is the JBOD volume id):

Role        PVC
Broker      data-0-<cluster name>-kafka-<index>
ZooKeeper   data-<cluster name>-zookeeper-<index>

The PVC names used in this post are listed below.
(Three of each are created because replicas is set to 3.)

Role        PVC
Broker      data-0-my-cluster-kafka-0
Broker      data-0-my-cluster-kafka-1
Broker      data-0-my-cluster-kafka-2
ZooKeeper   data-my-cluster-zookeeper-0
ZooKeeper   data-my-cluster-zookeeper-1
ZooKeeper   data-my-cluster-zookeeper-2

Turn the table above into a pv.yaml file, defining a PV and a matching PVC for each entry.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kafka-storage
reclaimPolicy: Delete
provisioner: kubernetes.io/no-provisioner
mountOptions:
  - debug
volumeBindingMode: Immediate
---
apiVersion: v1
# PersistentVolume that backs the claim below
kind: PersistentVolume
metadata:
  # Name of this PV
  name: data-my-cluster-zookeeper-pv-0
spec:
  storageClassName: kafka-storage
  capacity:
    # PV storage capacity
    storage: 5Gi
  # ReadWriteOnce: mounted read-write by a single node
  accessModes:
    - ReadWriteOnce
  # Host path where the ZooKeeper data is persisted
  hostPath:
    path: "/var/lib/zookeeper"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-cluster-zookeeper-0
spec:
  storageClassName: kafka-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
# PersistentVolume that backs the claim below
kind: PersistentVolume
metadata:
  # Name of this PV
  name: data-my-cluster-zookeeper-pv-1
spec:
  storageClassName: kafka-storage
  capacity:
    # PV storage capacity
    storage: 5Gi
  # ReadWriteOnce: mounted read-write by a single node
  accessModes:
    - ReadWriteOnce
  # Host path where the ZooKeeper data is persisted
  hostPath:
    path: "/var/lib/zookeeper"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-cluster-zookeeper-1
spec:
  storageClassName: kafka-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
# PersistentVolume that backs the claim below
kind: PersistentVolume
metadata:
  # Name of this PV
  name: data-my-cluster-zookeeper-pv-2
spec:
  storageClassName: kafka-storage
  capacity:
    # PV storage capacity
    storage: 5Gi
  # ReadWriteOnce: mounted read-write by a single node
  accessModes:
    - ReadWriteOnce
  # Host path where the ZooKeeper data is persisted
  hostPath:
    path: "/var/lib/zookeeper"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-cluster-zookeeper-2
spec:
  storageClassName: kafka-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
# PersistentVolume that backs the claim below
kind: PersistentVolume
metadata:
  # Name of this PV
  name: data-0-my-cluster-kafka-pv-0
spec:
  storageClassName: kafka-storage
  capacity:
    # PV storage capacity
    storage: 5Gi
  # ReadWriteOnce: mounted read-write by a single node
  accessModes:
    - ReadWriteOnce
  # Host path where the Kafka broker data is persisted
  hostPath:
    path: "/var/lib/kafka"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-0-my-cluster-kafka-0
spec:
  storageClassName: kafka-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
# PersistentVolume that backs the claim below
kind: PersistentVolume
metadata:
  # Name of this PV
  name: data-0-my-cluster-kafka-pv-1
spec:
  storageClassName: kafka-storage
  capacity:
    # PV storage capacity
    storage: 5Gi
  # ReadWriteOnce: mounted read-write by a single node
  accessModes:
    - ReadWriteOnce
  # Host path where the Kafka broker data is persisted
  hostPath:
    path: "/var/lib/kafka"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-0-my-cluster-kafka-1
spec:
  storageClassName: kafka-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
# PersistentVolume that backs the claim below
kind: PersistentVolume
metadata:
  # Name of this PV
  name: data-0-my-cluster-kafka-pv-2
spec:
  storageClassName: kafka-storage
  capacity:
    # PV storage capacity
    storage: 5Gi
  # ReadWriteOnce: mounted read-write by a single node
  accessModes:
    - ReadWriteOnce
  # Host path where the Kafka broker data is persisted
  hostPath:
    path: "/var/lib/kafka"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-0-my-cluster-kafka-2
spec:
  storageClassName: kafka-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Create the PVs and PVCs with the following command.

kubectl apply -f ./pv.yaml -n kafka
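To confirm that the static volumes were bound, list the PVs and PVCs (PVs are cluster-scoped, so no namespace flag is needed); each PVC from the table above should show a Bound status:

kubectl get pv
kubectl get pvc -n kafka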

4. Create the brokers and ZooKeeper

This post uses 3 replicas and sets the storage class to the kafka-storage StorageClass created above.
Create a kafka.yaml file as follows.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.5.1
    replicas: 3
    template:
      pod:
        securityContext:
          runAsUser: 0
          fsGroup: 0
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: nodeport
        tls: false
        configuration:            
          bootstrap:               
            nodePort: 30000
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.5"
    storage:
      type: jbod
      volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
        class: "kafka-storage"
  zookeeper:
    template:
      pod:
        securityContext:
          runAsUser: 0
          fsGroup: 0
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
      class: "kafka-storage"
  entityOperator:
    topicOperator: {}
    userOperator: {}

Create the Kafka cluster on K8S with the command below.

kubectl apply -f ./kafka.yaml -n kafka
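Once the broker and ZooKeeper pods are running, a quick produce/consume test can be done. The sketch below follows the Strimzi quick-start pattern and makes a few assumptions: the internal bootstrap service my-cluster-kafka-bootstrap:9092 exposed by the plain listener, a test topic named my-topic, and an image tag (latest-kafka-3.5.1) that should be adjusted to match your Strimzi/Kafka version. Because the Topic Operator is enabled (topicOperator: {}), the topic can be declared as a KafkaTopic resource (saved here as topic.yaml):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  # Test topic managed by the Topic Operator; the label ties it to the my-cluster Kafka
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3

kubectl apply -f ./topic.yaml -n kafka

kubectl -n kafka run kafka-producer -ti --rm=true --restart=Never --image=quay.io/strimzi/kafka:latest-kafka-3.5.1 -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

kubectl -n kafka run kafka-consumer -ti --rm=true --restart=Never --image=quay.io/strimzi/kafka:latest-kafka-3.5.1 -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning

Messages typed into the producer should appear in the consumer.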

5. Kafka UI

Kafka-UI is a utility that displays brokers, topics, and connection details in a web UI, so we can use it to check the state of the cluster.

Create a kafka-ui.yaml file as follows.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: kafka-ui
  name: kafka-ui
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: kafka-ui
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: kafka-ui
    spec:
      containers:
        - env:
            - name: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS
              value: my-cluster-kafka-bootstrap:9092
            - name: KAFKA_CLUSTERS_0_ZOOKEEPER
              value: my-cluster-zookeeper-client:2181
          image:  provectuslabs/kafka-ui
          name: kafka-ui
          ports:
            - containerPort: 8080
          resources: {}
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: kafka-ui
  name: kafka-ui
  namespace: kafka
spec:
  ports:
    - name: "9000"
      port: 9000
      nodePort: 31012
      targetPort: 8080
  selector:
    io.kompose.service: kafka-ui
  type: LoadBalancer
status:
  loadBalancer: {}

Deploy Kafka-UI to K8S with the command below.

kubectl apply -f ./kafka-ui.yaml -n kafka
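After the Service is created, the UI listens on container port 8080 and is exposed on nodePort 31012 (or on port 9000 via an external IP, if your cluster provisions LoadBalancers). Check the Service and then open the UI in a browser:

kubectl get svc kafka-ui -n kafka

Then browse to http://<node-ip>:31012 and confirm that the brokers and topics of my-cluster show up.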
