
2023-10-12


A Simple Java Kafka Example


Introduction

A simple Java Kafka code example.

Maven dependency configuration

<!-- kafka -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.11.0.0</version>
</dependency>

Creating the Kafka producer and consumer

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.*;

/**
 * @author lw1243925457
 */
public class KafkaUtil {

    /**
     * Create a String/String consumer with manual offset commits and subscribe it to the topic.
     */
    public static KafkaConsumer<String, String> createConsumer(String servers, String topic) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", servers);
        properties.put("group.id", "group-1");
        properties.put("enable.auto.commit", "false");
        properties.put("auto.commit.interval.ms", "1000");
        properties.put("auto.offset.reset", "earliest");
        properties.put("session.timeout.ms", "30000");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<String, String>(properties);
        kafkaConsumer.subscribe(Arrays.asList(topic));
        return kafkaConsumer;
    }

    /**
     * Poll in an endless loop, print each record value and commit the offset asynchronously.
     */
    public static void readMessage(KafkaConsumer<String, String> kafkaConsumer, int timeout) {
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(timeout);
            for (ConsumerRecord<String, String> record : records) {
                String value = record.value();
                kafkaConsumer.commitAsync();
                System.out.println(value);
            }
        }
    }

    /**
     * Create a String/String producer that waits for acknowledgement from all replicas.
     */
    public static KafkaProducer<String, String> createProducer(String servers) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", servers);
        properties.put("acks", "all");
        properties.put("retries", 0);
        properties.put("batch.size", 16384);
        properties.put("linger.ms", 1);
        properties.put("buffer.memory", 33554432);
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<String, String>(properties);
    }

    /**
     * Send a single message to the given topic (asynchronously).
     */
    public static void send(KafkaProducer<String, String> producer, String topic, String message) {
        producer.send(new ProducerRecord<String, String>(topic, message));
    }
}

Running it

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;

public class Main {
    public static void main(String[] args) {
        String servers = "localhost:9092,localhost:9093,localhost:9094";
        String topic = "TestTopic";
        String message = "test";

        // produce one test message
        KafkaProducer<String, String> producer = KafkaUtil.createProducer(servers);
        KafkaUtil.send(producer, topic, message);

        // then consume from the same topic and print everything that arrives
        KafkaConsumer<String, String> consumer = KafkaUtil.createConsumer(servers, topic);
        KafkaUtil.readMessage(consumer, 100);
    }
}
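Note that KafkaProducer.send() is asynchronous: it only adds the record to an in-memory buffer and a background thread ships it to the broker later. When a producer is used for just a few messages, it is worth flushing or closing it, and a callback can confirm delivery. A minimal sketch under that assumption (the SendAndConfirm class name is made up for illustration; it reuses KafkaUtil from above):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SendAndConfirm {
    public static void main(String[] args) {
        KafkaProducer<String, String> producer = KafkaUtil.createProducer("localhost:9092");

        // send() returns immediately; the callback runs once the broker acknowledges the record
        producer.send(new ProducerRecord<String, String>("TestTopic", "test"),
                (RecordMetadata metadata, Exception exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("delivered to partition " + metadata.partition()
                                + " at offset " + metadata.offset());
                    }
                });

        // flush() blocks until buffered records are sent; close() flushes and releases resources
        producer.flush();
        producer.close();
    }
}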

Usage notes

The consumer always reads the oldest messages

This is most likely a group.id problem; try starting the consumer under a new group.id. Where a group with no committed offsets starts reading is controlled by auto.offset.reset (a configuration sketch follows the list below):

  • earliest: if a partition already has a committed offset, consume from that offset; if there is no committed offset, consume from the beginning of the partition.
  • latest: if a partition already has a committed offset, consume from that offset; if there is no committed offset, only consume data newly produced to that partition.
  • none: if every partition of the topic has a committed offset, consume from after those offsets; if any partition has no committed offset, throw an exception.
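Putting the two knobs together, a consumer started under a brand-new group.id with auto.offset.reset set to "latest" only sees records produced after it joins. A minimal sketch based on the createConsumer method above (the group name group-latest-example is a placeholder, not from the original post):

import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class LatestOffsetConsumer {
    public static KafkaConsumer<String, String> createLatestConsumer(String servers, String topic) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", servers);
        // a brand-new group.id has no committed offsets, so auto.offset.reset decides the start position
        properties.put("group.id", "group-latest-example");
        properties.put("enable.auto.commit", "false");
        // "latest": with no committed offset, only consume records produced after this consumer joins
        properties.put("auto.offset.reset", "latest");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);
        consumer.subscribe(Arrays.asList(topic));
        return consumer;
    }
}

Such a consumer can be handed to KafkaUtil.readMessage() exactly like the one in the example above; it will then print only newly produced messages instead of replaying the topic from the start.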

