
Altinity Sink Connector for ClickHouse

Replicate data from MySQL, Postgres and MongoDB to ClickHouse. The sink connector consumes change events from Kafka topics and writes them into ClickHouse tables; it has been tested with the commonly used Kafka Connect converters.
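As a rough illustration of how the sink is wired into Kafka Connect, the sketch below builds a connector configuration as a plain property map. The connector class and every `clickhouse.*` key here are illustrative assumptions, not the connector's real configuration keys; see the Documentation section for the actual ones.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SinkConfigSketch {
    public static void main(String[] args) {
        // Hypothetical sink configuration; the class name and all
        // clickhouse.* keys are assumptions for illustration only.
        Map<String, String> config = new LinkedHashMap<>();
        config.put("connector.class", "com.example.ClickHouseSinkConnector"); // assumed class name
        config.put("tasks.max", "1");
        config.put("topics", "server1.inventory.customers"); // Debezium-style topic name
        config.put("clickhouse.url", "jdbc:clickhouse://localhost:8123/default"); // assumed key
        config.put("clickhouse.table", "customers");                             // assumed key

        // A Kafka Connect worker would receive a map like this, typically
        // POSTed as JSON to its REST API.
        config.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```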

Features

  • Inserts, updates and deletes using the ReplacingMergeTree or CollapsingMergeTree table engines (see the sketch after this list)
  • Deduplication logic to drop duplicate records read from the Kafka topic
  • Exactly-once semantics
  • Bulk inserts to ClickHouse
  • Storing Kafka metadata alongside the replicated data
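
To make the update/delete handling concrete, here is a minimal sketch of the general ReplacingMergeTree pattern: every change event is written as a plain insert carrying a monotonically increasing version column, and ClickHouse collapses rows that share the same ordering key down to the highest version at merge (or `FINAL` query) time. The table, column names, and versioning scheme below are illustrative assumptions, not the connector's actual schema.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class ReplacingMergeTreeSketch {
    public static void main(String[] args) throws Exception {
        // Plain JDBC against ClickHouse (requires the clickhouse-jdbc driver
        // on the classpath); URL and credentials are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:clickhouse://localhost:8123/default")) {

            try (Statement stmt = conn.createStatement()) {
                // Rows sharing the same ORDER BY key collapse to the row with
                // the highest _version when parts merge.
                stmt.execute(
                    "CREATE TABLE IF NOT EXISTS customers (" +
                    "  id UInt64," +
                    "  name String," +
                    "  _version UInt64" +
                    ") ENGINE = ReplacingMergeTree(_version) ORDER BY id");
            }

            // An update arriving from Kafka is written as a new insert with a
            // higher version rather than as an in-place mutation.
            String sql = "INSERT INTO customers (id, name, _version) VALUES (?, ?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, 42L); ps.setString(2, "old name"); ps.setLong(3, 1L); ps.addBatch();
                ps.setLong(1, 42L); ps.setString(2, "new name"); ps.setLong(3, 2L); ps.addBatch();
                ps.executeBatch(); // batched, mirroring the bulk-insert feature above
            }

            // FINAL forces the collapse at read time, so only the latest
            // version of id 42 is returned.
            try (Statement stmt = conn.createStatement()) {
                stmt.executeQuery("SELECT id, name FROM customers FINAL WHERE id = 42");
            }
        }
    }
}
```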

Source Databases

  • MySQL (via Debezium; a sample source configuration is sketched below)
  • PostgreSQL (via Debezium; testing in progress)
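
Since both sources are captured through Debezium, the records the sink consumes are ordinary Debezium change events. As a point of reference, the sketch below assembles a typical Debezium 1.x MySQL source configuration as a property map; all hosts, credentials, and table filters are placeholders, and the exact key set varies with the Debezium version.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DebeziumSourceSketch {
    public static void main(String[] args) {
        // Typical Debezium 1.x MySQL source properties; every value is a
        // placeholder for your own environment.
        Map<String, String> source = new LinkedHashMap<>();
        source.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        source.put("database.hostname", "mysql");
        source.put("database.port", "3306");
        source.put("database.user", "debezium");
        source.put("database.password", "dbz");
        source.put("database.server.id", "184054");
        source.put("database.server.name", "server1"); // becomes the topic prefix
        source.put("table.include.list", "inventory.customers");
        source.put("database.history.kafka.bootstrap.servers", "kafka:9092");
        source.put("database.history.kafka.topic", "schema-changes.inventory");

        source.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```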

Documentation