Replicating Data to Apache Kafka or Confluent Cloud

With Syniti Data Replication, you can replicate relational data to a Kafka stream using JSON, AVRO, CSV, or XML serialization of the entire record. Kafka is currently supported as a target in both refresh and mirroring modes. Every session creates either a .ref or .mir file containing the content of the replication.
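
For example, with JSON serialization each replicated row travels as one self-describing message on the topic. The field names below are illustrative assumptions (the exact payload layout produced by Syniti DR is not documented here); a minimal Java sketch that parses such a message with Jackson:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class ParseReplicatedRecord {
        public static void main(String[] args) throws Exception {
            // Hypothetical JSON serialization of one replicated row;
            // the layout Syniti DR actually produces may differ.
            String message = "{\"CUSTOMER_ID\": 42, \"NAME\": \"Acme Corp\", \"CITY\": \"Boston\"}";

            ObjectMapper mapper = new ObjectMapper();
            JsonNode record = mapper.readTree(message);
            System.out.println("Replicated row for customer "
                    + record.get("CUSTOMER_ID").asInt()
                    + ": " + record.get("NAME").asText());
        }
    }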

To define a target connection for Kafka:

  1. In the Metadata Explorer, right-click Targets, then choose Add New Connection.

  2. In the Add Target Connection wizard, type a name for the connection and choose Apache Kafka in the Database field.

  3. In the Set Connection String screen, in the Output Folder field, enter the path to a directory that will contain the output files and schema.

  4. Choose values for the following fields:

    Server: The server name or IP address of the system running Kafka.

    Port: The port number of the server running Kafka.

    Output Folder: The schema name and location to hold configuration files for the Kafka objects.

    Group ID: A Kafka property not used by Syniti DR. A string that uniquely identifies the group of consumer processes to which a consumer belongs. By setting the same group ID, multiple processes indicate that they are all part of the same consumer group (see the consumer sketch at the end of this article).

    Connect Timeout: The amount of time to wait before the connection throws an exception.

    Default Topic: Every Kafka stream must be targeted to a specific topic. In Syniti DR, you can set a default topic for all replications in the connection. If the Default Topic is left blank, every replication must specify its own topic name.

    Serialization: The type of message serialization.

    Currently supported:

    0 – JSON

    1 – CSV

    2 – XML

    3 – AVRO

    Schema Registry URL: Used only when AVRO serialization is selected. Because Avro does not include any schema information (column names, types, sizes, and formats) within the message being sent, Kafka producers and consumers rely on an implicit “contract” to determine how the message is serialized. Schema Registry helps ensure that this contract is met, providing centralized schema management and compatibility checks as schemas evolve.

    Auto Offset Reset: A Kafka property not used by Syniti DR. Determines how a consumer proceeds when there is no initial offset in ZooKeeper or when an offset is out of range (demonstrated in the consumer sketch at the end of this article):

    * smallest: automatically reset the offset to the smallest offset;

    * largest: automatically reset the offset to the largest offset;

    * anything else: throw an exception to the consumer.

    Use One Producer Per Group: True by default. If True, the same producer is used for all streams within the group, which improves performance.

  5. If you are using Kafka with Kerberos security, there are additional connection values to set. Download the Kafka setup guide from the Help Center for complete details; a sketch of typical client-side settings appears after this procedure.

  6. You can leave the ExtendedProperties field blank.

  7. Click Next to view the Select Tables screen.
    If this is the first time you have created a connection using the output folder defined above, the table display will be empty.

  8. Click Next to display the Actions screen.

  9. Optionally choose to continue with creating replications once the wizard is complete.

  10. Click Next to display the summary, then click Finish to create the connection.

    The next step is to add a representation of the target output to the Metadata Explorer, where it appears as relational tables.
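
If you enabled Kerberos security in step 5, downstream Kafka clients typically need matching security settings. The sketch below uses standard Kafka client properties (security.protocol, sasl.mechanism, sasl.kerberos.service.name, sasl.jaas.config); the principal and keytab values are placeholders, and the settings Syniti DR itself requires are documented in the Kafka setup guide mentioned in step 5.

    import java.util.Properties;

    public class KerberosClientSettings {
        // Standard Kafka client properties for Kerberos (SASL/GSSAPI).
        // Principal and keytab values are placeholders; consult the
        // Syniti DR Kafka setup guide for the product's own settings.
        public static Properties kerberosProps() {
            Properties props = new Properties();
            props.put("security.protocol", "SASL_PLAINTEXT"); // or SASL_SSL
            props.put("sasl.mechanism", "GSSAPI");
            props.put("sasl.kerberos.service.name", "kafka");
            props.put("sasl.jaas.config",
                    "com.sun.security.auth.module.Krb5LoginModule required "
                  + "useKeyTab=true storeKey=true "
                  + "keyTab=\"/etc/security/keytabs/client.keytab\" "
                  + "principal=\"client@EXAMPLE.COM\";");
            return props;
        }
    }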

Now set up replications from whichever source connection you have defined to the Kafka stream.
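
To confirm that replicated records are arriving, you can attach a plain Kafka consumer to the topic you configured. The sketch below uses the standard Java client; the server address and topic name are placeholder assumptions, while group.id and auto.offset.reset correspond to the Group ID and Auto Offset Reset fields described in step 4 (the modern Java client accepts earliest/latest where the legacy consumer used smallest/largest). For AVRO serialization you would instead configure Confluent's Avro deserializer together with the Schema Registry URL.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class VerifyReplication {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Server and Port from the connection wizard (placeholder address).
            props.put("bootstrap.servers", "kafka-host:9092");
            // Group ID: consumers sharing this ID form one consumer group.
            props.put("group.id", "syniti-dr-verify");
            // Auto Offset Reset: where to start when no offset is stored.
            props.put("auto.offset.reset", "earliest");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Placeholder topic; use your Default Topic or the
                // topic specified by the individual replication.
                consumer.subscribe(Collections.singletonList("replication-topic"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }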