Azure Cosmos DB Sink V2 Connector for Confluent Cloud

The fully-managed Azure Cosmos DB Sink V2 connector for Confluent Cloud writes data to an Azure Cosmos DB database. The connector polls data from Apache Kafka® and writes to database containers, supporting high-throughput data ingestion with configurable write strategies for enhanced data handling.

Note

If you require private networking for fully-managed connectors, make sure to set up the proper networking beforehand. For more information, see Manage Networking for Confluent Cloud Connectors.

V2 Improvements

The V2 connector includes the following improvements:

  • Supports multiple write strategies for enhanced data handling.
  • Supports service principal authentication using client secrets.
  • Supports enhanced throughput control for managing data ingestion rates.
  • Offers improved metadata handling for accurate offset tracking and seamless scalability.

Features

The Azure Cosmos DB Sink V2 connector supports the following features:

  • Topic mapping: Maps the Kafka topic to the Azure Cosmos DB container.

  • Multiple key strategies:

    • FullKeyStrategy: The ID generated is the Kafka record key. This is the default option.
    • KafkaMetadataStrategy: The ID generated is a concatenation of the Kafka topic, partition, and offset. For example: ${topic}-${partition}-${offset}.
    • ProvidedInKeyStrategy: The ID generated is the id field found in the key object.
    • ProvidedInValueStrategy: The ID generated is the id field found in the value object.
    • TemplateStrategy: The template string used to generate the id field.

    Every record must have a lowercase id field. This is an Azure Cosmos DB requirement. See the lower case id prerequisite.
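
    As an illustration, suppose a record arrives on topic pageviews, partition 0, offset 42, with key {"id": "user-1"} and value {"id": "pv-99"}. The document ID each strategy would generate is sketched below (hypothetical values; the exact rendering of structured keys may differ):

      FullKeyStrategy         -> the full record key, for example {"id":"user-1"}
      KafkaMetadataStrategy   -> pageviews-0-42
      ProvidedInKeyStrategy   -> user-1
      ProvidedInValueStrategy -> pv-99
      TemplateStrategy        -> an ID built from the configured template string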

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Limitations

Be sure to review the following information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Azure Cosmos DB Sink V2 connector. The quick start provides the basics of selecting the connector and configuring it to stream Kafka events to an Azure Cosmos DB container.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Azure.

  • The Confluent CLI installed and configured for the cluster. See Install the Confluent CLI.

  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf). See Schema Registry Enabled Environments for additional information.

  • At least one source Kafka topic must exist in your Confluent Cloud cluster before creating the sink connector.

  • The Azure Cosmos DB instance and the Kafka cluster must be in the same region.

  • Azure Cosmos DB requires an id field in every record. The following strategies are provided to generate the ID (see ID strategies for an example of how each works):

    • FullKeyStrategy: The ID generated is the Kafka record key. This is the default option.

    • KafkaMetadataStrategy: The ID generated is a concatenation of the Kafka topic, partition, and offset. For example: ${topic}-${partition}-${offset}.

    • ProvidedInKeyStrategy: The ID generated is the id field found in the key object.

    • ProvidedInValueStrategy: The ID generated is the id field found in the value object. If you select this ID strategy, you must create a new field named id. You can also use the following ksqlDB statements. The example below uses a topic named orders.

      CREATE STREAM ORDERS_STREAM WITH (
         KAFKA_TOPIC = 'orders',
         VALUE_FORMAT = 'AVRO'
      );
      CREATE STREAM ORDER_AUGMENTED AS
         SELECT
            ORDERID AS `id`,
            ORDERTIME,
            ITEMID,
            ORDERUNITS,
            ADDRESS
         FROM ORDERS_STREAM;

    • TemplateStrategy: The template string used to generate the id field.


Note

  • The connector supports Upsert based on id.
  • The connector does not support Delete for tombstone records.

Using the Confluent Cloud Console

Step 1: Launch your Confluent Cloud cluster

To create and launch a Kafka cluster in Confluent Cloud, see Create a Kafka cluster in Confluent Cloud.

Step 2: Add a connector

In the left navigation menu, click Connectors. If you already have connectors in your cluster, click + Add connector.

Step 3: Select your connector

Click the Azure Cosmos DB Sink V2 connector card.

Azure Cosmos DB Sink V2 Connector Card

Step 4: Enter the connector details

Note

  • Ensure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

At the Add Azure Cosmos DB Sink V2 Connector screen, complete the following:

If you’ve already populated your Kafka topics, select the topics you want to connect from the Topics list.

To create a new topic, click +Add new topic.

Step 5: Check for records

Verify that records are being written to your Azure Cosmos DB container.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

Using the Confluent CLI

Complete the following steps to set up and run the connector using the Confluent CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors

Enter the following command to list available connectors:

confluent connect plugin list

Step 2: List the connector configuration properties

Enter the following command to show the connector configuration properties:

confluent connect plugin describe <connector-plugin-name>

The command output shows the required and optional configuration properties.

Step 3: Create the connector configuration file

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "name": "CosmosDbSinkV2Connector_0",
  "config": {
    "connector.class": "CosmosDbSinkV2",
    "name": "CosmosDbSinkV2Connector_0",
    "input.data.format": "AVRO",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "****************",
    "kafka.api.secret": "**********************************************",
    "topics": "pageviews",
    "azure.cosmos.account.endpoint": "https://myaccount.documents.azure.com:443/",
    "azure.cosmos.account.key": "****************************************",
    "azure.cosmos.sink.database.name": "myDBname",
    "azure.cosmos.sink.containers.topicMap": "pageviews#Container2",
    "azure.cosmos.sink.id.strategy": "FullKeyStrategy",
    "tasks.max": "1"
  }
}

Note the following property definitions:

  • "connector.class": Identifies the connector plugin name.
  • "input.data.format": Sets the input Kafka record value format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "name": Sets a name for your new connector.
  • "kafka.auth.mode": Identifies the connector authentication mode you want to use. There are two options: SERVICE_ACCOUNT or KAFKA_API_KEY (the default). To use an API key and secret, specify the configuration properties kafka.api.key and kafka.api.secret, as shown in the example configuration (above). To use a service account, specify the Resource ID in the property kafka.service.account.id=<service-account-resource-ID>. To list the available service account resource IDs, use the following command:

    confluent iam service-account list
    

    For example:

    confluent iam service-account list
    
       Id     | Resource ID |       Name        |    Description
    +---------+-------------+-------------------+-------------------
       123456 | sa-l1r23m   | sa-1              | Service account 1
       789101 | sa-l4d56p   | sa-2              | Service account 2
    
  • "azure.cosmos.account.endpoint": A URI with the form https://ccloud-cosmos-db-1.documents.azure.com:443/.

  • "azure.cosmos.account.key": The Azure Cosmos master key.

  • "azure.cosmos.sink.database.name": The name of your Cosmos DB.

  • "azure.cosmos.sink.containers.topicMap": A comma-delimited list of Kafka topics mapped to Cosmos DB containers. Note that this property only supports 1:1 mapping between topic and container name. For example: topic#container1,topic2#container2.

  • (Optional) "azure.cosmos.sink.id.strategy": Defaults to FullKeyStrategy. Enter one of the following strategies:

    • FullKeyStrategy: The ID generated is the Kafka record key.
    • KafkaMetadataStrategy: The ID generated is a concatenation of the Kafka topic, partition, and offset. For example: ${topic}-${partition}-${offset}.
    • ProvidedInKeyStrategy: The ID generated is the id field found in the key object.
    • ProvidedInValueStrategy: The ID generated is the id field found in the value object.
    • TemplateStrategy: The template string used to generate the id field.

    Every record must have a lowercase id field. This is an Azure Cosmos DB requirement. See Lower case id prerequisite.

  • "tasks": The number of tasks to use with the connector. More tasks may improve performance.

Single Message Transforms: See the Single Message Transforms (SMT) documentation for details about adding SMTs using the CLI.

See Configuration Properties for all property values and descriptions.

Step 4: Load the properties file and create the connector

Enter the following command to load the configuration and start the connector:

confluent connect cluster create --config-file <file-name>.json

For example:

confluent connect cluster create --config-file azure-cosmos-v2-sink-config.json

Example output:

Created connector CosmosDbSinkV2Connector_0 lcc-do6vzd

Step 5: Check the connector status

Enter the following command to check the connector status:

confluent connect cluster list

Example output:

ID           |             Name              | Status  | Type | Trace
+------------+-------------------------------+---------+------+-------+
lcc-do6vzd   | CosmosDbSinkV2Connector_0     | RUNNING | sink |       |

Step 6: Check for records

Verify that records are being written to your Azure Cosmos DB container.

For more information and examples to use with the Confluent Cloud API for Connect, see the Confluent Cloud API for Connect Usage Examples section.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See View Connector Dead Letter Queue Errors in Confluent Cloud for details.

V1 to V2 Migration

Confluent recommends upgrading from version 1 to version 2 of this connector to take advantage of the latest features, including support for the TemplateStrategy ID strategy.

Use the following steps to migrate to version 2 connector. Implement and validate any connector changes in a pre-production environment before promoting to production.

Important

If you plan to migrate from version 1 to version 2, set the azure.cosmos.sink.id.strategy configuration property to FullKeyStrategy to avoid migration failures. This is also the default ID strategy.

  1. Pause the V1 connector.

  2. Get the offset for the V1 connector.

  3. Create the V2 connector using the offset from the previous step.

    confluent connect cluster create [flags]
    

    For example:

    Create a configuration file with the connector configs and offsets. For the shape of a sink connector's offsets, see the sketch after these steps.

    {
      "name": "(connector-name)",
      "config": {
          ... // connector specific configuration
      },
      "offsets": [
          {
              "partition": {
          ... // connector specific configuration
              },
              "offset": {
          ... // connector specific configuration
              }
          }
      ]
    }
    

    Create a V2 connector in the current or specified Kafka cluster context.

    confluent connect cluster create --config-file config.json
    
  4. Verify the migration and confirm that the connector is running successfully with the V1 payloads.

  5. Delete the V1 connector.
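
For a sink connector like this one, each offset is keyed by Kafka topic and partition. A minimal sketch of what the offsets array might look like, assuming a topic named pageviews (the partition and offset values are placeholders):

"offsets": [
    {
        "partition": {
            "kafka_partition": 0,
            "kafka_topic": "pageviews"
        },
        "offset": {
            "kafka_offset": 1000
        }
    }
]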

For more information on offsets, see Sink connectors.

Configuration Properties

Use the following configuration properties with the fully-managed connector. For self-managed connector property definitions and other details, see the connector docs in Self-managed connectors for Confluent Platform.

How should we connect to your data?

name

Sets a name for your connector.

  • Type: string
  • Valid Values: A string at most 64 characters long
  • Importance: high

Schema Config

schema.context.name

Add a schema context name. A schema context represents an independent scope in Schema Registry. It is a separate sub-schema tied to topics in different Kafka clusters that share the same Schema Registry instance. If not used, the connector uses the default schema configured for Schema Registry in your Confluent Cloud environment.

  • Type: string
  • Default: default
  • Importance: medium

Input messages

input.data.format

Sets the input Kafka record value format. Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. Note that you need to have Confluent Cloud Schema Registry configured if using a schema-based message format like AVRO, JSON_SR, and PROTOBUF.

  • Type: string
  • Default: JSON
  • Importance: high

Kafka Cluster credentials

kafka.auth.mode

Kafka Authentication mode. It can be one of KAFKA_API_KEY or SERVICE_ACCOUNT. It defaults to KAFKA_API_KEY mode.

  • Type: string
  • Default: KAFKA_API_KEY
  • Valid Values: KAFKA_API_KEY, SERVICE_ACCOUNT
  • Importance: high
kafka.api.key

Kafka API Key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high
kafka.service.account.id

The Service Account that will be used to generate the API keys to communicate with the Kafka cluster.

  • Type: string
  • Importance: high
kafka.api.secret

Secret associated with Kafka API key. Required when kafka.auth.mode==KAFKA_API_KEY.

  • Type: password
  • Importance: high

Which topics do you want to get data from?

topics

Identifies the topic name or a comma-separated list of topic names.

  • Type: list
  • Importance: high

Connect to your Azure Cosmos DB

azure.cosmos.account.endpoint

Cosmos endpoint URL. For example: https://connect-cosmosdb.documents.azure.com:443/.

  • Type: string
  • Importance: high
azure.cosmos.sink.containers.topicMap

A comma delimited list of Kafka topics mapped to Cosmos containers. For example: topic1#con1,topic2#con2.

  • Type: string
  • Importance: high
azure.cosmos.sink.database.name

Cosmos target database to write records into.

  • Type: string
  • Importance: high

Account details

azure.cosmos.account.environment

The Azure environment of the Cosmos DB account: Azure, AzureChina, AzureUsGovernment, AzureGermany.

  • Type: string
  • Default: AZURE
  • Valid Values: AZURE, AZURE_CHINA, AZURE_GERMANY, AZURE_US_GOVERNMENT
  • Importance: medium
azure.cosmos.mode.gateway

Flag to indicate whether to use gateway mode. By default this is false, which means the SDK uses direct mode. For more information, see https://learn.microsoft.com/azure/cosmos-db/nosql/sdk-connection-modes.

  • Type: boolean
  • Default: false
  • Importance: low
azure.cosmos.preferredRegionList

Preferred regions list to be used for a multi-region Cosmos DB account. This is a comma-separated value (for example, [East US, West US] or East US, West US); the provided preferred regions are used as a hint. You should use a Kafka cluster colocated with your Cosmos DB account and pass the Kafka cluster region as the preferred region. For the list of Azure regions, see https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.locationnames?view=azure-dotnet&preserve-view=true.

  • Type: string
  • Importance: low
azure.cosmos.auth.type

Cosmos DB connection authentication type.

  • Type: string
  • Default: MasterKey
  • Valid Values: MasterKey, ServicePrincipal
  • Importance: high
azure.cosmos.account.key

Cosmos DB account key (required only when azure.cosmos.auth.type is MasterKey).

  • Type: password
  • Importance: medium
azure.cosmos.auth.aad.clientId

The clientId/ApplicationId of the service principal. Required for ServicePrincipal authentication.

  • Type: string
  • Importance: medium
azure.cosmos.auth.aad.clientSecret

The client secret/password of the service principal. Required for ServicePrincipal authentication.

  • Type: password
  • Importance: medium
azure.cosmos.account.tenantId

The tenantId of the Cosmos DB account. Required for ServicePrincipal authentication.

  • Type: string
  • Default: ""
  • Importance: medium

Consumer configuration

max.poll.interval.ms

The maximum delay between subsequent consume requests to Kafka. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 300000 milliseconds (5 minutes).

  • Type: long
  • Default: 300000 (5 minutes)
  • Valid Values: [60000,…,1800000] for non-dedicated clusters and [60000,…] for dedicated clusters
  • Importance: low
max.poll.records

The maximum number of records to consume from Kafka in a single request. This configuration property may be used to improve the performance of the connector, if the connector cannot send records to the sink system. Defaults to 500 records.

  • Type: long
  • Default: 500
  • Valid Values: [1,…,500] for non-dedicated clusters and [1,…] for dedicated clusters
  • Importance: low

Number of tasks for this connector

tasks.max

Maximum number of tasks for the connector.

  • Type: int
  • Valid Values: [1,…]
  • Importance: high

Write configuration details

azure.cosmos.sink.bulk.enabled

Flag to indicate whether Cosmos DB bulk mode is enabled for the sink connector. By default it is true.

  • Type: boolean
  • Default: true
  • Importance: medium
azure.cosmos.sink.bulk.maxConcurrentCosmosPartitions

Cosmos DB item write max concurrent Cosmos partitions. If not specified, it is determined based on the number of the container's physical partitions, which indicates that every batch is expected to have data from all Cosmos physical partitions. If specified, it indicates the maximum number of Cosmos physical partitions each batch contains data from. This config can be used to make bulk processing more efficient when the input data in each batch has been repartitioned to balance the number of Cosmos partitions each batch needs to write to. This is mainly useful for very large containers (with hundreds of physical partitions).

  • Type: int
  • Default: -1
  • Importance: low
azure.cosmos.sink.bulk.initialBatchSize

Cosmos DB initial bulk micro-batch size. A micro batch is flushed to the backend when the number of documents enqueued exceeds this size, or when the target payload size is met. The micro-batch size is tuned automatically based on the throttling rate. By default, the initial micro-batch size is 1. Reduce this value to avoid the first few requests consuming too many RUs.

  • Type: int
  • Default: 1
  • Importance: medium
azure.cosmos.sink.write.strategy

Cosmos DB item write strategy:

  • ItemOverwrite: Upserts the item.
  • ItemAppend: Creates the item, ignoring pre-existing items (conflicts).
  • ItemDelete: Deletes based on the id/pk of the data frame.
  • ItemDeleteIfNotModified: Deletes based on the id/pk of the data frame if the etag hasn't changed since collecting the id/pk.
  • ItemOverwriteIfNotModified: Creates the item if the etag is empty; otherwise updates/replaces with an etag pre-condition. If the document was updated, the pre-condition failure is ignored.
  • ItemPatch: Partially updates all documents based on the patch config.

  • Type: string
  • Default: ItemOverwrite
  • Valid Values: ItemAppend, ItemDelete, ItemDeleteIfNotModified, ItemOverwrite, ItemOverwriteIfNotModified, ItemPatch
  • Importance: high
azure.cosmos.sink.write.patch.operationType.default

Default Cosmos DB patch operation type. Supported types include None, Add, Set, Replace, Remove, and Increment. Choose None for a no-op. For the others, see https://docs.microsoft.com/azure/cosmos-db/partial-document-update#supported-operations for full context.

  • Type: string
  • Default: Set
  • Valid Values: Add, Increment, None, Remove, Replace, Set
  • Importance: low
azure.cosmos.sink.write.patch.property.configs

Cosmos DB patch JSON property configs. It can contain multiple definitions separated by commas, each matching one of the following patterns: property(jsonProperty).op(operationType) or property(jsonProperty).path(patchInCosmosdb).op(operationType). The second pattern additionally lets you define a different Cosmos DB path. Note: Nested JSON property configs are not supported.

  • Type: string
  • Importance: low
azure.cosmos.sink.write.patch.filter

Used for conditional patch. For reference, see https://docs.microsoft.com/azure/cosmos-db/partial-document-update-getting-started#java.

  • Type: string
  • Importance: low
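
Taken together, a partial-update (patch) configuration might look like the following fragment. This is a sketch; the property names inside property(...) and the filter predicate are illustrative:

"azure.cosmos.sink.write.strategy": "ItemPatch",
"azure.cosmos.sink.write.patch.operationType.default": "Set",
"azure.cosmos.sink.write.patch.property.configs": "property(visits).op(increment),property(city).path(/address/city).op(set)",
"azure.cosmos.sink.write.patch.filter": "from c WHERE c.status = 'active'"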
azure.cosmos.sink.maxRetryCount

Cosmos DB max retry attempts on write failures. By default, the connector retries transient write errors up to 10 times.

  • Type: int
  • Default: 10
  • Importance: medium
azure.cosmos.sink.errors.tolerance.level

Error tolerance level after exhausting all retries. None fails on error; All logs the error and continues.

  • Type: string
  • Default: None
  • Valid Values: All, None
  • Importance: high

ID Strategy details

azure.cosmos.sink.id.strategy

The IdStrategy class name to use for generating a unique document id (id). FullKeyStrategy uses the full record key as the ID. KafkaMetadataStrategy uses a concatenation of the Kafka topic, partition, and offset as the ID, with dashes as the separator: ${topic}-${partition}-${offset}. ProvidedInKeyStrategy and ProvidedInValueStrategy use the id field found in the key and value objects, respectively, as the ID. TemplateStrategy uses a template string to generate the id field.

  • Type: string
  • Default: FullKeyStrategy
  • Valid Values: FullKeyStrategy, KafkaMetadataStrategy, ProvidedInKeyStrategy, ProvidedInValueStrategy, TemplateStrategy
  • Importance: high

Throughput control details

azure.cosmos.throughputControl.enabled

A flag to indicate whether throughput control is enabled.

  • Type: boolean
  • Default: false
  • Importance: medium
azure.cosmos.throughputControl.auth.type

Two auth types are currently supported: MasterKey (PrimaryReadWriteKeys, SecondReadWriteKeys, PrimaryReadOnlyKeys, SecondReadOnlyKeys) and ServicePrincipal.

  • Type: string
  • Default: MasterKey
  • Valid Values: MasterKey, ServicePrincipal
  • Importance: low
azure.cosmos.throughputControl.account.key

Cosmos DB throughput control account key (required only when azure.cosmos.throughputControl.auth.type is MasterKey).

  • Type: password
  • Importance: low
azure.cosmos.throughputControl.auth.aad.clientId

The clientId/applicationId of the service principal. Required for ServicePrincipal authentication.

  • Type: string
  • Importance: low
azure.cosmos.throughputControl.auth.aad.clientSecret

The client secret/password of the service principal. Required for ServicePrincipal authentication.

  • Type: password
  • Importance: low
azure.cosmos.throughputControl.account.tenantId

The tenantId of the Cosmos DB account. Required for ServicePrincipal authentication.

  • Type: string
  • Importance: low
azure.cosmos.throughputControl.account.environment

The Azure environment of the Cosmos DB account: Azure, AzureChina, AzureUsGovernment, AzureGermany.

  • Type: string
  • Default: AZURE
  • Valid Values: AZURE, AZURE_CHINA, AZURE_GERMANY, AZURE_US_GOVERNMENT
  • Importance: low
azure.cosmos.throughputControl.account.endpoint

Cosmos DB throughput control account endpoint URI.

  • Type: string
  • Importance: low
azure.cosmos.throughputControl.mode.gateway

Flag to indicate whether to use gateway mode. By default this is false, which means the SDK uses direct mode. For more information, see https://learn.microsoft.com/azure/cosmos-db/nosql/sdk-connection-modes.

  • Type: boolean
  • Default: false
  • Importance: low
azure.cosmos.throughputControl.preferredRegionList

Preferred regions list to be used for a multi-region Cosmos DB account. This is a comma-separated value (for example, [East US, West US] or East US, West US); the provided preferred regions are used as a hint. You should use a Kafka cluster colocated with your Cosmos DB account and pass the Kafka cluster region as the preferred region. For the list of Azure regions, see https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.locationnames?view=azure-dotnet&preserve-view=true.

  • Type: string
  • Importance: low
azure.cosmos.throughputControl.group.name

Throughput control group name. Since a customer can create many groups for a container, the name must be unique.

  • Type: string
  • Importance: medium
azure.cosmos.throughputControl.targetThroughput

Throughput control group target throughput. The value should be larger than 0.

  • Type: int
  • Valid Values: [1,…]
  • Importance: medium
azure.cosmos.throughputControl.targetThroughputThreshold

Throughput control group target throughput threshold. The value should be in the range (0,1].

  • Type: double
  • Importance: medium
azure.cosmos.throughputControl.priorityLevel

Throughput control group priority level. The value can be None, High or Low.

  • Type: string
  • Default: None
  • Valid Values: High, Low, None
  • Importance: medium
azure.cosmos.throughputControl.globalControl.database.name

Database which will be used for throughput global control.

  • Type: string
  • Importance: medium
azure.cosmos.throughputControl.globalControl.container.name

Container which will be used for throughput global control.

  • Type: string
  • Importance: medium
azure.cosmos.throughputControl.globalControl.renewIntervalInMS

This controls how often the client updates its own throughput usage and adjusts its throughput share based on the usage of other clients. Default is 5s; the minimum allowed value is 5s.

  • Type: int
  • Default: 5000
  • Valid Values: [5000,…]
  • Importance: low
azure.cosmos.throughputControl.globalControl.expireIntervalInMS

This controls how quickly an offline client is detected so that its throughput share can be taken by other clients. Default is 11s; the minimum allowed value is 2 * renewIntervalInMS + 1.

  • Type: int
  • Importance: low
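
Putting these together, a fragment that enables global throughput control for this sink might look like the following sketch; the endpoint, key, and names are placeholders:

"azure.cosmos.throughputControl.enabled": "true",
"azure.cosmos.throughputControl.account.endpoint": "https://myaccount.documents.azure.com:443/",
"azure.cosmos.throughputControl.account.key": "<account-key>",
"azure.cosmos.throughputControl.group.name": "cosmos-sink-group",
"azure.cosmos.throughputControl.targetThroughput": "400",
"azure.cosmos.throughputControl.globalControl.database.name": "ThroughputControlDB",
"azure.cosmos.throughputControl.globalControl.container.name": "ThroughputControlContainer"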

Additional Configs

consumer.override.auto.offset.reset

Defines the behavior of the consumer when there is no committed position (which occurs when the group is first initialized) or when an offset is out of range. You can choose either to reset the position to the “earliest” offset or the “latest” offset (the default). You can also select “none” if you would rather set the initial offset yourself and you are willing to handle out of range errors manually. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#auto-offset-reset

  • Type: string
  • Importance: low
consumer.override.isolation.level

Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. More details: https://docs.confluent.io/platform/current/installation/configuration/consumer-configs.html#isolation-level

  • Type: string
  • Importance: low
header.converter

The converter class for the headers. This is used to serialize and deserialize the headers of the messages.

  • Type: string
  • Importance: low
value.converter.allow.optional.map.keys

Allow optional string map key when converting from Connect Schema to Avro Schema. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.auto.register.schemas

Specify if the Serializer should attempt to register the Schema.

  • Type: boolean
  • Importance: low
value.converter.connect.meta.data

Allow the Connect converter to add its metadata to the output schema. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.enhanced.avro.schema.support

Enable enhanced schema support to preserve package information and Enums. Applicable for Avro Converters.

  • Type: boolean
  • Importance: low
value.converter.enhanced.protobuf.schema.support

Enable enhanced schema support to preserve package information. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.flatten.unions

Whether to flatten unions (oneofs). Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.generate.index.for.unions

Whether to generate an index suffix for unions. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.generate.struct.for.nulls

Whether to generate a struct variable for null values. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.int.for.enums

Whether to represent enums as integers. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.latest.compatibility.strict

Verify latest subject version is backward compatible when use.latest.version is true.

  • Type: boolean
  • Importance: low
value.converter.object.additional.properties

Whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.

  • Type: boolean
  • Importance: low
value.converter.optional.for.nullables

Whether nullable fields should be specified with an optional label. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.optional.for.proto2

Whether proto2 optionals are supported. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.scrub.invalid.names

Whether to scrub invalid names by replacing invalid characters with valid characters. Applicable for Avro and Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.use.latest.version

Use latest version of schema in subject for serialization when auto.register.schemas is false.

  • Type: boolean
  • Importance: low
value.converter.use.optional.for.nonrequired

Whether to set non-required properties to be optional. Applicable for JSON_SR Converters.

  • Type: boolean
  • Importance: low
value.converter.wrapper.for.nullables

Whether nullable fields should use primitive wrapper messages. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
value.converter.wrapper.for.raw.primitives

Whether a wrapper message should be interpreted as a raw primitive at root level. Applicable for Protobuf Converters.

  • Type: boolean
  • Importance: low
errors.tolerance

Use this property if you would like to configure the connector's error handling behavior. WARNING: This property should be used with CAUTION for SOURCE CONNECTORS as it may lead to data loss. If you set this property to all, the connector will not fail on errant records, but will instead log them (and send them to the DLQ for sink connectors) and continue processing. If you set this property to none, the connector task will fail on errant records.

  • Type: string
  • Default: all
  • Importance: low
key.converter.key.subject.name.strategy

How to construct the subject name for key schema registration.

  • Type: string
  • Default: TopicNameStrategy
  • Importance: low
value.converter.decimal.format

Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:

BASE64 to serialize DECIMAL logical types as base64 encoded binary data and

NUMERIC to serialize Connect DECIMAL logical type values in JSON/JSON_SR as a number representing the decimal value.

  • Type: string
  • Default: BASE64
  • Importance: low
value.converter.flatten.singleton.unions

Whether to flatten singleton unions. Applicable for Avro and JSON_SR Converters.

  • Type: boolean
  • Default: false
  • Importance: low
value.converter.ignore.default.for.nullables

When set to true, this property ensures that the corresponding record in Kafka is NULL, instead of showing the default column value. Applicable for AVRO, PROTOBUF, and JSON_SR Converters.

  • Type: boolean
  • Default: false
  • Importance: low
value.converter.reference.subject.name.strategy

Set the subject reference name strategy for value. Valid entries are DefaultReferenceSubjectNameStrategy or QualifiedReferenceSubjectNameStrategy. Note that the subject reference name strategy can be selected only for PROTOBUF format with the default strategy being DefaultReferenceSubjectNameStrategy.

  • Type: string
  • Default: DefaultReferenceSubjectNameStrategy
  • Importance: low
value.converter.replace.null.with.default

Whether to replace fields that are null and have a default value with the default value. When set to true, the default value is used; otherwise, null is used. Applicable for the JSON Converter.

  • Type: boolean
  • Default: true
  • Importance: low
value.converter.schemas.enable

Include schemas within each of the serialized values. Input messages must contain schema and payload fields and may not contain additional fields. For plain JSON data, set this to false. Applicable for JSON Converter.

  • Type: boolean
  • Default: false
  • Importance: low
value.converter.value.subject.name.strategy

Determines how to construct the subject name under which the value schema is registered with Schema Registry.

  • Type: string
  • Default: TopicNameStrategy
  • Importance: low

Auto-restart policy

auto.restart.on.user.error

Enable connector to automatically restart on user-actionable errors.

  • Type: boolean
  • Default: true
  • Importance: medium

Next Steps

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent CLI to manage your resources in Confluent Cloud.
