Additional Connector Configuration Reference for Confluent Cloud¶
This topic describes the additional configuration properties that you can use when setting up a fully-managed Confluent Cloud connector.
Search for an additional configuration¶
Enter a string to search and filter by configuration property name.
value.converter.decimal.format¶
It defines the serialization format for Connect DECIMAL logical type values in JSON and JSON_SR schemas. You can choose between two formats:
- BASE64: Serializes DECIMAL logical types as Base64-encoded binary data (preserves full precision).
- NUMERIC: Serializes DECIMAL logical types as a number representing the decimal value.
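As a sketch, a connector configuration fragment selecting the numeric format might look like the following (surrounding connector settings are omitted):

```json
{
  "value.converter.decimal.format": "NUMERIC"
}
```

With `NUMERIC`, a decimal such as 12.34 is written as the JSON number `12.34`; with `BASE64` (typically the default), the same value is written as a Base64 string encoding the unscaled binary value, which round-trips with full precision but is not human-readable.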
value.converter.object.additional.properties¶
Accepts a boolean value. This configuration controls whether to allow additional properties for object schemas. Applicable for JSON_SR Converters.
value.converter.replace.null.with.default¶
Accepts a boolean value. It controls how the JSON converter handles nullable fields that also have a default value defined in their schema.
- `true`: Replaces `null` with the schema's default value.
- `false`: Preserves `null`, even if a schema default exists.
value.converter.int.for.enums¶
Accepts a boolean value. It determines whether to represent enum values as integers during Protobuf serialization and deserialization.
value.converter.allow.optional.map.keys¶
This configuration, applicable to Avro Converters, allows string map keys to be optional when converting from a Connect Schema to an Avro Schema.
value.converter.latest.compatibility.strict¶
Accepts a boolean value. It verifies that the latest retrieved schema version is backward compatible when `use.latest.version` is `true`.
- `true`: Enforces the backward compatibility check; serialization fails if the schemas are not compatible.
- `false`: Uses the latest version without any backward compatibility verification.
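As a sketch, the following hypothetical configuration fragment pins serialization to the latest registered schema version and enforces the compatibility check, a common setup when schema registration is locked down (`value.converter.use.latest.version` and `value.converter.auto.register.schemas` are described elsewhere in this reference):

```json
{
  "value.converter.auto.register.schemas": "false",
  "value.converter.use.latest.version": "true",
  "value.converter.latest.compatibility.strict": "true"
}
```

With this combination, the converter never registers schemas itself and fails fast if the latest registered version is not backward compatible with the data being produced.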
value.converter.flatten.union¶
Accepts a boolean value. It determines if the Protobuf converter flattens `oneof` (union) fields when mapping to Connect schemas.
- `true`: Simplifies `oneof` fields into a more direct Connect type, reducing schema complexity.
- `false`: Preserves the explicit `oneof` structure in the Connect schema.
header.converter¶
The `header.converter` class handles conversions between Kafka Connect's internal format and the serialized form used in Kafka message headers. This independent control over the header value format means any connector can work with any serialization method, such as JSON or Avro. By default, the `SimpleHeaderConverter` converts header values to and from strings by inferring their schemas.
value.converter.optional.for.proto2¶
Accepts a boolean value. It controls the converter's support for optional fields as defined in Protobuf 2 (proto2) syntax. When `true`, it enables interpretation and mapping of explicit optional fields from proto2 schemas.
consumer.override.isolation.level¶
This configuration controls how messages written within transactions are read.
- `read_committed`: `consumer.poll()` returns only transactional messages that have been successfully committed.
- `read_uncommitted` (Default): `consumer.poll()` returns all messages, including transactional messages that were later aborted.
Regardless of the setting, non-transactional messages are always returned. For more details, see `isolation.level`.
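For illustration, a sink connector that should only see committed transactional data could override the consumer setting like this (a minimal configuration fragment; the rest of the connector configuration is omitted):

```json
{
  "consumer.override.isolation.level": "read_committed"
}
```

This is a common choice when upstream producers use exactly-once semantics and aborted transactional records must never reach the sink.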
value.converter.optional.for.nullables¶
Accepts a boolean value. It controls whether nullable fields should be specified with an `optional` label. Applicable for Protobuf Converters.
value.converter.reference.subject.name.strategy¶
It sets the strategy for constructing subject names for referenced schemas within your message values. The subject reference name strategy can be selected only for PROTOBUF format, with the default strategy being `DefaultReferenceSubjectNameStrategy`. Valid entries are:
- `DefaultReferenceSubjectNameStrategy` (Default): Names referenced schemas based on the main message's subject.
- `QualifiedReferenceSubjectNameStrategy`: Uses a fully qualified name (for example, package and message names) for referenced schemas.
value.converter.schemas.enable¶
Accepts a boolean value. It controls whether the JSON converter includes a schema within each serialized value.
- `true`: Requires input messages to contain `schema` and `payload` fields, and embeds the schema into the output.
- `false` (Default): Treats messages as plain JSON data without an embedded schema.
This configuration is applicable only for JSON Converters.
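For example, with this property set to `true`, each serialized value is an envelope carrying both parts. A minimal sketch (the `id` field and the schema shown are illustrative):

```json
{
  "schema": {
    "type": "struct",
    "fields": [
      { "field": "id", "type": "int32", "optional": false }
    ],
    "optional": false
  },
  "payload": { "id": 42 }
}
```

With the property set to `false`, the same message would simply be `{ "id": 42 }`, with no embedded schema.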
value.converter.generate.struct.for.nulls¶
Accepts a boolean value. It determines if the converter generates a struct (a dedicated empty Protobuf message) variable to represent null values for nullable fields.
- `true`: Generates a struct variable for `null` values.
- `false`: Does not generate a struct variable for null values, instead omitting the field from the serialized message.
This configuration is applicable only for Protobuf Converters.
value.converter.auto.register.schemas¶
This configuration controls whether the value converter automatically registers new schemas, or new versions of existing schemas, with the configured Schema Registry. It accepts a boolean value.
- `true` (Default): The converter registers schemas automatically, which simplifies development and rapid schema evolution.
- `false`: The converter does not register schemas. Schemas must be pre-registered manually or programmatically.
errors.tolerance¶
This setting dictates how your connector handles errors during operation. It accepts `none` and `all`.
- `none` (Default): Any error encountered causes the connector task to fail immediately.
- `all`: The connector skips over problematic records and continues processing.
value.converter.connect.meta.data¶
This configuration allows the Kafka Connect converter to add its own metadata into the output Avro Schema. Accepts a boolean value.
value.converter.use.latest.version¶
If `value.converter.auto.register.schemas` is `false`, setting this to `true` directs the converter to use the latest registered schema version from the Schema Registry for serialization. If `false`, the converter looks for an exact schema match.
value.converter.value.subject.name.strategy¶
It defines the strategy for constructing the subject name used by the Schema Registry for the message value’s schema registration. Common strategies include:
- TopicNameStrategy (Default): Based on the Kafka topic.
- RecordNameStrategy: Based on the schema’s record name.
- TopicRecordNameStrategy: Combines both the topic name and the record name.
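As an illustration, a configuration fragment selecting a non-default strategy might look like the following (whether the short or fully qualified strategy class name is expected can depend on the connector, so treat the value shown as an assumption):

```json
{
  "value.converter.value.subject.name.strategy": "io.confluent.kafka.serializers.subject.RecordNameStrategy"
}
```

Given a topic named `orders` and a value schema whose record name is `com.example.Order` (both hypothetical), the registered subject would typically be `orders-value` under TopicNameStrategy, `com.example.Order` under RecordNameStrategy, and `orders-com.example.Order` under TopicRecordNameStrategy.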
value.converter.enhanced.avro.schema.support¶
Accepts a boolean value. When set to `true`, it enables enhanced Avro schema support to preserve package information and correctly map Connect `enum` values to Avro `enum` types. Applicable for Avro Converters.
key.converter.key.subject.name.strategy¶
It defines the strategy for constructing the subject name used by the Schema Registry for the key’s schema registration. Common strategies include:
- TopicNameStrategy (Default): Based on the Kafka topic.
- RecordNameStrategy: Based on the schema’s record name.
- TopicRecordNameStrategy: Combines both the topic name and the record name.
value.converter.flatten.singleton.unions¶
Accepts a boolean value. It determines if singleton unions (union types containing only one actual data type, for example, ["null", "string"]) are flattened in the generated Avro or JSON Schema.
- `true`: Simplifies the schema by representing the single type directly.
- `false` (Default): Preserves the explicit union structure, even if it is a singleton.
This configuration is applicable only for Avro and JSON_SR Converters.
value.converter.ignore.default.for.nullables¶
Accepts a boolean value. It controls how null values in nullable fields are serialized when the target schema (Avro, Protobuf, or JSON) has a default value for that field.
- `true`: If the source field is null, the output is null, ignoring any schema default.
- `false` (Default): If the source field is null, the converter may use the target schema's default value instead of null.
This configuration is applicable only for Avro, Protobuf, and JSON Schema Converters.
value.converter.use.optional.for.nonrequired¶
Accepts a boolean value. It controls whether non-required Kafka Connect schema properties are represented as optional in the output JSON Schema. Applicable for JSON Schema Converters.
value.converter.wrapper.for.nullables¶
Accepts a boolean value. It determines if nullable Connect fields are serialized into Protobuf using primitive wrapper messages. Applicable for Protobuf Converters.
value.converter.enhanced.protobuf.schema.support¶
Accepts a boolean value. When set to `true`, it enables enhanced Protobuf schema support to preserve package information. Applicable for Protobuf Converters.
value.converter.scrub.invalid.names¶
Accepts a boolean value. It determines if the converter automatically replaces invalid characters in schema names to conform to Avro or Protobuf naming conventions.
- `true`: Invalid characters are scrubbed and replaced, preventing conversion failures.
- `false`: Invalid names cause conversion to fail.
This configuration is applicable only for Avro and Protobuf Converters.
value.converter.generate.index.for.unions¶
Accepts a boolean value. It controls whether the Protobuf converter generates an index suffix for fields that originate from Kafka Connect union types.
- `true`: Generates uniquely indexed field names for each type within a Connect union.
- `false`: Uses an alternative method for union representation.
consumer.override.auto.offset.reset¶
It defines the Kafka consumer’s behavior when it starts without a committed offset (e.g., a new consumer group) or when its committed offset is out of range (e.g., data was deleted).
You can choose from the following strategies for resetting the consumer’s position:
- latest (Default): The consumer resets its position to the most recent available offset, consuming only new messages produced after that point.
- earliest: The consumer resets its position to the earliest available offset, consuming all messages from the beginning of the topic or partition.
- none: The consumer does not automatically reset its position. If no committed offset exists or the offset is out of range, the consumer fails. This option requires you to manually set the initial offset or handle offset errors programmatically.
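For example, a new sink connector that should process a topic's full history rather than only new records could override the default like this (a minimal configuration fragment):

```json
{
  "consumer.override.auto.offset.reset": "earliest"
}
```

This only takes effect when no committed offset exists for the consumer group or the committed offset is out of range; otherwise the consumer resumes from its committed position.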
For more details, see Kafka Consumer Configuration.
value.converter.wrapper.for.raw.primitives¶
Accepts a boolean value. When set to `true`, it enables the converter to interpret a wrapper message as a raw primitive type when it appears as the root-level value of a Kafka message. Applicable for Protobuf Converters.
producer.override.linger.ms¶
This configuration defines the maximum delay (in milliseconds) the producer waits to batch records together before sending a request to Kafka. It allows the producer to group more records into a single request, improving overall throughput at the cost of increased latency. A value of `0` sends records immediately.
producer.override.compression.type¶
It sets the compression type for all data generated by the producer. You can set the following compression types:
- none (Default): No compression is applied.
- gzip: Offers high compression ratio.
- snappy: Balances compression ratio with CPU efficiency.
- lz4: Very fast compression/decompression.
- zstd: Provides a good balance of speed and compression ratio.
Because compression applies to full batches of data, the efficacy of batching also impacts the compression ratio (more batching means better compression).
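As a sketch, the two producer overrides are often tuned together, since a small linger delay lets batches fill before compression is applied. The values below are illustrative, not recommendations:

```json
{
  "producer.override.linger.ms": "100",
  "producer.override.compression.type": "lz4"
}
```

Here the producer waits up to 100 ms to accumulate a batch, then compresses the whole batch with lz4 before sending, trading a little latency for larger, better-compressed requests.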
key.converter.schemas.enable¶
Accepts a boolean value. When set to `true`, enables the key converter to serialize Kafka message keys with their corresponding schemas.
key.converter.replace.null.with.default¶
It specifies whether the key converter should replace null values with a default value.