Overview of Confluent Platform Upgrade

Upgrading to Confluent Platform 8.0 enables you to take advantage of the latest features.

For details on the new features in Confluent Platform 8.0, see Release Notes for Confluent Platform 8.0.

The following checklist provides a quick guide for how to upgrade to the latest version. For detailed guidance, see Upgrade Confluent Platform.

Note that these steps apply only to upgrading from one Confluent Platform version to another. If you want to migrate from an open-source Kafka deployment to Confluent Platform, see Migrate an Existing Kafka Deployment to Confluent Platform. If you want to migrate from ZooKeeper to KRaft, see Migrate from ZooKeeper to KRaft on Confluent Platform. You should not upgrade and migrate at the same time.

Step 0: Prepare for the upgrade

Important

Confluent Platform 8.0 does not support ZooKeeper for metadata management. If you are currently running in ZooKeeper mode, you must first upgrade to a version of Confluent Platform that supports migrating to KRaft, migrate to KRaft, and then upgrade to Confluent Platform 8.0. For example:

  1. If you are on a version earlier than 7.7.1, upgrade to Confluent Platform 7.7.1 or later in ZooKeeper mode.
  2. Migrate to KRaft mode. For more information, see Migrate from ZooKeeper to KRaft on Confluent Platform.
  3. Upgrade to Confluent Platform 8.0.
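Before starting the 8.0 upgrade, you can confirm that the cluster is already running in KRaft mode. One way to check, assuming the standard Kafka CLI tools are on your path and a broker is reachable (the host and port below are placeholders), is:

```shell
# Sketch: verify the cluster is KRaft-based before upgrading to 8.0.
# Replace localhost:9092 with one of your broker endpoints.
kafka-metadata-quorum --bootstrap-server localhost:9092 describe --status

# In KRaft mode this prints quorum details (leader, voters, observers).
# In ZooKeeper mode the command fails because no metadata quorum exists.
```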

There are many other changes in Confluent Platform 8.0, so you should read the Release Notes for Confluent Platform 8.0 and the Changelogs for your Confluent Platform components before you upgrade.

Here’s what you need to get started:

  • An existing Confluent Platform deployment. If you’re starting with a new deployment, follow the steps in Install Confluent Platform On-Premises.
  • An upgrade plan that matches your specific requirements and environment. You should not start working through this checklist on a live cluster. Review the Upgrade Guide fully and draft an upgrade plan.

Step 1: Upgrade Kafka controllers and brokers

You have these options for upgrading your Kafka brokers:

  • Downtime upgrade: If downtime is acceptable for your business case, you can take down the entire cluster, upgrade each Kafka controller or broker, and restart the cluster.
  • Rolling upgrade: In a rolling upgrade scenario, you upgrade one Kafka controller or broker at a time while the cluster continues to run. To avoid downtime for end users, follow the recommendations in rolling restarts.

For details on how to upgrade Kafka brokers, see Upgrade Kafka.
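A rolling upgrade of a single broker can be sketched as follows. This is a hypothetical sequence assuming systemd-managed Confluent packages (the `confluent-server` unit name and bootstrap address are assumptions about your installation):

```shell
# Sketch: rolling upgrade of one broker at a time.
sudo systemctl stop confluent-server     # stop this broker only
# ... upgrade the packages on this host (see Step 2) ...
sudo systemctl start confluent-server    # restart the upgraded broker

# Before moving on to the next broker, wait until the cluster has
# no under-replicated partitions (this command prints nothing when healthy):
kafka-topics --bootstrap-server localhost:9092 --describe --under-replicated-partitions
```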

Step 2: Upgrade Confluent Platform components

In this step, you will upgrade the Confluent Platform components. For a rolling upgrade, you can do this on one server at a time while the cluster continues to run. The details depend on your environment, but the steps to upgrade components are the same.

You should always upgrade Confluent Control Center as the final Confluent Platform component.

Upgrade steps:

  1. Stop the Confluent Platform components.
  2. Back up configuration files, for example, those in ./etc/kafka.
  3. Remove existing packages and their dependencies.
  4. Install new packages.
  5. Restart the Confluent Platform components.
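The steps above can be sketched for DEB packages as follows. This is an illustrative sequence, not an exact procedure; package and service names vary by installation, and RPM-based systems use yum or dnf equivalents:

```shell
# Sketch: upgrade Confluent Platform packages on one host (DEB-based systems).
sudo systemctl stop confluent-*                  # 1. stop components on this host
cp -r /etc/kafka /tmp/kafka-config-backup        # 2. back up configuration files
sudo apt-get remove confluent-*                  # 3. remove existing packages
sudo apt-get update && \
  sudo apt-get install confluent-platform        # 4. install new packages
# ... restore any customized configuration files, then:
sudo systemctl start confluent-server            # 5. restart components
```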

For details on how to upgrade different package types and individual Confluent Platform components, see the following sections:

Step 3: Update configuration files

Some configuration settings change from one version to the next. The following sections describe changes that are required for specific versions.

Connect Log Redactor configuration

The Log Redactor enables you to redact logs based on regex rules. Starting with Confluent Platform 7.1.0, the Confluent Log Redactor is configured for Connect by default. Because you back up your configuration files before installing and restore them afterward, the Log Redactor might no longer be enabled after the upgrade. You can manually configure it in the connect-log4j2.yaml file as follows:

# connect-log4j2.yaml
  Configuration:
    Properties:
      Property:
        - name: "kafka.logs.dir"
          value: "."
        - name: "logPattern"
          value: "[%d] %p %X{connector.context}%m (%c:%L)%n"

    Appenders:
      Console:
        name: STDOUT
        PatternLayout:
          pattern: "${logPattern}"

      RollingFile:
        - name: ConnectAppender
          fileName: "${sys:kafka.logs.dir}/connect.log"
          filePattern: "${sys:kafka.logs.dir}/connect-%d{yyyy-MM-dd-HH}.log"
          PatternLayout:
            pattern: "${logPattern}"
          TimeBasedTriggeringPolicy:
            modulate: true
            interval: 1

      Rewrite:
        - name: RedactorAppender
          RedactorPolicy:
            name: "io.confluent.log4j2.redactor.RedactorPolicy"
            rules: "${log4j.config.dir}/connect-log-redactor-rules.json"
          AppenderRef:
            - ref: STDOUT
            - ref: ConnectAppender
    Loggers:
      Root:
        level: INFO
        AppenderRef:
          - ref: RedactorAppender

Confluent license

When you upgrade to Confluent Platform, add the confluent.license configuration parameter to the server.properties file. The confluent.license setting is required to start Confluent Platform. For more information, see Manage Confluent Platform Licenses.
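As a sketch, the entry in server.properties looks like the following; the placeholder stands in for the license key you obtain from Confluent:

```properties
# server.properties (sketch): add your license key from Confluent.
confluent.license=<your-license-key>
```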

Replication factor for Self-Balancing Clusters

Ensure the confluent.balancer.topic.replication.factor setting is less than or equal to the total number of brokers.

For more information, see confluent.balancer.topic.replication.factor.
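As a sketch, for a cluster of five brokers any value up to 5 satisfies the constraint; the setting goes in server.properties:

```properties
# server.properties (sketch): must be <= the total number of brokers.
# For example, with a 5-broker cluster:
confluent.balancer.topic.replication.factor=3
```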

Step 4: Enable Health+

Health+ enables you to identify issues before downtime occurs, ensuring high availability for your event streaming applications.

  • Enable Telemetry – The Confluent Telemetry Reporter is a plugin that runs inside each Confluent Platform service to push metadata about the service to Confluent. Telemetry Reporter enables product features based on the metadata, like Health+. Telemetry is limited to metadata required to provide Health+ (for example, no topic data) and is used solely to assist Confluent in the provisioning of support services.
  • Enable Health+ – After you enable Telemetry Reporter, you can activate Health+, which provides ongoing, real-time analysis of performance and configuration data for your Confluent Platform deployment.
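As a sketch, enabling the Telemetry Reporter on a broker uses properties like the following in server.properties; the API key and secret placeholders stand in for credentials you create for Health+:

```properties
# server.properties (sketch): enable the Telemetry Reporter for Health+.
confluent.telemetry.enabled=true
confluent.telemetry.api.key=<API_KEY>
confluent.telemetry.api.secret=<API_SECRET>
```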

Note

While enabling Telemetry and Health+ is highly encouraged and beneficial to minimize downtime, these features are not mandatory. Speak with your Confluent account team if you have any questions about the features.

Step 5: Rebuild applications

If you have applications that use Kafka producers and consumers, rebuild and redeploy them against the new 8.0.x libraries. For more information, see Schemas, Serializers, and Deserializers for Confluent Platform.

You can upgrade Kafka Streams applications independently, without requiring Kafka brokers to be upgraded first. Follow the instructions in the Kafka Streams Upgrade Guide to upgrade your applications to use the latest version of Kafka Streams.
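For a Maven-based Kafka Streams application, the upgrade amounts to bumping the dependency version and rebuilding. A sketch (the version element is left as a placeholder for the Kafka version that matches your Confluent Platform release):

```xml
<!-- pom.xml (sketch): bump kafka-streams to the version matching
     your Confluent Platform release, then rebuild and redeploy. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-streams</artifactId>
  <version><!-- Kafka version bundled with CP 8.0 --></version>
</dependency>
```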

For more information, see Upgrade other client applications.

Other Considerations

Confluent Platform 7.2 and later have idempotence enabled by default for Kafka producers. This can cause certain proprietary Confluent connectors (which use Kafka producers to write the license to the license topic) to fail if you’re not using a Centralized License in your Connect worker. Because Kafka brokers on older versions don’t support idempotent producers out of the box, you can use either of the following workarounds:

  • Workaround 1: You can switch to using the Centralized License feature, which explicitly disables producer idempotence.

  • Workaround 2: Add the following property to each proprietary connector’s configuration:

    confluent.topic.producer.enable.idempotence = false
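In a connector submitted through the Connect REST API, the property sits in the config map. A sketch with a hypothetical connector name and an elided connector class:

```json
{
  "name": "my-licensed-connector",
  "config": {
    "connector.class": "...",
    "confluent.topic.producer.enable.idempotence": "false"
  }
}
```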