
Use AWS Egress PrivateLink Endpoints for Dedicated Clusters on Confluent Cloud

AWS PrivateLink is a networking service that provides one-way, private connectivity from a VPC to a service provider, and it is popular for combining security with simplicity.

Confluent Cloud, available through AWS Marketplace or directly from Confluent, supports outbound AWS PrivateLink connections using Egress PrivateLink Endpoints. Egress PrivateLink Endpoints are AWS interface VPC Endpoints, and they enable Confluent Cloud clusters to access supported AWS services and other endpoint services powered by AWS PrivateLink, such as Amazon S3, a SaaS service, or a PrivateLink Service that you create yourself.

The following diagram summarizes the Egress PrivateLink Endpoint architecture between Confluent Cloud and various potential destinations.

AWS Egress PrivateLink Endpoint architecture

The following is the high-level workflow to set up an Egress PrivateLink Endpoint from Confluent Cloud to an external system, such as for managed connectors:

  1. Identify a Confluent Cloud network you want to use, or set up a new Confluent Cloud network.

  2. Obtain the AWS PrivateLink Service name.

    For certain target systems, you can retrieve the service name as part of the guided workflow while creating an Egress PrivateLink Endpoint in the next step.

  3. Create an Egress PrivateLink Endpoint in Confluent Cloud.

  4. [Optional] Create private DNS records for use with AWS VPC endpoints.

For service/connector-specific setup, see the target system networking supportability table.

Requirements and considerations

Review the following requirements and considerations before you set up an Egress PrivateLink Endpoint using AWS PrivateLink:

  • Egress PrivateLink Endpoints described in this document are available only for use with Dedicated clusters inside a “PrivateLink Access” type network.

    For use with Enterprise clusters, see Use AWS Egress PrivateLink Endpoints for Serverless Products on Confluent Cloud.

  • The AWS PrivateLink service must be configured to allow access from Confluent Cloud’s account or IAM role.

    Because allowlist granularity differs across SaaS providers, it is recommended that you use provider-specific controls (such as network rules) to secure access to the PrivateLink services and guard against confused-deputy issues.

  • Egress PrivateLink Endpoints can only be used by fully managed connectors.

  • AWS does not support cross-region connections with PrivateLink.

  • When using Egress PrivateLink Endpoints, additional charges may apply, for example, for certain connector configurations. For more information, see the following pricing information:

    • Confluent pricing
    • Fully-managed Kafka Connector pricing

Obtain AWS PrivateLink Service name

To make an AWS PrivateLink connection from Confluent Cloud to an external system, you must first obtain an AWS PrivateLink Service name for Confluent to establish a connection to.

Allowlist requirements for granting Confluent access vary by target system. It is recommended that you check each system’s allowlist mechanism to verify that Confluent Cloud can create an endpoint targeting that system.

For AWS services

Refer to the AWS documentation for a list of all AWS services that integrate with AWS PrivateLink and their associated service names.

For 3rd party services

Refer to the system provider’s documentation for how to obtain the AWS PrivateLink Service name and to determine allowlisting requirements.

The following are reference links for some popular system providers:

  • Snowflake
  • MongoDB Atlas
  • Elastic Cloud

For AWS PrivateLink Services you create

Refer to the AWS documentation for how to make your endpoint service available to service consumers.

For a step-by-step guide on setting up an Egress PrivateLink Endpoint to connect to self-managed services, see the Confluent Cloud connector documentation.

Manage access to your service

When you stand up your own PrivateLink Service, you may want to manage its permissions to restrict who can create endpoints to that service.

Confluent Cloud uses a unique IAM Role to create VPC endpoints for each environment from which you create an Egress PrivateLink Endpoint. We strongly recommend allowlisting only this principal to maintain an optimal security posture.

To obtain the IAM Role’s ARN:

  1. In the Confluent Cloud Console, in the Network Management tab, click the Confluent Cloud network.
  2. Copy the IAM Principal in the Egress Connections tab.
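With the IAM principal in hand, you can add it to your endpoint service's allowlist. The following Python sketch only builds the parameters for AWS EC2's ModifyVpcEndpointServicePermissions action (the service ID and role ARN are hypothetical placeholders; the actual boto3 call is shown commented out, since it requires AWS credentials):

```python
def build_allowlist_request(service_id: str, iam_principal_arn: str) -> dict:
    """Build parameters for the EC2 ModifyVpcEndpointServicePermissions call."""
    return {
        "ServiceId": service_id,
        "AddAllowedPrincipals": [iam_principal_arn],
    }


# Placeholder values; substitute your endpoint service ID and the
# IAM Principal copied from the Egress Connections tab.
params = build_allowlist_request(
    "vpce-svc-00000000000000000",
    "arn:aws:iam::123456789012:role/example-confluent-egress-role",
)

# With boto3 installed and AWS credentials configured, you would apply it as:
#   import boto3
#   boto3.client("ec2").modify_vpc_endpoint_service_permissions(**params)
print(params)
```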

Create an Egress PrivateLink Endpoint in Confluent Cloud

Confluent Cloud Egress PrivateLink Endpoints are AWS interface VPC Endpoints used to connect to AWS PrivateLink Services.

  1. In the Network Management tab of the desired Confluent Cloud environment, click the Confluent Cloud network to which you want to add the PrivateLink Endpoint. The Connection Type of the network you select must be “PrivateLink Access”.

  2. Click Create endpoint in the Egress connections tab.

  3. Click the service you want to connect to.

  4. Follow the guided steps to specify the field values, including:

    • Name: Name of the PrivateLink Endpoint.

    • PrivateLink service name: The name of the PrivateLink service you retrieved as part of this guided workflow or as described in Obtain AWS PrivateLink Service name.

    • Create an endpoint with high availability: Check the box to deploy the endpoint with high availability.

      Endpoints deployed with high availability have network interfaces deployed in multiple availability zones.

  5. Click Create to create the PrivateLink Endpoint.

  6. If there are additional steps for the specific target service, follow the prompt to complete the tasks, and then click Finish.

To use the REST API, send a request to create an endpoint:

HTTP POST request

POST https://api.confluent.cloud/networking/v1/access-points

Authentication

See Authentication.

Request specification

{
  "spec": {
    "display_name": "<The custom name for the endpoint>",
    "config": {
      "kind": "AwsEgressPrivateLinkEndpoint",
      "vpc_endpoint_service_name": "<The name of the PrivateLink service you wish to connect to>",
      "enable_high_availability": <Provision with high availability>
    },
    "environment": {
      "id": "<The environment ID where the endpoint belongs>",
      "environment": "<Environment of the referred resource, if env-scoped>"
    },
    "gateway": {
      "id": "<The gateway ID to which this belongs>",
      "environment": "<Environment of the referred resource, if env-scoped>"
    }
  }
}
  • vpc_endpoint_service_name: See Obtain AWS PrivateLink Service name.

  • enable_high_availability: Set to true to deploy an endpoint with high availability. The default is false.

    Endpoints deployed with high availability have network interfaces deployed in multiple availability zones.

  • gateway.id: Issue the following API request to get the gateway id.

    GET https://api.confluent.cloud/networking/v1/networks/{Confluent Cloud network ID}
    

    You can find the gateway id in the response under spec.gateway.id.
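The lookup in this step can be sketched in Python. The response fragment below is illustrative and shows only the path this page references (spec.gateway.id), not the full response shape:

```python
import json

# Illustrative fragment of a GET /networking/v1/networks/{id} response;
# only the fields relevant to the gateway lookup are shown.
network_response = json.loads("""
{
  "id": "n-00000",
  "spec": {
    "display_name": "prod-network",
    "gateway": {"id": "gw-00000"}
  }
}
""")

# The gateway ID lives under spec.gateway.id in the response.
gateway_id = network_response["spec"]["gateway"]["id"]
```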

An example request spec to create an endpoint:

{
  "spec": {
    "display_name": "prod-plap-egress-usw2",
    "config": {
      "kind": "AwsEgressPrivateLinkEndpoint",
      "vpc_endpoint_service_name": "com.amazonaws.vpce.us-west-2.vpce-svc-00000000000000000",
      "enable_high_availability": false
    },
    "environment": {
      "id": "env-00000"
    },
    "gateway": {
      "id": "gw-00000"
    }
  }
}
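The request above can be issued with any HTTP client. The following is a minimal Python sketch; the helper names and placeholder credentials are illustrative, and it assumes a Cloud API key and secret sent as HTTP basic auth:

```python
import base64
import json
import urllib.request

API_ENDPOINT = "https://api.confluent.cloud/networking/v1/access-points"

def build_endpoint_spec(name, service_name, gateway_id, environment_id,
                        high_availability=False):
    """Assemble the request body for creating an AWS Egress PrivateLink Endpoint."""
    return {
        "spec": {
            "display_name": name,
            "config": {
                "kind": "AwsEgressPrivateLinkEndpoint",
                "vpc_endpoint_service_name": service_name,
                "enable_high_availability": high_availability,
            },
            "environment": {"id": environment_id},
            "gateway": {"id": gateway_id},
        }
    }

def create_endpoint(spec, api_key, api_secret):
    """POST the spec with Cloud API key basic auth (credentials are placeholders)."""
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    req = urllib.request.Request(
        API_ENDPOINT,
        data=json.dumps(spec).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build the body from the example values above; create_endpoint() is only
# invoked once you supply real credentials.
spec = build_endpoint_spec(
    "prod-plap-egress-usw2",
    "com.amazonaws.vpce.us-west-2.vpce-svc-00000000000000000",
    gateway_id="gw-00000",
    environment_id="env-00000",
)
```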

Use the confluent network access-point private-link egress-endpoint create Confluent CLI command to create an Egress PrivateLink Endpoint:

confluent network access-point private-link egress-endpoint create [name] [flags]

The following are the command-specific flags:

  • --cloud: Required. The cloud provider. Set to aws.
  • --service: Required. Name of an AWS VPC endpoint service that you retrieved in Obtain AWS PrivateLink Service name.
  • --gateway: Required. Gateway ID.
  • --high-availability: Enable high availability for the AWS egress endpoint. Endpoints deployed with high availability have network interfaces deployed in multiple availability zones.

You can specify additional optional CLI flags described in the Confluent CLI command reference, such as --environment.

The following is an example Confluent CLI command to create an AWS Egress PrivateLink Endpoint with high availability:

confluent network access-point private-link egress-endpoint create \
  --cloud aws \
  --gateway gw-123456 \
  --service com.amazonaws.vpce.us-west-2.vpce-svc-00000000000000000 \
  --high-availability


Use the confluent_access_point resource to create an Egress PrivateLink Endpoint.

An example snippet of Terraform configuration:

resource "confluent_environment" "development" {
  display_name = "Development"
}

resource "confluent_access_point" "main" {
  display_name = "access_point"
  environment {
    id = confluent_environment.development.id
  }
  gateway {
    id = confluent_network.main.gateway[0].id
  }
  aws_egress_private_link_endpoint {
    vpc_endpoint_service_name = "com.amazonaws.vpce.us-west-2.vpce-svc-00000000000000000"
  }
}

Your Egress PrivateLink Endpoint status will transition from “Provisioning” to “Ready” in the Confluent Cloud Console when the endpoint has been created and can be used.

Once the endpoint is created, connectors provisioned against Kafka clusters in the same network can leverage the Egress PrivateLink Endpoint to access the external data.

Confluent Cloud exposes the VPC Endpoint ID for each of the above Egress PrivateLink Endpoints so that you can use it in various network-related policies, such as in an S3 bucket policy or Snowflake Network rule.
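For example, an S3 bucket policy can restrict access to requests that arrive through a specific VPC endpoint by using the aws:SourceVpce condition key. The bucket name and endpoint ID below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessViaEgressEndpointOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-00000000000000000"
        }
      }
    }
  ]
}
```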

Create a private DNS record in Confluent Cloud¶

Create private DNS records for use with AWS VPC endpoints.

Not all service providers set up public DNS records for use when connecting to them over AWS PrivateLink. When a service provider requires private DNS records in conjunction with AWS PrivateLink, you need to create the DNS records in Confluent Cloud.

Before you create a DNS Record, you need to first create an Egress PrivateLink Endpoint and use the Egress PrivateLink Endpoint ID for the DNS record.

AWS private DNS names are not supported.

When creating DNS records, Confluent Cloud creates a single wildcard (*) record that maps the domain name you specify to the DNS name of the VPC endpoint.

For example, in setting up DNS records for Snowflake, the DNS zone configuration will look like:

*.xy12345.us-west-2.privatelink.snowflakecomputing.com CNAME vpce-0cb12cd2dc02130cf-8s6uwimu.vpce-svc-03bc1ff023623a033.us-east-1.vpce.amazonaws.com TTL 60
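The effect of that wildcard record can be sketched as a tiny resolver: every hostname under the zone resolves to the same VPC endpoint DNS name. The names below come from the example above; fnmatch is only an approximation of DNS wildcard matching, used here for illustration:

```python
import fnmatch

# The single wildcard CNAME Confluent Cloud creates for the zone (example values).
ZONE_RECORD = {
    "*.xy12345.us-west-2.privatelink.snowflakecomputing.com":
        "vpce-0cb12cd2dc02130cf-8s6uwimu.vpce-svc-03bc1ff023623a033.us-east-1.vpce.amazonaws.com",
}

def resolve(hostname):
    """Return the CNAME target if the hostname matches the wildcard record."""
    for pattern, target in ZONE_RECORD.items():
        if fnmatch.fnmatch(hostname, pattern):
            return target
    return None
```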
  1. In the Network Management tab of your environment, click the Confluent Cloud network you want to add the DNS record to.
  2. In the Egress DNS tab, click Create DNS record.
  3. Specify the following field values:
    • Egress PrivateLink Endpoint: The ID of the Egress PrivateLink Endpoint you created in Create an Egress PrivateLink Endpoint.
    • Domain: The domain of the private link service you want to access. Get the domain value from the private link service provider, AWS, or a third-party provider.
  4. Click Save.

Send a request to create a DNS record associated with a PrivateLink Endpoint on a gateway:

HTTP POST request

POST https://api.confluent.cloud/networking/v1/dns-records

Authentication

See Authentication.

Request specification

{
  "spec": {
    "display_name": "<The name of this DNS record>",
    "domain": "<The fully qualified domain name of the external system>",
    "config": {
      "kind": "PrivateLinkAccessPoint",
      "resource_id": "<The ID of the endpoint that you created>"
    },
    "environment": {
      "id": "<The environment ID where this resource belongs to>",
      "environment": "<Environment of the referred resource, if env-scoped>"
    },
    "gateway": {
      "id": "<The gateway ID to which this belongs>",
      "environment": "<Environment of the referred resource, if env-scoped>"
    }
  }
}
  • domain: Get the value from the private link service provider, AWS, or a third-party provider.

  • gateway.id: Issue the following API request to get the gateway ID.

    GET https://api.confluent.cloud/networking/v1/networks/{Confluent Cloud network ID}
    

    You can find the gateway ID in the response under spec.gateway.id.

An example request spec to create a DNS record:

{
  "spec": {
    "display_name": "prod-dns-record1",
    "domain": "example.com",
    "config": {
      "kind": "PrivateLinkAccessPoint",
      "resource_id": "plap-12345"
    },
    "environment": {
      "id": "env-00000"
    },
    "gateway": {
      "id": "gw-00000"
    }
  }
}
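The body above can also be assembled programmatically, for example when chaining from an endpoint-creation response whose ID becomes the resource_id of the DNS record. A minimal sketch; the helper name is illustrative:

```python
def build_dns_record_spec(name, domain, endpoint_id, gateway_id, environment_id):
    """Assemble the request body for creating a DNS record that points at an
    existing Egress PrivateLink Endpoint (POST /networking/v1/dns-records)."""
    return {
        "spec": {
            "display_name": name,
            "domain": domain,
            "config": {
                "kind": "PrivateLinkAccessPoint",
                # The ID of the Egress PrivateLink Endpoint created earlier.
                "resource_id": endpoint_id,
            },
            "environment": {"id": environment_id},
            "gateway": {"id": gateway_id},
        }
    }

# Build the body from the example values above.
dns_spec = build_dns_record_spec(
    "prod-dns-record1", "example.com", "plap-12345", "gw-00000", "env-00000",
)
```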

Use the confluent network dns record create Confluent CLI command to create a DNS record:

confluent network dns record create [name] [flags]

The following are the command-specific flags:

  • --private-link-access-point: Required. Private Link Endpoint ID.
  • --gateway: Required. Gateway ID.
  • --domain: Required. Fully qualified domain name of the external system. Get the domain value from the private link service provider, AWS, or a third-party provider.

You can specify additional optional CLI flags described in the Confluent CLI command reference, such as --environment.

The following is an example Confluent CLI command to create a DNS record for an endpoint:

confluent network dns record create my-dns-record \
  --gateway gw-123456 \
  --private-link-access-point ap-123456 \
  --domain xy12345.us-west-2.privatelink.snowflakecomputing.com

Use the confluent_dns_record resource to create DNS records.

An example snippet of Terraform configuration:

resource "confluent_environment" "development" {
  display_name = "Development"
}

resource "confluent_dns_record" "main" {
  display_name = "dns_record"
  environment {
    id = confluent_environment.development.id
  }
  domain = "example.com"
  gateway {
    id = confluent_network.main.gateway[0].id
  }
  private_link_access_point {
    id = confluent_access_point.main.id
  }
}

Support for AWS PrivateLink Service configuration¶

Confluent Support can help with issues you may encounter when creating an Egress PrivateLink Endpoint to a specific service.

For any service-side problems, such as described below, Confluent is not responsible for proper AWS PrivateLink Service configuration or setup:

  • If you need help setting up an AWS PrivateLink Service for data systems running within your environment or VPC that you want to connect to from Confluent Cloud, contact AWS for configuration help and best practices.
  • If you need help configuring AWS PrivateLink Services for those managed by a third-party provider or service, contact that provider for compatibility and proper setup.

Next steps¶

Try Confluent Cloud on AWS Marketplace with $1000 of free usage for 30 days, and pay as you go. No credit card is required.
