Quickstart guide
The following section contains two sets of guides: one for setting up the OData V2 Source Connector on a local Kafka Connect instance in standalone mode, and another for running both OData connectors (source and sink) on a Confluent platform inside Docker.
To use the full functionality of the connectors, we recommend the Confluent Platform.
Kafka Connect standalone
Synopsis
This quickstart guide shows you how to set up the OData V2 Source Connector on a local Kafka Connect instance in standalone mode using Apache Kafka, extracting data from an existing OData v2 service. The publicly available OData service used for this scenario is the Northwind V2 service. It does not require authentication for read operations.
Preliminary setup
- Download and extract Apache Kafka
- Copy and extract the OData v2 connectors package into the Kafka Connect plugins directory
Configuration
- Edit the contents of the file <KAFKA_ROOT>/config/connect-standalone.properties like this:

  ```
  bootstrap.servers=localhost:9092
  key.converter=org.apache.kafka.connect.json.JsonConverter
  value.converter=org.apache.kafka.connect.json.JsonConverter
  key.converter.schemas.enable=true
  value.converter.schemas.enable=true
  offset.storage.file.filename=/tmp/connect.offsets
  offset.flush.interval.ms=10000
  plugin.path=/kafka_2.12-2.0.0/plugins
  ```

  Note: Make sure the plugin path exists.
- Extract the OData V2 Source Connector property template etc/odata-source-connector.properties from the connector package and copy it to <KAFKA_ROOT>/config/odata-source-connector.properties
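For orientation, every Kafka Connect property file begins with the generic framework settings sketched below; the `name` value is an arbitrary example, the class name may need its fully qualified package prefix, and all connector-specific settings (service URL, entity sets, topics, etc.) should be taken from the bundled template, as their exact keys are not reproduced here:

```properties
# Generic Kafka Connect settings (illustrative skeleton only)
name=odata-v2-source
connector.class=OData2SourceConnector
tasks.max=1
```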
Execution
- Start a local ZooKeeper instance from the shell, e.g. for Windows OS type:

  ```
  cd <KAFKA_ROOT>
  set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:<KAFKA_ROOT>/config/tools-log4j.properties
  bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
  ```
- Start a local Kafka server instance from another shell, e.g. for Windows OS type:

  ```
  bin\windows\kafka-server-start.bat .\config\server.properties
  ```
- Start a simple Kafka consumer from another shell, e.g. for Windows OS type:

  ```
  bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic Order_Details --from-beginning
  ```
- Start a local standalone Kafka Connect instance and execute the OData V2 Source Connector from another shell, e.g. for Windows OS type (the logging output will be written to the file log.txt):

  ```
  bin\windows\connect-standalone.bat .\config\connect-standalone.properties .\config\odata-source-connector.properties > log.txt 2>&1
  ```
Switch to the Kafka consumer shell. If everything was successful, you should see JSON representations of the Northwind OData order details messages, together with their schemas, printed to the standard console output.
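Because schemas are enabled in the JSON converter, each record printed by the console consumer is wrapped in a schema/payload envelope. A trimmed, illustrative record for one order detail might look like this (field list abridged, values are sample Northwind data; the exact schema emitted by the connector may differ):

```json
{
  "schema": {
    "type": "struct",
    "fields": [
      { "type": "int32", "optional": false, "field": "OrderID" },
      { "type": "int32", "optional": false, "field": "ProductID" }
    ]
  },
  "payload": {
    "OrderID": 10248,
    "ProductID": 11
  }
}
```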
Logging
Check the log output by opening the file log.txt in an editor of your choice, or for Windows OS just type:

```
type log.txt
```
Confluent Platform
Synopsis
This section shows how to launch the OData V2 connectors on a Confluent Platform running locally within a Docker environment.
Preliminary setup
- Make sure Docker Engine and Docker Compose are installed on the machine where you want to run the Confluent Platform; Docker Desktop, available for Mac and Windows, includes both
- Download and launch a ready-to-go Confluent Platform Docker image as described in Confluent Platform Quickstart Guide
- Ensure that the machine the Confluent Platform is running on has a network connection to the publicly available (read-only) Northwind V2 and (read/write) OData V2 services.
Connector installation
The OData v2 connectors can either be installed manually or through the Confluent Hub Client.
In both scenarios it is beneficial to use a volume to easily transfer the connector files into the Kafka Connect service container. If you run Docker on a Windows machine, make sure to add a new system variable COMPOSE_CONVERT_WINDOWS_PATHS and set it to 1.
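Such a volume can be declared on the Kafka Connect service in the platform's docker-compose.yml, for instance like this (the ./connector-packages host directory and the container path are placeholders to adapt to your setup):

```yaml
services:
  connect:
    volumes:
      # Bind-mount a host directory so the connector package is visible
      # inside the Kafka Connect container.
      - ./connector-packages:/tmp/connector-packages
```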
Manual Installation
- Unzip the zipped connector package init-kafka-connect-odatav2-x.x.x.zip
- Move the unzipped connector folder into the configured CONNECT_PLUGIN_PATH of the Kafka Connect service
- Navigate to the directory containing the docker-compose.yml file of the Confluent Platform and use Docker Compose to start the platform:

  ```
  docker-compose up -d
  ```
Confluent CLI
Install the zipped connector package init-kafka-connect-odatav2-x.x.x.zip using the Confluent Hub Client from outside the Kafka Connect Docker container:

```
docker-compose exec connect confluent-hub install {PATH_TO_ZIPPED_CONNECTOR}/init-kafka-connect-odatav2-x.x.x.zip
```
Further information on the Confluent CLI can be found in the Confluent CLI Command Reference
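Regardless of the installation method, you can verify that the connector plugin was picked up after restarting the Connect container by querying the Kafka Connect REST API (reachable on port 8083 by default):

```shell
# Lists all installed connector plugins; the OData V2 source and sink
# connector classes should appear in the JSON output once the
# installation succeeded.
curl -s http://localhost:8083/connector-plugins
```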
Connector configuration
The OData v2 connectors can be configured and launched using the control-center service of the Confluent Platform.
- In the control-center (default: localhost:9091) select a Connect Cluster in the Connect tab
- Click the “Add connector” button and choose either OData2SourceConnector or OData2SinkConnector.
- Provide a name for the connector and complete any additional required configuration.
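As an alternative to the Control Center UI, connectors can also be created by POSTing a JSON configuration to the Kafka Connect REST API. The skeleton below shows only the generic Kafka Connect keys with an example connector name; the connector-specific properties and the fully qualified connector class name must be taken from the bundled property templates:

```json
{
  "name": "odata-v2-source",
  "config": {
    "connector.class": "OData2SourceConnector",
    "tasks.max": "1"
  }
}
```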
OData V2 Source Connector
To extract data from the Northwind V2 service using the OData V2 Source Connector, transfer the properties from the OData V2 Source Connector property template etc/odata-source-connector.properties in the connector package to the Control Center user interface and launch the connector.
OData V2 Sink Connector
To export data from the OData V2 Sink Connector to the OData V2 service, you need to follow these steps:
- Open the OData V2 service in a browser to obtain a temporary service instance URL that allows write operations. The URL will include a temporary service ID that replaces the S(readwrite) part of the URL.
- Transfer the properties from the OData V2 Sink Connector property template etc/odata-sink-connector.properties in the connector package to the Confluent Control Center user interface. Remember to include the service ID in the service path.
- Launch the sink connector.
- To test the sink connector, you need sample data. You can obtain sample data for all entities from the target OData service instance. You can then import this data into Kafka using the OData V2 Source Connector.
The public OData V2 service has restrictions on write operations, which include:
- The service only supports content-type set to application/atom+xml.
- The service has a limit of 50 entities per entity set that can be written at once.
- String properties have a maximum length of 256 characters for write operations.
Hints
- When you use a single connector instance to poll data from different service entity sets, an additional configuration block for an OData source appears once you have provided the necessary information for the previous OData source block. Keep in mind, however, that the number of configurable sources per connector instance is limited in the Confluent Control Center UI. Beyond that limit, sources can still be configured in the Additional Properties section of the UI, albeit without recommendations. The same applies to the number of query filter conditions and OData sinks.