Key Features
Delivery Semantics of the Source Connector
The OData V2 protocol lacks standard methods for identifying unique datasets, handling delta data, and tracking extractions. Although Change Data Capture (CDC) is officially part of OData V4, some OData V2 services, along with Apache Olingo V2, implement a lightweight change tracking specification. This allows the source connector to implement delta mode and support exactly-once delivery in certain configurations.
While both the source and sink connectors offer at-least-once delivery semantics, the source connector can additionally ensure exactly-once delivery in delta mode, or in full mode when the execution period is set to -1 (one-time polls).
Enabling Exactly-Once in Kafka
To enable exactly-once support, set the Kafka worker configuration property `exactly.once.source.support` to `preparing` or `enabled`. Refer to KIP-618 for details.
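As a sketch, the worker-level setting goes into the Connect worker's properties file (the property name is defined by KIP-618; the value shown is one of the two allowed options):

```properties
# Kafka Connect worker configuration (distributed mode).
# Use "preparing" while rolling the cluster, then switch to "enabled"
# to activate exactly-once support for source connectors.
exactly.once.source.support=enabled
```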
When deploying a source connector, configure the `exactly.once.support` property as follows:
- `required`: Enforces exactly-once delivery, with a preflight check. The connector fails to start if it cannot support exactly-once delivery.
- `requested` (default): No preflight check is performed; the connector starts even if exactly-once delivery cannot be guaranteed.
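A minimal connector-level sketch, assuming a standalone properties file (the connector name and class below are placeholders, not the connector's actual class name):

```properties
# Source connector configuration (sketch; connector.class is a placeholder)
name=odata-source
connector.class=com.example.ODataSourceConnector
# Fail fast at startup if exactly-once delivery cannot be guaranteed:
exactly.once.support=required
```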
Operation Modes of the Source Connector
The source connector operates in two modes: full mode and delta mode. You can set the mode via the `track-changes` property.
- Full Mode: Periodically requests data from an OData V2 service’s entity set.
- Delta Mode: Uses change-tracking to extract only new, changed, or deleted records after an initial data request. Deleted records include key properties and a RECORDMODE or ODQ_CHANGEMODE field set to “D”, with a DELETED_AT timestamp.
After each extraction, the source connector waits for the time specified in `exec-period` before the next extraction.
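Putting the two mode-related properties together, a sketch might look like this (the property names are taken from this document; the time unit of milliseconds and the boolean value format are assumptions):

```properties
# Delta mode: extract only new, changed, or deleted records after the initial request.
# Set to false for full mode (periodic full extraction of the entity set).
track-changes=true
# Wait time between extractions (assumed to be milliseconds); -1 = one-time poll.
exec-period=3600000
```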
Offset Handling
Each message’s offset contains:
- The URL from which it was requested,
- A record number identifying the message’s position in the response,
- Optionally, a second URL for server-side paging (next link) or change tracking (delta link).
If a failure occurs, the connector retrieves the latest committed offsets from Kafka Connect and resumes from the last processed request. This ensures no data loss, and duplicate messages are filtered out. In delta mode, at-least-once semantics are maintained, while in full mode the source system must guarantee data consistency across calls.
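Conceptually, a committed offset could be pictured as follows (field names are illustrative only, not the connector's actual offset schema; the three parts correspond to the list above):

```json
{
  "requestUri": "https://example.com/odata/EntitySet",
  "recordNumber": 412,
  "nextOrDeltaLink": "https://example.com/odata/EntitySet?$skiptoken=..."
}
```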
Delivery Semantics of the Sink Connector
The OData V2 protocol lacks standard methods for identifying unique datasets, handling delta data, capturing changes, and tracking data extractions. To address this, the sink connector uses Kafka’s message commit functionality, providing at-least-once delivery semantics.
How It Works
- The connector stores consumed records in a task cache and delivers them at the interval specified by the Connect worker configuration property `offset.flush.interval.ms`.
- You can trigger deliveries outside of this interval based on the number of records in the cache using the `sap.odata.flush.trigger.size` property.
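A sketch of the two settings side by side (the numeric values are arbitrary examples, not recommendations):

```properties
# Connect worker configuration: flush cached records every 60 seconds.
offset.flush.interval.ms=60000

# Sink connector configuration: additionally flush whenever the task cache
# reaches this many records, independent of the interval above.
sap.odata.flush.trigger.size=500
```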
Operation Modes of the Sink Connector
The sink connector supports three operation modes: Insert, Update, and Dynamic, set by the configuration property `sap.odata.operation.mode`. Currently, only OData V2 CRUD operations are supported, and there is no Upsert mode. However, upsert functionality can be implemented by the OData service provider.
- Insert Mode:
  - Uses HTTP POST to export entries.
  - Entries already existing in the target system may result in errors, depending on the OData service.
  - Supports deep inserts, allowing insertion of nested business objects and their associations. Learn more about deep inserts here.
- Update Mode:
  - Uses HTTP PUT to update entities.
  - Fails if the entry doesn’t exist in the target system.
  - Requires all fields to be present for the update; partial updates (HTTP MERGE) are not supported.
  - Deep updates are not supported by OData and will result in exceptions.
- Dynamic Mode:
  - The HTTP method (POST or PUT) is determined by the record field value specified in `sap.odata.operation.mode.property`:
    - ‘I’ = HTTP POST (Insert)
    - ‘U’ = HTTP PUT (Update)
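A dynamic-mode sketch (the mode value `dynamic` and the field name `RECORDMODE` are assumptions for illustration; check the connector reference for the exact accepted values):

```properties
# Sink connector configuration: choose POST vs. PUT per record.
sap.odata.operation.mode=dynamic
# Record field whose value ('I' or 'U') selects the HTTP method:
sap.odata.operation.mode.property=RECORDMODE
```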
Delete Operation
- Enable the `sap.odata.enable.delete` property to allow deleting records in the target system when a Kafka tombstone record (a key with a null value) is consumed.
- Tombstone records are converted into HTTP DELETE requests, or ignored if this property is disabled.
- Ensure all key properties are included in the record key, which must be a struct.
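Enabling delete handling is a single switch in the sink connector configuration (boolean value format assumed):

```properties
# Convert consumed tombstone records (non-null struct key, null value)
# into HTTP DELETE requests against the target OData service.
sap.odata.enable.delete=true
```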