Data Management
Supported Data Types and Formats
The OData V2 Connectors support the Confluent JSON Converter as well as the Avro Converter.
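As a sketch, the converter choice is made through the standard Kafka Connect converter properties; a hypothetical configuration fragment (the class names are the commonly used converter implementations, not taken from this document):

```properties
# JSON with embedded schemas
value.converter = org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable = true

# Alternatively, Avro with Confluent Schema Registry (URL is a placeholder):
# value.converter = io.confluent.connect.avro.AvroConverter
# value.converter.schema.registry.url = http://localhost:8081
```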
Data Type Mapping Source Connector
The following table gives an overview of the data type mapping from OData V2 data types to Kafka Connect data types applied by the source connector.
OData V2 | Kafka Connect Schema Type | Java Data Type |
---|---|---|
Bit | INT8 | java.lang.Byte |
Uint7 | INT8 | java.lang.Byte |
EdmSByte | INT8 | java.lang.Byte |
EdmByte | INT16 | java.lang.Short |
EdmGuid | STRING | java.lang.String |
EdmTime | TIME | java.util.Date |
EdmInt16 | INT16 | java.lang.Short |
EdmInt32 | INT32 | java.lang.Integer |
EdmInt64 | INT64 | java.lang.Long |
EdmBinary | BYTES | java.nio.ByteBuffer |
EdmDouble | FLOAT64 | java.lang.Double |
EdmSingle | FLOAT32 | java.lang.Float |
EdmString | STRING | java.lang.String |
EdmBoolean | BOOLEAN | java.lang.Boolean |
EdmDecimal | DECIMAL | java.math.BigDecimal |
EdmDateTime | TIMESTAMP | java.util.Date |
EdmDateTimeOffset | TIMESTAMP | java.util.Date |
EdmStructuralType | STRUCT | java.util.Map&lt;String, Object&gt; |
Data Type Mapping Sink Connector
The following table gives an overview of the data type mapping from Kafka Connect data types to OData V2 data types applied by the sink connector. The default conversion type is the first one listed in the Kafka Connect Schema Types column.
OData V2 | Kafka Connect Schema Types |
---|---|
Bit | INT8 |
EdmSByte | INT8, INT16, INT32, INT64 |
EdmByte | INT8, INT16, INT32, INT64 |
EdmGuid | STRING |
EdmTime | TIME, TIMESTAMP, DATE, INT64 |
EdmInt16 | INT16, INT8, INT32, INT64 |
EdmInt32 | INT32, INT8, INT16, INT64 |
EdmInt64 | INT64, INT8, INT16, INT32 |
EdmBinary | BYTES |
EdmDouble | FLOAT64, FLOAT32, DECIMAL, INT8, INT16, INT32, INT64 |
EdmSingle | FLOAT32, FLOAT64, DECIMAL, INT8, INT16, INT32, INT64 |
EdmString | STRING |
EdmBoolean | BOOLEAN |
EdmDecimal | DECIMAL, FLOAT32, FLOAT64, INT8, INT16, INT32, INT64 |
EdmDateTime | TIMESTAMP, DATE, INT64 |
EdmDateTimeOffset | TIMESTAMP, DATE, INT64 |
EdmStructuralType | STRUCT |
Converting to OData types from Kafka Connect Schema Types other than the default conversion type may result in inaccurate data or loss of information.
For instance, an INT32 value that falls outside the range of EdmInt16 cannot be represented accurately as EdmInt16.
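The EdmInt16 case can be illustrated in plain Java, using the Java types from the source mapping table (a standalone sketch, not connector code):

```java
// Standalone sketch: writing an INT32 value into an EdmInt16 field behaves
// like Java's narrowing cast from int to short (range -32768..32767).
public class NarrowingDemo {
    public static void main(String[] args) {
        int fits = 1234;
        int overflows = 100_000;

        short a = (short) fits;       // value preserved
        short b = (short) overflows;  // silent wrap-around to a wrong value

        System.out.println(a); // prints 1234
        System.out.println(b); // prints -31072
    }
}
```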
Optional Data Conversion Settings
You can optionally transform DECIMAL types to other appropriate data types if needed. Setting decimal.mapping = primitive transforms decimals to double or long, depending on the scale.
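As a minimal configuration fragment (the exact scale rule for choosing long over double is not specified here):

```properties
# Map DECIMAL values to primitive types instead of java.math.BigDecimal.
# Depending on the scale, values become double or long.
decimal.mapping = primitive
```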
Kafka Headers
The OData V2 source connector supports inserting metadata into the Kafka message headers by setting sap.odata.headers.enable = 1
(default is 0). The following header fields are supported:
Header Field | Explanation | Value Type |
---|---|---|
absolute URL | The currently used absolute URL | String |
record index | Index that identifies the message within the response | Integer |
odata.timestamp | Timestamp of the response | String |
odata.version | The OData version in use | String |
odata.oauth.client.id | OAuth Client ID | String |
odata.oauth.redirect | OAuth Redirect Link | String |
odata.oauth.token.endpoint | OAuth Token Endpoint | String |
connector version | The version of the connector | String |
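Enabling these headers is a single source connector property, as named above:

```properties
# Insert the metadata fields listed above into each record's Kafka headers
# (0 = disabled, the default; 1 = enabled).
sap.odata.headers.enable = 1
```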
Single Message Transforms (SMTs)
Single Message Transforms (SMTs) allow for lightweight, real-time modifications of data as it passes through Kafka Connect. SMTs can be applied in source connectors before writing data to Kafka topics or in sink connectors before sending data to external systems.
Use Cases for SMTs
- Data Filtering: Remove unnecessary fields or records based on conditions, focusing only on relevant data.
- Field Manipulation: Modify fields by renaming, masking sensitive data, or changing formats, ensuring consistency and compatibility.
- Field Enrichment: Add metadata or default values to provide more context to messages.
- Transformation Chains: Combine multiple SMTs for complex transformations.
- Routing and Partitioning: Dynamically change the Kafka topic or partition for records based on their contents.
The OData V2 Connectors support chaining multiple SMTs, enabling flexible and powerful data processing in Kafka Connect pipelines.
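As an illustrative sketch, a chain of two of the SMTs bundled with Kafka Connect could be configured like this (the transform class names are the standard Apache Kafka SMTs; the alias and field names are placeholders):

```properties
# Chain of two SMTs applied in order: enrich, then mask.
transforms = addSource, maskSecret

# Field Enrichment: add a static metadata field to each record value.
transforms.addSource.type = org.apache.kafka.connect.transforms.InsertField$Value
transforms.addSource.static.field = origin
transforms.addSource.static.value = odata-v2

# Field Manipulation: mask a sensitive field in the record value.
transforms.maskSecret.type = org.apache.kafka.connect.transforms.MaskField$Value
transforms.maskSecret.fields = credit_card
```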
Schema Registry Integration
The OData V2 Connectors are fully compatible with Confluent Schema Registry, enabling seamless integration of data with Kafka topics using Avro, JSON Schema, or Protobuf formats. This compatibility ensures that all schema definitions are centrally stored and managed, simplifying the development and maintenance of applications that consume the data.
Advantages of Schema Registry Compatibility
- Data consistency: Enforces a well-defined schema, preventing issues like missing fields or type mismatches.
- Schema evolution: Supports backward and forward schema compatibility, allowing updates to schemas without breaking existing consumers.
- Reduced data size: Avro and Protobuf serialization minimize payload size, improving data transmission efficiency.
- Centralized schema management: Simplifies handling multiple message formats by storing all schema versions in one place.
This integration enhances data governance and ensures reliable handling of messages in Kafka.
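A hypothetical converter setup for Avro with Schema Registry (the converter class and property names are Confluent's standard ones; the URL is a placeholder):

```properties
# Serialize keys and values as Avro, with schemas stored in Schema Registry.
key.converter = io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url = http://localhost:8081
value.converter = io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url = http://localhost:8081
```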