Data Management
Supported Data Types and Formats
The RFC Connector supports both the Confluent JSON Converter and the Avro Converter. Using Avro for data serialization requires the RFC Source Connector to translate field names provided by an RFC/RFM into valid Avro names by replacing illegal characters with an underscore (`_`).
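For illustration, a minimal sketch of this kind of translation, assuming the standard Avro name rule (`[A-Za-z_][A-Za-z0-9_]*`); the method name is hypothetical, not taken from the connector:

```java
public class AvroNameSketch {
    // Minimal sketch: map an SAP field name to a valid Avro name.
    // Avro names must match [A-Za-z_][A-Za-z0-9_]*, so every other
    // character is replaced with an underscore.
    static String toAvroName(String sapName) {
        String cleaned = sapName.replaceAll("[^A-Za-z0-9_]", "_");
        // An Avro name must not start with a digit.
        return Character.isDigit(cleaned.charAt(0)) ? "_" + cleaned : cleaned;
    }

    public static void main(String[] args) {
        System.out.println(toAvroName("E_T_DATA/FIELD")); // -> E_T_DATA_FIELD
    }
}
```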
Data Type Mapping
SAP JCo defines internal data types in `com.sap.conn.jco.JCoMetaData`, each corresponding to one of the built-in types of SAP ABAP. The RFC Source Connector supports nested data structures and adheres to the following mapping of SAP data types to Kafka Connect schema types.
| JCo | Kafka Connect Schema Type | Java Data Type |
|---|---|---|
| TYPE_CHAR | STRING | java.lang.String |
| TYPE_DECF16 | Decimal | java.math.BigDecimal |
| TYPE_DECF34 | Decimal | java.math.BigDecimal |
| TYPE_DATE | Date | java.util.Date |
| TYPE_BCD | Decimal | java.math.BigDecimal |
| TYPE_FLOAT | FLOAT64 | java.lang.Double |
| TYPE_INT1 | INT16 | java.lang.Short |
| TYPE_INT2 | INT16 | java.lang.Short |
| TYPE_INT | INT32 | java.lang.Integer |
| TYPE_INT8 | INT64 | java.lang.Long |
| TYPE_BYTE | BYTES | java.nio.ByteBuffer |
| TYPE_NUM | STRING | java.lang.String |
| TYPE_XSTRING | STRING | java.lang.String |
| TYPE_TIME | Time | java.util.Date |
| TYPE_STRING | STRING | java.lang.String |
| TYPE_UTCLONG | INT64 | java.lang.Long |
| TYPE_UTCMINUTE | INT64 | java.lang.Long |
| TYPE_UTCSECOND | INT64 | java.lang.Long |
| TYPE_DTDAY | INT32 | java.lang.Integer |
| TYPE_DTWEEK | INT32 | java.lang.Integer |
| TYPE_DTMONTH | INT32 | java.lang.Integer |
| TYPE_TSECOND | INT32 | java.lang.Integer |
| TYPE_TMINUTE | INT16 | java.lang.Short |
| TYPE_CDAY | INT32 | java.lang.Integer |
| TYPE_STRUCTURE | STRUCT | java.util.Map<String, Object> |
| TYPE_TABLE | ARRAY | java.util.List<Struct> |
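As a sketch of how these mappings surface in the Kafka Connect schema API, the following builds a schema for a hypothetical structure with a CHAR, a BCD, and an INT field (the field names, schema name, and decimal scale are assumptions for illustration, not taken from a real RFM):

```java
import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class TypeMappingSketch {
    public static void main(String[] args) {
        // Hypothetical TYPE_STRUCTURE with three fields, mapped per the table above.
        Schema row = SchemaBuilder.struct().name("ExampleRow")
                .field("MATNR", Schema.STRING_SCHEMA) // TYPE_CHAR -> STRING
                .field("PRICE", Decimal.schema(2))    // TYPE_BCD  -> Decimal (scale 2)
                .field("COUNT", Schema.INT32_SCHEMA)  // TYPE_INT  -> INT32
                .build();

        // A TYPE_TABLE of such rows maps to an ARRAY of STRUCT.
        Schema table = SchemaBuilder.array(row).build();
        System.out.println(table);
    }
}
```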
Kafka Headers
The RFC Source Connector supports adding metadata to the Kafka message headers by setting `sap.rfc.headers.enable = 1` (default is `0`). The following header fields are supported:
| Name (String) | Value | Value Type |
|---|---|---|
| connector.version | The version of the connector | String |
| rfc.name | The name of the RFC | String |
| rfc.offset | The logical RFC offset | String |
| auth-id | The authentication identifier (user, alias_user, snc_myname, or x509cert); may be empty for some authentication mechanisms | String |
| jco.version | The version of JCo | String |
| jco.client.ashost | The SAP application server | String |
| jco.client.sysnr | The SAP system number | String |
| jco.client.mshost | The SAP message server | String |
| jco.client.msserv | The SAP message service | String |
| jco.client.r3name | The R/3 system name | String |
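Once enabled, these headers can be read on the consumer side. A minimal sketch, assuming a hypothetical topic name and that header values are written as UTF-8 string bytes:

```java
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;

public class HeaderSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "rfc-header-demo");
        // Headers are readable regardless of how keys/values are deserialized.
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("sap-rfc-topic")); // hypothetical topic
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<byte[], byte[]> record : records) {
                Header rfcName = record.headers().lastHeader("rfc.name");
                if (rfcName != null) {
                    // Assumption: header values are UTF-8 encoded strings.
                    System.out.println(new String(rfcName.value(), StandardCharsets.UTF_8));
                }
            }
        }
    }
}
```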
Optional Data Conversion Settings
- `decimal.mapping` can be used to transform DECIMAL types into other appropriate data types if needed. Setting `decimal.mapping = primitive` transforms decimals to `double` or `long`, depending on the scale (see the sketch after this list).
- SAP defines its own internal format for storing currency amounts, e.g. 1 Japanese yen is stored as 0.01. You can change the representation of currency amounts by setting the configuration property `currency.conversion`.
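A minimal sketch of what the `primitive` mapping implies, based on the description above; the exact scale handling is an assumption, not the connector's actual implementation:

```java
import java.math.BigDecimal;

public class DecimalMappingSketch {
    // Assumption: a decimal with scale <= 0 fits a long; anything else becomes a double.
    static Object toPrimitive(BigDecimal value) {
        return value.scale() <= 0 ? (Object) value.longValueExact()
                                  : (Object) value.doubleValue();
    }

    public static void main(String[] args) {
        System.out.println(toPrimitive(new BigDecimal("42")));   // 42 (long)
        System.out.println(toPrimitive(new BigDecimal("3.14"))); // 3.14 (double)
    }
}
```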
Single Message Transforms (SMTs)
Single Message Transforms (SMTs) allow for lightweight, real-time modifications of data as it passes through Kafka Connect. SMTs can be applied in source connectors before data is written to Kafka topics. The RFC Source Connector supports SMTs and has been successfully tested with a chain of two SMTs.
Use Cases for SMTs
- Data Filtering: Remove unnecessary fields or records based on conditions, focusing only on relevant data.
- Field Manipulation: Modify fields by renaming, masking sensitive data, or changing formats, ensuring consistency and compatibility.
- Field Enrichment: Add metadata or default values to provide more context to messages.
- Transformation Chains: Combine multiple SMTs for complex transformations.
- Routing and Partitioning: Dynamically change the Kafka topic or partition for records based on their contents.
Chaining multiple SMTs in this way allows for flexible and powerful data processing, enhancing the capabilities of data pipelines in Kafka Connect.
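As a sketch of such a chain using two standard Kafka Connect SMTs (the field name and topic prefix are hypothetical), the relevant portion of a connector configuration might look like this, expressed here as Java properties:

```java
import java.util.Properties;

public class SmtChainSketch {
    public static void main(String[] args) {
        Properties config = new Properties();
        // Chain two standard SMTs: mask a sensitive field, then reroute the topic.
        config.put("transforms", "mask,route");
        config.put("transforms.mask.type", "org.apache.kafka.connect.transforms.MaskField$Value");
        config.put("transforms.mask.fields", "CREDIT_CARD"); // hypothetical field name
        config.put("transforms.route.type", "org.apache.kafka.connect.transforms.RegexRouter");
        config.put("transforms.route.regex", "(.*)");
        config.put("transforms.route.replacement", "sap-$1"); // prefix every topic with "sap-"
        config.list(System.out);
    }
}
```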
Schema Registry Integration
The connector is fully compatible with Confluent Schema Registry, allowing seamless integration of data with Kafka topics that use Avro, JSON Schema, or Protobuf formats. This compatibility ensures that all schema definitions for messages are stored and managed centrally, simplifying the development and maintenance of applications that consume the data.
Advantages of Schema Registry Compatibility:
- Data Consistency: Enforces a well-defined schema for RFC messages, preventing issues like missing fields or type mismatches.
- Schema Evolution: Supports backward and forward schema compatibility, allowing you to update RFC structures without breaking existing consumers.
- Reduced Data Size: Avro and Protobuf serialization minimize the payload size, improving data transmission efficiency.
- Centralized Schema Management: Simplifies handling multiple RFC message formats by storing all schema versions in one place.
This integration enhances data governance and ensures robust handling of RFC messages in Kafka.
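For example, a sketch of the converter portion of a connector configuration using Avro with Schema Registry (the URL is a placeholder), again expressed as Java properties:

```java
import java.util.Properties;

public class ConverterConfigSketch {
    public static void main(String[] args) {
        Properties config = new Properties();
        // Serialize record values as Avro, with schemas managed centrally.
        config.put("value.converter", "io.confluent.connect.avro.AvroConverter");
        config.put("value.converter.schema.registry.url", "http://localhost:8081"); // placeholder
        config.list(System.out);
    }
}
```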
Message Schema
When a remote-enabled function module is called, the resulting message and schema include all parameters defined in the function module, including the `IMPORTING` parameters.
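As an illustration, for a hypothetical RFM with an `IMPORTING` date parameter and an `EXPORTING` table, the value schema would roughly contain both parameter groups (all names here are assumptions):

```java
import org.apache.kafka.connect.data.Date;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class MessageSchemaSketch {
    public static void main(String[] args) {
        // Hypothetical row structure of an EXPORTING table parameter.
        Schema row = SchemaBuilder.struct().name("ResultRow")
                .field("VALUE", Schema.STRING_SCHEMA)
                .build();

        // The message value carries IMPORTING and EXPORTING parameters alike.
        Schema value = SchemaBuilder.struct().name("ExampleRfm")
                .field("I_DATE", Date.SCHEMA)                        // IMPORTING parameter
                .field("E_RESULT", SchemaBuilder.array(row).build()) // EXPORTING table
                .build();
        System.out.println(value);
    }
}
```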