Kafka Connect - ODP Source Connector

The ODP Source Connector is a Kafka Connector designed to extract data from SAP® Operational Data Provisioning sources. The connector builds upon i-OhJa, a collection of libraries and components written in Scala for interaction with SAP® systems.

Dependencies

The connector depends on the SAP® Java Connector 3.1 SDK library to connect to SAP® systems. To run the ODP source connector, you need to provide a copy of the SAP® JCo library v3.1.8 (jar and native library) on the classpath or in the plugin path configured for Kafka Connect.

Note

JCo needs to be obtained separately from the SAP® Marketplace. For more detailed information about licensing terms and how to obtain a license visit the SAP® FAQ and the SAP® connectors’ homepage.

License

See the licenses for INIT's evaluation license (DE and EN) and for more information on the dependencies' licenses.

SAP® system requirements

For a comprehensive overview of ODP and a list of supported ODP extractors, please refer to the ODP Introduction and ODP enabled extractors pages.

Here are some crucial SAP® support notes to consider:

ODP contexts

A context represents a source of ODPs. Context identifiers exist for all SAP® technologies whose analytical views can be exposed as ODPs. Please refer to the relevant links and support notes for information on the specific prerequisites and configuration procedures for your ODP context. Currently, the following ODP-contexts are available (depending on release):

Installation

Packaging

The ODP source connector package name is: init-kafka-connect-odp-<connector version>.zip

The zip archive includes one folder called init-kafka-connect-odp-<connector version>, which itself contains the nested folders lib, etc, assets and doc.

  • lib/ contains the uber Java archive that needs to be extracted into the plugin.path of Kafka Connect.
  • etc/ contains a sample properties file for the connector that can be supplied as an argument in the CLI during startup of Kafka Connect or by upload to the Confluent Control Center.
  • doc/ contains more detailed documentation about the connector, such as licenses, the readme, and the configuration reference.
  • assets/ contains media files like icons and company logos.

Manual installation

Put the connector jar and the SAP® JCo jar together with its native libraries in the configured plugin path of Kafka Connect.
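
For example, assuming Kafka Connect is configured with plugin.path=/usr/share/kafka/plugins (the path is an example and depends on your environment), the installation could look like this:

# example paths — adjust to your environment
mkdir -p /usr/share/kafka/plugins/init-kafka-connect-odp
cp init-kafka-connect-odp-<connector version>/lib/*.jar /usr/share/kafka/plugins/init-kafka-connect-odp/
cp sapjco3.jar libsapjco3.so /usr/share/kafka/plugins/init-kafka-connect-odp/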

Confluent CLI

To install the connector on the Confluent Platform, you can use the command line installation interface to install the connector from a local file system, see Confluent CLI Command Reference. Additionally, you need to manually copy the SAP® JCo jar and its native libraries to the lib directory of the connector’s installation path.

Source system configuration

The ODP connector is fully compatible with the configuration user interface in Confluent Control Center. Compared to using properties files, the configuration UI in Confluent Control Center offers a wider range of features, such as suggested properties, extensive property value recommendations, incremental visibility of applicable configurations, and a rich set of interactive validations.

Once the required fields for an ODP source are entered in the configuration UI, a new configuration group for an additional source will appear. To ensure optimal performance, the maximum number of displayed recommendations in the Confluent Control Center UI is limited to 1000.

For more information, please refer to Configuration Details or the configuration file (doc/source/configuration.html) in the connector package.

Connection

The ODP Source Connector includes configuration properties from SAP® JCo, which are identified by the prefix jco. For a detailed description of these properties, please refer to the Java documentation of the com.sap.conn.jco.ext.DestinationDataProvider interface. The JCo JavaDoc can be found within the sapjco3.jar package.

A basic configuration for a SAP® JCo client destination is as follows:

# SAP Netweaver application host DNS or IP
jco.client.ashost = 127.0.0.1
# SAP system number
jco.client.sysnr = 20
# SAP client number
jco.client.client = 100
# SAP RFC user
jco.client.user = user
# SAP user password
jco.client.passwd = password

Rather than specifying the application host directly using the jco.client.ashost property, one can use jco.client.mshost to utilize the Message Server for load balancing within SAP®.
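
A load-balanced destination configuration could look like the following sketch, replacing the direct ashost setting; the system ID and logon group values are placeholders:

# SAP message server host DNS or IP
jco.client.mshost = 127.0.0.1
# SAP system ID (placeholder)
jco.client.r3name = ABC
# logon group (placeholder)
jco.client.group = PUBLIC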

To enable encryption and data integrity for communication between the connector and SAP®, one can use the SAP® Secure Network Connection (SNC) and its corresponding JCo configuration properties.
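
A minimal SNC-enabled destination might add JCo properties like the following; the library path and SNC names are placeholders:

# activate SNC
jco.client.snc_mode = 1
# path to the SNC library (placeholder)
jco.client.snc_lib = /usr/sap/sec/libsapcrypto.so
# SNC names of the connector and the communication partner (placeholders)
jco.client.snc_myname = p:CN=connector, O=ORG, C=DE
jco.client.snc_partnername = p:CN=SAP01, O=ORG, C=DE
# quality of protection (9 = maximum)
jco.client.snc_qop = 9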

The connector supports various authentication types, including user/password, SNC Single Sign-On (SSO) using X509 certificates, and SAP® cookie v2 logon ticket.

For TCP/IP ports utilized by SAP® RFC, please refer to the TCP/IP ports of all SAP® products. The default ports for RFC communication are as follows:

  • 33<NN> (unencrypted)
  • 48<NN> (encrypted)

To establish communication between the connector and the SAP® system, and for authorization management, you need an SAP® user. The SAP® user must be of type Communications or Dialog; to save SAP® dialog resources, the Communications type is recommended. The user needs at least the following authorizations for extracting data from, e.g., a SAPI DataSource:

  • Object S_RFC:
    ACTVT: 16
    RFC_NAME: RFCPING, SMLG_GET_DEFINED_GROUPS, RFC_GET_FUNCTION_INTERFACE, RFC_METADATA_GET
    RFC_TYPE: FUNC

  • Object S_RFC:
    ACTVT: 16
    RFC_NAME: RFC1, RODPS_REPL, BUS1090
    RFC_TYPE: FUGR

  • Object S_BTCH_JOB:
    JOBACTION: RELE
    JOBGROUP: *

  • Object S_RO_OSOA:
    ACTVT: 03
    OLTPSOURCE: [odp names]
    OSOAAPCO: *
    OSOAPART: DATA, DEFINITION

  • Object S_TABU_NAM:
    ACTVT: 03
    TABLE: V_CURC

For more detailed information, see the following SAP® support notes:

ODP source configuration

A minimal connector ODP configuration looks like this:

# The remaining configs are specific to the ODP source connector.
# Unique subscriber name used to subscribe to ODP sources in SAP.
# This identifier will be used to calculate delta requests.
sap.odp.subscriber-name = OhJaODPKafkaConnector
sap.odp#00.name = Test
sap.odp#00.context = SAPI
sap.odp#00.topic = ODPSAPITEST

Since ODP is a publish/subscribe oriented component with potentially multiple subscribers, each subscriber must provide a unique name, referred to as sap.odp.subscriber-name, specifically for delta extractions. This name serves as the primary key for tracking pointers that indicate which delta data has already been successfully extracted.

The name and context properties must correspond to the ODP name and ODP context configured in SAP®. Within the ODP framework, the context refers to an SAP® repository that maps its metadata into the framework and can be compared to a schema in a database. The available contexts vary depending on the source system. To get a dropdown list of valid values in the Confluent Control Center, a prefix must be entered. A list of possible contexts can be found above in section ODP contexts.

topic defines the Kafka output topic the connector producer will use to publish extracted data.

Optional data conversion settings

  • sap.odp.decimal.mapping can be used to transform DECIMAL types to other appropriate data types if needed. sap.odp.decimal.mapping = primitive will transform decimals to double or long, depending on the scale.
  • SAP® defines its own internal format for storing currency amounts, e.g., 1 Japanese yen is stored as 0.01. You can change the representation of currency amounts by setting the configuration property sap.odp.currency.conversion.
  • sap.odp.numeric.mapping can be used to transform ABAP types d (date), t (time), and n (numeric varchar) to other data types like string. This can be useful if your source contains fields of these types with invalid values. A combined sketch follows this list.
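
A combined sketch of these settings; the value spellings for sap.odp.currency.conversion and sap.odp.numeric.mapping are assumptions, see the configuration reference for the authoritative values:

# map decimals to double or long depending on the scale
sap.odp.decimal.mapping = primitive
# convert internal currency amounts to their external representation (value is a placeholder)
sap.odp.currency.conversion = 1
# map ABAP types d, t and n to strings (value is an assumption)
sap.odp.numeric.mapping = string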

Parallelism

The parallelism of the ODP source connector is defined by assigning an ODP to exactly one worker task. This restriction is mainly due to the Connect API: the connector could handle an ODP data load request package-wise in multiple tasks, but by the time the poll() method is called to fetch new data from the source, the tasks have already been created, and a rebalance would be required to change the task assignments. If the number of configured ODP extractions exceeds the maxTasks configuration or the number of available tasks, a single task will be responsible for extracting data from multiple ODPs. Adding tasks to scale the connector therefore only makes sense if the number of configured ODP sources exceeds the number of available tasks. Moreover, scaling by adding topic partitions is not feasible, since only one partition is used to ensure sequential order.

Headers

The connector supports inserting metadata information into the Kafka message headers by setting sap.odp.headers.enable = 1 (default). The following header fields are supported:

Header name Value Value type
odp.name ODP name String
odp.context ODP context String
odp.subscriber Subscriber name String
odp.request ODP request ID (TSN) String
odp.package ODP request package number Integer
odp.record ODP request record number (starting with 0) Integer
numeric.mapping numeric.mapping configuration String
decimal.mapping decimal.mapping configuration String
  • The ODP request ID or pointer ID corresponds to the pattern yyyyMMddHHmmss.XXXXXXXXX. See transaction ODQMON and tables RODPS_REPL_TID and RODPS_REPL_RID for more details about the mapping between RID, TSN, and pointer.
  • The package number is a consecutive number starting from 0000000. A unique package ID is composed of the ODP request ID and the package number. The package number equals the ODQ_UNITNO seen in ODQMON minus one.
  • An ODP dataset or record number is a consecutive number starting from 0000000000. A unique record ID is composed of the ODP request ID, the package number, and the corresponding record number. The record number equals the ODQ_RECORDNO seen in ODQMON minus one.

Offset handling

Whenever the task method poll() is called, ODP delta requests are generated in the source system. These requests comprise all the data available since the last poll and up to the current time. A delta request can be divided into multiple packages by setting the configuration property package-size, which specifies the maximum number of bytes per package. Each record in a package of a delta request is assigned a unique record number, and the connector employs a request identifier, package number, and record number as keys for offset handling.

Commitments to the source system can only be issued on request level. If any issues or task rebalances occur, the connector retrieves the latest offsets from Kafka Connect and re-extracts the last processed requests from SAP® to prevent data loss. Any duplicated datasets that are extracted again are then filtered based on the most recent offset information. This ensures that no duplicate messages are stored in the output topic when the SourceRecords are passed to the producer.

Exactly-Once delivery semantics

The ODP source connector supports exactly-once delivery semantics via exactly.once.support and transaction.boundary=poll; see KIP-618 for more details. Exactly-once is restricted to ODP data source extractions that are either configured for delta initialization or configured for full extraction but executed only once.

The ODP API includes functionalities for committing delta data requests and tracking committed requests per subscriber. If any issues arise, the source connector will rely on Kafka Connect’s offset information when restarting. Since the source system only tracks commitments on a per-request basis, the ODP source connector may extract source data multiple times. However, it applies a filtering mechanism using offsets to eliminate duplicate data before passing it to the Kafka Connect producers.

Enabling Exactly-Once in your Kafka cluster

To enable exactly-once support for source connectors in your Kafka cluster, you will need to update the worker-level configuration property exactly.once.source.support to either preparing or enabled.

When deploying a source connector, you can configure the exactly.once.support property to make exactly-once delivery either requested (default) or required. A minimal sketch of both configuration levels follows the list below.

  • required: the connector will undergo a preflight check to ensure it can provide exactly-once delivery. If the connector or the worker cannot support exactly-once delivery, creation or validation requests will fail.

  • requested: the connector will not undergo a preflight check for exactly-once delivery.
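
A minimal sketch of both configuration levels, using the standard Kafka Connect properties described in KIP-618:

# worker level (e.g., connect-distributed.properties)
exactly.once.source.support = enabled

# connector level
exactly.once.support = required
transaction.boundary = poll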

Error tolerance

Kafka’s errors.tolerance is not applied to the internal logic of a Kafka Connect source connector, therefore the ODP Source Connector provides its own configuration errors.tolerance to define the behaviour of error handling:

  • none is the default value and signals that any error will result in an immediate connector task failure.
  • skiprow changes the behavior to skip over problematic records.
  • initializefield changes the behavior to initialize problematic field values.
  • deadletterqueue will send the erroneous datasets to a DLQ. No delivery semantic guarantees are provided for DLQ records.

This error handling behaviour does not cover all logical areas of the connector and only applies to issues in data type and value conversion from SAP® to Kafka.

In addition, one must define a regular expression in errors.tolerance.row-regexp to select the specific datasets to which an error tolerance behavior is applied. See the Headers section for more details about unique identifiers.
The optional configuration errors.tolerance.fields can be used to define a list of field names to which an error tolerance behavior is applied. If a problematic field is not part of the list, the connector falls back to none.
Another way to deal with errors in values of type date, time, or numeric character is the configuration option sap.odp.numeric.mapping.
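
A sketch of an error-tolerance configuration that skips problematic records; the regular expression and field names are placeholders:

# connector-internal error tolerance (not the Kafka Connect framework property)
errors.tolerance = skiprow
# apply the tolerance only to records whose unique identifier matches this pattern (placeholder)
errors.tolerance.row-regexp = 20240101.*
# restrict the tolerance to these fields; other problematic fields fall back to none (placeholders)
errors.tolerance.fields = AMOUNT, QUANTITY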

Graceful backoff

If there are any connection or communication issues with the configured SAP® system, the connector will employ a retry backoff strategy. The maximum number of retry attempts can be configured using the sap.odp.max.retries property. After each failed attempt, the connector will pause execution for a random number of milliseconds between the values set in sap.odp.min.retry.backoff.ms and sap.odp.max.retry.backoff.ms configuration properties.

Any exceptions for connection attempts and communication with SAP® are assigned an internal exception group. The list of exception groups for which the backoff retry strategy is applied can be configured using property sap.odp.retry.exception.groups. A complete list of exception groups can be found in the JCo JavaDocs of com.sap.conn.jco.JCoException. BAPI return messages of any message class can be included, too.
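
An illustrative retry configuration; the values and exception group names are examples only:

# retry up to 5 times
sap.odp.max.retries = 5
# wait between 3 and 30 seconds before each retry attempt
sap.odp.min.retry.backoff.ms = 3000
sap.odp.max.retry.backoff.ms = 30000
# exception groups that trigger a retry (group names are examples)
sap.odp.retry.exception.groups = JCO_ERROR_COMMUNICATION, JCO_ERROR_SYSTEM_FAILURE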

ODP serialization format

The ODP API and the connector support various serialization formats for transmitting the data stored in the ODQ via RFC. Choosing an appropriate serialization format is essential, as it affects the load on the SAP® system during translation into the target format, the volume of data transferred over the wire, and overall replication performance. The default format for storing delta initialization data in the ODQ is gzipped basXML, and we recommend setting this serialization format using the sap.odp.serialization property.
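
For example (the exact value spelling is an assumption; see the configuration reference):

# store and transfer data as gzipped basXML (value spelling is an assumption)
sap.odp.serialization = basXML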

Partitioning

Each SourceRecord is associated with a partition that corresponds to the context name and the identifier or name of an ODP. Together, these serve as the primary key of an ODP source instance as defined by SAP®.

JMX metrics

The ODP source connector supports all the connector and task metrics provided by Kafka Connect through Java Management Extensions (JMX). In addition, the ODP source connector provides extra JMX metrics for accessing state managed by the connector.

MBean: org.init.ohja.kafka.connect:type=odp-source-task-metrics,connector=([-.\w]+),task=([\d]+)

Metric Explanation
task-phase Phase in which the connector is currently operating. The connector can either be in fill cache, fetch, processing queue or waiting phase.
task-active-threads Count of active threads that are in use by the connector for processing data.
thread-retries Count of retries performed in a connector thread that is in retrying state.
thread-next-retry Timestamp for next retry performed in a connector thread that is in retrying state.
jco-destination-pool-capacity Maximum number of connections that will be held open for the JCo destination instance.
jco-destination-peak-limit Maximum number of connections that can be used simultaneously with the JCo destination instance.
jco-destination-pooled-connections Count of connections that are currently held open for the JCo destination instance.
jco-destination-used-connections Count of connections that are currently being used with the JCo destination instance.
${configGroup}-odp-context ODP source context of ODP configured in the configuration group.
${configGroup}-odp-name ODP source name of ODP configured in the configuration group.
${configGroup}-fetch-completed Package ID of the latest successfully fetched package for ODP configured in the configuration group.
${configGroup}-prefetch-completed Package ID of latest prefetched package for ODP configured in the configuration group.
${configGroup}-committed-record Current committed Kafka offset for ODP configured in the configuration group.
${configGroup}-recent-request-update Timestamp for last time of extraction for ODP configured in the configuration group.
${configGroup}-request-state The request mode for the ODP extraction in the configuration group. The request mode can either be rda, full or delta.
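
To access these MBeans remotely, JMX must be enabled on the Kafka Connect worker JVM, e.g., via the standard Kafka environment variable (port and security settings are examples; do not disable authentication in production):

export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"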

Supported data types

SAP® JCo defines internal data types in com.sap.conn.jco.JCoMetaData, each corresponding to one of the built-in types of SAP® ABAP®. The ODP source connector uses SAP® Operational Data Provisioning, which only supports flat structured tables containing the following SAP® basic data types and their mappings to Kafka Connect org.apache.kafka.connect.data schema types:

JCo Kafka Connect Schema Type Java data type
TYPE_CHAR STRING java.lang.String
TYPE_DECF16 Decimal java.math.BigDecimal
TYPE_DECF34 Decimal java.math.BigDecimal
TYPE_DATE Date java.util.Date
TYPE_BCD Decimal java.math.BigDecimal
TYPE_FLOAT FLOAT64 java.lang.Double
TYPE_INT1 INT16 java.lang.Short
TYPE_INT2 INT16 java.lang.Short
TYPE_INT INT32 java.lang.Integer
TYPE_INT8 INT64 java.lang.Long
TYPE_BYTE BYTES java.nio.ByteBuffer
TYPE_NUM STRING java.lang.String
TYPE_XSTRING STRING java.lang.String
TYPE_TIME Time java.util.Date
TYPE_STRING STRING java.lang.String
TYPE_UTCLONG INT64 java.lang.Long
TYPE_UTCMINUTE INT64 java.lang.Long
TYPE_UTCSECOND INT64 java.lang.Long
TYPE_DTDAY INT32 java.lang.Integer
TYPE_DTWEEK INT32 java.lang.Integer
TYPE_DTMONTH INT32 java.lang.Integer
TYPE_TSECOND INT32 java.lang.Integer
TYPE_TMINUTE INT16 java.lang.Short
TYPE_CDAY INT32 java.lang.Integer

Supported features

Data serialization

The ODP connector supports the Confluent JSON Converter as well as the AVRO Converter. Using Avro for data serialization requires the ODP source connector to translate field names provided by an ODP source into valid Avro names by replacing illegal characters with "_".
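
A typical converter configuration using Avro with the Confluent Schema Registry (the registry URL is a placeholder):

key.converter = io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url = http://localhost:8081
value.converter = io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url = http://localhost:8081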

SMT

The use of Single Message Transforms in a source connector allows for each record to be passed through one or a chain of several simple transformations before writing to the Kafka topic. The ODP source connector supports SMT and has been successfully tested with a concatenation of two SMTs.
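
As an illustration, a chain of two standard Kafka Connect SMTs that first inserts the topic name into each record value and then casts a field to string (the field names are placeholders):

transforms = addTopic, castAmount
transforms.addTopic.type = org.apache.kafka.connect.transforms.InsertField$Value
transforms.addTopic.topic.field = source_topic
transforms.castAmount.type = org.apache.kafka.connect.transforms.Cast$Value
transforms.castAmount.spec = AMOUNT:string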

Error handling

The connector applies different kinds of validation and error handling mechanisms, such as configuration validation, offset recovery, upstream connection tests, connection retries with graceful backoff, and repeatable source request extractions.

Single configuration parameter validations extending the Validator class will throw exceptions of type ConfigException if invalid configuration values are detected. Additionally, the connector overrides the validate method to validate interdependent configuration parameters and adds error messages to the class ConfigValue in case of invalid parameter values. Parameters containing invalid values are highlighted in red in the Confluent Control Center.

The connector categorizes known exceptions as ConnectException, allowing them to be handled appropriately by Kafka Connect. Any errors or warnings are logged using SLF4J, as described in the Logging section.

Logging

The connector makes use of SLF4J for logging integration. The logger name is org.init.ohja.kafka.connect.odp.source, and it can be configured, e.g., in the log4j configuration properties of the Confluent Platform.

SAP® JCo includes a logger called com.sap.conn.jco which can only be used with log4j. In addition to setting the logging level for the JCo logger, one can use the configuration property jco.trace_level to fine-tune the level of logging.

The connector provides additional log location information ohja.location using MDC (mapped diagnostic context). The log location contains the name of the nearest enclosing definition of val, class, trait, object or package and the line number. Example Log4j 1.x appender:

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %X{connector.context}%m %X{ohja.location}%n
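
Log levels can be adjusted per logger name, e.g.:

log4j.logger.org.init.ohja.kafka.connect.odp.source=DEBUG
log4j.logger.com.sap.conn.jco=ERROR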

Field projection

Each ODP source defines a set of fields that can be extracted. The connector configuration enables the selection of a subset of fields that will be extracted to Kafka. It is possible to modify the set of fields to be extracted at any time because the source system stores the delta data for all available fields with each request, regardless of the configuration settings.

Extraction modes and delta initialization

The ODP connector supports several modes of extraction and initializations:

  1. Delta initialization without data transfer, also known as delta simulation: When a subscriber starts extracting from an ODP source for the first time, only new delta records will be extracted with the following requests. Data that already exists up to the point in time when extraction starts will not be extracted.

  2. Delta initialization with data transfer: In contrast to 1., the initial set of data up to the point in time when the first extraction request was issued will be extracted all at once.

  3. Full initialization with applied filters: In contrast to 2., full initialization allows for the application of custom filter conditions to the data extracted during the first initialization request, thereby reducing the amount of data extracted during the initialization process. If the full data extraction process fails or is not completed before the connector shuts down, the connector will automatically create a new full initialization request upon restart. However, this is likely to break exactly-once semantics, so messages from the failed initialization must be deleted manually if exactly-once is mandatory. It is currently not possible to recover failed full extraction requests with SAP® ODP.

  4. Full extraction with applied filters: This setting can be employed for ODP sources that do not support delta extractions. In this scenario, a full/snapshot request will be issued to the source system during each cycle, which can be defined either by the execution period configuration setting or by using a cron-based schedule. Keep in mind that multiple full extraction requests for the same source will create duplicates on the source system.

Full extraction selection range

If the extraction mode is set to full initialization or full extraction, a filter on the data being extracted can be applied by defining so-called selection ranges. A selection range consists of:
- a field name from the list of fields supplied by the respective ODP
- a sign indicating whether the selected field values should be included or excluded
- an option defining the comparison operation being applied, e.g., equal, not equal, between (interval), etc.
- a low value used if a single-value operator like equal was chosen, or defining the lower bound for interval operations like between
- a high value defining the upper bound of intervals

Multiple selection ranges will be combined by a logical AND. A configuration sketch follows.
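
The following sketch shows how such a selection range could be configured; the key names are hypothetical, see the configuration reference (doc/source/configuration.html) for the authoritative spelling. Sign I (include) and option BT (between) follow the usual SAP® range conventions:

# hypothetical key names — check the configuration reference for the exact spelling
sap.odp#00.selection#00.field = COMP_CODE
sap.odp#00.selection#00.sign = I
sap.odp#00.selection#00.option = BT
sap.odp#00.selection#00.low = 1000
sap.odp#00.selection#00.high = 1999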

Delta selection range

The delta selection range conditions are defined the same way as the full initialization range selections. In contrast to full extractions, the delta selection is only applied to the delta initialization requests and to all delta requests following any initialization. So, if you do not need to apply separate filter conditions for the initialization and the following delta requests, we recommend using delta initialization and delta range selections.

Realtime enabled sources

ODP has built-in support for realtime-enabled sources, and so does the ODP source connector. When using a realtime-enabled ODP delta source, the initialization process for the ODP connector is the same as for non-realtime sources.
After initialization is completed, in contrast to non-realtime sources, the ODP connector will create a real-time request in the ODQ, which remains open even if the connector goes down. This enables various source types and the ODP daemon to actively push data in real time to the ODQ. As a result, the latency of data retrieval can be reduced compared to non-realtime requests, since obtaining and writing data to the ODQ happens before the actual extraction process initiated by the connector takes place.
In certain special cases, such as when recovering from a failed or restarted connector instance, the connector will actively close existing open real-time requests and create a new one. Moreover, the configuration options realtime.packages and realtime.timelimit can be used to close a real-time request after reaching a specific number of extracted data packages or the configured time limit. Use the configuration option realtime.enable to disable creating real-time requests for realtime-enabled data sources.

Scheduling

The connector is meant to run continuously according to streaming principles. Nevertheless, a clock period can be set per ODP source using the configuration property exec-period; a data extraction will then take place only once per clock period interval.
As an alternative, the configuration properties cron.expression and cron.security.time.sec can be used to define a cron-like schedule, e.g., if a source system specifies strict timeframes for extraction processes.
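
An illustrative scheduling configuration; the per-source prefix and the time unit of exec-period are assumptions, and the cron expression is a placeholder:

# extract at most once per period for source group 00 (unit assumed to be seconds)
sap.odp#00.exec-period = 3600
# alternatively, a cron-like schedule (placeholder expression)
sap.odp#00.cron.expression = 0 0 * * *
sap.odp#00.cron.security.time.sec = 60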

ODP test & monitoring in SAP®

ODPs utilize an Operational Data Queue (ODQ) system for delta management, and the ODQ monitor can be accessed using the SAP® transaction code ODQMON. The ODQ monitor provides information on all ODP queues in use, the subscribers for each ODP, a history of requests, the ability to inspect data stored in an ODQ, and more. For testing ODPs on the SAP® side, the RODPS_REPL_TEST report provides a feature-rich application. Further information on ODPs and ODQs can be found in the documentation provided by SAP®. To manually close requests in failure or success state, and to eliminate zombie requests in a running connector instance, the RODPS_REPL_ODP_CLOSE function module can be used. When using Change Data Capture enabled ABAP CDS Views, transaction code DHCDCMON enables access to the SAP® CDC Monitor.

Delta descriptive fields

ODP sources provide two common fields describing the delta type (CRUD) of a record: ODQ_CHANGEMODE and ODQ_ENTITYCNTR (structure ODQ_S_DELTA_APPEND).
The following table lists the possible field values and their meaning:

ODQ_CHANGEMODE ODQ_ENTITYCNTR Delta type Description
C 1 New Describes the state of an entity after creation.
Similar to an after delta record without a preceding before delta record.
U -1 Before Describes the state of an entity before a change.
Summable non-key fields will have a reversed sign
U 1 After Describes the new state of an entity after a change.
D -1 Reverse Type of a delete record with contents similar to the before record.
In contrast to the before record no subsequent after record will follow.
D 0 Delete Similar to a Kafka tombstone message. Only key fields may be specified.
U 0 Additive Describes the new state of an entity, but in contrast to an after image as difference to the previous state of the entity.
Logically, it aggregates a before and after record for summable non-key fields. The rest of the non-key fields are equal to an after record.

SAP® ODP suffixes

SAP® Operational Data Provisioning (ODP) is also used by other products, such as the Azure SAP® CDC connector, to extract data from SAP®. When ODP is used to extract data from, e.g., SAP® BW InfoProviders, ODP data sources receive a suffix describing the type of data:

Suffix Meaning
$F Transaction data/facts
$P time-independent master data/attributes
$Q time-dependent master data/attributes
$T texts
$H hierarchies
$M joined time-(in)dependent master data view

CLI tool

The kafka-connect-odp connector package contains a command line interface to validate connector properties, test the connection to an SAP® system, retrieve a list of available ODPs, query details of an ODP, and extract the schema of ODP sources.

Since the CLI is written in Scala, you can execute it with either Scala or Java. To run the CLI, you need to provide the following dependencies on the CLASSPATH:

  1. Scala runtime libraries
  2. kafka-clients
  3. kafka-connect-odp connector libraries
  4. SAP® Java Connector 3.1 SDK

Since a Java runtime is provided by the Confluent Platform and the Scala runtime libraries are part of the connector package, executing the CLI with Java does not require a Scala installation.

Java

java -cp <kafka-connect-odp>:<kafka-clients>:<sapjco3> org.init.ohja.kafka.connect.odp.source.ODPDetailsApp <command> <options>

The Confluent Platform provides the Kafka libraries in /usr/share/java/kafka/. When the ODP source connector package and sapjco3 are installed in the plugin path of Kafka Connect, the command could look like this:

java -cp \
<CONNECT_PLUGIN_PATH>/init-kafka-connect-odp-x.x.x/lib/*:\
/usr/share/java/kafka/* \
org.init.ohja.kafka.connect.odp.source.ODPDetailsApp \
<commands> <options>

Scala

If an appropriate version of Scala is installed, the scala command can be used. It already provides the necessary Scala runtime libraries.

scala -cp <kafka-connect-odp>:<kafka-clients>:<sapjco3> org.init.ohja.kafka.connect.odp.source.ODPDetailsApp <command> <options>

Output

The output will look like this:

usage:
    ODPDetailsApp <command> <options>

commands:
    ping
    list-odp
    odp-details -n <object name> -c <context name>
    extract-schema -n <object name> -c <context name>

mandatory options:
    -p <path to connector properties file>

Hint: Avro schemas may differ if the Single Message Transform(s) in the connector configuration are used.

Restrictions and pending features

  • The configuration option to generate tombstone messages based on ODQ_CHANGEMODE is currently unavailable. However, one can utilize Single Message Transforms to implement this functionality.
  • The assignment of fields of type currency amount and unit as part of the generated schema is currently not supported.
  • No support for nested hierarchy extractions (hierarchies can only be extracted as two-dimensional, flattened hierarchy structures).
  • The number of ODP sources in the evaluation version of the connector is restricted.
  • No support for WebSocket RFC.

Full enterprise support

Full enterprise support provides expert support from the developers of the connector at a service level agreement suitable for your needs, which may include:

  • 8/5 support
  • 60-minute response times depending on support plan
  • Full application lifecycle support from development to operations
  • Access to expert support engineers
  • Consultancy in case of individual SAP® integration requirements

Please contact connector-contact@init-software.de for more information.