Troubleshooting and FAQs
General Questions
Is the pricing model fixed?
No. Our standard pricing can always be adapted to special customer requirements.
What is the procedure for new features or bug fix requests?
The paid license includes an annual maintenance fee that covers patches for bugs and vulnerabilities.
To deliver patches quickly, customers are notified when a new version is released and can download it directly from the INIT repository.
What is the ETA for new features or bug fix requests?
The ETA for bug fixes is usually a couple of days.
What kind of service is included in the annual maintenance fee?
The annual maintenance fee includes all patches provided to fix bugs, security issues, transitive library updates, etc.
It does not include, for example, major enhancements to the functionality of the software. Such enhancements may be made available through minor and major release updates, comparable to major release upgrades of other software systems, and we reserve the right to charge an upgrade fee for them.
Where can I find more documentation about the INIT Connectors?
For an overview of the INIT connector portfolio, visit our Kafka Connectivity - INIT product page. It also contains links to more detailed documentation.
Connector Features
Can the connector be used with different converters (like Avro, JSON or Protobuf), SMTs, JMX-based monitoring, the REST API, and REST-based UIs?
Yes. The connector is implemented against the Kafka Connect API and is therefore fully integrated into Kafka Connect with all of its functional possibilities.
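As an illustration, a minimal connector configuration combining the connector with the Avro converter and an SMT could look like the following sketch. The connector class name below is a placeholder to be taken from the connector package; the Avro converter and the RegexRouter SMT are standard Kafka Connect components used here purely as examples.
name=odp-source-avro-example
# Placeholder: use the exact connector class documented in the connector package
connector.class=<ODP source connector class>
tasks.max=1
# Standard Confluent Avro converter for record values
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
# Example SMT: prefix all target topic names with "sap."
transforms=addPrefix
transforms.addPrefix.type=org.apache.kafka.connect.transforms.RegexRouter
transforms.addPrefix.regex=(.*)
transforms.addPrefix.replacement=sap.$1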
Do Kafka Connectors support PUSH?
By design, the Kafka Connect API does not provide PUSH for connectors. See KIP-26 - Adding the Kafka Connect framework for data import/export.
Does Confluent support running partner connectors in the Confluent Managed Cloud?
Yes, Confluent supports running partner connectors in the managed cloud, but there are a few limitations to be aware of. For more details refer to Custom Connectors for Confluent Cloud - Limitations and Support.
Does the connector depend on installing custom SAP® enhancements using transport files?
No. The connector exclusively uses the standard ODP-API V2.
Troubleshooting
How do I find out the context name for ODPs based on SLT?
The context is expected to be SLT~
You can also use report RODPS_REPL_TEST and the F4 value help for the context input field, or use the CLI tool shipped with the connector package.
How can CDS views be enabled for extraction using ODP?
In the SCN you can find a detailed article about how to enable CDS views for data extraction via ODP. You can find standard CDS views enabled for data extraction in the SAP API Business Hub.
What should I do if the connector throws exceptions like Unparseable date: "0000-00-00" for fields of type date, time or numeric varchar?
These data type conversion errors are caused by invalid dates stored in SAP®. In such situations we generally suggest fixing the incorrect values in the source system, because the connector cannot guess which values besides the standard ones should be treated, for instance, as a non-existing date. Other invalid dates (e.g., '3333-00-00') can also be stored in SAP®, since the underlying database type is nvarchar, but they cannot be interpreted as valid dates by the connector.
The issue with ODP here is that it allows incorrect values to be stored in the ODQ and lacks an easy way to fix incorrect values already stored in the ODQ for external consumption. There is a user-exit you can use before the data gets extracted from the ODQ, but this is inadequate if you need to react to incorrect values in a flexible way, e.g., on productive systems where every change would need to go through SAP® transport management.
Therefore, we implemented error tolerance strategies to compensate for this missing functionality in SAP®, which enable you to skip or initialize invalid values in a flexible way. We suggest using this functionality restrictively and only for data sets that have been identified as incorrect in advance. You can then, for example, skip the corresponding dataset, fix the values in the source system, and trigger a new delivery of the dataset.
Another option is to convert date, time and numeric character fields to plain strings using sap.odp.numeric.mapping=string. However, we recommend reconsidering this procedure carefully, since maintaining adequate data types is of high importance.
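For example, the two options above translate into settings such as the following (illustrative values; the complete error tolerance configuration is shown further down in this section):
# Skip datasets whose date, time or numeric values cannot be converted
sap.odp#00.errors.tolerance=skiprow
# Alternatively, extract date, time and numeric character fields as plain strings
sap.odp.numeric.mapping=string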
Why are decimal values displayed in an unreadable format?
The problem is that SAP's Decimals type is represented in Kafka Connect as a Java BigDecimal, which has a byte representation, and the logical Decimal type in Avro is likewise encoded as primitive bytes. The ODP Connector can be configured to transform decimal values from bytes to either long or double, depending on the scale. Note that the converted field values may no longer be exact due to the data type conversion.
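As a rough sketch, and assuming a decimal mapping option analogous to sap.odp.numeric.mapping, this could be configured as shown below. Note that the property name sap.odp.decimal.mapping and the value primitive are illustrative placeholders; please check the connector's configuration reference for the exact name and allowed values.
# Hypothetical property: map Decimal (bytes) values to long or double depending on the scale
sap.odp.decimal.mapping=primitive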
How can we test data extractions from BW ODP sources?
You can use the SAP report RODPS_REPL_TEST to test data extraction from BW ODP sources on the SAP side, before using the ODP Source Connector in Kafka Connect.
For more information: SAP Support Wiki
How can we monitor ODP extractions?
You can use the SAP transaction ODQMON to monitor ODP extractions. For more information: SAP Support Wiki
How can we handle records with incorrect date or timestamp formats in ODP?
The data type conversion errors in the ODP Connector are often caused by invalid dates or timestamps stored in SAP, which cannot be interpreted as valid dates later on.
(See Validity of Date Fields and Time Fields - ABAP Keyword Documentation)
SAP allows invalid values to be stored in the ODQ and does not provide an easy way to fix these values for external consumption. A user-exit can be used, but it may not be flexible enough for productive systems, where a change needs to go through SAP transport management.
To overcome this issue, new configuration properties have been added to the ODP Source Connector that enable users to skip or initialize invalid values, or to send the corresponding datasets to a dead letter queue (DLQ), in a flexible way.
We suggest using this functionality restrictively and only for data sets that have been identified as incorrect in advance. The corresponding dataset can then be skipped, the values can be fixed in the source system, and a new delivery of the dataset can be triggered.
# Optional: error tolerance behavior in case of value conversion errors (none, skiprow, initializefield or deadletterqueue)
sap.odp#00.errors.tolerance=skiprow
# Optional: regular expression for selecting datasets to which the error strategy (!= none) is to be applied
sap.odp#00.errors.tolerance.row-regexp=.*
# Optional: fields to which the error strategy (!= none) is to be applied (empty means all)
sap.odp#00.errors.tolerance.fields=CALDAY
The ODP Source Connector logs messages containing unique identifiers of the dataset and the field that contain invalid values. These identifiers can be used in the row-regexp and field list settings to specify exactly which datasets and fields your selected error strategy is applied to.
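For instance, assuming the connector log reported a dataset identifier containing the document number 4711 (an illustrative value) and an invalid CALDAY field, the error strategy could be narrowed down as follows:
# Apply the skiprow strategy only to datasets whose identifier matches the pattern
sap.odp#00.errors.tolerance=skiprow
sap.odp#00.errors.tolerance.row-regexp=.*4711.*
# Restrict the strategy to the affected field
sap.odp#00.errors.tolerance.fields=CALDAY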
Additionally, the connector provides meta information about the data extraction in the Kafka message headers by default.
You can configure additional headers using the setting sap.odp.headers.enable=1.
To map the SAP ABAP types Date, Time, and Numeric Text to String, you can use the setting sap.odp.numeric.mapping=string.
The connector is running and real-time requests have been created on the SAP side, but no data is ingested into Kafka.
This behaviour can have various causes. First, check the Kafka Connect logs and verify that the ODP daemon has been started.
See SAP Note 2256659 - ODQ: Real-time daemon not started (authorization missing)
Additional Resources
INIT
- INIT Kafka Connectivity
- INIT Confluent Partner page
- One Pager: Evaluation of the integration of SAP NetWeaver-based systems into Apache Kafka