My S3 sink connector started successfully with the configuration below. The SinkRecordSendRate and SinkRecordReadRate metrics showed data flowing through the connector to the S3 destination, but when I checked S3 the data was not there. I expected to see a folder named after the topic being backed up.
connector.class = io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector
s3.region = us-east-1
key.converter.schemas.enable = false
connect.s3.kcql = INSERT INTO $BUCKET_NAME SELECT * FROM lensestestTopic STOREAS JSON
connect.s3.aws.region = us-east-1
schema.compatibility = NONE
tasks.max = 2
topics = lensestestTopic
schema.enabled = false
errors.log.enable = true
errors.tolerance = all
errors.log.include.messages = true
config.action.reload = restart
errors.deadletterqueue.topic.name = dlq_file_sink
errors.deadletterqueue.topic.replication.factor = 1
value.converter = org.apache.kafka.connect.storage.StringConverter
key.converter = org.apache.kafka.connect.storage.StringConverter
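In case it helps anyone reproduce this, the same settings can be sanity-checked against the connector plugin's config validation endpoint if a self-managed Connect worker is available (Python sketch; the worker address localhost:8083 and the trimmed-down config map are assumptions on my part, and MSK Connect itself does not expose this REST API):

import requests

# Mirror of the key settings listed above; $BUCKET_NAME is left as the
# placeholder used in this question.
config = {
    "connector.class": "io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector",
    "topics": "lensestestTopic",
    "connect.s3.kcql": "INSERT INTO $BUCKET_NAME SELECT * FROM lensestestTopic STOREAS JSON",
    "connect.s3.aws.region": "us-east-1",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
}

# PUT /connector-plugins/{connector-class}/config/validate is part of the
# standard Kafka Connect REST API.
resp = requests.put(
    "http://localhost:8083/connector-plugins/"
    "io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector/config/validate",
    json=config,
)
resp.raise_for_status()
result = resp.json()
print("validation errors:", result["error_count"])
for entry in result["configs"]:
    errors = entry["value"].get("errors", [])
    if errors:
        print(entry["value"]["name"], "->", errors)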
The service execution role had these permissions:
{
  "Action": [
    "s3:ListBucket",
    "s3:GetBucketLocation",
    "s3:DeleteObject",
    "s3:PutObject",
    "s3:GetObject",
    "s3:AbortMultipartUpload",
    "s3:ListMultipartUploadParts",
    "s3:ListBucketMultipartUploads"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::*",
    "arn:aws:s3:::*/*"
  ]
}
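Those permissions can be exercised directly with boto3 to rule out an access problem (a minimal sketch, assuming the same credentials the connector runs with; "my-backup-bucket" is a placeholder for the real bucket from the KCQL statement):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket, key = "my-backup-bucket", "connect-permission-check.txt"

# Each call below corresponds to one of the actions granted in the policy.
s3.put_object(Bucket=bucket, Key=key, Body=b"permission check")  # s3:PutObject
print(s3.get_object(Bucket=bucket, Key=key)["Body"].read())      # s3:GetObject
s3.delete_object(Bucket=bucket, Key=key)                         # s3:DeleteObject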
I did see the connector's three .indexes objects in S3, but that was all.
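A bucket-wide listing along these lines shows only those keys (boto3 sketch; again, the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
paginator = s3.get_paginator("list_objects_v2")
# Print every key in the bucket so nothing is hidden by a prefix filter.
for page in paginator.paginate(Bucket="my-backup-bucket"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])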
Is additional configuration required for data to be written to the topic-named folder mentioned above?
Where did the data end up in S3? There is no sign of any new data being written to the bucket.
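For reference, the SinkRecordSendRate and SinkRecordReadRate readings mentioned at the top came from CloudWatch, roughly like this (Python sketch; the AWS/KafkaConnect namespace and ConnectorName dimension are my reading of the MSK Connect metrics docs, and the connector name is a placeholder):

from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)
stats = cw.get_metric_statistics(
    Namespace="AWS/KafkaConnect",
    MetricName="SinkRecordSendRate",
    Dimensions=[{"Name": "ConnectorName", "Value": "my-s3-sink-connector"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
# Datapoints arrive unordered, so sort by timestamp before printing.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])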
Thanks in advance.