S3 sink fails with "Too many index files have accumulated (6 out of max 5)"

I am running my S3 sink to back up Kafka topics, but the connector fails with this error:

ERROR [backup-all-topics-t1|task-0] [backup-all-topics-t1] Too many index files have accumulated (6 out of max 5) (io.lenses.streamreactor.connect.aws.s3.sink.seek.IndexManager:75)

What can I do to address the error?

To ensure that data is processed with exactly-once semantics, the sink stores data in temporary files before moving it to the target bucket. In addition, it writes index files to track the current offset of each topic-partition. These tracking files are meant to be deleted once they are no longer needed; if the connector cannot delete them, they accumulate until the default limit of 5 is reached, which triggers the error.
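You can confirm that index files are piling up by listing them in the bucket. Here is a minimal sketch using boto3; the bucket name is a placeholder, and the ".indexes/" prefix is an assumption about where the sink writes its tracking files, so check your connector's configuration for the actual location:

import boto3

# Placeholders: replace with your target bucket, and adjust the prefix
# if your connector stores its index files elsewhere (".indexes/" is assumed).
BUCKET = "your-bucket-name"
INDEX_PREFIX = ".indexes/"

s3 = boto3.client("s3")

# List the accumulated index files; paginate in case there are many.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=INDEX_PREFIX):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["LastModified"])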

To resolve this issue, you must grant delete permissions to the connector's IAM role. Here is an example IAM policy that provides the necessary permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3Delete",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::your-bucket-name/*"
      ]
    }
  ]
}

Make sure to replace "your-bucket-name" with the actual name of your target S3 bucket.

By granting the delete permissions outlined in the IAM policy, you ensure that the connector role has access to delete objects and object versions in the specified S3 bucket.
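If you prefer to attach the policy programmatically, here is a minimal sketch using boto3's put_role_policy; the role name and policy name are placeholders, so substitute the role your connector actually assumes:

import json
import boto3

iam = boto3.client("iam")

# Placeholders: the role your connector runs under and an inline policy name.
ROLE_NAME = "s3-sink-connector-role"
POLICY_NAME = "AllowS3Delete"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3Delete",
            "Effect": "Allow",
            "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
            "Resource": ["arn:aws:s3:::your-bucket-name/*"],
        }
    ],
}

# Attach the delete policy inline on the connector's role.
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName=POLICY_NAME,
    PolicyDocument=json.dumps(policy),
)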


Does the connect.s3.seek.max.files setting have any relation to tasks.max? I'm still seeing this error after updating the permissions as explained here.