No space left on device error when using S3 Connector on Strimzi

My configuration:

{
	"connector.class": "io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector",
	"key.converter.schemas.enable": "false",
	"flush.count": "100",
	"connect.s3.kcql": "INSERT INTO kafka-s3-sink:lenses SELECT * FROM topic",
	"partition.include.keys": "false",
	"topics": "topic",
	"tasks.max": "1",
	"value.converter.schemas.enable": "false",
	"name": "s3-sink-connector-test-lenses",
	"connect.s3.aws.region": "us-east-1",
	"value.converter": "org.apache.kafka.connect.storage.StringConverter",
	"key.converter": "org.apache.kafka.connect.storage.StringConverter"
}

My error message:

Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: org.apache.kafka.connect.errors.ConnectException: No space left on device (org.apache.kafka.connect.runtime.WorkerSinkTask) [task-thread-s3-sink-connector-test-lenses-0]
org.apache.kafka.connect.errors.ConnectException: org.apache.kafka.connect.errors.ConnectException: No space left on device
	at io.lenses.streamreactor.common.errors.ThrowErrorPolicy.handle(ErrorPolicy.scala:61)
	at io.lenses.streamreactor.common.errors.ErrorHandler.handleError(ErrorHandler.scala:81)
	at io.lenses.streamreactor.common.errors.ErrorHandler.handleTry(ErrorHandler.scala:60)
	at io.lenses.streamreactor.common.errors.ErrorHandler.handleTry$(ErrorHandler.scala:41)
	at io.lenses.streamreactor.connect.cloud.common.sink.CloudSinkTask.handleTry(CloudSinkTask.scala:61)
	at io.lenses.streamreactor.connect.cloud.common.sink.CloudSinkTask.put(CloudSinkTask.scala:136)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:593)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:342)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:242)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:211)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
	at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:236)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)

Kafka Connect version:
I am building a custom image on top of this Quay image.

I found the solution: when running Kafka Connect with the Strimzi operator, the default tmpDirSizeLimit is only 500Mi. The issue was resolved once I set tmpDirSizeLimit to 1Gi.
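
For anyone hitting the same thing: the setting lives under spec.template.pod in the KafkaConnect custom resource. A minimal sketch of the relevant part (the cluster name my-connect-cluster is a placeholder; the rest of your resource stays as-is). The likely cause is that the S3 sink stages output files on local disk before uploading, so the default /tmp size limit fills up:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ... existing Connect configuration (replicas, bootstrapServers, image, etc.) ...
  template:
    pod:
      # Raises the size limit of the emptyDir volume mounted at /tmp.
      # The S3 sink appears to stage files under /tmp before uploading,
      # so the default limit can fill up and trigger
      # "No space left on device".
      tmpDirSizeLimit: 1Gi

After applying the change, the operator rolls the Connect pods, and the failed task can then be restarted.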
