metric sender ERROR: "could not remove batch"
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Canonical Juju | Triaged | Low | Unassigned |
Bug Description
I see this across many units when running "juju debug-log --level INFO":
unit-containerd-1: 18:29:51 ERROR juju.worker.
Watching the directory, I do see files end up in /var/lib/
This was after deploying 'kubernetes-core' on LXD using juju 2.6.9 and upgrading to juju 2.7rc1; however, I don't think this is specific to those versions or charms.
tags: added: sts
Note that the same hash is often reported 2x, but not always. Also, this was with an HA controller, though the error messages are coming from the individual unit agents.
Maybe this is because of subordinates?
unit-flannel-1: 17:59:50 ERROR juju.worker.metrics.sender could not remove batch "2a1d8745-7fe7-47bf-8b8b-18c15ffa5913" from spool: remove /var/lib/juju/metricspool/2a1d8745-7fe7-47bf-8b8b-18c15ffa5913: no such file or directory
unit-containerd-1: 17:59:50 ERROR juju.worker.metrics.sender could not remove batch "2a1d8745-7fe7-47bf-8b8b-18c15ffa5913" from spool: remove /var/lib/juju/metricspool/2a1d8745-7fe7-47bf-8b8b-18c15ffa5913: no such file or directory
Note that those are two subordinates on the same machine. Perhaps the primary unit agent removes the spool file first, and the two subordinate unit agents then complain when they can no longer find it.