Community Packages


FS#65032 - [docker] (overlay2) mounts not being unmounted when stopping docker

Attached to Project: Community Packages
Opened by Lukas (luman) - Friday, 03 January 2020, 16:40 GMT
Last edited by Sébastien Luttringer (seblu) - Wednesday, 20 May 2020, 14:58 GMT
Task Type: Bug Report
Category: Packages
Status: Closed
Assigned To: Lukas Fleischer (lfleischer), Sébastien Luttringer (seblu)
Architecture: x86_64
Severity: Low
Priority: Normal
Reported Version:
Due in Version: Undecided
Due Date: Undecided
Percent Complete: 100%
Votes: 0
Private: No

Details

Description:
I am running a server with multiple docker containers deployed via docker-compose. After some days/weeks/months, I want to reboot the server. Before rebooting, I try to cleanly stop and unmount everything. After stopping docker, some overlay2 mountpoints are left over, which prevents the underlying filesystem from being unmounted and eventually makes the whole server hang on shutdown.

How can I figure out where the issue is and/or whether this is a bug?

Additional info:
* package version(s)
community/docker 1:19.03.5-1 [installed]


Steps to reproduce:
Not sure if this helps, because it does not seem to be reliably reproducible:

1) boot server
2) decrypt volumes
3) mount fs
4) start docker
5) start containers
6) wait
7) stop docker
8) mount | grep overlay2


umount /mnt/data fails because there are leftover mounts in /mnt/data/docker.
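To see which mounts are still holding the filesystem, /proc/self/mounts can be filtered directly instead of eyeballing `mount` output. A sketch (the /mnt/data path comes from this report; the awk filter is my addition):

```shell
# Print the mount point and filesystem type of everything still mounted
# at or below /mnt/data (the data volume from this report).
awk '$2 ~ "^/mnt/data(/|$)" { print $2, $3 }' /proc/self/mounts
```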

workaround:

for m in $(mount | awk '/overlay/ { print $3 }'); do umount "$m"; done
for m in $(mount | awk '/nsfs/ { print $3 }'); do umount "$m"; done
for m in $(mount | awk '/shm/ { print $3 }'); do umount "$m"; done

followed by:

umount /mnt/data
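The three loops above can be folded into one pass. The sketch below (my consolidation, not from the original report) reads /proc/self/mounts instead of parsing `mount` output, quotes the mount points, unmounts children before their parents, and defaults to a dry run:

```shell
#!/bin/sh
# Collect leftover docker mount points (overlay filesystems, nsfs namespace
# mounts, and the containers' /dev/shm tmpfs) and print them deepest-first,
# so nested mounts are released before the mounts containing them.
leftover_docker_mounts() {
    awk '$3 == "overlay" || $3 == "nsfs" || ($1 == "shm" && $3 == "tmpfs") { print $2 }' \
        "${1:-/proc/self/mounts}" | sort -r
}

# Unmounting requires root. Set DOIT=1 to actually unmount; by default the
# commands are only printed, which is the safer mode for a sketch like this.
leftover_docker_mounts | while IFS= read -r mnt; do
    if [ "${DOIT:-0}" = 1 ]; then
        umount "$mnt" || echo "could not unmount $mnt" >&2
    else
        printf 'umount %s\n' "$mnt"
    fi
done
```

Compared to the original loops, matching on the filesystem-type column avoids false positives such as a mount point whose path merely contains the string "shm".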

Closed by  Sébastien Luttringer (seblu)
Wednesday, 20 May 2020, 14:58 GMT
Reason for closing:  Not a bug
Comment by Lukas (luman) - Friday, 03 January 2020, 16:43 GMT
Comment by Santiago Torres (sangy) - Friday, 03 January 2020, 17:02 GMT
Can you share any relevant logs (e.g., dmesg) of this situation? It's hard to dig into it without being able to reproduce...
Comment by Lukas (luman) - Friday, 03 January 2020, 17:06 GMT
Yeah, sure. I will do this when the next reboot is scheduled and the problem reoccurs. What else, besides dmesg, could be helpful?
Comment by Santiago Torres (sangy) - Friday, 03 January 2020, 17:14 GMT
Probably the journalctl -u docker output :), ideally limited to the latest boot using the -b flag.
Comment by Lukas (luman) - Thursday, 23 January 2020, 13:49 GMT
Today, after stopping docker, some mounts were left over again. There are errors in the docker journal. Files attached.
Comment by Lukas (luman) - Wednesday, 20 May 2020, 09:14 GMT
  • Field changed: Percent Complete (100% → 0%)
I provided all information and did not get a response. Now the ticket is closed, because the maintainers are not responsive? What is the logic behind this?
Comment by Sébastien Luttringer (seblu) - Wednesday, 20 May 2020, 13:43 GMT
You reported this issue both here, downstream, where we package docker, and upstream, where docker is developed.
Issues and fixes are better worked through upstream, where the developers know best how the software works and can decide the best way to address them, even if ultimately the solution is to update the user manual.

An upstream maintainer was *very* responsive and answered you quickly, asking for more information.
That request was left unanswered.

I kept this report open to track the upstream issue and, if necessary, patch the package early if something emerged from it.
Since your last message here, four new minor versions of docker have been released and packaged.
As the issue was stalled upstream, I closed this as No Answer.
Comment by Santiago Torres (sangy) - Wednesday, 20 May 2020, 13:56 GMT
FWIW, from lines 515-527 of the attached log, it appears that it is probably your application within the container that is failing to shut down gracefully, so the container is force-killed. I wouldn't be surprised if that is what causes the inconsistent state (i.e., the leftover mounts), but that mishandling would be between upstream moby and the application running in the container, not in the way Arch packages docker.
Comment by Lukas (luman) - Wednesday, 20 May 2020, 14:19 GMT
Cool, thanks for the explanations. I was waiting for the issue to come up again, but it never did, so time flew by and I became unresponsive. Also, I had no idea that I was talking to the same people here and on GitHub, so my apologies. Next time I will try to open only one ticket, and hopefully it will be in the right place! :)

Let's close this for now. I also think it is quite likely that one of the applications misbehaved, not docker itself.

Thanks for helping!
