lxc-start enters uninterruptible sleep
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Linux | Confirmed | High | Unassigned | |
| linux (Ubuntu) | Incomplete | High | Unassigned | |
| lxc (Ubuntu) | Invalid | High | Unassigned | |
Bug Description
After running and terminating around 6000 containers overnight, something happened on my box that is affecting every new LXC container I try to start. The DEBUG log file looks like:
lxc-start 1372615570.399 WARN lxc_start - inherited fd 9
lxc-start 1372615570.399 INFO lxc_apparmor - aa_enabled set to 1
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/302' (5/6)
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/303' (7/8)
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/304' (10/11)
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/305' (12/13)
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/306' (14/15)
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/307' (16/17)
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/308' (18/19)
lxc-start 1372615570.399 DEBUG lxc_conf - allocated pty '/dev/pts/309' (20/21)
lxc-start 1372615570.399 INFO lxc_conf - tty's configured
lxc-start 1372615570.399 DEBUG lxc_start - sigchild handler set
lxc-start 1372615570.399 INFO lxc_start - 'vm-59' is initialized
lxc-start 1372615570.404 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp
lxc-start 1372615570.404 INFO lxc_start - stored saved_nic #0 idx 12392 name vethP59
lxc-start 1372615570.404 INFO lxc_conf - opened /home/x/
It stops there. In 'ps faux', it looks like:
root 31621 0.0 0.0 25572 1272 ? D 14:06 0:00 \_ lxc-start -n vm-59 -f /tmp/tmp.fG6T6ERZpS -l DEBUG -o /home/x/
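For anyone hitting the same symptom, here is a minimal sketch of how to spot processes stuck in uninterruptible sleep; the `find_dstate` helper name is hypothetical, it just filters `ps`-style output on the STAT column:

```shell
#!/bin/sh
# find_dstate: print the PID of every input line whose STAT column
# begins with D (uninterruptible sleep). Expects "PID STAT COMMAND"
# columns, e.g. from `ps -eo pid,stat,comm`.
find_dstate() {
  awk '$2 ~ /^D/ { print $1 }'
}

# Example: feed it ps-style output; only the D-state PID is printed.
printf '31621 D lxc-start\n  100 Ss bash\n' | find_dstate
```

Once you have the PID, `cat /proc/<pid>/stack` (as root) shows where in the kernel the process is blocked, which is useful information to attach to a report like this.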
For comparison, on a successful run (before the server got into this state), the log continues past the point where it now hangs with:
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/' (rootfs)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/sys' (sysfs)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/proc' (proc)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/dev' (devtmpfs)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/dev/pts' (devpts)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/run' (tmpfs)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/' (btrfs)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/sys/fs/cgroup' (tmpfs)
lxc-start 1372394092.208 DEBUG lxc_cgroup - checking '/sys/fs/
lxc-start 1372394092.208 INFO lxc_cgroup - [1] found cgroup mounted at '/sys/fs/
lxc-start 1372394092.208 DEBUG lxc_cgroup - get_init_cgroup: found init cgroup for subsys (null) at /
It looks like a resource leak, but I'm not yet sure which resource is leaking.
If it matters, I SIGKILL my lxc-start processes instead of using lxc-stop. Could that have any negative implications?
Oh, and cgroups had almost 6000 entries for VMs that are long dead (I'm guessing it's due to my SIGKILL). I've run cgclear and my /sys/fs/cgroup/*/ dirs are now totally empty, but the new containers still hang.
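For reference, clearing those stale entries by hand (instead of running cgclear) amounts to removing the empty leaf directories, deepest first. A sketch, with a hypothetical helper name; it assumes the dead containers' cgroups are empty, and `rmdir` fails harmlessly on non-empty cgroups so live containers are untouched:

```shell
#!/bin/sh
# prune_empty_cgroups ROOT: remove empty directories under ROOT,
# deepest first (-depth), which is the same effect cgclear had here.
# rmdir errors on non-empty directories are suppressed.
prune_empty_cgroups() {
  find "$1" -depth -mindepth 1 -type d -exec rmdir {} \; 2>/dev/null
  return 0
}

# e.g. prune_empty_cgroups /sys/fs/cgroup/cpu/lxc  (path is an assumption)
```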
---
Architecture: amd64
DistroRelease: Ubuntu 13.04
MarkForUpload: True
Package: lxc 0.9.0-0ubuntu3.3
PackageArchitec
ProcEnviron:
TERM=screen
PATH=(custom, no user)
LANG=en_US.UTF-8
SHELL=/bin/bash
Uname: Linux 3.8.0-25-generic x86_64
UserGroups:
Changed in linux:
importance: Undecided → High
Changed in linux (Ubuntu):
importance: Undecided → High
tags: added: kernel-da-key raring
tags: added: apport-collected
description: updated
Changed in lxc (Ubuntu):
status: Incomplete → Confirmed
Changed in linux (Ubuntu):
status: New → Confirmed
tags: added: kernel-bug-exists-upstream
Changed in linux (Ubuntu):
status: Confirmed → Incomplete
Also, in dmesg:
[54545.873460] unregister_netdevice: waiting for lo to become free. Usage count = 1
[54556.103535] unregister_netdevice: waiting for lo to become free. Usage count = 1
[54566.333609] unregister_netdevice: waiting for lo to become free. Usage count = 1
[54576.563664] unregister_netdevice: waiting for lo to become free. Usage count = 1
[54586.793749] unregister_netdevice: waiting for lo to become free. Usage count = 1
I've modified my code to use lxc-stop, as the cgroups do indeed leak otherwise. What's strange is that the hang kept happening even after clearing the cgroups, so perhaps something else is leaking too.
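The dmesg symptom above is easy to watch for before launching more containers. A minimal check; the helper name and log path are assumptions, and the same pattern works on `dmesg` output:

```shell
#!/bin/sh
# stuck_netdev_count FILE: count "unregister_netdevice: waiting for
# ... to become free" lines in a kernel log file (e.g.
# /var/log/kern.log). A non-zero count means the kernel is already
# stuck tearing down a container's network namespace, and new
# containers will likely hang in D state as described above.
stuck_netdev_count() {
  grep -c 'unregister_netdevice: waiting for .* to become free' "$1"
}
```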