charmhelpers is_mounted() doesn't deal with root device check?

Bug #1350051 reported by David Britton
Affects: swift-storage (Juju Charms Collection)
Status: Fix Released
Importance: Undecided
Assigned to: Chris Glass

Bug Description

Full unit log: http://paste.ubuntu.com/7898450/

In this case, charmhelpers seemed to have problems detecting that /dev/sdb held the root filesystem (mounted via /dev/sdb1). Perhaps this is why the following blacklist is specified in swift_storage_utils.py:

    blacklist = ['sda', 'vda', 'cciss/c0d0']

It should probably be double-checked to make sure is_mounted() can deal with this situation.
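
For illustration only (this is not the charmhelpers is_mounted() implementation): a device-level check needs to treat a disk as in use when any of its partitions is mounted, e.g. /dev/sdb must be skipped because /dev/sdb1 holds the root filesystem. A rough sketch of such a check:

    import re
    import subprocess


    def partitions_of(device):
        """Illustrative only: return `device` plus its partitions as listed
        in /proc/partitions, e.g. /dev/sdb -> ['/dev/sdb', '/dev/sdb1']."""
        pattern = re.compile(r'^%s(p?\d+)?$' % re.escape(device))
        names = []
        with open('/proc/partitions') as f:
            for line in f.readlines()[2:]:  # skip the header and blank line
                fields = line.split()
                if fields and pattern.match('/dev/' + fields[3]):
                    names.append('/dev/' + fields[3])
        return names


    def device_or_partition_mounted(device):
        """True if the whole disk or any of its partitions appears as a
        mount source in the output of mount(8)."""
        output = subprocess.check_output(['mount']).decode()
        sources = [line.split()[0] for line in output.splitlines() if line]
        return any(name in sources for name in partitions_of(device))

On the host below, device_or_partition_mounted('/dev/sdb') would return True because mount reports /dev/sdb1 on /, whereas a check against /dev/sdb alone finds nothing.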

ubuntu@bohr:~$ tail -30 /var/log/juju/unit-swift-storage-0.log
2014-07-29 20:29:34 INFO install GPT data structures destroyed! You may now partition the disk using fdisk or
2014-07-29 20:29:34 INFO install other utilities.
2014-07-29 20:29:34 INFO install Warning: The kernel is still using the old partition table.
2014-07-29 20:29:34 INFO install The new table will be used at the next reboot.
2014-07-29 20:29:34 INFO install The operation has completed successfully.
2014-07-29 20:29:34 INFO install 1+0 records in
2014-07-29 20:29:34 INFO install 1+0 records out
2014-07-29 20:29:34 INFO install 1048576 bytes (1.0 MB) copied, 0.00211706 s, 495 MB/s
2014-07-29 20:29:34 INFO install 100+0 records in
2014-07-29 20:29:34 INFO install 100+0 records out
2014-07-29 20:29:34 INFO install 51200 bytes (51 kB) copied, 0.00456763 s, 11.2 MB/s
2014-07-29 20:29:34 INFO install mkfs.xfs: cannot open /dev/sdb: Device or resource busy
2014-07-29 20:29:34 INFO install <open file '/proc/partitions', mode 'r' at 0x7ffedc45fdb0>
2014-07-29 20:29:34 INFO install Traceback (most recent call last):
2014-07-29 20:29:34 INFO install File "/var/lib/juju/agents/unit-swift-storage-0/charm/hooks/install", line 91, in <module>
2014-07-29 20:29:34 INFO install main()
2014-07-29 20:29:34 INFO install File "/var/lib/juju/agents/unit-swift-storage-0/charm/hooks/install", line 85, in main
2014-07-29 20:29:34 INFO install hooks.execute(sys.argv)
2014-07-29 20:29:34 INFO install File "/var/lib/juju/agents/unit-swift-storage-0/charm/hooks/charmhelpers/core/hookenv.py", line 478, in execute
2014-07-29 20:29:34 INFO install self._hooks[hook_name]()
2014-07-29 20:29:34 INFO install File "/var/lib/juju/agents/unit-swift-storage-0/charm/hooks/install", line 45, in install
2014-07-29 20:29:34 INFO install setup_storage()
2014-07-29 20:29:34 INFO install File "/var/lib/juju/agents/unit-swift-storage-0/charm/hooks/swift_storage_utils.py", line 180, in setup_storage
2014-07-29 20:29:34 INFO install mkfs_xfs(dev)
2014-07-29 20:29:34 INFO install File "/var/lib/juju/agents/unit-swift-storage-0/charm/hooks/swift_storage_utils.py", line 172, in mkfs_xfs
2014-07-29 20:29:34 INFO install check_call(cmd)
2014-07-29 20:29:34 INFO install File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
2014-07-29 20:29:34 INFO install raise CalledProcessError(retcode, cmd)
2014-07-29 20:29:34 INFO install subprocess.CalledProcessError: Command '['mkfs.xfs', '-f', '-i', 'size=1024', '/dev/sdb']' returned non-zero exit status 1
2014-07-29 20:29:34 ERROR juju.worker.uniter uniter.go:486 hook failed: exit status 1
ubuntu@bohr:~$ ll /dev/sd*
brw-rw---- 1 root disk 8, 0 Jul 29 20:27 /dev/sda
brw-rw---- 1 root disk 8, 1 Jul 29 20:27 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jul 29 20:27 /dev/sda2
brw-rw---- 1 root disk 8, 16 Jul 29 20:29 /dev/sdb
brw-rw---- 1 root disk 8, 17 Jul 29 20:27 /dev/sdb1
ubuntu@bohr:~$ ll /proc/partitions
-r--r--r-- 1 root root 0 Jul 29 20:34 /proc/partitions
ubuntu@bohr:~$ cat /proc/partitions
major minor #blocks name

   8 0 119454720 sda
   8 1 118405103 sda1
   8 2 1047552 sda2
   8 16 117220824 sdb
   8 17 117219800 sdb1

# /dev/sdb is root (!)

ubuntu@bohr:~$ mount |grep sdb
/dev/sdb1 on / type ext4 (rw)
ubuntu@bohr:~$ mount |grep sda
ubuntu@bohr:~$ ll /dev/sda*
brw-rw---- 1 root disk 8, 0 Jul 29 20:27 /dev/sda
brw-rw---- 1 root disk 8, 1 Jul 29 20:27 /dev/sda1
brw-rw---- 1 root disk 8, 2 Jul 29 20:27 /dev/sda2

# unsure why is_mounted() in charmhelpers didn't catch that:

ubuntu@bohr:~$ cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=8112276k,nr_inodes=2028069,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=1625168k,mode=755 0 0
/dev/disk/by-uuid/b93319af-7412-4bff-b8e1-d032c657b5cd / ext4 rw,relatime,data=ordered 0 0
none /sys/fs/cgroup tmpfs rw,relatime,size=4k,mode=755 0 0
none /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
none /run/user tmpfs rw,nosuid,nodev,noexec,relatime,size=102400k,mode=755 0 0
none /sys/fs/pstore pstore rw,relatime 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,name=systemd 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
ubuntu@bohr:~$ cat /etc/mtab
/dev/sdb1 / ext4 rw 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
none /sys/fs/cgroup tmpfs rw 0 0
none /sys/fs/fuse/connections fusectl rw 0 0
none /sys/kernel/debug debugfs rw 0 0
none /sys/kernel/security securityfs rw 0 0
udev /dev devtmpfs rw,mode=0755 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
none /run/shm tmpfs rw,nosuid,nodev 0 0
none /run/user tmpfs rw,noexec,nosuid,nodev,size=104857600,mode=0755 0 0
none /sys/fs/pstore pstore rw 0 0
systemd /sys/fs/cgroup/systemd cgroup rw,noexec,nosuid,nodev,none,name=systemd 0 0
ubuntu@bohr:~$ mount
/dev/sdb1 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
ubuntu@bohr:~$
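
One possible explanation (this may or may not be what the charm-helpers code actually does): the /proc/mounts output above lists the root filesystem under its /dev/disk/by-uuid/... name, so any check that compares raw device strings from /proc/mounts will never see /dev/sdb1, while /etc/mtab and mount(8) do show it. Resolving symlinks on the mount source would cover both spellings, roughly:

    import os


    def mounted_block_devices():
        """Illustrative only: canonical device paths of mounted filesystems.
        os.path.realpath() turns /dev/disk/by-uuid/<uuid> into /dev/sdb1."""
        devices = set()
        with open('/proc/mounts') as f:
            for line in f:
                source = line.split()[0]
                if source.startswith('/dev/'):
                    devices.add(os.path.realpath(source))
        return devices

On this host the returned set would contain '/dev/sdb1', even though that literal string never appears in /proc/mounts.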

Chris Glass (tribaal)
Changed in swift-storage (Juju Charms Collection):
assignee: nobody → Chris Glass (tribaal)
Chris Glass (tribaal)
Changed in swift-storage (Juju Charms Collection):
status: New → In Progress
Chris Glass (tribaal) wrote:

A fix was committed to charm-helpers, along with more unit tests. I also changed the logic in swift-storage to use "disks that are not in use" rather than any disk not in a blacklist. It should make things more robust in this scenario.

Marking as fix released - it should propagate to other charms using the charm-helpers code as well in further updates.

Changed in swift-storage (Juju Charms Collection):
status: In Progress → Fix Released
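
For context, a rough sketch of the "disks that are not in use" selection described in the comment above (not the actual swift-storage or charm-helpers code; it reuses the hypothetical device_or_partition_mounted() helper sketched in the bug description):

    def unused_disks(candidates):
        """Illustrative only: keep candidate devices that are not mounted,
        either directly or via one of their partitions, instead of relying
        on a static blacklist such as ['sda', 'vda', 'cciss/c0d0']."""
        return [dev for dev in candidates
                if not device_or_partition_mounted(dev)]


    # e.g. unused_disks(['/dev/sda', '/dev/sdb']) on the host above would
    # drop /dev/sdb because /dev/sdb1 holds the root filesystem.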