Comment 13 for bug 1493520

Artem Hrechanychenko (agrechanichenko) wrote :

Verified on the Kilo 154 ISO and the Liberty 55 ISO.

Steps to reproduce (a session sketch for steps 2-5 follows the list):
1) Deploy 3 controllers, 2 computes, 1 cinder.
2) SSH to the primary controller.
3) Fill the root filesystem with fallocate -l 12G /root/bigfile (after that, root_free == 0).
4) Verify that crm_mon -1 --show-node-attributes prints #health_disk = red for the primary controller.
5) Verify that pcs status shows all resources stopped on the primary controller.
6) Run the OSTF Sanity and Functional tests.
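For reference, a sketch of the shell session for steps 2-5 (node-1 is an assumed hostname for the primary controller; substitute the real one):

    # step 2: log in to the primary controller (node-1 is an example hostname)
    ssh root@node-1

    # step 3: exhaust the root filesystem, then confirm no free space is left
    fallocate -l 12G /root/bigfile
    df -h /

    # step 4: the node health attribute should now be red
    crm_mon -1 --show-node-attributes | grep health_disk

    # step 5: all resources should be reported as stopped on this node
    pcs status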

Actual result:

Failed OSTF tests:

Sanity:
  - Check that required services are running
Functional:
  - Create volume and boot instance from it
  - Create volume and attach it to instance
  - Check network connectivity from instance via floating IP
  - Create security group
  - Launch instance
  - Launch instance with file injection
  - Launch instance, create snapshot, launch instance from snapshot

After freeing the disk space and running crm node status-attr node_hostname delete "#health_disk", the resources started again, but the same OSTF tests failed with the same errors (the HA tests did not fail). The recovery commands are sketched below.
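A sketch of that recovery sequence, under the same node-1 assumption:

    # free the disk space again
    rm /root/bigfile

    # clear the health attribute so Pacemaker can restart resources on the node
    crm node status-attr node-1 delete "#health_disk"

    # resources should start again on the node
    pcs status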