Steps to reproduce:
1) Deploy 3 controllers, 2 computes, 1 cinder node
2) SSH to the primary controller
3) Fill the root filesystem with fallocate -l 12G /root/bigfile so that root_free == 0 (see the command sketch after this list)
4) Verify that crm_mon -1 --show-node-attributes prints #health_disk = red for the primary controller
5) Verify that pcs status shows that all resources are stopped on the primary controller
6) Run OSTF tests Sanity and Functional
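A minimal command sketch for steps 2-5, assuming the primary controller is reachable as node-1 (the hostname is an assumption; the commands otherwise mirror the steps above):

    ssh node-1                                            # step 2: primary controller (assumed hostname)
    fallocate -l 12G /root/bigfile                        # step 3: exhaust free space on the root filesystem
    df -h /                                               # root_free should now be ~0
    crm_mon -1 --show-node-attributes | grep health_disk  # step 4: expect "#health_disk : red" for this node
    pcs status                                            # step 5: all resources should be reported stopped on this node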
Actual result:
Failed OSTF tests:
Sanity - Check that required services are running
Functional - Create volume and boot instance from it
Create volume and attach it to instance
Check network connectivity from instance via floating IP
Create security group
Launch instance
Launch instance with file injection
Launch instance, create snapshot, launch instance from snapshot
After freeing the disk space and running crm node status-attr node_hostname delete "#health_disk", the resources started again, but the same OSTF tests failed with the same errors (the HA tests, however, did not fail).
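A sketch of the recovery commands implied above (node_hostname is a placeholder for the primary controller's name; exact crm syntax may vary between versions):

    rm /root/bigfile                                            # free the root filesystem again
    crm node status-attr node_hostname delete "#health_disk"    # clear the red health attribute
    pcs status                                                  # resources should start again on the node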
Verified on Kilo 154 ISO and Liberty 55 ISO