Comment 11 for bug 1493520

Revision history for this message
Artem Hrechanychenko (agrechanichenko) wrote :

{
  "build_id": "107",
  "openstack_version": "2015.1.0-8.0",
  "build_number": "107",
  "release_versions": {
    "2015.1.0-8.0": {
      "VERSION": {
        "build_id": "107",
        "openstack_version": "2015.1.0-8.0",
        "build_number": "107",
        "api": "1.0",
        "fuel-library_sha": "acfcfd289ca454585687b6ff9651b53e4ffaf0cd",
        "feature_groups": ["mirantis"],
        "fuel-nailgun-agent_sha": "d66f188a1832a9c23b04884a14ef00fc5605ec6d",
        "fuel-nailgun_sha": "a95a0bc965c11b0d412a00c4cb088888b919e054",
        "fuel-agent_sha": "e881f0dabd09af4be4f3e22768b02fe76278e20e",
        "production": "docker",
        "python-fuelclient_sha": "286939d3be220828f52e73b65928ed39662e1853",
        "astute_sha": "0f753467a3f16e4d46e7e9f1979905fb178e4d5b",
        "fuel-ostf_sha": "37c5d6113408a29cabe0f416fe99cf20e2bca318",
        "release": "8.0",
        "fuelmain_sha": "8e5e75302b2534fd38e4b41b795957111ac75543"
      }
    }
  },
  "auth_required": true,
  "api": "1.0",
  "fuel-library_sha": "acfcfd289ca454585687b6ff9651b53e4ffaf0cd",
  "feature_groups": ["mirantis"],
  "fuel-nailgun-agent_sha": "d66f188a1832a9c23b04884a14ef00fc5605ec6d",
  "fuel-nailgun_sha": "a95a0bc965c11b0d412a00c4cb088888b919e054",
  "fuel-agent_sha": "e881f0dabd09af4be4f3e22768b02fe76278e20e",
  "production": "docker",
  "python-fuelclient_sha": "286939d3be220828f52e73b65928ed39662e1853",
  "astute_sha": "0f753467a3f16e4d46e7e9f1979905fb178e4d5b",
  "fuel-ostf_sha": "37c5d6113408a29cabe0f416fe99cf20e2bca318",
  "release": "8.0",
  "fuelmain_sha": "8e5e75302b2534fd38e4b41b795957111ac75543"
}

Steps to reproduce:
1) Deploy a cluster with 3 controllers, 1 compute, and 1 compute+cinder
2) Fill the root partition on the primary controller
3) Wait 5-10 minutes while pacemaker stops all resources
4) Verify that all resources were really stopped
5) Execute: crm node status-attr <hostname> delete "#health_disk"
6) Wait 5-10 minutes
7) Verify that all services restarted automatically

Actual result:
crm node status-attr node-1.test.domain.local delete /

http://paste.openstack.org/show/475236/