Destroy two controllers and check pacemaker status is correct
Scenario:
1. Destroy first controller
2. Check pacemaker status
3. Run OSTF
4. Revert environment
5. Destroy second controller
6. Check pacemaker status
7. Run OSTF
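For reference, a minimal sketch of the kind of pacemaker check meant in steps 2 and 6, assuming SSH access from the test host to a surviving controller. The node names and the crm_mon-based check are placeholders, not the actual test code:

import subprocess

SURVIVING_CONTROLLER = "node-2"   # placeholder: a controller that is still up
DESTROYED_CONTROLLER = "node-1"   # placeholder: the controller that was destroyed

def pacemaker_status(host):
    # One-shot cluster status, e.g. "Online: [ node-2 node-3 ]" / "OFFLINE: [ node-1 ]"
    return subprocess.check_output(
        ["ssh", host, "crm_mon", "-1"], universal_newlines=True)

def check_pacemaker_after_destroy():
    status = pacemaker_status(SURVIVING_CONTROLLER)
    offline = [line for line in status.splitlines() if "OFFLINE" in line]
    assert offline and DESTROYED_CONTROLLER in offline[0], \
        "Destroyed controller is not reported OFFLINE:\n%s" % status
    assert SURVIVING_CONTROLLER not in offline[0], \
        "Surviving controller unexpectedly reported OFFLINE:\n%s" % status

if __name__ == "__main__":
    check_pacemaker_after_destroy()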
Actual Result:
OSTF failed on step 7:
Create volume and boot instance from it (failure)
The instance does not become ACTIVE, so deletion starts and then fails by timeout:
fuel_health.test: DEBUG: Waiting for <Server: ost1_test-boot-volume-instance1099375625> to get to ACTIVE status. Currently in build status
fuel_health.test: DEBUG: Sleeping for 10 seconds
fuel_health.common.test_mixins: INFO: STEP:5, verify action: 'server deletion'
fuel_health.nmanager: DEBUG: Deleting server.
fuel_health.test: DEBUG: Sleeping for 10 seconds
fuel_health.test: DEBUG: Sleeping for 10 seconds
fuel_health.common.test_mixins: INFO: Timeout 30s exceeded for server deletion
fuel_health.common.test_mixins: DEBUG: Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/fuel_health/common/test_mixins.py", line 177, in verify
result = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/fuel_health/common/test_mixins.py", line 223, in __exit__
raise AssertionError(msg)
AssertionError: Time limit exceeded while waiting for server deletion to finish.
So it looks like instance creation after destructive actions takes a little more time, so we may need to increase the timeout for instance creation.
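For illustration, a generic sketch of the polling wait the OSTF test performs (the "Sleeping for 10 seconds" loop in the log above); the timeout value here is an assumption and simply shows what "increase the timeout" would mean, it is not the actual fuel_health code:

import time

BUILD_TIMEOUT = 300    # seconds; illustrative value that a fix would raise
SLEEP_INTERVAL = 10    # matches the "Sleeping for 10 seconds" polls in the log

def wait_for_status(get_status, expected, timeout=BUILD_TIMEOUT):
    # Poll get_status() until it returns `expected` or the timeout expires,
    # mirroring the wait loop visible in the OSTF log above.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status() == expected:
            return
        time.sleep(SLEEP_INTERVAL)
    raise AssertionError(
        "Time limit exceeded while waiting for status %r" % expected)

# Hypothetical usage with a novaclient handle:
#   wait_for_status(lambda: nova.servers.get(server.id).status, "ACTIVE")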
VERSION:
feature_groups:
- mirantis
production: "docker"
release: "8.0"
api: "1.0"
build_number: "408"
build_id: "408"
fuel-nailgun_sha: "9ebbaa0473effafa5adee40270da96acf9c7d58a"
python-fuelclient_sha: "4f234669cfe88a9406f4e438b1e1f74f1ef484a5"
fuel-agent_sha: "df16d41cd7a9445cf82ad9fd8f0d53824711fcd8"
fuel-nailgun-agent_sha: "92ebd5ade6fab60897761bfa084aefc320bff246"
astute_sha: "c7ca63a49216744e0bfdfff5cb527556aad2e2a5"
fuel-library_sha: "7ef751bdc0e4601310e85b8bf713a62ed4aee305"
fuel-ostf_sha: "214e794835acc7aa0c1c5de936e93696a90bb57a"
fuel-mirror_sha: "8bb8c70efc61bcf633e02d6054dbf5ec8dcf6699"
fuelmenu_sha: "2a0def56276f0fc30fd949605eeefc43e5d7cc6c"
shotgun_sha: "63645dea384a37dde5c01d4f8905566978e5d906"
network-checker_sha: "9f0ba4577915ce1e77f5dc9c639a5ef66ca45896"
fuel-upgrade_sha: "616a7490ec7199f69759e97e42f9b97dfc87e85b"
fuelmain_sha: "62573cb2a8aa54845de9303b4a30935a90e1db61"
It seems the root of the issue is not only in timeouts. We expect a dictionary as the response, but we get an error message instead:
ERROR: Gateway Time-out (HTTP 504)
Steps to reproduce:
1) Deploy cluster: 3 controllers, 2 computes, 1 Cinder
2) Destroy (force shutoff) one controller
3) Create a Cinder volume
Expected (http://paste.openstack.org/show/483768/):
The volume is created in ~2 min; the CLI response looks like the paste above, and the REST API returns a dictionary.
Actual (http://paste.openstack.org/show/483769/):
The response is the error message above, but the volume was created anyway. Approximate time of volume creation: ~5 min.
Screenshot: https://drive.google.com/file/d/0BzWDM1PONYEub0FpdXlMdzQ3S1U/view?usp=sharing
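For reference, a hedged reproduction sketch of step 3 using python-cinderclient; the credentials, tenant, endpoint and volume name are placeholders, and the except branch mirrors the observed 504 behaviour rather than any actual OSTF code:

import time

from cinderclient import client as cinder_client
from cinderclient import exceptions as cinder_exc

# Placeholders: credentials, tenant and endpoint depend on the environment.
cinder = cinder_client.Client('2', 'admin', 'admin', 'admin',
                              'http://<keystone-vip>:5000/v2.0')

try:
    volume = cinder.volumes.create(size=1, name='test-504')
    # Expected path: the API returns a volume description (a dictionary
    # in the REST response), and the volume becomes available in ~2 min.
    print("Created volume %s" % volume.id)
except cinder_exc.ClientException as exc:
    # Observed path with one controller destroyed: the request fails with
    # "Gateway Time-out (HTTP 504)", yet the volume is still created
    # in the background, taking ~5 min.
    print("Create request failed: %s" % exc)
    for _ in range(60):
        if any(v.name == 'test-504' for v in cinder.volumes.list()):
            print("...but the volume appeared anyway")
            break
        time.sleep(10)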