> puppet rerun succeeds. looks like environment performance bug
Was not able to reproduce it yet:
puppet-apply.log:
2014-05-08T10:44:10.765495+00:00 err: (/Stage[main]/Osnailyfacter::Cluster_ha/Nova_floating_range[10.108.1.128-10.108.1.254]) Could not evaluate: Oops - not sure what happened: 757: unexpected token at '<html><body><h1>504 Gateway Time-out</h1>
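For context, the "757: unexpected token" wrapper is just Ruby's JSON parser choking on the HTML error page that haproxy returned instead of a keystone JSON response. A minimal sketch (the body string is illustrative, copied from the error above):

```ruby
require 'json'

# When the backend does not answer in time, haproxy replies with an
# HTML error page. The puppet provider feeds that page to JSON.parse,
# which raises the "unexpected token at '<html>..." error seen in
# puppet-apply.log.
body = "<html><body><h1>504 Gateway Time-out</h1>"
begin
  JSON.parse(body)
rescue JSON::ParserError => e
  puts "JSON::ParserError: #{e.message}"
end
```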
Around 10:44:10, only one controller node was deploying, which means we got the "504 Gateway Time-out" from haproxy+keystone on node-1. Keystone was already running on this node:
2014-05-08T10:40:31.977063+00:00 debug: 2014-05-08 08:18:31.919 29269 INFO eventlet.wsgi.server [-] (29269) wsgi starting up on http://10.108.2.3:5000/
And there are no errors in the keystone log around 10:44:10. So it could really be an environment performance issue where haproxy did not manage to get a response from the keystone backend in time.
nova-api.log shows the corresponding 504 on the token request:
2014-05-08 10:44:10.836 2079 DEBUG urllib3.connectionpool [-] "POST /v2.0/tokens HTTP/1.1" 504 None _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:330
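If this is indeed a load issue, the window that produces the 504 is haproxy's server-side timeout: haproxy returns "504 Gateway Time-out" when the backend fails to answer within `timeout server`. A hypothetical fragment for the keystone frontend (names, addresses, and values are illustrative, not taken from this environment):

```
listen keystone-api
  bind 10.108.2.2:5000
  # haproxy emits "504 Gateway Time-out" when the backend does not
  # respond within this window; raising it papers over slow backends
  # at the cost of slower failure detection.
  timeout server 60s
  server node-1 10.108.2.3:5000 check
```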