To reproduce, you need to run puppet twice.
First, configure a service endpoint:
keystone_endpoint { 'RegionOne/cinder':
  ensure       => present,
  public_url   => 'http://example.org:8776/v1/%(tenant_id)s',
  admin_url    => 'http://example.org:8776/v1/%(tenant_id)s',
  internal_url => 'http://example.org:8776/v1/%(tenant_id)s',
}
Next, update 2 or more urls of the service endpoint:
keystone_endpoint { 'RegionOne/cinder':
  ensure       => present,
  public_url   => 'http://test.com:8776/v1/%(tenant_id)s',
  admin_url    => 'http://test.com:8776/v1/%(tenant_id)s',
  internal_url => 'http://test.com:8776/v1/%(tenant_id)s',
}
You will end up with 3 endpoints, which you can confirm by running keystone endpoint-list.
This happens because updating a URL results in the endpoint being completely deleted and recreated; there is no way to update an endpoint in place.
When 2 or more URLs are updated, the first URL update deletes the old endpoint and recreates it with the updated values. The 2nd and 3rd URL updates fail to delete the old endpoint because its service_id is cached, so a 2nd and a 3rd endpoint end up being created.
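The stale-cache behavior described above can be sketched as follows. This is a minimal model of the suspected failure mode, not the actual provider code; all class and method names are hypothetical:

```python
class FakeKeystone:
    """Stand-in for the Keystone endpoint API: create and delete only,
    mirroring the lack of an in-place update call."""
    def __init__(self):
        self.endpoints = {}   # endpoint id -> url
        self._next_id = 0

    def create(self, url):
        self._next_id += 1
        self.endpoints[self._next_id] = url
        return self._next_id

    def delete(self, endpoint_id):
        # Deleting an id that no longer exists silently does nothing,
        # mirroring a delete issued against a stale cached endpoint.
        self.endpoints.pop(endpoint_id, None)


class Provider:
    """Caches the endpoint id once and never refreshes it between
    property syncs -- the hypothesized bug."""
    def __init__(self, api, endpoint_id):
        self.api = api
        self.cached_id = endpoint_id   # cached, never invalidated

    def set_url(self, new_url):
        self.api.delete(self.cached_id)  # succeeds only on the first sync
        self.api.create(new_url)         # a new endpoint is created every time


api = FakeKeystone()
original = api.create('http://example.org:8776/v1/old')
provider = Provider(api, original)

# Puppet syncs three changed URL properties -> three delete/create cycles.
for prop in ('public_url', 'admin_url', 'internal_url'):
    provider.set_url('http://test.com:8776/v1/' + prop)

print(len(api.endpoints))  # 3 endpoints remain instead of 1
```

Only the first delete removes a real record; the second and third miss because they target the already-deleted cached id, leaving three endpoints where one was expected.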