[nailgun] Addition of new Compute node cause puppet run on all controllers (fuel cli)

Bug #1280318 reported by Roman Sokolkov
This bug affects 4 people
Affects: Fuel for OpenStack
Status: Fix Released
Importance: High
Assigned to: Dima Shulyak

Bug Description

Description:

A simple cluster was deployed from the CLI. Adding a new compute node causes a puppet run on all cloud nodes.

Environment:
 - Fuel 4.0.
 - 3 nodes

Steps to reproduce:
1. Create a simple cluster (2 nodes) from the WebUI.
2. Download the env configuration, make minor edits, and upload it back:
    fuel env --env 3 deployment default
    fuel env --env 3 deployment upload
3. Press the "Deploy changes" button and wait until the deployment has finished.
4. Add a new compute node and press "Deploy changes".

Expected result:
The compute node is provisioned and deployed.

Actual result:
The new compute node is provisioned, but puppet also runs on the controller and on the old compute node.

summary: - Addition of new Compute node caused puppet run on all controllers
+ Addition of new Compute node cause puppet run on all controllers
Revision history for this message
Vladimir Kuklin (vkuklin) wrote : Re: Addition of new Compute node cause puppet run on all controllers

Could you provide diagnostic snapshot of the environment?

Changed in fuel:
assignee: nobody → Fuel Python Team (fuel-python)
status: New → Incomplete
Revision history for this message
Roman Sokolkov (rsokolkov) wrote :
Mike Scherbakov (mihgen)
Changed in fuel:
status: Incomplete → Confirmed
importance: Undecided → High
milestone: none → 4.1
Mike Scherbakov (mihgen)
tags: added: astute
Revision history for this message
Evgeniy L (rustyrobot) wrote :

You've redefined the deployment data and then sent all of your nodes to the /api/v1/clusters/5/orchestrator/deployment/ handler; as a result, deployment was started on all nodes.

If you want to deploy only the compute node, then you need to send only your compute nodes to this handler.

Changed in fuel:
status: Confirmed → Invalid
description: updated
summary: - Addition of new Compute node cause puppet run on all controllers
+ Addition of new Compute node cause puppet run on all controllers (fuel
+ cli)
Changed in fuel:
status: Invalid → New
Revision history for this message
Evgeniy L (rustyrobot) wrote : Re: Addition of new Compute node cause puppet run on all controllers (fuel cli)

Run

     fuel deployment --env-id 3 --delete

command between steps 3 and 4.

It's not obvious, and we need to rethink the use case for the custom settings functionality.
Maybe we can keep per-node settings in the node models.

Changed in fuel:
milestone: 4.1 → 5.0
status: New → Confirmed
tags: added: customer-found
Mike Scherbakov (mihgen)
tags: added: cli
Revision history for this message
Roman Sokolkov (rsokolkov) wrote :

Workaround:

psql -U nailgun -W -h 127.0.0.1
update clusters set is_customized=false where id=${ID};

Revision history for this message
Dmitry Borodaenko (angdraug) wrote :

This shouldn't be high priority if there's a safe and simple workaround.

tags: added: release-notes
Dmitry Pyzhov (dpyzhov)
Changed in fuel:
milestone: 5.0 → 5.1
Revision history for this message
Meg McRoberts (dreidellhasa) wrote :

Added to Known Issues in 5.0 Release Notes

Artem Roma (aroma-x)
Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Artem Roma (aroma-x)
Revision history for this message
Artem Roma (aroma-x) wrote :

AFAIU the issue occurs because of how deployment data is serialized: if the deployment data has been changed via fuel-cli, it is stored in the 'replaced_deployment_info' attribute of the Cluster model, and it is that attribute which is checked first when the deployment info is retrieved: https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/task/task.py#L138-L139

So, as you can see, this data will be used in all future redeployments of the cluster; regardless of how many nodes you add to the cluster after the settings change, they are not deployed.
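The lookup described above can be modeled with a short sketch. The function and field names here are illustrative, not the actual nailgun code: the point is that a cluster-level override, once stored, shadows fresh serialization, so nodes added later never make it into the deployment info.

```python
# Hypothetical model of the deployment-info lookup; names are
# illustrative, not the real nailgun API.

def get_deployment_info(cluster, serialize):
    """Return deployment info, preferring the CLI-uploaded override."""
    if cluster.get("replaced_deployment_info"):
        # The snapshot stored at upload time wins over the current state,
        # so it still lists only the nodes that existed back then.
        return cluster["replaced_deployment_info"]
    return serialize(cluster)

# compute-2 was added after the configuration was uploaded via fuel-cli.
cluster = {
    "nodes": ["controller-1", "compute-1", "compute-2"],
    "replaced_deployment_info": [{"uid": "controller-1"}, {"uid": "compute-1"}],
}
info = get_deployment_info(cluster, lambda c: [{"uid": n} for n in c["nodes"]])
# 'info' still contains only controller-1 and compute-1.
```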

As Evgeniy L has noticed, clearing the mentioned attribute will solve the issue, but it must be done explicitly by the user via fuel-cli or via direct interaction with the DB (and, by the way, the workaround introduced by Roman Sokolkov doesn't fix the problem, so the corresponding 5.0 Release Notes entry should be updated).

My proposal is to modify the nailgun code so that 'replaced_deployment_info' is purged after a successful deployment of the cluster.

Of course, such a solution is not transparent to the user, and maybe, as Evgeniy said, we should reorganize our workflow for managing cluster deployment data, but it would still be a fix.

Revision history for this message
Roman Sokolkov (rsokolkov) wrote :

Artem,

I've checked this:

"My proposal is to modify the nailgun code so that 'replaced_deployment_info' is purged after a successful deployment of the cluster."

So finally I've succeeded with this (Fuel 5.0):

- is_customized='f'
- replaced_deployment_info='{}'

Revision history for this message
Artem Roma (aroma-x) wrote :

Roman,

I want to clarify the situation with this issue one more time so that no one is left with any misunderstandings: first of all, the 'is_customized' flag controls the display of the UI notification that the cluster has been changed via fuel-cli, and I think one should not reset it if the environment actually has been changed using the Fuel client, because this information has some value, hasn't it?

And second: only purging 'replaced_deployment_info' makes newly added nodes deploy properly, and that is the fix for the problem described in this bug.
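The distinction drawn above can be illustrated with a toy model (hypothetical field names, not the real nailgun model): the deploy path never consults 'is_customized', so resetting it alone changes nothing, while clearing 'replaced_deployment_info' restores normal serialization.

```python
# Toy model with hypothetical field names, mirroring the point above:
# 'is_customized' is a UI flag only; the deploy path checks
# 'replaced_deployment_info'.

def nodes_in_next_deploy(cluster):
    override = cluster["replaced_deployment_info"]
    if override:  # 'is_customized' is never consulted here
        return [n["uid"] for n in override]
    return list(cluster["nodes"])

cluster = {
    "is_customized": True,
    "nodes": ["controller-1", "compute-1", "compute-2"],
    "replaced_deployment_info": [{"uid": "controller-1"}, {"uid": "compute-1"}],
}

cluster["is_customized"] = False           # old workaround: no effect
after_flag_reset = nodes_in_next_deploy(cluster)

cluster["replaced_deployment_info"] = []   # purging the override: works
after_purge = nodes_in_next_deploy(cluster)
```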

Revision history for this message
Meg McRoberts (dreidellhasa) wrote :

Listed as "Known Issue" in 5.0.1 Release Notes.

Revision history for this message
Artem Roma (aroma-x) wrote :

Meg,

Yes, it is listed in "Known Issues", but as I mentioned in my previous comment, that workaround doesn't actually fix anything. I think we should update our documentation from

psql -U nailgun -W -h 127.0.0.1
update clusters set is_customized=false where id=${ID};

to following snippet:

psql -U nailgun -W -h 127.0.0.1
update clusters set replaced_deployment_info='{}' where id=${ID};

with a note that the user must do this before deploying newly added nodes to an already deployed cluster if they have modified the deployment info for that cluster via the CLI.

Artem Roma (aroma-x)
Changed in fuel:
assignee: Artem Roma (aroma-x) → Fuel Python Team (fuel-python)
Dmitry Ilyin (idv1985)
summary: - Addition of new Compute node cause puppet run on all controllers (fuel
- cli)
+ [nailgun] Addition of new Compute node cause puppet run on all
+ controllers (fuel cli)
tags: added: deploy-seq
Artem Roma (aroma-x)
tags: added: fuel-client
removed: cli
Revision history for this message
Dima Shulyak (dshulyak) wrote :

So, this is by design: once a user has changed the cluster configuration with the help of the Fuel client, they lose the ability to modify it via the UI in the future. I am not sure whether this is really that bad, but as of now the fields:

- replaced_deployment_info
- is_customized

won't be cleared automatically.

As a fix for this bug, we can provide the ability to reset these fields via the Fuel client.

Or just provide Artem's workaround in the Known Issues section:

psql -U nailgun -W -h 127.0.0.1
update clusters set replaced_deployment_info='{}' where id=${ID};

Kamil Sambor (ksambor)
Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Kamil Sambor (ksambor)
status: Confirmed → In Progress
Kamil Sambor (ksambor)
Changed in fuel:
assignee: Kamil Sambor (ksambor) → Fuel Python Team (fuel-python)
Dima Shulyak (dshulyak)
Changed in fuel:
status: In Progress → Confirmed
Dima Shulyak (dshulyak)
Changed in fuel:
assignee: Fuel Python Team (fuel-python) → Dima Shulyak (dshulyak)
Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix proposed to fuel-web (master)

Fix proposed to branch: master
Review: https://review.openstack.org/112333

Changed in fuel:
status: Confirmed → In Progress
Revision history for this message
Dima Shulyak (dshulyak) wrote :

Actually, there is already such a command, which I wanted to implement previously:

fuel deployment --delete --env 5

Revision history for this message
OpenStack Infra (hudson-openstack) wrote : Fix merged to fuel-web (master)

Reviewed: https://review.openstack.org/112333
Committed: https://git.openstack.org/cgit/stackforge/fuel-web/commit/?id=4cd225ca9d0eb0b1dd398b07ad8202169f9c3ea4
Submitter: Jenkins
Branch: master

commit 4cd225ca9d0eb0b1dd398b07ad8202169f9c3ea4
Author: Dima Shulyak <email address hidden>
Date: Wed Aug 6 18:48:44 2014 +0300

    Store replaced info on node instead of cluster

    - add necessary columns to the node model
    - add replace/get provisioning/deployment info on objects.Cluster
    - API interaction with the CLI is not changed
    - added a migration from the old data model to the new one

    DeploymentInfo is treated everywhere as a list, because it will consist
    of multiple YAML files per node, on a per-role basis.

    ProvisioningInfo is treated as a dict:
    - one YAML for the cluster section (engine)
    - one YAML per node with the provisioning params of that node

    It is possible to manage the cluster from both the UI and the CLI.

    Change-Id: I9345a5e9adadead2c149e85fab139ae4e5615cf1
    Closes-Bug: #1280318
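The data-model change this commit describes can be sketched roughly as follows (illustrative names, not the actual nailgun models or migration code): with the override stored per node rather than on the cluster, a newly added node carries no override and simply falls through to normal serialization.

```python
# Rough sketch of the per-node override storage introduced by the fix;
# names are illustrative, not the actual nailgun models.

def deployment_info(nodes, serialize_node):
    """Build deployment info node by node, honoring per-node overrides."""
    info = []
    for node in nodes:
        override = node.get("replaced_deployment_info")
        # Only nodes the user customized carry an override; new nodes
        # get freshly serialized data.
        info.append(override if override else serialize_node(node))
    return info

nodes = [
    {"uid": "controller-1",
     "replaced_deployment_info": {"uid": "controller-1", "custom": True}},
    {"uid": "compute-2"},  # newly added node, no override stored
]
infos = deployment_info(nodes, lambda n: {"uid": n["uid"], "custom": False})
```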

Changed in fuel:
status: In Progress → Fix Committed
tags: added: in progress
Revision history for this message
Andrey Sledzinskiy (asledzinskiy) wrote :

verified on {

    "build_id": "2014-09-17_21-40-34",
    "ostf_sha": "64cb59c681658a7a55cc2c09d079072a41beb346",
    "build_number": "11",
    "auth_required": true,
    "api": "1.0",
    "nailgun_sha": "eb8f2b358ea4bb7eb0b2a0075e7ad3d3a905db0d",
    "production": "docker",
    "fuelmain_sha": "8ef433e939425eabd1034c0b70e90bdf888b69fd",
    "astute_sha": "f5fbd89d1e0e1f22ef9ab2af26da5ffbfbf24b13",
    "feature_groups": [
        "mirantis"
    ],
    "release": "5.1",
    "release_versions": {
        "2014.1.1-5.1": {
            "VERSION": {
                "build_id": "2014-09-17_21-40-34",
                "ostf_sha": "64cb59c681658a7a55cc2c09d079072a41beb346",
                "build_number": "11",
                "api": "1.0",
                "nailgun_sha": "eb8f2b358ea4bb7eb0b2a0075e7ad3d3a905db0d",
                "production": "docker",
                "fuelmain_sha": "8ef433e939425eabd1034c0b70e90bdf888b69fd",
                "astute_sha": "f5fbd89d1e0e1f22ef9ab2af26da5ffbfbf24b13",
                "feature_groups": [
                    "mirantis"
                ],
                "release": "5.1",
                "fuellib_sha": "d9b16846e54f76c8ebe7764d2b5b8231d6b25079"
            }
        }
    },
    "fuellib_sha": "d9b16846e54f76c8ebe7764d2b5b8231d6b25079"

}

tags: removed: in progress
Changed in fuel:
status: Fix Committed → Fix Released