What is this article about?
When reading the Oracle documentation you might discover that the method of ‘migrating’ a running OVS-server into a new POOL is not very well documented. In this article I will explain the steps that allow you to add a running OVS-server to a new POOL on a new OVS-Manager.
Setup used to write this article.
- two ovs-managers (ovs-manager1 / ovs-manager2).
Both version 2.2.0
- one ovs-server (ovs-server1) that has two running domains on top of it.
In my setup I didn’t create an OCFS cluster (shared storage) from which the domains are run. Because the domains in my setup use physical paths to the actual domain components, and because the cluster file service will not allow the cluster to be broken (service stopped) while domains are running, I cannot assume this article is applicable to a clustered setup. You are advised to ‘TEST’ a migration in a LAB situation first! This article might help you get an idea of the required steps.
This article describes the steps taken to migrate “ovs-server1”, which was managed by “ovs-manager1”, to the new manager “ovs-manager2”. Reasons to do this might be a hardware replacement without a decent backup of the ovs-manager, “ovs-manager1” being physically destroyed and needing to be restored, etc.
- Make sure you have a backup of the ovs-manager’s database (see the ovs-manager documentation on how a backup is created).
- Log on to ‘ovs-manager1’, on which the ovs-server is a member of an existing pool.
- Select the ‘Server Pools’ tab.
- “Make sure you have all the CIs of this server documented!”
- Select the pool in which the ‘ovs-server1’ is listed and select “Delete”
- In the delete form “ONLY SELECT FORCE REMOVE”. DO NOT SELECT any other option!
- When the pool is successfully deleted, log on to ovs-server1 using an SSH client.
- Verify that all the domains are still in the state we left them in, using the xm list command.
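A sample check with the Xen management tool (the domain name shown is hypothetical; only Domain-0 is guaranteed to appear on your box):

```
xm list
Name                        ID   Mem VCPUs      State   Time(s)
Domain-0                     0   564     2     r-----   1376.4
22_mydomain1                 1  1024     1     -b----    420.7
```

Every domain that was running before should still be listed with a running or blocked state.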
- Document the root repository used by the ovs-agent. You need to document it because the repositories are part of the ovs-agent’s local database, which is cleaned in the next step. You can find the current repository by listing the repositories the agent knows about; the listing should contain a string similar to the one below. We need this string to verify the repository when we recreate it.
[ * ] e3514a86-a763-4eee-84b5-0fedcc03416d => /dev/sdb1
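The line above came from the agent’s repository listing; assuming -l is the list flag of the repos.py utility (the same utility used further below to add and label repositories), the command would be:

```
/opt/ovs-agent-2.3/utils/repos.py -l
[ * ] e3514a86-a763-4eee-84b5-0fedcc03416d => /dev/sdb1
```

The asterisk marks the repository that currently carries the ‘root’ label.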
- The next step is to clean the ovs-agent’s local database, in which the ovs-manager is registered. This entry prevents us from linking any other ovs-manager to this agent. The ovs-agent version 2.3 contains a cleanup script, located in /opt/ovs-agent-2.3/utils/cleanup.py, that is needed to perform this cleanup. If the cleanup script is not there, you are probably using ovs-server version 2.2.0. You can check the ovs-server release by issuing the following command;
cat /etc/*-release
Oracle VM server release 2.2.1
If you are indeed using version 2.2.0, you can simply copy the cleanup.py script from an ovs-server version 2.2.1 installation and put it in the path mentioned above (yes, the same ovs-agent version, but still minor differences 😉).
- If cleanup.py is in place, run it and confirm the confirmation prompt.
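The run itself is just the script with no arguments (the exact wording of the confirmation prompt may differ per agent version):

```
/opt/ovs-agent-2.3/utils/cleanup.py
```

Answer ‘y’ when asked to confirm wiping the agent’s local database.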
- The result of this action is that the agent’s local database has been cleaned. You might notice that the /OVS mapping is gone as well. Not to worry: using the xm list command you can verify that the various domains are still up and running. This is because they are configured to use the physical path to their template directory somewhere under /var/ovs/mount/, which is still mounted.
- Before you add the ovs-server to the new manager, we need to restore the agent’s entry for the root repository. For this we need the information documented in step 9, specifically the devices listed there. Initially you add the various repositories; afterwards (step 14) we assign the ‘root’ label to one of them, in our case the one created here.
/opt/ovs-agent-2.3/utils/repos.py -n /dev/sdb1
- The previous command returns a line that also contains the UUID for this repository. Verify that this UUID is the same as the one documented in step 9; the recreated UUID must match!
- Assign the ‘root’ label to this repository (this might be a different one if you used multiple repositories):
/opt/ovs-agent-2.3/utils/repos.py -r e3514a86-a763-4eee-84b5-0fedcc03416d
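To double-check the result, list the repositories again (assuming -l is the agent’s list flag, as above); the repository carrying the ‘root’ label should now be flagged with an asterisk:

```
/opt/ovs-agent-2.3/utils/repos.py -l
[ * ] e3514a86-a763-4eee-84b5-0fedcc03416d => /dev/sdb1
```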
- When you are finished restoring the repo config, you can start adding the server to a new POOL on ovs-manager2.
- Logon to ovs-manager2 (the new ovs-manager)
- Select the “Server Pools” tab.
- Select “Create pool”.
- Fill out all the required fields and use the cleaned ovs-server1 machine as the server to be added.
- Finish all the required steps so a new pool is created.
- You might notice you now have a new pool with ovs-server1 in it, but with no virtual machines. This is because we need to “discover / reimport” these from the ovs-server. This is done as follows.
- Select “Resources”
- Select “Virtual machine Images”
- Select “Import”
- Select “Select from server pool (discover and register)”
- Select the Server POOL you have just created.
- Select the (running?) virtual machine image name in the pulldown and fill out all the required fields (it might be wise to match these with the initial configuration of the VM image you are trying to register).
- Finish all the steps and repeat steps 25 > 29 for each VM image you would like to import.
- DON’T FORGET TO “APPROVE” them 😉
This should be all you need to do to restore the machine and all of its (running?) images into the new OVS-manager.
If all is well, after creating the new POOL in step 19 / 21 the /OVS share should be remounted by the ovs-agent / ovs-manager. You might want to verify this on the ovs-server box.
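On OVM 2.2, /OVS is (as far as I have seen) a symlink into /var/ovs/mount/, so a quick sanity check on the ovs-server could look like this:

```
ls -ld /OVS     # should point into /var/ovs/mount/<root repo UUID>
xm list         # the imported domains should still be running
```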
Hope this helps 🙂