Multi-Site Orchestrator Deployment

Reference

This page is a reference and takeaway so that you can understand how the shared Multi-Site Orchestrator was deployed for this lab, just as with the APIC.

You can deploy Cisco ACI Multi-Site Orchestrator in a number of different ways: as an OVA through vCenter, directly on an ESXi host without vCenter, or with a Python script. The Python script is now the recommended method, but to make each step clear, the steps below were performed manually by deploying the OVA through vCenter.
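For comparison, the same OVA deployment can be driven from the command line with VMware's ovftool. The sketch below only builds and prints the command (a dry run); the OVA path, vCenter target, datastore, and network names are placeholder assumptions for this lab, not verified values:

```shell
#!/usr/bin/env bash
# Dry-run sketch of an ovftool-based MSO OVA deployment.
# All names below (OVA path, vCenter URL, datastore, network) are assumptions.
OVA="./msc-2.0.1c.ova"
TARGET="vi://administrator@vsphere.local@vcenter.example.com/DC1/host/Cluster1"

CMD="ovftool --name=mso-node1 --datastore=datastore1 --network='VM Network' --powerOn ${OVA} ${TARGET}"
echo "$CMD"   # print only; run the command manually once the values are correct
```

Running the printed command per node (changing `--name`) would replace the wizard clicks below, but the manual flow is shown for clarity.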

Step 1 - Deploy Three MSO Node Instances

Log in to vCenter and find the DataCenter or Cluster in which you wish to deploy an MSO node.

  1. Right-click on the DataCenter or Cluster
  2. Select Deploy OVF Template...

In the Deploy OVF Template wizard:

  1. Click Local file
  2. Select the MSO OVA file local to your machine
  3. Click Next

On the next screen of the Deploy OVF Template wizard:

  1. Click Next

On the next screen of the Deploy OVF Template wizard:

  1. Name the MSO VM. Here the name mso-node1 was used.
  2. Ensure your appropriate DataCenter is selected
  3. Click Next

On the next screen of the Deploy OVF Template wizard:

  1. Click the appropriate cluster
  2. Click Next

On the next screen of the Deploy OVF Template wizard:

  1. Select the appropriate datastore
  2. Click Next

On the next screen of the Deploy OVF Template wizard:

  1. Select the appropriate network
  2. Click Next

On the next screen of the Deploy OVF Template wizard:

  1. Fill out the OVA template. The settings used here can be found below. Note: the IP address and password entered here will be used to SSH into the MSO node for further setup.
  2. Click Next

On the final screen of the Deploy OVF Template wizard:

  1. Review the OVA deployment settings for correctness
  2. Check Power on after deployment
  3. Click Finish

All of Step 1 must be repeated to deploy two (2) more MSO nodes.
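Only the VM name and IP address differ between the three deployments, so the per-node wizard values can be tabulated up front. A minimal sketch, assuming hypothetical addresses 10.100.1.41-43 on a /24 (substitute your own lab values):

```shell
#!/usr/bin/env bash
# Generate the name/IP pairs to enter in the OVA wizard for each MSO node.
# The addresses are illustrative assumptions; use your lab's values.
SETTINGS=""
for i in 1 2 3; do
  SETTINGS+="mso-node${i} 10.100.1.4${i}/24"$'\n'
done
printf '%s' "$SETTINGS"
```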

After all three (3) MSO node VMs are deployed, your vCenter should look something like the screenshot below:

Step 2 - Set Up the MSO Docker Swarm Cluster

  1. In your environment, SSH into each MSO node as root, using the password you set during the VM deployment.
  2. On MSO node1, cd to /opt/cisco/msc/builds/msc_2.0.1c/prodha/ and run the provided Python script msc_cfg_init.py:

                    [root@mso-node1 ~]# cd /opt/cisco/msc/builds/msc_2.0.1c/prodha/
                    [root@mso-node1 prodha]# ./msc_cfg_init.py 
                    2019-01-16 14:30:36,901 INFO [msc_cfg_init:52]: Starting the initialization of the cluster...
                    2019-01-16 14:30:53,828 INFO [msc_cfg_init:28]: Create swarm....
                    2019-01-16 14:30:54,060 INFO [msc_cfg_init:43]: Create docker secrets....
                    
                    2019-01-16 14:30:54,236 INFO [msc_cfg_init:48]: Create secret for nginx key...
                    yzv1femspbglxn22j8a1fmyez
                    
                    Create secret for nginx crt...
                    nu62zzu1i3snkqxrec4abq1ja
                    
                    Both secrets created successfully.
                    
                    2019-01-16 14:30:54,319 INFO [msc_cfg_init:55]: Join other nodes to the cluster by executing the following on each of the other nodes:
                    ./msc_cfg_join.py SWMTKN-1-5vu9stavbwh6wnwg7mqpur9wr3mashl31iiespxleyx7cj95n5-58fggpwqfl0egxbhfzcxf4u9z 10.100.1.41
                    [root@mso-node1 prodha]#
                

    Copy the join command printed by MSO node1 above (./msc_cfg_join.py SWMTKN-1-5vu9stavbwh6wnwg7mqpur9wr3mashl31iiespxleyx7cj95n5-58fggpwqfl0egxbhfzcxf4u9z 10.100.1.41) and execute it on MSO node2.
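    If you script this step, the join command can be extracted from captured msc_cfg_init.py output instead of being copied by hand. A sketch using a locally saved sample of the output shown above (in practice you would redirect the script's output to the log file):

```shell
#!/usr/bin/env bash
# Extract the ./msc_cfg_join.py command from a captured msc_cfg_init.py log.
# init.log here is a local sample; in practice redirect the script's output.
cat > init.log <<'EOF'
2019-01-16 14:30:54,319 INFO [msc_cfg_init:55]: Join other nodes to the cluster by executing the following on each of the other nodes:
./msc_cfg_join.py SWMTKN-1-5vu9stavbwh6wnwg7mqpur9wr3mashl31iiespxleyx7cj95n5-58fggpwqfl0egxbhfzcxf4u9z 10.100.1.41
EOF
JOIN_CMD=$(grep '^\./msc_cfg_join\.py' init.log)
echo "$JOIN_CMD"
```

    The extracted command can then be run verbatim on node2 and node3.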

                    [root@mso-node2 ~]# cd /opt/cisco/msc/builds/msc_2.0.1c/prodha/
                    [root@mso-node2 prodha]# ./msc_cfg_join.py SWMTKN-1-5vu9stavbwh6wnwg7mqpur9wr3mashl31iiespxleyx7cj95n5-58fggpwqfl0egxbhfzcxf4u9z 10.100.1.41
                    2019-01-16 14:32:46,304 INFO [msc_cfg_join:37]: This node joined a swarm as a manager.
                    
                    [root@mso-node2 prodha]# 
                

    Then execute the secret string on MSO node3.

                    [root@mso-node3 ~]# cd /opt/cisco/msc/builds/msc_2.0.1c/prodha/
                    [root@mso-node3 prodha]# ./msc_cfg_join.py SWMTKN-1-5vu9stavbwh6wnwg7mqpur9wr3mashl31iiespxleyx7cj95n5-58fggpwqfl0egxbhfzcxf4u9z 10.100.1.41
                    2019-01-16 14:33:09,296 INFO [msc_cfg_join:37]: This node joined a swarm as a manager.
                    
                    [root@mso-node3 prodha]# 
                
  3. Verify the Docker Swarm cluster is up by running docker node ls on any of the nodes.

                    [root@mso-node1 prodha]# docker node ls
                    ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
                    m76g3dwq48smo73mo6unrv1gg *   mso-node1           Ready               Active              Leader              18.03.0-ce
                    uov9zapoj0qtfexrc34kchn5h     mso-node2           Ready               Active              Reachable           18.03.0-ce
                    mqgeds4lxjg72o1dnwmagzfei     mso-node3           Ready               Active              Reachable           18.03.0-ce
                    [root@mso-node1 prodha]# 
                
  4. Bring up MSO by running another provided Python script, msc_deploy.py, on any node.

                    [root@mso-node1 prodha]# ./msc_deploy.py
                    
                    {{ (snip) }}
                    
                    2019-01-16 14:47:13,497 INFO [msc_deploy:219]: Deployement Complete :)
                    [root@mso-node1 prodha]# 
                
  5. Verify all container services are up using docker service ls:

                    [root@mso-node1 prodha]# docker service ls
                    ID                  NAME                      MODE                REPLICAS            IMAGE                            PORTS
                    x3j4o1ku3v8q        msc_auditservice          replicated          1/1                 msc-auditservice:2.0.1c          
                    lyc4et497aru        msc_backupservice         global              3/3                 msc-backupservice:2.0.1c         
                    s5umphhqev0h        msc_cloudsecservice       replicated          1/1                 msc-cloudsecservice:2.0.1c       
                    iqg5011oji69        msc_consistencyservice    replicated          1/1                 msc-consistencyservice:2.0.1c    
                    aepvl6zossh0        msc_executionengine       replicated          1/1                 msc-executionengine:2.0.1c       
                    nbij5ulgeer4        msc_jobschedulerservice   replicated          1/1                 msc-jobschedulerservice:2.0.1c   
                    33np142y3kbl        msc_kong                  global              3/3                 msc-kong:2.0.1c                  
                    bi31pqy6yl82        msc_kongdb                replicated          1/1                 msc-postgres:9.4                 
                    bcvquzninv5e        msc_mongodb1              replicated          1/1                 msc-mongo:3.4                    
                    zndyl4n3vswv        msc_mongodb2              replicated          1/1                 msc-mongo:3.4                    
                    0izgoqjl9z09        msc_mongodb3              replicated          1/1                 msc-mongo:3.4                    
                    9xxde182tdv4        msc_platformservice       global              3/3                 msc-platformservice:2.0.1c       
                    qite2ujm9hu9        msc_schemaservice         global              3/3                 msc-schemaservice:2.0.1c         
                    ktetpybpijjz        msc_siteservice           global              3/3                 msc-siteservice:2.0.1c           
                    v6u62hiwd4du        msc_syncengine            global              3/3                 msc-syncengine:2.0.1c            
                    n0auif5l92uw        msc_ui                    global              3/3                 msc-ui:2.0.1c                    *:80->80/tcp, *:443->443/tcp
                    f9wvb7raazds        msc_userservice           global              3/3                 msc-userservice:2.0.1c           
                    [root@mso-node1 prodha]#
                
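The two verification steps above can be turned into a simple scripted gate, for example by counting Ready swarm nodes. The sketch below parses a saved sample of the `docker node ls` output shown earlier; on a live node you would pipe the command's output directly instead of using a file:

```shell
#!/usr/bin/env bash
# Count Ready swarm nodes from a captured `docker node ls` listing.
# nodes.txt is a local sample of the output shown above.
cat > nodes.txt <<'EOF'
m76g3dwq48smo73mo6unrv1gg *   mso-node1  Ready  Active  Leader     18.03.0-ce
uov9zapoj0qtfexrc34kchn5h     mso-node2  Ready  Active  Reachable  18.03.0-ce
mqgeds4lxjg72o1dnwmagzfei     mso-node3  Ready  Active  Reachable  18.03.0-ce
EOF
READY=$(grep -c ' Ready ' nodes.txt)
echo "Ready nodes: $READY"
if [ "$READY" -eq 3 ]; then echo "Swarm healthy"; else echo "Swarm NOT healthy"; fi
```

A similar check against `docker service ls` (looking for replica counts like 3/3 or 1/1) could confirm the container services before moving on.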

Step 3 - Login to MSO

With the Docker Swarm and container services up, you can now browse to the MSO and log in. The default credentials are admin / we1come!

Upon a successful login, you will see a Welcome banner. Click Get Started.

You will immediately be prompted to change the admin user's password.

After that, you will land at the MSO dashboard, where you're ready to add your various ACI sites.
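The login can also be sanity-checked from the CLI against MSO's REST API. The sketch below only builds and prints the request (a dry run); the /api/v1/auth/login path and JSON payload shape are assumptions based on MSO 2.x and should be verified against your version's API reference, and the IP is this lab's node1 address:

```shell
#!/usr/bin/env bash
# Dry run: build (but do not send) a login request to the MSO API.
# The /api/v1/auth/login path and JSON body are assumptions; verify first.
MSO_IP="10.100.1.41"
LOGIN_CMD="curl -sk https://${MSO_IP}/api/v1/auth/login -H 'Content-Type: application/json' -d '{\"username\":\"admin\",\"password\":\"we1come!\"}'"
echo "$LOGIN_CMD"
```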

Continue to the next section to see how to add ACI sites within the MSO.