Danube Cloud on SmartOS

This tutorial is intended for people who already have a running SmartOS server in production and want to manage it using Danube Cloud.

We will only install the central management server and skip the other services that are normally included in Danube Cloud. You will be able to manage your virtual machines and zones, virtual data centers, and networks, and to schedule snapshots and backups. However, you will miss some cool features like integrated monitoring, DNS, and image management. These services may be installed later, and we may write about their deployment and integration in another tutorial.

In case you don’t have anything installed on your server(s), or can afford to re-install, do yourself a favor and perform a clean install of the latest Danube Cloud release. The install scripts automate most of the steps described in this tutorial and also deploy the additional service VMs (monitoring, DNS, and image server).

Step 1 – Deploy the mgmt VM

Let’s start by logging into your SmartOS server and installing the mgmt VM.

  • Add the Erigones image server to the imgadm sources:
    # imgadm sources -a https://images.danubecloud.org
  • Find out the image UUID of the latest mgmt image:
    # curl -s https://images.danubecloud.org/images/ | json -c 'this.name == "esdc-mgmt-ce" && this.version >= "2.4.0"'
  • Import the latest mgmt image. Please make sure that you use mgmt VM version >= 2.4:
    # imgadm import 8dd64d0e-cbff-4495-b83e-c2ca4f3b167c
  • Generate and save some passwords (a sketch for generating random values follows this list):
    # export rabbitmq_password="test123"
    # export redis_password="test456"
    # export pgsql_esdc_password="test789"
    # export pgsql_pdns_password="test555"
  • Create mgmt VM with an IP address from the admin network. Update the missing properties (especially the passwords):
    # cat << EOF > mgmt.json
    {
      "hostname": "mgmt01.example.com",
      "alias": "mgmt01.example.com",
      "brand": "kvm",
      "vcpus": 1,
      "cpu_shares": 100,
      "cpu_cap": 150,
      "zfs_io_priority": 100,
      "ram": 1024,
      "max_physical_memory": 1280,
      "max_swap": 2048,
      "owner_uuid": "1",
      "vnc_port": 30001,
      "qemu_extra_opts": "-chardev socket,path=/tmp/vm.qga,server,nowait,id=qga0 -device virtio-serial -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0",
      "disks": [
        {
          "boot": true,
          "model": "virtio",
          "media": "disk",
          "image_size": 10240,
          "image_uuid": "8dd64d0e-cbff-4495-b83e-c2ca4f3b167c",
          "zpool": "zones",
          "size": 10240,
          "compression": "lz4",
          "refreservation": 10240
        }
      ],
      "resolvers": [
        "8.8.8.8",
        "8.8.4.4"
      ],
      "nics": [
        {
          "vlan_id": 0,
          "nic_tag": "admin",
          "gateway": "<your network gateway>",
          "netmask": "<your network mask>",
          "ip": "<your VM IP address>",
          "network_uuid": "d42bc4c3-ba17-43ee-a02a-74e667bd41fa",
          "model": "virtio",
          "primary": true
        }
      ],
      "customer_metadata": {
        "org.erigones:rabbitmq_password": "$rabbitmq_password",
        "org.erigones:redis_password": "$redis_password",
        "org.erigones:pgsql_esdc_password": "$pgsql_esdc_password",
        "org.erigones:pgsql_pdns_password": "$pgsql_pdns_password",
        "org.erigones:esdc_admin_email": "yourmail@example.com",
        "root_authorized_keys": "ssh-rsa your-compute-node-SSH-key\nssh-rsa your-SSH key"
      },
      "internal_metadata": {
        "installed": true,
        "alias": "mgmt01",
        "ostype": 1
      }
    }
    EOF
    
    # vmadm create -f mgmt.json
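
A note on the passwords exported above: instead of throw-away literals like test123, you can generate random values before creating mgmt.json (the heredoc expands the shell variables, so they must be set first). A minimal sketch, assuming OpenSSL is available in the global zone:

    # export rabbitmq_password="$(openssl rand -hex 16)"
    # export redis_password="$(openssl rand -hex 16)"
    # export pgsql_esdc_password="$(openssl rand -hex 16)"
    # export pgsql_pdns_password="$(openssl rand -hex 16)"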

The mgmt VM should start, and after a while, you should be able to SSH into it. You may want to check the logs in /opt/erigones/var/log. The erigonesd service inside the mgmt VM will now wait for a compute node to register.
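
For a quick sanity check that the VM was created and booted (using the alias from the manifest above):

    # vmadm list -o uuid,alias,state | grep mgmt01   # state should be "running"
    # ssh root@<your VM IP address>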

The Danube Cloud web management portal will be available at https://<your VM IP address>.

Step 2 – Install erigonesd on the compute node

A few things need to be installed and configured on your SmartOS server. In particular, erigonesd must run on your compute node. This is already described on our wiki page, so let’s quickly summarize the required steps:

  • Make sure you have generated an SSH key on your compute node.
  • Bootstrap pkgin on SmartOS and install a few packages (gcc49 gmake autoconf git-base python27 py27-virtualenv). You can use our prepared local archive in case you haven’t installed pkgsrc yet.
  • Create the required datasets: zones/iso, zones/backups/ds, zones/backups/file
  • Download System Rescue CD into /iso/rescuecd.iso (a sketch of these steps follows this list)
  • Install Danube Cloud erigonesd:
    # export ERIGONES_HOME=/opt/erigones
    # mkdir -p  $ERIGONES_HOME
    # git clone https://github.com/erigones/esdc-ce.git  $ERIGONES_HOME
    # $ERIGONES_HOME/bin/ctl.sh init_envs   # Initialize Python environments
    # $ERIGONES_HOME/bin/ctl.sh deploy --node  # Install all required Python dependencies into Python envs
  • Configure erigonesd. Edit /opt/erigones/core/celery/local_config.py and set the following values, replacing the placeholders with the mgmt VM’s IP address and hostname and the passwords generated in step 1:
    BROKER_URL = 'amqp://esdc:<rabbitmq_password>@<mgmt-vm-ip>:5672/erigones'
    CELERY_RESULT_BACKEND = 'redis://:<redis_password>@<mgmt-vm-ip>:6379/0'
    ERIGONES_MGMT_WORKERS = ('mgmt@<mgmt-hostname>',)
  • Run erigonesd:
    # svccfg import /opt/erigones/etc/init.d/erigonesd.xml
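
The package, dataset, and ISO bullets above might look like this in practice. A sketch only: zones/iso is assumed to be mounted at /iso (adjust the mountpoint if your layout differs), and the System Rescue CD download URL is left as a placeholder:

    # pkgin -y update
    # pkgin -y install gcc49 gmake autoconf git-base python27 py27-virtualenv
    # zfs create -o mountpoint=/iso zones/iso
    # zfs create -p zones/backups/ds
    # zfs create zones/backups/file
    # curl -L -o /iso/rescuecd.iso <system-rescue-cd-url>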

Please wait a few minutes now. The compute node should appear in the central web management portal, and the admin virtual DC should initialize itself. If something goes wrong, check the logs in /opt/erigones/var/log/fast.log on the compute node and /opt/erigones/var/log/mgmt.log in the mgmt VM.
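
If the node does not show up, standard SMF commands can help verify that erigonesd is actually running on the compute node (assuming the imported manifest registers the service under the name erigonesd):

    # svcs erigonesd        # the state should be "online"
    # svcs -xv erigonesd    # explains why a service is not running
    # tail -f /opt/erigones/var/log/fast.log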

If everything goes well, you will see your compute node in the web management portal along with all your imported images.

Step 3 – Discover your servers

In order to manage your existing virtual machines and zones, you have to do a few things.

  • First, you have to download and configure the es command line tool.
  • Create all networks you are currently using on the server:
    # es create /network/my-net1 -network 192.168.3.0 -netmask 255.255.255.0 -gateway 192.168.3.1 -vlan_id 0 -nic_tag admin -dc_bound false
    • Note the network UUID in the response.
  • Add the network to a virtual datacenter where all your VMs are going to be imported (in this case the “main” DC):
    # es create /dc/main/network/my-net1
  • Add the network_uuid to the corresponding NICs in your VM’s manifests (a sketch for finding the MAC addresses follows this list):
    # echo '{"update_nics": [{"mac": "c2:27:e0:a7:82:60", "network_uuid": "772a9fee-d82c-4e90-bd4b-965d71cb3370"}]}' | vmadm update <vm-uuid>
  • Harvest your VMs into a virtual datacenter of your choice (in this case the “main” DC):
    # es create /node/cn1/vm-harvest -dc main
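
To find the MAC addresses needed for the update_nics step above, you can inspect a VM’s manifest with the json tool used earlier:

    # vmadm get <vm-uuid> | json nics   # prints the mac, ip and nic_tag of each NIC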

A VM will be imported into Danube Cloud only if all the networks and images it uses already exist in the database.

Afterthoughts

Although it is possible to install all parts of Danube Cloud on a running SmartOS system, it is a labor-intensive task. The official Danube Cloud USB image includes an installer and scripts that automate all of this.

Daniel Kontšek
CTO, Danube Cloud
