Infrastructure Virtualization Project



Objective

This is a project to update and modernize the infrastructure that keeps the Amahi web sites and services running.

The idea is to provide easier and more sustainable management of the infrastructure, leaving the team more time to devote to moving the project forward.

NOTE: this project is not about running the Amahi platform software on virtual servers; for that, see the separate page on Virtualization.

Goals

We have multiple goals:

  • run some of our internal build machines in a reliable, efficient way, so that we have consistent and up-to-date builds/releases
  • keep consistent and recent backups so that things are recoverable
  • run some testing of Amahi apps more easily and efficiently
  • test new features in an isolated manner

Known Issues

  • Cannot use multiple SSH keys via Dashboard

Hardware

Dell Rack Server

  • Dual Xeon E5450 3.0 GHz Processors
  • 32GB PC2-5300 RAM (8x4)
  • Two Gigabit Network Interfaces
  • KVM Network Interface
  • RAID Controller
  • Four Quick Swap Drive Bays
    • 1 - 850GB (OS and Backup)
    • 2 - 2 TB (Images and Backup)
    • 3 - 120GB SSD (VMs)
    • 4 - Empty

Software

  • CentOS 7.2 x86_64 (Minimal)
  • OpenStack Mitaka Release

Setup

  • Download the CentOS 7.2 x86_64 minimal image and install it, following the CentOS 7.2 Netinstall Guide tutorial.
  • Configure the FQDN (/etc/hosts and /etc/hostname)
  • Add users and their public SSH keys for SSH login
  • Disable SSH password authentication and root login (a sketch of these host-prep steps appears after this list)
  • Follow RDO Quickstart for the OpenStack installation guidance.
  • At the packstack --allinone step, follow the Neutron with existing network guidance instead.
  • Refer to the floating IP range guidance when setting up floating IP addresses on the external network.
  • Extend cinder-volumes beyond the initial 20GB so additional volumes can be created and attached to instances.
    • Follow the OpenStack Increase Volume Capacity tutorial
    • Create a /usr/bin/ext-cinder-vol script:
    • #!/bin/bash
      # Re-attach the loop device backing the extended cinder-volumes file after a reboot
      /usr/sbin/losetup -f /var/lib/cinder/cinder-volumes-ext
      # Restart the Cinder services so the extra capacity is picked up
      /usr/bin/systemctl restart openstack-cinder-volume
      /usr/bin/systemctl restart openstack-cinder-api
      /usr/bin/systemctl restart openstack-cinder-scheduler
    • Make the script executable (chmod +x /usr/bin/ext-cinder-vol) and add it to the root crontab:
    • @reboot /usr/bin/ext-cinder-vol
    • Results in 130GB of additional space for volumes.
    • Total volume space available is now 150GB.
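
For reference, a minimal sketch of the host-preparation steps above (FQDN, SSH keys, disabling password and root logins), assuming a hypothetical hostname of openstack.example.org, IP 192.168.1.10, and user amahi-admin:

#!/bin/bash
# Set the FQDN (CentOS 7); hostname and IP are hypothetical examples
hostnamectl set-hostname openstack.example.org
echo "192.168.1.10  openstack.example.org openstack" >> /etc/hosts

# Add an admin user and install their public SSH key
useradd amahi-admin
mkdir -p /home/amahi-admin/.ssh
cat amahi-admin.pub >> /home/amahi-admin/.ssh/authorized_keys
chown -R amahi-admin:amahi-admin /home/amahi-admin/.ssh
chmod 700 /home/amahi-admin/.ssh
chmod 600 /home/amahi-admin/.ssh/authorized_keys

# Disable password and root SSH logins, then restart sshd
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart sshd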

Naming Convention

  • Instances: os-function (e.g. f24-bot, f24-repo, f24-dev, etc.)
  • Images: os-type (e.g. f24-cd, f24-dvd, etc.)
  • Snapshots: os-function-ss# (e.g. f24-repo-ss1, f24-bot-ss2, etc.)
  • Volumes: instance-vol (e.g. f24-repo-vol, mirrormgr-vol, dl-master-vol, etc.)

Build Images

This section outlines how to build OpenStack images using Proxmox VE.

  • Log into Proxmox VE web UI
  • Create a VM or clone an existing one
    • If creating a VM, install the OS
    • If using a clone, start the VM
  • Open a console window for the VM
    • Log in and, as root, do the following
      • dd if=/dev/zero of=/mytempfile bs=1M (zeroes out unused space; it stops with a "no space left on device" error, which is expected)
      • rm -f /mytempfile
    • Shut down the VM
    • Log into Proxmox VE via SSH and execute the following from the command line
      • Navigate to /var/lib/vz/images/### (where ### is the VM's ID number)
      • mv original_image.qcow2 original_image.qcow2_backup (rename the original image)
      • qemu-img convert -O qcow2 original_image.qcow2_backup original_image.qcow2
      • Copy the new .qcow2 image to a safe location for uploading into OpenStack
      • Remove the .qcow2_backup file
      • Delete the VM from the Proxmox VE web UI
  • Use WinSCP or a similar program to copy the .qcow2 image to the client machine
  • Upload it into OpenStack via the web UI (or from the command line, as sketched below)


REF: Reclaim disk space from .qcow2 or .vmdk image
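
If the web UI upload is inconvenient, the image can also be uploaded from the command line. A minimal sketch, assuming the glance command-line client from this install and a hypothetical image name:

source keystonerc_admin
# Name, disk format, and file are examples; adjust to the image being uploaded
glance image-create --name "f24-dvd" --disk-format qcow2 --container-format bare --file original_image.qcow2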

Create Instance

This is a nice straightforward tutorial on Creating an instance.

Backup

Last Backup completed: 25 Aug 2016

  • Backup scripts have been created to synchronize instances, volumes, and snapshots to a secondary drive on demand (a minimal sketch follows).
  • Monthly backups are recommended in case of catastrophic failure.
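
The exact scripts are not reproduced here; this is only a minimal sketch of such an on-demand sync, assuming Glance images under /var/lib/glance, Nova instances under /var/lib/nova/instances, Cinder data under /var/lib/cinder, and a secondary drive mounted at /mnt/backup (all paths hypothetical; the real scripts may differ):

#!/bin/bash
# Sync images, instances, and cinder data to the secondary drive (paths are assumptions)
rsync -aH --delete /var/lib/glance/images/  /mnt/backup/images/
rsync -aH --delete /var/lib/nova/instances/ /mnt/backup/instances/
rsync -aH --delete /var/lib/cinder/         /mnt/backup/cinder/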

Tips

Network Issues

If the DNS server is changed or networking appears inoperable, check the following files to ensure the DNS settings are correct (typical entries are shown below):

  • /etc/sysconfig/network-scripts/ifcfg-br-ex
  • /etc/resolv.conf
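
Typical entries look something like this (the nameserver address is only an example):

# /etc/sysconfig/network-scripts/ifcfg-br-ex
DNS1=8.8.8.8

# /etc/resolv.conf
nameserver 8.8.8.8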


Next, restart the Neutron network services:

service neutron-server restart
service neutron-dhcp-agent restart
service neutron-l3-agent restart
service neutron-metadata-agent restart
service neutron-openvswitch-agent restart
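
After the restarts, the agents can be checked; all should report as alive before moving on:

source keystonerc_admin
neutron agent-list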

Volume Issues

When a volume becomes unexpectedly detached and/or shows an error state, the state can be reset:

source keystonerc_admin
cinder reset-state volume_id

or use the web UI.

Also refer to Amahi Bug #2051.

Update/Reboot/Shutdown Process

  • Shutdown/Disconnect
    • Stop all instances via SSH (see the CLI sketch after this list)
    • Detach volumes from instances
    • Verify all volumes are detached and all instances are stopped
    • Perform Update/Reboot
  • Once the system has rebooted:
    • Verify the cinder-volumes LVM volume group is operational
    • Reattach volumes to instances
    • Start all needed instances
    • Verify all instances are operational
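
A minimal command-line sketch of the shutdown half of this process, using the same keystonerc_admin credentials as elsewhere on this page (instance and volume names are hypothetical):

source keystonerc_admin
nova list                               # note which instances are running
nova stop f24-repo                      # stop each instance
nova volume-detach f24-repo VOLUME_ID   # detach its volume(s)
cinder list                             # verify all volumes show "available"
nova list                               # verify all instances show SHUTOFF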

Create Static IP for Instance

Setting a static IP for an instance ensures its internal IP address remains the same throughout its life cycle. The floating IP address can easily be added afterwards (an example is given after the steps below).

  • As the root user, execute source keystonerc_admin
  • Use the following to reserve the IP address:
neutron port-create internal --fixed-ip subnet_id=internal_subnet,ip_address=x.x.x.x
  • Create and boot the instance via the command line rather than the web UI:
nova boot --image NAME_OF_IMAGE --flavor amahi.small --nic port-id=PORT_ID_FROM_ABOVE_COMMAND_RESULTS VM_NAME
NOTE: If the image name has spaces, enclose it in double quotes.
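
To associate the floating IP afterwards, something like the following should work (the VM name and address are examples):

nova floating-ip-associate VM_NAME x.x.x.x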

Ref: Add Multiple Specific IPs to Instance

Fedora Cloud Images

See Launch Fedora Cloud images for guidance.

Miscellaneous

Refer to Amahi Bug #2050 for some OpenStack command-line syntax.