Infrastructure Virtualization Project

From Amahi Wiki
Revision as of 12:30, 1 November 2014 by Bigfoot65 (talk | contribs) (→‎Setup)
Work In Progress
This article is currently undergoing major expansion or restructuring. You are welcome to assist by editing it as well. If this article has not been edited in several days, please remove this template.


Objective

This is a project to update and modernize the infrastructure that keeps the Amahi web sites and services running.

The idea is to make the infrastructure easier and more sustainable to manage, freeing up more of the team's time to move the project forward.

Note: this project is not about running Amahi platform software on virtual servers, etc. For that there is a separate page on Virtualization.

Goals

We have multiple goals:

  • run some of our internal build machines in a reliable, efficient way, so that we have consistent and up-to-date builds/releases
  • keep consistent, recent backups so that systems are recoverable
  • run some testing of Amahi apps more easily and efficiently
  • test new features in an isolated manner

...

Hardware

Dell Rack Server

  • Dual Xeon E5450 3.0 GHz Processors
  • 8GB PC2-5300 RAM (8x1)
  • Two Gigabit Network Interfaces
  • KVM Network Interface
  • RAID Controller
  • Four Quick Swap Drive Bays
    • 1 - 1 TB (OS and Backup)
    • 2 - 120GB SSD (VMs)
    • 3 - Empty
    • 4 - Empty

Software

  • CentOS 7 x86_64 (Minimal)
  • OpenStack Juno Release

Setup

  • Download and install CentOS 7 x86_64 minimal image
  • Configure FQDN
  • Manually configure networking
  • Add users and private keys for SSH login
  • Disable SSH password and root login
  • Enable EPEL Repo
   yum install epel-release

or

   rpm -Uvh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
  • Perform OS update
   yum -y update
  • Install OpenStack following RDO Quickstart instructions (run packstack --allinone as root)
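As a rough sketch, the setup steps above might look like the following on a fresh CentOS 7 minimal install. The hostname is a placeholder, the admin user's public key is assumed to already be in place, and the RDO release RPM URL is the one the RDO Quickstart used at the time:

```shell
# Set the FQDN (placeholder hostname)
hostnamectl set-hostname vhost.example.com

# Harden SSH: key-only login, no root login
# (assumes the admin user's public key is already in ~/.ssh/authorized_keys)
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart sshd

# Enable EPEL and update the OS
yum -y install epel-release
yum -y update

# Install OpenStack Juno via RDO packstack (run as root)
yum -y install https://rdo.fedorapeople.org/rdo-release.rpm
yum -y install openstack-packstack
packstack --allinone
```

The packstack run can take a long time; it writes an answer file that can be reused for later reconfiguration.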

Notes

  • The floating IP setup may not work with IPs that are not externally routed. This may be why the RDO setup creates a 179. "public" network by default; I deleted that network.
  • The external network needs to be flagged as external. This cannot be done through the UI, but I am told the Juno release supports editing network attributes, so the external attribute can be set to Yes. Once that is done, the system may then allow floating IPs in that network even if the IP range is not externally routable.
  • Resizing an instance does not seem to work reliably. I believe the resize is "queued" so that it is applied on the next reboot; however, when I tried to resize the CentOS box to m1.tiny, the resize was queued but never applied. This is not a deal breaker, but worth noting.
  • The next step is to be able to bridge the connections reliably. There is a link at the end of the RDO Quickstart page on how to use it with your existing network. When I followed it (editing config files in a potentially messy way), it worked, but a reboot would not bring the network up and a network restart was needed; this may be a CentOS issue. Getting this right is a must in order to funnel traffic from outside (the floating IPs) to the inside VMs.
  • We basically need to understand what it takes to get an image created and "seasoned", and how to maintain these images over long periods. I think the main workhorse is the qcow2 toolset.
  • These images are like "snapshots" in some way, but a snapshot is frozen and cannot be tweaked.
  • Long term we want to make images like this for testing, e.g. an Amahi 7 image that is bootable and a plain install. Another example would be a fully up-to-date Amahi 7 image, etc.
  • So while the images themselves are frozen in time, they are "alive" in the sense that one can take a copy and evolve it into a new version of the image.
  • With two 2 GB RAM instances running, the host has very little RAM left, which means the control web app and its components take a substantial amount of memory, which is a pain. One possibility is to run the control part of the node in a separate VM somewhere and leave this host to do pure host service. It may still be the case that the control VM would require a lot of resources, but we don't know how much.
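For the "flagged as external" note above, the Juno-era neutron command line (rather than the UI) can set that attribute. This is a sketch; "extnet" is a placeholder network name:

```shell
# Mark an existing network as external (Juno-era neutron CLI)
neutron net-update extnet --router:external=True

# Verify that the attribute took effect
neutron net-show extnet
```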
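For the bridging note, the RDO "existing network" instructions amount to moving the host's IP configuration onto an OVS bridge (br-ex) and enslaving the physical NIC to it. A sketch of the two config files, with placeholder device names and addresses:

```shell
# /etc/sysconfig/network-scripts/ifcfg-br-ex  (placeholder addresses)
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-enp2s0  (physical NIC enslaved to the bridge)
DEVICE=enp2s0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
```

If the network does not come up cleanly after a reboot, as noted above, a `systemctl restart network` may be needed.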
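For the image-maintenance note, the qcow2 workhorse is `qemu-img`. A sketch of the copy-then-evolve workflow described above, with placeholder file names:

```shell
# Take a working copy of a frozen base image, then evolve the copy
cp amahi7-base.qcow2 amahi7-next.qcow2

# Inspect the image (format, virtual size, allocated size)
qemu-img info amahi7-next.qcow2

# Grow the virtual disk by 10 GB; the guest filesystem still has to be
# resized separately, e.g. from inside the VM
qemu-img resize amahi7-next.qcow2 +10G

# Compress/compact when publishing a new version of the image
qemu-img convert -O qcow2 -c amahi7-next.qcow2 amahi7-v2.qcow2
```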