Development notes for Amahi 11 on Fedora 26.
- 1 Overview
- 2 Important Terms
- 3 Architecture Overview
- 4 How the app installation works
- 5 How to add a new app?
- 6 Brief Overview of Code
- 7 Future Work
Amahi supports one-click installation of many different kinds of apps. Before an app can be added to Amahi, it has to be packaged properly. Each app might require a completely different environment to run, and it might not be possible to provide that environment to every app because there can be dependency conflicts; for example, one app might require PHP 5 while another requires PHP 7. We need a mechanism that lets us add any app to Amahi irrespective of the technology stack it requires and without version conflicts. Using containers is one way of addressing this problem.
Before going further in this document, understanding the following terms might be useful:
- What is a Container?
- Image vs Container
- amahi.net -> the domain name of the Amahi HDA used in all examples here
The idea of the implementation is very similar to what is shown in the image above. Each app runs as a container, and each container exposes its own port or ports, as can be seen attached to the apps above. We can map a host system port to the `exposed port` in the container, and then, using a reverse proxy, connect different subdomains to different container apps.
For example, in the diagram above, ports 35001 to 35008 are ports on the host machine that are mapped to the containers (App1 to App4). Each app runs different services on different ports. For the apps to be accessible from systems outside the host, we have to map the ports in the container to those on the host.
How it works
Let's assume we are trying to run multiple container apps; call them App1, App2, App3 and App4.
NOTE: Each app has its own networking stack and hence can run a service on any port it wants.
App1 - 9000, 80
App2 - 8000, 80
App3 - 5000, 80
App4 - 22, 80
Each app has a web server running on port 80 (let’s assume) and they have some other service running on different ports. There can be multiple services running on different ports. For example in the above figure each App has two services running on two ports.
We want the web interface of each app to be accessible to a user. Assume a user is running their HDA at IP address 192.168.56.105 with domain amahi.net.
They want to access App1 at app1.amahi.net, App2 at app2.amahi.net and so on. Inside the HDA each app is running inside an isolated environment with their own networking stack and hence they are not reachable from the outside world. So how do we work around this problem? The answer to that is reverse proxy.
We have Apache running on our HDA, which handles all requests coming to *.amahi.net. So for container apps we bind their web server port to a port on the host.
App1 -> 80 binds to -> 35002 on host
App2 -> 80 binds to -> 35004 on host
App3 -> 80 binds to -> 35006 on host
App4 -> 80 binds to -> 35008 on host
This “binding” is similar to port forwarding. Docker by default runs these containers on a separate network and each of those containers have an IP address (local, accessible only inside the host machine) and run their own networking stack.
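As a sketch, that binding can be expressed as a docker-compose fragment; the service and image names here are placeholders, not real Amahi apps:

```yaml
# Hypothetical fragment: publish App1's internal port 80 on host port 35002.
app1:
  image: 'app1-image'   # placeholder image name
  restart: unless-stopped
  ports:
    - '35002:80'        # host_port:container_port
```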
Once the port binding is complete, accessing http://192.168.56.105:35002 should give us the web interface of App1, and so on. But we want the app to be reachable at app1.amahi.net for our users. Earlier, the HDA used to create a virtual host file for each app, which served the relevant app based on the URL. For container apps, the present approach is this:
The Apache instance running on the host reverse proxies all requests coming to http://app1.amahi.net to http://localhost:35002. This way the URL is http://app1.amahi.net for the user, and they can still access the web interface of the app. Note that Apache is running on port 80 on the host.
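Assuming the hypothetical host and ports from the example above, both paths can be sanity-checked from another machine on the network:

```shell
# Directly against the published container port (bypasses the proxy):
curl -I http://192.168.56.105:35002

# Through Apache's reverse proxy, via the app subdomain:
curl -I http://app1.amahi.net
```

Both requests should be answered by the same app's web server.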
How the app installation works
Assuming an official container image for an app is available, we can easily integrate it into Amahi. If no official image is available, we might have to build our own, as I did for osticket and coppermine.
Building images can be tricky and does require some knowledge of the app itself (for example, which PHP libraries to install). There is a well-defined procedure for building images for node and rails apps as well. Image size is also a major issue; to reduce it, I would suggest looking up the following articles:
Once the image is available we can write an install script. A sample install script for gitlab is shown below.
```shell
cat > docker-compose.yml << 'EOF'
gitlab-container:
  image: 'gitlab/gitlab-ce:latest'
  container_name: "APP_IDENTIFIER"
  restart: unless-stopped
  hostname: 'APP_HOSTNAME'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://APP_HOSTNAME:HOST_PORT'
      gitlab_rails['gitlab_shell_ssh_port'] = 2224
  ports:
    - 'HOST_PORT:HOST_PORT'
    - '2224:22'
  volumes:
    - './srv/gitlab/config:/etc/gitlab'
    - './srv/gitlab/logs:/var/log/gitlab'
    - './srv/gitlab/data:/var/opt/gitlab'
EOF
docker-compose up -d
```
This script creates a docker-compose.yml file and then runs the `docker-compose up -d` command, which creates and starts the container.
The `yml` file can take many different parameters; for those we might have to refer to the docker and docker-compose documentation. `restart: unless-stopped` is used to handle container failover: if a container crashes for some reason, it will be restarted automatically.
Understanding the script
The container script for each app will always have these parameters:
```yaml
container_name: "APP_IDENTIFIER"
restart: unless-stopped
ports:
  - 'HOST_PORT:xyz'   # Not required for apps which are not webapps
# xyz = any port inside the container
# HOST_PORT is the port on the host machine. Two different containers cannot have the same
# value for HOST_PORT, but they can have the same value for xyz.
```
Most of the configuration related to the container is written in the docker-compose.yml file, but some data has to be filled in at runtime, like HOST_PORT, which the reverse proxy uses to reach the container. HOST_PORT is 35000 + app_id and has to be derived during app installation. Similarly, APP_IDENTIFIER and APP_HOSTNAME are derived at runtime. The code below is used to substitute that data into the install script at runtime.
```ruby
# app/models/app.rb, lines 322 to 326
install_script = installer.install_script
install_script = install_script.gsub(/HOST_PORT/, (BASE_PORT+self.id).to_s)
install_script = install_script.gsub(/WEBAPP_PATH/, webapp_path)
install_script = install_script.gsub(/APP_IDENTIFIER/, identifier)
install_script = install_script.gsub(/APP_HOSTNAME/, app_host)
```
Sample uninstall script
```shell
docker-compose stop
docker-compose rm -f   # Not removing the image, just stopping and removing the container.
```
For most containers the above uninstall script works fine: it stops the running container and removes it. Note that this does not delete any volumes attached to the container (persistent storage; refer to the docker documentation for details on volumes). So if you add a volume during installation (as we did in the gitlab example above), you have to remove it here during uninstallation. For example, to remove gitlab completely, along with all files created by the gitlab container, the uninstall script would look something like this:
```shell
docker-compose stop
docker-compose rm -f
rm -rf srv   # Removes the srv folder, which holds the persistent files for the gitlab container
```
This behavior might not be intended for all applications. Right now I haven't removed static files for any apps that I have added.
Why is it needed?
Please refer to #Architecture Overview to understand why it is needed.
For the reverse proxy I have added a new `app-container.conf` file, which can be seen below. The `APP_PORT` is substituted at runtime.
```apache
<VirtualHost *:80>
    ServerName HDA_APP_NAME
    ServerAlias HDA_APP_NAME.HDA_DOMAIN APP_ALIASES
    APP_CUSTOM_OPTIONS
    ProxyPreserveHost On
    ProxyPass / http://localhost:APP_PORT/
    ProxyPassReverse / http://localhost:APP_PORT/
    ErrorLog APP_ROOT_DIR/logs/error_log
    CustomLog APP_ROOT_DIR/logs/access_log combined env=!dontlog
</VirtualHost>
```
The APP_PORT part is derived from the app id. After installation the app has an id in the database, and APP_PORT will be 35000 + app_id.
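As a quick sanity check of the port arithmetic (BASE_PORT is 35000; the app id below is a made-up example):

```ruby
# Mirror of the derivation in app/models/app.rb: host port = BASE_PORT + app id.
BASE_PORT = 35000

app_id = 2                      # hypothetical id assigned by the database
app_port = BASE_PORT + app_id   # the port Apache will reverse proxy to
puts app_port                   # 35002
```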
Note that we can run any kind of app in a container: it might be a headless app or it might be a webapp. In the case of web applications we need to define an external port (the mapped port on the host; refer to the Architecture Overview section) to which the app is bound, and then reverse proxy to reach it. "APP_PORT" is essentially that port. For apps which don't need a web interface we might not use this file at all.
NOTE: HOST_PORT and APP_PORT are basically the same thing, but in the code they are used in two different places with different names to match the context. In the reverse-proxy case, the APP_PORT variable appears in `app-container.conf`; there we want to reverse proxy to an app running on some port, so it is named APP_PORT. HOST_PORT is used in `app/models/app.rb`, where it represents the port to be used on the host machine, hence the name. For any given app, HOST_PORT = APP_PORT.
How to add a new app?
Taking example of Hydra
See the usage as mentioned by the maintainer:
```shell
docker create --name=hydra \
  -v <path to data>:/config \
  -v <nzb download>:/downloads \
  -e PGID=<gid> -e PUID=<uid> \
  -e TZ=<timezone> \
  -p 5075:5075 linuxserver/hydra
```
Convert the above to a docker-compose file. Ignore `-e PGID=<gid> -e PUID=<uid>`; even though it's relevant, it is out of the scope of this discussion.
```yaml
hydra-container:
  image: 'docker.io/linuxserver/hydra'
  container_name: "hydra"
  restart: unless-stopped
  ports:
    - '5075:5075'
  volumes:
    - './config:/config'
    - './downloads:/downloads'
    - '/etc/localtime:/etc/localtime:ro'
```
Understanding the volume mounts:
- `./config:/config` -> As seen in the `docker create` command, the `-v` flag specifies the volumes. The path to the data we provide is a relative path; every installed app has a path in which its install script runs, so the "config" and "downloads" folders will be created in that path.
- `/etc/localtime:/etc/localtime:ro` -> This makes sure the container uses the same time as the host system. To avoid this mount we can instead set `TZ=<timezone>` under `environment:` in the docker-compose file.
NOTE: Adding apps might require knowledge of docker and docker-compose; discussing those is out of the scope of this documentation, though the links mentioned below might be useful.
Apps run as containers which are managed by docker. If the docker daemon is shut down or stopped, the apps also stop. If a container crashes for some reason, it has to be restarted; using restart policies we can manage this.
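For reference, these are the restart policies docker-compose accepts; the service below is a placeholder used only to show the syntax:

```yaml
some-app:
  image: 'some-image'       # placeholder
  restart: unless-stopped   # restart on crash, stay down after a manual stop
  # Other valid values:
  #   "no"        -> never restart automatically (the default)
  #   always      -> always restart, including after the docker daemon comes back up
  #   on-failure  -> restart only if the container exits with a non-zero status
```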
Once we are done making the docker-compose file, we can test it on our local system to see whether it works properly. Once that's done, we can go ahead and add the app to amahi.org.
For adding to amahi.org some modifications have to be made. The final changes can be seen below; notice the `APP_IDENTIFIER` and `HOST_PORT` (for more info on these, refer to the "Understanding the script" section).
```yaml
hydra-container:
  image: 'docker.io/linuxserver/hydra'
  container_name: "APP_IDENTIFIER"
  restart: unless-stopped
  ports:
    - 'HOST_PORT:5075'
  volumes:
    - './config:/config'
    - './downloads:/downloads'
    - '/etc/localtime:/etc/localtime:ro'
```
The final install and uninstall scripts to be added on amahi.org will be:
```shell
cat > docker-compose.yml << 'EOF'
hydra-container:
  image: 'docker.io/linuxserver/hydra'
  container_name: "APP_IDENTIFIER"
  restart: unless-stopped
  ports:
    - 'HOST_PORT:5075'
  volumes:
    - './config:/config'
    - './downloads:/downloads'
    - '/etc/localtime:/etc/localtime:ro'
EOF
docker-compose up -d
```
```shell
docker-compose stop
docker-compose rm -f
rm -rf config      # Use this if you want all files to be removed after uninstall
rm -rf downloads   # Use this if you want all files to be removed after uninstall
```
For more examples of install scripts and docker-compose files for different apps, please refer to the amahi_images repo or the table given below.
Brief Overview of Code
The installation process remains the same and follows the same flow of control as before.
`script/install-ap` calls the `install_bg` function in `app.rb`, which takes care of the app installation.
In the `install_bg` function we check whether the app type is container; if so, the rest is managed by the `install_container_app` function. If it's not a container-type app, see the else branch, where we simply create a `webapp` as for a normal app.
```ruby
if installer.kind.include? "container"
  # Try installing the container app
  # Mark the installation as failed if it returns false
  # For container apps, webapp creation is handled inside the "install_container_app" function
  if !install_container_app(installer, webapp_path, identifier)
    self.install_status = 999
  end
else
  # Create the webapp normally if it's not a container app
  self.create_webapp(:name => name, :path => webapp_path, :deletable => false,
                     :custom_options => installer.webapp_custom_options, :kind => installer.kind)
end
```
For container-type apps the `webapp` creation is handled by the `install_container_app` function itself. Note how we check whether an app is of container type: `if installer.kind.include? "container"`. The idea is that we might need to handle the installation of different kinds of container apps differently. So on amahi.org, when adding container apps, the kind field should be something like `container-php5`, `container-python`, `container-custom`, etc. To check whether an app is a container app, we therefore check whether `installer.kind` includes the "container" pattern.
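A small illustration of that check, using kind strings in the convention described above:

```ruby
# Any kind value that embeds the "container" pattern is treated as a container app.
kinds = ["container-php5", "container-python", "rails", "php"]

container_kinds = kinds.select { |kind| kind.include? "container" }
puts container_kinds.inspect   # ["container-php5", "container-python"]
```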
Cleanup - too many images
Presently, when a containerized app is uninstalled, the container is stopped and removed but the image used to run it stays (note the difference between a container and an image). I am not removing the image since images are generally large and the user may decide to reinstall the app; this needs more thought. Adding the feature of deleting the image after uninstallation is essentially one line of code. Going forward we might have to come up with a mechanism for clearing images which haven't been used for a long time.
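A sketch of what that cleanup could look like; these are standard docker commands, but hooking them into the uninstall flow is still future work (the image name below is just an example):

```shell
# Remove one specific image after uninstalling its app:
docker rmi gitlab/gitlab-ce:latest

# Or periodically remove every image not referenced by any container:
docker image prune -a
```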
Reducing Download Size
We can't build and maintain containers for all apps. Vendors provide official containers, but some of them are huge. To fix this problem we can reduce the download to roughly that of a normal installation by running an apt-cache, gem server or npm server on the HDA itself.
The idea is the following: instead of building images on our server, we push the Dockerfile to the client and the client builds the docker image. While building the image, the client downloads the required packages, gems, node dependencies and so on. If we have a mechanism to cache these downloads so that all subsequent builds can reuse them, we can save a lot of Internet usage. One possible way of doing this is running an apt-cache, gem server and npm server on the client itself.
With containers, updates can be really easy, and we can support single-click updates of apps. One possible implementation is an update button which, on click, stops the running container, downloads the latest image of the app (i.e. the updated app) and restarts the container. From the docker perspective implementing this is not a big deal, but it can get tricky: if there are major upgrades in the code, the vendor must take care of migrating data from the older version to the newer one in their image. That migration problem exists without containers as well, though.
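Under stated assumptions (the app's docker-compose.yml is in the current directory and the vendor publishes updated images under the same tag), the update button could essentially run:

```shell
# Pull the newest image referenced in docker-compose.yml...
docker-compose pull

# ...then recreate the container from it. Volumes keep the app's persistent data.
docker-compose up -d
```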
Support for advanced configuration
With containers we can limit the CPU/memory/disk usage of each app. We just have to modify the app's docker-compose.yml file; no modification of the source code is needed. If required, this can be used.
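For example, with the compose v2 file format, per-app limits are a couple of lines; the service name and values here are made up:

```yaml
some-app:
  image: 'some-image'   # placeholder
  mem_limit: 512m       # cap the container's memory at 512 MB
  cpus: 0.5             # allow at most half of one CPU core
```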
It's a new feature; we will need to collect a lot of metrics from users to understand how it is working and how it can be improved. Some of those metrics include:
- The CPU info of systems which are running Amahi.
- RAM and Storage information.
- Logs of containerized apps to debug errors.