Scaling Host on Amazon AWS EC2 using Ansible

It's been a long time since I started playing around with Ansible. I was initially curious about other CM (Configuration Management) tools like SaltStack, Chef and Puppet, but as soon as I started looking into them I ran into a book that compares and explains them, giving a real taste of each. It turned out that Ansible was both powerful and simple to learn, without requiring any particular background.

What captured my attention early on was the possibility to quickly deploy host machines on Amazon AWS using EC2 and the corresponding ec2 Ansible module, since I don't have a "real" set of machines to play with. Using EC2 hosts I can easily test my configuration without worrying about which machine I am using; moreover, using micro-instances these tests cost almost nothing (as long as you remember to stop the instances from time to time!).

What I wanted to create is a system that deploys on Amazon AWS a scalable set of Apache Tomcat container instances, proxied via a load-balancing instance driven by HAProxy. As a bonus I also installed Nagios3 on all the instances, so an example of a monitoring infrastructure is included as well.
The project is available for cloning on my GitHub as ansible-slb-tomcat. I don't want to go into too much detail about the project, since most of its parts are self-explanatory and I always risk falling into a TL;DR blog post :) and, last but not least, the Ansible YAML syntax is very easy to read and understand.

However it is important to note the following:
  • The project is missing a fundamental file which contains the basic AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY); you can easily find these in your AWS profile. A sketch of such a file follows this list.
  • Ansible is a big fan of convention over configuration and I am too! So I've tried following what Ansible itself advises as “best practices”.
  • The project uses the dynamic inventory script for EC2 (ec2.py) and has a main "playbook" (ec2-launch.yml) which launches instances in AWS (creating security groups, launching instances and adding tags); then, using the instance "tags" as a filter (dynamic inventory), another playbook (site.yml) is in charge of installing and configuring all the software on the corresponding instances.
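
Such a credentials file could look as follows. This is a minimal sketch and the file name aws_keys.sh is my assumption (the project only tells you the file is missing): source it before running the playbooks, and keep it out of version control.

    # aws_keys.sh (hypothetical name): both ec2.py and the ec2 module
    # read these credentials from the environment via boto
    export AWS_ACCESS_KEY_ID='<your_access_key_id>'
    export AWS_SECRET_ACCESS_KEY='<your_secret_access_key>'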

STAGES OF PLAYBOOK

Let's go quickly through the stages that occur when the playbook is run.

First of all:
  • Read the f***ing README, for God's sake! Especially the configuration part.
  • Templates and playbooks rely heavily on variables defined in YAML files. Remember to check them before running the playbooks against your inventory.
  • Roles are divided into subfolders (each is present in a role only when needed). When one of these subfolders is referenced, unless specified otherwise the file used is main.yml, as convention over configuration. A short sketch of how tasks and handlers interact follows this list.
    • tasks: operations to be executed
    • templates: files compiled before being copied to the target machine
    • vars: files containing the definitions of variables used in other files
    • handlers: actions executed as a result of a task operation (e.g. restarting a service)
    • files: files simply copied to the target machine
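
To make the convention concrete, here is a minimal sketch of how tasks and handlers interact; the file contents are my assumption of a typical Tomcat role layout, not copied from the project:

    # roles/tomcat/tasks/main.yml (sketch)
    - name: Deploy Tomcat configuration from template
      template: src=server.xml.j2 dest=/etc/tomcat7/server.xml
      notify: restart tomcat

    # roles/tomcat/handlers/main.yml (sketch)
    - name: restart tomcat
      service: name=tomcat7 state=restarted

The template task compiles server.xml.j2 using the definitions in vars and, only when the resulting file actually changes, notifies the handler that restarts Tomcat.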

What will happen when launching the following command:
ansible-playbook -i ec2.py --private-key=<path_to_your_aws-pem_key> ec2-launch.yml -v
  1. (ec2-launch.yml) The local machine will create on Amazon AWS the security groups and the tagged instances. This is done twice using the ec2 role (refer to roles/ec2/tasks/main.yml): once for creating the tomcat instances and once for the haproxy instance (the differently named files under roles/ec2/vars account for the differences). A sketch of these tasks follows the template snippet below.
  2. Once this operation succeeds, the launched and tagged instances are added to the hosts group corresponding to their service name. This is a key operation, since all the following steps depend on it: it populates the so-called Dynamic Inventory:
    - name: Add new instance to {{ service_name }} hosts group
      local_action: add_host hostname={{ item.public_ip }} groupname={{ service_group }}
      with_items: ec2.tagged_instances
  3. (site.yml) Install and configure Apache Tomcat 7 on the hosts belonging to the tomcat dynamic inventory.
  4. (site.yml) Install and configure HAProxy on the hosts belonging to the haproxy dynamic inventory. Note that the configuration template is filled in using the IPs of the tomcat dynamic inventory. The following snippet (in Jinja2 templating format) clarifies what is going on:
backend app
{% for host in groups['tag_Group_haproxy'] %}
  listen {{ daemonname }} {{ hostvars[host]['ansible_' + iface].ipv4.address }}:{{ listenport }}
{% endfor %} 

balance {{ balance }}
{% for server in groups['tag_Group_tomcat'] %}
  server {{ hostvars[server]['ec2_tag_Name'] }} {{ hostvars[server]['ec2_private_ip_address'] }}:{{ tomcatport }} check
{% endfor %}
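
For instance, rendered against a hypothetical inventory with one haproxy node and two tomcat nodes (all names, ports and IPs below are made up for illustration), the template above would produce something like:

    backend app
      listen haproxy 10.0.0.5:80

    balance roundrobin
      server tomcat_1 10.0.1.11:8080 check
      server tomcat_2 10.0.1.12:8080 check

Going back to step 1, this is a minimal sketch of the kind of tasks living in roles/ec2/tasks/main.yml; the variable names are illustrative, the real values come from the files under roles/ec2/vars:

    # Sketch of roles/ec2/tasks/main.yml: create the security group,
    # then launch instances tagged with the service group/name
    - name: Create security group for {{ service_name }}
      local_action:
        module: ec2_group
        name: "{{ security_group }}"
        description: Security group for {{ service_name }}
        region: "{{ region }}"
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0

    - name: Launch and tag {{ service_name }} instances
      local_action:
        module: ec2
        region: "{{ region }}"
        image: "{{ image }}"
        instance_type: t1.micro
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        exact_count: "{{ instance_count }}"
        count_tag:
          Group: "{{ service_group }}"
        instance_tags:
          Group: "{{ service_group }}"
          Name: "{{ service_name }}"
        wait: yes
      register: ec2

The registered ec2 variable is what provides the tagged_instances list used in step 2 to populate the dynamic inventory.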

Extras:

The same playbook will also configure the Nagios master node on the haproxy inventory host. At the same time, the configuration of the monitored Nagios slave nodes is performed on the tomcat inventory hosts.
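
As a sketch of how the master node could enumerate the monitored tomcat nodes, reusing the same hostvars as the HAProxy template (the template name and the host definition layout are my assumption, not taken from the project):

    {# templates/nagios-hosts.cfg.j2 (hypothetical name) #}
    {% for server in groups['tag_Group_tomcat'] %}
    define host {
        use        generic-host
        host_name  {{ hostvars[server]['ec2_tag_Name'] }}
        address    {{ hostvars[server]['ec2_private_ip_address'] }}
    }
    {% endfor %}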

Open issues:

Launching the playbook for the first time may fail with an AnsibleUndefinedVariable error for some EC2 tag values (ec2_tag_Name). Running it a second time succeeds.
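
A possible workaround, not part of the project and only a sketch, is to guard the tag lookup in the HAProxy template with Jinja2's default filter, so that a tag still missing on the first run does not abort the play:

    server {{ hostvars[server]['ec2_tag_Name'] | default(server) }} {{ hostvars[server]['ec2_private_ip_address'] }}:{{ tomcatport }} check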

Kaboom! It's a Cloud Rock'n'Roll :)
