Sunday, 30 November 2014

Cloudbusters - A Website

Cloudbusters - Building a Web/DB app

We've pretty much finished our first submission, describing our design and our public and private cloud implementations.

Now I'm going to attempt to build a web service, to see if we can enhance our project a bit and maybe help with the security side of things.

So first, I'm building a MySQL server.

I've deployed two Ubuntu instances into AWS: a web server and a DB server.


On the DB server:

sudo apt-get update
sudo apt-get install mysql-server

This installs MySQL; the installer prompts me to set a password for the MySQL root user.

Now initialise the database:

sudo mysql_install_db

Now to secure it:

sudo mysql_secure_installation

It asks me to reset the password for the root DB user. Well, I just set it during the install, so I'm going to say no.

It asks if I want to remove the anonymous user. In the name of a more secure world, I hit the Y key!

I'll only allow root to log on locally as well. Can't be having people logging onto my DB server as root!

I'll get rid of the open test DB as well, thanks.

Reload the privilege tables to put those changes into effect. Y, of course.

Done and done.

By the way, I'm taking some of this info from this tutorial:

https://www.digitalocean.com/community/tutorials/how-to-set-up-a-remote-database-to-optimize-site-performance-with-mysql

Next step: allow remote access to the DB. We need this, of course, because we want the web server to be able to access our DB server - and multiple web servers, if we go that way (watch this space).

sudo vi /etc/mysql/my.cnf

Scroll down to the [mysqld] section.

We need to change bind-address from 127.0.0.1 (the local server) to an address that other hosts can reach. However, we don't want to leave the server exposed, so we use the server's IP address within the VPC (so it's effectively only reachable on the private network). For me it's xxx.xxx.xxx.xxx.

Save it, quit out of vi, and restart the mysql service:

sudo service mysql restart
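For repeat builds, that edit can be scripted instead of done in vi. Here's a sketch with sed, run against a throwaway copy of the file so it's safe to try anywhere - the real path is /etc/mysql/my.cnf, and 10.0.0.5 is a made-up private VPC address standing in for your own:

```shell
# Throwaway copy standing in for /etc/mysql/my.cnf
printf '[mysqld]\nbind-address = 127.0.0.1\n' > my.cnf.sample

# Swap the loopback address for the server's private VPC address
sed -i 's/^bind-address[[:space:]]*=.*/bind-address = 10.0.0.5/' my.cnf.sample

grep '^bind-address' my.cnf.sample
# prints: bind-address = 10.0.0.5
```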

Now we need to set up a DB for WordPress (which is going to be the CMS I'm using).

Connect to MySQL as root:

mysql -u root -p

mysql>  ... I'm in.

Create a database for us:

CREATE DATABASE Wordpress;

Now we want to create a DB admin user for local operations - a user that can only connect from the local server (see the @'localhost' bit).

The x's stand in for the password:

CREATE USER 'admin'@'localhost' IDENTIFIED BY 'xxxxxxxxxx';

Grant it access to the Wordpress DB:

GRANT ALL PRIVILEGES ON Wordpress.* TO 'admin'@'localhost';

OK, now we just need to set up a user for other servers to connect with. Initially I'm going to set up a single user so I can get WordPress configured. This user will connect from my web server. So:

CREATE USER 'user'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxxxxxx';

(That's a user that can only connect from that IP address.)

So we're granting a pile of permissions here. We'll come back after the WordPress installation to tighten this up, but let's grant them for the moment:

GRANT ALL PRIVILEGES ON Wordpress.* TO 'user'@'xxx.xxx.xxx.xxx';

Later we'll come back and narrow it to:

GRANT SELECT,DELETE,INSERT,UPDATE ON Wordpress.* TO 'user'@'xxx.xxx.xxx.xxx';

Apply the changes with:

FLUSH PRIVILEGES;
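All of those statements can be kept in one SQL script so the DB setup is repeatable on a rebuild. This just collects the commands from above into a file - the passwords and the web server IP are placeholders, as before:

```shell
# Write the setup statements to a script (placeholders as in the post)
cat > setup_wordpress_db.sql <<'EOF'
CREATE DATABASE Wordpress;
CREATE USER 'admin'@'localhost' IDENTIFIED BY 'xxxxxxxxxx';
GRANT ALL PRIVILEGES ON Wordpress.* TO 'admin'@'localhost';
CREATE USER 'user'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxxxxxx';
GRANT ALL PRIVILEGES ON Wordpress.* TO 'user'@'xxx.xxx.xxx.xxx';
FLUSH PRIVILEGES;
EOF

# Then run it in one go (needs the live server, so commented out here):
# mysql -u root -p < setup_wordpress_db.sql
```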

Exit the mysql prompt:

exit

Done for the moment. Now to the web server.

Install the MySQL client:

sudo apt-get install mysql-client

Test the connection:

mysql -u user -h xxx.xxx.xxx.xxx -p

where xxx.xxx.xxx.xxx is the IP address of the DB server.

Nuts - it doesn't work. That's because the port is blocked. I need to set up a security group for MySQL that allows inbound port 3306. Done.

Now it works.

Now I need to install nginx (a web server) and some other packages:

sudo apt-get install nginx php5-fpm php5-mysql

Now configure PHP:

sudo vi /etc/php5/fpm/php.ini

Uncomment the line cgi.fix_pathinfo=1 and change the 1 to a 0.

That's a security measure: it makes PHP serve only the exact file that was asked for - if it's not found, it just returns an error (as opposed to searching for similarly named files, which can let users execute files they shouldn't).
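If you'd rather script that edit than do it in vi, a sed one-liner does it. This is a sketch run against a throwaway copy; the real file is /etc/php5/fpm/php.ini:

```shell
# Throwaway copy standing in for /etc/php5/fpm/php.ini
echo ';cgi.fix_pathinfo=1' > php.ini.sample

# Uncomment the directive and set it to 0 in one pass
sed -i 's/^;*cgi\.fix_pathinfo=.*/cgi.fix_pathinfo=0/' php.ini.sample

grep 'cgi.fix_pathinfo' php.ini.sample
# prints: cgi.fix_pathinfo=0
```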

Now we need to set how PHP and nginx communicate:

sudo vi /etc/php5/fpm/pool.d/www.conf

Make sure the listen directive is set as follows:

listen = /var/run/php5-fpm.sock

Save, exit, and restart PHP:

sudo service php5-fpm restart

Time to configure nginx. We're copying the default site config to a new one that we can modify:

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/cloudbusters.info

(Why cloudbusters.info? Well, I'll talk about that in a while.)

Now open that file:

sudo vi /etc/nginx/sites-available/cloudbusters.info

Ensure it's listening on your chosen port (port 80 for us) - make sure the listen directive is there.

We're also changing the root to our new directory:

server {
        listen 80;
        root /var/www/cloudbusters.info;
        index index.php index.html index.htm;

Now we just need to set our server_name (to cloudbusters.info), make sure try_files is set, and set up our error pages. This is what we're changing the file to:


server {
    listen 80;
    root /var/www/cloudbusters.info;
    index index.php index.html index.htm;
    server_name cloudbusters.info;
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/www;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}


Now we link it into the sites-enabled directory and remove the link to the default file:

sudo rm /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/cloudbusters.info /etc/nginx/sites-enabled

and restart nginx:

sudo service nginx restart

Now we're installing WordPress:

cd ~
wget http://wordpress.org/latest.tar.gz

Unpack it:

tar xzvf latest.tar.gz

Copy the sample config file to be the 'prod' one:

cp ~/wordpress/wp-config-sample.php ~/wordpress/wp-config.php

Then edit it:

sudo vi ~/wordpress/wp-config.php

Stick in the database name, user ID, password, etc.:

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'Wordpress');

/** MySQL database username */
define('DB_USER', 'user');

/** MySQL database password */
define('DB_PASSWORD', 'xxxxxxxx');

/** MySQL hostname */
define('DB_HOST', 'xxx.xxx.xxx.xxx');


Save and close the file.

Now we need to create the document root for our website:

sudo mkdir -p /var/www/cloudbusters.info

Now copy all the WordPress files over:

sudo cp -r ~/wordpress/* /var/www/cloudbusters.info

Now we just need to fix the permissions and ownership of the files:

cd /var/www/cloudbusters.info

Give ownership of all the files to the web server user (www-data):

sudo chown -R www-data:www-data *

We also need to make sure our normal (non-root) user can work with the files - add it to the www-data group and grant group read/write:

sudo usermod -a -G www-data USERID
sudo chmod -R g+rw /var/www/cloudbusters.info
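To sanity-check that permission change without touching the real web root, the same chmod can be tried on a scratch directory (scratch_site here is just a stand-in for /var/www/cloudbusters.info; no sudo needed on files we own):

```shell
# Scratch directory standing in for /var/www/cloudbusters.info
mkdir -p scratch_site
echo '<?php // placeholder ?>' > scratch_site/index.php

# Grant the group read/write, as we did on the real web root
chmod -R g+rw scratch_site

# The group-write bit should now be set on the file
stat -c '%A' scratch_site/index.php
```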

Now just head to the public IP of your site.

Wait - I need to open port 80 on there first. Update the security groups.

Nope, it's no good. I can't get port 80 working no matter what - even from localhost. Frustrating.

I'm going to continue for the moment and come back to that. I've set the port to 8080, set up a security group for it, and set cloudbusters.info to listen on port 8080. Which works.

So connecting to the public IP of my web server on port 8080 brings me to the WordPress setup page (so the DB connection works).

I put in a user ID and password and hit continue - WordPress gave me an error: CREATE command denied to user 'user'@'ip-xx-xx-xx-xx.eu-west-1.compute.internal' for table 'wp_users'

What? I set up the user ID with the IP address, but AWS is connecting with the internal DNS name it has assigned to that machine. Hmmm.

Doesn't matter - I need to change my user anyway, because the plan is to let any number of web servers connect, so I'm going to have to give 'user'@'%' access.

The command is:

CREATE USER 'user'@'%' IDENTIFIED BY 'xxxxxxxxxx';

I'm not granting it privileges yet. For now I'm removing my original user and setting up a new one with the internal domain assigned by AWS, because I don't want my environment left unsecured. (I still need the GRANT ALL bit for the install.)

Done - and WordPress installed with no issue.

And I've logged on now. So that's good.

Back to the DB to tidy up the access.

First of all, I'm removing that WordPress user and setting up the generic % one. So that's DROP USER.

Then I'm setting up the % one so that any host can connect, granting only the permissions I specified earlier in the blog:

GRANT SELECT,DELETE,INSERT,UPDATE ON Wordpress.* TO 'user'@'%';

So far so good. Pretty poor having a site and referring to it by its IP address, though.

The good thing is, I've bought cloudbusters.info. So now I'm setting up my DNS entry.

Done (on godaddy)

So now my website is http://cloudbusters.info:8080

I still need to figure out how to sort out port 80.

Anyway, what's next? I need to clone that web server and check that the site works from each copy.

Into EC2 in AWS: right-click the instance and choose Image - Create Image.

Give it a description and bang, it's an image.

Now I'm launching an instance from my image (effectively a copy of that first web server) in AWS.

I'm just going to see if I can access the WordPress site from the new instance.

YESS!!!!! It works!!!!!

Of course it works! We have an app which demonstrates precisely what cloud is all about: I can hold a single source of truth in my DB and expand the web tier instance by instance.

The things I have yet to sort out, though, are:

1. I need a load balancer (HAProxy will probably be the one).
2. I need a monitoring system (it will be Nagios, I'd say).
3. I need a way to generate CPU load.
4. Of course, I need to get that image over to Azure to deploy there.

Phew - a couple more things to do alright. But today's progress - a website where we can truly demonstrate the power of the cloud, and something to work with for the security part of our project - is quite exciting.

That's all from me for today.

Richie




Monday, 24 November 2014

Putting the document together for first submission

We will have three pieces of work ready by this weekend for the first submission. The target is to finish the first draft by end of day 24th Nov.

The three parts are:

1. Design
2. Private cloud implementation
3. Public cloud implementation

Sunday, 16 November 2014

The New Script - The Cloudbusters are busting into the Cloud!!!!

Busting into the Cloud

This will be the final script for this weekend, with all the functions we had decided on built in.

Just to remind you of the algorithm:

1. Check Capacity of AWS environment
2. If AWS has available capacity then deploy instance to AWS
3. Else deploy instance to Azure

Our capacity counter is now core count.

Let's do the script:

#This script will deploy an instance to our Cloud, private or public.
#Where the instance is deployed to is determined whether we have reached our
#Capacity limit. Our capacity limit is the number of cpus which we have
#available to us in our environment. In this script, the number is set at 6.

#$ec2instanceinfo is the info from aws about the vcpus deployed
#to each instancetype

#Clearing out any variables in case they were in use somewhere else

Clear-Variable ec2instanceinfo
Clear-Variable Filter
Clear-Variable ec2numbercpus
Clear-Variable numberAWSInstances

#This Variable ec2instanceinfo has the cpu details for each ec2 instancetype

$ec2instanceinfo = @{
"t2.micro"=1;"t2.small"=1;"t2.medium"=2;
"m3.medium"=1;"m3.large"=2;"m3.xlarge"=4;
"m3.2xlarge"=8;"c3.large"=2;"c3.xlarge"=4;
"c3.2xlarge"=8;"c3.4xlarge"=16;"c3.8xlarge"=32;
"g2.2xlarge"=8;"r3.large"=2;"r3.xlarge"=4;
"r3.2xlarge"=8;"r3.4xlarge"=16;"r3.8xlarge"=32;
"i2.xlarge"=4;"i2.2xlarge"=8;"i2.4xlarge"=16;
"i2.8xlarge"=32;"hs1.8xlarge"=16}

#Need to make sure we're doing the right region:

Set-DefaultAWSRegion -Region us-west-2

#$numberAWSInstances = Get-EC2Instance -region us-west-2

#Get the list of instances, and add them to a table - along with some other info
#had to reference this bit from
#http://stackoverflow.com/questions/18090022/how-to-access-list-value-in-get-ec2instances-runninginstance-method

$instances = Get-EC2Instance `
             |%{ $_.RunningInstance } `
             | Select-Object InstanceId
           

#This bit loops through each instanceid returned in the instances variable
#and returns the number of cpus in each instance, by referencing the number
#of cpus against the instancetype held in the hash table above.
#It then counts up the total number of cpus deployed and leaves that in the
#variable $ec2numbercpus

foreach ($i in $instances.InstanceId)
{

$instance_type = Get-EC2InstanceAttribute -InstanceId $i -Attribute instanceType

$ec2numbercpus = $ec2numbercpus + $ec2instanceinfo[$instance_type.InstanceType]


}

#Setting our Capacity Limit

$CapacityLimit = 6

echo 'Number of Cpus in use is' $ec2numbercpus

#If we've reached the ceiling:

If ($ec2numbercpus -ge $CapacityLimit)
{

#Deploy an Ubuntu Instance to Azure

echo "Cloudbusting!!! :) - Deploying Azure Instance"

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccountName (Get-AzureStorageAccount).Label -PassThru
$azureimage = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1
$azurevm = New-AzureVMConfig -Name richiesubuntu -ImageName $azureimage.ImageName -InstanceSize Small | Add-AzureProvisioningConfig -Linux -LinuxUser XXXXX -Password XXXXX | Set-AzureSubnet -SubnetNames VMNetwork
New-AzureVM -VMs $azurevm -ServiceName richiescloud -VNetName AzureNetwork -WaitForBoot

}

Else

{

#Otherwise Deploy to AWS

echo "No Cloudbusting :( - Deploying EC2 Instance"

$amazonimage = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $amazonimage.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro -SecurityGroupId sg-82dcb0e7 -SubnetId subnet-50ec2e27

}


And that works!! Here's the output (the script ran twice: once deploying an EC2 instance when I had 5 CPUs deployed in AWS, then again straight after, when the result of the first run left me with 6 CPUs deployed). I XXXed out anything which might be construed as a security risk.

PS C:\Users\Richie> C:\Users\Richie\Documents\MSc Cloud Computing\Software\Scripts\CloudbustersAutoDeploy.ps1
Number of Cpus in use is
5
No Cloudbusting :( - Deploying EC2 Instance


GroupNames    : {}
Groups        : {}
Instances     : {}
OwnerId       : xxxxxxxxxxxxxxx
RequesterId   :
ReservationId : r-20e6d02d

Instance Deployed to AWS



PS C:\Users\Richie> C:\Users\Richie\Documents\MSc Cloud Computing\Software\Scripts\CloudbustersAutoDeploy.ps1
Number of Cpus in use is
6
Cloudbusting!!! :) - Deploying Azure Instance
WARNING: GeoReplicationEnabled property will be deprecated in a future release of Azure PowerShell. The value will be
merged into the AccountType property.


Id          : xxxxxxxxxxxxxxxxxxxx
Name        : xxxxxxxxxxxxx
Environment : AzureCloud
Account     : xxxxxxxxxxxxx
Properties  : {[SupportedModes, AzureServiceManagement,AzureResourceManager], [Tenants,
              xxxxxxxxxxxxxxxxxx], [Default, True], [StorageAccount, richiesstorageaccount]}

OperationDescription : New-AzureVM
OperationId          : 154573c6-656a-3c0b-91cf-67d6ea87f2fa
OperationStatus      : Succeeded

Instance Deployed to Azure



So that's it. The script is done.

We may clean up the environment a little if we have time.

That's it from me this weekend.

Richie

More detail on the capacity script


Now that we've successfully got the script up and running, it's time to work on the capacity model.

It's working with the simplest of measurements: number of instances. But that doesn't map realistically onto private clouds, as VMs come in different sizes. So we're going to add more detail to the capacity calculations and count the CPUs deployed instead. If we go over the 'capacity ceiling', again we'll deploy to Azure.

So, in AWS, the CPU count is determined by the instance type.

Here's the thing, though - for some reason AWS doesn't expose an attribute on the instance type which gives the number of CPUs directly.

Take t2.medium instances, for example: they have 2 CPUs. m3.medium has only the one.

So it looks like we're using PowerShell hash tables.

There's a bit of information here: http://www.computerperformance.co.uk/powershell/powershell_hashtable.htm#Example_1:_Simple_PowerShell_Hashtables_

I haven't used hash tables before, but to start I'm going to set the $ec2instanceinfo hash table variable like this:

#This script determines the number of CPUs deployed in AWS
#$ec2instanceinfo is the info from aws about the vcpus deployed
#to each instancetype

Clear-Variable ec2instanceinfo

$ec2instanceinfo = @{
"t2.micro"=1;"t2.small"=1;"t2.medium"=2;
"m3.medium"=1;"m3.large"=2;"m3.xlarge"=4;
"m3.2xlarge"=8;"c3.large"=2;"c3.xlarge"=4;
"c3.2xlarge"=8;"c3.4xlarge"=16;"c3.8xlarge"=32;
"g2.2xlarge"=8;"r3.large"=2;"r3.xlarge"=4;
"r3.2xlarge"=8;"r3.4xlarge"=16;"r3.8xlarge"=32;
"i2.xlarge"=4;"i2.2xlarge"=8;"i2.4xlarge"=16;
"i2.8xlarge"=32;"hs1.8xlarge"=16}

$ec2instanceinfo

That gives me back a table which looks like this:

Name                           Value                                            
----                           -----                                            
m3.xlarge                      4                                                
t2.micro                       1                                                
r3.8xlarge                     32                                               
m3.large                       2                                                
c3.xlarge                      4                                                
c3.8xlarge                     32                                               
t2.small                       1                                                
g2.2xlarge                     8                                                
i2.2xlarge                     8                                                
m3.2xlarge                     8                                                
t2.medium                      2                                                
r3.xlarge                      4                                                
c3.large                       2                                                
m3.medium                      1                                                
r3.2xlarge                     8                                                
c3.2xlarge                     8                                                
r3.large                       2                                                
i2.xlarge                      4                                                
hs1.8xlarge                    16                                               
c3.4xlarge                     16                                               
i2.4xlarge                     16                                               
i2.8xlarge                     32                                               
r3.4xlarge                     16

I now have a table with the number of cpus per instance type.

So now I have to look up the instance type of each instance deployed in AWS against this table, and add the results into the $ec2numbercpus variable. That will give me the full CPU count.

So how the hell do I do this?

First I need to get the list of EC2 instance IDs, so I can run the Get-EC2InstanceAttribute command, which will give me the instance types. I'll loop through each instance, get the number of CPUs for that particular instance, and accumulate the total in a variable.


$instances = Get-EC2Instance `
             |%{ $_.RunningInstance } `
             | Select-Object InstanceId

Now that I have the instance IDs, I can run Get-EC2InstanceAttribute against each of them, and each time I'll get the instance type (which I'm putting in the $instance_type variable).

Then I can look that variable up in the hash table from above, to get the number of CPUs in use. Here's the loop where I cycle through the instances counting the cores:

foreach ($i in $instances.InstanceId)
{

$instance_type = Get-EC2InstanceAttribute -InstanceId $i -Attribute instanceType

$ec2numbercpus = $ec2numbercpus + $ec2instanceinfo[$instance_type.InstanceType]


}


So that's it. My AWS core count is now held in the variable $ec2numbercpus. From this I can go back to my instance count script and swap in this new capacity counter. Here's my CPU counter script in its totality:

#This script determines the number of CPUs deployed in AWS
#$ec2instanceinfo is the info from aws about the vcpus deployed
#to each instancetype

#Clearing out any variables in case they were in use somewhere else

Clear-Variable ec2instanceinfo
Clear-Variable Filter
Clear-Variable ec2numbercpus
Clear-Variable numberAWSInstances

#This Variable ec2instanceinfo has the cpu details for each ec2 instancetype

$ec2instanceinfo = @{
"t2.micro"=1;"t2.small"=1;"t2.medium"=2;
"m3.medium"=1;"m3.large"=2;"m3.xlarge"=4;
"m3.2xlarge"=8;"c3.large"=2;"c3.xlarge"=4;
"c3.2xlarge"=8;"c3.4xlarge"=16;"c3.8xlarge"=32;
"g2.2xlarge"=8;"r3.large"=2;"r3.xlarge"=4;
"r3.2xlarge"=8;"r3.4xlarge"=16;"r3.8xlarge"=32;
"i2.xlarge"=4;"i2.2xlarge"=8;"i2.4xlarge"=16;
"i2.8xlarge"=32;"hs1.8xlarge"=16}

#Need to make sure we're doing the right region:

Set-DefaultAWSRegion -Region us-west-2

#$numberAWSInstances = Get-EC2Instance -region us-west-2

#Get the list of instances, and add them to a table - along with some other info
#had to reference this bit from
#http://stackoverflow.com/questions/18090022/how-to-access-list-value-in-get-ec2instances-runninginstance-method

$instances = Get-EC2Instance `
             |%{ $_.RunningInstance } `
             | Select-Object InstanceId
           

#This bit loops through each instanceid returned in the instances variable
#and returns the number of cpus in each instance, by referencing the number
#of cpus against the instancetype held in the hash table above.
#It then counts up the total number of cpus deployed and leaves that in the
#variable $ec2numbercpus

foreach ($i in $instances.InstanceId)
{

$instance_type = Get-EC2InstanceAttribute -InstanceId $i -Attribute instanceType

$ec2numbercpus = $ec2numbercpus + $ec2instanceinfo[$instance_type.InstanceType]


}

echo 'Number of Cpus in use is' $ec2numbercpus
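For anyone following along outside PowerShell, the same lookup-and-sum idea looks like this in bash, with a hard-coded instance list standing in for what Get-EC2Instance returns (the vCPU numbers are a subset of the hash table above):

```shell
#!/usr/bin/env bash
# vCPU count per instance type (subset of the hash table above)
declare -A vcpus=( [t2.micro]=1 [t2.medium]=2 [m3.large]=2 [c3.xlarge]=4 )

# Stand-in for the instance types returned by AWS
deployed=(t2.micro t2.micro t2.medium m3.large)

# Look each type up and accumulate the total
total=0
for t in "${deployed[@]}"; do
    total=$(( total + vcpus[$t] ))
done

echo "Number of CPUs in use is $total"   # 1 + 1 + 2 + 2 = 6
```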


Savage. 

We are nearly there!!

One final blog post this weekend - cleaning up the script

Richie




Saturday, 15 November 2014

Putting the scripts together

Putting all the scripts together for an autodeploy

In this post we're putting all the scripts together. Initially I'm not going to take any arguments into the script. I'm setting the 'instance ceiling' at 5 (meaning the total number of AWS instances I'm allowing in my AWS environment is 5).

So the algorithm is:

1. Check Capacity of AWS environment
2. If AWS has available capacity then deploy instance to AWS
3. Else deploy instance to Azure
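The branch in steps 2 and 3 is just a threshold check. As a minimal sketch in shell, with hard-coded numbers standing in for the live AWS query:

```shell
# Hard-coded values standing in for the live query of AWS
instance_ceiling=5
aws_instance_count=5

# At or over the ceiling -> burst to Azure; otherwise stay in AWS
if [ "$aws_instance_count" -ge "$instance_ceiling" ]; then
    echo "deploying azure instance"
else
    echo "deploying aws instance"
fi
# prints: deploying azure instance
```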

So let's figure out how we're going to do this. New script, please:

#This script determines the number of AWS instances deployed, and
#if the number deployed is equal to or greater than the instance ceiling,
#then we're deploying to Azure. If its lower, then we're deploying to AWS.

#Check the number of instances in AWS

$InstanceCeiling = 5

Clear-Variable numberAWSInstances

$numberAWSInstances = Get-EC2Instance -region us-west-2

echo $numberAWSInstances.Count


#If we've reached the ceiling:

If (InstanceCeiling => $numberAWSInstances.Count)
{

#Deploy an Ubuntu Instance to Azure

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccountName (Get-AzureStorageAccount).Label -PassThru
$azureimage = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1
$azurevm = New-AzureVMConfig -Name richiesubuntu -ImageName $azureimage.ImageName -InstanceSize Small | Add-AzureProvisioningConfig -Linux -LinuxUser XXXXX -Password XXXXXX | Set-AzureSubnet -SubnetNames VMSubnet
New-AzureVM -VMs $azurevm -ServiceName richiescloud -VNetName RichieNetwork -WaitForBoot

}

Else

{

#Otherwise Deploy to AWS

$amazonimage = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $amazonimage.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro

}


So here we are with the scripts put together (I'll clean up which images I'm using later; I'm just concentrating on the logic for the moment).

Let's give it a go. I've set the instance ceiling at 5, and I have 5 instances deployed in AWS, so this should deploy to Azure.

Testing.

First bug: => doesn't exist in PowerShell - it's -ge (greater than or equal to).

Second: I have the instance count and the instance ceiling variable the wrong way around. It should be:

If ($numberAWSInstances.Count -ge $InstanceCeiling)

The logic is now good, but it's asking me to run Add-AzureAccount again.

I want to deploy to 'AzureNetwork', as that's the virtual network with the VPN connection to AWS - need to change that. Also need to change the subnet to VMNetwork as opposed to VMSubnet.

On the AWS deploy it's telling me I don't have a default VPC. I must have deleted it. So now I have to specify a security group and a subnet using the -SecurityGroupId and -SubnetId switches.

Oh, and I was struggling to get it deployed because I had specified the incorrect region. The region is so important for these scripts: set the default every time and just make sure you're in the right region.

Here's our corrected script:

#This script determines the number of AWS instances deployed, and
#if the number deployed is equal to or greater than the instance ceiling,
#then we're deploying to Azure. If its lower, then we're deploying to AWS.

#Check the number of instances in AWS

Set-DefaultAWSRegion us-west-2

$InstanceCeiling = 6

Clear-Variable numberAWSInstances

$numberAWSInstances = Get-EC2Instance -region us-west-2

echo $numberAWSInstances.Count


#If we've reached the ceiling:

If ($numberAWSInstances.Count -ge $InstanceCeiling)
{

#Deploy an Ubuntu Instance to Azure

echo "deploying azure instance"

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccountName (Get-AzureStorageAccount).Label -PassThru
$azureimage = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1
$azurevm = New-AzureVMConfig -Name richiesubuntu -ImageName $azureimage.ImageName -InstanceSize Small | Add-AzureProvisioningConfig -Linux -LinuxUser XXXXX -Password XXXXX | Set-AzureSubnet -SubnetNames VMNetwork
New-AzureVM -VMs $azurevm -ServiceName richiescloud -VNetName AzureNetwork -WaitForBoot

}

Else

{

#Otherwise Deploy to AWS

echo "deploying aws instance"

$amazonimage = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $amazonimage.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro -SecurityGroupId sg-82dcb0e7 -SubnetId subnet-50ec2e27

}


So now we have the script working. It deploys VMs based on the number of VMs in the private cloud (AWS); if there are too many, it deploys to Azure.

Nice one.

Richie

Getting Information about the AWS EC2 Instances we have deployed


So this is the bit where we attempt to script the collection of EC2 instance information, like core count and memory size.

We need to use this cmdlet:

Get-EC2Instance

I'm starting with the following:

Get-EC2Instance -Region eu-west-1 (that should list all of my instances in Ireland)

It does. Now all I need to do is count them, assign the result to a variable, and clear the variable at the start of each run of the script. Here's the script I'm using:

#This script determines the number of Instances deployed in AWS


Clear-Variable numberAWSInstances

$numberAWSInstances = Get-EC2Instance -region us-west-2

echo $numberAWSInstances.Count

And that returns a number. So pretty much, if I set my 'instance ceiling' on AWS at 5, then the next instance I deploy will have to go to Azure, as my AWS space is full. (I currently have 5 instances in AWS.)

Let's work on that in the next script.

Saturday, 8 November 2014

Configuring Openswan as our VPN


This document outlines how we configure openswan as our VPN.

We got some tips from this blog, but it's not complete:

http://michaelwasham.com/2013/09/03/connecting-clouds-site-to-site-aws-azure/

1. Create a VPC in AWS (I gave it 10.0.0.0/16 as the VPC CIDR).

2. Launch an Ubuntu instance into the new VPC.

3. Make sure there's a public IP address available. Ours is xxx.xxx.xxx.xxx (use your own public IP).

4. Head to Azure and create a Local Network (Local Network in this case means the network on your local site). So it's Networks - Local Networks - New. Give it the CIDR from the VPC in AWS, and put in the VPN IP address (the public IP address of our Ubuntu server).

5. Now create a new Virtual Network (this is the Azure side of the network): Networks - Virtual Networks - New - Network Services - Virtual Network - Custom Create. Give it a name (mine is AzureNetwork), hit Configure a Site-to-Site VPN, and select the Local Network you've just created in step 4. You'll need to add a gateway subnet as well.

6. After that's built, go into your VPN and click Add Gateway (it's Create Gateway - Static Routing). It will give you the public IP address of the gateway in Azure. This takes a little while to complete.

7. Now head back to your Ubuntu server in AWS. It's time to configure it. Here's the command to install Openswan:

sudo apt-get install openswan

Just press enter to accept all the defaults for all the questions it asks.

8. Edit ipsec.conf:

cd /etc
sudo vi ipsec.conf

Change the config file to this:

config setup
      protostack=netkey
      nat_traversal=yes
      virtual_private=%v4:10.0.0.0/16
      oe=off

include /etc/ipsec.d/*.conf

This sets the protocol stack to netkey (it should default to this anyway). The virtual_private entry is the CIDR in AWS (the local side of the VPN, as set in step 1).

Save that.

9. Create a new VPN conf file:

cd ipsec.d
sudo vi amazonazurevpn.conf

Change the config file to this:

conn amazonazurevpn                      
   authby=secret
   auto=start
   type=tunnel
   left=10.0.0.238                        (this is the private ip address of the openswan server)
   leftsubnet=10.0.0.0/16            (this is the CIDR of the network in AWS-left is local, right is public)
   leftnexthop=%defaultroute
   right=xxx.xxx.xxx.xxx                  (this is the ip address of the gateway which was created in azure)
   rightsubnet=10.1.0.0/16           (this is the CIDR of the virtual network created in azure)
   ike=aes128-sha1-modp1024
   esp=aes128-sha1
   pfs=no

Save that.

10. We need to put the key for the Azure gateway in the ipsec.secrets file:

cd /etc
sudo vi ipsec.secrets

Add the following line:

10.0.0.238 xxx.xxx.xxx.xxx : PSK "Azure Gateway Key"

(10.0.0.238 is the private address of the Openswan server)
(xxx.xxx.xxx.xxx is the gateway address in Azure)
"Azure Gateway Key" is the pre-shared key from the gateway in Azure. You can get it by clicking Manage Key. Put it in quotes.

Save that.

11. Now we need to enable IP forwarding:

sudo vi /etc/sysctl.conf

Uncomment this line:

net.ipv4.ip_forward=1

Save that, then apply the saved config:

sudo sysctl -p /etc/sysctl.conf
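That uncomment step can be scripted too. Here it runs against a throwaway copy of the file, since editing the real /etc/sysctl.conf needs sudo:

```shell
# Throwaway copy standing in for /etc/sysctl.conf
echo '#net.ipv4.ip_forward=1' > sysctl.conf.sample

# Strip the leading # from the forwarding line
sed -i 's/^#\(net\.ipv4\.ip_forward=1\)/\1/' sysctl.conf.sample

grep '^net.ipv4.ip_forward' sysctl.conf.sample
# prints: net.ipv4.ip_forward=1
```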

12. Disable source and destination checking on the Openswan server (right-click on it in AWS, select "Change Source/Dest Check" and click "Yes, Disable").

13. In the AWS Management Console, edit the security group and add two inbound UDP rules, one for port 500 and one for port 4500, from a specific IP address - the Azure gateway - with /32 at the end: xxx.xxx.xxx.xxx/32.

14. Restart ipsec on the Openswan server:

sudo service ipsec restart

15. That's it. The VPN should now be configured. You can find some Openswan troubleshooting tips here:

http://codeidol.com/unix/linux-fix/Configuring-Linux-VPNs/Troubleshooting-Openswan/

16. We need to add a route in AWS pointing to the network in Azure:

Go to VPC in AWS, select Route Tables, and add the subnet of the virtual network in Azure (10.1.0.0/16). Select the instance ID of the Openswan server as the target.

17. Now we just launch instances in each of Azure and AWS, onto the networks we've created in each.

And guess what - they don't ping.

At this point I'd spent three weeks rebuilding, restarting, reconfiguring and trying different things to get this VPN up and running. Today, I got them to ping. So, the final action:

18. Open up ICMP traffic in the inbound rules of the security group in AWS. Then they'll ping. You can watch the Openswan server passing the ICMP traffic by running tcpdump on it:

sudo tcpdump -n -i eth0 icmp

So that's it. Now we have our VPN (although I'm going to rebuild it again and assign an Elastic IP to the Openswan server, as it absolutely needs a static IP).

Making good progress this week.

Next: scripts to check what capacity we're using.

Richie