Sunday 30 November 2014

Cloudbusters - A Website

Cloudbusters - Building a Web/DB app

We've pretty much finished our first submission, describing our design and our public and private cloud implementations.

Now I'm going to attempt to build a web service, to see if we can enhance our project a bit and maybe help with the security part.

So first - I'm building a MySQL server.

I've deployed 2 Ubuntu instances into AWS: a web server and a db server.


On the db server:

sudo apt-get update
sudo apt-get install mysql-server

This installs MySQL; during the install it prompts me to set a password for the MySQL root user.

Now initialise the database:
sudo mysql_install_db

Now to secure it:

sudo mysql_secure_installation

It asks me if I want to change the password for the root db user - well, I just set it during the install, so I'm saying no.

It asks me if I want to remove the anonymous user. In the name of a more secure world, I hit the Y button!

I'll only allow root to log on locally as well - can't be having people logging onto my db server remotely as root!

I'll get rid of the open test db as well, thanks.

Reload the privilege tables to set those changes in motion - Y, of course.

Done and done.

By the way - I'm taking a bit of this info from this blog:

https://www.digitalocean.com/community/tutorials/how-to-set-up-a-remote-database-to-optimize-site-performance-with-mysql

Next step: allow remote access to the db. We need this because the web server has to be able to reach our db server - and multiple web servers will too, if we go that way (watch this space).

sudo vi /etc/mysql/my.cnf

Scroll down to the [mysqld] section.

We need to change bind-address from 127.0.0.1 (the local server) to an address the web server can reach. However, we don't want to leave our server exposed, so rather than the public address we use the db server's ip address within the VPC (so it's effectively only listening on the private network). For me it's xxx.xxx.xxx.xxx.
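So the relevant bit of my.cnf ends up looking like this (the address is a placeholder for the db server's private VPC ip):

[mysqld]
bind-address = xxx.xxx.xxx.xxx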

Save it, quit out of vi, and restart the mysql service:

sudo service mysql restart

Now we need to set up a db for WordPress (which is the CMS I'm using).

Connect to mysql as root:

mysql -u root -p

mysql> ... I'm in.

Create a database for us:

CREATE DATABASE Wordpress;

Now we want to create a db admin user for local operations - a user that can only run stuff from the local server (see the @'localhost' bit).

xxxxxxxxxx is the password:

CREATE USER 'admin'@'localhost' IDENTIFIED BY 'xxxxxxxxxx';

Granting access to the Wordpress db:

GRANT ALL PRIVILEGES ON Wordpress.* TO 'admin'@'localhost';

OK - now we just need to set up a user that other servers can connect with. Initially I'm going to set up a single user so I can get WordPress configured; this user connects from my web server. So:

CREATE USER 'user'@'xxx.xxx.xxx.xxx' IDENTIFIED BY 'xxxxxxxxxx';

(That's a user that can only connect from that ip address.)

So - we're granting a pile of permissions here. We'll come back after the WordPress installation to tighten this up, but let's grant them for the moment:

GRANT ALL PRIVILEGES ON Wordpress.* TO 'user'@'xxx.xxx.xxx.xxx';

Later we'll come back and trim it down to:

GRANT SELECT,DELETE,INSERT,UPDATE ON Wordpress.* TO 'user'@'xxx.xxx.xxx.xxx';

Make the changes take effect with:

FLUSH PRIVILEGES;
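If you want to double-check what a user actually ended up with, SHOW GRANTS will show you (using the same host part as in the CREATE USER):

SHOW GRANTS FOR 'user'@'xxx.xxx.xxx.xxx';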

Exit the mysql prompt:

exit

Done for the moment. Now to the web server.

Install the mysql client:

sudo apt-get install mysql-client

Test the connection:

mysql -u user -h xxx.xxx.xxx.xxx -p

where xxx.xxx.xxx.xxx is the ip address of the db server.

Nuts - it doesn't work. That's because the port is blocked. I need to set up a security group for MySQL allowing port 3306 inbound. Done.
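I did that in the console, but for reference, roughly the same rule from the AWS CLI would look something like this (the group id is a placeholder, and the source is scoped to the web server's ip rather than the whole internet):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 3306 --cidr xxx.xxx.xxx.xxx/32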


Now it works.

Now I need to install nginx (a web server) and some other packages:

sudo apt-get install nginx php5-fpm php5-mysql

Now configure PHP:

sudo vi /etc/php5/fpm/php.ini

Uncomment the line cgi.fix_pathinfo=1 and change the 1 to a 0.

That's a security measure: it makes sure PHP only serves the exact file that was asked for, and returns an error if it's not found (as opposed to guessing at other, similarly-named files).
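So the line in php.ini ends up as:

cgi.fix_pathinfo=0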

Now we need to set how PHP and nginx communicate:

sudo vi /etc/php5/fpm/pool.d/www.conf

Make sure the listen directive is set as follows:

listen = /var/run/php5-fpm.sock

Exit vi and restart PHP:

sudo service php5-fpm restart

Time to configure nginx. We're copying the default site config to a new one that we can modify:

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/cloudbusters.info

(Why cloudbusters.info? Well - I'll talk about that in a while.)

Now open that file:

sudo vi /etc/nginx/sites-available/cloudbusters.info

Ensure it's listening on your specific port (port 80 for us) - make sure the listen directive is there. We're also changing the root to our new directory:

server {
        listen 80;
        root /var/www/cloudbusters.info;
        index index.php index.html index.htm;

Now we just need to set our server_name (to cloudbusters.info), make sure try_files is set, and set up our error pages. This is what we're changing the file to:


server {
    listen 80;
    root /var/www/cloudbusters.info;
    index index.php index.html index.htm;
    server_name cloudbusters.info;
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/www;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}


Now we link the new site into the sites-enabled directory and remove the link to the default file:

sudo rm /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/cloudbusters.info /etc/nginx/sites-enabled
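Before restarting, it's worth getting nginx to validate the config - it catches things like a missing semicolon:

sudo nginx -t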

and restart nginx:

sudo service nginx restart

Now we're installing WordPress:

cd ~
wget http://wordpress.org/latest.tar.gz

Unpack it:

tar xzvf latest.tar.gz

Copy the sample config file to be the 'prod' one:

cp ~/wordpress/wp-config-sample.php ~/wordpress/wp-config.php

Then edit it:

sudo vi ~/wordpress/wp-config.php

Stick in the database name, user id, password, etc.:

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'Wordpress');

/** MySQL database username */
define('DB_USER', 'user');

/** MySQL database password */
define('DB_PASSWORD', 'xxxxxxxx');

/** MySQL hostname */
define('DB_HOST', 'xxx.xxx.xxx.xxx');


Close the file.

Now we just need to create the web root for our site:

sudo mkdir -p /var/www/cloudbusters.info

Now copy all the WordPress files over:

sudo cp -r ~/wordpress/* /var/www/cloudbusters.info

Now we just need to fix permissions and ownership on our files:

cd /var/www/cloudbusters.info

Give the web server user (www-data) ownership of all the files:

sudo chown -R www-data:www-data *

We also need to make sure our normal (non-root) user can work with the files - add it to the www-data group and give the group write access:

sudo usermod -a -G www-data USERID
sudo chmod -R g+rw /var/www/cloudbusters.info
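(You can confirm the group membership took effect with the id command - note you'll need to log out and back in for the new group to apply to your session:

id USERID

should list www-data among the groups.)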

Now just head to the public ip of your site.

Wait - need to open port 80 on there first - update the security groups.

Nope - it's no good - I can't get port 80 working no matter what, even from localhost. Stupid.

I'm going to continue for the moment and come back to that. I've set the port to 8080, set up a security group for it, and set cloudbusters.info to listen on port 8080. Which works.
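For the record, that just means the listen line in /etc/nginx/sites-available/cloudbusters.info changes to:

listen 8080;

followed by another nginx restart.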

So - connecting to the public ip of my web server on port 8080 brings me to the WordPress setup page (so the db connection works).

I put in a user id and password and hit continue - WordPress gave me errors: [CREATE command denied to 'user'@'ip-xx-xx-xx-xx.eu-west-1.compute.internal' for table 'wp_users']

What? I set up the user with the ip address - but AWS is connecting using the internal DNS name it has assigned to that machine. Hmmm.

Doesn't matter - I need to change my user anyway, because the plan is to let any number of web servers connect - so I'm going to have to give 'user'@'%' access.

The command is:

CREATE USER 'user'@'%' IDENTIFIED BY 'xxxxxxxxxx';

I'm not granting it anything yet. First I'm removing my original user and setting up a new one with the domain set by AWS (that one still needs the GRANT ALL bit for the install), because I don't want to leave my environment unsecured while I finish the setup.

Done - and WordPress installed with no issue.

And I've logged on now. So that's good.

Back to the db to remove the access.

First of all, I'm removing that WordPress-install user and switching to the generic '%' one. So that's DROP USER.

Then I'm granting the '%' user - which any host can connect with - only the permissions I specified earlier in the blog:

GRANT SELECT,DELETE,INSERT,UPDATE ON Wordpress.* TO 'user'@'%';
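Put together, the cleanup on the db server looks roughly like this (the host and password are placeholders - the DROP matches whatever host I created the install user with):

DROP USER 'user'@'ip-xx-xx-xx-xx.eu-west-1.compute.internal';
CREATE USER 'user'@'%' IDENTIFIED BY 'xxxxxxxxxx';
GRANT SELECT,DELETE,INSERT,UPDATE ON Wordpress.* TO 'user'@'%';
FLUSH PRIVILEGES;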

So far, so good. Pretty poor having a site and referring to it by its ip address, though.

The good thing is - I've bought cloudbusters.info, so now I'm setting up my DNS entry.

Done (on GoDaddy).

So now my website is http://cloudbusters.info:8080

I still need to figure out how to sort out that port 80.

Anyway - what's next? I need to clone that web server and see whether the site works from each web server.

Into EC2 in AWS: right-click the instance and choose Image - Create Image.

Give it a description and bang - it's an image.

Now I'm deploying an instance of that image (effectively a copy of the first web server) in AWS.

I'm just going to see if I can access the WordPress site on the new instance.

YESS!!!!! It works!!!!!

Of course it works!!! We have an app which demonstrates precisely what cloud is all about: I can hold a single source of data in my db, and clone that web instance over and over.

The things i have yet to sort out though are:

1. I need a load balancer (HAProxy will probably be the one).
2. I need a monitoring system (it'll be Nagios, I'd say).
3. I need a way to simulate CPU load.
4. And of course, I need to get that image over to Azure to deploy there.

Phew - a few more things still to do alright. But today's progress - a website where we can truly demonstrate the power of the cloud, plus something to work with for the security part of our project - is quite exciting.

That's all from me for today.

Richie




Monday 24 November 2014

Putting the document together for first submission

We will have three pieces of work ready this weekend for the first submission. The target is to finish the first draft by end of day 24th Nov.

The three parts are:

1. Design
2. Private cloud implementation
3. Public cloud implementation

Sunday 16 November 2014

The New Script - The Cloudbusters are busting into the Cloud!!!!

Busting into the Cloud

This is the final script for this weekend, with all the functions we'd decided on.

Just to remind you of the algorithm:

1. Check Capacity of AWS environment
2. If AWS has available capacity then deploy instance to AWS
3. Else deploy instance to Azure

Our capacity counter is now core count.

Let's do the script:

#This script will deploy an instance to our Cloud, private or public.
#Where the instance is deployed to is determined whether we have reached our
#Capacity limit. Our capacity limit is the number of cpus which we have
#available to us in our environment. In this script, the number is set at 6.

#$ec2instanceinfo is the info from aws about the vcpus deployed
#to each instancetype

#Clearing out any variables in case they were in use somewhere else

Clear-Variable ec2instanceinfo
Clear-Variable Filter
Clear-Variable ec2numbercpus
Clear-Variable numberAWSInstances

#This Variable ec2instanceinfo has the cpu details for each ec2 instancetype

$ec2instanceinfo = @{
"t2.micro"=1;"t2.small"=1;"t2.medium"=2;
"m3.medium"=1;"m3.large"=2;"m3.xlarge"=4;
"m3.2xlarge"=8;"c3.large"=2;"c3.xlarge"=4;
"c3.2xlarge"=8;"c3.4xlarge"=16;"c3.8xlarge"=32;
"g2.2xlarge"=8;"r3.large"=2;"r3.xlarge"=4;
"r3.2xlarge"=8;"r3.4xlarge"=16;"r3.8xlarge"=32;
"i2.xlarge"=4;"i2.2xlarge"=8;"i2.4xlarge"=16;
"i2.8xlarge"=32;"hs1.8xlarge"=16}

#Need to make sure we're doing the right region:

Set-DefaultAWSRegion -Region us-west-2

#$numberAWSInstances = Get-EC2Instance -region us-west-2

#Get the list of instances, and add them to a table - along with some other info
#had to reference this bit from
#http://stackoverflow.com/questions/18090022/how-to-access-list-value-in-get-ec2instances-runninginstance-method

$instances = Get-EC2Instance `
             |%{ $_.RunningInstance } `
             | Select-Object InstanceId
           

#This bit loops through each instanceid returned in the instances variable
#and returns the number of cpus in each instance, by referencing the number
#of cpus against the instancetype held in the hash table above.
#It then counts up the total number of cpus deployed and leaves that in the
#variable $ec2numbercpus

foreach ($i in $instances.InstanceId)
{

$instance_type = Get-EC2InstanceAttribute -InstanceId $i -Attribute instanceType

$ec2numbercpus = $ec2numbercpus + $ec2instanceinfo[$instance_type.InstanceType]


}

#Setting our Capacity Limit

$CapacityLimit = 6

echo 'Number of Cpus in use is' $ec2numbercpus

#If we've reached the ceiling:

If ($ec2numbercpus -ge $CapacityLimit)
{

#Deploy an Ubuntu Instance to Azure

echo "Cloudbusting!!! :) - Deploying Azure Instance"

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccountName (Get-AzureStorageAccount).Label -PassThru
$azureimage = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1
$azurevm = New-AzureVMConfig -Name richiesubuntu -ImageName $azureimage.ImageName -InstanceSize Small | Add-AzureProvisioningConfig -Linux -LinuxUser XXXXX -Password XXXXX | Set-AzureSubnet -SubnetNames VMNetwork
New-AzureVM -VMs $azurevm -ServiceName richiescloud -VNetName AzureNetwork -WaitForBoot

}

Else

{

#Otherwise Deploy to AWS

echo "No Cloudbusting :( - Deploying EC2 Instance"

$amazonimage = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $amazonimage.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro -SecurityGroupId sg-82dcb0e7 -SubnetId subnet-50ec2e27

}


And that works!! Here's the output (the script ran twice: once deploying an EC2 instance when I had 5 cpus deployed in AWS, then again straight after, when the result of the first run left me with 6 cpus deployed). I XXXed out anything which might be construed as a security risk.

PS C:\Users\Richie> C:\Users\Richie\Documents\MSc Cloud Computing\Software\Scripts\CloudbustersAutoDeploy.ps1
Number of Cpus in use is
5
No Cloudbusting :( - Deploying EC2 Instance


GroupNames    : {}
Groups        : {}
Instances     : {}
OwnerId       : xxxxxxxxxxxxxxx
RequesterId   :
ReservationId : r-20e6d02d

Instance Deployed to AWS



PS C:\Users\Richie> C:\Users\Richie\Documents\MSc Cloud Computing\Software\Scripts\CloudbustersAutoDeploy.ps1
Number of Cpus in use is
6
Cloudbusting!!! :) - Deploying Azure Instance
WARNING: GeoReplicationEnabled property will be deprecated in a future release of Azure PowerShell. The value will be
merged into the AccountType property.


Id          : xxxxxxxxxxxxxxxxxxxx
Name        : xxxxxxxxxxxxx
Environment : AzureCloud
Account     : xxxxxxxxxxxxx
Properties  : {[SupportedModes, AzureServiceManagement,AzureResourceManager], [Tenants,
              xxxxxxxxxxxxxxxxxx], [Default, True], [StorageAccount, richiesstorageaccount]}

OperationDescription : New-AzureVM
OperationId          : 154573c6-656a-3c0b-91cf-67d6ea87f2fa
OperationStatus      : Succeeded

Instance Deployed to Azure



So - that's it. The script is done.

We may clean up the environment a little if we have time.

That's it from me this weekend.

Richie

More detail on the capacity script


Now that we've successfully got the script up and running, it's time to work on the capacity model.

It's currently working with the simplest of measurements: the number of instances. But that doesn't map realistically onto private clouds, as VMs come in different sizes. So now we're going to add more detail to the capacity calculations and count the CPUs deployed instead. If we go over the 'capacity ceiling', again we'll deploy to Azure.

So - in AWS, the CPU count is determined by the instancetype.

Here's the thing though - for some reason, AWS doesn't expose a setting on the instancetype which gives you the number of cpus directly.

Take t2.medium instances, for example: they have 2 cpus.
m3.medium has only the one.

So - looks like we're using PowerShell hash tables.

There's a bit of information here: http://www.computerperformance.co.uk/powershell/powershell_hashtable.htm#Example_1:_Simple_PowerShell_Hashtables_

I haven't used hash tables before, but to start I'm going to set the $ec2instanceinfo hash table variable like this:

#This script determines the number of CPUs deployed in AWS
#$ec2instanceinfo is the info from aws about the vcpus deployed
#to each instancetype

#Note: Clear-Variable takes the variable name without the $
Clear-Variable ec2instanceinfo

$ec2instanceinfo = @{
"t2.micro"=1;"t2.small"=1;"t2.medium"=2;
"m3.medium"=1;"m3.large"=2;"m3.xlarge"=4;
"m3.2xlarge"=8;"c3.large"=2;"c3.xlarge"=4;
"c3.2xlarge"=8;"c3.4xlarge"=16;"c3.8xlarge"=32;
"g2.2xlarge"=8;"r3.large"=2;"r3.xlarge"=4;
"r3.2xlarge"=8;"r3.4xlarge"=16;"r3.8xlarge"=32;
"i2.xlarge"=4;"i2.2xlarge"=8;"i2.4xlarge"=16;
"i2.8xlarge"=32;"hs1.8xlarge"=16}

$ec2instanceinfo

That returns a table which looks like this:

Name                           Value                                            
----                           -----                                            
m3.xlarge                      4                                                
t2.micro                       1                                                
r3.8xlarge                     32                                               
m3.large                       2                                                
c3.xlarge                      4                                                
c3.8xlarge                     32                                               
t2.small                       1                                                
g2.2xlarge                     8                                                
i2.2xlarge                     8                                                
m3.2xlarge                     8                                                
t2.medium                      2                                                
r3.xlarge                      4                                                
c3.large                       2                                                
m3.medium                      1                                                
r3.2xlarge                     8                                                
c3.2xlarge                     8                                                
r3.large                       2                                                
i2.xlarge                      4                                                
hs1.8xlarge                    16                                               
c3.4xlarge                     16                                               
i2.4xlarge                     16                                               
i2.8xlarge                     32                                               
r3.4xlarge                     16

I now have a table with the number of cpus per instance type.
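A quick sanity check that the lookup does what I want:

$ec2instanceinfo["t2.medium"]    # should return 2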

So now I have to look up the instancetype of each instance deployed in AWS against this table, and add the results into the $ec2numbercpus variable. That will give me the full count of the cpus.

So how the hell do I do this?

First I need to get the list of ec2 instanceids, so I can run the Get-EC2InstanceAttribute command, which will give me the instance types. I'll need to loop through each instance to get the number of cpus for that particular instance, and tally them up into a variable.


$instances = Get-EC2Instance `
             |%{ $_.RunningInstance } `
             | Select-Object InstanceId

Now that I have the instanceids, I can run Get-EC2InstanceAttribute against them, and each time I'll get the instance type (which I'm putting in the $instance_type variable).

Then I can look that variable up in the hash table from above, to get the number of cpus in use. Here's the loop where I cycle through the instances counting the cores:

foreach ($i in $instances.InstanceId)
{

$instance_type = Get-EC2InstanceAttribute -InstanceId $i -Attribute instanceType

$ec2numbercpus = $ec2numbercpus + $ec2instanceinfo[$instance_type.InstanceType]


}


So that's it. My core count in AWS is now held in the variable $ec2numbercpus. From here I can go back to my instance count script and swap in this new capacity counter. Here's my cpu counter script in its totality:

#This script determines the number of CPUs deployed in AWS
#$ec2instanceinfo is the info from aws about the vcpus deployed
#to each instancetype

#Clearing out any variables in case they were in use somewhere else

Clear-Variable ec2instanceinfo
Clear-Variable Filter
Clear-Variable ec2numbercpus
Clear-Variable numberAWSInstances

#This Variable ec2instanceinfo has the cpu details for each ec2 instancetype

$ec2instanceinfo = @{
"t2.micro"=1;"t2.small"=1;"t2.medium"=2;
"m3.medium"=1;"m3.large"=2;"m3.xlarge"=4;
"m3.2xlarge"=8;"c3.large"=2;"c3.xlarge"=4;
"c3.2xlarge"=8;"c3.4xlarge"=16;"c3.8xlarge"=32;
"g2.2xlarge"=8;"r3.large"=2;"r3.xlarge"=4;
"r3.2xlarge"=8;"r3.4xlarge"=16;"r3.8xlarge"=32;
"i2.xlarge"=4;"i2.2xlarge"=8;"i2.4xlarge"=16;
"i2.8xlarge"=32;"hs1.8xlarge"=16}

#Need to make sure we're doing the right region:

Set-DefaultAWSRegion -Region us-west-2

#$numberAWSInstances = Get-EC2Instance -region us-west-2

#Get the list of instances, and add them to a table - along with some other info
#had to reference this bit from
#http://stackoverflow.com/questions/18090022/how-to-access-list-value-in-get-ec2instances-runninginstance-method

$instances = Get-EC2Instance `
             |%{ $_.RunningInstance } `
             | Select-Object InstanceId
           

#This bit loops through each instanceid returned in the instances variable
#and returns the number of cpus in each instance, by referencing the number
#of cpus against the instancetype held in the hash table above.
#It then counts up the total number of cpus deployed and leaves that in the
#variable $ec2numbercpus

foreach ($i in $instances.InstanceId)
{

$instance_type = Get-EC2InstanceAttribute -InstanceId $i -Attribute instanceType

$ec2numbercpus = $ec2numbercpus + $ec2instanceinfo[$instance_type.InstanceType]


}

echo 'Number of Cpus in use is' $ec2numbercpus


Savage. 

We are nearly there!!

One final blog post this weekend - cleaning up the script

Richie




Saturday 15 November 2014

Putting the scripts together

Putting all the scripts together for an autodeploy

In this post we're going to put all the scripts together. Initially I'm not going to take in any arguments for the script; I'm setting the 'instance ceiling' at 5 (which means that the total number of AWS instances I'm allowing in my AWS environment is 5).

So the algorithm is:

1. Check Capacity of AWS environment
2. If AWS has available capacity then deploy instance to AWS
3. Else deploy instance to Azure

So - let's figure out how we're going to do this. New script please:

#This script determines the number of AWS instances deployed, and
#if the number deployed is equal to or greater than the instance ceiling,
#then we're deploying to Azure. If its lower, then we're deploying to AWS.

#Check the number of instances in AWS

$InstanceCeiling = 5

Clear-Variable numberAWSInstances

$numberAWSInstances = Get-EC2Instance -region us-west-2

echo $numberAWSInstances.Count


#If we've reached the ceiling:

If (InstanceCeiling => $numberAWSInstances.Count)
{

#Deploy an Ubuntu Instance to Azure

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName ` -CurrentStorageAccountName (Get-AzureStorageAccount).Label -PassThru
$azureimage = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1
$azurevm = New-AzureVMConfig -Name richiesubuntu -ImageName $azureimage.ImageName -InstanceSize Small | Add-AzureProvisioningConfig -Linux -LinuxUser XXXXX -Password XXXXXX | Set-AzureSubnet -SubnetNames VMSubnet
New-AzureVM -VMs $azurevm -ServiceName richiescloud -VNetName RichieNetwork -WaitForBoot

}

Else

{

#Otherwise Deploy to AWS

$amazonimage = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $amazonimage.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro

}


So here we are with the scripts put together (I'll clean up which images I'm using later - I'm just concentrating on the logic for the moment).

Let's give it a go and see how we get on. I've set the instance ceiling at 5, and I have 5 instances deployed in AWS, so this should deploy to Azure.

Testing.

=> doesn't exist in PowerShell - it's -ge (greater than or equal to).

I also have the instance count and the instance ceiling variables the wrong way around - it should be:

If ($numberAWSInstances.Count -ge $InstanceCeiling)

The logic is now good - but it's asking for Add-AzureAccount to be run again.

I want to deploy to 'AzureNetwork', as that's the virtual network connected to AWS - need to change that. Also need to change the subnet to VMNetwork as opposed to VMSubnet.

On the AWS deploy it's telling me I don't have a default VPC. Must have deleted it. So now I have to specify a security group and a subnet using the -SecurityGroupId and -SubnetId switches.

Oh - and I was struggling to get it deployed because I had specified the incorrect region. The region is so important for these scripts: set the default every time and just make sure you're in the right region.

Here's our script:

#This script determines the number of AWS instances deployed, and
#if the number deployed is equal to or greater than the instance ceiling,
#then we're deploying to Azure. If its lower, then we're deploying to AWS.

#Check the number of instances in AWS

Set-DefaultAWSRegion us-west-2

$InstanceCeiling = 6

Clear-Variable numberAWSInstances

$numberAWSInstances = Get-EC2Instance -region us-west-2

echo $numberAWSInstances.Count


#If we've reached the ceiling:

If ($numberAWSInstances.Count -ge $InstanceCeiling)
{

#Deploy an Ubuntu Instance to Azure

echo "deploying azure instance"

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccountName (Get-AzureStorageAccount).Label -PassThru
$azureimage = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1
$azurevm = New-AzureVMConfig -Name richiesubuntu -ImageName $azureimage.ImageName -InstanceSize Small | Add-AzureProvisioningConfig -Linux -LinuxUser XXXXX -Password XXXXX | Set-AzureSubnet -SubnetNames VMNetwork
New-AzureVM -VMs $azurevm -ServiceName richiescloud -VNetName AzureNetwork -WaitForBoot

}

Else

{

#Otherwise Deploy to AWS

echo "deploying aws instance"

$amazonimage = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $amazonimage.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro -SecurityGroupId sg-82dcb0e7 -SubnetId subnet-50ec2e27

}


So now we have the script working. It deploys VMs based on the number of VMs in the private cloud (AWS); if there are too many, it deploys to Azure.

Nice one.

Richie

Getting Information about the AWS EC2 Instances we have deployed


So this is the bit where we attempt to script the collection of EC2 instance information, like core count and memory size.

We need to use this cmdlet:

Get-EC2Instance

I'm starting with the following:

Get-EC2Instance -Region us-west-1 (that should list all of my instances in that region)

It does - now all I need to do is count them. I'm going to do that and assign the result to a variable; each time I run the script, I'm going to clear the variable at the start. Here's the script I'm using:

#This script determines the number of Instances deployed in AWS


Clear-Variable numberAWSInstances

$numberAWSInstances = Get-EC2Instance -region us-west-2

echo $numberAWSInstances.Count

And that returns a number. So pretty much - if I set my 'instance ceiling' on AWS at 5, then the next instance I deploy will have to go to Azure, as my AWS space is full (I currently have 5 instances in AWS).

Let's work on that in the next script.

Saturday 8 November 2014

Configuring Openswan as our VPN


This document outlines how we configure openswan as our VPN.

We got some tips from this blog - but it's not complete:

http://michaelwasham.com/2013/09/03/connecting-clouds-site-to-site-aws-azure/

1. Create a VPC in AWS (I gave it 10.0.0.0/16 as the VPC CIDR).

2. Launch an ubuntu instance into the new VPC.

3. Make sure there's a public ip address available - ours is xxx.xxx.xxx.xxx (your public ip will differ).

4. Head to Azure and create a Local Network (Local Network in this case means the network on your local site). So it's Networks - Local Networks - New - give it the CIDR from the VPC in AWS, and put in the VPN ip address (the public ip address of our ubuntu server).

5. Now to create a new Virtual Network (this is the Azure side of the network). Networks - Virtual Networks - New - Network Services - Virtual Network - Custom Create. Give it a name (mine is AzureNetwork), hit Configure a Site-to-Site VPN, and select the Local Network you've just created in step 4. You'll need to add a gateway subnet as well.

6. After that's built, go into your VPN and click Add Gateway (it's Create Gateway - Static Routing). It will give you the public ip address of the gateway in Azure. This takes a little while to complete.

7. Now head back to your ubuntu server in AWS. It's time to configure it. Here's the command to install openswan:

sudo apt-get install openswan

Just press enter to accept all the defaults for all the questions it asks.

8. Edit ipsec.conf:

cd /etc
sudo vi ipsec.conf

Change the config file to be this:

config setup
      protostack=netkey
      nat_traversal=yes
      virtual_private=%v4:10.0.0.0/16
      oe=off

include /etc/ipsec.d/*.conf

This sets the protocol stack to netkey (it should default to this anyway).
The virtual_private is the CIDR in AWS (the local part of the vpn, as set in step 1).

Save that.

9. Create a new vpn conf file:

cd ipsec.d
sudo vi amazonazurevpn.conf

Change the config file to this:

conn amazonazurevpn                      
   authby=secret
   auto=start
   type=tunnel
   left=10.0.0.238                        (this is the private ip address of the openswan server)
   leftsubnet=10.0.0.0/16            (this is the CIDR of the network in AWS - left is local, right is remote)
   leftnexthop=%defaultroute
   right=xxx.xxx.xxx.xxx                  (this is the ip address of the gateway which was created in azure)
   rightsubnet=10.1.0.0/16           (this is the CIDR of the virtual network created in azure)
   ike=aes128-sha1-modp1024
   esp=aes128-sha1
   pfs=no

Save that.

10. We need to put the Azure gateway key in the ipsec.secrets file:

cd /etc
sudo vi ipsec.secrets

Add the following line:

10.0.0.238 xxx.xxx.xxx.xxx : PSK "Azure Gateway Key"

(10.0.0.238 is the private address of the openswan server)
(xxx.xxx.xxx.xxx is the gateway address in azure)
"Azure Gateway Key" is the key from the gateway in azure. You can get this by clicking on manage key. Put it in quotes.

Save that.

11. Need to enable ip forwarding now.

sudo vi /etc/sysctl.conf

Uncomment this line:

net.ipv4.ip_forward=1

Save that, then apply the saved config:

sudo sysctl -p /etc/sysctl.conf
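You can confirm it took effect with:

cat /proc/sys/net/ipv4/ip_forward

which should print 1.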

12. Disable source and destination checking on the openswan server (right-click it in AWS, select "Change Source/Dest Check" and click "Yes, Disable").

13. In the Amazon Management Console, in AWS, edit the security group and add 2 inbound UDP rules, one for 500 and one for 4500, from a specific ip address - the Azure gateway - with /32 at the end: xxx.xxx.xxx.xxx/32

14. Restart ipsec on the openswan server:

sudo service ipsec restart
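Openswan also ships a sanity checker that's handy at this point - it flags common problems like ip forwarding being disabled:

sudo ipsec verify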

15. That's it. The VPN should now be configured. You can do some troubleshooting on the openswan config by looking here:

http://codeidol.com/unix/linux-fix/Configuring-Linux-VPNs/Troubleshooting-Openswan/

16. We need to add a route to AWS to point to the network in Azure:

Go to VPC in AWS, select Route Tables and add a route for the subnet of the virtual network in Azure (10.1.0.0/16), with the instance id of the openswan server as the target.

17. Now we just need to launch instances in Azure and AWS, onto the networks that we've created in each.

And guess what - they don't ping.

At this point I've rebuilt and restarted and reconfigured and tried different things to get this VPN up and running for three weeks. Today, I got them to ping. So, the final action:

18. Open up ICMP traffic in the inbound rules of the security group in AWS. Then they'll ping. You can see openswan passing the ICMP traffic by running tcpdump on it:

sudo tcpdump -n -i eth0 icmp

So that's it. Now we have our VPN (although I'm going to rebuild it again and assign an Elastic IP to the openswan server, as it absolutely needs a static ip).

Making good progress this week.

Next - scripts to check what capacity we're using.

Richie

Deleting an Azure VM


Add the creds:

Add-AzureAccount

Need to stop the one we created yesterday first:


Stop-AzureVM -ServiceName richiescloud -Name richiesubuntu -Force

Successful. Now to delete it and its disks as well:

Remove-AzureVM -ServiceName richiescloud -Name richiesubuntu -DeleteVHD

Successful as well. Sweet.

That's how to delete an Azure VM.

Now we have scripted deploying and deleting VMs in both Azure and AWS. Next we need to figure out how to find out what capacity our clouds are using, so we can script automatic deployments based on how much capacity is available in our private cloud.

Richie



Friday 7 November 2014

Deploying in Azure using Powershell


This is going to outline how to deploy in Azure using PowerShell.

Load up PowerShell ISE.
Run Get-AzureAccount to see if I still have my creds there. I do, of course. Sweet.

Let's see what images they have to deploy:

Get-AzureVMImage

Whoops - it's asking me for creds again. Add-AzureAccount - enter my creds and we're in again.

Get-AzureVMImage again


I'll be deploying an Ubuntu image. I want the newest one (14.04). Let's see what Ubuntu ones are there:

Get-AzureVMImage | Where ImageName -Match "Ubuntu" | sort PublishedDate

There's still loads of results.

Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1

This brings back one - the latest one. Nice. So I'll set that as a variable:

$image = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1

I'm creating an affinity group. I can script that:

New-AzureAffinityGroup -Name Dublin -Location "North Europe"

And a storage account. I can script that too (make sure the storage account name is all lowercase chars):

New-AzureStorageAccount -StorageAccountName richiesstorageaccount -AffinityGroup Dublin

Need to associate the storage account with our subscription:

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccountName (Get-AzureStorageAccount -StorageAccountName richiesstorageaccount).Label

And a new Cloud Service:

New-AzureService -ServiceName richiescloud -AffinityGroup Dublin

I'm creating a variable for some VM options, like the size, the user id and password I set on the ubuntu server, and the subnet:

$vm = New-AzureVMConfig -Name richiesubuntu -InstanceSize small | Add-AzureProvisioningConfig -Linux -LinuxUser richie -Password testvm | Set-AzureSubnet -SubnetNames VMNetwork



So I had a couple of issues with the syntax - sorted now, though. Successful deployment. Here's the script:

Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccountName (Get-AzureStorageAccount).Label -PassThru
$image = Get-AzureVMImage | Where ImageName -Match "Ubuntu-14.04" | sort PublishedDate | Select ImageName -First 1
$vm = New-AzureVMConfig -Name richiesubuntu -ImageName $image.ImageName -InstanceSize Small | Add-AzureProvisioningConfig -Linux -LinuxUser richie -Password Testvm01 | Set-AzureSubnet -SubnetNames VMSubnet
New-AzureVM -VMs $vm -ServiceName richiescloud -VNetName RichieNetwork -WaitForBoot



Line 1 = Set the subscription
Line 2 = the image details
Line 3 = the image configuration
Line 4 = the deployment.

That's me done for tonight.

Tomorrow I'll be scripting a termination, and a bit of capacity reporting or something like that from AWS. Then I'll see if I can throw a couple of these scripts together and let the script decide where we need to deploy the servers.

Richie




Thursday 6 November 2014

Deploying in AWS using powershell

Deploy in AWS

This blog post is going to outline the trials and tribulations of getting a script working to deploy an AWS server. Fun and Games.

1. Set the region. We're currently using Oregon, which has our openswan server. So - Get-EC2Image -Region us-west-1

Access denied. Into IAM - set up 'Richie' as a power user.

Access is working now, but the command had returned 62,000 lines before I cancelled it.

So we know we can connect. Make it the default region:

 Set-DefaultAWSRegion us-west-1

If we want to change to Ireland as our default later, Clear-DefaultAWSRegion will get rid of the current default.

2. Deploy an instance. The command is New-EC2Instance. Let's do it:

New-EC2Instance

Here are the parameters:

-ImageId is the image id - Windows 2008 server is ami-e5f7bbd5

-MinCount and -MaxCount are the minimum and maximum number of instances to run (1 in each case)

-InstanceType t2.micro (we want the free one)

Here's the command:

New-EC2Instance -ImageId ami-e5f7bbd5 -MinCount 1 -MaxCount 1 -InstanceType t2.micro

But wait - I'll add in a variable, because using that raw image id is a pain. I can look the image up by name instead:

$ami = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $ami.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro

Let's see what happens. Into PowerShell ISE and run the script.

Oops - turns out us-west-1 is California, not Oregon. It deployed though.

PS C:\Users\Richie> $ami = Get-EC2ImageByName Windows_2008_Base
New-EC2Instance -ImageId $ami.ImageId -MinCount 1 -MaxCount 1 -InstanceType t2.micro


GroupNames    : {}
Groups        : {}
Instances     : {}
OwnerId       : 227910012278
RequesterId   : 
ReservationId : r-7e0d7020

Now to terminate it.
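I grabbed the instance id from the console output, but you can also pull the ids back in powershell - something like:

(Get-EC2Instance).RunningInstance | Select-Object InstanceId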

Stop-EC2Instance -Instance i-031762c9 -Terminate -Force

-Instance is the instance id it created.
-Terminate means we're terminating it.
-Force means it doesn't come back with 'are you sure'.

Sweet - it's terminated.

Happy days - a successful deployment and termination. I'll leave it there for today. Tomorrow: Azure deployments and terminations, and then maybe a bit more automation and detection.

Richie.





Setting up powershell for Azure and AWS

Setting up AWS and Azure powershell modules to automatically load:


1. Add the modules to powershell:

Add:

Import-Module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1"

into the PowerShell profile.

2. Add AWS credentials - outlined here:
 http://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html

I set the key. Looks like we should come up with a process for rotating this periodically and document it. The process should be:

       1. Go into users - manage access keys
       2. Download the new secret key
       3. Run

Set-AWSCredentials -AccessKey myaccesskey -SecretKey mysecretkey -StoreAs default

replace myaccesskey and mysecretkey with the keys downloaded from IAM.

It might also be a good idea to separate the roles: one role for deploying servers which cannot 'remote control' the server, while the 'remote control' user can't deploy (separation of duties).


AWS Credentials added.

3. Add Azure credentials - outlined here:
http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/#Install

Run Add-AzureAccount and it brings up an Azure logon screen. Enter your credentials, and PowerShell is now holding your Azure account information.

The Azure credentials are easier - but that's using an AD account, and the scripts contain your password in free text (although you can set them to transmit encrypted). It might be worth doing the certificate method instead:

1. Run Get-AzurePublishSettingsFile
2. Run Import-AzurePublishSettingsFile
3. Delete the publishsettingsfile you had downloaded (recommended as a security precaution by microsoft)

Now Get-AzureAccount will confirm you're ready to go.



Credentials for connecting to both our private and public clouds are ready to go.

4. Add the imports into the PowerShell ISE profile as well (it's a different profile). In PowerShell ISE, though, Get-ExecutionPolicy is Restricted by default - because it's a security risk to run untrusted scripts. Set it to RemoteSigned (Set-ExecutionPolicy RemoteSigned).
(You don't need PowerShell ISE - I use it because it's easier to script with.)



Next step - Scripting deployments.

Wednesday 5 November 2014

Docs for cloud scripting

Looking at configuring our Azure scripts and AWS scripts.

Getting our credentials setup on each of the scripting tools:

http://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html

http://azure.microsoft.com/en-us/documentation/articles/install-configure-powershell/

powershell for aws:

http://docs.aws.amazon.com/powershell/latest/userguide/pstools-getting-set-up.html

Tuesday 4 November 2014

VPNs

So we spent the last week trying to get different flavours of VPN set up. Not so easy when we're getting access denied on everything in AWS, which is scuppering a lot of work.

I was attempting to get OpenStack running with openswan on Ubuntu, then Windows 2008 R2 RRAS, and then Windows 2012 RRAS, in an attempt to open a VPN between environments.

I struggled to do it from OpenStack, as I couldn't configure an internet-facing ip to allow the VPN to run.

However, I learned a lot about the different configs and options for VPNs over the last week.


So here's what happened yesterday (3rd November):

We finally decided that our hybrid cloud would be AWS as our primary cloud and Azure as our secondary.

We did up a project plan outlining the tasks required to get us to our goal.

Then we went about setting up our VPN (openswan) from the AWS VPC (Virtual Private Cloud) to Azure.

We threw the openswan environment together quickly (we must have done it about 20 times at this stage), but we never recreated the Azure site-to-site connection on the virtual networks (it takes a while to create the gateway), so we never got the VPN up and running last night.

Jeff recreated the Azure virtual network this morning, and lo and behold, our clouds are connected!

So openswan is working for us. We'll leave it at that and maybe research some other options for the VPN, such as openvpn or Windows RRAS. However, we're going to document our work so far.

We have our document structure as well so we're going to update that now as we go.

We have a couple more tasks to get through, like provisioning, monitoring and orchestration in both AWS and Azure, but it's good to have our clouds connected.

Let's see what tomorrow's tasks bring.


Richie

Wednesday 29 October 2014

doc structure and tasks

Folks,

here are the tasks:

Structure of main doc:    Richie
Structure of Security doc: Jeff

Jeff to send around the azure/aws 'fix'

investigate
AWS provisioning - Jeff
Azure Provisioning - Ying
AWS Scheduling - Richie
AWS Monitoring - Jeff
Azure Monitoring - Richie
Self Servicing
    Heat - Jeff
    Powershell - Richie
    Any others - Ying

Migration - Clarify Requirement - Ying

Network Features - All (get it working between aws and azure)

thanks
Richie

Wednesday 22 October 2014

Meeting minutes&Amended Proposal- 21Oct

After discussion in class on 20th Oct, the team has decided to amend the project proposal based on the following key points:


1. We will provide a hybrid cloud solution with 2 clouds, where end users can request VMs on demand.

2. It contains a primary cloud (either AWS or Azure) and a secondary cloud (Azure or AWS, depending on which is primary).

3. User requests always go to the primary cloud first; only when the primary cloud runs out of capacity (in our demo, measured by number of VMs) will requests go to the secondary cloud - bursting out.

4. End users will not need to know where their requested VMs are; all they do is "request a VM" and one will be powered up, based on availability, from either cloud.

5. The process will be seamless.

6. The scope doesn't need to cover auto-scaling at this time.

Sunday 19 October 2014

Project Management Templates&Tools

Ying Tang

Came across a very useful project management tool which is free to use as a community edition. It has a project schedule planner plus PM templates based on the PRINCE2 methodology.

here is the link

http://www.projectinabox.org.uk/Community.asp


Cloudbusters.info

Folks,

I've made a bit of progress with the app.

I got the domain cloudbusters.info, so you can access the site from there.

I have 2 servers - a web server and a db server (Ubuntu on each; nginx for the web server and MySQL for the db). I installed WordPress on top.

I got an SSL cert as well but haven't applied it yet (thanks, github package).

I think the best place to host the app is on OpenStack (simply because we won't have to change the public ip every time I move my laptop to a different router), although I'll have to investigate how to do a bit of port forwarding there.

So my recommendation for the app is to build it on OpenStack and burst out into Azure (I haven't yet been successful in doing anything on that side - learning Ubuntu, nginx and how to build websites has taken up all my time)!!

Anyway - the site will be up this evening but down tomorrow during the day:

http://www.cloudbusters.info

PS - here's a bit of configuration on the Azure side of things to enable a VPN (which, thinking about it, is the way to go for the load-balanced FEs):

http://sebastianmaniak.com/2014/05/30/hybrid-cloud-with-azure-vpn-configuration-guide-hybridcloud-azure/

Still uploading the web and db servers to OpenStack. I probably shouldn't have set the starting image size to 10GB - should have left it at the default.



Richie

Draft submission 1.1 and firewall research

Ying Tang

I have draft submission 1.1 completed to reflect changes/new ideas discussed by the team on 15th Oct.

The next step is to research which firewall to deploy. An example is the Juniper SRX210, with the following features:

Firewall performance (max): 850 Mbps
IPS performance (NSS 4.2.1): 65 Mbps
AES256+SHA-1 / 3DES+SHA-1 VPN performance: 85 Mbps
Maximum concurrent sessions: 64K
New sessions/second (sustained, TCP, 3-way): 2,200
Maximum security policies: 512








Website Build

Richie Dennehy

My tasks are to research the cloud technology and the application. So I've started with the application.

I've got my copy of VMware Workstation installed on my laptop (thanks very much, VCP) and have built a couple of Ubuntu servers. On one I've installed apache, nginx and PHP, and on the other I have MySQL.

so far I have:

1 * apache webserver (192.168.130.130)
1 * test apache webserver (192.168.130.132)
1 * mysql server (192.168.130.131)

I've secured the MySQL server and set up a user (wordpressuser) with full privileges.

I'll lock down the permissions later.

Installed and configured WordPress, and set up a new website.

Next: split the db and web server and secure them.


Website IPs and network settings

I've set the ip addressing up. I had to configure port forwarding on both my home router and in VMware Workstation.

Bought the domain cloudbusters.info.
Set up DNS pointing to cloudbusters.info.
Forwarded incoming connections from port 80 to port 8000.
Public website now up and running: www.cloudbusters.info

Bought an SSL cert.

Next steps:
1. Build a second web server and a load balancer to split the traffic.
2. Bang the servers onto the private cloud and decide on the public cloud.
3. Look at what firewall we can use.






Wednesday 15 October 2014

Day 2 - Research on hybrid cloud/cloud security

Started research on hybrid cloud and cloud security.

Some useful papers:

Hybrid cloud storage
http://www.storsimple.com/Portals/65157/docs/ESG-White-Paper-Microsoft-HCS-Nov-2013.pdf

Hybrid cloud security - VMware
www.frost.com/prod/servlet/cpo/272112250

Research on public cloud provider - openstack
https://openstack.cloudenci.ie/horizon

The vCloud Cloudburst Architecture Model:

ref: http://download3.vmware.com/vcat/documentation-center/Cloud%20Bursting/7%20Cloud%20Bursting.pdf



The diagram in that paper accurately represents the cloudburst monitoring model which we are trying to achieve.


The security model:


1. Use the security model (CSA 3.0).
2. Put port rules in place (firewalls).
3. Secure the site(s) - certs/HTTPS.
4. Encrypt relevant data (in the db) with certs.
5. Run a pentest/hacking test.


Next Steps:

Brief - All

circulate before the weekend - Ying
complete

Research

Compliance

1. Security - Jeff - ongoing
2. Infrastructure (Arch) - Ying - ongoing
3. Options for Private/Public - Richie - Confirmed - AWS to Azure

Tech

1. Orchestrator - Jeff - ongoing (autoscaling between the 2 clouds)
2. Load Balancer - Richie/Jeff - AWS Load balancer
3. Firewall - Ying - hold off
4. Alerting system - Ying - aws alerting system
5. App - Richie - figure out how to get the app up. 
6. Connectors - Jeff - vpn - needs more


Azure
1. Orchestrator - Ying 
2. Load Balancer - Richie/Jeff - AWS Load balancer - investigate
3. Firewall - Ying - hold off
4. Alerting system - Ying - aws alerting system/azure alerting systems.
5. App - Richie - figure out how to get the app up. - Richie
6. Connectors - richie/jeff/ying - vpn - need more - need to do it.


New tasks
1. Investigate monitoring
2. Automatically deploy to the cloud
3.



Others
Running eucalyptus locally - jeff
vmware - richie





Project compliance:
Design - 15%

Tasks - write the arguments for each of the 2 private cloud models
1. Capacity (new vm requested) Richie
2. Load (auto scale) Ying/Jeff

Implementation of private cloud
Provision of public cloud
Implementation and documentation of private cloud
Demonstration of private cloud

Security
Approach and project planning
Selection of tools/methodologies/frameworks/benchmarking
technical testing approach
findings and risk rating
challenges and limitations