Amazon AWS CloudFormation stack with Eclipse - Step-by-step guide, Part 1

To follow this step-by-step guide, please install Eclipse with the AWS Toolkit.

This article will guide you through the installation and configuration.

Before using CloudFormation, it's a good idea to explore the different services through the web console: create a VPC with subnets and assign them CIDR IP blocks; launch EC2 instances and pay attention to the different options available in the wizard; create security groups, routes, and NACL rules.

The knowledge gained in the web console will help you read CF templates and appreciate the logical relationships between a resource and its properties, as well as the relationships between the different template sections.

A CloudFormation template can have up to 8 sections, but only the Resources section is required.
If you use some of the optional sections, you will most likely need to reference the data in those sections using Intrinsic Functions.
For example, if you create a Mappings section, then inside your Resources section you will use the Fn::FindInMap function to return the value corresponding to a key you declared under Mappings.

Let's take a closer look:

"ImageId" : { "Fn::FindInMap" : [ "ImageMap", { "Ref" : "AWS::Region" }, "MonitoringAMI" ]},

Here, ImageId is a property of the AWS::EC2::Instance resource, and as the name implies, it defines the AMI that will be used for the EC2 instance.

We could just as easily have assigned an AMI inline without using intrinsic functions:
"ImageId" : "ami-79fd7eee",

Using Mappings, however, will make your CF templates more readable and maintainable.

For instance, in the example below, you can add multiple mappings that will cover the regions where you intend to run your stack.

 "Mappings" : {
"ImageMap" : {"us-east-1" : { "OpenVpnAMI" : "ami-bc3566ab", "MonitoringAMI" : "ami-b73b63a0","NiFiAMI" : "ami-b73b63a0", "ClouderaAMI" : "ami-20b6c437", "RstatAMI" : "ami-b73b63a0", "VisualAMI" : "ami-b73b63a0" },
"us-east-2" :{ Hop on to your webconsole, and fill in us-east-2 AMI mappings}
 }
},

As you know, AMI IDs are region specific, so the same image will have a different ID in each region. Mapping AMI IDs to regions lets you use a single template everywhere instead of creating a separate template for each region. The AWS::Region pseudo parameter, which is predefined by CloudFormation, is what passes the region name into the Mappings lookup through the Ref function.

You can expand upon this ImageMap section by creating a mapping for each of the 14 Amazon AWS regions available. (I excluded the US government region)
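For example, covering a second region is just one more top-level key inside ImageMap. Here is a minimal sketch, trimmed to two AMI keys for brevity; the eu-west-1 values are placeholders that you would replace with the equivalent image IDs from your own console in that region:

 "Mappings" : {
   "ImageMap" : {
     "us-east-1" : { "OpenVpnAMI" : "ami-bc3566ab", "MonitoringAMI" : "ami-b73b63a0" },
     "eu-west-1" : { "OpenVpnAMI" : "ami-xxxxxxxx", "MonitoringAMI" : "ami-xxxxxxxx" }
   }
 },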

Amazon AWS documentation is top notch, so there is no need to replicate what's already available in great detail on their website. Here is a link that describes the different template sections and their use:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html

The best way to learn is by doing, so let's get started with an example of creating a stack for a Big Data in the Cloud project.

This project will be housed in a VPC with one public subnet and six private subnets. Each subnet will run an EC2 instance that performs a task in this miniaturized big data ecosystem.

I will be a bit unconventional and simply show how I started and the end result.

In future articles, we can expand further by explaining the stack creation process in detail and updating our CF stack with security groups, an RDS database, and additional route table or NACL changes to tighten up security.
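
To give a sense of what one of those updates might look like, here is a minimal, illustrative sketch of an RDS MySQL instance spread across the two private database subnets defined later in this template. The logical names, instance class, storage size, and credentials below are assumptions for illustration only; in a real stack you would pass the credentials in as NoEcho parameters rather than hard-coding them:

    "DBSubnetGroup" : {
      "Type" : "AWS::RDS::DBSubnetGroup",
      "Properties" : {
        "DBSubnetGroupDescription" : "Private subnets for the RDS MySQL instance",
        "SubnetIds" : [ { "Ref" : "PrivateDatabase" }, { "Ref" : "PrivateDatabase2" } ]
      }
    },

    "MySQLDatabase" : {
      "Type" : "AWS::RDS::DBInstance",
      "Properties" : {
        "Engine" : "MySQL",
        "DBInstanceClass" : "db.t2.micro",
        "AllocatedStorage" : "20",
        "MasterUsername" : "dbadmin",
        "MasterUserPassword" : "ReplaceThisPassword1",
        "DBSubnetGroupName" : { "Ref" : "DBSubnetGroup" }
      }
    },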

If you are a visual person, and you need to see it to believe it, then I would recommend using the web console CF template designer to kick start the building of your stack.

Here is a screenshot from my web console:

At a certain point you will need to fill out the properties of your resources; that's when you can save the template to your local drive and open it in Eclipse.

The designer needs to keep track of the dimensions and placement of the boxes, objects, and lines in your diagram, so it adds Metadata containing this extraneous information throughout your template.

It will look like this:

"Metadata": {
"AWS::CloudFormation::Designer": {
"c733e469-afeb-4ccb-b0c1-f6c4125295f8": {
"size": {
"width": 1200,
"height": 1230
},
"position": {
"x": -80,
"y": -130
},
"z": 0,
"embeds": [
"8c3863c1-d144-4baf-8b8b-167eb0c83aae",
"01c6050d-1dc2-40e2-ace6-9c595b881719",
"7d4f0b22-bcdd-4595-a650-b9911f4479ef",
"8cfe599e-6f64-4c67-b720-82a8d3ee91ca",
"d4d7a5a5-7a23-44a8-b8e4-6e4b65d23407",
"b7769298-ed5a-4cea-9507-4b8e1c0709d6",
"74ad0ea8-0d45-4c27-92bc-5d70cac6d2ad",
"55003a29-f97f-4b22-8990-c8c5989a293d",
"4517c146-da09-4c8c-a9bc-ce8613f52f83",
"a0c8a6f3-037c-4957-b48a-415508e57fac",
"375e3d1a-007b-4393-8b05-c5ee7a7a6e15",
"d31fc2ee-1ca6-4ffa-982d-f914481aa62c",
"ed8ca7d4-12f7-42a8-a211-0b6788bef0fd",
"55912648-c1f4-4e93-a12d-1ef206bc28f8",
"71d77869-3266-4421-aaa1-b0efc9b9f19c"
]
},
"8c3863c1-d144-4baf-8b8b-167eb0c83aae": {
},
"size": {
"width": 490,
"height": 120
},
"position": {
"x": 0,
"y": -110
},
"z": 1,
"parent": "c733e469-afeb-4ccb-b0c1-f6c4125295f8",
"embeds": [
"10172244-abe9-49d4-923b-597566a0f720"
]
},
"01c6050d-1dc2-40e2-ace6-9c595b881719": {
"size": {
"width": 490,
"height": 120
},

None of that information is used when creating your stack, so I decided to strip all of this Metadata from my template and continue building the stack in Eclipse.

I have decided to use the following sections: Format Version, Description, Parameters, Mappings, and Resources. Here is the final CF stack; please note that it is still missing the RDS database and the more granular security groups (only a basic SSH security group is included).

{
  "AWSTemplateFormatVersion": "2010-09-09",
  
  "Description" : "AWS CloudFormation template for a VPC with one public subnet and six private subnets for running a Big Data ecosystem. The following instances will be deployed in each subnet with different tasks: OpenVpn for authentication, Nagios or Kabana for monitoring, Apache Nifi for ETL, RDS MySql for database storage, Cloudera for BigData, OpenR or Revo for analytics,  Qlik or Tabelaux for visualization", 
 
  "Parameters" : {
    "InstanceType" : {
      "Description" : " Lab instances are t1/t2.micro, or t1.small",
      "Type" : "String",
      "Default" : "t2.micro",
      "AllowedValues" : ["t1.micro","t2.micro","t2.small"]
    },
    
    "ClouderaInstanceType" : {
      "Description" : " Lab instances are t1/t2.micro, or t1.small",
      "Type" : "String",
      "Default" : "t1.micro",
      "AllowedValues" : ["t1.micro","t2.micro","t2.small"]
    },
    
    "KeyName" : {
      "Description" : "Name of an existing EC2 keyPair to enable SSH access to the instance",
      "Type": "AWS::EC2::KeyPair::KeyName",
      "ConstraintDescription" : "Must be the name of an existing Key pair"
       },
       
      "CFKeyName" : {
      "Description" : "Name of an existing EC2 keyPair to enable SSH access to the instance",
      "Type": "AWS::EC2::KeyPair::KeyName",
      "ConstraintDescription" : "Must be the name of an existing Key pair"
       },
      
    "SSHLocation": {
      "Description": "The IP address range that can be used to SSH to the EC2 instances",
      "Type": "String",
      "MinLength": "9",
      "MaxLength": "18",
      "Default": "0.0.0.0/0",
   
      "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
    }
}, 
   
  "Mappings" : {
    
     "ImageMap" : {
      "us-east-1" : { "OpenVpnAMI" : "ami-bc3566ab", "MonitoringAMI" : "ami-b73b63a0","NiFiAMI" : "ami-b73b63a0", "ClouderaAMI" : "ami-20b6c437", "RstatAMI" : "ami-b73b63a0", "VisualAMI" : "ami-b73b63a0" },
      "us-east-2" :{ }
     }
  }, 
    
  "Resources": {
 
    "VPC": {
      "Type": "AWS::EC2::VPC",
      "Properties": {
        "CidrBlock" : "10.0.0.0/16",
        "EnableDnsSupport" : "true",
        "EnableDnsHostnames": "true",
        "InstanceTenancy": "default"
          }
          
         },
         
      "BasicSecurityGroup" : {
      "Type" : "AWS::EC2::SecurityGroup",
      "Properties" : {
        "VpcId" : { "Ref" : "VPC" },
        "GroupDescription" : "Enable SSH access",
        "SecurityGroupIngress" : [
          {"IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : { "Ref" : "SSHLocation"}}
        ]
      }
    },
      
      "PublicAuthentication": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
        "CidrBlock" : "10.0.0.0/24",
       "AvailabilityZone" : "us-east-1e"
      }
      
    }, 
  
    "PrivateDataLanding": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
       "CidrBlock" : "10.0.1.0/24",
       "AvailabilityZone" : "us-east-1c"
      }
     
    },
    "PrivateDatabase": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
        "CidrBlock" : "10.0.2.0/24",
        "AvailabilityZone" : "us-east-1d"
      }
      
    },
    
    "PrivateDatabase2": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
        "CidrBlock" : "10.0.3.0/24",
        "AvailabilityZone" : "us-east-1a"
      }
      
    },
    "PrivateDataLake": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
       "CidrBlock" : "10.0.4.0/24",
       "AvailabilityZone" : "us-east-1d"
      }
    },
    "PrivateAnalytics": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
         "CidrBlock" : "10.0.5.0/24",
       "AvailabilityZone" : "us-east-1d"
      }
     
    },
    "PrivateVisualization": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
        "CidrBlock" : "10.0.6.0/24",
       "AvailabilityZone" : "us-east-1a"
      }
     
    },
   
    "PrivateMonitoring": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": {
          "Ref": "VPC"
        },
       "CidrBlock" : "10.0.7.0/24",
       "AvailabilityZone" : "us-east-1a"
      }
      
    },
    
      "InternetGateway" : {
      "Type" : "AWS::EC2::InternetGateway",
      "Properties" : {
        "Tags" : [ {"Key" : "Application", "Value" : { "Ref" : "AWS::StackId"} } ]
      }
    },

    "AttachGateway" : {
       "Type" : "AWS::EC2::VPCGatewayAttachment",
       "Properties" : {
         "VpcId" : { "Ref" : "VPC" },
         "InternetGatewayId" : { "Ref" : "InternetGateway" }
       }
    },
    
    "PublicRouteTable" : {
      "Type" : "AWS::EC2::RouteTable",
      "Properties" : {
        "VpcId" : {"Ref" : "VPC"},
        "Tags" : [ {"Key" : "Application", "Value" : { "Ref" : "AWS::StackId"} } ]
      }
    },

    "Route" : {
      "Type" : "AWS::EC2::Route",
      "DependsOn" : "AttachGateway",
      "Properties" : {
        "RouteTableId" : { "Ref" : "PublicRouteTable" },
        "DestinationCidrBlock" : "0.0.0.0/0",
        "GatewayId" : { "Ref" : "InternetGateway" }
      }
    },

    "SubnetRouteTableAssociation" : {
      "Type" : "AWS::EC2::SubnetRouteTableAssociation",
      "Properties" : {
        "SubnetId" : { "Ref" : "PublicAuthentication" },
        "RouteTableId" : { "Ref" : "PublicRouteTable" }
      }
    },

    "NetworkAcl" : {
      "Type" : "AWS::EC2::NetworkAcl",
      "Properties" : {
        "VpcId" : {"Ref" : "VPC"},
        "Tags" : [ {"Key" : "Application", "Value" : { "Ref" : "AWS::StackId"} } ]
      }
    },

    "InboundHTTPNetworkAclEntry" : {
      "Type" : "AWS::EC2::NetworkAclEntry",
      "Properties" : {
        "NetworkAclId" : {"Ref" : "NetworkAcl"},
        "RuleNumber" : "100",
        "Protocol" : "6",
        "RuleAction" : "allow",
        "Egress" : "false",
        "CidrBlock" : "0.0.0.0/0",
        "PortRange" : {"From" : "80", "To" : "80"}
      }
    },

    "InboundSSHNetworkAclEntry" : {
      "Type" : "AWS::EC2::NetworkAclEntry",
      "Properties" : {
        "NetworkAclId" : {"Ref" : "NetworkAcl"},
        "RuleNumber" : "101",
        "Protocol" : "6",
        "RuleAction" : "allow",
        "Egress" : "false",
        "CidrBlock" : "0.0.0.0/0",
        "PortRange" : {"From" : "22", "To" : "22"}
      }
    },

    "InboundResponsePortsNetworkAclEntry" : {
      "Type" : "AWS::EC2::NetworkAclEntry",
      "Properties" : {
        "NetworkAclId" : {"Ref" : "NetworkAcl"},
        "RuleNumber" : "102",
        "Protocol" : "6",
        "RuleAction" : "allow",
        "Egress" : "false",
        "CidrBlock" : "0.0.0.0/0",
        "PortRange" : {"From" : "1024", "To" : "65535"}
      }
    },

    "OutBoundHTTPNetworkAclEntry" : {
      "Type" : "AWS::EC2::NetworkAclEntry",
      "Properties" : {
        "NetworkAclId" : {"Ref" : "NetworkAcl"},
        "RuleNumber" : "100",
        "Protocol" : "6",
        "RuleAction" : "allow",
        "Egress" : "true",
        "CidrBlock" : "0.0.0.0/0",
        "PortRange" : {"From" : "80", "To" : "80"}
      }
    },

    "OutBoundHTTPSNetworkAclEntry" : {
      "Type" : "AWS::EC2::NetworkAclEntry",
      "Properties" : {
        "NetworkAclId" : {"Ref" : "NetworkAcl"},
        "RuleNumber" : "101",
        "Protocol" : "6",
        "RuleAction" : "allow",
        "Egress" : "true",
        "CidrBlock" : "0.0.0.0/0",
        "PortRange" : {"From" : "443", "To" : "443"}
      }
    },

    "OutBoundResponsePortsNetworkAclEntry" : {
      "Type" : "AWS::EC2::NetworkAclEntry",
      "Properties" : {
        "NetworkAclId" : {"Ref" : "NetworkAcl"},
        "RuleNumber" : "102",
        "Protocol" : "6",
        "RuleAction" : "allow",
        "Egress" : "true",
        "CidrBlock" : "0.0.0.0/0",
        "PortRange" : {"From" : "1024", "To" : "65535"}
      }
    },

    "SubnetNetworkAclAssociation" : {
      "Type" : "AWS::EC2::SubnetNetworkAclAssociation",
      "Properties" : {
        "SubnetId" : { "Ref" : "PublicAuthentication" },
        "NetworkAclId" : { "Ref" : "NetworkAcl" }
      }
    },

    
    "OpenVPNSFTP": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        
      "InstanceType" : {
        	"Ref" : "InstanceType"
         },
         
      "ImageId" : { "Fn::FindInMap" : [ "ImageMap", { "Ref" : "AWS::Region" }, "OpenVpnAMI" ]},
         
       "KeyName" : {
           "Ref" : "CFKeyName"
         },       
      "NetworkInterfaces": [ {
      "AssociatePublicIpAddress": "true",
      "DeviceIndex": "0",
      "GroupSet" : [ {"Ref" : "BasicSecurityGroup"} ],
     
      "SubnetId": { "Ref" : "PublicAuthentication" }
    } ]
    
      }
     
    },
    
    "NagiosOrKabana": {
      
      "Type": "AWS::EC2::Instance",
      
      "Properties": {
        
         "InstanceType" : {
        	"Ref" : "InstanceType"
         },
         
         "ImageId" : { "Fn::FindInMap" : [ "ImageMap", { "Ref" : "AWS::Region" }, "MonitoringAMI" ]},
         
         "KeyName" : {
           "Ref" : "KeyName"
         },         
      "NetworkInterfaces": [ {
      "AssociatePublicIpAddress": "false",
      "DeviceIndex": "0",
      "GroupSet" : [ {"Ref" : "BasicSecurityGroup"} ],
     
      "SubnetId": { "Ref" : "PrivateMonitoring" }
    } ]
        
      }
      
    },
    
    "ApacheNiFi": {
      "Type": "AWS::EC2::Instance",
      
      
      "Properties": {
          "ImageId" : { "Fn::FindInMap" : [ "ImageMap", { "Ref" : "AWS::Region" }, "NiFiAMI" ]},
          "InstanceType" : {
        	"Ref" : "InstanceType"
         },
         "KeyName" : {
           "Ref" : "KeyName"
         },        
       "NetworkInterfaces": [ {
      "AssociatePublicIpAddress": "false",
      "DeviceIndex": "0",
      "GroupSet" : [ {"Ref" : "BasicSecurityGroup"} ],
     
      "SubnetId": { "Ref" : "PrivateDataLanding" }
    } ]
        
      }
     
    },
    
     "Cloudera": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        
         "ImageId" : { "Fn::FindInMap" : [ "ImageMap", { "Ref" : "AWS::Region" }, "ClouderaAMI" ]},
          "InstanceType" : {
        	"Ref" : "ClouderaInstanceType"
         },
         "KeyName" : {
           "Ref" : "KeyName"
         },         
       "NetworkInterfaces": [ {
      "AssociatePublicIpAddress": "false",
      "DeviceIndex": "0",
     "GroupSet" : [ {"Ref" : "BasicSecurityGroup"} ],
      "SubnetId": { "Ref" : "PrivateDataLake" }
    } ]
        
      }
    
    },
    
    "OpenOrRevoR": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        
         "ImageId" : { "Fn::FindInMap" : [ "ImageMap", { "Ref" : "AWS::Region" }, "RstatAMI" ]},
          "InstanceType" : {
        	"Ref" : "InstanceType"
         },
         "KeyName" : {
           "Ref" : "KeyName"
         },       
       "NetworkInterfaces": [ {
      "AssociatePublicIpAddress": "false",
      "DeviceIndex": "0",
     "GroupSet" : [ {"Ref" : "BasicSecurityGroup"} ],
      "SubnetId": { "Ref" : "PrivateAnalytics" }
    } ]
       
      }
    
    },
    
    "QlikQlikviewTableau": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        
         "ImageId" : { "Fn::FindInMap" : [ "ImageMap", { "Ref" : "AWS::Region" }, "VisualAMI" ]},
          "InstanceType" : {
        	"Ref" : "InstanceType"
         },
         "KeyName" : {
           "Ref" : "KeyName"
         },       
       "NetworkInterfaces": [ {
      "AssociatePublicIpAddress": "false",
      "DeviceIndex": "0",
  	  "GroupSet" : [ {"Ref" : "BasicSecurityGroup"} ],
      "SubnetId": { "Ref" : "PrivateVisualization" }
    } ]
        
      }
    
    }
    
  }
}
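
One small improvement worth considering before you run it: the SSHLocation parameter's ConstraintDescription promises a valid CIDR range, but nothing in the parameter actually enforces it. Adding an AllowedPattern along the lines of the one used in the AWS sample templates (shown below as a suggestion, not as part of the final stack above) lets CloudFormation reject malformed input at stack creation time:

    "SSHLocation": {
      "Description": "The IP address range that can be used to SSH to the EC2 instances",
      "Type": "String",
      "MinLength": "9",
      "MaxLength": "18",
      "Default": "0.0.0.0/0",
      "AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
      "ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
    }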

Go ahead and run this stack from Eclipse by right-clicking on the template page, clicking "Run on AWS", and then clicking "Create Stack".

You can then hop over to your console and watch it in action as your VPC, subnets, and EC2 instances are created.
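
If you would like the stack to report useful values back once creation finishes, you could also append an Outputs section. A minimal sketch (not part of the template above) might use Ref and Fn::GetAtt to surface the new VPC ID and the OpenVPN instance's public IP:

  "Outputs" : {
    "VPCId" : {
      "Description" : "ID of the newly created VPC",
      "Value" : { "Ref" : "VPC" }
    },
    "OpenVpnPublicIp" : {
      "Description" : "Public IP address of the OpenVPN instance",
      "Value" : { "Fn::GetAtt" : [ "OpenVPNSFTP", "PublicIp" ] }
    }
  }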

Some tips before I conclude the article:

  • There is no delete stack command in Eclipse, so I deleted mine using the AWS CLI with the following command: C:\Program Files\Amazon\AWSCLI> aws cloudformation delete-stack --stack-name SunTest2
  • You can also delete the stack from the CloudFormation web console, but it's not recommended to delete individual stack components manually; that also defeats the purpose of using CF.
  • If you need to update anything in your stack, you can do it in Eclipse with the “Update Stack” command.
  • Please restrict your stack names to alphanumeric characters and hyphens [A-Za-z0-9-], starting with a letter, or your stack creation will fail.
  • Resource Properties and Parameters are case sensitive, so Default is not the same as default.

That's it for now; stay tuned for follow-up articles that go into more detail about CF stack creation, updates, and troubleshooting.

AWS CloudFormation - Create, Edit, and Deploy

AWS CloudFormation (CF) is a service that allows enterprises to manage their infrastructure as code. CF templates are easy to read in both JSON and the recently supported YAML data serialization language.

You can use your favorite JSON editor to create and edit templates, and then upload them to the CloudFormation service to create a stack.

Vi, Notepad++, Sublime, and Atom are some of the well-known editors for JSON, but if you need more than that, such as the ability to validate JSON syntax and deploy your stack from the editor, then I would recommend the Eclipse Java EE IDE with the AWS Toolkit.

Please follow these steps to get your IDE installed and configured:

1- Download Eclipse: Eclipse Neon 64 bit download

2- Choose the Eclipse Java EE IDE

AWS Eclipse

 

3- Create your workspace; this is the directory where your projects will be saved.

4- Launch the Eclipse Marketplace, search for AWS, and install the AWS Toolkit for Eclipse 2.0.

5- Configure the connection to your AWS account by adding your IAM user access key ID and secret access key:


You can retrieve the access keys from your IAM console by clicking on the user name and then clicking "Create access key" under the "Security credentials" tab.

6- Your Eclipse for Java with the AWS Toolkit should now be set up and connected to your AWS account. You should be able to see any EC2 instances, EBS volumes, S3 buckets, and other resources you have created.

But make sure you have the correct region selected first.

7- Click on Cloud Formation under the Java “src” folder, and you will see stacks that you have created.

Double click on your stack to open it.

If your stack doesn't open the first time, close Eclipse, reopen it, and then try to open your stack again.

If you have worked with Eclipse before, you know about these quirks, and that every now and then you will have to change your workspace.

8- Tips from Amazon AWS site:

  • Only files that end in .template can be launched from the Eclipse IDE. If your file ends with another extension, such as .json, you will need to rename it first with a .template extension to use this feature.
  • Right-click in the template editor, and click Validate.
  • Your template will be validated for JSON correctness only; it will not be validated for CloudFormation correctness. A stack template validated in this way can still fail to launch or update.

9- If you don't have any CF stacks created, you can create one by clicking on "New AWS Java Project". (Click on the Amazon icon on the toolbar to get the menu.)

 

Choose AWS CF sample

9- Alternatively, you can start your template using the CF Designer in your AWS CloudFormation console, save it as a ".template" file, and drag and drop it under the CloudFormation menu in Eclipse.

Once you have drafted your template with the CF Designer in the AWS console, save it to local storage and then load it into Eclipse to add properties to your resources, finalize it, and run it.

Eclipse is by no means perfect: I think the JSON validation is not good enough, and it should also validate CF syntax. I am also trying to figure out curly-brace matching, so I know where the closing brace is without having to count. I checked the project's GitHub repository, and someone has already put in a feature request for that.

I think it will only get better from here. I am impressed with what the AWS Toolkit has to offer, and I'm looking forward to learning more.

In the next blog, I will write about creating a CF template to deploy a stack from Eclipse.

 

Clearing the Cloud Computing Fog

My day-to-day job is centered around architecting the right solution to fit each customer's needs and budget. Recently, I have noticed a trend of many Fortune 500 companies moving their servers to the cloud and asking us, as a vendor, to move along with them to their new home. So when I get on calls to discuss projects with clients, I am faced with new terminology and new expectations of how things should work.

Once I gather the requirements, I browse to the Amazon AWS website, which contains a treasure trove of information about the different AWS services, from introductory videos to FAQs and white papers.

It was about six months ago that I decided to take an Amazon AWS class with an emphasis on big data, and it made a lot of sense to me. I enjoyed the fact that I could use the networking, security, programming, Linux administration, and other knowledge and experience that I had accumulated throughout years of working in IT. I became convinced that, going forward, companies that don't have a strategy to move their computing to a cloud model will be left behind.

I used my CCNA knowledge to subnet my VPC, my CISSP training to think about security every step of the way, and my solutions architect experience to think about fault tolerance, scalability, and budget.

My decision to get AWS certified was driven by the trend that I was seeing from my customers, and the need to help them achieve their goals of moving to the cloud seamlessly and within budget.

Here is a breakdown of my test results from the AWS Certified Solutions Architect exam:
Overall Score: 76%

Topic Level Scoring:
1.0 Designing highly available, cost-efficient, fault-tolerant, scalable systems: 72%
2.0 Implementation/Deployment: 66%
3.0 Security: 90%
4.0 Troubleshooting: 80%

The exam results point to the fact that I haven't been doing a lot of hands-on implementation and deployment, so my goal is to work on the domains I scored low in by designing and deploying systems in the cloud. I am also taking the A Cloud Guru AWS Certified SysOps Administrator class to help remedy the weaknesses identified in my test results.

I will also build on my strengths in security and troubleshooting, which made me think about taking the beta AWS security certification that was just released during re:Invent 2016. I will post about my experience if I end up pursuing that.

Tips on taking the exam:

1- Register for the test. If you don’t have a deadline, then you will keep pushing it off indefinitely.

2- Pace yourself. There is a lot of material that’s covered in the different domains. Give yourself enough time to study, do the labs, and mock quizzes.

3- Get into a study group. You get more work done when you study with like-minded people working towards the same goal.

4- Buy a class on Udemy or another e-learning site. I recommend A Cloud Guru; they cover the material very well and offer quizzes and exam tips.

5- In the exam, don't flag too many questions to review later, or you will end up with 30 questions to review and only 10 minutes left. 🙂

6- Understand the concepts, and if you don't understand something by reading, hop onto your AWS account and do the lab. Yes, you should register for an AWS account; you will get one year of free-tier AWS services.

7- Don't rely on online quizzes alone, as some of the answers are wrong. I prefer quizzes that explain why one answer was chosen over the others. Take a lot of quizzes, and if you answer wrong, or you think the given answer was wrong, follow your instinct and do proper research until you are confident of the solution. If you do enough quizzes, you may get lucky and encounter similar questions on the test. But be careful: the answers will be worded differently and might throw you off if you don't understand the concepts.

8- Have fun! Cloud computing is fun, and Amazon AWS documentation is top notch. I have never seen an organization produce such quality documentation and white papers. Aghhh, I remember those dreaded IBM Redbooks; man, we have come a long way!!


Cryptography simplified

Cryptography is both a science and an art, where mathematics, algorithms, statistics, and real-world use cases of securing communications over public channels are all considered and studied. Not long ago, the US government banned the export of the technology, since it was classified as a munition.

Initially, most of the algorithms and standards came out of the NSA, so one would assume a back door was also available to the agency. As with every technology, it can be used by law-abiding citizens as well as criminals. You wouldn't ban cars because criminals use them to escape law enforcement after robbing a bank, would you? Apply the same logic to the government's attempts at banning or controlling cryptography. I don't want to go into a discussion of car registrations and plates, as those could be defeated the same way the Clipper chip or any other attempt at key escrow would have been defeated.
Why do we need cryptography, and what are all the algorithms and protocols used for? How can one use it in personal life or in business? It can get very confusing when you try to study cryptography because of the different types of algorithms. You spend a couple of hours reading about the wonderful workings of an algorithm, only to find out later that it was defeated and is nowadays easy to crack. Or you read about another wonderful algorithm that can scramble your plain text into an impossible-to-decipher ciphertext; the only catch is that you have no way of securely and economically transferring the secret key needed to decipher that message to the intended party.
The way I simplify things for my own understanding is by breaking problems into smaller components and attacking them one at a time; in the end, the big picture becomes clearer. Cryptography enables a message to be securely transmitted or stored, so it enables confidentiality. It provides integrity to a message or any digital asset by producing a message digest; think of a password hash, or the message digest of any software that you download from the internet. Lastly, it provides authentication, as it can be used to create a digital signature, and it ensures non-repudiation of the document's source. Remember CIA: Confidentiality, Integrity, and Authentication; don't confuse this with the Availability of the CIA triad.
Let's take them one at a time. How can you fulfill confidentiality? For instance, you want to transmit a message to a client securely, and then store that communication on your storage device securely as well; both qualify as confidentiality, since you are transmitting and storing. First, you encrypt the plain text into ciphertext with a secret key that you possess, then you send it on its way via your favorite e-mail client. It hits many routers and servers until it gets to the recipient's mailbox. They open the message, and it's a scramble of letters and characters. Your client calls you up and asks you for the key to unscramble, or decipher, the message!
So you can encrypt all you want, but without the key, the encrypted message is worthless. The same way it was worthless in transit, so no malicious user eavesdropping on your communication channel could read it, it is also worthless to the receiver, since they don't have the key. Your next move is to schedule a flight from San Francisco, where you live, to New York, where your client is located, to hand over the key that you saved on a thumb drive. You can see how this approach to sharing a secret key gets expensive quickly! Governments can certainly afford it, since they don't mind spending taxpayers' money, but a business would quickly go bankrupt if it used this method to share a secret key for all of its encrypted communication.
The solution is public key cryptography, or asymmetric encryption, where the geniuses of encryption (read: algorithm creators) figured out, through different mathematical formulas, that an encryption key can have two components: a public component that is accessible to everyone, and an associated private component that is kept secret. In the scenario described above, that would save me a trip to New York: I would use my client's public key to encrypt my message, which can then only be decrypted by his private key.
In his reply to my important secret message, he can use my public key, which I attached to my message in plain text (or which he could also have gotten from my website), to encrypt his response. I would use my secret private key component to decrypt and read his message.
So initially we used a symmetric key, where one key both encrypts and decrypts a message, but we had an issue with key management, as we had to make a long trip to share that key. The asymmetric key solution solved our key sharing issue: it's safe to share the public key, but it's bad to share your private key! We will discuss the technical details of each method in upcoming posts.
As you read different literature on the subject online, remember that some technologies that implement cryptography use both symmetric and asymmetric methods to provide confidentiality (PKI, for instance): the symmetric key is used to encrypt the message, and the asymmetric keys are used to distribute the symmetric key.
Another use of cryptography is to provide authentication and non-repudiation, so you cannot claim that it's not your signature on a digital document! Since you are the only one who owns the signing key, it must be you, unless you reported it stolen.

Which key can provide non-repudiation? Obviously, it should be the private key that only you have access to, not a shared symmetric key, since anyone holding a shared key could have signed the document and we wouldn't know who it was. You would use the private portion of an asymmetric key pair to sign a document and send it in the open, to be read by recipients who are certain it's from you because it carries your digital fingerprint.
Now for a quick explanation of integrity and what it tries to accomplish. Let's use the example of password storage. Have you ever called your bank, chosen the prompt to be connected to online banking support, and, once you got someone on the phone, identified yourself as the owner of the bank account, yet the support agent couldn't tell you what your password is?
He has access to everything, yet can't even give you the first couple of letters of your password so you can log on to your account and do the important banking business you were planning on doing, if not for the constant password errors you kept getting!
Well, the reason the support agent doesn't know your password is that it's scrambled beyond recognition. Yes, even if you escalate to speak to a manager, or even if you had dinner with the CEO of the bank the day before, that won't get you the lost password.
Simply put, the password was scrambled by a hashing algorithm, a one-way transformation, before it was stored in the database. The support agent cannot call up the database administrator and tell him that Mr. VIP is on the phone and needs to know his password now! It's impossible; it's a one-way street!
Well then, how does the system know who I am when the people who work there don't?
The answer is that when you enter your password on the website, the same hashing function is applied to it before it is compared to the value stored in the database table, in the row where your username sits happily in plain text. If the two values are equal, you are allowed access. Integrity has been accomplished in this case, since even a social interaction didn't reveal the secret!
There is more to it than I have presented in this article, as you will discover when doing some googling on the different subjects. But if you get the big picture, this should be a good start for further research on the different algorithms used for different types of cryptography, their strengths and weaknesses, salting before hashing, and combining different cryptographic methods for stronger encryption.