Microsoft SQL Server in Azure versus AWS

Microsoft SQL Server runs on all clouds; in this article we focus on the two leading cloud providers, Amazon AWS and Microsoft Azure.
We are going to highlight the main points an IT executive or professional should consider when deciding between running Microsoft SQL databases in Azure's SQL Database Platform as a Service (PaaS) offerings versus Amazon's AWS RDS.
We won't discuss IaaS, since its cost performance depends on many factors and customizations that vary from install to install. In the absence of compelling reasons not to migrate to PaaS, IaaS carries a resource management burden that adds to the overall cost. Even in that case, Azure comes with extra benefits: installing the SQL IaaS Agent extension on your SQL box will "PaaS-ify" your server by adding features such as automated backup and patching, flexible licensing, and more.
I must admit that I am biased since I work for Microsoft; nevertheless, what I write about in my blog is strictly my opinion and my interpretation of articles and studies I have read as well as my years of experience working with products and services in both clouds.
For folks who don't have time to read the whole article and want the verdict: if you are in the process of selecting a cloud vendor, I strongly advise running any significant Microsoft SQL workload in Azure. Enterprises that have already moved most of their workloads to AWS should consider a multi-cloud strategy, since the cost-performance rewards will make up for the extra effort of enabling an additional cloud in terms of networking, connectivity, reskilling, and app colocation requirements. The reasons are discussed below, with links to articles and studies that support this conclusion. Mainly: in Azure you have the right to use your maintained licenses in PaaS, whereas in AWS you can only use them in IaaS; Azure PaaS is highly available and offers a better SLA than AWS; and your compute cost in Azure PaaS is much lower than in AWS.
A major point to consider is that Microsoft owns the product and is constantly updating and innovating in both the cloud PaaS offering, which contains the latest bits, and the on-premises offering, SQL Server 2022, which is touted as cloud-enabled.
Time to go over the details of the benefits of running Microsoft SQL Server on Azure. The first benefit is licensing! Your investment in acquiring SQL licenses and your diligence in paying yearly maintenance will pay off if you move your on-premises or other-cloud SQL workload to Azure PaaS: you get all the benefits of PaaS while paying only for compute. You cannot do that in AWS RDS; there you have to lease those licenses from Amazon.
This might not be important for a small Microsoft SQL workload that you are planning to phase out eventually in favor of an OSS database, but worth looking into if your Microsoft SQL footprint is significant and long term.
It's worth noting that Azure also offers comparable PaaS database services for PostgreSQL, MariaDB, and MySQL.
Let's look at the licensing benefits in detail:

Azure SQL Hybrid Benefits

If you have stopped paying for Software Assurance (SA)/maintenance on your licenses and you still want to move to the cloud, you will get similar benefits from both clouds: license mobility in AWS, and Hybrid Benefit in Azure by moving those databases to a dedicated host. Even if you have no licenses to move over, either because it's a new workload or an existing one whose SA has lapsed, we will show that Azure is still more cost effective.

Azure SQL Database is built for high availability and has a better SLA than AWS. Since Azure SQL Managed Instance (MI) is the closest comparison to AWS RDS, we will compare their SLAs. AWS offers only two nines for a single DB instance and three nines for a Multi-AZ DB instance, while Azure SQL MI offers four nines. So why is the SLA so low for a single DB instance in AWS? The answer is straightforward: it is a single instance that does not use Microsoft SQL Server's high availability technology, namely Always On. As the AWS documentation puts it:

“Amazon RDS supports Multi-AZ deployments for Microsoft SQL Server by using either SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs).”

For better availability you need a Multi-AZ deployment in AWS, which unfortunately doubles your cost and still doesn't reach the four-nines (99.99%) SLA that Azure offers.
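To put those nines in perspective, here is a quick back-of-the-envelope calculation (plain Python, vendor-neutral) of how much monthly downtime each SLA level tolerates:

```python
# Downtime allowed per 30-day month for a given availability SLA.
def allowed_downtime_minutes(sla_percent, minutes_per_month=30 * 24 * 60):
    """Minutes of downtime permitted per month at a given availability percentage."""
    return minutes_per_month * (1 - sla_percent / 100)

for label, sla in [("Two nines (99%)", 99.0),
                   ("Three nines (99.9%)", 99.9),
                   ("Four nines (99.99%)", 99.99)]:
    print(f"{label}: {allowed_downtime_minutes(sla):.2f} minutes/month")
```

At two nines that is over seven hours of permissible downtime a month; at four nines it is under five minutes.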

Because Azure SQL MI has two tiers, General Purpose and Business Critical, it's best to compare AWS RDS Multi-AZ to Azure SQL Business Critical, since both use a flavor of Always On with read replicas. Once that comparison is done and you appreciate Azure's superior cost performance, you can evaluate Azure SQL MI General Purpose, which may deliver enough performance for your needs and is cheaper than Business Critical. There is no General Purpose equivalent in AWS, so we can't use it in the comparison.
A quick pricing exercise will reveal the cost benefits of running SQL Server in Azure versus AWS PaaS. In the Azure pricing calculator you have the option to apply Azure Hybrid Benefit if you own licenses and are current on SA. In both the AWS and Azure calculators you can see the savings attained by reserving compute.
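To make the exercise concrete, here is a rough sketch of the hybrid-benefit math. The hourly rates are hypothetical placeholders, not real prices; pull actual numbers from the two pricing calculators for your region and instance size:

```python
# Illustrative only: the hourly rates below are hypothetical placeholders.
# Substitute real values from the Azure and AWS pricing calculators.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate):
    return hourly_rate * HOURS_PER_MONTH

license_included = monthly_cost(4.00)  # placeholder: compute + leased SQL Server license
byol_compute     = monthly_cost(2.50)  # placeholder: compute only, with Azure Hybrid Benefit

savings = license_included - byol_compute
print(f"Monthly savings: ${savings:,.2f} "
      f"({100 * savings / license_included:.1f}% of the license-included price)")
```

The point of the sketch is simply that bringing your own license removes the license component from the hourly rate, and the delta compounds every month the workload runs.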
For a detailed look at cost performance, check out this study conducted by Principled Technologies and sponsored by Microsoft.
In summary, the study found:

“Azure SQL Managed Instance significantly outperformed Amazon RDS. Offering between double and five times the performance depending on the workload, the three SQL Managed Instances we tested also offered better overall value, reducing price/performance by as much as 90.3 percent”

The study is well documented and replicable, and the savings are high. If you are wary of vendor-sponsored studies, spend the time reading their comparison methodology, then go ahead and run a pilot in both clouds to validate the results with your database team. You can request assistance from both your Azure and AWS sales teams, and ask about potential funding of full or partial costs. If Principled Technologies' AWS RDS configuration is not optimal, you can change it and follow best practices from AWS.

The savings of running Microsoft SQL Server in Azure versus AWS or other clouds are significant enough to warrant considering a multi-cloud strategy. As you move your SQL to Azure, you will find that other services in the Azure data estate are also competitive or superior, making your investment in enabling multi-cloud well worth it. Azure Synapse Analytics versus Redshift, Power BI versus QuickSight, and Azure Machine Learning versus SageMaker would be good comparisons to support your decision. You will find that the Azure services are very competitive, integrated with Azure AD, built with layers of security, intuitive thanks to their GUI-based options, and versatile for users who prefer a code-first approach. The latter will enjoy the solid integration of the VS Code IDE with Azure.

For companies that are still deciding which cloud to move to and have a significant Microsoft SQL workload on premises, Azure should be your first consideration. To cement your decision, use every resource available from your sales team to help fund POCs, learn about different services, and skill up your enterprise.

Azure Synapse and Dynamics 365 CDM Data – An Overview

Microsoft is leveraging Azure to enable extensibility across Dynamics 365 and Power Apps by making data easily accessible to clients via the export to Azure Data Lake feature. Before that could happen, Microsoft needed to collaborate with other industry heavyweights to create a data format standard named the Common Data Model (CDM), which is the format used to save data in the data lake.

In this article, we are going to focus on how to read and write data saved in the CDM format, which is a hierarchy of folders storing the data files in CSV and the schema in JSON. This enables any compute type to access the data, provided it understands how to query the CSV files while fetching their schema from the model.json or the newer manifest.cdm.json file.
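To make that layout concrete, here is a minimal sketch of walking a model.json to discover entities, their columns, and their CSV partition locations. The entity, column, and account names below are invented for illustration; a real file exported from Dataverse or F&O will contain your own entities:

```python
import json

# A trimmed, hypothetical model.json -- names and paths are invented for illustration.
model_json = """
{
  "name": "SalesCube",
  "modifiedTime": "2021-03-30T00:00:00Z",
  "entities": [
    {
      "name": "Customers",
      "attributes": [
        {"name": "CustomerId",   "dataType": "int64"},
        {"name": "CustomerName", "dataType": "string"}
      ],
      "partitions": [
        {"location": "https://<account>.dfs.core.windows.net/lake/Customers/part-00000.csv"}
      ]
    }
  ]
}
"""

model = json.loads(model_json)
for entity in model["entities"]:
    columns = [(a["name"], a["dataType"]) for a in entity["attributes"]]
    csv_files = [p["location"] for p in entity["partitions"]]
    print(entity["name"], columns, csv_files)
```

This is exactly the traversal the SQL scripts later in this article perform with OPENJSON: entities carry the attribute (column) definitions, and partitions point at the CSV files holding the rows.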

To begin this process, we need to export our data to the data lake from either Dataverse or Dynamics 365 Finance and Operations (F&O). The reason other Dynamics 365 and Power Apps products are not called out is that they use Dataverse as their data backend, while F&O uses Azure SQL Database. For folks struggling to grasp what Dataverse is: it is the new branding of the Common Data Service (CDS), and it can be considered database as a service. Apps and users interface with Dataverse through a set of API calls. Under the hood, the product combines Azure services such as Azure SQL Database, Azure Cosmos DB, Azure Blob Storage, Azure Search, Azure Functions, Azure Service Bus, Azure Event Hubs, and additional components such as App Service and a load balancer. These services are orchestrated and managed by Microsoft, which hides the complexity. Users, however, like to have access to their data for further analytics and reporting, which is enabled by the export to data lake option and benefits all Dynamics 365 and Power Apps products that use Dataverse as their data backend.

For the ERP system, F&O's export to data lake is a work in progress. It is currently being tested with a select set of customers in private preview; after that, it should open to more testers in public preview before becoming generally available. We will get ahead of the public preview to prepare ourselves with the knowledge and tools necessary to analyze that data once it lands in our data lake. The same learning applies to CDM data synced to the data lake from Dataverse, which is generally available, or even to Power BI dataflows. If you need CDM data to experiment with, you can use the SQL to lake solution to export SQL data to a data lake in CDM format, and you can leverage this limited time offer for Azure Synapse Analytics to save on cost. Once the data lands in the data lake in CDM format, it can be acted upon by different compute types such as Power BI dataflows, Azure Databricks, or Azure Synapse Analytics. We will focus on the latter, since it offers the capability of attaching a data lake to a workspace and provides two types of compute to read the data, transform it, and use it for ML model training, among several other use cases.

It's time to share some code snippets that I gathered from different docs and modified as needed. We will start by reading CDM data from an attached data lake with Apache Spark using a Synapse notebook. In the Synapse Develop hub, click the plus sign to create a new notebook, then copy and paste each code snippet into a cell.

# Specifying appid, appkey, and tenantid is optional in spark-cdm-connector-assembly-0.16.jar
# with a Premium Databricks cluster and Synapse (a managed identity can be used instead)
appid = "&lt;appId&gt;"
appkey = "&lt;appKey&gt;"
tenantid = "&lt;tenantId&gt;"

storageAccountName = "&lt;storageAccount&gt;.dfs.core.windows.net"

# Reading the content of the entity SalesCube_Customers into the readDf dataframe
readDf = (spark.read.format("com.microsoft.cdm")
  .option("storage", storageAccountName)
  .option("manifestPath", "&lt;container&gt;/&lt;pathToCdmFolder&gt;/model.json")
  .option("entity", "SalesCube_Customers")
  .load())

# Create a Spark SQL database and write the contents of our dataframe to the Customers table.
# Please note that this database and its content can be read with the built-in SQL pool compute.
spark.sql("CREATE DATABASE IF NOT EXISTS financeopdb")
readDf.write.mode("overwrite").saveAsTable("financeopdb.Customers")

# Changing the context from PySpark to SQL and reading the content of the newly created table
%%sql
SELECT * FROM financeopdb.Customers

That was pretty straightforward. If you need to transform the data before writing it back, add the appropriate code to accomplish that. In this example we wrote data to a Spark SQL database that can be accessed by the built-in SQL compute or even Power BI. If you need to write the dataframe back to storage in CDM format, follow these instructions.

To read CDM data with the built-in SQL compute (on demand), we will use these scripts from Vesa Tikkanen's GitHub repo. They were written for reading Power BI dataflow CDM folders but can be repurposed for any CDM data produced by Dataverse or Dynamics 365 F&O. I have made slight modifications to the scripts to account for some differences in the folder hierarchy, and also changed a datatype that Synapse was complaining about. Below are the three scripts, corresponding to the three steps that follow creating a schema.

Step 1: Create a view with information about entities and their model.json
IF EXISTS (SELECT 1 FROM sys.views AS v
           INNER JOIN sys.schemas AS s ON v.schema_id = s.schema_id
           WHERE v.name = 'MyDynamixModels' AND s.name = 'dynamixfo')
    DROP VIEW [dynamixfo].[MyDynamixModels];
GO

CREATE VIEW [dynamixfo].[MyDynamixModels] AS
/*
    Originally a Power BI Dataflow reader for Synapse.
    Reader created by Vesa Tikkanen.
*/
SELECT
     [result].filepath(1) AS [rootfolderName]
    ,[result].filepath(2) AS [EntityOrCubeName]
    ,JSON_VALUE(jsoncontent, '$.name') AS EntityCubeName
    ,CONVERT(datetime2, JSON_VALUE(jsoncontent, '$.modifiedTime')) AS modifiedTime
    ,JSON_VALUE(entity.Value, '$.name') AS entityName
    ,JSON_VALUE(partitions.Value, '$.location') AS FileName
    ,JSON_VALUE(attributes.Value, '$.name') AS ColumnName
    ,JSON_VALUE(attributes.Value, '$.dataType') AS ColumnDataType
    ,CASE WHEN JSON_VALUE(attributes.Value, '$.dataType')='date' THEN 'date'
      WHEN JSON_VALUE(attributes.Value, '$.dataType')='dateTime' THEN 'datetime2'
      WHEN JSON_VALUE(attributes.Value, '$.dataType')='int64' THEN 'bigint'
      WHEN JSON_VALUE(attributes.Value, '$.dataType')='string' THEN 'nvarchar(4000)'
      WHEN JSON_VALUE(attributes.Value, '$.dataType')='decimal' THEN 'decimal(18,6)'
      WHEN JSON_VALUE(attributes.Value, '$.dataType')='boolean' THEN 'bit'
      WHEN JSON_VALUE(attributes.Value, '$.dataType')='double' THEN 'float'
     ELSE JSON_VALUE(attributes.Value, '$.dataType') END AS ColumnDataTypeSQL
    ,[result].filepath() AS modelFileName
FROM OPENROWSET(
        -- HERE CHANGE to the Azure Data Lake Gen2 account that you're using for your dataflows
        BULK 'https://&lt;storageaccount&gt;.dfs.core.windows.net/&lt;container&gt;/*/*/model.json',
        FORMAT = 'CSV',
        FIELDQUOTE = '0x0b',
        FIELDTERMINATOR = '0x0b',
        ROWTERMINATOR = '0x0b'
    )
    WITH (
        jsonContent varchar(MAX)
    ) AS [result]
    CROSS APPLY OPENJSON(JSON_QUERY(jsoncontent, '$.entities')) AS entity
    CROSS APPLY OPENJSON(entity.Value, '$.attributes') AS attributes
    CROSS APPLY OPENJSON(entity.Value, '$.partitions') AS partitions

Step 2: Create Stored Procedure that takes three parameters and returns a view with entity data
/****** Object:  StoredProcedure [dynamixfo].[spcreateModelView]    Script Date: 3.30.2021 ******/

CREATE PROCEDURE [dynamixfo].[spcreateModelView]
    @rootfolderName nvarchar(200), @EntityOrCubeName nvarchar(200), @entityName nvarchar(200)
AS
/*
    Power BI Dataflow reader for Synapse.
    Reader created by Vesa Tikkanen.
*/
BEGIN
    DECLARE @columnsChar nvarchar(max)='';
    DECLARE @columnsCharSel nvarchar(max)='';

    -- Check whether the workspace, dataflow, or entity name contains incompatible characters.
    IF CHARINDEX('[',@rootfolderName)>0 OR CHARINDEX(']',@rootfolderName)>0 OR
       CHARINDEX('[',@EntityOrCubeName)>0 OR CHARINDEX(']',@EntityOrCubeName)>0 OR
       CHARINDEX('[',@entityName)>0 OR CHARINDEX(']',@entityName)>0
    BEGIN
        PRINT 'Escape characters detected. Cannot create.';
        RETURN;
    END

    select @columnsChar = STRING_AGG(colname,', ')  from 
    (select  '[' + ColumnName + '] ' + CASE 
		when ColumnDataTypeSQL='datetime2' then 'nvarchar(200)' 
		when ColumnDataTypeSQL='date' then 'nvarchar(200)' 
		else ColumnDataTypeSQL END as colname  from  [dynamixfo].[MyDynamixModels] where rootfolderName=@rootfolderName
        and EntityOrCubeName=@EntityOrCubeName and entityName=@entityName
        ) as a;

    select @columnsCharSel = STRING_AGG(colname,', ')  from 
    (select  CASE 
		when ColumnDataTypeSQL='datetime2' then 'convert(datetime2,[' + ColumnName + '],101) AS [' + ColumnName + '] ' 
		when ColumnDataTypeSQL='date' then 'convert(date,[' + ColumnName + '],101) AS [' + ColumnName + '] ' 
		else '[' + ColumnName + '] ' END as colname  from  [dynamixfo].[MyDynamixModels] where rootfolderName=@rootfolderName
        and EntityOrCubeName=@EntityOrCubeName and entityName=@entityName
        ) as a;

    DECLARE @sqlcmd nvarchar(max);

    SET @sqlcmd = N'DROP view if exists dynamixfo.[' + @rootfolderName + '_'+ @EntityOrCubeName + '_' + @entityName + N'];';
    EXECUTE sp_executesql @sqlcmd;
    --select @sqlcmd;

    select @sqlcmd = N'CREATE view dynamixfo.[' + @rootfolderName + '_' + @EntityOrCubeName + '_'  + @entityName + N']
    AS ' + STRING_AGG(subselect, N'
    UNION ALL
    ')  from 
    (select N'SELECT ' + @columnsCharSel + N'
        FROM OPENROWSET(
            BULK ''' + REPLACE(FileName,'%20',' ') + N''',
            FORMAT = ''CSV''
        )
        WITH (
    ' + @columnsChar + N'
        ) AS [result]'
    as subselect  from  [dynamixfo].[MyDynamixModels] where rootfolderName=@rootfolderName
        and EntityOrCubeName=@EntityOrCubeName and entityName=@entityName group by FileName
        ) as a;

    --select @sqlcmd;

    EXECUTE sp_executesql @sqlcmd;
END

Step 3: Execute the stored procedure
-- You can repeat this step for all the views that you need to create.
-- Note the examples are for Aggregate Measurements, but F&O data lake sync will expose raw tables as well.
EXECUTE [dynamixfo].[spcreateModelView] 'AggregateMeasurements','Recruiting','Recruiting_Performance';

In terms of data refresh: once created in the SqlOnDemand database, the views point at the data lake files, so as the data sync process updates those files, the views return the latest information when executed. I have tested this manually by changing data in the CSV files, and the views do pick up the changes. This solution can be used with the serverless SQL compute until the OPENROWSET function can handle the CDM format natively. There is a feature request under review, which we hope will make it onto the roadmap, as the CDM format is becoming widely used across the Microsoft data estate.

Azure SQL Managed Instance YAQS

Azure SQL Managed Instance (MI) is a Platform as a Service (PaaS) offering that lives in a VNet, inside a subnet, and is assigned IP addresses from the private IP address space.

Power Bi Embedded with PHP CURL- App Owns Data

Power BI Embedded will enable you to embed powerful dashboards, reports, and tiles into your application. An ISV can leverage the continuous development efforts of the Microsoft Power BI team and embed several types of reports for their users and customers, while branding the application with their own logo and color scheme.
Before you get started, you will need to double-check the business requirements. Are you embedding for organization users? For external users such as customers, vendors, or suppliers? Embedding in SharePoint or Microsoft Teams? Embedding a public dashboard that uses a public dataset?
This discovery stage is very important, as you might not need to program anything if you are embedding a Power BI dashboard into a SharePoint site, or if you are publishing an iframe of a "Publish to Web" URL with a public dataset or unsecured content.
Power BI Embedded requires you to buy one of three capacity SKUs:

There are three areas where you must do some work to create this solution:
1- Power BI: You need a Power BI Pro account to be able to create an app workspace, which will be the container for the reports and dashboards you will embed. In addition, you will need either an Azure Power BI Embedded subscription or a Power BI Premium subscription to allow your users to consume the reports.

Then you will need to link your app workspace to your provisioned capacity.

2- Microsoft Azure: This is where you register the Azure AD application that gives you the credentials to embed Power BI content. This is discussed in more detail below. If you are going to use the Azure Power BI Embedded resource, you would add the resource capacity here as well.

3- Your Application: You can host it anywhere and use any programming language to communicate with Azure AD and the Power BI REST API. You will also need to download or link to the Power BI JavaScript library.
Once you ascertain that you need Power BI embedded in your own application, you will need to decide between the two types of embedding:
User owns data: This is for a scenario where an organization wants its users to log in to its website and see their Power BI reports. Users need a Power BI account, either Free or Pro, and the organization needs to provision Premium capacity.
App owns data: This is for organizations serving reports to customers who don't have a Power BI license and are usually unaware that the reports are provided by Power BI. At least one Power BI Pro user is required to create the app workspace and reports, as well as either Azure Power BI Embedded capacity (A SKU) or Office Premium capacity (P SKU).
This article will discuss embedding for the "App owns data" scenario, so let's get started.
First, study the image below, as you will programmatically need to go through this authorization grant flow to access the Power BI resource:

Step 1: Get an OAuth2 token through a cURL POST request to the Azure AD API.
To get started with step 1, we must register an application in Azure AD. You can do that through the Azure management portal > Azure Active Directory > App registrations. Make sure you register a native application, not a web app.

You can also use the Power BI App Registration tool. After you register the app, you need to assign it the required permissions, as illustrated in this updated article.
You are ready to get started with coding. (The code below is mixed with comments and image illustrations; you can download the complete source from GitHub.)

<?php
// Step 1: request an OAuth2 token from Azure AD (v1 endpoint, resource-owner password grant)
$curl1 = curl_init();
curl_setopt_array($curl1, array(
    CURLOPT_URL => "https://login.microsoftonline.com/common/oauth2/token",
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => http_build_query(array(
        'grant_type' => 'password',
        'scope'      => 'openid',
        'resource'   => 'https://analysis.windows.net/powerbi/api',
        'client_id'  => '', // Registered app Application ID
        'username'   => '', // Your Power BI Pro account
        'password'   => ''  // Password for the above user
    ))
));
$tokenResponse = curl_exec($curl1);
$tokenError = curl_error($curl1);
// Decode the result and store the access_token in the $embeddedToken variable:
$tokenResult = json_decode($tokenResponse, true);
$token = $tokenResult["access_token"];
$embeddedToken = "Bearer " . $token;
The above code has accomplished the following:
Application identity with OAuth 2.0 client credentials grant
1. First, the server application authenticates with Azure AD as itself, without any human interaction such as an interactive sign-on dialog. It makes a request to Azure AD's token endpoint, providing the credential, Application ID, and Application ID URI.
2. Azure AD authenticates the application and returns a JWT access token that is used to call the web API.
The code below accomplishes the third step in the authentication/authorization process:
3. Over HTTPS, the web application uses the returned JWT access token to add the JWT string, with a "Bearer" designation, to the Authorization header of the request to the web API. The web API then validates the JWT token, and if validation is successful, returns the desired resource.
Let's define what a group is before introducing a group id in the code, because it's a very important concept, and understanding it will save you time going forward.
Groups are a collection of unified Azure Active Directory groups that the user is a member of. In the Power BI service these are referred to as app workspaces. To learn how to create a group, see Create an app workspace.
/* Use the token to get an embed URL using a GET request */
$group_Id = 'f6xzs20-850b-04az-9hga-3d00087191'; // Your Power BI app workspace group id.

The easiest way to get your group id is to browse to your app workspace in Power BI and copy the alphanumeric string after "groups" in the URL. You can also get it programmatically through a REST call.
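For instance, here is a minimal Python sketch of that REST call against the documented Groups endpoint; the access token is assumed to come from the earlier authentication step, and the live call is left commented out since it needs a valid token and network access:

```python
import urllib.request

# Sketch: build the GET request for the Power BI REST API "Groups" endpoint,
# which returns the workspaces (and their ids) the caller can access.
def list_groups_request(access_token):
    url = "https://api.powerbi.com/v1.0/myorg/groups"
    return urllib.request.Request(url, headers={"Authorization": "Bearer " + access_token})

# To actually call it (requires a valid token and network access):
# import json
# with urllib.request.urlopen(list_groups_request(token)) as resp:
#     for workspace in json.load(resp)["value"]:
#         print(workspace["id"], workspace["name"])
```

Each returned workspace object carries the `id` GUID you would assign to `$group_Id` in the PHP code below.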

/* Use the token to get the reports in the workspace through a GET request */
$curl2 = curl_init();
curl_setopt($curl2, CURLOPT_URL, 'https://api.powerbi.com/v1.0/myorg/groups/' . $group_Id . '/reports/');
curl_setopt($curl2, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl2, CURLOPT_ENCODING, "");
curl_setopt($curl2, CURLOPT_MAXREDIRS, 10);
curl_setopt($curl2, CURLOPT_TIMEOUT, 30);
curl_setopt($curl2, CURLOPT_CUSTOMREQUEST, "GET");
curl_setopt($curl2, CURLOPT_HTTPHEADER, array(
    'Authorization: ' . $embeddedToken,
    'Cache-Control: no-cache'
));

$embedResponse = curl_exec($curl2);
$embedError = curl_error($curl2);

if ($embedError) {
    echo "cURL Error #:" . $embedError;
} else {
    $embedResponse = json_decode($embedResponse, true);
    // Pick the report you want to embed from the returned array (index 0 is the first report).
    $embedUrl = $embedResponse['value'][0]['embedUrl'];
}


Our application has now retrieved the bearer token from Azure AD and presented it to the Power BI REST API along with the group id where the reports reside. The REST service returns an array with the report names, their IDs, embed URLs, and some other data.
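The shape of that response can be sketched as follows; the report name, id, and URL in this payload are invented for illustration:

```python
import json

# Hypothetical example of the JSON the Reports endpoint returns (values are invented).
sample_response = """
{
  "value": [
    {"id": "11111111-aaaa-4444-bbbb-222222222222",
     "name": "Sales Report",
     "embedUrl": "https://app.powerbi.com/reportEmbed?reportId=11111111-aaaa-4444-bbbb-222222222222"}
  ]
}
"""

reports = json.loads(sample_response)["value"]
for r in reports:
    print(r["name"], "->", r["embedUrl"])
```

The PHP code above does the equivalent with `json_decode`, then picks one element's `embedUrl` to hand to the JavaScript embed configuration.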
Next, we will use jQuery and Power BI JavaScript file to render the report:

<?php
include 'pbi-logic.php';
include 'header.php';
?>
<div id="reportContainer"></div>
<script>
// Get models. models contains enums that can be used.
var models = window['powerbi-client'].models;
// Embed configuration used to describe what to embed and how to embed it.
// This object is used when calling powerbi.embed.
// It also includes settings and options such as filters.
var embedConfiguration = {
    type: 'report',
    id: '', // the report ID
    embedUrl: "<?php echo $embedUrl ?>",
    accessToken: "<?php echo $token; ?>"
};
var $reportContainer = $('#reportContainer');
var report = powerbi.embed($reportContainer.get(0), embedConfiguration);
</script>

You should be done now. Make the required changes to the embed configuration as needed to turn on and off different aspects of your report, dashboard, or tile. To create a report from your application, for instance, you need to supply a dataset:

var embedCreateConfiguration = {
    type: 'report',
    datasetId: '', // dataset ID
    embedUrl: "<?php echo $embedUrl ?>",
    accessToken: "<?php echo $token; ?>"
};
var $embedContainer = $('#reportContainer');
// Create report
var report = powerbi.createReport($embedContainer.get(0), embedCreateConfiguration);

// report.off removes a given event handler if it exists.
report.off("loaded");
// report.on adds an event handler which prints to the Log window.
report.on("loaded", function() {
    Log.logText('Report loaded');
});
report.on("error", function(event) {
    Log.log(event.detail);
});
report.off("saved");
report.on("saved", function(event) {
    Log.logText('In order to interact with the new report, create a new token and load the new report');
});

Power BI Embedded for ISVs will save you a lot of time, provide a lot of flexibility, and offer continuous development and enhancements. I have written this article to help fellow developers, especially those who don't know .NET, follow the samples available from Microsoft.
You will find a link to my GitHub repo, where you can download the working code, tested on Apache/PHP 7.

Remember that failure is an event, not a person

I have been through a fair share of failures in my life, which correlates with the amount of success I have achieved so far. The more you fail, as long as failure is not accepted as final, the more you will eventually succeed and grow.
I have experienced losing in a variety of ways throughout my life: losing a foot race, a soccer game, or a lucrative station while waiting tables at a restaurant; failing to get a soccer scholarship at an NCAA Division 1 school, then failing to secure enough scholarship money at an NAIA university, then being benched at a no-scholarship school; failing some classes in college, failing at creating a business on the side, and failing to convince my interviewer that I was a good hire.
When I reminisce on those failures, they were a glaring sign that I was not ready, so I needed to invest time and effort in more preparation, or take a detour, or sometimes it was as easy as fixing my attitude. The problem is that when you know everything, and you believe the world is conspiring against you, you will see nothing that needs fixing.

Luckily for me, my spiritual beliefs go against the defeatist mental attitude and actually promote the positive mental attitude that Napoleon Hill preached in his talks and books. I also believe that watching or playing sports enhances mental toughness, and as a new dad, I will make sure my son is exposed to a variety of sports. I gained discipline from martial arts and grit from playing soccer. Lifting weights and watching your muscles develop teaches you patience and the rewards of hard work. Taking a punch in the face and seeing stars in daylight during a smoker fight at my Muay Thai gym made me experience three minutes turn into an eternity; once you go through such experiences, everything else is a piece of cake. You say work is tough?! You mean doing some tasks for a specified period of time where I am not gonna get punched by mistake during training, or your chin will not hit my knee when practicing a leg kick? When we learn how to knee, you are not gonna hang all your weight on my neck so it will be sore for a week? How about breaking my arm or finger like I did playing soccer? Oh, so none of that is going to happen at work? Joking aside, the mental demands of work can sometimes leave you more exhausted than any physical effort. I don't mind a lot of work, tasks, and deadlines; what really gets to me is stress-inducing people, especially the ones who want to climb over bodies to the top, leaving behind them a pile of corpses. But again, if you can take a punch in the face, you really can withstand a lot in life. I leave you with some quotes to inspire you to keep going on your journey, and to never let failure make you doubt the true successful person you are!

Success consists of going from failure to failure without loss of enthusiasm.

Winston Churchill

Remember that failure is an event, not a person

Zig Ziglar

I can accept failure, everyone fails at something. But I can’t accept not trying

Michael Jordan

During my 18 years I came to bat almost 10,000 times. I struck out about 1,700 times and walked maybe 1,800 times. You figure a ball player will average about 500 at bats a season. That means I played seven years without ever hitting the ball

Mickey Mantle

Some people believe football is a matter of life and death, I am very disappointed with that attitude. I can assure you it is much, much more important than that

Bill Shankly

EC2 AWS Community AMI Security

In the wake of yesterday's ransomware incident, information security is in the limelight, so before I connected to the internet today, I clicked Restart on the annoying Windows Update message instead of postponing it another 4 hours.

My computer is up to date with all security fixes, and the antivirus definition files have been updated. I am ready to go.

I wanted to experiment with a web application but didn't want to spend the time configuring LAMP and a PHP framework, so I opted to use an Amazon AWS EC2 community AMI pre-loaded with the application stack.

I was lucky to find one, so I went ahead and launched it, and accessed it via the server IP.

I created my RDS DB in order to change the “database.php” config file to point to my own DB.

When I opened the config file to edit it, I found some Brazilian website (a ".br" domain) as the host name, along with MySQL root credentials.

I thought there was no way this could still be live, so I tried connecting to the remote host via the command line.

I got the prompt back!

I had root on their MySQL database with no effort.

I am not sharing any screenshots in this blog post, to protect the server. But this incident is a reminder to make sure you don’t share AMIs until you remove all credentials, shell history, etc.

Follow this guide for more info:

Another tip is to run the mysql_secure_installation script.

It’s also a good idea to block remote connections to your MySQL server, for example by setting bind-address to 127.0.0.1 in your MySQL configuration.

I went ahead and contacted AWS security to alert them to this AMI, so they can take care of taking it down, and contacting the site owner.

In the wake of security breaches, one is amazed at how successful these criminals are with their exploits. It comes down to our failure to follow the basic, annoying security steps one has to take. I say annoying, because they kind of are. Nobody wants to have multiple locks on their house, then a lock for every room, a security guard outside, etc… You get where I am going with this. However, today’s reality, where most of our computing happens on public networks, makes it a priority to follow good security practices. Creating hard-to-guess passwords, implementing password expiration, multi-factor authentication, software patching, and updates to both the OS and anti-virus definition files should all be part of a disciplined approach to cyber security.

The ransomware that ravaged UK NHS computers was caused by running an unpatched OS. It’s easy for me to sit here and speculate about the why.

What’s apparent is that the NHS CTO or CSO (if they have one) decided to take on that risk, and we now know it was a costly decision.

Update: AWS Security replied to my e-mail, referring to their “Shared responsibility model” and saying that they cannot take down the EC2 image. So, until the owner takes action, this AMI is still there for people to use.

Excerpt from AWS Security message:

To be clear, the security concern you have reported cannot be resolved by AWS but must be addressed by the customer, who may not be aware of or be following our recommended security best practices.

We have passed your security concern on to the specific customer for their awareness and potential mitigation.



Reset Password of Bitnami WordPress Installation

I forget my password regularly, and I don’t have e-mail set up on my WordPress site; therefore, clicking on the forgot-password link doesn’t do me any good. This article will serve as a step-by-step process I can use in the future in case this happens again, and it can save you time as well if you face the same problem.

An AWS Bitnami WordPress EC2 machine comes with a packaged installation of LAMP, WordPress, and phpMyAdmin, which is accessible only through localhost.

In order to reset your WordPress password, you will need access to the database, either through the command line or through a MySQL database management tool like phpMyAdmin.

This article will guide you through the process of resetting your WordPress password through phpMyAdmin.

These instructions are for a Windows environment using the PuTTY SSH client. The same concepts apply to other platforms.

  1. Create an SSH tunnel using PuTTY. You will need your EC2 private key, and you need to know your domain name/IP. (Image 1 and Image 2)
  2. After you have entered your HostName under the Session category and your private key under Connection > SSH > Auth, click on Tunnels to create a secure tunnel by forwarding port 80 on the server to port 8888 on the local host (or localhost). In Source Port, type “8888”; in Destination, type “”; then click Add. If you happen to be using port 8888 for anything on your local machine, change the source port to any non-privileged open port. (Image 3)
  3. Log in using your username; the default user is “bitnami” if you haven’t changed it. (Image 4)
  4. Now that you have a tunnel between your local machine and your server, you can access phpMyAdmin through the URL: (Image 5)
  5. Next, you will need a DB user and password. Since you are on the server, navigate to “/opt/bitnami/apps/wordpress/htdocs”, open wp-config.php, and look for the database user and password your WordPress installation uses to connect; use those to log in to phpMyAdmin.
  6. Once logged in, search for the table “wp_users”, and double-click its name to open it.
  7. Click Edit next to the username whose password you want to change.
  8. The password is MD5 hashed, so you can’t recover it there, but you can delete it and paste in the MD5 hash of a new password. Use this website to get the hash:
  9. Once you paste your new password (hint: don’t literally use “new password”, use your real password), click Go to update the table row for that user.
  10. Repeat the same process for other users.
  11. Test logging in to WordPress; it should work now!
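If you would rather not paste your new password into a website to get its hash (step 8), you can compute the MD5 value locally. This is a small sketch using Python’s standard library; the password shown is a placeholder, not a recommendation.

```python
import hashlib

# Compute the MD5 value to paste into the user_pass column (step 8).
# "MyRe@lPassw0rd" is a placeholder -- substitute your actual new password.
new_password = "MyRe@lPassw0rd"
md5_hash = hashlib.md5(new_password.encode("utf-8")).hexdigest()
print(md5_hash)  # a 32-character hex string, e.g. ready to paste into phpMyAdmin
```

WordPress has historically accepted a plain MD5 value in user_pass and re-hashed it with its stronger internal scheme on the next successful login, so the MD5 value is only a bootstrap.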

Design VPC CIDR Block For No Conflict

Today is Saturday, and I am out for an early breakfast at Panera, as usual, and I have my laptop with me to work on some AWS stuff.

My plan is to finish installing the Laravel PHP framework as part of a tutorial I am following on storing Laravel sessions in DynamoDB. Since my web server is hosted on an EC2 instance in a private subnet, I first need to VPN into my VPC to reach it.

So I proceeded to connect through VPN to my OpenVPN EC2 instance in my public subnet. The connection is established. Now I need to SSH to my machine, but nothing is happening, and eventually the connection times out!

I checked my security groups, routes, and NACLs; everything looks good, and I haven’t changed my setup since I successfully connected the day before.

Is AWS flaky? What’s going on?

I run Wireshark to help me figure out what’s causing the connection timeout. I am trying to connect with PuTTY on port 22, and over HTTP on port 80.

I am seeing ICMP packets from a device telling me that the destination is unreachable, and that the communication is administratively filtered.

Not sure what this device is, but let’s do an ipconfig on the local machine:

OK, so the Panera Bread wireless router is rejecting my request for a server that it thinks is in its network.

My VPC is on

My private subnet is on

OK, so I guess the wireless router will not let my packets slide by to travel through the VPN tunnel to my EC2 machine in my VPC.

I did some Google searching to see if I could make this work through some ninja networking tricks. The threads I read unanimously agree that you need to design your VPC with a CIDR block that doesn’t conflict with your local network.

Every now and then, some network guru will tell you about all the hoops he went through to somehow make it work, but why go through the trouble when you could design your VPC with no conflict in mind? Plus, I would have needed access to the local network router to try those solutions, and Panera wasn’t about to hand me the credentials to their router.

A Cisco article I read presents a solution through the use of NAT and DNS. A use case for that could be two companies with the same private address space merging, and what you would do to avoid re-addressing the local network of one of them.

Anyway, I can’t stress enough the importance of choosing the right VPC CIDR.

Avoid 192.168.x.x, since most home network routers use that RFC 1918 address range.
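As a quick sanity check before settling on a block, Python’s standard ipaddress module can flag conflicts for you. The CIDRs below are hypothetical examples, not recommendations:

```python
import ipaddress

# Check a candidate VPC CIDR against ranges commonly used by local routers.
vpc = ipaddress.ip_network("10.100.0.0/16")    # hypothetical candidate VPC CIDR
common_local = [
    ipaddress.ip_network("192.168.0.0/16"),    # typical home/coffee-shop routers
    ipaddress.ip_network("172.16.0.0/12"),     # another RFC 1918 range
]
for lan in common_local:
    verdict = "CONFLICTS with" if vpc.overlaps(lan) else "is clear of"
    print(f"{vpc} {verdict} {lan}")
```

Run it with your own candidate block and your laptop’s local subnet (from ipconfig/ifconfig) before you create the VPC, and you avoid the Panera problem entirely.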

To solve my issue, I needed to go home and connect. Sure enough, as soon as I connected to my home network, I was able to VPN in, browse to my private web server, and SSH to it:

Traffic in Wireshark looks good. SYNs are being acknowledged, and traffic is flowing nicely through the VPN adapter:

One caveat: if you want to see network traffic with your private IP address as the destination before it gets encapsulated into encrypted VPN packets, you need to make sure you are listening on the VPN adapter, not your wireless adapter (or your network card, if you are connected to your router with an Ethernet cable). (It took me some googling to figure that out)

My VPN server is configured with this CIDR:, which it uses to assign IPs to connected clients.

I have tried to produce some conflict again with my VPC, and I changed my VPN server CIDR to

My VPN server couldn’t route traffic correctly, and I couldn’t resolve domains to browse the internet, so that’s another place to look in case you have problems with your VPN.

Software-defined networking enables services such as Amazon AWS VPC, which give us greater control over our cloud network topology.

“With great power comes great responsibility”, so choose your CIDR blocks wisely.

AWS CloudFormation and PHP SDK

Recently, I have gotten involved in a project where, among other things, we needed to create EC2 instances from a web application.
We are using CloudFormation templates to create a VPC with public and private subnets, an Internet gateway, security groups, NACLs, etc.
We have decided to have separate stacks for network infrastructure, servers, ELB and auto-scalers, and we are also considering a separate security stack.
What makes this possible is using the cross-stack reference option.
“To create a cross-stack reference, use the Export output field to flag the value of a resource output for export. Then, use the Fn::ImportValue intrinsic function to import the value”
So simply put, you create a new stack where you import values from a parent stack where those same values are exported in the “Outputs” section.
In this example, I am exporting values from my main template to be consumed later by my EC2 instances template:

"Outputs" : {
"PrivateSubnet" : {
"Description" : "The subnet ID to use for web servers behind an ELB",
"Value" : { "Ref" : "PrivateWebServer1" },
"Export" : { "Name" : {"Fn::Sub": "${AWS::StackName}-SubnetID" }}
"BasicSecurityGroup" : {
"Description" : "The security group ID to use for private web servers",
"Value" : { "Fn::GetAtt" : ["BasicSecurityGroup", "GroupId"] },
"Export" : { "Name" : {"Fn::Sub": "${AWS::StackName}-SecurityGroupID" }}

Notice that I am first referencing the values that need to be exported, using the “Ref” function for the subnet and “Fn::GetAtt” for the security group.
Next, I am using the Export field to export the SubnetID, and in the next declaration the SecurityGroupID, each namespaced by the stack name.
Using the Fn::Sub function, I am substituting the ${AWS::StackName} variable with an input parameter.
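To make the substitution concrete, here is a tiny Python sketch of the string replacement CloudFormation effectively performs at deploy time (“MyVPC3” is the example stack name used later in this post):

```python
# Emulate what Fn::Sub produces for the export name at deploy time.
stack_name = "MyVPC3"  # the stack's actual name, resolved by CloudFormation
export_name = "${AWS::StackName}-SubnetID".replace("${AWS::StackName}", stack_name)
print(export_name)  # -> MyVPC3-SubnetID
```

This is why the child stack’s Fn::ImportValue must be built from the same stack name: the import and export strings have to match exactly.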
This Parameter is defined in our Parameters section of the child template:

“"Parameters": {
"NetworkStackName": {
"Description": "Name of an active CloudFormation stack that contains the networking resources, such as the subnet and security group, that will be used in this stack.",
"Type": "String",
"MinLength" : 1,
"MaxLength" : 255,
"AllowedPattern" : "^[a-zA-Z][-a-zA-Z0-9]*$",
"Default" : "Your-Parent-Template"

This is the complete child stack that will import values from its parent stack:

"AWSTemplateFormatVersion": "2010-09-09",

"Description" : "CF stack to create an EC2 instance, with Parent VPC name as parameter",

"Parameters": {
"NetworkStackName": {
"Description": "Name of an active CloudFormation stack that contains the networking resources, such as the subnet and security group, that will be used in this stack.",
"Type": "String",
"MinLength" : 1,
"MaxLength" : 255,
"AllowedPattern" : "^[a-zA-Z][-a-zA-Z0-9]*$",
"Default" : "MyVPC3"

"Resources": {

"WebServer": {

"Type": "AWS::EC2::Instance",

"Properties": {

"InstanceType" : "t2.micro",

"ImageId" : "xxxxxxxx",

"KeyName" : "xxxxxxxxx",

"NetworkInterfaces": [ {
"AssociatePublicIpAddress": "false",
"DeviceIndex": "0",
"GroupSet" : [{ "Fn::ImportValue" : {"Fn::Sub": "${NetworkStackName}-SecurityGroupID" } }],

"SubnetId":{ "Fn::ImportValue" : {"Fn::Sub": "${NetworkStackName}-SubnetID" } }
} ]


When the child stack is run, it will create an EC2 machine inside the VPC and subnet created in the parent stack and assign it the appropriate security group.
After testing that everything works, now it’s time to create an EC2 instance from a web application.
We will use PHP, and the well documented AWS SDK for PHP.
The best way to install the SDK is to use Composer.
Go through the trouble of installing it, because it’s worth it: you won’t have to worry about troubleshooting dependency issues, because Composer takes care of that.
I am using PHP with IIS on a Windows server.
Download the exe file and install Composer, then check that it’s installed:

Next, install the PHP SDK by following these instructions, and run the following command from the vendor directory of your PHP project (if the PHP binary is not in your path, specify the full path to the command):

php -r "eval('?>'.file_get_contents(''));"

To use the PHP SDK for any AWS service, you will need to first start with creating a client. For more information on how to get started with calling an AWS service API, read this guide:
The basic usage pattern of the SDK is that you instantiate a Client object for the AWS service you want to interact with

My CloudFormation service client looks like this (you can put credentials inline, or use a profile in your credentials file; for security, use profiles!):

<?php

require 'C:\awscf\vendor\autoload.php';

use Aws\CloudFormation\CloudFormationClient;

// Create a client
$client = CloudFormationClient::factory(array(
    'region'      => 'us-east-1',
    'credentials' => [
        'key'    => 'xxxxxxxxxxxxxxxxxxx',
        'secret' => 'xxxxxxxxxxxxxxxxxxxx',
    ],
));

// Create your stack stored on S3 that imports data from your network stack
$result = $client->createStack(array(
    // StackName is required
    'StackName'   => 'Webserver1',
    'TemplateURL' => '',
));

This is a very straightforward way of creating a stack from your PHP web application. We have only used two elements of the array to keep it simple, but you are welcome to take advantage of the full capabilities of the API call.
For example, you could pass the parent stack name as a parameter to the child CF stack, instead of using the default.

I hope you found this article helpful for getting started quickly with the AWS CloudFormation PHP SDK.