Mirabeau Cloud Framework: proud to announce our open-source initiative!

Author
Lotte-Sara Laan
Date

About 4 years ago I joined Mirabeau without any experience in cloud or automation tooling. I was told I would be working in a team that would automate building environments in the cloud. At the time, I couldn’t imagine how that was possible, but now I am very proud to announce the open-source Mirabeau Cloud Framework.

During my first weeks at Mirabeau I started exploring Puppet, Mirabeau’s configuration management tool of choice. With my background in Linux and Perl development, it didn’t take me long to set up a server that provisioned itself, including a web server, using the Puppet agent. I started to see how this could help build the application stacks mentioned during my job interview, but a lot of automation still needed to be done.

Mirabeau Cloud Framework 0.1

About a year later, a colleague attended an AWS Architect Professional course. Inspired by its content, he came back with brilliant ideas about how to set up immutable hosts on AWS. He preached about EC2 Auto Scaling Groups, Lifecycle Hooks, SQS and CloudFormation. At that time, one specific customer was starting to see the need to set up their SAP Hybris e-commerce platform as immutable infrastructure. I was soon asked to work on this project together with this colleague: he as the architect, I as a Cloud DevOps Engineer. This is when I truly started to see the strength of cloud automation, and when the first iteration of the Mirabeau Cloud Framework (MCF) was born under the name IMH (Immutable Hosts).

Of course, CloudFormation’s capabilities were more limited back then and my experience with AWS was minimal, but we were able to pull it off. An impression of how we set up this IMH environment can be seen in the image below.

[Image: overview of the IMH environment setup]

Mirabeau Cloud Framework 0.5

I moved on to another project. The IMH platform was a great success within Mirabeau, and another customer wanted to build their new Hybris environment on AWS with IMH as well. Even though I had set up IMH with reusability in mind, I realized a lot of the scripts were customer-specific and needed a rewrite.

This time I had a better understanding of the Hybris platform, AWS and CloudFormation. I split the CloudFormation templates into smaller stacks, making them easier to configure per customer and minimizing the impact on other resources when a single resource in a stack is updated.

Since this was a completely new AWS account, we also created new CloudFormation templates to set up the VPC and provision other resources, making it easier to rebuild the AWS account from code. This was a requirement for the project, as a separate AWS account was used for the production environment and it had to be possible to build up a new account for China as well.

In the meantime, a CloudFormation module had become available for Ansible, so we decided to use it to provide different properties per environment. This resulted in the second iteration of MCF.
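To give an idea of what that looks like, here is a simplified sketch of such an Ansible task. The stack name, template and variables are made up for illustration; they are not taken from MCF itself.

```yaml
# Illustrative Ansible task using the cloudformation module; all names and
# values below are example placeholders, not actual MCF configuration.
- name: Create or update the web tier stack for this environment
  cloudformation:
    stack_name: "{{ environment_name }}-web-tier"
    state: present
    region: "{{ aws_region }}"
    template: "templates/web-tier.json"
    template_parameters:
      InstanceType: "{{ web_instance_type }}"
      MinSize: "{{ web_min_size }}"
      MaxSize: "{{ web_max_size }}"
    tags:
      Environment: "{{ environment_name }}"
```

Because the parameters come from Ansible variables, the same task can roll out a differently sized stack per environment simply by changing the variable files.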

Update 0.9

After the successful delivery of that project, I started on a completely different one. Because the CloudFormation templates were now more generic, we could easily reuse a lot of them to set up a new AWS environment for this new customer. With the help of a new colleague, we rewrote the Ansible setup to make proper use of Ansible roles. This is when MCF became more modular and we started to create a separate Git repository for each of the Ansible roles.
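As a rough sketch, each of these roles follows the standard Ansible role layout. The "vpc" role name and file names below are only an example of how such a repository could look:

```
# Example layout of one role living in its own Git repository
# (role name and file names are illustrative).
roles/vpc/
├── defaults/
│   └── main.yml       # default variable values that group_vars can override
├── tasks/
│   └── main.yml       # checks required variables, then creates the CloudFormation stack
└── templates/
    └── vpc.json       # the CloudFormation template shipped with the role
```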

This new setup proved itself when we migrated an IBM WebSphere Commerce platform from on-premises to AWS. Using our already-built Ansible roles, we were able to set up a new AWS account provisioned with VPCs, security groups, management tools running in Docker containers on ECS, and orchestration Lambdas in no time. Being able to roll out the AWS environment this quickly gave us more time to focus on the actual project: writing automation scripts to build IBM WebSphere Commerce from scratch and creating Puppet modules for environment-specific configurations.

Final version (1.0)

As the modules are now set up so generically and are easy to combine, more colleagues have started to adopt the framework when setting up new AWS environments. New roles have been created following the same structure as the existing ones, and improvements have been made.

The diagram below illustrates how the roles can be combined to create a new environment in which to deploy a new infrastructure:

[Diagram: combining MCF roles to set up a new environment]

Using Ansible group_vars, different variables can be set per AWS account, region, environment or even application stack/slice. Using the merge strategy for hashes, group_vars can be loaded in a hierarchy, so defaults can be set once and hashes can be partly overridden where needed. A playbook defines which roles should be included and loads the group_vars files in the given order. When a role is included, Ansible runs its main task, which checks whether all required variables are set for that specific role and then creates the CloudFormation stack using the template provided in the role. The group_vars gathered by the playbook are fed into the CloudFormation template as parameters.
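As a simplified, hypothetical example of how such a hierarchy could look (the group names, variables, values and role names are made up for illustration):

```yaml
---
# group_vars/all.yml -- defaults shared by every account and environment
vpc_settings:
  cidr_block: "10.0.0.0/16"
  enable_dns_support: true
---
# group_vars/production.yml -- overrides only the CIDR block; with hash
# merging enabled the other keys keep their default values
vpc_settings:
  cidr_block: "10.10.0.0/16"
---
# site.yml -- the playbook lists which roles (and thus which stacks) to apply
- hosts: localhost
  connection: local
  roles:
    - vpc
    - securitygroups
```

Note that partly overriding hashes like this relies on Ansible’s hash_behaviour = merge setting, which is the merge strategy referred to above.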

By using CloudFormation Export Outputs in combination with predictable naming, we can use ImportValue to reference resources created by other MCF roles.
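For illustration, this is roughly what that pattern looks like in CloudFormation. The export name, the EnvironmentName parameter and the resources are hypothetical; only the Export/ImportValue mechanism itself comes straight from CloudFormation:

```yaml
---
# Hypothetical excerpt from the template in a VPC role: export the VPC ID
# under a predictable name (EnvironmentName would be a template parameter).
Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: !Sub "${EnvironmentName}-vpc-id"
---
# Hypothetical excerpt from the template in another role (for example a
# security group role): import the VPC ID by that same predictable name.
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the web tier
      VpcId:
        Fn::ImportValue: !Sub "${EnvironmentName}-vpc-id"
```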

Freely available

As of today, MCF is free for anyone to use. We are proud of what we’ve built and want to share it as open source. MCF can be found on gitlab.com under /mirabeau/cloud-framework. We’ve created an example repository that walks you through the setup of an AWS multi-account environment.

Please let us know if you’re using our framework and what you think of it! We’re open to suggestions on how to improve, and if you have made any changes you think will be useful for others as well, please create a pull request so we can incorporate them into the master repository.

Tags

Automation AWS Cloud Operations Continuous Deployment Innovation