Capgemini Cloud Automation for SAP

Capgemini's answer to the old days of manually starting and stopping SAP systems: a cloud automation platform for all your systems and providers.

Shortly after joining Capgemini I started working on an internal project now known as Capgemini Cloud Automation for SAP (CCAS). The aim of the project is to automate cloud deployments of SAP systems, and to that end we are building a web app and server that will let users automate the process of standing up and tearing down servers.


A brief history

Our first foray into cloud automation for SAP systems began in July 2017, when we started working on a proof-of-concept application that would allow us to start, stop and schedule SAP systems on AWS. In the months we spent building this in our spare time, we got to grips with AWS and the kind of mindset needed for automating tasks.

We completed a successful POC and brought the idea to Capgemini at large, which resulted in a renewed focus on building something sustainable. So, with the lessons of the past few months in hand, we began to design the application that is now known as Capgemini Cloud Automation for SAP.

Designing

The issues from our POC were myriad, ranging from our lack of experience with AWS to the question of how to incorporate new cloud providers into our software without redesigning it for each addition.

We decided it would be best to pick out what we thought our biggest issues would be before we hit them. The main issues that came out of this brainstorming session were:

  • Handling different cloud providers
  • Account security
  • Handling API keys
  • Enforcing an order of operations for tasks

It was due to these issues that we thought it best to completely abstract the complexity away from the front-end and put it into a server, where we could securely handle the sprawling and complex logic that comes with building software such as this.

Designing for expansion

The foremost worry for the team was how we, and the software we were designing, would handle adding and removing cloud providers. If all of this logic were entrenched in the front-end, every change in any connected cloud provider would mean a massive amount of rework.
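One way to keep that logic out of the front-end is to put each provider behind a common interface on the server. Below is a minimal sketch of the idea in Node.js; the object shape and names are illustrative rather than the actual CCAS code, though the AWS calls use the real AWS SDK for JavaScript.

```js
const AWS = require('aws-sdk');

// Every provider module exposes the same start/stop interface, so the
// rest of the server never needs to know which cloud it is talking to.
const awsProvider = {
  name: 'aws',
  async startServer(instanceId, credentials) {
    const ec2 = new AWS.EC2({ region: 'eu-west-1', ...credentials });
    return ec2.startInstances({ InstanceIds: [instanceId] }).promise();
  },
  async stopServer(instanceId, credentials) {
    const ec2 = new AWS.EC2({ region: 'eu-west-1', ...credentials });
    return ec2.stopInstances({ InstanceIds: [instanceId] }).promise();
  },
};

// Adding a new cloud means registering one more object with the same shape.
const providers = { aws: awsProvider /*, azure: azureProvider, ... */ };
```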

Account security

Allowing users to access all of their cloud providers via one application means we need some sort of user 'accounts', and with accounts comes a plethora of issues and worries that a team of UI developers is not best equipped to deal with.

So we looked into the Google Single Sign-On API and decided to let a trusted third party deal with the actual accounts and accompanying user data, allowing us to focus on what we do best: building cool things!
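For illustration, this is roughly what verifying a Google sign-in looks like on a Node.js server using the google-auth-library package; the CLIENT_ID value is a placeholder, and the surrounding code is a sketch rather than the CCAS implementation.

```js
const { OAuth2Client } = require('google-auth-library');

// Placeholder: the OAuth client ID registered for the app in Google's console.
const CLIENT_ID = 'your-app.apps.googleusercontent.com';
const client = new OAuth2Client(CLIENT_ID);

// The front-end obtains an ID token from the Google sign-in flow and sends
// it with each request; the server verifies it without storing any passwords.
async function verifyUser(idToken) {
  const ticket = await client.verifyIdToken({ idToken, audience: CLIENT_ID });
  const payload = ticket.getPayload();
  return { id: payload.sub, email: payload.email };
}
```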

Handling API keys

Users can attach their various cloud providers' API keys to their accounts, which in turn allows them to interact with all of their providers from one application.
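Keys like these should never sit in plain text, so one option is to encrypt them at rest before saving them against an account. A minimal sketch using Node's built-in crypto module, assuming a 32-byte master secret held by the server:

```js
const crypto = require('crypto');

// Assumption: a 32-byte master secret; in practice loaded from secure config,
// not generated at startup.
const MASTER_KEY = crypto.randomBytes(32);

function encryptApiKey(plainKey) {
  const iv = crypto.randomBytes(12); // fresh random nonce per key
  const cipher = crypto.createCipheriv('aes-256-gcm', MASTER_KEY, iv);
  const encrypted = Buffer.concat([cipher.update(plainKey, 'utf8'), cipher.final()]);
  // Store the iv and auth tag alongside the ciphertext; all three are
  // needed to decrypt and verify the key later.
  return {
    iv: iv.toString('hex'),
    tag: cipher.getAuthTag().toString('hex'),
    data: encrypted.toString('hex'),
  };
}
```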

Task ordering

When starting and stopping servers, it's crucial to get the ordering of the operations correct! You can't start the application before the database is up, and so on and so forth. So we decided it would be sensible to implement a proper task queue system.
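The agenda module (mentioned in the architecture below) handles job scheduling for us, and ordering can be enforced by only queueing the next job once the previous one has finished. A rough sketch of that chaining, where startDatabase and startApplication are hypothetical helpers:

```js
const Agenda = require('agenda');

// agenda persists its job queue in MongoDB.
const agenda = new Agenda({ db: { address: 'mongodb://localhost/ccas-jobs' } });

agenda.define('start database', async () => {
  await startDatabase(); // hypothetical helper that boots the database server
  // Only queue the application start once the database is confirmed up.
  await agenda.now('start application');
});

agenda.define('start application', async () => {
  await startApplication(); // hypothetical helper that boots the SAP application
});

(async () => {
  await agenda.start();
  await agenda.now('start database'); // kicks off the ordered chain
})();
```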

The Server

The result of all our investigation and research was a Node.js server that routes requests through to the correct cloud provider and performs the specific actions required for each of them to start and stop servers. This means our front-end can be completely independent of any changes to individual cloud provider APIs, with the sprawling and complex logic of the application abstracted away into the server.

We then created a rough architecture diagram consisting of four layers:

Layer  Name        Description
1      server      Starts the Node.js server, database and agenda module.
2      index       Defines router prefixes and uses them to pass requests to the correct platform routes.
3      router      Maps the request method and route to the correct controller function(s).
4      controller  A set of functions that map, directly or via chains, to the defined routes of a given platform.

A request to our server hits the index, is passed to the router for the given cloud provider, and is pushed through to that provider's routes. In the routes file the request route is checked against our available routes, which are bound to functions in the controller.

This design focuses on the separation of concerns between the layers, allowing us to build new functionality into the controller of a specific provider without impacting the app's 'business as usual' functionality. It also lets us implement new work quickly, binding any new functionality to a route with a single line of code.
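As a rough, single-file sketch of how the layers fit together (assuming an Express-style server, which the post doesn't confirm; in the real code each layer lives in its own file and all names here are illustrative):

```js
const express = require('express');

// controller layer: the only place that knows how to talk to a provider's SDK
const awsController = {
  startInstance: (req, res) => {
    // the real controller would call the AWS SDK here
    res.json({ status: 'starting', instance: req.params.instanceId });
  },
  stopInstance: (req, res) => {
    res.json({ status: 'stopping', instance: req.params.instanceId });
  },
};

// router layer: one line binds each route to a controller function
const awsRouter = express.Router();
awsRouter.post('/start/:instanceId', awsController.startInstance);
awsRouter.post('/stop/:instanceId', awsController.stopInstance);

// index layer: the prefix decides which provider's router handles the request
const app = express();
app.use('/aws', awsRouter);
// app.use('/azure', azureRouter); // a new provider slots in here

// server layer: start listening (database and agenda setup omitted)
app.listen(3000, () => console.log('CCAS server listening on port 3000'));
```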

Where are we at the moment?

We have now finished laying the groundwork for the server and are starting to rebuild the AWS functionality from our POC, before moving on to integrating Microsoft Azure and other providers. Soon enough we'll be finished with the initial build of the next iteration of the CCAS tool, and we'll be sure to shout about it; keep your eyes peeled for the future of SAP in the cloud.

Leave a comment or tweet me

If you can see potential use-cases for an application like this outside of SAP, please leave a comment or tweet me. I'd love to push this application's architecture into other areas; we're in an age where we shouldn't have to start and stop systems manually anymore!