When The Cloud People started its journey, back then as Symfoni Next, one of the main products its business was built on was a document management system known as Docs&Folders, part of the Symfoni Suite. Symfoni Suite was originally hosted on virtual machines on VMware. Every customer had their own Docs&Folders instance and their own database: one instance server per customer, with the customers’ databases hosted on a separate server.
This setup made the solution both costly, due to the resources it required, and difficult to maintain.
As a result, we started looking for hosting alternatives, and Google Cloud Platform seemed the right path to follow because of the benefits it offered:
- Better pricing than its competitors, with per-second billing: you pay only for what you use
- Private global fiber network: communication happens over Google’s private backbone network rather than the public internet
- Improved performance
- Better security
- Possibilities for further future development
- Redundant backups
Moving one step further, we decided to transform the solution from a monolith into microservices running in Docker containers. Using an orchestration system like Kubernetes seemed like an attractive option, so we turned to Google Kubernetes Engine, which Google Cloud Platform had to offer.
Google Kubernetes Engine is a reliable, efficient, and secure way to run Kubernetes clusters. Kubernetes clusters on GCP are built on Compute Engine instances. Further down you can get an idea of how the Docs&Folders solution was hosted.
In Kubernetes, containers are grouped into larger units known as pods, so we used one pod per customer instance, consisting of separate containers for our main application and the various microservices (email sending, document conversion, etc.). The database was placed in a different pod on the same cluster, with the future possibility of switching to Cloud SQL. Google Kubernetes Engine made the solution highly scalable and removed the worry of managing the Kubernetes master node.
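As a minimal sketch of this per-customer layout, a Deployment might bundle the main application and its microservices into one pod. All names and images here (`customer-a`, `docsandfolders`, the `gcr.io/example/...` images, the port) are hypothetical, not the actual Docs&Folders configuration:

```yaml
# Hypothetical per-customer Deployment: one pod containing the main
# application plus its supporting microservice containers.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docsandfolders-customer-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docsandfolders
      customer: customer-a
  template:
    metadata:
      labels:
        app: docsandfolders
        customer: customer-a
    spec:
      containers:
        - name: main-app          # main Docs&Folders application
          image: gcr.io/example/docsandfolders:latest
          ports:
            - containerPort: 8080
        - name: mail-service      # microservice for email sending
          image: gcr.io/example/mail-service:latest
        - name: doc-converter     # microservice for document conversion
          image: gcr.io/example/doc-converter:latest
```

The database pod would be a separate Deployment (or StatefulSet) in the same cluster, selected by its own labels.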
Through Kubernetes we also exploited the benefits of ingress controllers for exposing our services, which added an extra layer of security and scalability, since an ingress can provide load balancing and SSL termination.
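An ingress of this kind could be sketched as follows, using the current `networking.k8s.io/v1` API. The hostname, Secret, and backend service names are hypothetical; the TLS section is what gives SSL termination at the ingress rather than in the application:

```yaml
# Hypothetical Ingress: terminates TLS and load-balances traffic
# to a per-customer backend service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: docsandfolders-ingress
spec:
  tls:
    - hosts:
        - customer-a.example.com
      secretName: customer-a-tls    # certificate and key stored as a Secret
  rules:
    - host: customer-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docsandfolders-customer-a
                port:
                  number: 8080
```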
We decided to store each customer’s application data on a separate persistent disk mounted into the main application, specifically SSD persistent disks, which offer better performance than standard persistent disks.
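On GKE this choice can be expressed as a StorageClass backed by `pd-ssd` disks plus a per-customer claim. This is a sketch under assumed names and sizes (`ssd`, `customer-a-data`, `100Gi`), not the actual production values:

```yaml
# Hypothetical StorageClass using SSD persistent disks (pd-ssd)
# instead of the default standard disks (pd-standard).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
# A per-customer claim; mounting it in the pod spec attaches the
# customer's data disk to the main application container.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: customer-a-data
spec:
  storageClassName: ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```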
GCP offers disk backups as snapshots of persistent disks, so that you can recover your system to a previous state when needed. Before Cloud Scheduler was introduced, we automated our disk snapshots using Kubernetes custom resource definitions and CronJobs to create the relevant snapshot rules. Database backups could also be stored in Google Cloud Storage buckets. Having to choose between the different storage classes, we settled on Nearline for our buckets, with lifecycle rules applied so that backups are automatically deleted after being stored for a specific number of days. This is one more benefit of GCP’s managed services, minimizing the time spent on manual processing.
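A bucket lifecycle rule of this kind is a small JSON config applied with `gsutil lifecycle set`. The 30-day retention below is an illustrative value, not our actual policy:

```json
{
  "rule": [
    {
      "action": { "type": "Delete" },
      "condition": { "age": 30 }
    }
  ]
}
```

Saved as `lifecycle.json`, it would be applied to a (hypothetical) backup bucket with `gsutil lifecycle set lifecycle.json gs://example-backups`, after which objects older than 30 days are deleted automatically.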
After moving the solution to GCP, our customers have experienced less downtime and better service, and the cost of maintaining the solution has decreased. Overall, hosting on GCP made life easier for both our maintenance team and our customers.
Are you interested in moving your products from your old hosting environment to a modern infrastructure and exploiting the benefits of Google Cloud Platform? Contact us here.