Advantages of DevOps for Businesses: Our Experience


A while ago, Vilmate was involved in a high-load web project. It was established back in 2012, and ever since then, multiple teams have taken turns working on the web application. Team members came and went, and every new squad attempted to leverage the latest technology trends as development continued.

In the long run, we had a project that was split into parts: one part lived on the backend, another was written in Angular, and yet another mixed backend code with Angular. Some background tasks were implemented using agent systems, others using cron.


The deployment process wasn't flawless, either. We had been running a mix of Fabric and Ansible scripts inconsistently. A deployment could take two hours or more to finish, including, for example, the time spent running tests. As is often the case with projects like this, it was far too expensive, and its upkeep cost the customer dearly every month.

Solution

So, it was high time to tackle these problems head-on. First things first: we started by dockerizing the app. It was a substantial amount of work, and here is what we managed to do. We set up Docker multi-stage builds, so our new multi-stage build pipeline had four stages:

    • The first stage was based on the Node.js Docker image and ran scripts such as npm install and npm run build.

    • The second stage was inherited from the Nginx Docker image; it copied the files built in the first stage and served the static JS files.

    • The third stage was based on the Python Docker image, where all the necessary dependencies were installed.

    • The fourth stage was inherited from the third one; it copied the application code and the files collected in the first stage.

Thanks to this, when we built the Dockerfile targeting the second stage of our multi-stage build, we got an image served by Nginx that could render the HTML page and the JavaScript behind it. At the same time, that container was lightweight: it contained nothing unnecessary and had no access to the backend code.
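The four stages described above might look roughly like this in a Dockerfile. This is a minimal sketch under assumptions: the base image tags, directory paths, and script names are illustrative, not the project's actual files.

```dockerfile
# Stage 1: build the frontend (assumed Node.js project layout)
FROM node:16 AS frontend-build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the static files with Nginx
FROM nginx:stable AS frontend
COPY --from=frontend-build /app/dist /usr/share/nginx/html

# Stage 3: install backend dependencies (assumed Python backend)
FROM python:3.9 AS backend-deps
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Stage 4: copy the code and the static files built in stage 1
FROM backend-deps AS backend
COPY . .
COPY --from=frontend-build /app/dist ./static
```

Running docker build --target frontend . then yields the lightweight Nginx image, while docker build --target backend . yields the backend image.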


In addition, when running the same Dockerfile targeting the fourth stage, we got a backend image containing all the needed files and dependencies, excluding node_modules and pip cache. As a result, this image was smaller: it contained only the files needed to run the backend and nothing more.

We also took care of the developers: by adding the necessary build-time variables (--build-arg), we made it easier for them to run the identical environment in development mode.
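For example, a developer could rebuild the image in development mode with a command along these lines (the variable name APP_ENV and the tag are hypothetical; this assumes a matching ARG declaration in the Dockerfile):

```shell
# Rebuild the backend image with a development-mode build argument
docker build \
  --target backend \
  --build-arg APP_ENV=development \
  -t app-backend:dev .
```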

As a result of these changes, it became much easier to set up a simple build environment on a CI server that built Docker images and pushed them to the registry.

We used Drone CI, which, among other things, could cache layers. By specifying a different entrypoint or command, we could launch an image and run the tests, receiving the results right in CI.
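A Drone pipeline of this kind could be sketched as a .drone.yml along the following lines. This is an assumption-laden illustration: the registry URL, step names, and test command are placeholders, and it uses the drone-docker plugin's target and cache_from settings to reuse cached layers.

```yaml
kind: pipeline
name: default

steps:
  # Build the backend target of the multi-stage Dockerfile and push it
  - name: build
    image: plugins/docker
    settings:
      repo: registry.example.com/app-backend
      target: backend
      cache_from: registry.example.com/app-backend:latest

  # Run the test suite by overriding the image's default command
  - name: test
    image: registry.example.com/app-backend:latest
    commands:
      - pytest
```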

What about security?

With the environment stabilized and CI sped up, we used Rancher and Swarm for container management. Rancher allowed us to upgrade containers through CI, attach and use physical nodes, and balance requests between nodes for any of the containers.

As the entry point, we used HAProxy (High Availability Proxy) working in reverse-proxy mode. It was configured to balance requests between containers. At this level, we also wrote a script that monitored requests by client IP address: it kept track of request rates and blocked clients whose frequency exceeded 100 requests per second, denying them access to the application.
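The article describes a custom script, but the same 100-requests-per-second rule can also be expressed natively in an HAProxy configuration with a stick table; the sketch below is an alternative illustration of the idea, with placeholder backend addresses.

```
frontend app_front
    bind *:80
    # Track each client IP and its request rate over a 1-second window
    stick-table type ip size 100k expire 30s store http_req_rate(1s)
    http-request track-sc0 src
    # Deny clients exceeding 100 requests per second
    http-request deny if { sc_http_req_rate(0) gt 100 }
    default_backend app_back

backend app_back
    balance roundrobin
    server app1 10.0.0.11:8000 check
    server app2 10.0.0.12:8000 check
```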

In front of HAProxy, we used Elastic Load Balancing (ELB) with a geolocation-based routing policy, which in turn proxied requests to HAProxy. Above that, the Cloudflare Load Balancer was responsible for A records and health checks.

Thus, we built an infrastructure that was closed off from the outside. It could only be reached through a chain of reverse proxies that provided security at several levels.


Overall, we had several subnets:

    • The Rancher network, which had its own internal DNS service
    • The AWS network, which provided an endpoint to Rancher and ELB
    • The Cloudflare network, which proxied requests to ELB

Cost optimization

With the architecture described above, we no longer needed assigned IP addresses or static physical nodes at all.

We moved physical nodes from on-demand to spot instances and configured the launch and attachment of those nodes to Rancher. As soon as a new node was attached, Rancher automatically launched containers on it, balancing the load across the containers in the network.

Rancher also provided a webhook that let us increase the number of running containers, with the load between them balanced automatically. This raised the number of requests we could process.
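Conceptually, triggering such a scale-up is just an HTTP call: Rancher webhook receivers are invoked by POSTing to a tokenized URL. The host, key, and project ID below are placeholders, not the project's real values.

```shell
# Trigger the scale-up webhook receiver (placeholder URL, key, and project ID)
curl -X POST "https://rancher.example.com/v1-webhooks/endpoint?key=<token>&projectId=<id>"
```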

When the load required it, we used our webhook to double the number of running containers, thus providing enough workers to process the requests. AWS, in turn, received an increased number of spot requests, raising the number and quality of nodes. When the load decreased, the number of containers went down too, which freed memory and CPU, while AWS scaled back the spot instance requests. All of this delivered increased cost efficiency.

Finally, we stopped using services such as ElastiCache, CodeDeploy, EC2 On-Demand, RDS, Elastic IP, etc.

Conclusion

Today, we're proud to have created an infrastructure that can scale up and down depending on the application load. We have optimized costs for our customer and ensured a runtime environment that is easy for developers to work with.

