
Why Are Monolith Apps Evil?

The title of this article is a bit of an exaggeration. Monoliths are evil and not so evil at the same time. How is that possible? Let’s find out.

Every application starts simple, so a monolithic architecture is the right initial choice. When a project starts, there is often very little information available, so it should begin with the most straightforward approach. However, one thing to keep in mind is that the project is bound to grow as more features are introduced. Therefore, it needs to be architected with the ultimate goal of microservices in mind.

A modular design that follows the Single Responsibility Principle can help us achieve this goal. Boundaries between classes and services need to be drawn along functional lines. If these principles are neglected, the once simple monolithic application eventually becomes tangled spaghetti.
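To make this concrete, here is a minimal sketch in Java (with hypothetical class names) of boundaries drawn along functional lines: the ordering code depends only on a billing contract, so each class keeps a single responsibility and billing could later be carved out into its own service.

```java
// Hypothetical example: each class has one responsibility, and the
// boundary between ordering and billing follows a functional seam.

// Billing module: the only place that knows how charges are made.
interface BillingService {
    void charge(String customerId, long amountCents);
}

class CreditCardBillingService implements BillingService {
    @Override
    public void charge(String customerId, long amountCents) {
        // ... call the payment gateway here ...
        System.out.printf("Charged %d cents to customer %s%n", amountCents, customerId);
    }
}

// Ordering module: depends only on the BillingService contract,
// never on billing internals.
class OrderService {
    private final BillingService billing;

    OrderService(BillingService billing) {
        this.billing = billing;
    }

    void placeOrder(String customerId, long amountCents) {
        // ... persist the order here ...
        billing.charge(customerId, amountCents);
    }
}

public class Demo {
    public static void main(String[] args) {
        new OrderService(new CreditCardBillingService()).placeOrder("c-42", 1999);
    }
}
```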

Organizations often focus on adopting the best agile processes and practices. However, after a while, they realize that engineers are still struggling to meet deadlines and that feature development and releases have slowed to a crawl. What they fail to realize is that the application is suffering from monolith fever; no matter how much they improve the process, they are only applying band-aids rather than fixing the core problem.

Monolithic applications have both advantages and disadvantages. However, the disadvantages far outweigh the advantages when it comes to large and complex projects.

Monolithic applications are tightly coupled to their technology stack. Therefore, as time passes and the technology becomes obsolete, it is challenging to upgrade to the latest frameworks, or even to a newer version of an existing framework if that version is not backward compatible. This can necessitate rewriting the complete application from scratch on a modern tech stack.

Because of the obsolete technology, it is also difficult to find engineers who are interested in working on the old stack. To maintain such applications, engineers often have to learn outdated technologies that do little for their career advancement. It is far easier to recruit engineers who want to learn and apply cutting-edge technologies.

Monolithic applications are tough to scale; they typically have one centralized database, which is a single point of failure. More application instances can be launched and added to the load balancer to handle increased traffic, but if the database is down or struggling under heavy load, the only way to scale it without additional development work is to move to more powerful hardware, i.e., vertical scaling. A nonresponsive database becomes a bottleneck and, in turn, takes down the whole application.
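As an illustration of the application-tier half of this story, the following sketch (a hypothetical round-robin dispatcher, not any particular load balancer product) shows how requests can be spread across instances; note that adding instances does not help when they all funnel into the same struggling database.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin dispatcher: requests rotate across app instances.
// Scaling out means adding instances to this list, but every instance
// still talks to the same single database, which stays the bottleneck.
class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}

public class BalancerDemo {
    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("app-1:8080", "app-2:8080", "app-3:8080"));
        for (int r = 0; r < 6; r++) {
            System.out.println("request " + r + " -> " + lb.pick());
        }
    }
}
```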

Another major problem with monolithic applications is that there is no way to use hardware optimized for specific feature requirements. If one feature is memory-intensive while another is computationally intensive, a monolithic application is out of luck: it can only run on general-purpose hardware.

As monolithic applications tend to be large and complex, testing them becomes error-prone and challenging. Engineers are scared to make any significant refactoring because figuring out all the paths affected by the changed code is time-consuming and tedious. This causes a ripple effect, creating more technical debt.

The deployment cycle is yet another victim of monolithic architecture. Since the application is hard to test, even a minor change requires a complete retest. Testing takes longer, which stretches out the deployment cycle.

As the code base of a monolith is quite extensive, the learning curve is steep, which in turn hurts the productivity of the whole team. No single engineer can become a subject matter expert (SME) for the entire application.

Large applications take a long time to build and start, which drags out development time and the feedback cycle. Continuous integration runs take a long time as well, and when a build fails, it is very time-consuming to track down the cause.

To avoid breaking the build, developers work on features in separate branches; this becomes very problematic when they merge those branches back into the master branch. Merges are error-prone, so instead of helping, branching ends up hurting them.

If a nasty bug such as a memory leak creeps in, it affects the whole application, causing production outages and hurting customers.

So, in short, monolithic architecture violates all the requirements of a modern software application:

  • Maintainability
  • Extensibility
  • Testability
  • Scalability
  • Reliability
  • Releasability

Monolithic architecture is an excellent choice for small applications because it offers several advantages.

Monolithic applications can catch most bugs at compile time, since they have only binary dependencies and no dependencies on external services.

Monolithic applications are inherently more secure, as the attack surface is minimal and centralized, so enforcing security is easier.

The deployment procedure is very straightforward and only requires deploying one artifact across all instances.

For monolithic applications, setting up the development environment is very straightforward. Usually, only one code repository needs to be checked out, and the IDE has access to the entire code base.

Debugging is easier in general, as the engineer can step through the code quickly without having to worry about external calls.

Transaction management is very easy to implement in a monolithic architecture, since it deals with only a single database.
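As a rough sketch of why this is easy, the example below uses plain JDBC (the connection URL and the accounts table are hypothetical) to wrap two updates in one local ACID transaction, with no distributed coordination needed:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical JDBC URL; assumes a driver on the classpath and an
        // accounts(id, balance) table. In a monolith there is one database.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:appdb")) {
            conn.setAutoCommit(false); // start a local transaction
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, 100); debit.setLong(2, 1); debit.executeUpdate();
                credit.setLong(1, 100); credit.setLong(2, 2); credit.executeUpdate();
                conn.commit(); // both updates succeed or neither does
            } catch (SQLException e) {
                conn.rollback(); // one local ACID rollback, nothing distributed
                throw e;
            }
        }
    }
}
```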

Testing a monolithic application is simple, as there are no external dependencies to mock. Setting up the data required to run the tests is straightforward, since only one database is involved.

To perform an equivalent task, a monolithic application can be more performant, as it makes local API calls rather than calls over the network to fetch the same data. However, this advantage comes at the cost of scalability.


Why Should Organizations Move to the Cloud?

Before the internet, computers were used for a limited set of use cases. The dawn of the internet allowed computers to exchange data in meaningful ways, making it possible to solve many more interesting problems.

Based on this premise, various distributed applications have been developed ever since. These applications need to run 24/7 to provide services to their users. Data centers are facilities that host computer servers connected to the internet; applications are installed on these servers to serve user requests.

Setting up the proper environment in these data centers required a lot of effort and was very time-consuming, hindering companies from innovating quickly and efficiently. Companies had to purchase or rent racks in these data centers, then buy servers, provision them, install them in the racks, and connect them to the internet.

Once the environment was set up, they had to provide ongoing support: patching the OS, monitoring for security threats, handling hardware failures, and replacing old servers with new ones.

All of this required specialized teams and was a drain on time and resources for most organizations. It took away precious time that could instead be spent developing the features that differentiate a business and keep it ahead of the competition.

Cloud computing became popular very quickly because it resolves these issues, taking over the heavy lifting for these organizations. Cloud computing essentially means renting someone else's computers and paying for what you use, for example by the hour.

It is analogous to how we use electricity: when we flip a switch to turn on the light, we are charged only for what we consume. In the same way, cloud computing provides on-demand usage of compute, network, and storage services.

Cloud providers follow a shared responsibility model: the provider manages the security of the infrastructure and virtualization hosts, whereas consumers are responsible for securely running their applications.

An organization moving to the cloud can benefit from the following characteristics:

  • Cost-Effective: Pay for what you use, instead of over-provisioning to accommodate peak loads or anticipated future growth.
  • Operational Excellence: Monitor and audit every aspect of your infrastructure; detect and react to failures before customers do.
  • Replace CAPEX with OPEX: No need to spend thousands of dollars purchasing equipment in advance.
  • Automation: Using various tools and services, you can automate as much as your heart desires.
  • Scalability: No need to guess what the system load will be in the future; you can scale as the need arises.
  • Elasticity: The ability to scale up to absorb more traffic during peak times and to scale down to save cost during quiet periods (see the sketch after this list).
  • Reliability: Running applications in multiple data centers so that one data center going down does not affect application uptime.
  • Performance: The breadth and depth of cloud services make it easy to achieve the desired performance.
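To give a flavor of elasticity in practice, here is a minimal sketch using the AWS SDK for Java v2 (the Auto Scaling group name and capacity values are hypothetical); in real deployments such adjustments are usually driven by scaling policies rather than invoked by hand:

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.autoscaling.AutoScalingClient;
import software.amazon.awssdk.services.autoscaling.model.SetDesiredCapacityRequest;

public class ElasticityDemo {
    public static void main(String[] args) {
        // Assumes AWS credentials are configured and an Auto Scaling
        // group named "web-asg" already exists (hypothetical name).
        try (AutoScalingClient autoScaling = AutoScalingClient.builder()
                .region(Region.US_EAST_1)
                .build()) {
            // Scale out to absorb peak traffic...
            autoScaling.setDesiredCapacity(SetDesiredCapacityRequest.builder()
                    .autoScalingGroupName("web-asg")
                    .desiredCapacity(6)
                    .build());
            // ...and back down during quiet periods to save cost.
            autoScaling.setDesiredCapacity(SetDesiredCapacityRequest.builder()
                    .autoScalingGroupName("web-asg")
                    .desiredCapacity(2)
                    .build());
        }
    }
}
```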

In summary, there is quite a bit of attraction in moving to the cloud. However, doing so requires cloud knowledge and the ability to architect cost-effective and secure solutions. CRE8IVELOGIX Inc. was founded by a former AWS Sr. Solutions Architect who not only has in-depth knowledge of AWS services but has also helped various startups and enterprises with their transformation strategies. We would love to partner with you.