
Why Should Organizations Move to the Cloud?

Before the internet, computers were used for a limited set of use cases. The dawn of the internet allowed computers to exchange data in meaningful ways, opening the door to many more interesting problems.

Based on this premise, distributed applications have been developed ever since. These applications need to run 24/7 to provide services to their users. Data centers are facilities that host computer servers connected to the internet, and the applications are installed on those servers to serve user requests.

Setting up the proper environment in these data centers took a lot of effort and time, hindering companies from innovating quickly and efficiently. Companies had to purchase or rent racks in the data centers, then purchase servers, provision them, install them in the racks, and connect them to the internet.

Once the environment was set up, they had to provide ongoing support: patching the OS, monitoring for security threats, handling hardware failures, and replacing old servers with new ones.

All of this required specialized teams and was, for most organizations, a poor use of time and resources. It took away precious time that could instead be spent developing the features that differentiate their business and put them ahead of the competition.

Cloud computing became popular very quickly because it resolved these issues and took over the heavy lifting for these organizations. Cloud computing essentially means renting someone else's computers and paying for what you use, for example by the hour.

It is analogous to how we use electricity: when we flip a switch to turn on a light, we are charged only for the usage. In the same way, cloud computing provides on-demand use of compute, network, and storage services.

Cloud providers follow a shared responsibility model with their consumers: the provider secures the infrastructure and the virtualization hosts, whereas consumers are responsible for securely running their applications.

An organization willing to move to the cloud can benefit from the following characteristics of the cloud.

  • Cost-Effective: Pay for what you use, instead of over-provisioning to accommodate peak loads or increased future trends.
  • Operational Excellence: Monitor and audit every aspect of your infrastructure, detect and react to failures before customers do.
  • Replace CAPEX with OPEX: Do not have to spend thousands of dollars to purchase equipment in advance.
  • Automation: Using various tools and services, you can automate as much as your heart desires.
  • Scalability: Allows you not to guess what the system load will be in the future; you can scale as the need arises.
  • Elasticity: Ability to scale up to absorb more traffic/load during peak times and to scale down to save cost during quiet periods.
  • Reliability: Run applications in multiple data centers so that one data center going down does not affect application uptime.
  • Performance: The breadth and depth of cloud services make it easy to achieve the desired performance.

In summary, there is quite a bit of attraction in moving to the cloud. However, doing so requires cloud knowledge and the ability to architect cost-effective and secure solutions. CRE8IVELOGIX Inc. was founded by a former AWS Sr. Solutions Architect who not only has in-depth knowledge of AWS services but has also helped various startups and enterprises with their transformation strategies. We would love to partner with you.


Agile Development Process

Although every team has its own flavor of implementing the Agile software development process, there are specific guidelines that have been refined based on industry feedback. These guidelines are outlined below.

Vision: Every project starts with a vision. The stakeholder defines his or her vision in a few sentences. This may be a very high-level, 60,000-foot view of the project.

I would like to develop a mobile banking platform

Project Planning: Once the vision is defined, it's time to get together with stakeholders, project managers, engineers, etc., and come up with a high-level, 30,000-foot view of the project by defining features called EPICs. High-level EPICs are determined to capture the feature set required to accomplish the overall vision.

EPIC-1: Customers can send money to other members instantly using their phone numbers.

Milestone Planning: Following the high-level EPIC identification, the project milestones are defined, which also helps with release planning. EPICs are assigned to different milestones based on their priorities.

Milestone 1.0 will include EPIC-1, EPIC-2, EPIC-3.

Milestone 2.0 will include EPIC-4, EPIC-5.

Sprint Planning: Once the features are defined and prioritized, engineers commit to them during sprint planning sessions. In these sessions, engineers get together with the product owners, and sometimes stakeholders, to dive deep into the high-priority features. During this process, the acceptance criteria for each feature are defined.

GIVEN the member has a bank account

WHEN the member sends money to another registered member

AND the transaction is successful

THEN the receiver will receive the money in their account

AND the money should be spendable by the receiver

AND the receiver will get the notification of receiving the money
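Acceptance criteria written in this GIVEN/WHEN/THEN form map naturally onto automated tests. The following is a hedged sketch of how the criteria above might be exercised; the `Bank` class, `send_money`, and the notification format are hypothetical names invented for illustration, not part of any real platform:

```python
# Minimal in-memory model used only to illustrate the acceptance
# criteria; all names (Bank, send_money, notifications) are hypothetical.
class Bank:
    def __init__(self):
        self.balances = {}       # member -> balance
        self.notifications = []  # (member, message) pairs

    def open_account(self, member, balance=0):
        self.balances[member] = balance

    def send_money(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            return False  # the transaction fails
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.notifications.append((receiver, f"You received {amount}"))
        return True

# GIVEN the member has a bank account
bank = Bank()
bank.open_account("alice", 100)
bank.open_account("bob", 0)

# WHEN the member sends money AND the transaction is successful
assert bank.send_money("alice", "bob", 40)

# THEN the receiver receives the money, the money is spendable,
# AND the receiver gets a notification
assert bank.balances["bob"] == 40
assert bank.send_money("bob", "alice", 10)  # money is spendable
assert ("bob", "You received 40") in bank.notifications
```

Each GIVEN/WHEN/THEN clause becomes a setup step or an assertion, so the acceptance criteria double as an executable definition of "done."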

Story Estimation: After the top-priority stories are defined and the requirements are clear, the engineering team votes on the estimates. This helps determine the level of effort required to implement each story. Every team has its own style of estimation: some follow the Fibonacci series, others estimate in days, and so on. If the estimate is too big, the story needs to be broken down into smaller, more manageable stories. A story should be short enough to be finished within a few days.

Iteration Start: After estimation, the stories are moved into the iteration, and the engineering team commits to them and starts working. Iterations usually last one to four weeks, though most teams follow two-week iterations. Iterations represent the heartbeat of the project, and at the end of each iteration there should be a demo-able product.

Daily Stand-Ups: Every day, the engineering team meets with the product owners to give updates. These updates normally focus on three things:

What did I do yesterday?

What do I plan on doing today?

Am I blocked by something?

QA: Software qualifies as good quality if it is easy to maintain in the long run, which is essential for the success of the project. But good quality alone is not enough; it is also crucial to make sure the software is built per the acceptance criteria, can handle various error scenarios, and meets the performance criteria. All of this is verified and tested during the QA phase. Once the engineer feels comfortable with the implementation, the story is assigned to the QA team for further grilling.

Demo: At the end of each iteration, there should be a demo-able product. During the demo meeting, engineers demo the finished stories to the product owner and/or stakeholders. If the demo meets the acceptance criteria defined in the story, the story is accepted and considered complete; otherwise, it goes back to In Progress.

Retrospectives: A retrospective meeting usually follows the demo meeting. During the retrospective, the team reviews its performance and deficiencies, focusing on what helped during this iteration and what was lacking that can be improved in the next one. Once the team identifies problem areas, they brainstorm solutions, pick one to try out in the next iteration, and thus engage in a continuous improvement cycle. Retrospectives are also a time for team members to appreciate and encourage each other.

Release: At some point during development, when a milestone is achieved, a release may be cut to get the project into the hands of customers and gather early feedback.

Feedback: After the software is released, feedback is collected, and more stories or epics may be added to either improve an existing feature or add a new feature.


Microservices? What’s in it for me?

I would like to set the stage for this article with an opening sentence:

Microservice architecture is not a Swiss Army knife.

Understanding this phrase prepares us to dive into the world of microservices and discover its advantages as well as its drawbacks. It will also help us judge whether microservice architecture is the right choice for our next project.

Microservices architecture is a way of architecting software applications as multiple self-contained services. Each service provides a specific piece of business functionality and follows its own development and deployment cycle. This is analogous to Lego bricks, which can be connected to create different objects; similarly, multiple microservices can be combined into an application that provides more concrete value to end users.

Microservices thus offer reusable infrastructure that can be used in different contexts. For example, a messaging microservice encapsulates the messaging domain so it can be used by a banking app, an e-commerce app, a ticketing app, and so on, without each of them re-inventing the wheel for messaging.

Microservices are genuinely independent of each other: they are loosely coupled, with no binary dependency between them. They use each other's services only through messaging protocols or external API contracts.

Each of these services can evolve independently. One thing to keep in mind is that services need to evolve in a backward-compatible way; if care is not taken, other microservices will need to be deployed in lock-step.

Microservices are small, easily manageable applications; therefore, it is easy for engineers to get up to speed and contribute without understanding the overall application's architecture. If the whole system is reasonably complex, this also helps with creating agile teams that each focus on particular microservices.

Microservices are easy to test: writing end-to-end acceptance or integration tests does not require a pile of data to be set up. Each microservice focuses on its own domain, so the data setup for testing is relatively straightforward.

Each properly crafted microservice has its own database, chosen based on the requirements of that particular domain. Significant schema changes in one microservice do not affect the others, which helps tremendously in reducing downtime.

If one microservice is hit by an outage, it does not have a significant impact on the usability of the entire system, as the other microservices continue to provide their services. This, however, may not hold for a microservice that is central to the system, such as an authentication service.

Each microservice can use the technology stack, including the development language, frameworks, databases, etc., best suited to solving problems in that particular domain.

In terms of security, microservices have a larger surface area that needs to be secured, but in case of a breach, only the data managed by the breached microservice is at risk. Other microservices are not affected, as they either have their own separate databases or, if they share a database, use a different schema and credentials.

Best of all, each service can be deployed on the hardware most optimized for it. If one service requires more memory while another requires more CPU, each can be deployed on hardware that fulfills those requirements.

Microservices have a fairly small footprint, so they take less time to build and launch. This helps engineers tremendously, giving them the ability to make changes and get much quicker feedback.

Deployment of microservices is effortless, provided they are enhanced in a backward-compatible fashion. Utmost attention should be paid to not making changes that break the contract with external clients, as that would require the external clients to be upgraded in lock-step.
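One common way to keep a contract backward compatible is to only ever add optional fields with safe defaults, so older clients keep working against a newer service. A minimal sketch, assuming an invented payment-message shape (the field names `amount`, `currency`, and `memo` are hypothetical):

```python
# A v1 client sends: {"amount": 50, "currency": "USD"}
# A v2 service adds an optional "memo" field with a default, so v1
# messages still parse. Removing or renaming "amount", by contrast,
# would break every v1 client and force a lock-step upgrade.
def parse_payment(message: dict) -> dict:
    return {
        "amount": message["amount"],             # required in every version
        "currency": message.get("currency", "USD"),
        "memo": message.get("memo", ""),         # new optional field (v2)
    }

old = parse_payment({"amount": 50, "currency": "USD"})  # v1 message
new = parse_payment({"amount": 50, "memo": "rent"})     # v2 message
assert old["memo"] == "" and new["memo"] == "rent"
```

The same principle applies whether the contract is a JSON API, a message queue payload, or a protobuf schema: additive, defaulted changes are safe; removals and renames are not.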

Microservices applications are easy to scale: it is easy to identify the highly utilized services and scale just those, rather than the whole infrastructure.

So far we have focused entirely on the good parts of microservices. However, as I mentioned earlier, it is not a one-size-fits-all architecture: it is well suited to some applications but not to others.

One issue is designing the microservices themselves, as it is difficult to come up with microservices with well-defined boundaries. If they are not crafted with the correct boundaries, they can drive the application toward a big, messy dependency web, where each microservice depends on various other services, which in turn depend on more services, and so on. Microservices need to be modularized along business-domain or functional boundaries that have minimal dependency on each other.

Transaction management across multiple services is the most challenging part of microservice architecture. Imagine that on an e-commerce website a customer places an order by calling the order management microservice, while payment processing is handled by another microservice. What happens if the order is placed successfully but the payment processing fails? There needs to be appropriate infrastructure in place to reverse the order when the payment fails, which is, of course, more challenging than a transaction handled at the database level.
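The usual answer is a compensating action: when a later step fails, the earlier steps are explicitly undone rather than rolled back by a database. A minimal sketch with stubbed-in-memory services (all names here, such as `checkout` and `cancel_order`, are hypothetical):

```python
class PaymentError(Exception):
    pass

def place_order(orders, order_id):
    orders[order_id] = "PLACED"

def cancel_order(orders, order_id):
    # Compensating action: explicitly reverse the earlier step
    orders[order_id] = "CANCELLED"

def process_payment(order_id, card_ok):
    if not card_ok:
        raise PaymentError(f"payment failed for {order_id}")

def checkout(orders, order_id, card_ok):
    place_order(orders, order_id)
    try:
        process_payment(order_id, card_ok)
    except PaymentError:
        cancel_order(orders, order_id)  # undo on failure
        return False
    return True

orders = {}
assert checkout(orders, "o-1", card_ok=True)
assert not checkout(orders, "o-2", card_ok=False)
assert orders == {"o-1": "PLACED", "o-2": "CANCELLED"}
```

In a real system each step would be a network call to a different service, with retries and timeouts, which is exactly why this is harder than a single database transaction.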

If we look at an overall application comprising multiple microservices, there are more moving parts, and so many more places where things can go wrong.

Debugging an issue on a microservices-based platform is also exceptionally difficult. A request may travel through multiple microservices, so pinpointing the exact location where an error or exception occurred is cumbersome.

Hopefully, the advantages and disadvantages outlined in this article will help the reader decide which architecture to choose when designing an application.


Not knowing these cloud storage differences can impact the success of your project

Storage is an essential component of any architecture: it holds the valuable data produced by your applications. Whether you are an architect or an application developer, you need to understand the various storage options, their differences, and the use cases each supports. Knowing this will help you choose the right storage solution for your application and avoid headaches down the road.

This blog will talk about different storage options and the portfolio of equivalent storage services offered by AWS. I will also discuss use cases that each solution supports.

The choice of storage option is influenced by the semantics of the data to be stored or processed. These semantics often define specific scalability, durability, and availability requirements. It is therefore vital to understand the project's needs and the criticality and sensitivity of the data before selecting a particular storage technology.

Block storage: This is the option architects and developers are most familiar with. In this technology, each file is divided into several blocks, or chunks, and stored on a hard disk attached to the server. The disk is usually formatted with NTFS, ext3, or some other standard filesystem. Block storage is mostly used in environments that require frequent updates of stored files, such as databases: because files are stored in chunks, only the changed section of a file needs to be updated.
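This block-level update behavior can be illustrated with ordinary file I/O: seek to a block's offset and overwrite just that block, leaving the rest of the file untouched. The tiny block size and file contents below are invented purely for readability:

```python
import os
import tempfile

BLOCK_SIZE = 4  # unrealistically small block size, for illustration only

# Create a "disk file" of three blocks: AAAA BBBB CCCC
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"AAAABBBBCCCC")

# Update only block 1 in place; blocks 0 and 2 are never rewritten.
with open(path, "r+b") as f:
    f.seek(1 * BLOCK_SIZE)
    f.write(b"XXXX")

with open(path, "rb") as f:
    assert f.read() == b"AAAAXXXXCCCC"
```

This is why databases, which constantly update small regions of large files, favor block storage: a one-row change touches a few blocks, not the whole file.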

Amazon Elastic Block Store (EBS) is the block storage solution in the AWS Cloud. Before using an EBS volume, you need to attach it to an Elastic Compute Cloud (EC2) instance, and once attached, it generally cannot be attached to another EC2 instance at the same time, so it does not support simultaneous access from different compute resources. If the EC2 instance stops or terminates, the data on EBS is not lost; the volume can be attached to another EC2 instance for access.

File storage: File storage is also well known among consumers and developers alike. Nowadays, it is common to use external Network Attached Storage (NAS) to store large files or keep backups; NAS servers power this type of file storage. This solution allows data to be shared between different servers over the network.

Amazon Elastic File System (EFS) is the file-based storage solution available in the AWS Cloud. Data in EFS-backed storage can easily be shared between multiple EC2 instances.

For Windows-based high-performance workloads, Amazon FSx provides a similar file storage solution.

Object storage: Object storage is the modern storage technology of the internet. Whereas both EBS and EFS need to be mounted to EC2 instances, object storage is an entirely independent storage service: any client that speaks HTTP can communicate with it over the internet using API calls. If a stored file changes, the complete object needs to be replaced.

Amazon Simple Storage Service (S3) is the object store provided by AWS. There are multiple ways to interact with S3, whether via the AWS Console, the AWS CLI, or an AWS SDK; regardless, the underlying communication happens through API calls over HTTP. S3 provides a virtually limitless amount of storage and can power various workloads, such as data lakes.

In summary, AWS provides multiple storage solutions to support different workload requirements. Understanding the pros and cons of each storage technology will help you use them effectively.