There’s a perfect storm brewing that promises to disrupt the traditional, widely practiced approach to building and delivering Web applications.
The storm is fueled first and foremost by market opportunity. As the world becomes digital-centric, businesses either differentiate through technology or lose out to those that do. Enterprises need to develop applications faster, users want to consume the enterprise’s services in multiple ways, and developers are frustrated by having to deal with legacy systems. None of this is entirely new, but we are reaching a tipping point for development teams, challenging each of us to rethink how applications are built and delivered.
With conflict comes innovation, and the pressure on development teams has brought to life some exciting new ways to design, develop, test, deliver, and optimize applications. Microservices, containers, and APIs have become all the rage. In all my years in the industry, I don’t think I’ve ever seen such a rapid growth of interest in development tools.
In essence, this new approach means replacing a monolithic application architecture (where a few large components provide a single application targeted at a single user and device) with a set of small, single-function, loosely coupled services that communicate mostly through APIs and are easily assembled into bespoke experiences for distinct users or devices.
How microservices, containers, and APIs all fit together can seem confusing at first, so it’s useful to keep four basic principles in mind:
Applications are collections of functionalities that are simultaneously consumed by many different types of users and client devices. Don’t regard an application as a single product that meets a specific need. Your applications (now microservices) should provide the services your business delivers, and different users (clients, partners, or employees) should be able to access distinct subsets of that functionality. Ideally, your users will access those services through different client devices in a form that is tailored to their needs and context. In addition, you likely will want to expose some of those microservices as APIs so that they can be easily consumed by internal and external partners.
Microservices are the building blocks of modern applications. Application functionality, wherever possible, should be broken into lightweight, discrete services, each of which meets a particular business-focused concern. These concerns are often focused on data (a microservice might manage inventory records, publish pricing data, or generate log files, for example) or functionality (such as providing delivery estimates or performing searches).
Microservices components are loosely coupled, accessed using APIs. Microservices are a little like the code or object libraries of legacy applications, but they are not tightly bound to the application. Instead, they are accessed using an API, the specification of which forms a contract (“this is the service I provide, and this is how you consume it”). Loose coupling through APIs creates a huge degree of freedom. Each type of microservice can be created and managed independently, using the language and framework the developers are most comfortable with.
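To make the contract idea concrete, here is a minimal sketch, using only Python’s standard library, of a hypothetical inventory microservice. The endpoint, SKU field names, and in-memory data are all invented for illustration; any client in any language that honors the contract (GET /inventory/&lt;sku&gt; returns JSON, or 404 when the item is unknown) can consume the service without knowing anything about how it is implemented.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory data; a real microservice would own its datastore.
INVENTORY = {"sku-1": {"sku": "sku-1", "stock": 42}}

class InventoryHandler(BaseHTTPRequestHandler):
    """Implements the contract: GET /inventory/<sku> -> 200 + JSON, or 404."""

    def do_GET(self):
        sku = self.path.rsplit("/", 1)[-1]
        item = INVENTORY.get(sku)
        if item is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(item).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# To serve standalone:
#   HTTPServer(("127.0.0.1", 8080), InventoryHandler).serve_forever()
```

The team owning this service could later rewrite it in Go or Java; as long as the responses still satisfy the contract, no consumer needs to change.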
Deployment is based around the concept of immutable containers. Your devops team no longer needs to think in terms of traditional, “big bang” deployments. Individual services can be deployed and updated independently. Couple this with the very efficient virtualization that is enabled by Docker and other container technologies, and you get a new approach to deployment. Microservices are checked out and deployed from your configuration database. Microservices infrastructure is immutable (cannot be changed), so any changes in the configuration database provoke a new deployment of the service instances.
When you are ready to start adopting a microservices architecture and the associated development and deployment best practices, you’ll want to follow the three C’s of microservices: componentize, collaborate, and connect.
Microservices rule No. 1: Componentize
The first stage of many new IT initiatives is to identify a pilot project, and that approach is just as suitable when adopting microservices.
Pilot projects serve to explore new technologies, processes, and ways of working. It’s important to set appropriate goals, accepting that even if the pilot project doesn’t go smoothly, the lessons learned will support and streamline future initiatives.
Select a component of an existing application that can be easily separated into a microservice -- perhaps a function such as search or a set of objects that is currently represented as a group of database columns. Begin by defining a RESTful API to access this service, then plan and create an implementation using whatever development language and platform your development team is most comfortable with.
You need to select a range of tools to support the microservice. Wherever possible, keep it simple by using the tools you already know, without compromising the four principles explained above.
Your goal should be to create a microservice with an integrated process for development, test, and deployment, bringing you well along the road toward continuous delivery.
Microservices rule No. 2: Collaborate
People are more important than process. It’s key to share the lessons learned during the pilot program with the entire development team, so that when you expand the scope of your microservices initiative, they are supportive and willing to embrace the change.
As you plan to decouple your application into smaller, independent services, expect to split your existing teams into smaller, independent units. Jeff Bezos, founder and CEO of Amazon.com, famously coined the idea of a “two-pizza team” -- that is, teams should not be larger than what two pizzas can feed. This idea speaks more to the challenges of communication than the appetite of developers; the message is that communication within teams larger than a certain size becomes disproportionately complex, leading to more mistakes and slowing the pace of development.
Within each team, you must have the full set of skills needed to create a simple service -- presentation, logic, data -- and each team should take responsibility for the development and test framework of the services they create. That’s why it’s so important to be open and share the lessons of the pilot project.
Between teams, collaboration centers around two items: technology standardization and API contracts. Technology standardization ensures that each team’s output (the microservice) is deployable on the shared infrastructure. The API contract is the formal expression of how the microservice is to be consumed; provided that this contract is comprehensive and the team adheres to it, the team is free to reimplement or refactor the internals of the microservice at will.
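One lightweight way to make an API contract enforceable between teams is a shared conformance check that both the providing and the consuming team run in their test suites. The schema format below (field name mapped to expected type) is a deliberate oversimplification invented for this sketch; real teams might use OpenAPI schemas or consumer-driven contract tests instead.

```python
# A shared contract check that provider and consumer teams both run.
# The schema format (field name -> expected Python type) is hypothetical.
CONTRACT = {"sku": str, "stock": int}

def conforms(payload: dict, contract: dict) -> bool:
    """True if the payload carries every contracted field with the right type."""
    return all(isinstance(payload.get(field), typ)
               for field, typ in contract.items())
```

As long as the provider’s responses keep passing this check, the team is free to refactor the service internals without coordinating with its consumers.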
Microservices rule No. 3: Connect
The successful delivery of an application involves much more than the creation of the constituent components. These components must be connected, a presentation layer and additional services layered in, and the completed application delivered to users.
Given that microservices communicate using APIs, the most natural way to orchestrate and ensure the reliability of that communication is through a stable, reliable, persistent reverse proxy such as Nginx. A reverse proxy is a server that sits in front of your services, accepting requests from clients and forwarding them to the appropriate backend on the clients’ behalf.
The reverse proxy provides the “public face” of your application. You do not need to expose each microservice instance to the outside world; instead the reverse proxy can accept and route all of the API and other traffic on behalf of the services.
When clients need to consume a service, they therefore don’t access it directly. After all, there may be several instances of the service, and the IP addresses of these instances might be dynamic or unknown to the client. Instead, the client contacts a stable, known reverse proxy, which then forwards the request to a real service instance.
By using a proxy in this way, you can layer additional control and management over your services. When you deploy services, only the proxy layer needs to know. Also, the proxy can load balance, cache, and scale microservices independently to improve capacity and reliability in a highly efficient manner. The proxy is also a great point of control for external requirements such as authentication, security, and access control, and a place where you can implement instrumentation, rate limiting, and logging in a consistent fashion.
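The routing core of such a proxy can be sketched in a few lines: a table mapping path prefixes to pools of service instances, with round-robin selection over each pool. The addresses and prefixes here are hypothetical, and a production proxy such as Nginx adds health checks, caching, TLS, and the other controls described above.

```python
import itertools

# Hypothetical routing table: path prefix -> pool of upstream instances.
ROUTES = {
    "/inventory": ["10.0.0.11:8080", "10.0.0.12:8080"],
    "/search": ["10.0.1.21:8080"],
}

# One round-robin iterator per pool, so load spreads across instances.
_pools = {prefix: itertools.cycle(pool) for prefix, pool in ROUTES.items()}

def route(path: str) -> str:
    """Pick the next upstream instance for a request path (round robin)."""
    for prefix, pool in _pools.items():
        if path.startswith(prefix):
            return next(pool)
    raise LookupError(f"no route for {path}")
```

Because clients only ever see the proxy’s address, instances can be added to or removed from these pools without any client-side change.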
The benefit of the proxy model is that your developers do not need to code all of this functionality into each microservice instance, and the business can change the delivery rules (such as security, rate limits, or metering) quickly and easily.
In many ways, microservices and the API-driven approach are a reinvention of the service-oriented architecture approach we saw a generation ago. However, there are subtle but important differences. The modern, microservices approach is (critically) less prescriptive, more flexible, and easier to adopt in phases.
Microservices and the technologies that support them can increase your pace of innovation and the reliability of your deployments, both being key competitive advantages in this fast-paced world. With change as the only constant in our lives today, we’ll take all of the advantages we can get.
Owen Garrett is head of products at Nginx.
This story, "Three keys to successful microservices" was originally published by InfoWorld.