A service mesh is an infrastructure layer added alongside your application code that
provides security, traffic control, and observability features to your
application.
How Service Mesh Works
As shown in the following diagram, a service mesh is implemented by deploying a proxy
alongside each microservice. Each proxy receives configuration information from a managed
control plane. The mesh uses the proxies to handle incoming and outgoing requests
on behalf of your microservice application. This configuration enables the security,
traffic control, and observability features of a service mesh. Specialized ingress
gateway resources manage ingress traffic to the mesh.
Figure 1. Service Mesh Overview
The preceding diagram shows a high-level view of Service Mesh with
two services enclosed in a mesh. Each service is connected to a proxy. The proxies
handle incoming and outgoing traffic for each service. An access policy is required to
allow outgoing traffic from the mesh.
Figure 2. BookInfo Application on Service Mesh
Note
The gray rectangular boxes in the picture represent virtual deployments in the
application. The named virtual deployments include: Product Page, Details, Reviews
v1 to v3, and Ratings.
The preceding diagram shows the BookInfo sample application deployed on a Service
Mesh. Ingress traffic is routed through an ingress gateway and ingress gateway route
table to the main Product Page service. The different versions of the Reviews
service represent virtual deployments for that service. Access policies are set up
to control service-to-service communication.
Service Mesh Resource Overview
In practice, a service mesh is composed of resources that are logically mapped to key
components in your application. A mesh is the top-level resource that contains all
the mesh resources that represent the microservices within an application. Virtual
services, ingress gateways, and access policies are contained in a mesh.
A Virtual Service is a logical representation of a service in a service mesh. A
virtual service might contain virtual deployment, virtual deployment binding, and
virtual service route table resources. Different versions of a virtual service are
defined using virtual deployments. Virtual service route tables can be defined to
route traffic to specific virtual deployments. Virtual deployment bindings are
generated to associate virtual deployments with the pods in an application
cluster.
Ingress gateways manage ingress traffic to the mesh. For example, security can be
configured to enable encryption on all incoming/outgoing traffic using TLS
(Transport Layer Security). An ingress gateway might contain ingress gateway
deployments and ingress gateway route tables. Ingress gateway route tables define
rules to route traffic to virtual services. An ingress gateway deployment generates
the configuration information to deploy the proxy software to the application
cluster. Finally, access policies define access rules for communication between
virtual services and to external services.
The following diagram defines the relationships between each of the Service Mesh
resources.
Figure 3. Service Mesh Resource Hierarchy
The preceding diagram shows a service mesh that includes three components: a virtual
service, ingress gateway, and access policy. The virtual service has two components:
virtual deployment and virtual service route table. The ingress gateway component
has an ingress gateway route table component.
Note
Each resource in the hierarchy has a one-to-many relationship with child
resources.
Service Mesh Related Concepts
Service Mesh relies on various cloud and network resources. The following is a list
of key concepts related to Service Mesh.
Application
An application is a program or group of programs (service) designed to
run together and accomplish their intended tasks. Websites are a classic
example of an application that is made up of a front-end program and a
back-end program.
Certificate Authorities
A certificate authority (CA) issues certificates and subordinate CAs. CAs
exist to certify the ownership of a public key in a given certificate. A
CA certificate authenticates the CA signature on the certificates that
the CA issues. CAs exist in a hierarchy where the CA at the top is known
as the root CA and any CA beneath it in the hierarchy is a
subordinate CA.
A CA hierarchy establishes a chain of trust (or certification path) in
which each entity signs the entity below it in the chain. The root CA is
self-signed. For a certificate to be trusted, the root CA must be a
trusted root CA according to the endpoint performing the
validation.
Microservices
Microservices are an architectural approach to developing a single
application as a suite of small services. Each service runs in its own
process and communicates with lightweight mechanisms such as HTTP. These
services are built around business capabilities and use automated
deployment techniques. The services have minimal centralized management
and might be written in different programming languages using different
data storage technologies. For a more detailed definition, see: martinfowler.com:
Microservices.
Mutual Transport Layer Security (mTLS)
Mutual TLS, or mTLS for short, is a method for mutual authentication. mTLS
ensures that the parties at each end of a network connection are who they
claim to be by verifying that they both have the correct private key. The
information within their respective TLS certificates provides additional
verification.
OCI Service Operator for Kubernetes
The OCI Service Operator for Kubernetes makes it easy to create, manage,
and connect to OCI resources from a Kubernetes environment. Kubernetes
users can simply install OCI Service Operator for Kubernetes and perform
actions on OCI resources using the Kubernetes API. The OCI Service
Operator for Kubernetes removes the need to use the OCI CLI or other OCI
developer tools to interact with a service API.
OCI Service Operator for Kubernetes is based on the Operator Framework,
an open source toolkit used to manage Operators. It uses the
controller-runtime library, which provides high-level APIs and
abstractions to write operational logic and also provides tools for
scaffolding and code generation for Operators.
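For example, after the operator is installed, a mesh can be described declaratively
and applied with standard Kubernetes tooling such as kubectl apply. The following
manifest is a minimal sketch: the apiVersion, field names, and OCIDs are illustrative
assumptions based on the resource model described in this topic, not a verbatim schema.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: Mesh
    metadata:
      name: bookinfo-mesh
    spec:
      compartmentId: ocid1.compartment.oc1..exampleuniqueID       # placeholder OCID
      certificateAuthorities:
        - id: ocid1.certificateauthority.oc1..exampleuniqueID     # placeholder OCID
      mtls:
        minimum: PERMISSIVE    # assumed field for the mesh-wide minimum mTLS mode

When the manifest is applied, the operator creates and manages the corresponding
Service Mesh resource in OCI on your behalf.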
Proxy
An intermediary server between a client requesting a resource and an
application providing that resource. Instead of a request going directly
to an application such as a web page, the request goes to the proxy.
Then, the proxy authenticates and load balances the request. The proxy
talks to the application and provides the client with the resources for that
request.
Telemetry
Measuring the networking performance of applications. To measure
performance, you look at:
Latency
The time it takes for an application to respond.
Traffic
The number of requests that your service receives. Examples include
HTTP requests per second, or queries per second for the database in your
application.
Saturation
The limit of requests that your service can handle. For example,
the maximum number of database queries per second that your
service can handle.
TLS
Transport Layer Security (TLS), the successor of the now-deprecated
Secure Sockets Layer (SSL), is a cryptographic protocol designed to
provide communications security over a computer network.
Service Mesh Concepts
Given the preceding overview, this section provides a deeper dive into each of the
resources in Service Mesh. The following is a detailed list of the resources and
concepts used in Service Mesh.
Mesh
Mesh is the top-level container resource that represents the logical
boundary of application traffic between the services that reside within
it. With the resources in a service mesh, you can manage the traffic
coming into the mesh, define the services available in the mesh, and
manage the traffic between the services you define. To manage your mesh,
the following resources are used:
A Virtual Service is a logical representation of a service in a service
mesh. Each virtual service has its own configuration for the service
host name, TLS certificates (client and server), and Certificate
Authority bundles. Virtual services support multiple versions through
virtual deployments. The virtual service also contains route
tables that route virtual service ingress traffic to specific versions
of the service.
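As an illustration, a virtual service for the Reviews service from the BookInfo
example might be declared as follows through the OCI Service Operator for
Kubernetes. This is a sketch only: the apiVersion and field names are assumptions,
not a verbatim schema.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      mesh:
        ref:
          name: bookinfo-mesh    # parent mesh resource
      hosts:
        - reviews                # host name clients inside the mesh use to reach the service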
A virtual deployment is a version of a virtual service in the
mesh. Conceptually, it maps to a group of instances/pods
running a specific version of the actual microservice. Each
virtual deployment has its own configuration for service
discovery type, host name, network protocol, and
logging.
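A sketch of one version of that service, expressed as a virtual deployment, is shown
below. The fields mirror the configuration items listed above (service discovery,
host name, protocol, logging), but the exact names are assumptions rather than a
verbatim schema.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: VirtualDeployment
    metadata:
      name: reviews-v1
    spec:
      virtualService:
        ref:
          name: reviews            # parent virtual service
      serviceDiscovery:
        type: DNS
        hostname: reviews-v1       # host name the proxies resolve for this version
      listener:
        - protocol: HTTP
          port: 9080
      accessLogging:
        isEnabled: true            # per-version logging configuration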
A virtual service route table contains a list of routing
rules which are used to manage the ingress traffic to a
virtual service. Route rules route requests to specific
virtual deployments of a virtual service. The route rules
allow the developers to split traffic based on protocol and
path.
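For example, a virtual service route table that splits traffic between two versions
of the Reviews service might look like the following sketch. The field names are
illustrative; the path, protocol, and weights reflect the routing options described
above.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: VirtualServiceRouteTable
    metadata:
      name: reviews-route-table
    spec:
      virtualService:
        ref:
          name: reviews
      routeRules:
        - httpRoute:               # protocol-specific rule (HTTP in this sketch)
            path: /
            pathType: PREFIX
            destinations:
              - virtualDeployment:
                  ref:
                    name: reviews-v1
                weight: 80         # 80% of requests go to v1
              - virtualDeployment:
                  ref:
                    name: reviews-v2
                weight: 20         # 20% of requests go to v2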
A virtual deployment binding associates the pods in a
Kubernetes cluster to a virtual deployment in a mesh. This
binding resource enables automatic sidecar injection and pod
discovery for proxy software. Automatic version upgrades for
proxy software are enabled in the config map.
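A binding for the v1 pods might look like the following sketch, which selects the
pods behind an existing Kubernetes Service by label. The names and field layout are
illustrative assumptions.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: VirtualDeploymentBinding
    metadata:
      name: reviews-v1-binding
      namespace: bookinfo            # namespace of the application workload
    spec:
      virtualDeployment:
        ref:
          name: reviews-v1           # virtual deployment in the mesh
      target:
        service:
          ref:
            name: reviews            # Kubernetes Service backing the workload
            namespace: bookinfo
          matchLabels:
            version: v1              # selects only the v1 pods for this binding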
An ingress gateway allows resources that are outside of a mesh to
communicate with resources inside the mesh. The ingress gateway sits at
the edge of a service mesh, receiving incoming HTTP/TCP connections to
the mesh. You can specify host names and listening ports for inbound
traffic to your mesh. You can also choose which protocols are allowed (for example,
HTTP or TCP) and whether secure connections are required with TLS. Ingress
gateways also support mTLS, which provides encryption and authentication
for inbound connections. Encrypted connection options can be configured
with Oracle Cloud Infrastructure Certificate Service to automatically
manage certificates. Finally, you have the option of specifying a
passthrough and letting your virtual service handle connection
options.
Ingress gateways provide flexible routing policies based on protocol,
path, and port. Log and metrics options provide visibility over external
requests.
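A minimal ingress gateway sketch follows, declaring a host name and a plain HTTP
listener; TLS or mTLS modes could be configured on the listener instead. The field
names are assumptions, not a verbatim schema.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: IngressGateway
    metadata:
      name: bookinfo-ingress-gateway
    spec:
      mesh:
        ref:
          name: bookinfo-mesh
      hosts:
        - name: bookinfo-host
          hostnames:
            - bookinfo.example.com   # external host name for inbound traffic
          listeners:
            - protocol: HTTP
              port: 8080
              tls:
                mode: DISABLED       # could instead require TLS/mTLS, or use passthrough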
Each ingress gateway can have one or more route tables that
specify the rules for incoming requests and direct them to
virtual services within the mesh. Rules are based on
protocol and path. When the HTTP protocol and path are
specified, routing options are included for gRPC headers,
path rewriting, and host name rewriting. Priorities can be
assigned to each rule, and weights can be assigned to destination virtual
services.
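For example, a route table that forwards external requests for /productpage to the
Product Page virtual service might be sketched as follows (illustrative field names).

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: IngressGatewayRouteTable
    metadata:
      name: bookinfo-ingress-route-table
    spec:
      ingressGateway:
        ref:
          name: bookinfo-ingress-gateway
      routeRules:
        - httpRoute:
            ingressGatewayHost:
              name: bookinfo-host    # host defined on the ingress gateway
            path: /productpage
            pathType: PREFIX
            destinations:
              - virtualService:
                  ref:
                    name: productpage   # virtual service inside the mesh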
After an ingress gateway is created, you can deploy the proxy
software to the application cluster configured as an ingress
gateway (different from Kubernetes Ingress resources). An
ingress gateway deployment is created for this purpose. The
deployment offloads the management of the deployment and
pods backing the ingress gateway to the OCI Service Operator for
Kubernetes. An ingress gateway deployment is only required
for Kubernetes-based workloads. The deployment is local to
the cluster and is not replicated back to the Service Mesh
control plane.
Note
OCI Service Operator for Kubernetes is an open source Kubernetes add-on that allows users to manage OCI resources through the Kubernetes API. It makes it easy to create, manage, and connect to OCI resources from a Kubernetes environment using Kubernetes tooling, and can be used on Kubernetes clusters running on OCI Kubernetes Engine (OKE) or outside OCI.
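An ingress gateway deployment might be sketched as follows. The autoscaling and
service settings shown here are illustrative assumptions, meant only to convey that
the operator creates and exposes the proxy pods in the application cluster.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: IngressGatewayDeployment
    metadata:
      name: bookinfo-ingress-gateway-deployment
      namespace: bookinfo
    spec:
      ingressGateway:
        ref:
          name: bookinfo-ingress-gateway
      deployment:
        autoscaling:
          minPods: 1               # proxy pods managed for you in the cluster
          maxPods: 2
      ports:
        - protocol: TCP
          port: 8080               # gateway listener port
          serviceport: 80          # port exposed by the Kubernetes Service
      service:
        type: LoadBalancer         # expose the gateway outside the cluster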
An Access Policy sets access rules for virtual services in a mesh. By
default, all requests are denied if no access policy
exists. Access policies allow
administrators to control how services communicate with one another.
Access policies work on three categories of traffic.
Internal Mesh Traffic: The requests that are flowing between
virtual services within a mesh.
Ingress Traffic: The requests that a virtual service receives
from clients outside a mesh.
Egress Traffic: The requests that a virtual service makes to
services/applications outside a mesh.
By default, a mesh enforces a "Deny All Traffic" policy for
traffic between virtual services. Virtual services are not
able to call each other and the destination proxy instance
rejects any requests. Policy statements are required to
allow traffic between virtual services. After a policy is
created, the following rules are evaluated:
If a policy exists for the source and the destination
virtual service that matches the request, allow the
request.
If no access policies exist, deny the request.
Ingress and Egress Traffic
External Services are the set of clients and services that
invoke the services within the mesh. These external services
could be applications hosted in another service mesh or
standalone applications not part of any service mesh. From
the client proxy's perspective, these services are not in
the same service mesh.
By default, the service mesh denies requests to and from all
external services. To allow traffic into or out of the mesh, add
policy statements that allow ingress and egress traffic. Rule
evaluation is the same as the rules laid out in the
preceding definition.
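Putting the three traffic categories together, an access policy might be sketched as
follows, with one rule per allowed path: ingress from the gateway, internal
service-to-service traffic, and egress to an external service. The field names and
the external host are illustrative assumptions, not a verbatim schema.

    apiVersion: servicemesh.oci.oracle.com/v1beta1   # assumed API group and version
    kind: AccessPolicy
    metadata:
      name: bookinfo-access-policy
    spec:
      mesh:
        ref:
          name: bookinfo-mesh
      rules:
        # Ingress traffic: allow the ingress gateway to reach Product Page.
        - action: ALLOW
          source:
            ingressGateway:
              ref:
                name: bookinfo-ingress-gateway
          destination:
            virtualService:
              ref:
                name: productpage
        # Internal mesh traffic: allow Product Page to call Reviews.
        - action: ALLOW
          source:
            virtualService:
              ref:
                name: productpage
          destination:
            virtualService:
              ref:
                name: reviews
        # Egress traffic: allow Reviews to call a service outside the mesh.
        - action: ALLOW
          source:
            virtualService:
              ref:
                name: reviews
          destination:
            externalService:
              httpsHostnames:
                - "*.external.example.com"   # hypothetical external host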
Service Mesh Processes
After deployment to a Kubernetes cluster, the cluster runs three processes that are
essential to the working of Service Mesh. These processes require permissions for
proper functioning.
Mesh Kubernetes Operator: The OCI Service Operator for Kubernetes contains the
mesh Kubernetes operator that manages the lifecycle of all mesh custom
resources. The Kubernetes operator is also responsible for performing control
plane operations.
Mesh Proxies: Mesh proxies run next to the application containers and connect
with the Service Mesh backend to download various configurations for traffic
routing, security, and so on.
Logging Agent: Service Mesh provides rich access logs for observability, which are
collected through a logging agent. The logging agent connects with the Logging
backend to publish the logs.