Set up Service Mesh for your Application using kubectl

To set up Service Mesh for your application, you must configure several Service Mesh resources.

This section provides an example of managing Service Mesh with kubectl (for more information, see Managing Service Mesh with Kubernetes). This example assumes that a Kubernetes namespace <app-namespace> has been created and that the application is deployed in that namespace. You can create the Service Mesh custom resources in the same namespace as your application or in a different namespace. In this example, we use <app-namespace> for the Service Mesh custom resources.

Application design

This example assumes an application composed of the following.

  • A front-end microservice named ui.
  • Two backend microservices ms1 and ms2.
  • The backend microservice ms2 has two versions ms2-v1 and ms2-v2.

The following are the assumptions for the Kubernetes cluster.

  • Each of the microservices ui, ms1, and ms2 has a Kubernetes service defined with the same name to enable DNS-based hostname lookup for them in the cluster.
  • The ui Kubernetes service definition has a selector that matches the ui pod.
  • The ms1 Kubernetes service definition has a selector that matches the ms1 pod.
  • The ms2 Kubernetes service definition has a selector that matches the ms2-v1 and ms2-v2 pods.
  • The cluster has an ingress Kubernetes service of type load balancer to allow ingress traffic into the cluster and has a selector that matches the ui pod.
Note

Kubernetes Service Mesh resources must be created in a particular order. Arrange the YAML configuration data in the following sequence.
  1. Mesh
  2. Virtual Service
  3. Virtual Deployment
  4. Virtual Service Route Table
  5. Ingress Gateway
  6. Ingress Gateway Route Table
  7. Access Policies
  8. Virtual Deployment Binding
  9. Ingress Gateway Deployment

Whether your Kubernetes configuration resources are in a single YAML file or multiple YAML files, the ordering of resources remains the same.

Create Service Mesh Resources

To enable Service Mesh for your application, you need to create two sets of resources:

  1. Service Mesh Control Plane resources
  2. Service Mesh binding resources

In this example, we manage the Service Mesh with kubectl, creating the control plane resources as custom resources in the Kubernetes cluster. The Service Mesh binding resources are always created as custom resources in the Kubernetes cluster.

You create the Service Mesh control plane resources based on your application design. The following shows one way to model the Service Mesh resources for the preceding application design.

  1. Mesh: Create a service mesh named app-name
  2. Virtual Service: Create three virtual services (ui, ms1, ms2) corresponding to the three microservices
  3. Virtual Deployment: Create four virtual deployments, one for each version of the microservice (ui, ms1, ms2-v1, ms2-v2)
  4. Virtual Service Route Table: Create three virtual service route tables, one for each of the virtual services to define the traffic split to the virtual service versions
  5. Ingress Gateway: Create one ingress gateway to enable ingress into the mesh
  6. Ingress Gateway Route Table: Create one ingress gateway route table, to define the traffic routing for incoming traffic on the ingress gateway
  7. Access Policies: Create one access policy with rules enabling access for traffic between the microservices in the mesh
Create the Service Mesh control plane resources using a local Service Mesh configuration file on your system:
kubectl apply -f meshify.yaml
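The contents of meshify.yaml depend on your application design and tenancy. As an illustration only, the following sketch shows the first two resources in the required creation order, the Mesh and one Virtual Service, assuming the servicemesh.oci.oracle.com/v1beta1 API version; the <...> values are placeholder OCIDs, and the exact fields should be verified against the CRD reference.

```yaml
# Hypothetical excerpt of meshify.yaml showing the first two resources
# in the required creation order: the Mesh, then a Virtual Service.
# All <...> values are placeholders for OCIDs from your tenancy.
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: Mesh
metadata:
  name: app-name
  namespace: app-namespace
spec:
  compartmentId: <your-compartment-ocid>
  certificateAuthorities:
    - id: <your-certificate-authority-ocid>
---
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: VirtualService
metadata:
  name: ui
  namespace: app-namespace
spec:
  compartmentId: <your-compartment-ocid>
  mesh:
    ref:
      name: app-name
  defaultRoutingPolicy:
    type: UNIFORM
  hosts:
    - ui
    - ui.app-namespace.svc.cluster.local
```

The remaining virtual services, virtual deployments, route tables, ingress gateway, ingress gateway route table, and access policies follow in the same file, in the order listed in the note above.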

After applying the Service Mesh resources with the kubectl command, wait until all the resources are in the Active state:

  1. List all custom resources.
    kubectl get crd
    NAME                                                   CREATED AT
    accesspolicies.servicemesh.oci.oracle.com              2022-05-10T21:50:24Z
    autonomousdatabases.oci.oracle.com                     2022-05-10T21:50:24Z
    catalogsources.operators.coreos.com                    2022-05-10T21:48:21Z
    clusterserviceversions.operators.coreos.com            2022-05-10T21:48:23Z
    ingressgatewaydeployments.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    ingressgatewayroutetables.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    ingressgateways.servicemesh.oci.oracle.com             2022-05-10T21:50:24Z
    installplans.operators.coreos.com                      2022-05-10T21:48:24Z
    meshes.servicemesh.oci.oracle.com                      2022-05-10T21:50:24Z
    mysqldbsystems.oci.oracle.com                          2022-05-10T21:50:24Z
    olmconfigs.operators.coreos.com                        2022-05-10T21:48:24Z
    operatorconditions.operators.coreos.com                2022-05-10T21:48:25Z
    operatorgroups.operators.coreos.com                    2022-05-10T21:48:26Z
    operators.operators.coreos.com                         2022-05-10T21:48:26Z
    streams.oci.oracle.com                                 2022-05-10T21:50:24Z
    subscriptions.operators.coreos.com                     2022-05-10T21:48:27Z
    virtualdeploymentbindings.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    virtualdeployments.servicemesh.oci.oracle.com          2022-05-10T21:50:25Z
    virtualserviceroutetables.servicemesh.oci.oracle.com   2022-05-10T21:50:24Z
    virtualservices.servicemesh.oci.oracle.com             2022-05-10T21:50:24Z
  2. List the objects of a custom resource definition. Replace <service-mesh-crd-name> with the name of the custom resource and <crd-namespace> with the namespace where the custom resource is located.
    kubectl get <service-mesh-crd-name> -n <crd-namespace>
    NAME                                            ACTIVE   AGE
    app-namespace/app-name                          True     1h

With these steps completed, your service mesh resources are available in the console.

Service Mesh Binding Resources

The next step is to bind the Service Mesh control plane resources to your infrastructure, in this case the pods in the Kubernetes cluster. The binding resources enable automatic sidecar injection and pod discovery for the proxy software. For more information on binding resources, see: Architecture and Concepts.

The following are the binding resources.

  1. Virtual Deployment Binding: Create four virtual deployment binding resources to associate each of the four virtual deployments in the control plane to the corresponding pods representing those virtual deployments.
  2. Ingress Gateway Deployment: Create one ingress gateway deployment to deploy the ingress gateway defined in the control plane.

Enable sidecar injection in your Kubernetes namespace by running the following command. If sidecar injection is not enabled, the proxies are not injected into your application pods.

kubectl label namespace app-namespace servicemesh.oci.oracle.com/sidecar-injection=enabled
Next, bind the Kubernetes services and deployment to the service mesh with the command:
kubectl apply -f bind.yaml
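As a sketch of what bind.yaml might contain, the following defines one of the four virtual deployment bindings (for ms2-v1) and the ingress gateway deployment. The field names follow the servicemesh.oci.oracle.com/v1beta1 CRDs and should be verified against the reference; the ingress gateway name app-ingress-gateway, the autoscaling values, and the port numbers are assumptions.

```yaml
# Hypothetical excerpt of bind.yaml: one Virtual Deployment Binding
# (associating the ms2-v1 virtual deployment with the v1 pods behind
# the ms2 Kubernetes service) and the Ingress Gateway Deployment.
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: VirtualDeploymentBinding
metadata:
  name: ms2-v1-binding
  namespace: app-namespace
spec:
  virtualDeployment:
    ref:
      name: ms2-v1
      namespace: app-namespace
  target:
    service:
      ref:
        name: ms2
        namespace: app-namespace
      matchLabels:
        version: v1
---
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: IngressGatewayDeployment
metadata:
  name: app-ingress-gateway-deployment
  namespace: app-namespace
spec:
  ingressGateway:
    ref:
      name: app-ingress-gateway
      namespace: app-namespace
  deployment:
    autoscaling:
      minPods: 1
      maxPods: 1
  ports:
    - protocol: TCP
      port: 8080
      serviceport: 80
```

The bindings for ui, ms1, and ms2-v2 follow the same pattern, with matchLabels selecting the pods for each version.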

For more information on mTLS and routing policies, see:

Use Service Mesh Ingress Gateway

So far, we have set up and deployed the ingress gateway, but incoming traffic must still be redirected to it. Assuming you have an ingress service of type LoadBalancer in your Kubernetes cluster, update it to point to the ingress gateway.

kubectl apply -f ingress.yaml
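The change in ingress.yaml replaces the ingress service's selector, which previously matched the ui pod, with one that matches the ingress gateway pods. A sketch, assuming the ingress gateway deployment is named app-ingress-gateway-deployment, that its pods carry the servicemesh.oci.oracle.com/ingress-gateway-deployment label, and that the gateway listens on port 8080; verify the label and ports against your deployment.

```yaml
# Hypothetical ingress.yaml: a LoadBalancer service whose selector now
# matches the ingress gateway pods instead of the ui pod. The label and
# port values below are assumptions to verify in your cluster.
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: app-namespace
spec:
  type: LoadBalancer
  selector:
    servicemesh.oci.oracle.com/ingress-gateway-deployment: app-ingress-gateway-deployment
  ports:
    - name: http
      port: 80
      targetPort: 8080
```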

Enabling Egress Traffic for your Service Mesh

To allow egress traffic out of your service mesh, an access policy must be configured. To create an egress access policy, use the kubectl apply command. For example:

kubectl apply -f egress-access-policy.yaml

The following sample YAML configuration file creates an egress access policy. The policy defines two egress rules, one for HTTP and one for HTTPS. For an external service, three protocols are supported: HTTP, HTTPS, and TCP. The protocols correlate with the httpExternalService, httpsExternalService, and tcpExternalService keys in Kubernetes. Host names and ports can be specified for each entry.
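A sketch of such a policy under the servicemesh.oci.oracle.com/v1beta1 API version; example.com is a hypothetical external host, and the rule structure (including the hostnames key) should be checked against the access policy CRD reference.

```yaml
# Hypothetical egress-access-policy.yaml: allows all virtual services in
# the mesh to reach a hypothetical external host over HTTP and HTTPS.
apiVersion: servicemesh.oci.oracle.com/v1beta1
kind: AccessPolicy
metadata:
  name: egress-access-policy
  namespace: app-namespace
spec:
  compartmentId: <your-compartment-ocid>
  mesh:
    ref:
      name: app-name
  rules:
    - action: ALLOW
      source:
        allVirtualServices: {}
      destination:
        externalService:
          httpExternalService:
            hostnames:
              - example.com
            ports:
              - 80
    - action: ALLOW
      source:
        allVirtualServices: {}
      destination:
        externalService:
          httpsExternalService:
            hostnames:
              - example.com
            ports:
              - 443
```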

For more information on creating an access policy with the console, see: Creating an Access Policy.

For more information on creating an access policy with the kubectl, see: Managing Access Policies with kubectl.

Add Logging Support to your Mesh

Now that your application has Service Mesh support, you can add logging features. After adding logging features, you can see your logs in the OCI Logging Service.

Note

To create the policy that allows your instances to support logging, follow the instructions in Set up Policies required for Service Mesh.

Next, set up the OCI Logging service to store your access logs. Set up log scraping by creating a log group and a custom log.

  1. Create the log group:
    oci logging log-group create --compartment-id <your-compartment-ocid> --display-name <your-app-name>
  2. Get the OCID for your new log group.
    • From the console, go to Observability & Management. Under Logging, select Log Groups.
    • Click the name of the log group you created in the preceding step.
    • Locate the OCID field and click Copy. Save the OCID in a text file.
  3. Create a custom log in the log group:
    oci logging log create --log-group-id <your-log-group-ocid> --display-name <your-app-name>-logs --log-type custom
  4. Get the OCID for your new log.
    • From the console, go to Observability & Management. Under Logging, select Logs.
    • Click the name of the log you created in the preceding step.
    • Locate the OCID field and click Copy. Save the OCID in a text file.
  5. On your system, create the logconfig.json configuration file using the following sample file. Ensure that you enter the OCID for your custom log in the logObjectId field.
    {
      "configurationType": "LOGGING",
      "destination": {
        "logObjectId": "<your-custom-log-ocid>"
      },
      "sources": [
        {
          "name": "proxylogs",
          "parser": {
            "fieldTimeKey": null,
            "isEstimateCurrentEvent": null,
            "isKeepTimeKey": null,
            "isNullEmptyString": null,
            "messageKey": null,
            "nullValuePattern": null,
            "parserType": "NONE",
            "timeoutInMilliseconds": null,
            "types": null
          },
          "paths": [
            "/var/log/containers/*<app-namespace>*oci-sm-proxy*.log"
          ],
          "source-type": "LOG_TAIL"
        }
      ]
    }
  6. Create a custom agent-configuration to scrape the log files for the proxy containers:
    oci logging agent-configuration create --compartment-id <your-compartment-ocid> --is-enabled true --service-configuration file://logconfig.json --display-name <your-app-name>LoggingAgent --description "Custom agent config for mesh" --group-association '{"groupList": ["<your-dynamic-group-ocid>"]}'
Note

For information on how to configure your log, see: Agent Management: Managing Agent Configurations

Add Application Monitoring and Graphing Support

To add Kubernetes monitoring and graphing support for your application, you need to have Prometheus and Grafana installed as specified in the prerequisites. In addition, you need to configure Prometheus to enable scraping metrics from the Service Mesh proxies.

The Service Mesh proxies expose metrics on the /stats/prometheus endpoint. When creating the ClusterRole for the Prometheus service, include /stats/prometheus in the nonResourceURLs. See the following ClusterRole configuration example.
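A typical ClusterRole for a Prometheus deployment that also permits fetching the proxy metrics endpoint might look like the following; the ClusterRoleBinding to Prometheus's service account is omitted, and the resource list should be adjusted to match your Prometheus setup.

```yaml
# Standard Kubernetes RBAC for Prometheus, extended with the
# /stats/prometheus endpoint exposed by the Service Mesh proxies.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  # Allow Prometheus to fetch non-resource URLs, including the
  # metrics endpoint of the Service Mesh proxies.
  - nonResourceURLs: ["/metrics", "/stats/prometheus"]
    verbs: ["get"]
```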

Add Scrape Job

As part of the Prometheus scrape configuration, you need to add a job that scrapes metrics from the Service Mesh proxy endpoints. See the following scrape_config example.
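A sketch of such a job, assuming pod-based service discovery and that the injected sidecar container is named oci-sm-proxy (the name that appears in the log file path earlier in this section). The proxy's metrics port is left to the discovered container ports, so adjust the relabeling to your deployment.

```yaml
scrape_configs:
  - job_name: oci-service-mesh-proxies
    # The Service Mesh proxies expose metrics on /stats/prometheus.
    metrics_path: /stats/prometheus
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only targets for the injected proxy sidecar container.
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: keep
        regex: oci-sm-proxy
      # Record namespace and pod name as labels on the scraped series.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```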

More Information

For more information on managing mesh resources, see:

Next: Configure OCI Service Operator for Kubernetes Service Mesh