Odigos Documentation

Welcome to the Odigos user guide! We will show you how to get started sending telemetry data such as traces, metrics and logs within minutes, using Odigos.

Odigos is an Open-Source Observability Control Plane that allows developers to easily build observability pipelines by abstracting away the complexities of technologies such as eBPF and OpenTelemetry.

Every new line of code automatically gets metrics and distributed traces, with no developer intervention and no code changes required. With Odigos, production issues can be resolved using the best observability tools available, without the overhead of instrumenting code or managing collector agents.

The name Odigos originates from Greek, meaning "guide".

Odigos in Short

🧑‍💻 Automatic instrumentation Odigos detects the programming language of your applications and applies automatic instrumentation accordingly.
📖 Open Technologies Observability pipelines created by Odigos are based on popular, battle-tested, open source technologies: specifically, OpenTelemetry and eBPF.
⚖️ Scales with your data Odigos scales the number of collectors based on the traffic of your applications. No need to manage a complex collector topology.
📈 No learning curve Use advanced features like tail-based sampling without complex YAML configurations.
✅ Best practices out of the box API keys are saved as Kubernetes secrets, minimal collector images contain only the relevant exporters, and more.
☸️ Cloud Native Odigos is designed specifically to instrument containers deployed in Kubernetes, providing a natural experience for Kubernetes users.

Who is Odigos for?

Application developers - focus on writing code. Odigos takes care of producing metrics and traces for any open source library in use, without any code changes.

DevOps engineers - Odigos automatically deploys and scales collectors for you, according to the current traffic of your applications. No need to spend time deploying and configuring collectors.

Where do I start?

1 - Getting Started

In this tutorial, we are going to use Odigos to get automatic observability for a microservices application written in Go, Java, Python, .NET and Node.js. We will then send all the collected data to Datadog and explore it there.

Prerequisites

To follow the guide, you need the following:

  • A Kubernetes cluster. We recommend Kubernetes kind for trying Odigos out in a local development environment.
  • Helm CLI. We are going to install Odigos using a helm chart.
  • A Datadog account with an API key. Go to the Datadog website to create a new free account. Then create a new API key by navigating to Organization Settings, clicking API Keys, and creating a new key.

Creating the Kubernetes cluster

Create a new local Kubernetes cluster, by running the following command:

kind create cluster

Deploying the target application

We are going to install a fork of microservices-demo, an example of e-commerce application created by Google. We use a modified version without any instrumentation code to demonstrate how Odigos automatically collects observability data from the application.

Deploy the application using the following command:

kubectl apply -f https://raw.githubusercontent.com/keyval-dev/microservices-demo/master/release/kubernetes-manifests.yaml

Before moving to the next step, make sure that all the application pods are running (check with kubectl get pods; this may take a few moments).

Installing Odigos

The easiest way to install Odigos is to use the helm chart:

helm repo add odigos https://keyval-dev.github.io/odigos-charts/
helm install my-odigos odigos/odigos --namespace odigos-system --create-namespace

After all the pods in the odigos-system namespace are running, open the Odigos UI by running the following command:

kubectl port-forward svc/odigos-ui 3000:3000 -n odigos-system

And navigate to http://localhost:3000

Choosing where to send the data

You should now see the Odigos setup page.

Once Odigos has detected all the applications in the cluster, choose the opt-out option for application instrumentation. Opt-in mode is recommended when you want greater control over which applications are instrumented.

On the next page, select Datadog as the destination for the data. You will be required to fill in the following information:

  • Name: some unique name for the destination.
  • Site: the domain used for your Datadog account; in our example it is us5.datadoghq.com (list of available sites).
  • API Key: the API key created in the previous step.

Generating data

That’s it! Odigos will automatically do the following:

  • Instrument all the applications in the cluster:
    • Runtime languages will be instrumented using OpenTelemetry instrumentation.
    • Compiled languages will be instrumented using eBPF.
  • Deploy and configure a collector to send the data to Datadog.

Now all that is left is to generate some traffic in the application. Execute the following command to port forward into the application UI:

kubectl port-forward svc/frontend 1234:80 -n default

Navigate to http://localhost:1234 and perform some purchases.

Exploring the collected data

In a few minutes, you should see distributed traces in Datadog APM. You now have all the data needed to understand how your application is behaving, without having to do any additional work.

Any new application deployed to this Kubernetes cluster will automatically be instrumented and sent to Datadog.

Cleanup

Delete the Kubernetes cluster by running the following command:

kind delete cluster

2 - Architecture

Goals

Odigos acts as a control plane for all the observability needs in a cluster. It is responsible for:

  • Automatic instrumentation of applications
  • Automatic configuration and deployment of collectors
  • Infrastructure observability (Kubernetes nodes observability data)

High Level Architecture

These tasks are performed by 4 microservices:

  • Instrumentor
  • Autoscaler
  • Scheduler
  • UI

The different microservices communicate via the Kubernetes API server (see Custom Resources for more details).

The following diagram shows the architecture of the Odigos observability system.

Odigos Architecture

Instrumentor

The instrumentor microservice is responsible for automatically detecting applications in the cluster and instrumenting them. Automatic instrumentation is applied to the applications selected by the user in the UI. The instrumentor may change the arguments passed to the instrumentation SDK to reflect the following changes:

  • A configuration change made by the user (for example changing the sampling rate in the UI)
  • Rescheduling done by the scheduler (when the collectors pipeline changes)

Language Detection

A key part of automatically instrumenting every new application is detecting the language of the application. Once the language is detected, Odigos performs automatic instrumentation accordingly: for runtime languages it uses the appropriate OpenTelemetry instrumentation, and for compiled languages it uses eBPF instrumentation. To detect the language of an application, Odigos deploys a lang detection pod that analyzes one of the target application instances. This pod is deployed on the same node as the target instance and is able to look into the target pod's filesystem.

The lang detection pod uses the following heuristics to detect the language of the application (a toy sketch follows the list):

  • process name
  • environment variables
  • dynamically loaded libraries
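
As a toy illustration (this is not the actual Odigos detection code, and the rules below are invented for the example), such heuristics might look like this:

package detection

import "strings"

// guessLanguage applies simple heuristics to the process name and environment
// of the target container. The real detection combines more signals, such as
// the libraries loaded by the process.
func guessLanguage(processName string, env map[string]string) string {
	switch {
	case strings.Contains(processName, "java"):
		return "java"
	case strings.Contains(processName, "python"):
		return "python"
	case strings.Contains(processName, "node") || env["NODE_VERSION"] != "":
		return "javascript"
	case strings.Contains(processName, "dotnet") || env["ASPNETCORE_URLS"] != "":
		return "dotnet"
	default:
		// Compiled Go binaries carry no interpreter in the process name;
		// additional checks (for example, Go build info in the binary) apply here.
		return "unknown"
	}
}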

The lang detection pod reports the detected language by leveraging the TerminationMessagePath field of the Pod resource.
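
To make the reporting mechanism concrete, here is a minimal sketch (illustrative only, not the actual Odigos source) of how a controller could read the reported language from the pod status using client-go. The kubelet copies whatever the container writes to its TerminationMessagePath into the terminated container state:

package detection

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// detectedLanguage returns the termination message written by the lang
// detection pod (for example "go" or "java"). It assumes a configured
// client-go clientset and that the pod has already terminated.
func detectedLanguage(ctx context.Context, client kubernetes.Interface, ns, name string) (string, error) {
	pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if term := cs.State.Terminated; term != nil {
			// The kubelet copies the file at TerminationMessagePath into this field.
			return term.Message, nil
		}
	}
	return "", fmt.Errorf("pod %s/%s has not terminated yet", ns, name)
}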

Autoscaler

Autoscaler is responsible for deploying and configuring the collectors. Deployment of collectors is done in two scenarios:

  • A user action in the UI (for example, adding a new destination)
  • A change in observability traffic (for example, if one of the applications sends most of the data, the autoscaler may decide to deploy a dedicated collector for that application)

Scheduler

The scheduler service assigns applications discovered by the instrumentor to the collectors pipeline created by the autoscaler.

UI

Odigos UI is a Next.js application that allows the user to control their observability needs. The UI is not accessible outside of the cluster. To access the UI, use port forwarding by executing the following command:

kubectl port-forward svc/odigos-ui 3000:3000 -n odigos-system

3 - Telemetry Types

Odigos supports producing and correlating the following telemetry types:

  • Distributed traces
  • Metrics
  • Logs

For a telemetry type to be collected, a destination that accepts it must be configured. In addition, users can enable or disable the collection of a telemetry type on a per-application basis. If none of the applications are configured to collect a telemetry type, the telemetry type will not be collected.

Traces

Distributed traces collected by Odigos will automatically include spans for popular open source projects such as HTTP clients and servers, gRPC clients and servers, Database clients, and many more.

In addition, users can enrich their distributed traces with manually created spans by using the relevant OpenTelemetry APIs.
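
For example, a Go service could add a custom child span with the standard OpenTelemetry API (this is generic OpenTelemetry usage, not an Odigos-specific API; the function and tracer names are made up for illustration):

package checkout

import (
	"context"

	"go.opentelemetry.io/otel"
)

// chargeCard creates a manual span that attaches to whatever span is already
// active in ctx (for example, the server span created by auto-instrumentation).
func chargeCard(ctx context.Context, amount int) error {
	ctx, span := otel.Tracer("checkout").Start(ctx, "chargeCard")
	defer span.End()

	// ... business logic using ctx and amount ...
	_ = ctx
	_ = amount
	return nil
}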

The collection of traces is achieved by combining two open source technologies:

  • OpenTelemetry for languages with JIT compilation such as Python, Java, .NET and JavaScript.
  • eBPF for compiled languages such as Go.

Being based on popular open-source standards allows Odigos to automatically support a huge number of libraries and frameworks.

Metrics

There are three kinds of metrics that Odigos supports:

  • Metrics related to the running of the application (number of HTTP requests, latency, DB connections, etc.)
  • Metrics related to the language runtime (GC, threads, heap, etc.)
  • Metrics related to the host environment (CPU, memory, disk, etc.)

Applicative Metrics

Application-related metrics are derived from the distributed traces described above. Many observability vendors automatically compute application metrics from distributed traces; for example, the number of requests served by the application is computed by counting the number of spans with the http.server label. If a configured destination does not compute application metrics automatically (like Prometheus + Tempo), Odigos computes them on its own by invoking the spanmetrics processor.

Runtime Metrics

These metrics describe the behavior of the language runtime, such as garbage collection, threads, heap, etc. Runtime metrics are collected by the same mechanism that collects distributed traces.

Host Metrics

Host-related metrics are collected via the host metrics receiver.

Logs

Currently, Odigos will ship logs written by the application to the stdout or stderr of the process. Correlation to other telemetry types will be done according to timestamp and resource.

We are in the process of shipping logs using the same mechanism that collects distributed traces; this will allow users to correlate logs with distributed traces and metrics more accurately.

4 - Custom Resources

The different components of the Odigos observability control plane work together to achieve observability for the cluster. The interaction between the components is performed via the Kubernetes API server.

The autoscaler, instrumentor and scheduler are Kubernetes Operators. They coordinate by writing and watching the Odigos custom resources in the Kubernetes API server.
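
As a rough illustration of this pattern (a generic controller-runtime sketch, not the actual Odigos source), each operator reconciles whenever one of the custom resources it watches changes:

package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	odigosv1 "github.com/keyval-dev/odigos/api/v1alpha1"
)

// DestinationReconciler reacts to changes in Destination objects.
type DestinationReconciler struct {
	client.Client
}

func (r *DestinationReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var dest odigosv1.Destination
	if err := r.Get(ctx, req.NamespacedName, &dest); err != nil {
		// The object may have been deleted since the event was queued.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// React to the change, for example by regenerating the collector configuration.
	return ctrl.Result{}, nil
}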

Odigos adds the following custom resources to the Kubernetes API server:

Destination

This custom resource is used to define the backend destinations for the observability data. Destinations can be either vendors (Datadog, Honeycomb, etc.) or on-premises systems.

The destination object holds any data that is needed to connect to the backend system. Notice that sensitive fields such as API keys are stored in a Kubernetes secret and referenced by the destination object.

InstrumentedApplication

This object is used to define the applications that should be instrumented. There is a single InstrumentedApplication object per selected Deployment / StatefulSet. The InstrumentedApplication object holds information about the application that should be instrumented, such as the detected programming language. InstrumentedApplication objects are created and managed by the instrumentor.

CollectorsGroup

The CollectorsGroup object is used to define a group of collectors that have a shared role. For example, a collectors group might include collectors that are deployed as a DaemonSet and are responsible for collecting telemetry data, or collectors that are deployed as a Deployment and are responsible for shipping the data to the selected destinations.

5 - Contribution Guidelines

5.1 - Adding New Observability Destination

There are tens if not hundreds of different observability destinations. Odigos' goal is to provide a seamless and easy way to ship observability data to any one of them.

In this guide, you will learn how to contribute a new destination to Odigos. We will create a new dummy destination called mydest. Creating a new destination involves two steps:

  1. Extending the UI for the new destination
  2. Adding the collector configuration for the new destination

User Interface

For our new destination to be visible in the UI, we need to make several changes to the UI code:

  1. Go to the ui/img/vendor directory and add your logo file, for example mydest.svg. Please use SVG format for the logo.

  2. Go to the ui/vendors directory and create a new file called mydest.tsx.

  3. Start by adding the required import statements (notice that the MyDestLogo import points to the file you created in step 1).

import {
  ObservabilityVendor,
  ObservabilitySignals,
  VendorObjects,
} from "@/vendors/index";
import MyDestLogo from "@/img/vendor/mydest.svg";
import { NextApiRequest } from "next";

  4. Create a class called MyDest that implements the ObservabilityVendor interface:
export class MyDest implements ObservabilityVendor {
  name = "mydest";
  displayName = "My Destination";
  supportedSignals = [ObservabilitySignals.Traces, ObservabilitySignals.Metrics];

  getLogo = (props: any) => {
    return <MyDestLogo {...props} />;
  };

Everything should be self-explanatory. Specify the name (unique lowercase string) of the destination, the display name (human-readable string) and the observability signals your destination supports. The getLogo method should return a React component that will be used as the logo for the destination.

  5. Specify the required fields for communicating with the destination by implementing the getFields method:
getFields = () => {
  return [
    {
      displayName: "URL",
      id: "url",
      name: "url",
      type: "url",
    },
    {
      displayName: "Region",
      id: "region",
      name: "region",
      type: "text",
    },
    {
      displayName: "API Key",
      id: "apikey",
      name: "apikey",
      type: "password",
    },
  ];
};

This method returns an array of IDestField objects. The type field corresponds to the type of input field that will be displayed in the UI.

  6. Add the method toObjects. This method converts the data received from the UI to the data that will be persisted in the Kubernetes data store. Each destination can have non-secret data (like region), defined in the Data field of the returned object, and secret data (like an API key), defined in the Secret field of the returned object.
toObjects = (req: NextApiRequest) => {
  return {
    Data: {
      MYDEST_URL: req.body.url,
      MYDEST_REGION: req.body.region,
    },
    Secret: {
      MYDEST_API_KEY: Buffer.from(req.body.apikey).toString("base64"),
    },
  };
};

Notice that Kubernetes requires that the secret data be base64 encoded.

  7. Add the method mapDataToFields, which converts the data received from the Kubernetes data store to the data displayed in the UI. This method is invoked when the user edits the destination in the UI.
mapDataToFields = (data: any) => {
  return {
    url: data.MYDEST_URL,
    region: data.MYDEST_REGION,
  };
};

  8. Finally, register the destination you just created by modifying the ui/vendors/index.ts file:
// Add an import for the new destination
import { MyDest } from "@/vendors/mydest";

// Add the new destination to the list of vendors
const Vendors = [/* List of existing vendors... */ new MyDest()];

For a complete UI implementation example, see one of our existing vendors.

Collector Configuration

Now that our new vendor can be persisted/loaded in the Kubernetes data store, we need to implement the collector configuration.

  1. Go to common/dests.go and add your new destination to the DestinationType enum. Make sure the value is the same as the name property of the destination UI class.
  2. Go to autoscaler/controllers/gateway/config directory and create a new file called mydest.go with the following content:
package config

import (
	odigosv1 "github.com/keyval-dev/odigos/api/v1alpha1"
	commonconf "github.com/keyval-dev/odigos/autoscaler/controllers/common"
	"github.com/keyval-dev/odigos/common"
)

type MyDest struct{}

func (m *MyDest) DestType() common.DestinationType {
	return common.MyDestDestinationType
}

func (m *MyDest) ModifyConfig(dest *odigosv1.Destination, currentConfig *commonconf.Config) {
	// Modify the config here
	if isTracingEnabled(dest) {
		currentConfig.Exporters["otlp/mydest"] = commonconf.GenericMap{
			"endpoint": "https://mydest.com:4317",
			"headers": commonconf.GenericMap{
				"x-mydest-header-apikey": "${MYDEST_API_KEY}",
			},
		}

		currentConfig.Service.Pipelines["traces/mydest"] = commonconf.Pipeline{
			Receivers:  []string{"otlp"},
			Processors: []string{"batch"},
			Exporters:  []string{"otlp/mydest"},
		}
	}
}
  • The method DestType returns the enum value of the destination added earlier.
  • The method ModifyConfig is called with the dest object which holds the data received from the UI and the currentConfig object. The currentConfig object contains the current configuration of the gateway collector. Modify this object to include the OpenTelemetry configuration needed by your destination. Make sure to give any exporter or pipeline a unique name in order to avoid conflicts (use the convention traces/<dest-name> for traces pipelines and otlp/<dest-name> for OpenTelemetry exporters). You can assume a basic configuration is already provided in the currentConfig object, for details see getBasicConfig method in autoscaler/controllers/gateway/config/root.go file.
  • You can use the utility methods isTracingEnabled, isMetricsEnabled and isLoggingEnabled to determine which signals the user selected for the destination and configure the collector accordingly; a sketch of a metrics pipeline follows below.
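
For instance, a hedged sketch of how the same ModifyConfig body could also wire up a metrics pipeline when the user has selected metrics for this destination (the exporter name, endpoint and pipeline name here are placeholders for illustration):

// Inside the same ModifyConfig method shown above (illustrative only).
if isMetricsEnabled(dest) {
	currentConfig.Exporters["otlp/mydest-metrics"] = commonconf.GenericMap{
		"endpoint": "https://mydest.com:4317",
		"headers": commonconf.GenericMap{
			"x-mydest-header-apikey": "${MYDEST_API_KEY}",
		},
	}

	currentConfig.Service.Pipelines["metrics/mydest"] = commonconf.Pipeline{
		Receivers:  []string{"otlp"},
		Processors: []string{"batch"},
		Exporters:  []string{"otlp/mydest-metrics"},
	}
}
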
  3. The last step is to register the new destination struct in the autoscaler/controllers/gateway/config/root.go file:
var availableConfigers = []Configer{/* List of existing destinations  */, &MyDest{}}

That’s it! Now you can use your new destination in the UI and send data to it.

Please submit a PR to the Odigos git repository; we are happy to accept new destinations.