How to Auto-Instrument Python Servers with OpenTelemetry for Performance and Error Monitoring

Panchanan Panigrahi • Apr 5, 2024

As a developer, it's crucial to have deep observability into your Python application servers to identify and resolve issues quickly. OpenTelemetry provides a powerful solution for automatic instrumentation, simplifying the process of gaining insights into your Python application's performance and stability.

In this guide, we will walk you through the process of instrumenting your Python servers with OpenTelemetry. The steps apply to a variety of frameworks such as Flask, Django, FastAPI, and Starlette.

You will learn how to collect logs, metrics, and traces, and how to forward this telemetry to any OpenTelemetry-compatible backend for monitoring and analysis. With OpenTelemetry, you can easily debug errors, optimize performance, and gain a deeper understanding of your Python servers.

What is OpenTelemetry?

OpenTelemetry is an open source observability framework used to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) for applications.

OpenTelemetry has rapidly become a vital tool for observability in modern applications. A few reasons why you should consider using it in your projects:

  • Auto Instrumentation Included: OpenTelemetry offers out-of-the-box auto-instrumentation for a variety of Python libraries such as Django, Flask, FastAPI, requests, SQLAlchemy, and many more, as well as for a variety of languages beyond Python.
  • Vendor Neutral: Because OpenTelemetry adopts a vendor-neutral strategy, you can select whatever monitoring and tracing backend you want without worrying about being locked into a particular provider.
  • Open Source Standard: OpenTelemetry, as an open-source initiative, provides community-driven development, transparency, and ongoing innovation.
  • Fastest Growing CNCF Project: Within the Cloud Native Computing Foundation (CNCF), OpenTelemetry is the fastest-growing project, a sign of its rising community support and rapid adoption.

Benefits of Instrumenting with OpenTelemetry

Instrumenting Python servers with OpenTelemetry can help with:

  • Performance Monitoring: With OpenTelemetry tracing, it’s possible to monitor server response times and trace slowdowns within the application. Whether a performance issue is caused by a slow DB query, N+1 queries, or other bottlenecks in the code, OpenTelemetry makes it much easier to discover the root cause.
  • Alert & Resolve Errors: Errors within the application can be captured by OpenTelemetry and correlated with other request information. This makes it easier to understand the root cause of an error and have surrounding context to resolve the issue.
  • Distributed Tracing: With OpenTelemetry instrumentation, it’s possible to trace requests across multiple microservice calls while keeping it all under one trace context. This way, it’s trivial to understand upstream and downstream calls being made within a complex microservice environment.

Setting Up OpenTelemetry Instrumentation

Example Application Setup

If you don't already have a Python application to instrument, feel free to clone our FastAPI Example GitHub Repo to get started. After cloning the repository, you can set up a virtual environment and install the dependencies by running the following commands:

git clone https://github.com/hyperdxio/fastapi-opentelemetry-example.git
cd fastapi-opentelemetry-example
 
# Create a Virtual Env
virtualenv -p python3 venv
source venv/bin/activate
 
# Install Dependencies
pip install -r requirements.txt

Now you can run the application for testing:

uvicorn main:app

The command above will start the FastAPI app on port 8000. You can access the app by visiting http://localhost:8000 in your browser. To stop the application, press Ctrl+C in your terminal.
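
If you'd rather verify from the command line, a quick request works too (assuming the example app serves a route at the root path):

curl http://localhost:8000/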

Setting up the OpenTelemetry Instrumentation Package

Now it's time to install the OpenTelemetry dependencies. To get started with a generic OpenTelemetry installation, you can install the following packages:

pip install opentelemetry-distro opentelemetry-exporter-otlp

Alternatively, the HyperDX distribution, which makes it easier to send data to HyperDX by default (but is still compatible with any OpenTelemetry backend), can be installed via:

pip install hyperdx-opentelemetry

Afterwards, you’ll want to install the instrumentation packages for your specific application, so that libraries such as Redis, SQLAlchemy, requests, and more can be automatically traced. OpenTelemetry can detect and install the relevant packages for you by running the opentelemetry-bootstrap command:

opentelemetry-bootstrap -a install
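
If you'd like to preview what will be installed first, running opentelemetry-bootstrap without the -a install flag simply prints the instrumentation packages it detected in your environment; for the FastAPI example you'd expect entries such as opentelemetry-instrumentation-fastapi in the output:

# Print the detected instrumentation packages without installing them
opentelemetry-bootstrap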

Configure Environment Variables

After installing the OpenTelemetry packages, you’ll want to configure a few environment variables to ensure that your telemetry is tagged and sent to the right destination. If you’re using the generic OpenTelemetry install, you’ll want to set OTEL_EXPORTER_OTLP_ENDPOINT, along with any other configuration your chosen backend requires.

Additionally, you’ll want to set OTEL_SERVICE_NAME to the name your telemetry should be tagged with; it can be any name you want.

An example configuration for sending to a generic OpenTelemetry collector:

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>'

If you’re using the HyperDX distribution to send to HyperDX cloud, you’ll just need to configure the HYPERDX_API_KEY environment variable for your account:

export HYPERDX_API_KEY='<YOUR_HYPERDX_API_KEY_HERE>' \
OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>'

Run the Application with the OpenTelemetry Python Auto-Instrumentation

Now we can run the application with OpenTelemetry auto-instrumentation by using the opentelemetry-instrument command. This automatically initializes the OpenTelemetry Python SDK and adds the instrumentation to the application.

In our example FastAPI application, it'd look like the following:

opentelemetry-instrument uvicorn main:app

Or in a Flask application, it'd look like the following:

opentelemetry-instrument flask run -p 8000

Additional Configuration:

  • If using gunicorn or uWSGI, you'll need to add a few extra lines of code to ensure that the OpenTelemetry SDK is initialized correctly in each worker process (see the sketch after this list).
  • Using uvicorn with --reload or --workers will not work with OpenTelemetry auto-instrumentation; we recommend using gunicorn for multiple workers instead. Alternatively, a few extra lines of code can be added to support uvicorn’s subprocess model.
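
For gunicorn, a common pattern (adapted from the fork-process-model example in the OpenTelemetry Python docs) is to initialize the SDK in a post_fork hook, so that each worker process gets its own exporter. A minimal gunicorn.conf.py sketch, assuming the OTLP HTTP exporter and an illustrative service name:

# gunicorn.conf.py
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)
    # Initialize the SDK after the fork so the exporter's background
    # thread is created inside each worker process
    resource = Resource.create(attributes={"service.name": "my-fastapi-service"})
    trace.set_tracer_provider(TracerProvider(resource=resource))
    trace.get_tracer_provider().add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter())
    )

You'd then start the server with gunicorn -c gunicorn.conf.py main:app.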

Troubleshooting Instrumentation Issues

If you’re having issues with the OpenTelemetry instrumentation, you can export the telemetry to your console to inspect what is (or isn't) being emitted:

opentelemetry-instrument --traces_exporter console --metrics_exporter console --logs_exporter console uvicorn main:app

If you're using the HyperDX distribution, you can simply enable the DEBUG flag to see more verbose output:

DEBUG=true \
HYPERDX_API_KEY='<YOUR_HYPERDX_API_KEY_HERE>' \
OTEL_SERVICE_NAME='<NAME_OF_YOUR_APP_OR_SERVICE>' \
opentelemetry-instrument uvicorn main:app

Inspecting OpenTelemetry Telemetry

Now, with the application instrumented and sending telemetry to our backend, we can take a look at the logs, traces, and metrics the OpenTelemetry instrumentation automatically emits for our Python application.

Here we’ll take a look at the telemetry using HyperDX, an open source OpenTelemetry backend. If you’d like to follow along, you can spin up a single-container instance with the following command:

docker run -p 8000:8000 -p 4318:4318 -p 4317:4317 -p 8080:8080 -p 8002:8002 hyperdx/hyperdx-local

Inspecting a Single Request Trace

OpenTelemetry automatically generates a trace for every API request, showing the timings and hierarchy of the events and spans that occur during the request.

In the trace below, you can see the overall request timing and response code, as well as the timings of intermediate actions such as running DB queries or rendering the Jinja template returned to the user.

[Image: Single Request Trace]
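
The spans above are generated automatically, but you can also enrich the active request span with your own attributes from inside a handler. A minimal sketch, using an illustrative route and attribute name:

from fastapi import FastAPI
from opentelemetry import trace

app = FastAPI()

@app.get("/orders/{order_id}")
def get_order(order_id: str):
    # Attach business context to the span auto-created for this request
    trace.get_current_span().set_attribute("app.order_id", order_id)
    return {"order_id": order_id}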

Debugging Errors with Logs & Traces

Instrumenting with OpenTelemetry also ties logs and traces together, making it easy to debug an issue with the full context of the original request when an error occurs.

[Image: Error Logs with Traces]
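
With OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true set (as configured earlier), records emitted through Python's standard logging module are captured and correlated with the active trace. A sketch of what that looks like in a handler, with an invented endpoint and failure:

import logging

from fastapi import FastAPI

app = FastAPI()
logger = logging.getLogger(__name__)

@app.get("/checkout")
def checkout():
    try:
        raise RuntimeError("payment gateway timed out")  # stand-in for a real failure
    except RuntimeError:
        # This record is emitted inside the request's span, so the backend
        # can link the error log back to the full trace
        logger.exception("checkout failed")
        raise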

Graphing Application Performance

Beyond visualizing individual requests, you can also zoom out and look at tracing information in aggregate to view key metrics such as request throughput, error rates, and latencies, and dive into the specific endpoints that are causing issues: the kinds of insights you’d expect from an Application Performance Monitoring (APM) tool.

[Image: Application Performance Monitoring]

Next Steps

With OpenTelemetry’s Python auto-instrumentation, we can easily get performance information out of our servers and tie that debugging context to the logs and errors inside our application, making issues far easier to resolve.

You can continue to customize the auto-instrumentation via the various configurations available to fine-tune which libraries or endpoints are instrumented, how logs are correlated, and more.
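
For example, two environment variables the Python instrumentation supports are OTEL_PYTHON_EXCLUDED_URLS, which skips tracing for matching endpoints (handy for noisy health checks), and OTEL_PYTHON_DISABLED_INSTRUMENTATIONS, which turns off specific library integrations; the values below are illustrative:

# Skip tracing health-check endpoints and disable the redis integration
export OTEL_PYTHON_EXCLUDED_URLS="healthcheck,ping" \
OTEL_PYTHON_DISABLED_INSTRUMENTATIONS="redis"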

Beyond auto-instrumentation, it can be useful to manually instrument performance-sensitive parts of your application, or to tune the OpenTelemetry SDK to capture additional information depending on the library integration.
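
As a rough sketch of what a manual span looks like (the function, span, and attribute names here are illustrative):

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def process_payment(order_id: str):
    # Creates a child span under whatever span is currently active,
    # e.g. the auto-instrumented request span
    with tracer.start_as_current_span("process_payment") as span:
        span.set_attribute("app.order_id", order_id)
        # ... performance-sensitive work here ...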

If you’re interested in sending your OpenTelemetry instrumentation to an OpenTelemetry-native backend, you can try HyperDX, either the self-hosted open source version or the cloud-hosted version.