This page shows you how to create API models when your service lives in a Docker container, or is started by Docker Compose. Docker networking can be tricky to get right: by default, one container cannot see traffic destined for another, so special configuration is needed. We will show a few different options that let Akita see your API traffic.

If you are running your application in Kubernetes, see Capturing Packet Traces in Kubernetes.

❗️

Windows support

We do not yet have official Windows support for the Akita CLI. While our users have gotten Akita to work as a container within the Windows Subsystem for Linux (WSL), tracing Windows APIs from a Docker container in WSL does not currently work.

Attach the Akita CLI to a running container

This method is the most straightforward way of capturing a single container's network traffic. However, because it involves a separate "docker run" step, it cannot be used as part of a Docker Compose script. It will also not work if your container is connected only to an internal network.

Start your service first; this can be done through docker run or docker-compose as usual. Find the name of the container you want to monitor. Then, start the Akita CLI and specify that it should attach to that container's network stack:

docker run --rm --network container:your-container-name \
  -e AKITA_API_KEY_ID=... \
  -e AKITA_API_KEY_SECRET=... \
  akitasoftware/cli:latest apidump \
  --service your-service-name \
  --filter "port 80"

This lets the Akita CLI see all traffic to and from the service running in the specified container. Replace "port 80" with the port number your service uses inside the container, or omit the filter to capture your service's outgoing traffic as well.

As shown in the diagram above, Akita will use the same network device as the service being monitored. It will not have visibility into other containers.
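
If you are unsure of the container name, docker ps lists the running containers; for example:

# List running containers with their names, images, and published ports
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"

The value in the Names column is what goes after container: in the --network flag.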

Attaching to a container in Docker Compose

This configuration can be created in Docker Compose by specifying the container network as the network_mode:

version: '3'
services:
  ...
  akita:
    container_name: akita
    image: akitasoftware/cli:latest
    environment:
     - AKITA_API_KEY_ID=apk_xxxxxxxx
     - AKITA_API_KEY_SECRET=xxxxxxx
    network_mode: "container:your-container-name"
    entrypoint: /akita apidump --service your-service
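
Note that the container being attached to must exist before the akita service starts. If the monitored service is defined in the same Compose file, you can hint at the start order with depends_on; here is a minimal sketch, where your-service, your-container-name, and your-image are placeholders for your own names:

version: '3'
services:
  your-service:
    container_name: your-container-name
    image: your-image:latest

  akita:
    container_name: akita
    image: akitasoftware/cli:latest
    environment:
      - AKITA_API_KEY_ID=apk_xxxxxxxx
      - AKITA_API_KEY_SECRET=xxxxxxx
    network_mode: "container:your-container-name"
    depends_on:
      - your-service
    entrypoint: /akita apidump --service your-service

Then docker-compose up brings up both containers, with the monitored service started first. When both services live in the same Compose file, Compose also accepts the equivalent form network_mode: "service:your-service".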

Attach the Akita CLI to the host network

If you wish to monitor multiple containers simultaneously, or if the container you want to monitor is connected only to an internal network, you can attach the Akita CLI to the host network. This works with either docker run or docker-compose. Here's an example command line for the former:

docker run --rm --network host \
  -e AKITA_API_KEY_ID=... \
  -e AKITA_API_KEY_SECRET=... \
  akitasoftware/cli:latest apidump \
  --service your-service-name \
  --filter "port 80"

You should still use the port number from "inside" your service's container: Akita's packet capture sees the untranslated port, not the externally visible one. In this mode, Akita can capture any container's external network traffic, and even traffic on internal networks.
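
For example, to watch two services at once in host mode, you can widen the filter. This sketch assumes the two services listen on ports 80 and 8080 inside their containers, and that the filter accepts the same kind of port expression as the examples above:

docker run --rm --network host \
  -e AKITA_API_KEY_ID=... \
  -e AKITA_API_KEY_SECRET=... \
  akitasoftware/cli:latest apidump \
  --service your-service-name \
  --filter "port 80 or port 8080"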

Host mode is only available for Linux-based Docker.

To use host networking with the Akita CLI in Docker Compose, specify network_mode: "host" in the YAML definition, as in this example:

version: '3'
services:
  ...
  akita:
    container_name: akita
    image: akitasoftware/cli:0.17.0
    environment:
     - AKITA_API_KEY_ID=apk_xxxxxxxx
     - AKITA_API_KEY_SECRET=xxxxxxx
    network_mode: "host"
    entrypoint: /akita apidump --service your-service

Use the Akita CLI as a wrapper for your server process

The Akita CLI can start a child process while collecting traces. You can use this feature to change the entry point of your container to the Akita CLI, have Akita start your main server process, and capture a trace for the entire lifetime of that process. This approach is more intrusive, since it requires changes to your container build process, but it works no matter how the container is run.

To capture traces this way:

  1. Install the Akita CLI binary in your container at build time, for example in its usual location at /usr/local/bin/akita.
  2. Change the entry point of the container to call the akita apidump command, with the normal server command line specified using the -c option:
...
ENTRYPOINT ["/usr/local/bin/akita", "apidump",
            "-c", "normal server command line here", 
            "-u", "root", "--service", "your service here" ]

Separating different deployments

You may want to split the models you've created based on which cluster or environment they've come from. You can use the --deployment flag or the AKITA_DEPLOYMENT environment variable to attach a tag to the traces, identifying their context. All traces with the same deployment name will be combined into a single model.

The default deployment name is "default".
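
For example, to tag traces coming from a staging cluster ("staging" here is just an illustrative name), add the environment variable to any of the docker run commands above:

docker run --rm --network host \
  -e AKITA_API_KEY_ID=... \
  -e AKITA_API_KEY_SECRET=... \
  -e AKITA_DEPLOYMENT=staging \
  akitasoftware/cli:latest apidump \
  --service your-service-name

Traces tagged staging will then be combined into their own model, separate from those tagged default.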

Running the Akita Daemon on an internal network

When running the Akita CLI in daemon mode (see Staging and Production Express.js via Middleware), the middleware must be able to reach the Akita daemon, and the Akita daemon must be able to reach the Akita cloud services. You can use Docker Compose to ensure that the service being monitored and the Akita CLI are connected to the same network.

If the service's network is internal, then the Akita CLI must also be connected to an additional, external network. Here is an example Docker Compose file that shows how this can be done:

version: '3'
services:
  test:
    container_name: test
    image: test-middleware:latest
    networks:
      - int-network

  akita_daemon:
    container_name: akita_daemon
    image: akitasoftware/cli:0.16.2
    environment:
      - AKITA_API_KEY_ID=apk_xxxxxxxx
      - AKITA_API_KEY_SECRET=xxxxxxx
    networks:
      - int-network
      - ext-network
    entrypoint: /akita --debug daemon --port 50080 --name my-daemon-name

networks:
  ext-network:
    driver: bridge
  int-network:
    driver: bridge
    internal: true

In the Express.js middleware configuration, the daemon host would be set to akita_daemon:50080. Docker's DNS setup ensures that the correct IP address is used.
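
To check that the daemon's name resolves from the monitored container, you can run a lookup inside it; this assumes the test image includes the getent utility:

# Resolve the daemon's name from inside the "test" container
docker exec test getent hosts akita_daemon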

Providing the Akita credentials in a volume

Many of the examples above provide the Akita credentials to the container as environment variables. Because the environment variables are set on the command line, they may be exposed, for example in your shell history or in the output of docker inspect. A better approach is to mount the file containing your Akita credentials into the container, using the following flag on the Docker command line:

docker run ... \
  --volume ~/.akita/credentials.yaml:/root/.akita/credentials.yaml:ro \
  ...

This maps your current user's credentials.yaml file, created with "akita login", into the container.

You can also use Docker's --env-file argument to specify environment variables in a file instead of on the command line.
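
For example, with the credentials in a file (here called akita.env, a name of your choosing), the keys never appear on the command line itself:

# akita.env -- keep this file out of version control
AKITA_API_KEY_ID=apk_xxxxxxxx
AKITA_API_KEY_SECRET=xxxxxxx

docker run --rm --env-file ./akita.env \
  akitasoftware/cli:latest apidump \
  --service your-service-name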

