This page will show you how to create API models when your service lives in a Docker container, is started by Docker Compose, or runs in an environment like Amazon's Elastic Container Service. Docker networking can be difficult to get working correctly: by default, one container cannot see traffic destined for another container, so special configuration is needed. We will show a few different options that allow Akita to see your API traffic.
If you are running your application in Kubernetes, see Capturing Packet Traces in Kubernetes.
The following examples use `akita learn` to create both a trace and an API model. If you only want to create a trace, you can use the `akita apidump` command instead.
We do not yet have official Windows support for the Akita CLI. While our users have gotten Akita to work as a container within the Windows Subsystem for Linux (WSL), it does not currently work to trace Windows APIs from a Docker container in WSL.
This method is the most straightforward way of capturing a single container's network traffic. However, because it involves a separate `docker run` step, it cannot be used as part of a Docker Compose script. It will also not work if your container is connected only to an internal network.
Start your service first; this can be done through `docker run` or `docker-compose` as usual. Find the name of the container you want to monitor. Then, start the Akita CLI and specify that it should attach to that container's network stack:
```shell
docker run --rm --network container:your-container-name \
  -e AKITA_API_KEY_ID=... \
  -e AKITA_API_KEY_SECRET=... \
  akitasoftware/cli:latest learn \
  --service your-service-name --filter "port 80"
```
This allows the Akita CLI to see all the traffic to and from the service running in the specified container. Replace "port 80" with the port number your service uses (inside the container), or omit the filter to also capture your service's outgoing traffic.
As shown in the diagram above, Akita will use the same network device as the service being monitored. It will not have visibility into other containers.
This configuration can be created in Docker Compose by specifying the container network as the `network_mode`:

```yaml
version: '3'
services:
  ...
  akita:
    container_name: akita
    image: akitasoftware/cli:0.17.0
    environment:
      - AKITA_API_KEY_ID=apk_xxxxxxxx
      - AKITA_API_KEY_SECRET=xxxxxxx
    network_mode: "container:your-container-name"
    entrypoint: /akita learn --service your-service
```
`akita learn` creates a model when the trace is stopped with SIGINT. If you use `docker-compose kill`, the Akita container will not be shut down gracefully, and no model will be generated. (You can still create a spec from the trace with the `apispec` command.) `docker-compose kill -s SIGINT` or `docker-compose down` will shut down the container gracefully, giving it time to stop the trace and request that the spec be created.
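For example, with the Akita service named `akita` as in the Compose file above, either of the following stops the trace gracefully:

```shell
# Send SIGINT to just the Akita container, so the trace is
# stopped cleanly and a model is generated:
docker-compose kill -s SIGINT akita

# Or shut down the whole application gracefully:
docker-compose down
```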
If you wish to monitor multiple containers simultaneously, or if the container you want to monitor is connected only to an internal network, you can attach the Akita CLI to the host network. This works with either `docker run` or `docker-compose`. Here's an example command line for the former:
```shell
docker run --rm --network host \
  -e AKITA_API_KEY_ID=... \
  -e AKITA_API_KEY_SECRET=... \
  akitasoftware/cli:latest learn \
  --service your-service-name --filter "port 80"
```
You should still use the port number from "inside" your service's container; Akita's packet capture sees the untranslated port instead of the externally-visible port number. In this mode, Akita can capture any container's external network traffic, and even traffic on internal networks.
Host mode is only available for Linux-based Docker.
To use host networking with the Akita CLI in Docker Compose, specify `network_mode: "host"` in the YAML definition, as in this example:
```yaml
version: '3'
services:
  ...
  akita:
    container_name: akita
    image: akitasoftware/cli:0.17.0
    environment:
      - AKITA_API_KEY_ID=apk_xxxxxxxx
      - AKITA_API_KEY_SECRET=xxxxxxx
    network_mode: "host"
    entrypoint: /akita learn --service your-service
```
The Akita CLI can start a child process while collecting traces. You can use this feature to change the entry point of your container to the Akita CLI, have Akita start your main server process, and then capture a trace for the entire lifetime of that server process. This approach is more intrusive, as it requires changes to your container build process, but it works no matter how the container is run.
To capture traces this way:
- Install the Akita CLI binary in your container at build time, for example in its usual location at `/usr/local/bin/akita`.
- Change the entry point of the container to call the `akita learn` command, with the normal server command line specified using the `-c` option:

```dockerfile
...
ENTRYPOINT ["/usr/local/bin/akita", "learn", "-c", "normal server command line here", "-u", "root", "--service", "your service here"]
```
This command will create a model when the server process exits.
The `learn` command captures a single trace for as long as the Akita Docker container is allowed to run. When the container is stopped, the `learn` command attempts to create an API model from that trace. However, this is not a good fit for continuous monitoring, as no model will be created until the process exits. This section shows two alternatives for getting periodic models.
You can use the `-c` option to `akita learn` to run a subcommand. The learn session will last as long as this command runs (as described in the section above on using the CLI as a wrapper). You can use this functionality to generate periodic traces by running a `sleep` command:

```
... akita learn -c "sleep 3600" -u root ...
```
This command will capture a trace for one hour, then exit. You can run the command in a loop, or have the environment automatically restart the container, to capture traces on a continuous basis. A model will be generated for each hour of trace if you use the `learn` command. To create traces only (and create models by hand), use `akita apidump` instead.
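As a sketch, a minimal wrapper script (using the host-networking invocation from earlier, with placeholder credentials and service name) could restart the capture container in a loop:

```shell
#!/bin/sh
# Restart the one-hour capture container in a loop, so a new trace
# (and, with `learn`, a new model) is produced every hour.
while true; do
  docker run --rm --network host \
    -e AKITA_API_KEY_ID=... \
    -e AKITA_API_KEY_SECRET=... \
    akitasoftware/cli:latest learn \
    --service your-service-name -c "sleep 3600" -u root
done
```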
The Akita cloud can automatically create models from traces that are identified as coming from a production, staging, or test environment. To use this feature, you need to mark the trace with the reserved `x-akita-deployment` tag. The easiest way to do this is to set the `AKITA_DEPLOYMENT` environment variable to a meaningful name for the context in which the trace was collected; we recommend a name such as "staging" or "production". All traces from the same time period that have the same value of the `x-akita-deployment` tag will be combined into a single model.
If you use this feature, it is best to switch from `akita learn` to `akita apidump` so that you do not get multiple models from the same trace.
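For example, a capture container for a staging environment might be started like this (a sketch; the service name and credentials are placeholders):

```shell
# Setting AKITA_DEPLOYMENT applies the reserved x-akita-deployment
# tag, so the Akita cloud can build models from the trace automatically.
docker run --rm --network host \
  -e AKITA_API_KEY_ID=... \
  -e AKITA_API_KEY_SECRET=... \
  -e AKITA_DEPLOYMENT=staging \
  akitasoftware/cli:latest apidump \
  --service your-service-name
```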
When running the Akita CLI in daemon mode (see Staging and Production Express.js via Middleware), the middleware must have access to the Akita daemon, and the Akita daemon must be able to access the Akita cloud services. You can use Docker Compose to ensure that the service being monitored and the Akita CLI are connected to the same network. If the service's network is internal, then the Akita CLI must also be connected to an additional external network. Here is an example Docker Compose file that shows how this can be done:
```yaml
version: '3'
services:
  test:
    container_name: test
    image: test-middleware:latest
    networks:
      - int-network
  akita_daemon:
    container_name: akita_daemon
    image: akitasoftware/cli:0.16.2
    environment:
      - AKITA_API_KEY_ID=apk_xxxxxxxx
      - AKITA_API_KEY_SECRET=xxxxxxx
    networks:
      - int-network
      - ext-network
    entrypoint: /akita --debug daemon --port 50080 --name my-daemon-name
networks:
  ext-network:
    driver: bridge
  int-network:
    driver: bridge
    internal: true
```
In the Express.js middleware configuration, the daemon host would be set to `akita_daemon:50080`. Docker's DNS setup ensures that the correct IP address is used.
Many of the examples above specify the Akita credentials for the container as environment variables. Because the environment variables are set up on the command line, they may be exposed. A better approach is to mount the file containing your Akita credentials into the container. You can do this by using the following flag on the Docker command line:
```shell
docker run ... \
  --volume ~/.akita/credentials.yaml:/root/.akita/credentials.yaml:ro \
  ...
```
This maps your current user's `credentials.yaml` file, created with `akita login`, into the container.
You can also use Docker's `--env-file` argument to specify environment variables in a file instead of on the command line.
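As a sketch, you could place the credentials in a file (here named `akita.env`, a placeholder) and pass it to the container:

```shell
# Create an environment file holding the Akita credentials
# (keep it out of version control):
cat > akita.env <<'EOF'
AKITA_API_KEY_ID=apk_xxxxxxxx
AKITA_API_KEY_SECRET=xxxxxxx
EOF

# Pass the file to the container instead of individual -e flags:
docker run --rm --env-file akita.env \
  akitasoftware/cli:latest learn --service your-service-name
```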
Amazon Elastic Container Service does not support attaching one container directly to another's network stack. This leaves two options: connecting to the host network, or running Akita as an extra process inside your application container(s). We describe the first solution here, as it requires fewer changes to deploy.
The following configuration will create an ECS service that runs at most one Akita capture agent on any EC2 instance.
This configuration has not been tested on AWS Fargate-based ECS; it probably won't work as Fargate doesn't permit host networking.
The following Docker Compose file defines an Akita agent that captures for an hour, then exits. You should fill in your own Akita credentials and the service name you have created in the Akita web console. `AKITA_DEPLOYMENT` is optional, but it is highly recommended that you fill this in with a descriptive name such as "staging" or "production".
```yaml
version: '3'
services:
  akita:
    image: public.ecr.aws/akitasoftware/akita-cli:latest
    environment:
      - AKITA_DEPLOYMENT=test
      - AKITA_API_KEY_ID=apk_XXXXXXXXXX
      - AKITA_API_KEY_SECRET=XXXXXXXXXX
    entrypoint: /akita apidump --service my-service-name -c "sleep 3600" -u root
```
For production use, you may wish to capture the logs using a `logging` section in the definition. You can also omit the `-c "sleep 3600" -u root` arguments to collect a single trace that lasts for the entire lifetime of the container, if regular container restarts cause any operational concern. This configuration uses our public ECR repository, to avoid rate-limiting problems pulling from Dockerhub.
ECS-specific settings go into a separate file (by default called `ecs-params.yaml`). The ones necessary for Akita are:

```yaml
version: 1
task_definition:
  ecs_network_mode: host
run_params:
  task_placement:
    constraints:
      - type: distinctInstance
```
These settings cause the Akita agent to capture all traffic on the host, and ensure that only one Akita container is run per host. An example of creating a service using these definitions is:
```shell
$ ecs-cli compose -p akita-capture -f akita-compose.yaml \
    --ecs-params akita-params.yaml \
    service up --cluster-config <mycluster>
```
This creates a new project named `akita-capture` and configures a service based on the previous two YAML files. The service is initialized with a desired container count of 1.
You can verify that the container has started with `ecs-cli ps`, or view its log output in the UI if you configured a `logging` section. In the Akita web console, you should be able to see a new trace in the Traces tab, or list traces with the `akita get trace` CLI command. You can then wait for an automatically created model (if you specified a value for `AKITA_DEPLOYMENT`), or create one manually from the trace using `akita apispec`.
Once you have verified that traffic is successfully being captured, you can scale up to more capture agents using
```shell
$ ecs-cli compose -p akita-capture service scale NNN
```