Where Observability Meets Structure

Akita is the only observability tool that builds API behaviour models, enabling API-centric system monitoring and automatic detection of breaking changes.

More about how Akita works

With Non-Docker Services

The Akita Client provides an agent to collect API traffic from a local network. The Akita agent runs alongside your service in a virtual machine, a Docker container, or wherever your service lives. To get started quickly, visit Quick Start: Server-Side APIs.

The apidump command tells the Akita Client to begin collecting network traffic. To start, Akita needs to know what network traffic to monitor and which service to associate the monitored traffic with:

  • Service Name - the name of the Service we created in the Akita Cloud.
  • Network Interface - the network interface your Service is sending and receiving data on.
  • Port - the port your service is listening on for connections.

If you don't know these, you may find it helpful to read our FAQ answers about how to figure out the Docker container where your service runs and what port your service runs on.
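
If it helps, here is a rough sketch of how you might look these values up on a Linux host with the Docker CLI available; the exact commands will depend on your environment:

# List running containers along with their published ports:
docker ps --format 'table {{.Names}}\t{{.Ports}}'

# List the network interfaces available on the host:
ip link show

# List listening TCP sockets (run inside the container, or on the host
# for non-Docker services) to find the port your service is bound to:
ss -tlnp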

If you are running in Docker, you will also need your API Key ID, API Key Secret, and the network your container is attached to.
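
As a sketch, assuming your service container is named ${CONTAINER_NAME}, you could list the networks it is attached to with docker inspect:

# Print the name of each network the container is attached to:
docker inspect --format '{{range $net, $conf := .NetworkSettings.Networks}}{{println $net}}{{end}}' ${CONTAINER_NAME}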

With this information, you can start Akita by running the command:

akita apidump \
    --interfaces {network interface} \
    --port {port} \
    --out akita://myService:trace:myTrace

If your service runs in a Docker container, pull the Akita CLI image and run it attached to your container's network instead:

docker pull akitasoftware/cli:<<current_cli_version>> && docker run --rm -it \
  --env AKITA_API_KEY_ID=${KEY_ID} \
  --env AKITA_API_KEY_SECRET=${KEY_SECRET} \
  --net="container:${CONTAINER_NAME}" \
  akitasoftware/cli:<<current_cli_version>> apidump \
    --port {port} \
    --out akita://myService:trace:myTrace

Next, exercise your API, for instance by running integration tests, making cURL requests, or connecting through your web browser. Once you've generated network traffic, hit Ctrl+C to terminate apidump. The Akita Client will remove any payload data and send a trace to the Akita Cloud containing metadata from the observed requests and responses.
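
For example, a few cURL requests along these lines are enough to produce a usable trace; the host, port, and paths here are placeholders for your own API:

# Generate some sample traffic against your own endpoints:
curl http://localhost:{port}/users
curl -X POST http://localhost:{port}/users \
  -H "Content-Type: application/json" \
  -d '{"name": "example"}'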

--out

The argument to --out can be an AkitaURI or a local path pointing to a directory. When you use an AkitaURI, the trace is stored in the Akita Cloud.

When you use a local path, the Akita Client stores traces as HAR files in the directory you specify, with one file per interface, named akita_{interface}.har.
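
For example, both of the following are valid destinations; the local directory name here is just an illustration:

# Stream the trace to the Akita Cloud:
akita apidump --interfaces {network interface} --port {port} --out akita://myService:trace:myTrace

# Write the trace to a local directory instead (e.g. ./trace/akita_eth0.har):
akita apidump --interfaces {network interface} --port {port} --out ./trace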

More Info

You can find more details about the apidump command here.

Test That Akita Saw All Your Endpoints

Once you run Akita, here’s how you can test that your setup worked. First, use apidump to capture traffic to your service and store it in a local trace.

# 1. Start capturing traffic:
akita apidump --filter "port ${YOUR_PORT}" --out path/to/local/dir

# 2. Exercise your service's API.
# 3. Hit Ctrl+C to stop capturing traffic.

If your service runs in Docker, use the CLI container instead and mount a volume so the trace files are accessible outside the container:

# 0. Find your Akita API key, secret, the name of the docker container
#    your service is in, and the port it listens on.  Use these values
#    in the next step.
#
#    Also set the HAR_OUTPUT_DIR environment variable (or replace it in
#    the --volume flag) to specify a directory to store the output.

# 1. Start capturing traffic:
docker run --rm -it \
  --env AKITA_API_KEY_ID=${KEY_ID} \
  --env AKITA_API_KEY_SECRET=${KEY_SECRET} \
  --net="container:${CONTAINER_NAME}" \
  --volume ${HAR_OUTPUT_DIR}:/har \
  akitasoftware/cli:<<current_cli_version>> apidump \
  --filter "port ${YOUR_PORT}" \
  --out /har

# 2. Exercise your service's API.
# 3. Hit Ctrl+C to stop capturing traffic.

📘

Getting HAR Files From Docker

If you run apidump in Docker, then you'll need to mount a volume in order to access the HAR files outside of Docker.

The Docker command above includes the flag --volume ${HAR_OUTPUT_DIR}:/har, which ensures that any files written to /har in the container are accessible at the directory you supply with ${HAR_OUTPUT_DIR} outside the container. Combined with the --out /har flag, this ensures that the HAR files will be accessible from the local file system.
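
After stopping the capture, a quick sanity check is to confirm the HAR files landed where you expect, substituting whichever directory you passed to --out (or to --volume when running in Docker):

# One HAR file should exist per monitored interface:
ls ${HAR_OUTPUT_DIR}/*.har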

Traces are formatted as HAR files, which use a JSON encoding to represent traffic. If you install the jq utility, you can use the following command to list the endpoints in the HAR files.

# Echo unique raw endpoints observed:
for har in path/to/local/dir/*.har; do
  cat ${har} | jq '.log.entries | .[] | .request.url' | sort | uniq
done

You can also print the contents of each request, rather than just the endpoint. Additional information can often be found in the request headers, like the host and port receiving the request.

# Echo requests observed:
for har in path/to/local/dir/*.har; do
  cat ${har} | jq '.log.entries | .[] | .request'
done
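
If you only care about a specific header, such as Host, you can filter it out directly. This is a sketch assuming the standard HAR layout, where headers are recorded as name/value pairs:

# Echo the Host header of each observed request:
for har in path/to/local/dir/*.har; do
  cat ${har} | jq '.log.entries | .[] | .request.headers | .[] | select((.name | ascii_downcase) == "host") | .value'
done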

