Single Host/VM

These instructions cover installing and running the Akita Client on a single host, either bare metal or a virtual machine. There are separate instructions for deploying the client across multiple machines with Kubernetes or Docker.


First Run

If you can deploy your service(s) locally, then running the Akita Client locally is a great way to test out Akita. The client will start capturing traffic on the local network and sending metadata to the Akita Cloud, which will analyze it and build a model of your APIs.

Follow the instructions below to start capturing traffic on your local machine with the Akita Client.

Install the client

The first step is to install the client. We've made the Akita Client available on Linux and macOS, and as a Docker container.
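If you plan to run the client in Docker, you can fetch the image ahead of time. This sketch uses the image name that appears in the capture command later in this guide:

```shell
# Pull the Akita CLI image used in the capture step below.
docker pull akitasoftware/cli:latest
```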

Set up authentication

In order to access the Akita Cloud, you will need to provide your API Key ID and Secret to the CLI. You can do this by using the "login" command or by setting environment variables.


Get an API Key

If you don't have an API key, or if you misplaced the secret, click here for instructions to create a new one.

Login command

Running the login command will prompt you for your API Key ID and API Key Secret, then store them securely in your $HOME directory for future use.

> akita login
API Key ID: apk_0000000000000000000000
API Key Secret: ******************************
Login successful!
API keys stored in ${HOME}/.akita/credentials.yaml

Environment variables

In instances where running the login command is not possible (e.g., a CI/CD pipeline), you can provide your API Key ID and API Key Secret as environment variables.
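As a minimal sketch, assuming the client reads the AKITA_API_KEY_ID and AKITA_API_KEY_SECRET environment variables (the ID variable also appears in the Docker command later on this page):

```shell
# Export the credentials so child processes (including the Akita CLI)
# inherit them. The values here are placeholders; substitute your own.
export AKITA_API_KEY_ID=apk_0000000000000000000000
export AKITA_API_KEY_SECRET=your-api-key-secret
```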


Capture traffic

With everything set up, you're ready to start capturing API traffic and building an API model! For this to work, you'll need to run the Akita client alongside your service.

To start, Akita needs to know what network traffic to monitor and which project to associate the monitored traffic with:

  • Project Name - the name of the project you created in the Akita Cloud.
  • Port (optional) - the port your service listens on.

If your service is running in a Docker container, you will also need to provide your API Key ID, API Key Secret, and the name of the container running your service.


Why specify a port?

Without a port filter, Akita will collect all the traffic crossing your network interfaces. This includes outgoing traffic your service sends to its downstream dependencies (e.g., calls to Datadog APIs, Amazon Web Services, etc.), as well as traffic from other applications running on your machine.

Adding --filter "port {port}" excludes any traffic not sent to or from the specified port.

If you don't know what port your service runs on, you may find it helpful to read our FAQ answers about how to figure out the Docker container where your service runs and what port your service runs on.
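If you're unsure which port your service uses, one way to check is with standard tools (a sketch; "my-service" is a placeholder for your process or container name):

```shell
# List processes listening on TCP ports; replace "my-service" with
# your process name.
sudo lsof -iTCP -sTCP:LISTEN -P -n | grep my-service

# If the service runs in Docker, inspect the containers' published
# ports instead.
docker ps --format '{{.Names}}\t{{.Ports}}'
```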

With this information, you can start Akita by running the command:

akita apidump --project {project name} --filter "port {port}"

# On Linux, capturing packets may require elevated privileges:
sudo akita apidump --project {project name} --filter "port {port}"

# Only use this if your service is running in a Docker container!  If
# you run the Akita agent in Docker, it will only see Docker network
# traffic, not traffic on your host machine.
docker run --rm -it \
  --env AKITA_API_KEY_ID=${KEY_ID} \
  --env AKITA_API_KEY_SECRET=${KEY_SECRET} \
  --net="container:${CONTAINER_NAME}" \
  akitasoftware/cli:latest apidump \
    --project {project name} \
    --filter "port {port}"

After successfully starting the client, you should see output that looks like this:

[INFO] Created new trace on Akita Cloud: akita://akibox:trace:brown-hugger-64f0f9ee
[INFO] Running learn mode on interfaces awdl0, utun0, utun1, en0, llw0, utun3, lo0, utun2
[INFO] Send SIGINT (Ctrl-C) to stop...

The Akita Client is now monitoring traffic. Be sure to send some traffic to your service! Then head to the Akita App, and your first API model will be ready in 3-5 minutes.
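A quick way to generate traffic (a sketch, assuming your service exposes an HTTP endpoint on the port you specified; the /health path is a placeholder):

```shell
# Send a few requests so the client has traffic to observe.
for i in 1 2 3; do
  curl -s "http://localhost:{port}/health" > /dev/null
done
```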
