This page will walk you through various options for customizing your Akita Agent on a host network in Kubernetes.
If you are just getting started with Kubernetes host networking, check out our getting started instructions.
If you are looking to install the Akita Agent on a single staging or production Kubernetes service, check out our Single-Service Kubernetes page.
Below you will find instructions for:
- Rate limiting the Akita Agent
- Tagging your Akita Agents
- Removing Kubernetes APIs
- Creating a pod definition to run the Akita Agent
This guide assumes that you:
- Are in the Akita beta
- Have set up an Akita account
- Have created a project
- Have generated an API key ID and secret
- Are using the Akita Agent on a Kubernetes host network
To rate limit the Akita Agent, set a per-node limit with the `--rate-limit` flag in your Daemonset's container arguments.

In the example below, we set a rate limit of 200 requests/minute. If the rate of packet captures is higher than this, the Akita Agent will start performing statistical sampling of the packets it sees.
```yaml
spec:
  containers:
    - image: akitasoftware/cli:latest
      imagePullPolicy: Always
      name: akita
      args:
        - apidump
        - --project
        - <your project name here>
        - --rate-limit
        - "200"
```
You can add tags to help you identify the source of the incoming data. Use the `--tags` argument with `akita apidump` and include a comma-separated list of key=value pairs:
```yaml
args:
  ...
  - --tags
  - owner=mark,namespace=staging
```
To keep data from test, staging, and production in different models, use separate projects for each deployment. See Tag Your Traffic with Versions.
You can use the `--tags` argument with `akita apispec` to specify key-value pairs that help you identify the source of a trace. The Akita Agent (versions 0.16.1 and later) also recognizes the following environment variables and converts them into reserved tags. You can make this information available to the Akita container via the Kubernetes Downward API.
All of the following tags are optional, and some have support in the Akita web console.

- Identifies the Kubernetes cluster, for example as "staging" or "deployment". DEPRECATED: use a separate project instead.
- Records a Git commit hash; can be used to record the deployed version for a monorepo, or the last Terraform commit.
- Amazon Web Services region in which the cluster is running, if applicable.
- `AKITA_K8S_NAMESPACE`: Kubernetes namespace in which the capture agent is running.
- `AKITA_K8S_NODE`: identifier of the node on which the capture agent is running.
- `AKITA_K8S_HOST_IP`: host IP of the node.
- `AKITA_K8S_POD`: name of the pod which captured the trace.
- `AKITA_K8S_POD_IP`: IP address of the pod.
- Name of the Daemonset used to create the pod.
Adding the following code to the `env` section of the Daemonset or pod definition will extract all the available information from the Kubernetes Downward API. The other variables listed above would need to be configured with static values, or filled in at creation time by a Terraform module.
```yaml
env:
  - name: AKITA_K8S_NODE
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: AKITA_K8S_POD
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: AKITA_K8S_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: AKITA_K8S_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: AKITA_K8S_HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
```
You may want to filter out the Kubernetes control traffic itself, which you can do by adding `--filter "not port 80"` to the `akita learn` command line:
```yaml
args:
  ...
  - --filter
  - not port 80
```
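The `--filter` flag takes a BPF (Berkeley Packet Filter) expression, so conditions can be combined with `and`, `or`, and `not`. As a sketch, if your control traffic also uses port 443, you could exclude both ports at once (the port numbers here are illustrative; adjust them to your cluster):

```yaml
args:
  ...
  - --filter
  - not port 80 and not port 443
```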
If you want to look at only a particular service or a particular node, you can create a single pod directly, using the example shown below. This YAML file creates a single pod named `akita-capture`, which collects API traces from a service running on port 50100 for 10 minutes:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: akita-capture
spec:
  containers:
    - image: akitasoftware/cli:latest
      name: akita
      args:
        - learn
        - --filter
        - dst port 50100
        - -c
        - sleep 600
        - -u
        - root
        - --project
        - my-project-name
      env:
        - name: AKITA_API_KEY_ID
          valueFrom:
            secretKeyRef:
              name: akita-secrets
              key: api-key-id
        - name: AKITA_API_KEY_SECRET
          valueFrom:
            secretKeyRef:
              name: akita-secrets
              key: api-key-secret
  dnsPolicy: ClusterFirst
  hostNetwork: true
  restartPolicy: Never
```
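The pod above reads its credentials from a Secret named `akita-secrets` with keys `api-key-id` and `api-key-secret`. One way to create that Secret is with a manifest like the following sketch; replace the placeholder values with your own API key ID and secret (using `stringData` lets you supply them as plain strings rather than base64):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: akita-secrets
type: Opaque
stringData:
  api-key-id: <your API key ID here>
  api-key-secret: <your API key secret here>
```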
This example uses normal Kubernetes scheduling to decide where to run the new pod. To pin the pod to a specific host, set `nodeName` in the pod spec, like this:
```yaml
spec:
  nodeName: your-node-name
```
or use a pod affinity rule.
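As a sketch of the affinity approach, the rule below co-locates the capture pod on the same node as the pods of the service being traced. The label `app: my-service` is a hypothetical example; substitute whatever labels your target service's pods actually carry:

```yaml
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-service
          topologyKey: kubernetes.io/hostname
```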
Once the collection has finished, you will have to delete the pod manually.