Customize Kubernetes Host Networking
This page will walk you through various options for customizing your Akita Agent on a host network in Kubernetes.
If you are just getting started with Kubernetes host networking, check out our getting started instructions.
If you are looking to install the Akita Agent on a single staging or production Kubernetes service, check out our Single-Service Kubernetes page.
Below you will find instructions for:
- Rate limiting the Akita Agent
- Tagging your Akita Agents
- Removing Kubernetes API traffic from your captures
- Creating a pod definition to run the Akita Agent
This guide assumes that you:
- Have an account in the Akita beta
- Have created a project
- Have generated an API key ID and secret
- Are using the Akita Agent on a Kubernetes host network
Rate limiting
If you would like to rate limit the Akita Agent, set a per-node rate limit by using the `--rate-limit` flag in your `daemonset.yaml` file.
In the example below, we set a rate limit of 200 requests per minute. If the rate of packet captures is higher than this, the Akita Agent will start statistically sampling the packets it sees.
```yaml
spec:
  containers:
  - image: akitasoftware/cli:latest
    imagePullPolicy: Always
    name: akita
    args:
    - apidump
    - --project
    - <your project name here>
    - --rate-limit
    - "200"
```
Tagging
You can add tags to help you identify the source of the incoming data. Use the `--tags` argument with `akita apidump` and include a comma-separated list of `key=value` pairs.
```yaml
args:
  ...
  - --tags
  - owner=mark,namespace=staging
```
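In context, the tag arguments go in the same `args` list as the `apidump` example above:

```yaml
args:
  - apidump
  - --project
  - <your project name here>
  - --tags
  - owner=mark,namespace=staging
```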
To tag deployments, use separate projects to keep data from test, staging, and production in separate models. See Tag Your Traffic with Versions.
Tagging traces and specs with Kubernetes information
You can use the `--tags` argument with `akita apidump` to specify key-value pairs that help you identify the source of a trace. The Akita Agent (in versions 0.16.1 and later) also recognizes the following environment variables and converts them into reserved tags. You can make this information available to the Akita container via the Kubernetes Downward API.
All of the following tags are optional, and some have support in the Akita web console.
| Environment variable | Tag | Suggested use |
|---|---|---|
| `AKITA_DEPLOYMENT` | `x-akita-deployment` | Identify the Kubernetes cluster, for example as "staging" or "production". DEPRECATED: use a separate project instead. |
| `AKITA_DEPLOYMENT_COMMIT` | `x-akita-git-commit` | Records a Git commit hash; can be used to record the deployed version for a monorepo, or the last Terraform commit. |
| `AKITA_AWS_REGION` | `x-akita-aws-region` | Amazon Web Services region in which the cluster is running, if applicable. |
| `AKITA_K8S_NAMESPACE` | `x-akita-kubernetes-namespace` | Kubernetes namespace in which the capture agent is running. |
| `AKITA_K8S_NODE` | `x-akita-kubernetes-node` | Identifier of the node on which the capture agent is running. |
| `AKITA_K8S_HOST_IP` | `x-akita-kubernetes-host-ip` | Host IP of the node. |
| `AKITA_K8S_POD` | `x-akita-kubernetes-pod` | Name of the pod that captured the trace. |
| `AKITA_K8S_POD_IP` | `x-akita-kubernetes-pod-ip` | IP address of the pod. |
| `AKITA_K8S_DAEMONSET` | `x-akita-kubernetes-daemonset` | Name of the DaemonSet used to create the pod. |
Adding the following code to the `env` section of the DaemonSet or pod definition will extract all of the available information from the Kubernetes Downward API. The other variables listed in the table would need to be set to static values, or filled in at creation time, for example by a Terraform module.
```yaml
env:
- name: AKITA_K8S_NODE
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: AKITA_K8S_POD
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: AKITA_K8S_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: AKITA_K8S_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: AKITA_K8S_HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
```
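The variables that are not available from the Downward API can be set as static values in the same `env` list. A minimal sketch, where the region and commit values are placeholders you would fill in yourself (for example, from CI or a Terraform module):

```yaml
env:
  ...
  - name: AKITA_AWS_REGION
    value: us-west-2               # placeholder: your cluster's AWS region
  - name: AKITA_DEPLOYMENT_COMMIT
    value: <git commit hash here>  # placeholder: filled in by CI or Terraform
```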
Remove APIs
You may want to filter out the Kubernetes control traffic itself, which you can do by adding `--filter "not port 80"` to the `akita learn` command line:
```yaml
args:
  ...
  - --filter
  - not port 80
```
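The `--filter` value is a standard packet-capture (BPF) expression, so conditions can be combined. As a sketch, assuming your own services use neither port, you could exclude traffic on both port 80 and port 443:

```yaml
args:
  ...
  - --filter
  - not port 80 and not port 443
```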
Create pod definition
If you want to look at only a particular service or a particular node, directly create a single pod using the example shown below. This YAML file creates a single pod named `akita-capture`, which collects API traces from a service running on port 50100 for 10 minutes:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: akita-capture
spec:
  containers:
  - image: akitasoftware/cli:latest
    name: akita
    args:
    - learn
    - --filter
    - dst port 50100
    - -c
    - sleep 600
    - -u
    - root
    - --project
    - my-project-name
    env:
    - name: AKITA_API_KEY_ID
      valueFrom:
        secretKeyRef:
          name: akita-secrets
          key: api-key-id
    - name: AKITA_API_KEY_SECRET
      valueFrom:
        secretKeyRef:
          name: akita-secrets
          key: api-key-secret
  dnsPolicy: ClusterFirst
  hostNetwork: true
  restartPolicy: Never
```
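The pod above reads its credentials from a Kubernetes Secret named `akita-secrets`. If you have not already created one, a minimal sketch of a matching Secret manifest follows; the key names must match the `secretKeyRef` entries above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: akita-secrets
type: Opaque
stringData:
  api-key-id: <your API key ID here>          # placeholder: your Akita API key ID
  api-key-secret: <your API key secret here>  # placeholder: your Akita API key secret
```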
This example uses normal Kubernetes scheduling to decide where to run the new pod. To control which host is used, name the target node directly in the spec, like this:
```yaml
spec:
  nodeName: your-node-name
```
Alternatively, use a pod affinity rule, as in the sketch below.
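This is not taken from the Akita examples, but a minimal sketch of a standard Kubernetes pod affinity rule that schedules the capture pod onto the same node as the service you want to observe; the `app: your-service` label is hypothetical and should match your own service's pod labels:

```yaml
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: your-service   # hypothetical label; match your service's pods
        topologyKey: kubernetes.io/hostname
```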
Once the collection has finished, you will have to delete the pod manually.
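For example: `kubectl delete pod akita-capture`.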