
Process Exhaustion Experiment


The Process Exhaustion experiment spawns processes on a target to consume its available process IDs (PIDs).

Overview

Process Exhaustion works by creating new operating system processes to consume the process IDs (PIDs) available on the target. Operating systems can allocate only a limited number of PIDs. Once all PIDs are in use, the operating system can no longer start new processes and may become unstable or crash.
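To make the mechanism concrete, here is a minimal Python sketch of the technique: spawn a batch of idle child processes, hold them alive for a window (during which their PIDs are consumed), then terminate and reap them. This is only an illustration, not Gremlin's implementation, and `exhaust_pids` is a hypothetical helper name.

```python
import subprocess
import time

def exhaust_pids(count, hold_seconds=1):
    """Spawn `count` idle child processes so their PIDs stay allocated,
    hold them for `hold_seconds`, then terminate and reap them.
    Illustrative only; Gremlin's agent works differently."""
    # Each `sleep` child occupies one PID until it exits or is killed.
    children = [subprocess.Popen(["sleep", str(hold_seconds + 5)])
                for _ in range(count)]
    try:
        time.sleep(hold_seconds)  # PIDs remain consumed during this window
        return len(children)      # number of PIDs held
    finally:
        for child in children:
            child.kill()
            child.wait()          # reap, so the PIDs are actually released

print(exhaust_pids(5))
```

Reaping the children in the `finally` block matters: a killed-but-unreaped child lingers as a zombie and still occupies its PID.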

Linux

Gremlin uses Linux cgroups to retrieve process information. This is the same method used to run container experiments.
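You can inspect the relevant limits yourself. The commands below read the system-wide PID ceiling and, where present, the cgroup v2 pids controller; the cgroup path shown is an assumption and varies by distribution and container runtime.

```shell
# System-wide ceiling on PIDs:
cat /proc/sys/kernel/pid_max

# Per-cgroup limit and current usage (cgroup v2; path varies by setup):
if [ -f /sys/fs/cgroup/pids.max ]; then
  cat /sys/fs/cgroup/pids.max      # "max" or a number
  cat /sys/fs/cgroup/pids.current  # PIDs currently in use by this cgroup
fi
```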

Options

| Parameter | Flag | Required | Default | Agent Version | Description |
| --- | --- | --- | --- | --- | --- |
| Allocation strategy | `-s <absolute, total>` | False | absolute | | Determines how Gremlin interprets the `--processes` and `--percent` arguments. The value `absolute` tells Gremlin to consume the specified process allocation, regardless of how many processes are already running. The value `total` tells Gremlin to allocate only what is needed to bring the entire system up to the specified target allocation. When unspecified, Gremlin uses `absolute`. |
| Length | `-l int` | False | 60 | | The length of the experiment in seconds. |
| Percent | `-p <1-100>` | False | 1 | | The percentage of maximum processes to allocate. |
| Processes | `-n int` | False | | | The number of processes to allocate. |

Note: If both `percent` and `processes` are defined, Gremlin defaults to `percent`.

Troubleshooting

If you receive an error message while trying to run this experiment, check to see if it's listed below.

apply caps: operation not permitted: This error occurs when the Kubernetes agent doesn't have the Linux capability SYS_RESOURCE enabled. See our security page for details on which capabilities are required.

Resource temporarily unavailable (os error 11): This error indicates we've exceeded the process limit for the host. Try reducing the number of processes created.

Attack interrupted by the OOMKiller. Target state is exited, OOMKiller killed target.: This error occurs when we've exceeded the memory available to a Kubernetes target (e.g. a pod). If this error message appears, Kubernetes will terminate the target container.

Process exhaustion triggers the OOM (out of memory) killer to terminate a Pod

When running Process Exhaustion against a container or Kubernetes resource, if the container runs out of memory, the system's OOM (out of memory) killer will terminate the container. This indicates that the experiment is exceeding the number of processes the target can manage reliably, and the OOM killer is acting as a resilience mechanism.
