
StatefulSet

Kubernetes
A StatefulSet manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
Terraform Name
kubernetes_stateful_set

Arguments

  • metadata - (Required) Standard Kubernetes object metadata. For more info see Kubernetes reference
  • spec - (Required) Spec defines the specification of the desired behavior of the stateful set. For more info see Kubernetes reference
  • wait_for_rollout - (Optional) Wait for the StatefulSet to finish rolling out. Defaults to true.
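
A minimal sketch of how these arguments fit together (the resource, Service, and label names here are illustrative, not prescribed):

resource "kubernetes_stateful_set" "example" {
  metadata {
    name = "example"
  }

  spec {
    # Assumes a headless Service named "example" already exists.
    service_name = "example"
    replicas     = 1

    selector {
      match_labels = {
        app = "example"
      }
    }

    template {
      metadata {
        labels = {
          app = "example"
        }
      }

      spec {
        container {
          name  = "app"
          image = "nginx:1.25"
        }
      }
    }
  }

  # Return once the object is created instead of blocking until all replicas are Ready.
  wait_for_rollout = false
}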

Nested Blocks

metadata

Arguments

  • annotations - (Optional) An unstructured key value map stored with the stateful set that may be used to store arbitrary metadata.

Note

By default, the provider ignores any annotations whose key names end with kubernetes.io. This is necessary because such annotations can be mutated by server-side components and consequently cause a perpetual diff in the Terraform plan output. If you explicitly specify any such annotations in the configuration template then Terraform will consider these as normal resource attributes and manage them as expected (while still avoiding the perpetual diff problem). For more info see Kubernetes reference

  • generate_name - (Optional) Prefix, used by the server, to generate a unique name ONLY IF the name field has not been provided. This value will also be combined with a unique suffix. For more info see Kubernetes reference
  • labels - (Optional) Map of string keys and values that can be used to organize and categorize (scope and select) the stateful set. Must match selector.

Note

By default, the provider ignores any labels whose key names end with kubernetes.io. This is necessary because such labels can be mutated by server-side components and consequently cause a perpetual diff in the Terraform plan output. If you explicitly specify any such labels in the configuration template then Terraform will consider these as normal resource attributes and manage them as expected (while still avoiding the perpetual diff problem). For more info see Kubernetes reference

  • name - (Optional) Name of the stateful set, must be unique. Cannot be updated. For more info see Kubernetes reference
  • namespace - (Optional) Namespace defines the space within which name of the stateful set must be unique.

Attributes

  • generation - A sequence number representing a specific generation of the desired state.
  • resource_version - An opaque value that represents the internal version of this stateful set that can be used by clients to determine when the stateful set has changed. For more info see Kubernetes reference
  • uid - The unique in time and space value for this stateful set. For more info see Kubernetes reference

spec

Arguments

  • pod_management_policy - (Optional) podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady, where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once. Changing this forces a new resource to be created.
  • replicas - (Optional) The desired number of replicas of the given Template. These are replicas in the sense that they are instantiations of the same Template, but individual replicas also have a consistent identity. If unspecified, defaults to 1. This attribute is a string to be able to distinguish between explicit zero and not specified.
  • revision_history_limit - (Optional) The maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10. Changing this forces a new resource to be created.
  • selector - (Required) A label query over pods that should match the replica count. It must match the pod template's labels. Changing this forces a new resource to be created. More info: Kubernetes reference
  • service_name - (Required) The name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller. Changing this forces a new resource to be created.
  • template - (Required) The object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet.
  • update_strategy - (Optional) Indicates the StatefulSet update strategy that will be employed to update Pods in the StatefulSet when a revision is made to Template.
  • volume_claim_template - (Optional) A list of volume claims that pods are allowed to reference. A claim in this list takes precedence over any volumes in the template, with the same name. Changing this forces a new resource to be created.
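
Because replicas is stored as a string, an explicit "0" (scale to zero) is distinct from leaving the argument unset. A brief sketch of these arguments inside a spec block (selector, service_name and template omitted for brevity):

spec {
  # Explicit "0" scales the set down to zero Pods; leaving replicas unset
  # lets it default to 1.
  replicas = "0"

  # Create and delete Pods in parallel instead of one ordinal at a time.
  pod_management_policy  = "Parallel"
  revision_history_limit = 5

  # selector, service_name and template are still required; see the full example below.
}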

Nested Blocks

spec.template

Arguments

Nested Blocks

spec.template.spec

Arguments

These arguments are the same as those for the spec block of a Pod.

Please see the Pod resource for reference.

Nested Blocks

spec.update_strategy

Arguments

  • type - (Optional) Indicates the type of the StatefulSetUpdateStrategy. There are two valid update strategies, RollingUpdate and OnDelete. Default is RollingUpdate.
  • rolling_update - (Optional) The RollingUpdate update strategy will update all Pods in a StatefulSet, in reverse ordinal order, while respecting the StatefulSet guarantees.

spec.update_strategy.rolling_update

Arguments

  • partition - (Optional) Indicates the ordinal at which the StatefulSet should be partitioned. You can perform a phased roll out (e.g. a linear, geometric, or exponential roll out) using a partitioned rolling update in a similar manner to how you rolled out a canary. To perform a phased roll out, set the partition to the ordinal at which you want the controller to pause the update. By setting the partition to 0, you allow the StatefulSet controller to continue the update process. Default value is 0.
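
For example, the following update_strategy block (placed inside spec; a replica count of 5 is assumed for illustration) pauses a rolling update at ordinal 3:

update_strategy {
  type = "RollingUpdate"

  rolling_update {
    # Only Pods with an ordinal >= 3 (pod-3, pod-4) receive the new revision.
    # Lower the partition to 0 to let the controller finish the roll out.
    partition = 3
  }
}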

Nested Blocks

spec.volume_claim_template

One or more volume_claim_template blocks can be specified.

Arguments

Each takes the same attributes as a kubernetes_persistent_volume_claim resource.

Please see its documentation for reference.
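
As a sketch (the claim name "data" and the StorageClass "standard" are assumptions), each replica gets its own PersistentVolumeClaim created from the template, named <claim>-<statefulset>-<ordinal>, and the Pod template references it by the claim name in a volume_mount:

volume_claim_template {
  metadata {
    name = "data" # must match the volume_mount name in the Pod template
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "standard" # assumes a StorageClass named "standard" exists

    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}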

Timeouts

The following Timeout configuration options are available for the kubernetes_stateful_set resource:

  • create - (Default 10 minutes) Used when creating a new StatefulSet
  • read - (Default 10 minutes) Used when reading a StatefulSet
  • update - (Default 10 minutes) Used when updating a StatefulSet
  • delete - (Default 10 minutes) Used when destroying a StatefulSet
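
These are set with a standard Terraform timeouts block on the resource; a sketch with values chosen for illustration:

resource "kubernetes_stateful_set" "example" {
  # metadata and spec omitted for brevity

  timeouts {
    create = "20m"
    update = "20m"
    delete = "15m"
  }
}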

Create a StatefulSet via Terraform

The following HCL demonstrates the components of a StatefulSet.

Syntax:

resource "kubernetes_stateful_set" "prometheus" {
 metadata {
   annotations = {
     SomeAnnotation = "foobar"
   }

   labels = {
     k8s-app                           = "prometheus"
     "kubernetes.io/cluster-service"   = "true"
     "addonmanager.kubernetes.io/mode" = "Reconcile"
     version                           = "v2.2.1"
   }

   name = "prometheus"
 }

 spec {
   pod_management_policy  = "Parallel"
   replicas               = 1
   revision_history_limit = 5

   selector {
     match_labels = {
       k8s-app = "prometheus"
     }
   }

   service_name = "prometheus"

   template {
     metadata {
       labels = {
         k8s-app = "prometheus"
       }

       annotations = {}
     }

     spec {
       service_account_name = "prometheus"

       init_container {
         name              = "init-chown-data"
         image             = "busybox:latest"
         image_pull_policy = "IfNotPresent"
         command           = ["chown", "-R", "65534:65534", "/data"]

         volume_mount {
           name       = "prometheus-data"
           mount_path = "/data"
           sub_path   = ""
         }
       }

       container {
         name              = "prometheus-server-configmap-reload"
         image             = "jimmidyson/configmap-reload:v0.1"
         image_pull_policy = "IfNotPresent"

         args = [
           "--volume-dir=/etc/config",
           "--webhook-url=http://localhost:9090/-/reload",
         ]

         volume_mount {
           name       = "config-volume"
           mount_path = "/etc/config"
           read_only  = true
         }

         resources {
           limits = {
             cpu    = "10m"
             memory = "10Mi"
           }

           requests = {
             cpu    = "10m"
             memory = "10Mi"
           }
         }
       }

       container {
         name              = "prometheus-server"
         image             = "prom/prometheus:v2.2.1"
         image_pull_policy = "IfNotPresent"

         args = [
           "--config.file=/etc/config/prometheus.yml",
           "--storage.tsdb.path=/data",
           "--web.console.libraries=/etc/prometheus/console_libraries",
           "--web.console.templates=/etc/prometheus/consoles",
           "--web.enable-lifecycle",
         ]

         port {
           container_port = 9090
         }

         resources {
           limits = {
             cpu    = "200m"
             memory = "1000Mi"
           }

           requests = {
             cpu    = "200m"
             memory = "1000Mi"
           }
         }

         volume_mount {
           name       = "config-volume"
           mount_path = "/etc/config"
         }

         volume_mount {
           name       = "prometheus-data"
           mount_path = "/data"
           sub_path   = ""
         }

         readiness_probe {
           http_get {
             path = "/-/ready"
             port = 9090
           }

           initial_delay_seconds = 30
           timeout_seconds       = 30
         }

         liveness_probe {
           http_get {
             path   = "/-/healthy"
             port   = 9090
             scheme = "HTTPS"
           }

           initial_delay_seconds = 30
           timeout_seconds       = 30
         }
       }

       termination_grace_period_seconds = 300

       volume {
         name = "config-volume"

         config_map {
           name = "prometheus-config"
         }
       }
     }
   }

   update_strategy {
     type = "RollingUpdate"

     rolling_update {
       partition = 1
     }
   }

   volume_claim_template {
     metadata {
       name = "prometheus-data"
     }

     spec {
       access_modes       = ["ReadWriteOnce"]
       storage_class_name = "standard"

       resources {
         requests = {
           storage = "16Gi"
         }
       }
     }
   }
 }
}

Create a StatefulSet via CLI

Parameters:

apiVersion: apps/v1
kind: StatefulSet
metadata:
 name: web
spec:
 selector:
   matchLabels:
     app: nginx # has to match .spec.template.metadata.labels
 serviceName: "nginx"
 replicas: 3 # by default is 1
 minReadySeconds: 10 # by default is 0
 template:
   metadata:
     labels:
       app: nginx # has to match .spec.selector.matchLabels
   spec:
     terminationGracePeriodSeconds: 10
     containers:
     - name: nginx
       image: registry.k8s.io/nginx-slim:0.8
       ports:
       - containerPort: 80
         name: web
       volumeMounts:
       - name: www
         mountPath: /usr/share/nginx/html
 volumeClaimTemplates:
 - metadata:
     name: www
   spec:
     accessModes: [ "ReadWriteOnce" ]
     storageClassName: "my-storage-class"
     resources:
       requests:
         storage: 1Gi

Example:

kubectl apply -f statefulset.yaml
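
After applying, you can watch the rollout and inspect the per-Pod identities (the names below assume the web/nginx manifest above):

# Wait for the controller to bring all replicas to Ready, in ordinal order.
kubectl rollout status statefulset/web

# Ordinal-suffixed Pods (web-0, web-1, web-2) and their per-Pod claims (www-web-0, ...).
kubectl get pods -l app=nginx
kubectl get pvc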

Best Practices for StatefulSet

Categorized by Availability, Security & Compliance and Cost

  • Low - Access allowed from VPN
  • Low - Auto Scaling Group not in use
  • Medium - Connections towards DynamoDB should be via VPC endpoints
  • Medium - Container in CrashLoopBackOff state
  • Low - EC2 with GPU capabilities
  • Medium - EC2 with high privileged policies
  • Medium - ECS cluster delete alarm
  • Critical - ECS task with Admin access (*:*)
  • Medium - ECS task with high privileged policies
  • Critical - EKS cluster delete alarm
  • Medium - ElastiCache cluster delete alarm
  • Medium - Ensure Container liveness probe is configured
  • Medium - Ensure ECS task definition has memory limit
  • Critical - Ensure EMR cluster master nodes are not publicly accessible