IAM roles for service accounts provide a secure and efficient way to manage access to cloud resources in a cloud environment. By assigning roles to service accounts instead of individual users, organizations can improve their security posture by minimizing the risk of human error or credential misuse.
Setting the correct IAM configuration is key to managing access control for Kubernetes on AWS (using EKS).
To ensure a secure and healthy interaction with Kubernetes, several elements need to co-exist within a cluster’s configuration. Kubernetes doesn’t provide access randomly to everyone. Instead, it grants secure access by relying on three main cornerstones:
• Authentication
• Authorization
• TLS/SSL Communication
To get a well-rounded perspective on Kubernetes security, let’s detail each of these cornerstones.
Authentication focuses on securing access to the Kubernetes cluster. This step acts as a security barrier that allows a Kubernetes cluster to interact only with recognized entities. Authentication is the process through which K8s identifies and acknowledges the entities wishing to reach the cluster’s resources. This communication is mainly established via the kube-API server. Each time the kube-API server receives a request, it first authenticates the sender. If the sender is recognized, the request is then validated in light of the permissions granted to the requester.
In this article, we will mainly focus on authentication for entities that access Kubernetes clusters for administrative tasks. In other words, we will set aside the authentication of end users accessing the applications running on Kubernetes, since that type of authentication is generally performed by the application itself. This leaves us with two main types of users: human users, such as administrators and developers, and service accounts for processes and services. From this same perspective, we will highlight how each of them authenticates.
After defining those who may reach the API, it’s crucial to specify the scope of freedom for each identity. Through this step, Kubernetes administrators can grant granular permissions to cluster users by using a wide range of possible policies.
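As an illustration, Kubernetes expresses such granular permissions natively through RBAC Roles bound to identities. A minimal sketch for a hypothetical user jane who may only read pods in the default namespace could look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane               # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Binding the Role to the user confines that identity to read-only operations on pods and nothing more.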
In order to build well-founded communication, each entity within the cluster needs to have its own certificate. This element is signed by a trusted Certificate Authority and it is used to prove the identity of the owner.
In fact, certificates are mainly created to establish a trusted bonding between the identity of the owner and the cryptography keys used to secure traffic with other peers. All the main components of both the control plane and worker nodes have dedicated certificates issued by Certificate Authorities (CAs).
To authenticate to the kube-API server, there are four main mechanisms that can be used:
• Certificates
• Static password files
• Static token files
• Bearer tokens
This method takes advantage of SSL/TLS certificates to verify the identity of interacting parties. A client certificate contains identifying fields such as the CN (Common Name), O (Organization), DNS names, and IP addresses.
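For a human user, a key pair and signing request can be produced with plain OpenSSL; a minimal sketch for a hypothetical user jane in a dev group (Kubernetes maps the certificate’s CN to the username and O to the group):

```shell
# Generate a 2048-bit private key for the user
openssl genrsa -out jane.key 2048
# Create a CSR carrying the identity fields (CN = username, O = group)
openssl req -new -key jane.key -subj "/CN=jane/O=dev" -out jane.csr
# Inspect the subject embedded in the CSR
openssl req -in jane.csr -noout -subject
```

The resulting jane.csr is what gets submitted to the cluster CA for signing.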
To obtain an X.509 certificate, the entity concerned needs to submit a Certificate Signing Request (CSR) to a Certificate Authority (CA). If you are using cert-manager with HashiCorp Vault, you can create a Certificate request as follows:
$ cat > lightlytics-cert.yaml <<EOF
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: lightlytics
  namespace: default
spec:
  secretName: lightlytics-tls
  issuerRef:
    name: vault-issuer
  commonName: www.lightlytics.com
  dnsNames:
  - www.lightlytics.com
EOF
Kubernetes administrators can take advantage of external PKIs to manage the Kubernetes cluster’s TLS certificates. These external PKIs can be national digital-certification agencies, such as the Federal Public Key Infrastructure (FPKI) in the United States and the National Digital Certification Agency TunTrust in Tunisia. Nowadays, there are also open-source tools, such as HashiCorp Vault and cert-manager, that can be used to efficiently manage TLS certificates for the cluster.
This method is considered the easiest strategy. A CSV file is defined as the source of user information for the kube-apiserver. Each line of the file holds a user’s password, username, and user ID, and optionally the group to which the user belongs.
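As a sketch, a my-users.csv file could look like the following; the columns are password, username, user ID, and an optional group (all values hypothetical):

```csv
UserPassword@0000,userNumber01,u0001,dev-team
AdminPass@1111,userNumber02,u0002,"system:masters"
```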
The path to this static file needs to be passed to the kube-apiserver so that it can read the file and validate user details.
When your cluster is installed from scratch, the kube-apiserver is installed from binaries and runs as a service. To define the static password file, you have to edit the service configuration file:
$ vim /etc/systemd/system/kube-apiserver.service
Then, you need to define the static password file (for example, my-users.csv) by adding the following line:
--basic-auth-file=my-users.csv
Note that static password authentication was deprecated in Kubernetes v1.16 and removed in v1.19, so this mechanism applies only to older clusters.
When you have a Kubernetes cluster installed by using the Kubeadm tool, you can simply add the previous file definition line in the kube-apiserver manifest in the list of container commands under the specification section:
$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
In the same way, we can define static files hosting tokens instead of usernames and passwords. The tokens are stored in a CSV file that should be referenced in the previously mentioned kube-apiserver service configuration or manifest by using the parameter:
--token-auth-file=user-tokens.csv
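A user-tokens.csv file follows a similar shape, with a bearer token, a username, a user ID, and optional groups per line (all values hypothetical):

```csv
k8s-token-0123456789abcdef,userNumber01,u0001,dev-team
```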
We will get back to a deeper discussion about the Kubernetes Token generation process later in this article.
Pro Tip:
It’s possible to insert user credentials (usernames, passwords, tokens, etc.) within the connection command itself to authenticate, as follows:
$ curl -k -v https://master-node-ip:6443/api/v1/nodes -u "userNumber01:UserPassword@0000"
$ curl -k -v https://master-node-ip:6443/api/v1/nodes --header "Authorization: Bearer XbNWmjosfjBjOppkZAgtd..."
Although this approach works perfectly to authenticate users (such as "userNumber01") with their corresponding passwords ("UserPassword@0000"), we highly recommend avoiding it. Typing credentials in plain text on the command line can expose them to theft: anyone with access to the shell history can view the credentials and take advantage of them.
To enable authentication in Kubernetes based on existing identities, you can integrate a recognized identity management solution.
If you are running your cluster on a public cloud, the provider usually offers its own identity management service. It’s always recommended that Kubernetes administrators integrate their cluster with this co-existing identity service to optimize the time and energy spent on this task.
From this same perspective, we will highlight the usage of AWS Identity and Access Management (IAM) to manage identities within the perimeter of Amazon VPCs. An alternative to this solution is Azure AD, the Active Directory identity management service provided by Microsoft.
As we have already mentioned, a Kubernetes cluster can be used either by a human being, such as an administrator or a developer, or by a service built to interact with the cluster components. Unlike humans, who can type their credentials by hand to authenticate, bots rely on API requests to the kube-API server. This is why we use a Kubernetes service account to provide a reliable authentication approach for processes and services.
A useful example of a service accessing the Kubernetes administrative APIs is an application that needs to retrieve the cluster's info and display it on a web interface.
To do this, the application needs to access the K8s cluster via the kube-API server, get authenticated and authorized, then it will be granted access to perform the cluster info retrieval.
First and foremost, we have to create the service account dedicated to this application called cluster-infos-sa:
$ kubectl create serviceaccount cluster-infos-sa
Once the service account is created, you can display the list of available service accounts with the command:
$ kubectl get serviceaccount
You can inspect the full description of the created service account with the command:
$ kubectl describe serviceaccount cluster-infos-sa
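For the application’s pods to authenticate as this service account, the pod template must reference it. A minimal sketch, where the image name and labels are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-infos-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-infos
  template:
    metadata:
      labels:
        app: cluster-infos
    spec:
      serviceAccountName: cluster-infos-sa  # pods mount this account's token
      containers:
      - name: web
        image: cluster-infos-web:latest     # hypothetical image
```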
There is a wide range of public cloud providers that you can consider while migrating your infrastructure. Amazon Web Services is one of the leading cloud providers in the market nowadays. Other available options are Microsoft Azure, Google Cloud, etc. To see a very pertinent and unbiased comparison between these providers, you can inspect the Gartner Magic Quadrant.
In this article, we will highlight a use case that relies on identity and access management by AWS. We selected this public cloud provider for its undeniable range of innovative technologies. Furthermore, AWS was named a Leader in the 2022 Gartner Cloud Infrastructure and Platform Services (CIPS) Magic Quadrant for the 12th consecutive year, a classification based on two main factors: completeness of vision and ability to execute.
As its name implies, AWS Identity and Access Management (IAM) is the built-in service responsible for managing identities and granting access to entities within the perimeter of the Amazon cloud. In fact, this service provides an answer for each part of the question: who can access what?
• Who: the users and applications that need to authenticate to AWS.
• Can access: the granular permissions granted to authenticated entities, established by attaching specific policies (AWS-managed, customer-managed, or inline policies).
• What: the AWS resources that authorized entities are allowed to act on.
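As an illustration of the “can access what” part, a minimal customer-managed policy (bucket name hypothetical) that only allows listing one bucket would be:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListExampleBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket"
    }
  ]
}
```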
Considering the identities supported by AWS, we can distinguish two main categories. Identities can be allocated to:
- Human users: capable of authenticating by entering their credentials. They generally have an IAM user with a username/password, or an access key in the case of programmatic access. They may also use an additional authentication token for multi-factor authentication.
- Processes and services: applications cannot authenticate to resources using the traditional credentials humans rely on. They mainly use APIs along with certificates and tokens to authenticate to other resources. This is why we generally create roles dedicated to applications.
From this same perspective, authenticating applications hosted as pods in Kubernetes clusters calls for the use of AWS roles. Kubernetes applications authenticate to external entities via service accounts.
In the next section, we will dive deep into the combined use of IAM roles and K8s service accounts, for the benefit of applications hosted on Kubernetes clusters that need to authenticate to AWS resources.
In other words, we will demonstrate how to grant Kubernetes services access to AWS resources by defining a corresponding IAM role and attaching it to a service account.
In this scenario, we will suppose that:
• We have a web application running on Kubernetes
• We have a ready-to-use Amazon S3 bucket named "lightlytics-s3-bucket"
• The web interface needs to retrieve pictures and documents stored on Amazon Simple Storage Service (S3).
• We are using HashiCorp Terraform as an Infrastructure as Code tool. Our local station is already authenticated to our AWS account via an access key, with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set as environment variables.
In this same perspective, we will assume that the web application is running as a Deployment. The deployment is exposed to external users via a NodePort.
We want to connect this deployment to Amazon S3 so it can retrieve the required objects that will be displayed on the static website.
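As a sketch, exposing such a deployment through a NodePort could look like the following; the labels and ports are assumptions, since the deployment manifest is not shown here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: NodePort
  selector:
    app: web-app        # assumed label on the deployment's pods
  ports:
  - port: 80            # service port inside the cluster
    targetPort: 8080    # assumed container port
    nodePort: 30080     # port opened on every node
```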
While building Amazon identities, we follow the principle of least privilege (PoLP): after defining an identity, we always grant it the fewest privileges possible.
In this example, we will create a role with one policy that enables the Amazon EKS-hosted application to get objects from our specific S3 bucket.
Applying this concept, let’s use HashiCorp Terraform to build the role that will be attached to the Kubernetes service account.
resource "aws_iam_role" "k8s_test_role" {
name = "talos"
# Terraform expression result to valid JSON syntax.
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = ["s3:GetObject"]
Effect = "Allow",
Resource= [
"arn:aws:s3::lightlytics-s3-bucket/*"
]
},
Action = "sts:AssumeRoleWithWebIdentity" Effect = "Allow"
Sid =""
Principal = {
Federated: "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
},
Condition: {
StringEquals: {
"$oidc_provider:aud": "sts.amazonaws.com",
"$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
}
},
]
})
tags = {
tag-key = "app-tag"
}
}
You then have to run the usual Terraform command suite:
$ terraform init
$ terraform plan
$ terraform apply
You may have noticed that the "Resource" field is defined as "arn:aws:s3:::lightlytics-s3-bucket/*". In fact, each resource on AWS has a specific ARN (Amazon Resource Name) that uniquely identifies it.
The trailing asterisk refers to all the files and folders hosted in the bucket, so no restriction is placed on specific elements within it.
Please note that the K8s service assuming this role will only be able to get objects from the bucket, thanks to the "s3:GetObject" action. It will not be able to upload any object to the bucket, because we did not allow the "s3:PutObject" action.
This granularity in policy syntax confines users to a reduced perimeter of freedom and minimizes threats to the Amazon S3 bucket. For instance, attackers will not have the chance to host malicious scripts on S3 storage by taking advantage of the IAM role privileges granted to the web application.
Please note that each of the placeholders in the previous script ($account_id, $oidc_provider, $namespace, and $service_account) is crucial to building the trust policy. The latter enables the service account to obtain temporary credentials and enjoy limited access through the API. This type of privilege is granted through the AWS Security Token Service (STS).
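This trust relationship assumes the cluster’s OIDC issuer has been registered with IAM as an identity provider. With Terraform, that registration can be sketched as follows; the cluster name and thumbprint are hypothetical:

```hcl
# Look up the EKS cluster to read its OIDC issuer URL.
data "aws_eks_cluster" "this" {
  name = "my-eks-cluster" # hypothetical cluster name
}

# Register the issuer with IAM so that roles can trust it.
resource "aws_iam_openid_connect_provider" "eks" {
  url             = data.aws_eks_cluster.this.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["9e99a48a9960b14926bb7f3b02e22da2b0ab7280"] # example thumbprint
}
```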
In this paragraph, we will build a service account and associate it with the IAM role created in the previous step.
On the control plane node, we will create the service account using a declarative approach. The sa-app.yaml manifest looks as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXXX:role/S3TalosAppTestRole
To create the service account, apply the manifest content:
$ kubectl create -f sa-app.yaml
Each resource created by Terraform is described, along with all its related properties and details, in the Terraform state file terraform.tfstate. It’s not recommended to access the state file by using text-editing commands like vim. Alternatively, you can retrieve the ARN of the created IAM role by inspecting the AWS Management Console, or by using the Terraform command that displays resource properties:
$ terraform state show aws_iam_role.k8s_test_role
In conclusion, assigning IAM roles to service accounts rather than to individual users gives organizations a secure and efficient way to manage access to cloud resources while minimizing the risk of human error or credential misuse. Additionally, service accounts can be used to automate tasks and enable secure communication between services within a cloud environment.
It is important for organizations to understand the best practices for managing IAM roles for service accounts, such as implementing a least privilege approach, regularly reviewing and updating roles, and monitoring account activity for suspicious behavior. By following these guidelines, organizations can ensure that their cloud infrastructure remains secure and accessible to authorized users and services.
As cloud adoption continues to grow, the use of IAM roles for service accounts will become increasingly important in managing access to cloud resources. It is important for organizations to stay up-to-date with the latest best practices and technologies in order to effectively manage their cloud infrastructure and protect their sensitive data.
Stream.Security delivers the only cloud detection and response solution that SecOps teams can trust. Born in the cloud, Stream’s Cloud Twin solution enables real-time cloud threat and exposure modeling to accelerate response in today’s highly dynamic cloud enterprise environments. By using the Stream Security platform, SecOps teams gain unparalleled visibility and can pinpoint exposures and threats by understanding the past, present, and future of their cloud infrastructure. The AI-assisted platform helps determine attack paths and blast radius across all elements of the cloud infrastructure to eliminate gaps and accelerate MTTR by streamlining investigations, reducing knowledge gaps, and maximizing team productivity while limiting burnout.