
RBAC

Role-based access control (RBAC) is key to properly securing a Kubernetes cluster. Roles can be either namespace-scoped or cluster-scoped.

Namespace based

Role

A Role is a set of rules describing resources (grouped in apiGroups) and the actions (verbs) allowed on them, associated with a specific namespace.

RoleBinding

A RoleBinding grants the permissions defined in a Role to a user or group within that namespace.

Cluster based

ClusterRole

A ClusterRole is the cluster-scoped equivalent of a Role; its rules apply across the whole cluster.

ClusterRoleBinding

A ClusterRoleBinding grants the permissions defined in a ClusterRole to a user or group across the whole cluster.

Roles

Cluster wide

  • cluster-admin - allows performing any action on any resource in the cluster

Namespace based

  • admin - allows full admin access within the namespace it is granted in, but does not allow modifying resource quotas or the namespace object itself
  • edit - allows read/write access to most objects in the namespace, but not to Roles or RoleBindings
  • view - allows read-only access to most objects in the namespace (Secrets excluded)

Groups:

  • system:masters - full admin for the cluster
  • system:basic-user - allows read-only access to basic information about the user's own account
  • system:unauthenticated - group for requests without authentication; effectively no operations allowed
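
These built-in ClusterRoles and groups can be inspected directly with kubectl, for example:

kubectl get clusterroles                 # list all cluster-wide roles, including the built-ins
kubectl describe clusterrole view        # show which resources and verbs the built-in "view" role grants
kubectl get roles --all-namespaces       # list namespace-scoped roles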

Network isolation

By default, pods are non-isolated; they accept traffic from any source.

Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
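
For example, a minimal default-deny ingress policy (a sketch based on the upstream NetworkPolicy docs; the prod namespace is just an example):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod          # example namespace
spec:
  podSelector: {}          # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress                # no ingress rules are listed, so all inbound traffic is denied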

Environments isolation

In the staging/test environments and in the production cluster, all pods should be isolated using network policies: https://kubernetes.io/docs/concepts/services-networking/network-policies/

A development cluster/namespace may have no network policies for pods, for easier prototyping and development.
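
On top of a default-deny policy, traffic can be selectively re-allowed, e.g. only from pods in the same namespace (again a sketch; the namespace name is an example):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # a bare podSelector only matches pods in this namespace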

Certificates and user accounts

1) SSH to the environment jumpbox

ssh ubuntu@jumpbox.env1.aws.test.cloudboostr.com -i jumpbox.pem

2) Log in to CredHub and grab the CA private key and certificate

credhub_login
credhub get -n /bosh/k8s/kubo_ca

3) Save certs to files

Private key (--tls-private-key-file, /etc/kubernetes/pki/private/server.key): kubernetes-key.pem
CA (--client-ca-file, /etc/kubernetes/pki/ca.crt): kubernetes-ca.pem

Optional:

Certificate (--tls-cert-file, /etc/kubernetes/pki/issued/server.crt): kubernetes-crt.pem
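
If jq is available on the jumpbox and the credential is stored as a certificate type, the files can be extracted straight from the CredHub JSON output (a sketch, assuming this CredHub CLI version supports --output-json):

credhub get -n /bosh/k8s/kubo_ca --output-json | jq -r '.value.certificate' > kubernetes-ca.pem
credhub get -n /bosh/k8s/kubo_ca --output-json | jq -r '.value.private_key' > kubernetes-key.pem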

4) Create private key and CSR using openssl

openssl genrsa -out adko.key 2048
openssl req -new -key adko.key -out adko.csr -subj "/CN=adko/O=dev/O=test"

CN - common name (used as the username), O - organisation (used as the group); multiple O entries assign the user to multiple groups.

Sign the CSR with the cluster CA:

openssl x509 -req -in adko.csr -CA kubernetes-ca.pem -CAkey kubernetes-key.pem -CAcreateserial -out adko.crt -days 365
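
The signed certificate can be inspected to confirm the username (CN), groups (O) and validity period:

openssl x509 -in adko.crt -noout -subject -dates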

5) Configure kubectl

kubectl config set-cluster adko-aws --certificate-authority kubernetes-ca.pem --embed-certs=true --server=https://cloudboostr-k8s-api.us-west-1.elb.amazonaws.com:8443
kubectl config set-credentials adko --client-certificate=adko.crt --client-key=adko.key --embed-certs=true
kubectl config set-context adko-aws-test --cluster=adko-aws --user=adko
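
Then switch kubectl to the new context:

kubectl config use-context adko-aws-test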

For easier context switching use: https://github.com/ahmetb/kubectx

Yaml samples for roles

Instead of specific resources or verbs it is possible to use wildcards: ["*"]. To see the existing bootstrap roles and how resources are allocated to apiGroups, check: https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/auth/authorizer/rbac/bootstrappolicy/testdata/cluster-roles.yaml

View-only role for test namespace:

Role configuration:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test
  name: test-view-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "watch", "list"]

Binding role to the user:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-view-rolebinding
  namespace: test
subjects:
- kind: User
  name: adko
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: test-view-role
  apiGroup: rbac.authorization.k8s.io
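
A sketch of applying and verifying the role and binding (file names are examples; kubectl auth can-i ... --as requires impersonation rights, which cluster-admin has):

kubectl apply -f test-view-role.yaml -f dev-view-rolebinding.yaml
kubectl auth can-i list pods --namespace test --as adko      # expected: yes
kubectl auth can-i delete pods --namespace test --as adko    # expected: no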

View access for all developers to prod

Role configuration:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: prod
  name: prod-view-role
rules:
- apiGroups: [""]
  resources: ["pods", "services", "replicationcontrollers"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch"]

Binding role to the users group:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prod-view-rolebinding
  namespace: prod
subjects:
- kind: Group
  name: dev
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: prod-view-role
  apiGroup: rbac.authorization.k8s.io
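
The group binding can be verified in the same way, impersonating both a user and the dev group:

kubectl auth can-i list pods --namespace prod --as adko --as-group dev      # expected: yes
kubectl auth can-i create pods --namespace prod --as adko --as-group dev    # expected: no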

If you see the error below, a kubectl update is required:

error: SchemaError(io.k8s.api.rbac.v1.RoleBindingList): invalid object doesn't have additional properties

Helm, security and colocating production with test and development

Helm

Helm's server-side component (Tiller) runs inside the cluster. This effectively means anyone can run anything from the inside if the Helm installation uses the default, cluster-wide service account. Bear in mind that Helm charts themselves are also not necessarily safe and may create service accounts with cluster-wide permissions.

The best solution for that is to run multiple Tillers - e.g. one Tiller per dev team, bound to a service account with the same privileges the team has. Another option is one Tiller per namespace, depending on the security layout; see the sketch below.
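
A rough sketch of the per-namespace Tiller setup (assuming Helm 2; the dev namespace and account names are examples):

# service account for Tiller, scoped to the dev namespace
kubectl create serviceaccount tiller --namespace dev
# grant it the built-in "admin" ClusterRole, but only inside dev (RoleBinding, not ClusterRoleBinding)
kubectl create rolebinding tiller-admin --namespace dev --clusterrole=admin --serviceaccount=dev:tiller
# install Tiller into dev using that service account
helm init --service-account tiller --tiller-namespace dev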

Helm client permissions:

To talk to Tiller, the Helm client needs permission to port-forward to the Tiller pod in the Tiller namespace (the pods/portforward subresource).

Security

Without network policies in place, all pods can access each other. This is not a problem in a dev environment, but it very much is for the production namespace. Having prod colocated in the same cluster with all the other environments means that network policies are required.

Performance

To avoid resource starvation caused by the dev environment, node affinity/anti-affinity or nodeSelectors should be applied so that production pods are physically isolated from test and dev pods. Additionally, dev/test pods should never have a higher priority than prod ones, to avoid evicting production workloads.
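
A sketch of both mechanisms for a production workload (the env=prod node label and the prod-priority PriorityClass are hypothetical; the PriorityClass API version depends on the cluster version):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: prod-priority
value: 1000000                 # higher than anything assigned to dev/test pods
globalDefault: false
description: "Priority class for production workloads"

Then, in the production pod template:

spec:
  priorityClassName: prod-priority
  nodeSelector:
    env: prod                  # schedules only onto nodes labelled env=prod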

As mentioned in kubernetes docs:

Setting resource limits properly, and testing that they are working takes a lot of time and effort so unless you can measurably save money by running production in the same cluster as staging or test, you may not really want to do that.

Using namespaces to partition the cluster

The article below explains with examples how namespaces can be used to partition the cluster:

https://kubernetes.io/blog/2016/08/kubernetes-namespaces-use-cases-insights/