Locking Down Your Kubernetes Cluster With Linkerd

In this hands-on workshop, we cover the basics of locking down in-cluster network traffic using the new traffic policies introduced in Linkerd 2.11. Using Linkerd’s ability to authorize traffic based on workload identity, we cover a variety of practical use cases, including restricting access to a critical service, preventing traffic across namespaces, and locking down traffic while still allowing metrics scrapes, health checks, and other meta-traffic.

Uploaded by

Buoyant

Locking Down Your Kubernetes Cluster with Linkerd

Hands-on workshop
Hi, we're Buoyant!
We created Linkerd! And we help you run Linkerd by providing
management tools (Buoyant Cloud), support, training, and much
more.
At your service today:
★ William Morgan, CEO (@wm)
★ Jason Morgan (not related!), MC (@RJasonMorgan)
★ Lots of other friendly Buoyant folks in the Linkerd Slack.
Have questions or need help? Join the #workshops channel on slack.linkerd.io and help each other!
Let's dive right in!
★ Linkerd 2.11 introduced a big new feature: authorization policy.
★ This feature gives you control over the types of communication that are allowed on your cluster.
★ It's built on top of mTLS identity and enforced at the pod level (zero-trust compatible).
But what do we mean by "authorization policy"?
★ By default, Kubernetes allows all communication to and from any pod.
★ By default, Linkerd also allows all communication to and from any
(meshed) pod.
★ Authorization policy refers to restricting some types of communication.
★ Called "authorization policy" because it works by denying requests unless they're properly authorized.

So authorization policy gives Linkerd the power to say "no".


What kinds of communication can be restricted?
Today, Linkerd's policies are purely server-side policies (enforced by the
inbound proxies) and authorize individual connections. This means they:
★ Can only restrict traffic to meshed pods.
★ Can only restrict connections, not individual requests.
This is just a first step! In the future (e.g. 2.12) we'll add:
★ Client-side policies (restrict traffic from meshed pods)
★ Fine-grained policy (verbs, paths, gRPC methods)
★ More!
Linkerd's authorization policies vs NetworkPolicies

Authorization Policies:
★ Use workload identity (i.e. ServiceAccount)
★ "Include" encryption
★ Enforced at the pod level (zero trust)
★ Can capture L7 semantics
★ Are ergonomic

Network Policies:
★ Use network identity (i.e. IP address)
★ No encryption
★ Enforced at the CNI layer
★ No L7 semantics
★ Hard to use
How is authorization policy expressed?
Two mechanisms that work together:
★ A default policy, typically set through a config.linkerd.io/default-inbound-policy annotation.
★ Two CRDs, Server and ServerAuthorization, that specify exceptions to the default policy.

This brings the total number of Linkerd CRDs to 4. Sorry!


Default policies
★ Every cluster has a cluster-wide default policy, set at install time with policyController.defaultAllowPolicy
  ○ By default: all-unauthenticated
★ The default policy can be overridden at the namespace or workload level
  ○ Set the config.linkerd.io/default-inbound-policy annotation
★ Every proxy's default policy is fixed at startup time.
  ○ If you want to change its default policy, you need to restart the pod!
  ○ Can be viewed in the environment variables for the proxy container.
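The namespace-level override can be sketched as a plain annotation on the Namespace object. This is a minimal sketch; the emojivoto namespace and the deny value are just illustrative choices:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
  annotations:
    # Overrides the cluster-wide default for meshed pods in this namespace.
    # Proxies only read this at startup, so restart workloads after changing it.
    config.linkerd.io/default-inbound-policy: deny
```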
Available default policies
★ all-unauthenticated: allow all
★ cluster-unauthenticated: allow from clients with source IPs in the cluster
★ all-authenticated: allow from clients with Linkerd's mTLS
★ cluster-authenticated: allow from in-cluster clients with Linkerd's mTLS
★ deny: deny all
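Setting the cluster-wide default at install/upgrade time might look like this as a Helm values fragment; the choice of cluster-authenticated here is just an example:

```yaml
# values.yaml fragment (sketch)
policyController:
  # One of: all-unauthenticated, cluster-unauthenticated,
  # all-authenticated, cluster-authenticated, deny
  defaultAllowPolicy: cluster-authenticated
```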
A note about cluster networks
★ Kubernetes doesn't give us a great way of knowing what the actual
network IP range is
★ Linkerd just uses all private IP space by default
★ But in practice, you should probably restrict this to the cluster's actual
network space by setting the clusterNetworks variable at
install/upgrade time.
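Restricting Linkerd to the cluster's actual network space might look like this as a Helm values fragment; these CIDRs are placeholders, so substitute your cluster's real pod and service ranges:

```yaml
# values.yaml fragment (sketch; CIDRs are hypothetical)
clusterNetworks: "10.244.0.0/16,10.96.0.0/12"
```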
The Server CRD
★ Selects over a port, and a set of pods, in a namespace
★ Give it a protocol hint and you can avoid protocol detection!

Example: the gRPC port on the emojivoto voting service

apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: voting-grpc
spec:
  podSelector:
    matchLabels:
      app: voting-svc
  port: voting-grpc
  proxyProtocol: gRPC
Servers can match multiple workloads!

Example: the admin port on every pod in this namespace

apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: admin
spec:
  port: linkerd-admin
  podSelector:
    matchLabels: {} # every pod
By themselves, Servers deny all traffic!
★ If you create a Server for a port, all traffic to that port will be denied.
  ○ This overrides the default policy.
★ If you want to allow traffic, you need to create a ServerAuthorization that references that Server.
The ServerAuthorization CRD
★ Selects over one or more Servers
★ Describes the types of traffic that are allowed to those Servers

Example: unauthenticated traffic to the "admin" Server is allowed

apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: admin-unauthed
spec:
  server:
    name: admin
  client:
    unauthenticated: true
ServerAuthz's can match multiple Servers!

Example: traffic to any Server with the "emojivoto/api" label is allowed if it's mTLS traffic from the "web" ServiceAccount

apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: internal-grpc
spec:
  server:
    selector:
      matchLabels:
        emojivoto/api: internal-grpc
  client:
    meshTLS:
      serviceAccounts:
      - name: web
Putting it all together
So, when a connection comes to a port on a meshed pod, how does Linkerd decide what to do? It uses this basic logic:

Is the (pod, port) selected by a Server?
★ Yes => Is that Server selected by a ServerAuthorization?
  ○ Yes => Follow the ServerAuthorization's rules for that connection
  ○ No => deny the connection
★ No => Use the default policy for the pod
How does it feel to be rejected?
★ If Linkerd knows this is a gRPC connection
  ○ Denial is a grpc-status: PermissionDenied response
★ If Linkerd knows this is an HTTP/1 or HTTP/2 connection
  ○ Denial is a 403 response
★ Otherwise
  ○ Denial is a refused TCP connection

If you update your policies, Linkerd will happily terminate established connections if they are no longer allowed!
Gotcha #1: Kubelet probes need to be authorized!
★ If you are building a "deny by default" setup, you need to make sure
Kubelet probes (liveness checks, readiness checks, health checks, etc) are
authorized!
○ Otherwise your pod won't start.
★ This also applies if you're building an "authenticated by default" setup.
Kubelet probes are plaintext / unauthenticated.
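One way to authorize kubelet probes in a deny-by-default namespace is a Server over the probed port plus a ServerAuthorization allowing unauthenticated clients. This is a sketch only; the names, labels, and port shown here are hypothetical:

```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: web-http
spec:
  podSelector:
    matchLabels:
      app: web-svc
  port: http
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: web-probes
spec:
  server:
    name: web-http
  client:
    # Kubelet probes are plaintext, so they must be allowed without mTLS.
    unauthenticated: true
```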
Gotcha #2: Default policies are not read dynamically!

★ The default policy for a pod is fixed at startup time, based on the annotations then present in the namespace and workload.
★ ... with one edge case, which is that you can dynamically change the cluster-wide default with linkerd upgrade. This only works if no annotations are overriding it.
The Server and ServerAuthorization CRs are, of course, read dynamically.
Gotcha #3: Ports must be in the pod spec!
If a Server references a port that is not in the pod spec, it will be ignored.
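For instance, a Server selecting a named voting-grpc port only takes effect if that port is declared in the pod spec. This container fragment is illustrative; the image tag and port number are assumptions:

```yaml
# Pod spec fragment (sketch): the named port a Server selects must be declared here
containers:
- name: voting
  image: buoyantio/emojivoto-voting-svc:v11  # hypothetical tag
  ports:
  - name: voting-grpc
    containerPort: 8080
```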
Hands-on time!
Let's take a look at how to get our Emojivoto app into a high security,
"deny by default" namespace.

(Based loosely on Go Directly to Namespace Jail by Linkerd maintainer Alejandro Pedraza)
Next Workshop
A guide to end-to-end encryption with Emissary-ingress and Linkerd
Thu, Feb 17, 2022
9 am PST | 12 pm EST | 6 pm CET

Register today!
buoyant.io/register/end-to-end-encryption-with-emissary-and-linkerd

…and coming up in March: Certificate management for Linkerd


Thank you!
William Morgan
CEO @ Buoyant
[email protected]

@BuoyantIO buoyant.io
The best way to run Linkerd in mission-critical environments:
★ Automatically track data plane and control plane health
★ Manage mesh certificates and versions
★ Build the ultimate service mesh platform
★ Get full Linkerd support

Request a demo: buoyant.io/demo
