gRPC
This example demonstrates how to route traffic to a gRPC service through the nginx controller.
Prerequisites
You have a kubernetes cluster running.
You have a domain name such as example.com that is configured to route traffic to the ingress controller. Replace references to fortune-teller.stack.build (the domain name used in this example) with your own domain name. (You are also responsible for provisioning an SSL certificate for the ingress.)
You have the nginx-ingress controller installed in the typical fashion (it must be at least quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0 for gRPC support).
You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the application provided here as an example.
Step 1: kubernetes Deployment
This is a standard kubernetes deployment object. It is running a grpc service listening on port 50051.
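A minimal sketch of such a deployment is shown below. The image name, labels, and namespace are illustrative; substitute your own gRPC server image and metadata.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortune-teller-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortune-teller-app
  template:
    metadata:
      labels:
        app: fortune-teller-app
    spec:
      containers:
      - name: fortune-teller-app
        # Illustrative image name; substitute your own gRPC server image.
        image: quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1
        ports:
        # The container serves plaintext gRPC (HTTP/2) on this port.
        - name: grpc
          containerPort: 50051
```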
The sample application fortune-teller-app is a grpc server implemented in go. Here's the stripped-down implementation:
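Roughly, the server's main function looks like the sketch below. The fortune import path and the FortuneTeller type stand in for the generated protobuf code and the service implementation, so treat those names as placeholders.

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"

	// Placeholder import path for the generated FortuneTeller gRPC code.
	fortune "github.com/example/fortune-teller/proto/fortune"
)

func main() {
	// No TLS credentials are configured here: TLS is terminated at the
	// ingress, so the server speaks plaintext HTTP/2 inside the cluster.
	grpcServer := grpc.NewServer()

	// FortuneTeller (defined elsewhere in the app) implements the
	// generated fortune.FortuneTellerServer interface.
	fortune.RegisterFortuneTellerServer(grpcServer, &FortuneTeller{})

	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	if err := grpcServer.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```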
The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, grpc traffic will travel unencrypted inside the cluster and arrive "insecure").
For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your pod and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPCS".
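For example, a sketch of just the relevant ingress metadata fragment:

```yaml
metadata:
  annotations:
    # Proxy encrypted gRPC traffic through to the backend (TLS on the pod).
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
```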
Step 2: the kubernetes Service
Here we have a typical service. Nothing special, just routing traffic to the backend application on port 50051.
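A sketch of such a service, assuming the label and port used in the deployment above (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fortune-teller-service
  namespace: default
spec:
  selector:
    app: fortune-teller-app
  ports:
  # Route TCP traffic straight to the gRPC port on the pods.
  - name: grpc
    port: 50051
    targetPort: 50051
```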
Step 3: the kubernetes Ingress
A few things to note:
We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
We're terminating TLS at the ingress and have configured an SSL certificate for fortune-teller.stack.build. The ingress matches traffic arriving as https://fortune-teller.stack.build:443 and routes unencrypted messages to our kubernetes service.
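Put together, the ingress could look like the sketch below. The TLS secret name and the backend service name are illustrative and must match the certificate secret and service you actually created.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fortune-ingress
  namespace: default
  annotations:
    # Tell nginx to proxy HTTP/2 (gRPC) traffic to the backend.
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - fortune-teller.stack.build
    # Illustrative secret holding the SSL certificate for the host.
    secretName: fortune-teller-tls
  rules:
  - host: fortune-teller.stack.build
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fortune-teller-service
            port:
              number: 50051
```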
Step 4: test the connection
Once we've applied our configuration to kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
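For example (the fully qualified service and method names below are illustrative; use the ones defined in your own .proto files):

```sh
# TLS is terminated by the ingress, so we dial the public hostname on 443.
grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict
```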
Debugging Hints
Obviously, watch the logs on your app.
Watch the logs for the nginx-ingress-controller (increasing verbosity as needed).
Double-check your address and ports.
Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server (a sketch for setting this on the server pod follows these hints).
Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540.
If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that can help make it easier for your users to consume your API.
See also the specific GRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
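For a Go server such as the sample app, one way to enable that logging is to add the variable to the container spec of the deployment (a sketch of the relevant fragment):

```yaml
env:
- name: GODEBUG
  value: http2debug=2
```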
Notes on using response/request streams
If your server does only response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
If your service does only request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout.
Values for the timeouts must be specified as e.g. "1200s".
On the most recent versions of nginx-ingress, changing these timeouts requires using the nginx.ingress.kubernetes.io/server-snippet annotation. There are plans for future releases to allow using the Kubernetes annotations to define each timeout separately.
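A sketch of what that annotation could look like, using the example "1200s" value from the note above:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 1200s;
      grpc_send_timeout 1200s;
      client_body_timeout 1200s;
```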