Service Weaver and Kubernetes
In this blog post we introduce Kube, a deployer that allows you to deploy Service Weaver applications in any Kubernetes environment, e.g., GKE, EKS, AKS, or minikube.
With Kube, you provide an application binary and a configuration file config.yaml. The deployer builds a container image for the application and generates the Kubernetes resources that enable the application to run in a Kubernetes cluster.
With Kube:
- The Kubernetes manifest file to deploy your application is automatically generated.
- You control how to run your application (e.g., resource requirements, scaling specifications, volumes).
- You decide how to export telemetry (e.g., traces to Jaeger, metrics to Prometheus, write custom telemetry plugins to export telemetry to your favorite observability framework).
- You can use existing tools to deploy your application (e.g., kubectl, CI/CD pipelines like GitHub Actions, Argo CD or Jenkins).
Hello World!
To deploy a "Hello, World!" Service Weaver application with the Kube deployer, you write an application config and a deployment config:
```toml
# app_config.toml
[serviceweaver]
binary = "./hello"

# Per component config.
...
```
```yaml
# dep_config.yaml
appConfig: app_config.toml
repo: docker.io/mydockerid
listeners:
- name: hello
  public: true
```
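For reference, the `./hello` binary referenced above could be produced by a minimal Service Weaver program along these lines. This is a sketch, not code from the original post; note how the listener field name (`hello`) matches the listener declared in dep_config.yaml:

```go
// A minimal Service Weaver "Hello, World!" application.
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"github.com/ServiceWeaver/weaver"
)

type app struct {
	weaver.Implements[weaver.Main]
	// The field name "hello" becomes the listener name, matching
	// the listener entry in dep_config.yaml.
	hello weaver.Listener
}

// serve registers an HTTP handler and serves requests on the listener.
func serve(ctx context.Context, a *app) error {
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello, World!")
	})
	return http.Serve(a.hello, nil)
}

func main() {
	if err := weaver.Run(context.Background(), serve); err != nil {
		log.Fatal(err)
	}
}
```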
Then use weaver kube deploy to generate the Kubernetes resources needed to run your application:
```console
$ go build .
$ weaver kube deploy dep_config.yaml
...
Building image hello:ffa65856...
...
Uploading image to docker.io/mydockerid/...
...
Generating kube deployment info ...
...
kube deployment information successfully generated
/tmp/kube_ffa65856.yaml
```
Finally, you can simply deploy your application:

```console
$ kubectl apply -f /tmp/kube_ffa65856.yaml
```
Configurations
With Kube, you can control how your application runs on Kubernetes. We identified the top knobs that users typically configure when running on Kubernetes (based on a survey done by the Go team in 2023 H2) and exposed them in the config:
- Resources needed to run each container; e.g., the CPU/Mem requests/limits.
- Scaling specifications to control how the pods are scaled up/down; e.g., min and max replicas, metrics to scale, utilization threshold.
- Volumes to mount storage, config maps, secrets, etc.
- Probes to monitor the liveness, readiness and startup of the containers.
Note that these knobs can be specified for the entire application, or for a subset of the components.
For example, if you want to configure the memory requests for all the running pods, you can do so as follows:
```yaml
appConfig: weaver.toml
repo: docker.io/mydockerid
listeners:
- name: hello
  public: true
resourceSpec:
  requests:
    memory: "64Mi"
```
Note that if there are other knobs you want to configure, you can manually edit the generated Kubernetes manifest file. If a knob turns out to be widely used, we can expose it in the config alongside the existing ones.
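As a sketch of what a scaling specification might look like, the following fragment mirrors the shape of a Kubernetes HorizontalPodAutoscaler. The scalingSpec field name and its schema here are assumptions, so consult the Kube deployer documentation for the exact spelling:

```yaml
appConfig: weaver.toml
repo: docker.io/mydockerid
# Hypothetical scaling knobs; the exact field names may differ.
scalingSpec:
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```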
Telemetry
The Kube deployer documentation goes into detail on how to export telemetry to your favorite observability framework. To do that, you implement a wrapper deployer on top of the Kube deployer using the Kube tool abstraction. We provide examples of how to export traces to Jaeger, export metrics to Prometheus, and visualize the exported traces and metrics with Grafana.
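The shape of such a wrapper deployer might look like the sketch below. The tool.Plugins field names and the exportToJaeger helper are assumptions made for illustration and may not match the current weaver-kube API, so treat the Kube deployer documentation as the source of truth:

```go
// Sketch of a wrapper deployer that customizes trace export.
// The tool.Plugins fields shown here are assumptions; check the
// weaver-kube tool package for the actual API.
package main

import (
	"context"

	"github.com/ServiceWeaver/weaver-kube/tool"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	tool.Run("my-kube-deployer", tool.Plugins{
		// Forward spans to your tracing backend (e.g., Jaeger).
		HandleTraceSpans: func(ctx context.Context, spans []sdktrace.ReadOnlySpan) error {
			return exportToJaeger(ctx, spans)
		},
	})
}

// exportToJaeger is a hypothetical helper; implement it against
// your tracing backend of choice.
func exportToJaeger(ctx context.Context, spans []sdktrace.ReadOnlySpan) error {
	return nil
}
```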
CI/CD Pipelines
The Kube deployer allows you to deploy your application using your favorite CI/CD pipeline. Here is an example of how to integrate with GitHub Actions. We also integrate with Argo CD and Jenkins.
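A minimal GitHub Actions workflow might look like the following sketch. The action versions and Go setup are assumptions to adjust to your setup; the weaver CLI install path is the one documented for Service Weaver:

```yaml
# .github/workflows/deploy.yaml (illustrative)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-go@v5
      with:
        go-version: 'stable'
    # Install the weaver CLI.
    - run: go install github.com/ServiceWeaver/weaver/cmd/weaver@latest
    - run: go build .
    - run: weaver kube deploy dep_config.yaml
    # A kubectl apply step against your cluster would follow here.
```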
Final Thoughts
To learn more about how to use the Kube deployer to run your Service Weaver application on Kubernetes, check the Kube deployer documentation. We are eager to hear your feedback; help us enhance the deployer or, perhaps, contribute new deployers that can benefit the entire Service Weaver community.