Service Weaver and Kubernetes
With Kube, the user provides an application binary and a configuration file, config.yaml. The deployer builds a container image for the application and generates the Kubernetes resources that enable the application to run in a Kubernetes cluster. The Kube deployer has several benefits:
- The Kubernetes manifest file to deploy your application is automatically generated.
- You control how to run your application (e.g., resource requirements, scaling specifications, volumes).
- You decide how to export telemetry (e.g., traces to Jaeger, metrics to Prometheus, write custom telemetry plugins to export telemetry to your favorite observability framework).
- You can use existing tools to deploy your application (e.g., kubectl, CI/CD pipelines like GitHub Actions, Argo CD or Jenkins).
To deploy a "Hello, World!" Service Weaver application with the Kube deployer, the user writes an application config and a deployment config:
```toml
# app_config.toml
[serviceweaver]
binary = "./hello"

# Per component config.
...
```
```yaml
# dep_config.yaml
appConfig: app_config.toml
repo: docker.io/mydockerid
listeners:
  - name: hello
    public: true
```
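For reference, the `./hello` binary named in the application config could be a minimal Service Weaver application along these lines. This is a sketch based on the standard Service Weaver "Hello, World!" example; the `hello` listener field matches the `hello` listener named in the deployment config:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"github.com/ServiceWeaver/weaver"
)

// app is the main component. Its weaver.Listener field, named
// "hello", corresponds to the "hello" listener in the deployment config.
type app struct {
	weaver.Implements[weaver.Main]
	hello weaver.Listener
}

// serve runs an HTTP server on the "hello" listener.
func serve(ctx context.Context, app *app) error {
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello, World!\n")
	})
	return http.Serve(app.hello, nil)
}

func main() {
	if err := weaver.Run(context.Background(), serve); err != nil {
		log.Fatal(err)
	}
}
```

As with any Service Weaver application, you would run `weaver generate ./...` before building the binary.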
Next, run `weaver kube deploy` to generate the Kubernetes resources needed to run your application:
```console
$ go build .
$ weaver kube deploy dep_config.yaml
...
Building image hello:ffa65856...
...
Uploading image to docker.io/mydockerid/...
...
Generating kube deployment info ...
...
kube deployment information successfully generated
/tmp/kube_ffa65856.yaml
```
Finally, you can simply deploy your application:
```console
$ kubectl apply -f /tmp/kube_ffa65856.yaml
```
With Kube, the user can also control how the application runs on Kubernetes. We identified the top knobs that users typically configure when running on Kubernetes (based on a survey done by the Go team in 2023 H2) and expose them in the config. The following knobs are available:
- Resources needed to run each container; e.g., the CPU/Mem requests/limits.
- Scaling specifications to control how the pods are scaled up/down; e.g., min and max replicas, metrics to scale, utilization threshold.
- Volumes to mount storage, config maps, secrets, etc.
- Probes to monitor the liveness, readiness and startup of the containers.
Note that these knobs can be specified for the entire application, or for a subset of the components.
For example, to configure the memory requested for all the running pods:
```yaml
appConfig: weaver.toml
repo: docker.io/mydockerid
listeners:
  - name: hello
    public: true
resourceSpec:
  requests:
    memory: "64Mi"
```
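A scaling specification can be expressed in the same way. The sketch below is illustrative only: the key names (`scalingSpec`, `minReplicas`, `maxReplicas`, and the HPA-style `metrics` stanza) are assumptions modeled on the Kubernetes HorizontalPodAutoscaler API, so consult the Kube deployer documentation for the actual schema:

```yaml
appConfig: weaver.toml
repo: docker.io/mydockerid
listeners:
  - name: hello
    public: true
# Illustrative only: key names below are assumptions, not the documented schema.
scalingSpec:
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```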
Note that if there are other knobs you want to configure, you can manually edit the generated Kubernetes manifest file. However, if some knobs turn out to be widely used, we can expose them alongside the existing ones in the config.
The Kube deployer documentation goes into detail on how to export telemetry to your favorite observability framework. To do that, you implement a wrapper deployer on top of the Kube deployer using the Kube tool abstraction. We provide examples of how to export traces to Jaeger and metrics to Prometheus, and how to visualize the exported traces and metrics with Grafana.
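As a rough sketch of what such a wrapper deployer might look like: the package path `github.com/ServiceWeaver/weaver-kube/tool` and the `tool.Run`/`tool.Plugins` names below are assumptions based on the weaver-kube examples, and the exact API may differ, so treat this as an outline and consult the Kube deployer documentation:

```go
package main

import (
	"context"
	"fmt"
	"os"

	// Assumed package path for the Kube tool abstraction.
	"github.com/ServiceWeaver/weaver-kube/tool"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Wrap the Kube deployer, overriding how trace spans are exported.
	// This plugin just prints span names; a real plugin would hand the
	// spans to a Jaeger or OTLP exporter instead.
	tool.Run("customkube", tool.Plugins{
		HandleTraceSpans: func(ctx context.Context, spans []sdktrace.ReadOnlySpan) error {
			for _, span := range spans {
				fmt.Fprintf(os.Stderr, "span: %s\n", span.Name())
			}
			return nil
		},
	})
}
```

The resulting binary is used in place of `weaver kube` when building and deploying, so the generated pods export telemetry through your plugin.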
To learn more about how to use the Kube deployer to run your Service Weaver application on Kubernetes, check the Kube deployer documentation. We are eager to hear your feedback. Help us enhance the deployer or, perhaps, contribute new deployers that can benefit the entire Service Weaver community.