Knative provides a higher level of abstraction, simplifying and speeding up the process of building, deploying, and managing applications on Kubernetes. It allows developers to focus more on implementing business logic, while leaving most of the infrastructure and operations work to Knative, significantly improving productivity.
Activator
Queues requests sent to Knative Services that have scaled to zero, signals the Autoscaler to scale those services back up, and forwards the queued requests once they are ready. The Activator can also act as a request buffer to absorb bursts of traffic.
Autoscaler
Responsible for scaling Knative Services up and down, including to and from zero, based on configuration, observed metrics, and incoming requests.
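As an illustration, the Go sketch below builds a Revision template carrying the autoscaling annotations that the Autoscaler reads (the min-scale/max-scale/target keys documented for recent Knative releases). It assumes the knative.dev/serving Go module; the image and values are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Revision template with autoscaling bounds and a per-replica concurrency target.
	tmpl := servingv1.RevisionTemplateSpec{
		ObjectMeta: metav1.ObjectMeta{
			Annotations: map[string]string{
				// A lower bound above 0 disables scale-to-zero for this Revision.
				"autoscaling.knative.dev/min-scale": "1",
				"autoscaling.knative.dev/max-scale": "5",
				// Soft target of concurrent requests per replica that the Autoscaler aims for.
				"autoscaling.knative.dev/target": "50",
			},
		},
		Spec: servingv1.RevisionSpec{
			PodSpec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Image: "ghcr.io/example/app:1.0", // placeholder image
				}},
			},
		},
	}

	out, _ := yaml.Marshal(tmpl)
	fmt.Println(string(out)) // prints the template as it would appear in a manifest
}
```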
Controller
Manages the state of Knative CRs. It monitors multiple objects, manages the lifecycle of dependent resources, and updates resource status.
Queue-Proxy
Sidecar container injected into every pod of a Knative Service. It collects traffic metrics (such as request concurrency) and reports them to the Autoscaler, which then makes scaling decisions based on this data and the configured rules.
Webhooks
Knative Serving runs several admission webhooks responsible for validating and mutating Knative resources, for example applying defaults and rejecting invalid specs.
Services
service.serving.knative.dev
Automatically manages the entire lifecycle of the workload. It controls the creation of the other objects, ensuring that every application has a Route, a Configuration, and a new Revision for each update.
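As a concrete example, the Go sketch below assembles a minimal Knative Service with the serving.knative.dev/v1 types and prints it as a manifest; the name, namespace, and image are illustrative. Applying such an object is what causes Knative to create the Configuration, the first Revision, and the Route described next.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	svc := servingv1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "serving.knative.dev/v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "hello", Namespace: "default"},
		Spec: servingv1.ServiceSpec{
			// ConfigurationSpec holds the desired state of the workload.
			ConfigurationSpec: servingv1.ConfigurationSpec{
				Template: servingv1.RevisionTemplateSpec{
					Spec: servingv1.RevisionSpec{
						PodSpec: corev1.PodSpec{
							Containers: []corev1.Container{{
								Image: "ghcr.io/example/hello:1.0", // placeholder image
								Env:   []corev1.EnvVar{{Name: "TARGET", Value: "World"}},
							}},
						},
					},
				},
			},
			// RouteSpec sends all traffic to the latest ready Revision.
			RouteSpec: servingv1.RouteSpec{
				Traffic: []servingv1.TrafficTarget{{
					LatestRevision: ptr(true),
					Percent:        ptr(int64(100)),
				}},
			},
		},
	}

	out, _ := yaml.Marshal(svc)
	fmt.Println(string(out)) // the printed manifest could be applied with kubectl
}

// ptr is a small helper for the pointer fields used by the Knative types.
func ptr[T any](v T) *T { return &v }
```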
Routes
route.serving.knative.dev
Maps a network endpoint to one or more Revisions and supports traffic splitting and version-based routing.
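For example, the sketch below (same Go types, with illustrative revision names) defines a Route that splits traffic 90/10 between a stable Revision and a canary, and additionally exposes the canary under a dedicated tag URL.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	route := servingv1.Route{
		TypeMeta:   metav1.TypeMeta{APIVersion: "serving.knative.dev/v1", Kind: "Route"},
		ObjectMeta: metav1.ObjectMeta{Name: "hello", Namespace: "default"},
		Spec: servingv1.RouteSpec{
			Traffic: []servingv1.TrafficTarget{
				{
					RevisionName: "hello-00001", // stable Revision keeps 90% of requests
					Percent:      ptr(int64(90)),
				},
				{
					RevisionName: "hello-00002", // canary Revision receives 10%
					Percent:      ptr(int64(10)),
					Tag:          "canary", // also reachable via its own tagged URL
				},
			},
		},
	}

	out, _ := yaml.Marshal(route)
	fmt.Println(string(out))
}

// ptr is a small helper for the pointer fields used by the Knative types.
func ptr[T any](v T) *T { return &v }
```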
Configurations
configuration.serving.knative.dev
Maintains the desired state of the deployment and keeps code separate from configuration, following the Twelve-Factor App methodology; modifying a Configuration creates a new Revision.
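The sketch below shows a standalone Configuration built with the same Go types (image tag and environment variable are placeholders); editing anything under the template, such as bumping the image tag, is what stamps out the next Revision.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	servingv1 "knative.dev/serving/pkg/apis/serving/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := servingv1.Configuration{
		TypeMeta:   metav1.TypeMeta{APIVersion: "serving.knative.dev/v1", Kind: "Configuration"},
		ObjectMeta: metav1.ObjectMeta{Name: "hello", Namespace: "default"},
		Spec: servingv1.ConfigurationSpec{
			Template: servingv1.RevisionTemplateSpec{
				Spec: servingv1.RevisionSpec{
					PodSpec: corev1.PodSpec{
						Containers: []corev1.Container{{
							// Changing the image (or any other template field) yields a new Revision.
							Image: "ghcr.io/example/hello:1.1",
							Env:   []corev1.EnvVar{{Name: "LOG_LEVEL", Value: "info"}},
						}},
					},
				},
			},
		},
	}

	out, _ := yaml.Marshal(cfg)
	fmt.Println(string(out))
}
```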
Revisions
revision.serving.knative.dev
An immutable, point-in-time snapshot of the workload's code and configuration, created on every modification; Revisions scale automatically based on incoming traffic.