KubeAdmiral - Enhanced Kubernetes Federation
English | 简体中文
KubeAdmiral is a multi-cluster management system for Kubernetes, developed from Kubernetes Federation v2. Kubernetes Federation v2 allows users to manage Kubernetes resources across multiple clusters through the use of federated types such as FederatedDeployment, FederatedReplicaSet, FederatedSecret, etc. KubeAdmiral extends the Kubernetes Federation v2 API, providing compatibility with the Kubernetes native API and more powerful resource management capabilities. KubeAdmiral also adds new features such as:
- A new scheduling framework with a rich set of scheduling plugins.
- Override policies.
- Automatic propagation of dependencies with follower scheduling.
- Status aggregation of member cluster resources.
- Scalability, stability, and user experience enhancements.
Getting started
KubeAdmiral supports Kubernetes versions from 1.16 up to 1.24. Using lower or higher Kubernetes versions may cause compatibility issues. For setup, please refer to the Quickstart.
Community
Contributing
If you would like to contribute to the KubeAdmiral project, please refer to our CONTRIBUTING document for details.
Contact
If you have any questions or wish to contribute, you are welcome to communicate via GitHub issues or pull requests. Alternatively, you may reach out to our Maintainers.
License
KubeAdmiral is under the Apache 2.0 license. See the LICENSE file for details. KubeAdmiral is a continuation of Kubernetes Federation v2, and certain features in KubeAdmiral rely on existing code from Kubernetes; all credits go to the original Kubernetes authors. We also refer to Karmada for some of our architecture and API design; all relevant credits go to the Karmada Authors.
feat: collect cluster resources without caching pods
Overview
Collecting member cluster resources is required for resource-based scheduling plugins, such as ClusterResourcesFit, ClusterResourcesBalancedAllocation, etc.
Currently, we collect the available and allocatable member cluster resources by maintaining a Node and Pod informer for each member cluster. The number of schedulable nodes and the total allocatable resources are obtained from the Node informer. The total requested resources (which are subtracted from the total allocatable resources to get the total available resources) are obtained from the Pod informer.
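For reference, the underlying calculation is simply "allocatable minus requested". Below is a minimal sketch of that calculation using the standard corev1 types; the package and function names (clusterresources, clusterAvailable) are illustrative and not taken from the KubeAdmiral codebase.

```go
// Illustrative sketch of the allocatable-minus-requested calculation described
// above; not the actual KubeAdmiral implementation.
package clusterresources

import (
	corev1 "k8s.io/api/core/v1"
)

// clusterAvailable returns the total allocatable resources of schedulable
// nodes minus the resources requested by pods that have not completed.
func clusterAvailable(nodes []*corev1.Node, pods []*corev1.Pod) corev1.ResourceList {
	available := corev1.ResourceList{}

	// Sum allocatable resources across all schedulable nodes.
	for _, node := range nodes {
		if node.Spec.Unschedulable {
			continue
		}
		for name, quantity := range node.Status.Allocatable {
			q := available[name]
			q.Add(quantity)
			available[name] = q
		}
	}

	// Subtract the resources requested by every pod that is still consuming
	// node resources.
	for _, pod := range pods {
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			continue
		}
		for _, container := range pod.Spec.Containers {
			for name, request := range container.Resources.Requests {
				q := available[name]
				q.Sub(request)
				available[name] = q
			}
		}
	}
	return available
}
```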
This naturally does not scale well as informers maintain a copy of each object in memory.
This PR introduces a new mechanism for collecting cluster resources that does not require the caching of pod objects. With this new mechanism, we no longer require the use of a pod informer to collect the total requested resources in a cluster.
This new mechanism works based on the premise that:
How the new mechanism works
Memory improvements
(In relation to collecting total requested resources)
Previously, a copy of every pod in each member cluster was stored in memory. During relists, when a complete list of pods has to be deserialized again, memory usage might even double.
With this new mechanism, the amount of memory used would be O(number of member clusters * page size). The memory used for maintaining the set of UIDs of completed-but-not-terminated pods should be insignificant.
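To make the O(number of member clusters * page size) bound concrete, here is a minimal sketch of collecting total requested resources with a paginated List instead of a pod informer, so that only one page of pods is decoded at a time. This is an illustration under assumed names (sumRequestedResources, pageSize), not the implementation in this PR, and it omits the bookkeeping of completed-but-not-terminated pod UIDs mentioned above.

```go
// Illustrative sketch of a cacheless collection of total requested resources
// using a paginated List; not the actual implementation in this PR.
package clusterresources

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const pageSize = 500 // illustrative page size

// sumRequestedResources walks the pod list page by page and accumulates the
// resource requests of pods that are still consuming node resources. Only one
// page of pods is held in memory at a time.
func sumRequestedResources(ctx context.Context, client kubernetes.Interface) (corev1.ResourceList, error) {
	requested := corev1.ResourceList{}
	opts := metav1.ListOptions{Limit: pageSize}

	for {
		podList, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx, opts)
		if err != nil {
			return nil, err
		}
		for i := range podList.Items {
			pod := &podList.Items[i]
			// Completed pods no longer occupy node resources.
			if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
				continue
			}
			for _, container := range pod.Spec.Containers {
				for name, request := range container.Resources.Requests {
					q := requested[name]
					q.Add(request)
					requested[name] = q
				}
			}
		}
		// Follow the continue token until the last page.
		if podList.Continue == "" {
			return requested, nil
		}
		opts.Continue = podList.Continue
	}
}
```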
Results
I ran two versions of the KubeAdmiral controller-manager against test clusters where I simulated the creation and deletion of 10000 pods at regular intervals. The first version is the main branch, while the second version uses the new cacheless collector. Periodic relists were triggered by pausing the apiservers, to verify that relists do not adversely affect memory usage or correctness.
I also hooked up some temporary metrics for evaluating the memory usage and correctness. Below are the results:
Heap alloc of the 2 versions:
Reported CPU available (both correspond)
Reported memory available (both correspond)
feat: add commands for building docker images and new kubeadmiral local startup
What this PR does
Restructures the kubeadmiral/hack directory to make the scripts easier to read, and adds commands for building Docker images and starting KubeAdmiral locally.
Does this PR introduce a user-facing change?
KubeAdmiral can now be started locally with the following command: make local-up
chore(deps): bump k8s.io/klog/v2 from 2.80.1 to 2.90.1
Bumps k8s.io/klog/v2 from 2.80.1 to 2.90.1.
Release notes
Sourced from k8s.io/klog/v2's releases.