1. Introduction
If you repost this article, please credit the original source and respect the author's work!
Source code: https://github.com/nicktming/kubernetes/tree/tming-v1.13/pkg/kubelet/cm/devicemanager
Branch: tming-v1.13 (based on v1.13)
k8s-device-plugin
Branch: tming-v1.11 (based on v1.11)
The device manager and device plugin series:
1. [k8s source code analysis][kubelet] devicemanager: pod_devices and checkpoint
2. [k8s source code analysis][kubelet] devicemanager: using a device plugin (simulated GPU)
3. [k8s source code analysis][kubelet] devicemanager: device plugin registration with the kubelet
4. [k8s source code analysis][kubelet] devicemanager: kubelet requesting resources
5. [k8s source code analysis][kubelet] devicemanager: restarting the kubelet and the device plugin
This article looks at the remaining device manager methods and at what the device manager does when the kubelet and the device plugin restart.
2. readCheckpoint and writeCheckpoint
The device manager persists its state to /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint.
func (m *ManagerImpl) writeCheckpoint() error {
    m.mutex.Lock()
    registeredDevs := make(map[string][]string)
    // Only healthy devices are persisted.
    for resource, devices := range m.healthyDevices {
        registeredDevs[resource] = devices.UnsortedList()
    }
    // The contents of podDevices are persisted as well.
    data := checkpoint.New(m.podDevices.toCheckpointData(),
        registeredDevs)
    m.mutex.Unlock()
    err := m.checkpointManager.CreateCheckpoint(kubeletDeviceManagerCheckpoint, data)
    if err != nil {
        return fmt.Errorf("failed to write checkpoint file %q: %v", kubeletDeviceManagerCheckpoint, err)
    }
    return nil
}
As the code shows, the device manager persists only the contents of healthyDevices and podDevices; other fields, such as allocatedDevices (the devices already handed out) and unhealthyDevices, are not persisted.
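Since the checkpoint is just a JSON file on disk, it is easy to look at what actually gets persisted. Below is a minimal sketch for dumping it; the JSON shape in the comment is approximate (the exact schema is defined by the checkpoint package covered in the pod_devices and checkpoint article), and the path assumes the default kubelet root directory.

package main

import (
    "fmt"
    "io/ioutil"
)

func main() {
    // Default location; adjust if the kubelet runs with a non-default --root-dir.
    const path = "/var/lib/kubelet/device-plugins/kubelet_internal_checkpoint"
    raw, err := ioutil.ReadFile(path)
    if err != nil {
        panic(err)
    }
    // Roughly: {"Data":{"PodDeviceEntries":[...],"RegisteredDevices":{"nvidia.com/gpu":[...]}},"Checksum":...}
    fmt.Println(string(raw))
}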
func (m *ManagerImpl) readCheckpoint() error {
    registeredDevs := make(map[string][]string)
    devEntries := make([]checkpoint.PodDevicesEntry, 0)
    cp := checkpoint.New(devEntries, registeredDevs)
    err := m.checkpointManager.GetCheckpoint(kubeletDeviceManagerCheckpoint, cp)
    if err != nil {
        if err == errors.ErrCheckpointNotFound {
            klog.Warningf("Failed to retrieve checkpoint for %q: %v", kubeletDeviceManagerCheckpoint, err)
            return nil
        }
        return err
    }
    m.mutex.Lock()
    defer m.mutex.Unlock()
    podDevices, registeredDevs := cp.GetData()
    // Only the contents of podDevices are restored; healthyDevices is not restored.
    m.podDevices.fromCheckpointData(podDevices)
    m.allocatedDevices = m.podDevices.devices()
    for resource := range registeredDevs {
        // For each resource, create an endpoint carrying a stop time and wait for
        // the device plugin to re-register.
        m.healthyDevices[resource] = sets.NewString()
        m.unhealthyDevices[resource] = sets.NewString()
        m.endpoints[resource] = endpointInfo{e: newStoppedEndpointImpl(resource), opts: nil}
    }
    return nil
}
Three points deserve attention:
1. The contents of podDevices are restored.
2. The contents of healthyDevices are not restored.
3. For each resource, an endpoint carrying a stop time is created, and the manager waits for the device plugin to re-register (a sketch of this stop-time mechanism follows). When does the plugin re-register? That is analyzed below; on re-registration, the callback updates healthyDevices and unhealthyDevices.
So the healthyDevices persisted by writeCheckpoint are used in readCheckpoint only to create, for each of those resources, an endpoint with a stop time.
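A stopped endpoint is essentially an endpoint whose stopTime is set when it is created (or when its plugin goes away), and it counts as expired once a grace period has passed; GetCapacity below relies on exactly this check. Here is a minimal, self-contained sketch of the idea. The types are simplified stand-ins, and the 5-minute value is my recollection of the grace period in this version, so treat it as an assumption; the real constant lives in the devicemanager package.

package main

import (
    "fmt"
    "time"
)

// Simplified stand-in for the device manager's stopped endpoint (not the real type).
type stoppedEndpoint struct {
    stopTime time.Time
}

// Assumed value; the real grace period constant is defined in the devicemanager package.
const endpointStopGracePeriod = 5 * time.Minute

func (e *stoppedEndpoint) stopGracePeriodExpired() bool {
    return !e.stopTime.IsZero() && time.Since(e.stopTime) > endpointStopGracePeriod
}

func main() {
    e := &stoppedEndpoint{stopTime: time.Now()}
    // Freshly stopped: not expired yet, so GetCapacity would still report its devices.
    fmt.Println(e.stopGracePeriodExpired())
}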
3. GetCapacity
func (m *ManagerImpl) GetCapacity() (v1.ResourceList, v1.ResourceList, []string) {
    needsUpdateCheckpoint := false
    var capacity = v1.ResourceList{}
    var allocatable = v1.ResourceList{}
    deletedResources := sets.NewString()
    m.mutex.Lock()
    for resourceName, devices := range m.healthyDevices {
        eI, ok := m.endpoints[resourceName]
        if (ok && eI.e.stopGracePeriodExpired()) || !ok {
            // The resources contained in endpoints and (un)healthyDevices
            // should always be consistent. Otherwise, we run with the risk
            // of failing to garbage collect non-existing resources or devices.
            if !ok {
                klog.Errorf("unexpected: healthyDevices and endpoints are out of sync")
            }
            // Remove everything the device manager tracks for this resourceName.
            delete(m.endpoints, resourceName)
            delete(m.healthyDevices, resourceName)
            deletedResources.Insert(resourceName)
            needsUpdateCheckpoint = true
        } else {
            capacity[v1.ResourceName(resourceName)] = *resource.NewQuantity(int64(devices.Len()), resource.DecimalSI)
            allocatable[v1.ResourceName(resourceName)] = *resource.NewQuantity(int64(devices.Len()), resource.DecimalSI)
        }
    }
    for resourceName, devices := range m.unhealthyDevices {
        eI, ok := m.endpoints[resourceName]
        if (ok && eI.e.stopGracePeriodExpired()) || !ok {
            if !ok {
                klog.Errorf("unexpected: unhealthyDevices and endpoints are out of sync")
            }
            delete(m.endpoints, resourceName)
            delete(m.unhealthyDevices, resourceName)
            deletedResources.Insert(resourceName)
            needsUpdateCheckpoint = true
        } else {
            capacityCount := capacity[v1.ResourceName(resourceName)]
            unhealthyCount := *resource.NewQuantity(int64(devices.Len()), resource.DecimalSI)
            capacityCount.Add(unhealthyCount)
            capacity[v1.ResourceName(resourceName)] = capacityCount
        }
    }
    m.mutex.Unlock()
    if needsUpdateCheckpoint {
        // Some resourceName has no endpoint, or its endpoint's stop grace period
        // has expired, so the checkpoint is rewritten.
        m.writeCheckpoint()
    }
    return capacity, allocatable, deletedResources.UnsortedList()
}
This method is called by the kubelet; it is where the resource information shown by kubectl describe node <node> comes from. That also explains why, in [k8s source code analysis][kubelet] devicemanager: using a device plugin (simulated GPU), the node reports capacity and allocatable for the corresponding resource as soon as the device plugin registers.
capacity: for each resourceName, the sum of its unhealthyDevices and healthyDevices.
allocatable: for each resourceName, its healthyDevices only (see the sketch below).
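As a concrete illustration of that accounting, suppose one resource has 3 healthy and 1 unhealthy device: capacity should come out as 4 and allocatable as 3. A minimal standalone sketch (the counts and the nvidia.com/gpu name are made up for the example):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    healthy, unhealthy := 3, 1 // hypothetical device counts for one resource

    capacity := v1.ResourceList{}
    allocatable := v1.ResourceList{}

    // capacity = healthy + unhealthy, allocatable = healthy only.
    capacity["nvidia.com/gpu"] = *resource.NewQuantity(int64(healthy+unhealthy), resource.DecimalSI)
    allocatable["nvidia.com/gpu"] = *resource.NewQuantity(int64(healthy), resource.DecimalSI)

    capGPU := capacity["nvidia.com/gpu"]
    allocGPU := allocatable["nvidia.com/gpu"]
    fmt.Println("capacity:", capGPU.String(), "allocatable:", allocGPU.String()) // capacity: 4 allocatable: 3
}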
4. GetDeviceRunContainerOptions
func (m *ManagerImpl) GetDeviceRunContainerOptions(pod *v1.Pod, container *v1.Container) (*DeviceRunContainerOptions, error) {
    podUID := string(pod.UID)
    contName := container.Name
    needsReAllocate := false
    for k := range container.Resources.Limits {
        resource := string(k)
        if !m.isDevicePluginResource(resource) {
            continue
        }
        err := m.callPreStartContainerIfNeeded(podUID, contName, resource)
        if err != nil {
            return nil, err
        }
        if m.podDevices.containerDevices(podUID, contName, resource) == nil {
            needsReAllocate = true
        }
    }
    if needsReAllocate {
        klog.V(2).Infof("needs re-allocate device plugin resources for pod %s", podUID)
        m.allocatePodResources(pod)
    }
    m.mutex.Lock()
    defer m.mutex.Unlock()
    return m.podDevices.deviceRunContainerOptions(string(pod.UID), container.Name), nil
}
func (m *ManagerImpl) callPreStartContainerIfNeeded(podUID, contName, resource string) error {
    ...
    devices := m.podDevices.containerDevices(podUID, contName, resource)
    if devices == nil {
        m.mutex.Unlock()
        return fmt.Errorf("no devices found allocated in local cache for pod %s, container %s, resource %s", podUID, contName, resource)
    }
    ...
    devs := devices.UnsortedList()
    _, err := eI.e.preStartContainer(devs)
    ...
}
This method is also called by the kubelet; it returns the options needed to run the container. If necessary, it additionally issues a gRPC PreStartContainer call to the device plugin via eI.e.preStartContainer(devs). What it returns is sketched below.
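The return value is a DeviceRunContainerOptions, which bundles what the device plugin asked for in its AllocateResponse: environment variables, mounts, device nodes and annotations for the container. The sketch below only illustrates the shape; the field names follow my reading of the devicemanager package, and the element types here are simplified stand-ins rather than the real kubecontainer types.

// Simplified illustration of the shape of DeviceRunContainerOptions; the element
// types below are placeholders, not the real kubecontainer types.
package sketch

type envVar struct{ Name, Value string }

type mount struct {
    ContainerPath, HostPath string
    ReadOnly                bool
}

type deviceInfo struct{ PathOnHost, PathInContainer, Permissions string }

type annotation struct{ Name, Value string }

type deviceRunContainerOptions struct {
    Envs        []envVar     // environment variables to inject (e.g. NVIDIA_VISIBLE_DEVICES)
    Mounts      []mount      // host paths to bind-mount into the container
    Devices     []deviceInfo // host device nodes to expose (e.g. /dev/nvidia0)
    Annotations []annotation // extra runtime annotations
}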
5. Restarting the kubelet
5.1 Restarting the kubelet
When the kubelet restarts, readCheckpoint does not put anything into healthyDevices; as explained above, the manager waits for the device plugin to re-register. But how does the device plugin know that the kubelet has restarted and that it needs to register again?
// k8s-device-plugin/main.go
func main() {
    ...
    log.Println("Starting FS watcher.")
    watcher, err := newFSWatcher(pluginapi.DevicePluginPath)
    ...
    restart := true
    var devicePlugin *NvidiaDevicePlugin

L:
    for {
        if restart {
            if devicePlugin != nil {
                devicePlugin.Stop()
            }
            devicePlugin = NewNvidiaDevicePlugin()
            if err := devicePlugin.Serve(); err != nil {
                ...
            } else {
                restart = false
            }
        }

        select {
        case event := <-watcher.Events:
            if event.Name == pluginapi.KubeletSocket && event.Op&fsnotify.Create == fsnotify.Create {
                log.Printf("inotify: %s created, restarting.", pluginapi.KubeletSocket)
                restart = true
            }
        case err := <-watcher.Errors:
            log.Printf("inotify: %s", err)
        ...
        }
    }
}
As the code shows, after it starts, k8s-device-plugin keeps watching the pluginapi.DevicePluginPath = "/var/lib/kubelet/device-plugins/" directory. Whenever anything in that directory changes, the case event := <-watcher.Events: branch runs and checks whether the event is the creation of kubelet.sock. Since the kubelet recreates kubelet.sock when it restarts, a create event on that file means the kubelet has restarted, so the plugin sets the restart flag to true and restarts itself (a stripped-down version of this watcher loop follows). The subsequent re-registration then refreshes the device information for the corresponding resourceName in the device manager, i.e. healthyDevices and unhealthyDevices. For the registration flow see [k8s source code analysis][kubelet] devicemanager: device plugin registration with the kubelet.
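For reference, here is a minimal, self-contained version of the same detection loop, written directly against github.com/fsnotify/fsnotify (the newFSWatcher helper in the plugin is just a thin wrapper around this; the directory and socket name are the standard ones):

package main

import (
    "log"
    "path/filepath"

    "github.com/fsnotify/fsnotify"
)

func main() {
    const pluginDir = "/var/lib/kubelet/device-plugins"
    kubeletSocket := filepath.Join(pluginDir, "kubelet.sock")

    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer watcher.Close()
    if err := watcher.Add(pluginDir); err != nil {
        log.Fatal(err)
    }

    for {
        select {
        case event := <-watcher.Events:
            // kubelet.sock being (re)created means the kubelet (re)started:
            // this is the signal to re-register the device plugin.
            if event.Name == kubeletSocket && event.Op&fsnotify.Create == fsnotify.Create {
                log.Printf("%s created, kubelet restarted, re-register", kubeletSocket)
            }
        case err := <-watcher.Errors:
            log.Printf("watch error: %v", err)
        }
    }
}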
5.2 Restarting the device plugin
Now suppose the kubelet is running fine but the device plugin keeps restarting. Doesn't every restart create a new endpoint, so that more and more endpoints end up running? To answer that, look at runEndpoint:
func (m *ManagerImpl) runEndpoint(resourceName string, e endpoint) {
    e.run()
    e.stop()
    m.mutex.Lock()
    defer m.mutex.Unlock()
    if old, ok := m.endpoints[resourceName]; ok && old.e == e {
        m.markResourceUnhealthy(resourceName)
    }
    klog.V(2).Infof("Endpoint (%s, %v) became unhealthy", resourceName, e)
}
When the device plugin restarts, its old service (nvidia.sock) goes away, which breaks out of the corresponding endpoint's run method; since e.run() blocks, only after it returns does e.stop() execute and record the stop time. Moreover, as long as the restarted device plugin keeps the same resourceName, the entry in the device manager's endpoints map is simply overwritten by the newly created endpoint, so old endpoints do not accumulate (see the toy example below).
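That overwrite-plus-guard behaviour is easy to picture with a toy example. The types below are simplified placeholders, not the real endpoint types; the point is only that endpoints is a map keyed by resourceName, and that the old.e == e check in runEndpoint stops a stale goroutine from marking a freshly re-registered resource unhealthy.

package main

import "fmt"

// Toy stand-ins for the real endpoint/endpointInfo types.
type endpoint struct{ id int }
type endpointInfo struct{ e *endpoint }

func main() {
    endpoints := map[string]endpointInfo{}

    oldEp := &endpoint{id: 1}
    endpoints["nvidia.com/gpu"] = endpointInfo{e: oldEp}

    // The device plugin restarts and re-registers: same key, new endpoint, old entry replaced.
    newEp := &endpoint{id: 2}
    endpoints["nvidia.com/gpu"] = endpointInfo{e: newEp}

    // The goroutine still finishing up for oldEp checks whether it is still the
    // current endpoint (mirroring `old.e == e` in runEndpoint) before marking
    // the resource unhealthy.
    if cur, ok := endpoints["nvidia.com/gpu"]; ok && cur.e == oldEp {
        fmt.Println("would mark resource unhealthy")
    } else {
        fmt.Println("stale endpoint, do nothing") // this branch runs
    }
}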
6. Summary
[figure: conclusion.png]
1. Three methods in total (genericDeviceUpdateCallback, GetCapacity and allocateContainerResources) call writeCheckpoint, which persists the device manager's podDevices and registeredDevs to /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint; all three may change the persisted content.
1.1 genericDeviceUpdateCallback changes healthyDevices and therefore the registeredDevs content.
1.2 GetCapacity may find that some endpoint no longer exists and remove the related entries from healthyDevices.
1.3 allocateContainerResources is included because, when the kubelet calls Allocate, it first updates podDevices: some pods that held resources have finished running and their resources need to be reclaimed. See [k8s source code analysis][kubelet] devicemanager: kubelet requesting resources.
2. When the kubelet restarts, two things are worth noting:
2.1 The device manager loads the checkpoint contents into podDevices via readCheckpoint.
2.2 The file /var/lib/kubelet/device-plugins/kubelet.sock is recreated, which triggers the device plugin to restart. After it re-registers, the callback genericDeviceUpdateCallback fills healthyDevices and unhealthyDevices, and the ListAndWatch stream then stays connected, continuously reporting the latest healthyDevices and unhealthyDevices.
3. The contents of /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint lag behind the real state.
Note: the "lag" here means that some pods holding resources have already finished, but allocatedDevices in the device manager has not been updated yet. Of the three methods that may call writeCheckpoint, only genericDeviceUpdateCallback always calls it; the other two only do so when their conditions are met (GetCapacity, for example, only writes the checkpoint when some resource's endpoint is missing or its stop grace period has expired). Here is one scenario that makes the checkpoint content stale:
Suppose all the resources (say, GPU cards) have been allocated, and some time later a pod holding one of them finishes. Even if GetCapacity or genericDeviceUpdateCallback calls writeCheckpoint at that point, only the registeredDevs part is refreshed, not podDevices, because the resources of finished pods are only released when Allocate is called (i.e. a new resource request arrives) and updateAllocatedDevices runs (a rough sketch of this release step follows). That is why resource usage computed from kubelet_internal_checkpoint can differ from actual usage: some pods have already finished, yet the device manager has not updated its records, and the kubelet_internal_checkpoint file still contains those pods' resource usage.
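To make that release step concrete, here is a rough, self-contained sketch of the idea behind updateAllocatedDevices: compare the pods recorded in podDevices with the currently active pods, drop the finished ones, and recompute allocatedDevices from what remains. This is a simplification for illustration, not the real code; the maps and names are placeholders.

package main

import "fmt"

func main() {
    // Pods recorded in podDevices (simplified: pod UID -> device IDs it holds).
    podDevices := map[string][]string{
        "pod-a": {"gpu-0"},
        "pod-b": {"gpu-1"}, // pod-b has already finished running
    }
    activePods := map[string]bool{"pod-a": true}

    // Drop entries for pods that are no longer active, then recompute allocatedDevices.
    for uid := range podDevices {
        if !activePods[uid] {
            delete(podDevices, uid)
        }
    }
    allocated := []string{}
    for _, devs := range podDevices {
        allocated = append(allocated, devs...)
    }
    fmt.Println("allocatedDevices:", allocated) // only gpu-0 remains
}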
