
16 posts tagged with "iiot"


· 10 min read

This blog introduces how to enable Cilium Container Network Interface with KubeEdge.

Why Cilium for KubeEdge

Cilium is one of the most advanced and efficient Container Network Interface (CNI) plugins for Kubernetes, providing network connectivity and security for containerized applications in Kubernetes clusters. It leverages eBPF (extended Berkeley Packet Filter) technology to implement networking and security policies at the Linux kernel level, allowing for high-performance data plane operations and fine-grained security controls.

KubeEdge extends the cluster orchestration capability down to edge environments, providing unified cluster management along with sophisticated edge-specific features.

Enabling Cilium with KubeEdge lets us combine both sets of benefits in edge computing environments. We can deploy application containers on the nodes where EdgeCore is running and use Cilium to connect them with workloads in the cloud infrastructure, since Cilium can also enable a WireGuard VPN with transparent encryption of traffic between Cilium-managed endpoints.

Furthermore, we can rely on Cilium Tetragon Security Observability and Runtime Enforcement to contain security risks and vulnerabilities in edge environments.

· 3 min read

KubeEdge v1.17 is now available! This latest release introduces several new features and enhancements, including support for edge pods using InClusterConfig to access the Kubernetes API server, video streaming data reporting in Mappers, auto-restarting for EdgeCore modules, and more.

1.17 What's New

Release Highlights

Support Edge Pods Using InClusterConfig to Access Kube-APIServer

The InClusterConfig mechanism enables cloud pods to directly access the Kubernetes API server. With this release, KubeEdge now supports edge pods using the InClusterConfig mechanism to access the Kube-APIServer directly, even when the edge and cloud are in different network environments.

Refer to the link for more details. (#5524, #5541)
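For reference, the following is a minimal sketch of how an edge pod can consume this feature with the standard client-go helper. The code is generic client-go usage (rest.InClusterConfig), not a KubeEdge-specific API; it assumes the feature gates described in the upgrade notes below are enabled so that the request is served locally through MetaServer.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // InClusterConfig reads the service account token and API server address
    // injected into the pod; with this feature enabled, the call is expected to
    // be answered at the edge, so it keeps working when the node is offline.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // List pods in the default namespace as a simple smoke test.
    pods, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("pods visible in default namespace:", len(pods.Items))
}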

Mapper Supports Video Streaming Data Reporting

Previously, Mappers could only process structured device data. In v1.17, video stream data processing features have been added to the Mapper-Framework.

  • Edge Camera Device Management

    v1.17 provides a built-in Mapper based on the Onvif protocol, which can onboard Onvif network camera devices into the KubeEdge cluster and obtain the camera's authentication file and RTSP video stream.

  • Video Stream Data Processing

    Video stream data processing capabilities have been introduced to the Mapper-Framework data plane. The video stream reported by edge camera devices can be saved as frame files or video files.

Refer to the link for more details. (#5448, #5514, mappers-go/#127)

Support Auto-Restarting for Edge Modules

EdgeCore modules could previously fail to start because of recoverable, non-configurable issues such as process startup order. In v1.17, the BeeHive framework has been improved to support automatic module restarts. Users can now configure EdgeCore modules to restart automatically instead of restarting the entire component.

Refer to the link for more details. (#5509, #5513)

Introduce keadm ctl Command to Support Pods Query and Restart at Edge

The new keadm ctl command has been introduced in v1.17, allowing users to query and restart pods on edge nodes when they are offline:

  • Query: keadm ctl get pod [flags]
  • Restart: keadm ctl restart pod [flags]

Refer to the link for more details. (#5504)
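For example, run locally on the edge node (the namespace flag below follows the usual kubectl-style convention and is an assumption; check keadm ctl --help for the exact flags in your version):

keadm ctl get pod -n default
keadm ctl restart pod <pod-name> -n default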

Keadm Enhancements

Several enhancements were made to the keadm installation tool:

  • Refactored the keadm init command
  • Changed the command keadm generate to keadm manifest
  • Added image-repository flag to keadm join to support customization
  • Split the keadm reset command into keadm reset cloud and keadm reset edge

Refer to the link for more details. (#5317)

Add MySQL to Mapper Framework

The Mapper Framework data plane now includes MySQL database support in its pushMethod. When using MySQL, basic configuration parameters for the MySQL client need to be added in the DeviceInstance.

Refer to the link for more details. (#5376)
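As a rough illustration, a device property that pushes data to MySQL might be configured as follows. This is a hedged sketch: the nesting under pushMethod/dbMethod and the client field names are modeled on the existing DMI database integrations and may differ from the actual schema in your version.

apiVersion: devices.kubeedge.io/v1beta1
kind: Device
...
spec:
  properties:
    - name: temperature
      pushMethod:
        dbMethod:
          mysql:
            mysqlClientConfig:
              addr: 127.0.0.1:3306   # MySQL server address (assumed field name)
              database: kubeedge     # target database (assumed field name)
              userName: root         # MySQL user; the password is usually supplied separately (assumption)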

Upgrade Kubernetes Dependency to v1.28.6

The vendored Kubernetes version has been upgraded to v1.28.6, so users can now use the latest features on both the cloud and the edge side.

Refer to the link for more details. (#5412)

Important Steps before Upgrading

  • To use the InClusterConfig feature for edge pods, you need to enable the metaServer and dynamicController switches, and set featureGates.requireAuthorization=true in the CloudCore and EdgeCore configuration files.

  • To use the Auto-Restarting for Edge Modules feature, you must enable the moduleRestart feature gate in EdgeCore.
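For reference, a minimal sketch of the relevant configuration snippets is shown below. The exact location of the featureGates section and the metaServer switch may differ between versions; check the generated cloudcore.yaml / edgecore.yaml in your environment.

# CloudCore (cloudcore.yaml) - assumed layout
modules:
  dynamicController:
    enable: true
featureGates:
  requireAuthorization: true

# EdgeCore (edgecore.yaml) - assumed layout
modules:
  metaManager:
    metaServer:
      enable: true
featureGates:
  requireAuthorization: true
  moduleRestart: true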

Download the v1.17.0 release from the release page and upgrade today to take advantage of these new capabilities!

· 15 min read

On January 23, 2024 (Beijing time), KubeEdge released v1.16. The new version introduces several enhancements, with significant improvements in cluster upgrades, cluster usability, and edge device management.

New features in KubeEdge v1.16:

New Feature Overview

Cluster Upgrade: Support for Automated Upgrades of Cloud and Edge Components

As the KubeEdge community keeps evolving and releasing new versions, upgrading the KubeEdge version in user environments has become a pressing need. To address the complexity of the upgrade procedure and the repetitive work on edge nodes, KubeEdge v1.16.0 supports automated upgrades of the cloud and edge components. Users can upgrade the cloud side with a single Keadm command, and batch-upgrade edge nodes by creating the corresponding Kubernetes API objects.

  • Cloud Upgrade

    The cloud upgrade uses a three-level subcommand to distinguish it from the edge upgrade and provides a more convenient way to upgrade the KubeEdge components on the cloud side. After the upgrade completes, the historical ConfigMap configuration is printed; if you have modified the ConfigMap manually, you can restore the configuration file from this history. The usage information can be viewed with the help flag:

    keadm upgrade cloud --help
    Upgrade the cloud components to the desired version, it uses helm to upgrade the installed release of cloudcore chart, which includes all the cloud components

    Usage:
    keadm upgrade cloud [flags]

    Flags:
    --advertise-address string Please set the same value as when you installed it, this value is only used to generate the configuration and does not regenerate the certificate. eg: 10.10.102.78,10.10.102.79
    -d, --dry-run Print the generated k8s resources on the stdout, not actual execute. Always use in debug mode
    --external-helm-root string Add external helm root path to keadm
    --force Forced upgrading the cloud components without waiting
    -h, --help help for cloud
    --kube-config string Use this key to update kube-config path, eg: $HOME/.kube/config (default "/root/.kube/config")
    --kubeedge-version string Use this key to set the upgrade image tag
    --print-final-values Print the final values configuration for debuging
    --profile string Sets profile on the command line. If '--values' is specified, this is ignored
    --reuse-values reuse the last release's values and merge in any overrides from the command line via --set and -f.
    --set stringArray Sets values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
    --values stringArray specify values in a YAML file (can specify multiple)

    Example upgrade command:

    keadm upgrade cloud --advertise-address=<value set during keadm init> --kubeedge-version=v1.16.0
  • Edge Upgrade

    KubeEdge v1.16.0 supports one-click, batch upgrades of edge nodes through the NodeUpgradeJob Kubernetes API. The API supports pre-upgrade checks, concurrent upgrades, failure-rate thresholds, and timeout handling. To support this, KubeEdge introduces a cloud-edge task framework: community developers no longer need to implement task control and status reporting logic themselves and can focus on the cloud-edge task itself.

    Example upgrade API:

    apiVersion: operations.kubeedge.io/v1alpha1
    kind: NodeUpgradeJob
    metadata:
      name: upgrade-example
      labels:
        description: upgrade-label
    spec:
      version: "v1.16.0"
      checkItems:
        - "cpu"
        - "mem"
        - "disk"
      failureTolerate: "0.3"
      concurrency: 2
      timeoutSeconds: 180
      labelSelector:
        matchLabels:
          "node-role.kubernetes.io/edge": ""
          node-role.kubernetes.io/agent: ""
  • Compatibility Tests

    The KubeEdge community provides thorough version compatibility tests. When upgrading, users only need to keep the version difference between cloud and edge within two versions to avoid problems caused by cloud-edge version mismatch during the upgrade.

For more information, see:

https://github.com/kubeedge/kubeedge/pull/5330 https://github.com/kubeedge/kubeedge/pull/5229 https://github.com/kubeedge/kubeedge/pull/5289

Support Image Pre-Pull on Edge Nodes

The new version introduces an image pre-pull feature. Users can load images onto edge nodes in advance through the ImagePrePullJob Kubernetes API. The feature supports pre-pulling multiple images on batches of edge nodes or node groups, helping to reduce the high failure rate and low efficiency of pulling images during application deployment or updates, especially in large-scale scenarios.

Example image pre-pull API:

apiVersion: operations.kubeedge.io/v1alpha1
kind: ImagePrePullJob
metadata:
  name: imageprepull-example
  labels:
    description: ImagePrePullLabel
spec:
  imagePrePullTemplate:
    images:
      - image1
      - image2
    nodes:
      - edgenode1
      - edgenode2
    checkItems:
      - "disk"
    failureTolerate: "0.3"
    concurrency: 2
    timeoutSeconds: 180
    retryTimes: 1

For more information, see:

https://github.com/kubeedge/kubeedge/pull/5310 https://github.com/kubeedge/kubeedge/pull/5331

Support Installing Windows Edge Nodes with Keadm

KubeEdge v1.15 made it possible to run edge nodes on Windows. In the new version, the Keadm installation tool can be used to install Windows edge nodes directly, with the same commands as for Linux edge nodes, which simplifies edge node installation.
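For example, on the Windows node (run in PowerShell as Administrator; the binary name keadm.exe and the placeholder values here are illustrative):

.\keadm.exe join --cloudcore-ipport=<cloud node IP>:10000 --kubeedge-version=v1.16.0 --token=<token>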

For more information, see:

https://github.com/kubeedge/kubeedge/pull/4968

Add Compatibility Tests for Multiple Container Runtimes

The new version adds compatibility tests for multiple container runtimes. Four mainstream container runtimes are now covered: containerd, docker, isulad, and cri-o, which helps ensure the quality of KubeEdge releases. Users can also refer to the adaptation and installation scripts in this PR when installing a container runtime.

For more information, see:

https://github.com/kubeedge/kubeedge/pull/5321

Support Overriding More Deployment Fields in EdgeApplication

In the new version, the differentiated configuration items (overriders) in EdgeApplication have been extended, mainly with environment variables, command arguments, and resources. When node groups in different regions need to connect to different middleware, environment variables (env) or command arguments (command, args) can be used to override the middleware connection information. When node resources differ between regions, the resource configuration (resources) can be used to override the CPU and memory settings.
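A rough sketch of such an override is shown below. Note that the overrider field names (envOverriders, resourcesOverriders) are assumptions based on the existing overrider naming convention and may not match the actual API exactly.

apiVersion: apps.kubeedge.io/v1alpha1
kind: EdgeApplication
metadata:
  name: edge-app-example
spec:
  workloadScope:
    targetNodeGroups:
      - name: beijing
        overriders:
          envOverriders:             # assumed field name
            - containerName: app
              operator: add
              value:
                - name: MIDDLEWARE_ADDR
                  value: "redis-beijing:6379"
          resourcesOverriders:       # assumed field name
            - containerName: app
              value:
                requests:
                  cpu: "250m"
                  memory: "256Mi"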

For more information, see:

https://github.com/kubeedge/kubeedge/pull/5262 https://github.com/kubeedge/kubeedge/pull/5370

Support Mapper Upgrades Based on Mapper-Framework

In v1.16, Mapper upgrade capability has been built on top of the Mapper development framework Mapper-Framework. Mapper projects generated by the new framework import part of the original Mapper-Framework functionality as a dependency, so an upgrade only requires bumping the dependency version, which simplifies the Mapper upgrade process.

  • Mapper-Framework code decoupling:

    In v1.16, the Mapper-Framework code is decoupled into a user layer and a business layer. The user layer includes the device drivers and the closely related parts of the management plane and data plane; it is still generated into the user's Mapper project by Mapper-Framework and can be modified as needed. The business layer includes capabilities such as Mapper registration with the cloud and delivery of the device list from the cloud, and is kept in the kubeedge/mapper-framework sub-repository.

  • Mapper upgrade framework:

    Mapper projects generated by Mapper-Framework in v1.16 use the business-layer functionality in the kubeedge/mapper-framework sub-repository through dependency references to provide complete device management. Users can later upgrade their Mappers simply by upgrading the dependency version, without large-scale manual code changes.
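Conceptually, upgrading a generated Mapper then reduces to bumping the framework dependency in the Mapper project, for example (hedged: this assumes the sub-repository is published as the Go module github.com/kubeedge/mapper-framework; check the go.mod generated for your Mapper for the actual module path and version):

go get github.com/kubeedge/mapper-framework@<new-version>
go mod tidy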

For more information, see:

https://github.com/kubeedge/kubeedge/pull/5308 https://github.com/kubeedge/kubeedge/pull/5326

Built-in Redis and TDengine Integration in the DMI Data Plane

v1.16 further enhances the DMI data plane's ability to push data to user databases by adding Redis and TDengine as built-in databases. Users can define the relevant fields directly in the device-instance configuration file so that the Mapper automatically pushes device data to Redis and TDengine. The database fields are defined as follows:

type DBMethodRedis struct {
    // RedisClientConfig of redis database
    // +optional
    RedisClientConfig *RedisClientConfig `json:"redisClientConfig,omitempty"`
}

type RedisClientConfig struct {
    // Addr of Redis database
    // +optional
    Addr string `json:"addr,omitempty"`
    // Db of Redis database
    // +optional
    DB int `json:"db,omitempty"`
    // Poolsize of Redis database
    // +optional
    Poolsize int `json:"poolsize,omitempty"`
    // MinIdleConns of Redis database
    // +optional
    MinIdleConns int `json:"minIdleConns,omitempty"`
}

type DBMethodTDEngine struct {
    // tdengineClientConfig of tdengine database
    // +optional
    TDEngineClientConfig *TDEngineClientConfig `json:"TDEngineClientConfig,omitempty"`
}

type TDEngineClientConfig struct {
    // addr of tdEngine database
    // +optional
    Addr string `json:"addr,omitempty"`
    // dbname of tdEngine database
    // +optional
    DBName string `json:"dbName,omitempty"`
}
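A hedged device-instance sketch using these fields might look like the following. The redisClientConfig field names come from the structs above, while the surrounding pushMethod/dbMethod nesting is an assumption and may differ slightly from the actual schema.

apiVersion: devices.kubeedge.io/v1beta1
kind: Device
...
spec:
  properties:
    - name: temperature
      pushMethod:
        dbMethod:
          redis:
            redisClientConfig:
              addr: 127.0.0.1:6379
              db: 0
              poolsize: 5
              minIdleConns: 1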

For more information, see:

https://github.com/kubeedge/kubeedge/pull/5064

USB-Camera Mapper Implementation Based on Mapper-Framework

Based on KubeEdge's Mapper-Framework, the new version provides a USB-Camera Mapper sample. This Mapper is developed for cameras that use the USB protocol, and users can develop their own business-specific Mappers more easily based on this sample and Mapper-Framework.

The sample includes a Helm chart. Users can deploy the USB-Camera Mapper by modifying usbmapper-chart/values.yaml, mainly setting the USB-Camera device file, the nodeName, and the number of USB-Camera replicas; other configuration can be adjusted as needed. The Mapper image can be built with the Dockerfile in the sample directory.

global:
  replicaCounts:
    ......
    cameraUsbMapper:
      replicaCount: 2 # number of USB-Camera replicas
      namespace: default
    ......
  nodeSelectorAndDevPath:
    mapper:
      - edgeNode: "edgenode02" # nodeName of the edge node the USB-Camera is connected to
        devPath: "/dev/video0" # device file of the USB-Camera
      - edgeNode: "edgenode1"
        devPath: "/dev/video17"
    ......

The deployment command for the USB-Camera Mapper is as follows:

helm install usbmapper-chart ./usbmapper-chart

For more information, see:

https://github.com/kubeedge/mappers-go/pull/122

Usability Improvement: Enhanced Deployment Capabilities Based on Keadm

  • Add a configuration parameter for the cloud-edge communication protocol

    In KubeEdge v1.16.0, when joining an edge node with keadm join, the cloud-edge communication protocol can be configured with --hub-protocol. KubeEdge currently supports two communication protocols, websocket and quic, with websocket as the default.

    Example command:

    keadm join --cloudcore-ipport <cloud node IP>:10001 --hub-protocol=quic --kubeedge-version=v1.16.0 --token=xxxxxxxx

    Note: when --hub-protocol is set to quic, the port in --cloudcore-ipport must be set to 10001, and the quic switch must be turned on in the CloudCore ConfigMap, i.e. modules.quic.enable must be set to true.

    Example: run kubectl edit cm -n kubeedge cloudcore, set the enable attribute of quic to true, save the change, and then restart the CloudCore pod.

    modules:
      ......
      quic:
        address: 0.0.0.0
        enable: true # quic protocol switch
        maxIncomingStreams: 10000
        port: 10001

    For more information, see:

    https://github.com/kubeedge/kubeedge/pull/5156

  • Decouple keadm join from CNI plugins

    In the new version, a CNI plugin no longer needs to be installed before running keadm join on an edge node; edge node deployment has been decoupled from CNI plugins. This change has also been backported to v1.12 and later versions, so you are welcome to use the new version or upgrade an older one.

    Note: if applications deployed on the edge node need the container network, a CNI plugin still has to be installed after edgecore is deployed.

    For more information, see:

    https://github.com/kubeedge/kubeedge/pull/5196

Upgrade Kubernetes Dependency to v1.27

The new version upgrades the dependent Kubernetes version to v1.27.7, so you can use the features of the new version on both the cloud and the edge.

For more information, see:

https://github.com/kubeedge/kubeedge/pull/5121

Important Notes Before Upgrading

In the new version, the edge MQTT broker Eclipse Mosquitto is managed by a DaemonSet, and whether to enable the MQTT service can be set through the cloud-side Helm values. Managing MQTT with a DaemonSet makes unified management of the edge MQTT brokers easy; for example, the edge MQTT can be replaced with EMQX by modifying the DaemonSet configuration.

However, if you upgrade from an older version to the latest one, you need to consider compatibility: running the MQTT originally managed by a static Pod together with the new DaemonSet-managed MQTT causes a port conflict. The compatibility steps are as follows:

  1. On the cloud side, run the following command to add a custom label to all existing edge nodes:
kubectl label nodes --selector=node-role.kubernetes.io/edge without-mqtt-daemonset=""
  2. Modify the node affinity of the MQTT DaemonSet:
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions:
          - ...
          - key: without-mqtt-daemonset
            operator: DoesNotExist
  3. Switch the node's MQTT to DaemonSet management:
# ------ Edge side ------
# Edit /lib/systemd/system/edgecore.service and set the environment variable DEPLOY_MQTT_CONTAINER to false
# This step can be done before updating EdgeCore, so that EdgeCore does not need to be restarted
sed -i '/DEPLOY_MQTT_CONTAINER=/s/true/false/' /etc/systemd/system/edgecore.service

# Stop EdgeCore
systemctl daemon-reload && systemctl stop edgecore

# Remove the MQTT container; with containerd, replace docker with nerdctl
docker ps -a | grep mqtt-kubeedge | awk '{print $1}' | xargs docker rm -f

# Start EdgeCore
systemctl start edgecore

# ------ Cloud side ------
# Remove the node label
kubectl label nodes <NODE_NAME> without-mqtt-daemonset-

The keadm join command in the new version hides the with-mqtt parameter and sets its default value to false. If you still want to manage MQTT with a static Pod, you can set --with-mqtt explicitly to enable it. The with-mqtt parameter will be removed in v1.18.

· 4 min read

On Oct 13, 2023, KubeEdge released v1.15. The new version introduces several enhanced features, significantly improving support for Windows-based edge nodes, device management, and data plane capabilities.

v1.15 What's New

Release Highlights

Support Windows-based Edge Nodes

Edge computing involves various types of devices, including sensors, cameras, and industrial control devices, some of which may run on the Windows OS. In order to support these devices and use cases, supporting Windows Server nodes is necessary for KubeEdge.

In this release, KubeEdge supports the edge node running on Windows Server 2019, and supports Windows container running on edge node, thereby extending KubeEdge to the Windows ecosystem and expanding its use cases and ecosystem.

Refer to the link for more details. (#4914, #4967)

New v1beta1 version of Device API

The device API has been updated from v1alpha2 to v1beta1. The v1beta1 API updates include:

  • The built-in protocols, including Modbus, OPC UA, and Bluetooth, are removed from the device instance; the built-in mappers for these protocols still exist and will be maintained and updated to the latest version.

  • Users must define the protocol config through CustomizedValue in ProtocolConfig.

  • DMI data plane related fields are added; users can configure the collection and reporting frequency of device data, as well as the destination (such as a database or an HTTP server) to which the data is pushed.

  • A field is added to control whether device data is reported to the cloud.

Refer to the link for more details. (#4983)
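A hedged sketch of what a v1beta1 device instance with a customized protocol config could look like is shown below; the field layout is an approximation for illustration, not the authoritative schema.

apiVersion: devices.kubeedge.io/v1beta1
kind: Device
metadata:
  name: my-sensor
spec:
  deviceModelRef:
    name: my-sensor-model
  nodeName: edge-node-1
  protocol:
    protocolName: my-custom-protocol   # built-in protocol types were removed; any name can be used
    configData:                        # user-defined protocol settings (CustomizedValue)
      serialPort: /dev/ttyS0
      baudRate: 9600
  properties:
    - name: temperature
      reportToCloud: true              # controls whether the data is reported to the cloud
      reportCycle: 10000000000         # reporting frequency (illustrative value)
      collectCycle: 10000000000        # collection frequency (illustrative value)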

Support Alpha Version of DMI Data Plane and Mapper-Framework

The alpha version of the DMI data plane is supported. The DMI data plane is mainly implemented in the mapper, providing interfaces for pushing data, pulling data, and storing data in databases.

To make writing a mapper easier, a mapper development framework subproject, Mapper-Framework, is provided in this release. Mapper-Framework provides mapper runtime libs and tools for scaffolding and code generation to bootstrap a new mapper project. Users only need to run the command make generate to generate a mapper project and then add the protocol-related code to the mapper.

Refer to the link for more details. (#5023)

Support Kubernetes native Static Pod on Edge Nodes

Kubernetes-native static pods are supported on edge nodes in this release. Users can create pods on edge nodes by placing pod manifests in /etc/kubeedge/manifests, just as on a standard Kubernetes node.

Refer to the link for more details. (#4825)
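For example, a plain pod manifest placed in that directory, e.g. /etc/kubeedge/manifests/static-nginx.yaml, will be started by the edge node:

apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
  labels:
    app: static-nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80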

Support More Kubernetes Native Plugins Running on Edge Nodes

The Kubernetes non-resource request /version is now supported from the edge node: users can issue /version requests on the edge node through MetaServer. In addition, the current framework makes it easy to support other non-resource requests such as /healthz on edge nodes. Many Kubernetes plugins, such as Cilium and Calico, which depend on these non-resource requests, can now run on edge nodes.

Refer to the link for more details. (#4904)
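For example, from the edge node itself (assuming MetaServer listens on its default local address 127.0.0.1:10550; adjust the address if you changed the MetaServer configuration):

curl http://127.0.0.1:10550/version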

Upgrade Kubernetes Dependency to v1.26.7

The vendored Kubernetes version has been upgraded to v1.26.7, so users can now use the features of the new version on both the cloud and the edge side.

Refer to the link for more details. (#4929)

Important Steps before Upgrading

  • In KubeEdge v1.15, the new v1beta1 version of the device API is incompatible with the earlier v1alpha2 version; you need to update your device API YAMLs to v1beta1 if you want to use v1.15.

  • In KubeEdge v1.15, users need to upgrade containerd to v1.6.0 or later. Containerd minor version 1.5 and older is not supported in KubeEdge v1.15.
    Ref: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#cri-api-removal

  • In KubeEdge v1.14, EdgeCore removed dockershim support, so users can only use the remote runtime type, with containerd as the default runtime. If you want to use the docker runtime in v1.15, you need to first set edged.containerRuntime=remote and the corresponding docker configuration, such as RemoteRuntimeEndpoint and RemoteImageEndpoint, in EdgeCore, and then install the cri-dockerd tools as described in the docs below: https://github.com/kubeedge/kubeedge/issues/4843
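A hedged sketch of the corresponding edged section in edgecore.yaml when switching to docker via cri-dockerd is shown below; the socket path depends on how cri-dockerd is installed on your node.

modules:
  edged:
    containerRuntime: remote
    remoteRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
    remoteImageEndpoint: unix:///var/run/cri-dockerd.sock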

· 3 min read

On July 1, 2023, KubeEdge released v1.14. The new version introduces several enhanced features, significantly improving security, reliability, and user experience.

v1.14 What's New

Release Highlights

Support Authentication and Authorization for Kube-API Endpoint for Applications On Edge Nodes

The Kube-API endpoint for edge applications is implemented through the MetaServer in edgecore. However, in previous versions, the authentication and authorization for the Kube-API endpoint were performed in the cloud, which prevented authentication and authorization on the edge node, especially in offline scenarios.

In this release, the authentication and authorization functionality is implemented within the MetaServer at the edge, which makes it possible to limit the access permissions of edge applications when they access the Kube-API endpoint at the edge.

Refer to the link for more details. (#4802)

Support Cluster Scope Resource Reliable Delivery to Edge Node

Since this release, cluster-scope resources are delivered to the edge side reliably, in particular when global resources are consumed through list-watch, so edge applications that depend on them can work normally.

Refer to the link for more details. (#4758)

Upgrade Kubernetes Dependency to v1.24.14

The vendored Kubernetes version has been upgraded to v1.24.14, so users can now use the features of the new version on both the cloud and the edge side.

Note

The dockershim has been removed, which means users can't use docker runtime directly in this release.

Refer to the link for more details. (#4789)

Support Kubectl Attach to Container Running on Edge Node

KubeEdge already supports the kubectl logs/exec commands, and kubectl attach is supported in this release. The kubectl attach command attaches to a running container on an edge node. Users can execute these commands from the cloud without operating on the edge nodes directly.

Refer to the link for more details. (#4734)
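Usage is the same as on a standard Kubernetes cluster and is run from the cloud side, for example:

kubectl attach -it <pod-name> -c <container-name> -n <namespace>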

Alpha version of KubeEdge Dashboard

KubeEdge dashboard provides a graphical user interface (GUI) for managing and monitoring your KubeEdge clusters. It allows users to manage edge applications running in the cluster and troubleshoot them.

Refer to the link for more details. (https://github.com/kubeedge/dashboard)

Important Steps before Upgrading

  • On KubeEdge v1.14, EdgeCore removed dockershim support, so users can only use the remote runtime type, with containerd as the default runtime. If you want to use the docker runtime, you must first set edged.containerRuntime=remote and the corresponding docker configuration, such as RemoteRuntimeEndpoint and RemoteImageEndpoint, in EdgeCore, and then install the cri-dockerd tools as described in the docs below: https://github.com/kubeedge/kubeedge/issues/4843