ZED Camera Linux Kernel Compatibility (Which Linux Kernels Suit the ZED Camera)

With the rapid development of deep learning and computer vision, demand for 3D cameras keeps growing, and the ZED camera is one of the leaders in this space. The ZED captures images from its left and right lenses simultaneously and fuses them into a depth map, which makes it excellent at visual ranging and depth perception; it is widely used in robotics, virtual reality, autonomous driving, and security. In practice, however, many users run into kernel compatibility problems. This article analyzes those problems and offers solutions.

First, the ZED camera's reference platform is Linux, so the kernel version matters. The officially supported platforms are Ubuntu 16.04 and 18.04 (plus Windows 10, 64-bit). If you use another Linux distribution, such as CentOS or Debian, you must make sure the kernel version and any missing dependencies match what the ZED SDK expects; otherwise the camera may be incompatible or fail to be recognized.
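Before installing the SDK, it helps to confirm the kernel version and that the camera enumerates over USB. A minimal sketch (the 2b03 vendor ID is the one commonly listed for Stereolabs; verify it against your own lsusb output):

```bash
# Print the running kernel version
uname -r

# Look for the ZED on the USB bus (Stereolabs' vendor ID is commonly 2b03)
lsusb | grep -i 2b03

# List the video device nodes the kernel created
ls -l /dev/video*
```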

Second, the ZED camera requires CUDA (and, for some features, cuDNN) downloaded and installed from NVIDIA's official website. CUDA is NVIDIA's parallel computing platform and programming model; it accelerates computation on the GPU, including deep neural networks and signal and image processing. cuDNN is NVIDIA's deep neural network (DNN) library, used to accelerate the forward and backward passes of DNNs. If you have already installed and configured CUDA and cuDNN, installing the ZED SDK is relatively simple. If this is your first installation, follow the official CUDA and cuDNN documentation first to ensure compatibility with the ZED SDK.
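A quick way to verify the CUDA toolchain before installing the ZED SDK (the cuDNN header path varies by install method; /usr/local/cuda is the default toolkit location, so treat the path below as an assumption):

```bash
# CUDA compiler version
nvcc --version

# Driver status and the highest CUDA version it supports
nvidia-smi

# cuDNN version macros (cuDNN 8+; older releases keep them in cudnn.h)
grep -A2 "#define CUDNN_MAJOR" /usr/local/cuda/include/cudnn_version.h
```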

Third, building and installing the ZED software on Linux has some prerequisites. For example, dependencies such as Qt and SDL2 must be installed. Qt is a cross-platform application and UI framework used to build graphical applications. SDL2 is a library for building video games and other multimedia applications. Install these dependencies before building, so that the ZED tools work correctly.
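On Ubuntu, something like the following pulls in both; the package names are the usual Ubuntu ones, so verify them against your release:

```bash
# Qt 5 development files and the SDL2 development library
sudo apt-get update
sudo apt-get install qtbase5-dev libsdl2-dev
```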

If the ZED camera is still not recognized, you can try adjusting its device permissions. On Linux, device permissions are an important security measure that controls which programs may access which devices. If the ZED is held open by another program, or its permissions are set incorrectly, it may fail to be recognized. In that case, you can use the following command:

```bash
sudo chmod 666 /dev/video0
```

This makes the video device (for example, the ZED camera) readable and writable by all users, allowing other applications to access it. Note that the change does not survive a reboot.
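For a permanent fix, a udev rule is the more idiomatic approach. A minimal sketch, again assuming the Stereolabs vendor ID 2b03 (the rule file name is arbitrary, and the ZED SDK ships its own rules, so treat this as illustrative):

```bash
# Grant read/write access to the ZED's USB device for all users
echo 'SUBSYSTEM=="usb", ATTR{idVendor}=="2b03", MODE="0666"' | \
    sudo tee /etc/udev/rules.d/99-zed.rules

# Reload the rules and re-trigger device events so the change applies now
sudo udevadm control --reload-rules
sudo udevadm trigger
```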

In summary, the ZED is a powerful 3D camera with broad applications in robotics, virtual reality, autonomous driving, and security, but kernel compatibility problems can occur. To resolve them, make sure the kernel version and missing dependencies match the ZED SDK, install and configure CUDA and cuDNN, install dependencies such as Qt and SDL2, and adjust the device permissions. These are the key steps to keeping a ZED camera working on a Linux system.

Related reading:

Proxmox VE — ZFS on Linux

ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system was introduced as an optional file system and as an additional choice for the root file system. There is no need to compile ZFS modules manually; all packages are included.

By using ZFS you can get maximum enterprise features on a low hardware budget, and build high-performance systems by leveraging SSD caching or using SSDs exclusively. ZFS can replace costly hardware RAID cards with moderate CPU and memory load combined with simple administration.

General ZFS advantages

ZFS depends heavily on memory, so you need at least 8 GB to start. In practice, use as much as your hardware and budget allow. To prevent data corruption, we recommend high-quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an enterprise-class SSD (e.g. the Intel SSD DC S3700 series). This can significantly increase overall performance.

If you are experimenting with an installation of Proxmox VE inside a VM (Nested Virtualization), don’t use virtio for disks of that VM, since they are not supported by ZFS. Use IDE or SCSI instead (works also with virtio SCSI controller type).
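For example, when creating a nested test VM from the CLI, you could attach its disk through the virtio SCSI controller instead of virtio-blk (the VM ID 100, storage name local-lvm, and 32 GB size here are hypothetical):

```bash
# Use the virtio SCSI controller type and allocate a 32 GB disk as scsi0
qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:32
```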

When you install using the Proxmox VE installer, you can choose ZFS for the root file system. You need to select the RAID type at installation time:

| RAID level | Description |
| --- | --- |
| RAID0 | Also called "striping". The capacity of such a volume is the sum of the capacities of all disks. But RAID0 does not add any redundancy, so the failure of a single drive makes the volume unusable. |
| RAID1 | Also called "mirroring". Data is written identically to all disks. This mode requires at least 2 disks of the same size. The resulting capacity is that of a single disk. |
| RAID10 | A combination of RAID0 and RAID1. Requires at least 4 disks. |
| RAIDZ-1 | A variation on RAID-5, single parity. Requires at least 3 disks. |
| RAIDZ-2 | A variation on RAID-5, double parity. Requires at least 4 disks. |
| RAIDZ-3 | A variation on RAID-5, triple parity. Requires at least 5 disks. |
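As a quick worked example of the capacity trade-offs: with three 4 TB disks, RAIDZ-1 spends one disk's worth of space on parity, so usable capacity is roughly (3 - 1) × 4 TB = 8 TB; RAID0 over the same disks would give 12 TB with no redundancy, and a three-way RAID1 mirror only 4 TB.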

The installer automatically partitions the disks, creates a ZFS pool called rpool, and installs the root file system on the ZFS subvolume rpool/ROOT/pve-1.

Another subvolume called rpool/data is created to store VM images. In order to use that with the Proxmox VE tools, the installer creates the following configuration entry in /etc/pve/storage.cfg:

```
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
```

After installation, you can view your ZFS pool status using the zpool command:

```
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

errors: No known data errors
```

The zfs command is used to configure and manage your ZFS file systems. The following command lists all file systems after installation:

```
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
```

Depending on whether the system is booted in EFI or legacy BIOS mode, the Proxmox VE installer sets up either grub or systemd-boot as the main bootloader. See the chapter on Proxmox VE host bootloaders for details.

This section gives you some usage examples for common tasks. ZFS itself is really powerful and provides many options. The main commands to manage ZFS are zfs and zpool. Both commands come with great manual pages, which can be read with:

```
# man zpool
# man zfs
```

To create a new pool, at least one disk is needed. The ashift value should match the sector size of the underlying disk (the sector size is 2 to the power of ashift), or be larger; for example, ashift=12 corresponds to 4096-byte sectors, since 2^12 = 4096.

```
zpool create -f -o ashift=12 <pool> <device>
```

To activate compression:

```
zfs set compression=lz4 <pool>
```
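You can confirm the property and see the ratio actually achieved using standard zfs properties:

```bash
# Show the compression algorithm in use and the ratio achieved so far
zfs get compression,compressratio <pool>
```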

Create a new pool with RAID-0 (minimum 1 disk):

```
zpool create -f -o ashift=12 <pool> <device1> <device2>
```

Create a new pool with RAID-1 (minimum 2 disks):

```
zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
```

Create a new pool with RAID-10 (minimum 4 disks):

```
zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
```

Create a new pool with RAIDZ-1 (minimum 3 disks):

```
zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
```

Create a new pool with RAIDZ-2 (minimum 4 disks):

```
zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
```

It is possible to use a dedicated cache drive partition to increase performance (use an SSD). More than one cache device can be used, as shown in "Create a new pool with RAID*" above.

```
zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
```

Likewise, a dedicated log drive partition can increase performance (use an SSD), and more than one log device can be used, as shown above.

```
zpool create -f -o ashift=12 <pool> <device> log <log_device>
```

If you have a pool without cache and log, first partition the SSD into two partitions with parted or gdisk.

| Always use GPT partition tables. |

The maximum size of a log device should be about half the size of physical memory, so this is usually quite small. The rest of the SSD can be used as cache.

```
zpool add -f <pool> log <device-part1> cache <device-part2>
```
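For the partitioning step above, a sketch with sgdisk, assuming the SSD is /dev/sdX and the machine has 8 GB of RAM (hence a 4 GB log partition; adjust the device name and sizes to your hardware):

```bash
# Partition 1: 4 GB for the log (about half of physical memory)
sgdisk -n1:0:+4G -t1:BF01 /dev/sdX

# Partition 2: the rest of the SSD for the cache (BF01 is a type code commonly used for ZFS)
sgdisk -n2:0:0 -t2:BF01 /dev/sdX
```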

Changing a failed device:

```
zpool replace -f <pool> <old device> <new device>
```

Changing a failed bootable device when using systemd-boot:

```
sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f <pool> <old zfs partition> <new zfs partition>
pve-efiboot-tool format <new disk's ESP>
pve-efiboot-tool init <new disk's ESP>
```

| ESP stands for EFI System Partition, which is set up as partition #2 on bootable disks by the Proxmox VE installer since version 5.4. For details, see Setting up a new partition for use as synced ESP. |

ZFS comes with an event daemon, which monitors events generated by the ZFS kernel module. The daemon can also send emails on ZFS events like pool errors. Newer ZFS packages ship the daemon in a separate package, which you can install using apt-get:

```
# apt-get install zfs-zed
```

To activate the daemon it is necessary to edit /etc/zfs/zed.d/zed.rc with your favourite editor, and uncomment the ZED_EMAIL_ADDR setting:

```
ZED_EMAIL_ADDR="root"
```

Please note that Proxmox VE forwards mail addressed to root to the email address configured for the root user.

| The only setting that is required is ZED_EMAIL_ADDR. All other settings are optional. |

It is good to use at most 50 percent (which is the default) of the system memory for the ZFS ARC, to prevent performance degradation of the host. Use your preferred editor to change the configuration in /etc/modprobe.d/zfs.conf and insert:

```
options zfs zfs_arc_max=8589934592
```

This example setting limits the usage to 8 GB (8 × 1024³ = 8589934592 bytes).

| If your root file system is ZFS, you must update your initramfs every time this value changes: update-initramfs -u |
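After a reboot (or after reloading the zfs module) you can check which limit the module actually applied:

```bash
# Current ARC cap in bytes; 0 means the built-in default of half the RAM
cat /sys/module/zfs/parameters/zfs_arc_max
```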

Swap space created on a zvol may cause problems, such as blocking the server or generating high IO load, often seen when starting a backup to an external storage.

We strongly recommend using enough memory so that you do not normally run into low-memory situations. Should you need or want to add swap, it is preferred to create a partition on a physical disk and use it as a swap device. You can leave some space free for this purpose in the advanced options of the installer. Additionally, you can lower the "swappiness" value. A good value for servers is 10:

```
sysctl -w vm.swappiness=10
```

To make the swappiness persistent, open /etc/sysctl.conf with an editor of your choice and add the following line:

```
vm.swappiness = 10
```
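To confirm the running value after either change:

```bash
# Prints the current value, e.g. "vm.swappiness = 10"
sysctl vm.swappiness
```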

Table 1. Linux kernel swappiness parameter values

| Value | Strategy |
| --- | --- |
| vm.swappiness = 0 | The kernel will swap only to avoid an out-of-memory condition. |
| vm.swappiness = 1 | Minimum amount of swapping without disabling it entirely. |
| vm.swappiness = 10 | This value is sometimes recommended to improve performance when sufficient memory exists in a system. |
| vm.swappiness = 60 | The default value. |
| vm.swappiness = 100 | The kernel will swap aggressively. |

ZFS on Linux version 0.8.0 introduced support for native encryption of datasets. After an upgrade from previous ZFS on Linux versions, the encryption feature can be enabled per pool:

```
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local
```

| There is currently no support for booting from pools with encrypted datasets using Grub, and only limited support for automatically unlocking encrypted datasets on boot. Older versions of ZFS without encryption support will not be able to decrypt stored data. |

| It is recommended to either unlock storage datasets manually after booting, or to write a custom unit to pass the key material needed for unlocking on boot to zfs load-key. |

| Establish and test a backup procedure before enabling encryption of production data. If the associated key material/passphrase/keyfile has been lost, accessing the encrypted data is no longer possible. |
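As a sketch of the "custom unit" approach mentioned above, the following installs a minimal oneshot service that loads all keys with a file-based keylocation before datasets are mounted (the unit name and ordering here are assumptions; test carefully before relying on it):

```bash
# Write a minimal systemd unit (hypothetical name) that loads all ZFS keys at boot
cat <<'EOF' | sudo tee /etc/systemd/system/zfs-load-key.service
[Unit]
Description=Load ZFS encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
# file:// keylocations load non-interactively; passphrase datasets would block here
ExecStart=/sbin/zfs load-key -a

[Install]
WantedBy=zfs-mount.service
EOF

sudo systemctl enable zfs-load-key.service
```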

Encryption needs to be set up when creating datasets/zvols, and is inherited by default by child datasets. For example, to create an encrypted dataset tank/encrypted_data and configure it as storage in Proxmox VE, run the following commands:

```
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
```

All guest volumes/disks created on this storage will be encrypted with the shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded with zfs load-key:

```
# zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
```

It is also possible to use a (random) keyfile instead of prompting for a passphrase by setting the keylocation and keyformat properties, either at creation time or with zfs change-key on existing datasets:

```
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
```

| When using a keyfile, special care needs to be taken to secure the keyfile against unauthorized access or accidental loss. Without the keyfile, it is not possible to access the plaintext data! |

A guest volume created underneath an encrypted dataset will have its encryptionroot property set accordingly. The key material only needs to be loaded once per encryptionroot to be available to all encrypted datasets underneath it.

See the encryptionroot, encryption, keylocation, keyformat and keystatus properties, the zfs load-key, zfs unload-key and zfs change-key commands and the Encryption section from man zfs for more details and advanced usage.

Running an operating system on Zynq: compiling the Linux kernel

After the build completes, copy arch/arm/boot/uImage to the SD card.
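The build steps leading up to that uImage are not shown; a typical cross-compile flow for Zynq looks like the following (the toolchain prefix and defconfig name are the common Xilinx ones, and mkimage from u-boot-tools must be installed, so treat these as assumptions):

```bash
# Configure the kernel for Xilinx Zynq and cross-compile a U-Boot image
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- xilinx_zynq_defconfig
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- LOADADDR=0x8000 uImage

# The result lands in arch/arm/boot/uImage
```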

Linux with HDMI video output on the ZED, ZC702 and ZC706 boards

ADV7511 HDMI transmitter Linux driver

Building the Zynq Linux kernel and devicetrees from source

AXI IIC

This concludes our look at which Linux kernels suit the ZED camera. We hope you found the information you needed; if you want to learn more about this topic, bookmark and follow this site.

