Tag: proxmox

  • Proxmox VE + Hikvision R1 sharing one UPS

    I bought a cheap UPS and wanted to hang both the PVE host and the Hikvision R1 off it, so here are some quick notes.

    I was worried it would be a hassle, but it turns out Hikvision's UPS support is already built on NUT, so the crude approach works.

    On the R1:

    echo "LISTEN 0.0.0.0 3493" >> /etc/nut/upsd.conf
    killall upsd

    After a moment, upsd restarts on its own.

    Read the account and password with cat /etc/nut/upsd.users

    On the PVE host:

    apt install nut-client

    Run:

    root@pve6:~# upsc -l 192.168.11.111
    Init SSL without certificate database
    HikUPS
    root@pve6:~# upsc HikUPS@192.168.11.111
    Init SSL without certificate database
    battery.charge: 100
    battery.voltage: 13.50
    battery.voltage.high: 13.00
    battery.voltage.low: 10.40
    battery.voltage.nominal: 12.0
    device.type: ups
    driver.name: nutdrv_qx
    driver.parameter.bus: 001
    driver.parameter.pollfreq: 30
    driver.parameter.pollinterval: 2
    driver.parameter.port: auto
    driver.parameter.product: USB to Serial
    driver.parameter.productid: 5161
    driver.parameter.synchronous: auto
    driver.parameter.vendor: INNO TECH
    driver.parameter.vendorid: 0665
    driver.version: 2.8.0
    driver.version.data: Voltronic-QS 0.07
    driver.version.internal: 0.32
    driver.version.usb: libusb-1.0.26 (API: 0x1000109)
    input.voltage: 224.4
    input.voltage.fault: 224.4
    output.current.nominal: 2.0
    output.frequency: 50.1
    output.frequency.nominal: 50
    output.voltage: 224.4
    output.voltage.nominal: 220
    ups.beeper.status: disabled
    ups.delay.shutdown: 30
    ups.delay.start: 180
    ups.firmware.aux: PM-V
    ups.load: 0
    ups.productid: 5161
    ups.status: OL
    ups.type: offline / line interactive
    ups.vendorid: 0665
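    For scripting against this output, a quick status check can be built on top of upsc. A sketch — the dump is inlined here so it runs anywhere; on the PVE host it would come from upsc HikUPS@192.168.11.111 instead:

```shell
# Pull one variable out of an upsc dump.
# ups.status "OL" = on line power, "OB" = on battery.
cat > /tmp/ups-dump.txt <<'EOF'
battery.charge: 100
ups.status: OL
EOF

status=$(awk -F': ' '$1 == "ups.status" {print $2}' /tmp/ups-dump.txt)
case "$status" in
  OL*) echo "on line power" ;;
  OB*) echo "ON BATTERY" ;;
  *)   echo "unknown status: $status" ;;
esac
```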

    Edit /etc/nut/upsmon.conf:

    MONITOR HikUPS@192.168.11.111 1 hikups Aa123456 master

    The password is hardcoded.

    And in /etc/nut/nut.conf:

    MODE=netclient

    Restart:

    systemctl restart nut-client
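    The shutdown side lives in /etc/nut/upsmon.conf too. These are standard upsmon directives worth double-checking; the values below are illustrative, not what ships on the R1:

```
# /etc/nut/upsmon.conf (illustrative values)
MINSUPPLIES 1
SHUTDOWNCMD "/sbin/shutdown -h +0"
POLLFREQ 5
```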

  • Enabling shared storage on the Hikvision R1

    NFS is not supported: the kernel modules are incomplete, and installing it errors out with:

    mount: /run/rpc_pipefs: unknown filesystem type 'rpc_pipefs'.

    The workaround is Samba: add a share user with the root script, then add the directory and the config:

    # add the user
    /tmp/histor_low/root/sys_user_mng.sh ADD share the.p4sswd
    # create the share directory
    mkdir /drives/raid_a1/share
    chown share: /drives/raid_a1/share
    
    # add the share config
    cat << EOF > /tmp/smb.conf.share
    [BackupShare]
        path = /drives/raid_a1/share
        read only = no
        valid users = share
        browsable = yes
        writable = yes
        create mask = 0770
        directory mask = 0770
    EOF

    No config reload is needed.

    There are also smb.conf.extern, smb.conf.nas and smb.conf.static under /tmp, so this hook looks deliberate (I haven't read the code).

    I use this share from PVE; for easier management I bind-mount it under the user's home directory:

    
    mkdir -p /drives/raid_a1/Users/admin/pve-share/
    mount --bind /drives/raid_a1/share/ /drives/raid_a1/Users/admin/pve-share/
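    On the PVE side the share can then be mounted over CIFS. A hedged /etc/fstab sketch — the mount point and credentials file are my own choices, and I'm assuming the R1 answers on the same 192.168.11.111 address used above:

```
# /etc/fstab on the PVE host
//192.168.11.111/BackupShare  /mnt/r1-share  cifs  credentials=/root/.r1-smb-cred,_netdev  0  0
```

    The credentials file carries the username=share and password= lines for the user created above.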
    

  • Running Docker on the Hikvision R1

    Normally this would go through the app store, but I found the system adds no Docker-specific configuration at all; it just calls docker container create directly.

    Take Plex as an example. By default it creates two volumes with random names; I recreated them with proper names for tidiness.

    Someone once pointed out that command injection is possible here to sneak in extra parameters, but the vendor's client reports errors so poorly that it is nearly blind injection. Remember to close the quotes on both sides.

    Plex

    
    docker volume remove plex-config
    docker volume remove plex-transcode
    
    docker rm plex
    
    docker volume create plex-config
    docker volume create plex-transcode
    
    docker container create  --name='plex'  \
      --volume='plex-config:/config:rw' \
      --volume='/drives/raid_a1/Users/admin/video:/video:rw' \
      --volume='plex-transcode:/transcode:rw'  \
      --device=/dev/dri:/dev/dri \
      --publish='1900:1900/udp' \
      --publish='32400:32400/tcp' \
      --publish='32410:32410/udp' \
      --publish='32412:32412/udp' \
      --publish='32413:32413/udp' \
      --publish='32414:32414/udp' \
      --publish='32469:32469/tcp' \
      --publish='8324:8324/tcp'  \
      --restart=unless-stopped \
      --env='LANG=C.UTF-8' \
      --env='LC_ALL=C.UTF-8' \
      --env='PLEX_CLAIM=claim-xxxx'\
      --env='ADVERTISE_IP=http://xx.yy.zz.aa:32400' \
      --env='HOME=/config' \
      --env='PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' \
      --env='TERM=xterm' \
      --env='TZ=Asia/Shanghai' plexinc/pms-docker:latest
    
    
    docker start plex 
    
    

    Home Assistant

    docker volume remove ha-config
    
    docker rm home-assistant
    
    docker volume create ha-config
    
    docker container create  --name='home-assistant'  \
      --volume='ha-config:/config:rw' \
      --volume='/etc/localtime:/etc/localtime:ro' \
      --volume='/run/dbus:/run/dbus:ro' \
      --env='LANG=C.UTF-8' \
      --env='LC_ALL=C.UTF-8' \
       --env='HOME=/config' \
       --env='PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin' \
       --env='TERM=xterm' \
       --env='TZ=Asia/Shanghai' \
       --privileged \
       --publish='8123:8123/tcp' \
       --restart=unless-stopped \
       ghcr.io/home-assistant/home-assistant:stable 
    
    docker start home-assistant 
    
    # install HACS inside the container
    
    docker ps -a                   # find the container ID
    
    docker exec -it 86 /bin/bash   # 86 is the leading ID digits of my container
    
    
    wget -O - https://get.hacs.xyz | bash -
    
    
    docker restart home-assistant 
    

    Portainer

    docker volume create portainer_data
    
    docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:2.21.4
    
    
  • Creating a Proxmox VE cluster

    Building an empty cluster needs no notes. I have three machines at home, all running PVE, all with existing VMs; in that state, creating a cluster and joining all three nodes cannot be done from the web console. The command-line steps are:

    1. Change the hostnames

    Make sure every PVE host has a unique hostname.

    Suppose the old hostname is pve and the new one is pve1; I gave them all a local suffix.

    hostnamectl set-hostname pve1.home.local

    Then edit /etc/hosts and /etc/postfix/main.cf so both carry the new hostname.

    That alone is not enough; the node's config files must be moved by hand. (Open the web console and you will see two nodes, pve and pve1.)

    mv /etc/pve/nodes/pve/qemu-server/* /etc/pve/nodes/pve1/qemu-server/

    If you also run containers, move /etc/pve/nodes/pve/lxc/* the same way. While you are at it, update the system, then reboot.

    2. Remove existing cluster information (optional)

    If you have previously tried to build a cluster, delete it first, otherwise the cluster page will be inaccessible later on.

    Docs: https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node

    systemctl stop pve-cluster corosync
    pmxcfs -l
    rm /etc/corosync/*
    rm /etc/pve/corosync.conf
    killall pmxcfs
    systemctl start pve-cluster

    3. Change the VMIDs

    I'm not sure this is strictly required, but I changed them anyway.

    Script adapted from: https://forum.proxmox.com/threads/changing-vmid-of-a-vm.63161

    #!/usr/bin/env bash
    
    VM_TYPE=$1
    OLD_VMID=$2
    NEW_VMID=$3
    
    case "$VM_TYPE" in
    "lxc") VM_TYPE="lxc" ;;
    "qemu" | "qemu-server") VM_TYPE="qemu-server" ;;
    *)
      echo "bad input. Exiting"
      exit 1
      ;;
    esac
    
    case $OLD_VMID in
    '' | *[!0-9]*)
      echo "bad input. Exiting"
      exit 1
      ;;
    *) echo "Old VMID - $OLD_VMID" ;;
    esac
    
    case $NEW_VMID in
    '' | *[!0-9]*)
      echo "bad input. Exiting"
      exit 1
      ;;
    *) echo "New VMID - $NEW_VMID" ;;
    esac
    echo
    
    VG_NAME="$(lvs --noheadings -o lv_name,vg_name | grep "$OLD_VMID" | awk -F ' ' '{print $2}' | uniq)"
    
    case "$VG_NAME" in
    "")
      echo "Machine not in Volume Group. Exiting"
      exit 1
      ;;
    *) echo "Volume Group - $VG_NAME" ;;
    esac
    
    # rename the LVM volumes
    for volume in $(lvs -a | grep "$VG_NAME" | awk '{print $1}' | grep "$OLD_VMID"); do
      newVolume="${volume//"${OLD_VMID}"/"${NEW_VMID}"}"
      lvrename "$VG_NAME" "$volume" "$newVolume"
    done
    
    # rename the ZFS volumes
    for volume in $(zfs list -t all | awk '{print $1}' | grep "vm-${OLD_VMID}-disk"); do
      newVolume="${volume//"${OLD_VMID}"/"${NEW_VMID}"}"
      zfs rename "$volume" "$newVolume"
    done
    
    # rewrite and move the VM config file
    sed -i "s/$OLD_VMID/$NEW_VMID/g" /etc/pve/"$VM_TYPE"/"$OLD_VMID".conf
    mv /etc/pve/"$VM_TYPE"/"$OLD_VMID".conf /etc/pve/"$VM_TYPE"/"$NEW_VMID".conf
    

    The original script read its parameters interactively; I switched it to command-line arguments so it can be driven by a for loop to change IDs in bulk.
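    For the bulk case, I'd print the planned mapping before running anything. A sketch — the script name change-vmid.sh and the +1000 offset are made up:

```shell
# Dry-run the renumbering plan first, then uncomment the real call.
for id in 101 102 103; do
  echo "qemu-server $id -> $((id + 1000))"
  # ./change-vmid.sh qemu "$id" "$((id + 1000))"
done
```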

    All my PVE VMs are on LVM, so this works as-is. VMs backed by standalone disk files need the files renamed too, followed by qm rescan.

    4. Move host files onto a single node

    Say pve1 acts as the primary node (I don't know whether PVE has a notion of a primary; it's just the node I operate from). pve1 itself stays untouched; every other node scps its host files over to it.

    Assuming pve1's IP is 192.168.111.1, here is the procedure for pve2:

    scp -pr /etc/pve/nodes/pve2 root@192.168.111.1:/etc/pve/nodes/
    mv /etc/pve/nodes/pve2 /root/

    5. Create and join the cluster

    Create the cluster from pve1, then join it from each of the prepared nodes.
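    The commands themselves, for the record (the cluster name is arbitrary; pvecm add prompts for pve1's root password):

```shell
# on pve1: create the cluster
pvecm create home-cluster

# on each other node: join it
pvecm add 192.168.111.1

# check membership from any node
pvecm status
```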

    Done.

  • pve ubuntu cloud-init

    For an Ubuntu installed by hand, add cloud-init support when converting it into a template.

    As root:

    truncate -s0 /etc/hostname
    hostnamectl set-hostname localhost
    
    rm -f /etc/netplan/50-cloud-init.yaml
    
    
    truncate -s0 /etc/machine-id
    
    
    cloud-init clean
    
    
    truncate -s0 ~/.bash_history
    
    history -c
    
    shutdown -h now
    

    After shutdown, add a cloud-init drive under Hardware.

    Adjust the cloud-init settings.

    Verified on Ubuntu 24.04.
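    The same drive can be added from the CLI instead of the Hardware tab; a sketch assuming VMID 9000 and local-lvm storage:

```shell
qm set 9000 --ide2 local-lvm:cloudinit        # attach the cloud-init drive
qm set 9000 --serial0 socket --vga serial0    # optional: serial console
qm set 9000 --ipconfig0 ip=dhcp               # optional: default network config
qm template 9000                              # convert to template
```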

  • Recovering root SSH login on Proxmox VE

    All the Proxmox VE guides say to change the password from single-user mode. On several old versions I tried, a root password reset via live USB + chroot + passwd simply never took effect, and I don't know why.

    So here is a different approach: add an SSH authorized_keys entry.

    In PVE, authorized_keys is a symbolic link pointing at /etc/pve/priv/authorized_keys:

    # ls -al root/.ssh/
    total 24
    drwxr-xr-x 2 root root 4096 Mar 24  2021 .
    drwx------ 3 root root 4096 Dec 11  2020 ..
    lrwxrwxrwx 1 root root   29 Dec 11  2020 authorized_keys -> /etc/pve/priv/authorized_keys
    

    According to the PVE wiki (https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)#_recovery), /etc/pve is backed by their pmxcfs filesystem, and the data actually lives in /var/lib/pve-cluster/config.db.

    Run file on it and you can see it is sqlite3. Open it: there is a single table, tree, defined as:

    CREATE TABLE tree (
      inode INTEGER PRIMARY KEY NOT NULL,
      parent INTEGER NOT NULL CHECK(typeof(parent)=='integer'),
      version INTEGER NOT NULL CHECK(typeof(version)=='integer'),
      writer INTEGER NOT NULL CHECK(typeof(writer)=='integer'),
      mtime INTEGER NOT NULL CHECK(typeof(mtime)=='integer'),
      type INTEGER NOT NULL CHECK(typeof(type)=='integer'),
      name TEXT NOT NULL,
      data BLOB
    );
    

    Dead simple.

    To change the data, just UPDATE the table.
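    A rehearsal of that UPDATE against a scratch database with a cut-down tree table (the inode/type values are placeholders; on a real box, stop pve-cluster first and work on a backup of /var/lib/pve-cluster/config.db):

```shell
DB=/tmp/config-demo.db
rm -f "$DB"
sqlite3 "$DB" <<'EOF'
CREATE TABLE tree (inode INTEGER PRIMARY KEY NOT NULL, parent INTEGER NOT NULL,
  version INTEGER NOT NULL, writer INTEGER NOT NULL, mtime INTEGER NOT NULL,
  type INTEGER NOT NULL, name TEXT NOT NULL, data BLOB);
INSERT INTO tree VALUES (42, 0, 1, 0, 0, 8, 'authorized_keys', 'old key material');
-- swap in the rescue key and bump the version
UPDATE tree SET data = 'ssh-ed25519 AAAA... root@rescue', version = version + 1
  WHERE name = 'authorized_keys';
SELECT data FROM tree WHERE name = 'authorized_keys';
EOF
```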

  • Mounting Proxmox VE virtual machine disks


    QEMU QCOW Image (v3)

    # file vm-101-disk-0.qcow2 
    vm-101-disk-0.qcow2: QEMU QCOW Image (v3), 107374182400 bytes (v3), 107374182400 bytes

    apt install qemu-utils
    
    modprobe nbd max_part=8
    qemu-nbd --connect=/dev/nbd0 vm-1061-disk-0.qcow2
    
    # unload 
    qemu-nbd -d /dev/nbd0
    

    After this, fdisk -l /dev/nbd0 shows:

    
    Disk /dev/nbd0: 100 GiB, 107374182400 bytes, 209715200 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x28288fb3
    
    Device      Boot    Start       End   Sectors  Size Id Type
    /dev/nbd0p1 *        2048   3905535   3903488  1.9G 83 Linux
    /dev/nbd0p2       3905536  11718655   7813120  3.7G 82 Linux swap / Solaris
    /dev/nbd0p3      11718656 209715199 197996544 94.4G 83 Linux
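    With the partitions visible, mounting is routine; a sketch using the root partition from the table above (the mount point is arbitrary):

```shell
mkdir -p /mnt/vmdisk
mount /dev/nbd0p3 /mnt/vmdisk
# ... inspect or fix files ...
umount /mnt/vmdisk
qemu-nbd -d /dev/nbd0    # detach when done
```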

    RAW

    vm-1014-disk-1.raw: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,1), end-CHS (0x3ff,254,63), startsector 1, 209715199 sectors, extended partition table (last)
    

    losetup -f -P --show vm-1014-disk-1.raw
    # output like /dev/loop16
    
    # unload 
    losetup -d /dev/loop16

    After this, fdisk -l /dev/loop16 shows:

    Disk /dev/loop16: 100 GiB, 107374182400 bytes, 209715200 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 49400ED0-2144-4854-BA3D-64A88C3EC9C8
    
    Device        Start       End   Sectors  Size Type
    /dev/loop16p1  2048 209713151 209711104  100G Microsoft basic data
    GPT PMBR size mismatch (1952448511 != 1953525167) will be corrected by write.
    The backup GPT table is not on the end of the device.

    LVM

    From lsblk

    /dev/mapper/pve-vm--118--disk--0

    apt-get install kpartx
    kpartx -a /dev/mapper/pve-vm--118--disk--0
    

    After this,

    # ls /dev/mapper/pve-vm--118--disk--*
    /dev/mapper/pve-vm--118--disk--0  /dev/mapper/pve-vm--118--disk--0p1  /dev/mapper/pve-vm--118--disk--0p2  /dev/mapper/pve-vm--118--disk--0p3
    
    # fdisk -l 
    ...
    
    Disk /dev/mapper/pve-vm--118--disk--0: 50 GiB, 53687091200 bytes, 104857600 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 65536 bytes / 65536 bytes
    Disklabel type: dos
    Disk identifier: 0x28288fb3
    
    Device                             Boot    Start       End  Sectors  Size Id Type
    /dev/mapper/pve-vm--118--disk--0p1 *        2048   3905535  3903488  1.9G 83 Linux
    /dev/mapper/pve-vm--118--disk--0p2       3905536  11718655  7813120  3.7G 82 Linux swap / Solaris
    /dev/mapper/pve-vm--118--disk--0p3      11718656 104855551 93136896 44.4G 83 Linux
    
    # undo
    kpartx -d /dev/mapper/pve-vm--118--disk--0

  • Proxmox VE Import VMware VM (VMX)

    1. Download OVFTool on PVE (https://developer.broadcom.com/tools/open-virtualization-format-ovf-tool/latest) and extract it.

    2. Go to the VMX directory and run something like:

    /root/ovftool/ovftool myVM.vmx /root/myVM.ova

    3. Extract the OVA:

    cd /root
    mkdir extracted
    tar -C extracted -xvf myVM.ova

    4. Import the VM:

    cd extracted
    qm importovf 117 myVM.ovf  local-lvm

    117 is my new VMID

    5. Remove the OVA and OVF files:

    rm -fr /root/myVM.ova /root/extracted

    6. Start the VM and install the virtio drivers.

  • PVE: viewing how Ethernet devices map to PCI addresses

    root@pve:~# lshw -c network -businfo
    Bus info          Device          Class          Description
    ============================================================
    pci@0000:02:00.0  enp2s0          network        Ethernet Controller I226-V
    pci@0000:03:00.0  enp3s0          network        Ethernet Controller I226-V
    pci@0000:04:00.0                  network        82599ES 10-Gigabit SFI/SFP+ Network Connection
    pci@0000:04:00.1                  network        82599ES 10-Gigabit SFI/SFP+ Network Connection
    pci@0000:05:00.0  enp5s0          network        Ethernet Controller I226-V
    pci@0000:06:00.0  enp6s0          network        Ethernet Controller I226-V
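    The same mapping can be read straight from sysfs, without lshw. A small sketch; interfaces with no backing PCI device (lo, bridges, bonds) have no /device entry and are skipped:

```shell
for dev in /sys/class/net/*/device; do
  [ -e "$dev" ] || continue          # nothing matched the glob
  iface=${dev%/device}
  echo "${iface##*/} -> $(basename "$(readlink -f "$dev")")"
done
```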