<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://abbbi.github.io//feed.xml" rel="self" type="application/atom+xml" /><link href="https://abbbi.github.io//" rel="alternate" type="text/html" /><updated>2026-03-04T12:23:13+00:00</updated><id>https://abbbi.github.io//feed.xml</id><title type="html">Michael Ablassmeier</title><subtitle>..</subtitle><entry><title type="html">pbsindex - file backup index</title><link href="https://abbbi.github.io//pbsindex/" rel="alternate" type="text/html" title="pbsindex - file backup index" /><published>2026-03-03T00:00:00+00:00</published><updated>2026-03-03T00:00:00+00:00</updated><id>https://abbbi.github.io//pbsindex</id><content type="html" xml:base="https://abbbi.github.io//pbsindex/"><![CDATA[<p>If you take backups using proxmox-backup-client and wonder which backup
includes a specific file, the only way to find out is to mount the backup and
search for it.</p>

<p>For regular file backups, the Proxmox Backup Server frontend provides a pcat1
file for download, whose binary format is somewhat
<a href="https://bugzilla.proxmox.com/show_bug.cgi?id=5748">undocumented</a> but actually
includes a listing of the files backed up.</p>

<p>A Proxmox Backup Server datastore stores the same pcat1 file as a blob index
(.pcat1.didx). So to be able to tell which backup contains which files, one
needs to:</p>

<p>1) Open the .pcat1.didx file and find out the required blobs; see the <a href="https://pbs.proxmox.com/docs/file-formats.html#dynamic-index-format-didx">format documentation</a></p>

<p>2) Reconstruct the .pcat1 file from the blobs</p>

<p>3) Parse the pcat1 file and output the directory listing.</p>

<p>I’ve implemented this in <a href="https://github.com/abbbi/pbsindex">pbsindex</a>
which lets you create a central file index for your backups by scanning a
complete PBS datastore.</p>

<p>Let’s say you want a file listing for a specific backup,
use:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> pbsindex <span class="nt">--chunk-dir</span> /backup/.chunks/ /backup/host/vm178/2026-03-02T10:47:57Z/catalog.pcat1.didx
 didx <span class="nv">uuid</span><span class="o">=</span>7e4086a9-4432-4184-a21f-0aeec2b2de93 <span class="nv">ctime</span><span class="o">=</span>2026-03-02T10:47:57Z <span class="nv">chunks</span><span class="o">=</span>2 <span class="nv">total_size</span><span class="o">=</span>1037386
 chunk[0] <span class="nv">start</span><span class="o">=</span>0 <span class="nv">end</span><span class="o">=</span>344652 <span class="nv">size</span><span class="o">=</span>344652 <span class="nv">digest</span><span class="o">=</span>af3851419f5e74fbb4d7ca6ac3bc7c5cbbdb7c03d3cb489d57742ea717972224
 chunk[1] <span class="nv">start</span><span class="o">=</span>344652 <span class="nv">end</span><span class="o">=</span>1037386 <span class="nv">size</span><span class="o">=</span>692734 <span class="nv">digest</span><span class="o">=</span>e400b13522df02641c2d9934c3880ae78ebb397c66f9b4cf3b931d309da1a7cc
 d ./usr.pxar.didx
 d ./usr.pxar.didx/bin
 l ./usr.pxar.didx/bin/Mail
 f ./usr.pxar.didx/bin/[ <span class="nv">size</span><span class="o">=</span>55720 <span class="nv">mtime</span><span class="o">=</span>2025-06-04T15:14:05Z
 f ./usr.pxar.didx/bin/aa-enabled <span class="nv">size</span><span class="o">=</span>18672 <span class="nv">mtime</span><span class="o">=</span>2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-exec <span class="nv">size</span><span class="o">=</span>18672 <span class="nv">mtime</span><span class="o">=</span>2025-04-10T15:06:25Z
 f ./usr.pxar.didx/bin/aa-features-abi <span class="nv">size</span><span class="o">=</span>18664 <span class="nv">mtime</span><span class="o">=</span>2025-04-10T15:06:25Z
 l ./usr.pxar.didx/bin/apropos</code></pre></figure>

<p>It also lets you scan a complete datastore for all existing .pcat1.didx files
and store the directory listings in a SQLite database for easier searching.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[If you take backups using the proxmox-backup-client and you wondered what backup may include a specific file, the only way to find out is to mount the backup and search for the files.]]></summary></entry><entry><title type="html">libvirt 11.10 VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN</title><link href="https://abbbi.github.io//libvirt11/" rel="alternate" type="text/html" title="libvirt 11.10 VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN" /><published>2025-12-03T00:00:00+00:00</published><updated>2025-12-03T00:00:00+00:00</updated><id>https://abbbi.github.io//libvirt11</id><content type="html" xml:base="https://abbbi.github.io//libvirt11/"><![CDATA[<p>As with libvirt 11.10 a new flag for backup operation has been
introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN.</p>

<p>According to the <a href="https://libvirt.org/kbase/live_full_disk_backup.html#shutdown-of-the-guest-os-during-backup">documentation</a>
“It instructs libvirt to avoid termination of the VM if the guest OS shuts down
while the backup is still running. The VM is in that scenario reset and paused
instead of terminated allowing the backup to finish. Once the backup finishes
the VM process is terminated.”</p>

<p>I’ve added support for this in <a href="https://github.com/abbbi/virtnbdbackup/releases/tag/v2.40">virtnbdbackup 2.40</a>.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[As with libvirt 11.10 a new flag for backup operation has been introduced: VIR_DOMAIN_BACKUP_BEGIN_PRESERVE_SHUTDOWN_DOMAIN.]]></summary></entry><entry><title type="html">building SLES 16 vagrant/libvirt images using guestfs tools</title><link href="https://abbbi.github.io//slevagrant/" rel="alternate" type="text/html" title="building SLES 16 vagrant/libvirt images using guestfs tools" /><published>2025-11-19T00:00:00+00:00</published><updated>2025-11-19T00:00:00+00:00</updated><id>https://abbbi.github.io//slevagrant</id><content type="html" xml:base="https://abbbi.github.io//slevagrant/"><![CDATA[<p>SLES 16 has been released. In the past, SUSE offered ready-built Vagrant
images. Unfortunately, that’s no longer the case; with the more recent
SLES 15 releases, the official images disappeared.</p>

<p>In the past, it was possible to clone existing projects on the openSUSE Build
Service to build the images yourself, but I couldn’t find any templates
for SLES 16.</p>

<p>Naturally, there are several ways to build images; the tooling involves
kiwi-ng, the openSUSE Build Service, Packer recipes, etc. (existing
Packer recipes won’t work anymore, as YaST has been replaced by a new installer
called Agama). All pretty complicated…</p>

<p>So my current take on creating a vagrant image for SLE16 has been the
following:</p>

<ul>
  <li>Spin up a QEMU virtual machine</li>
  <li>Manually install the system, keeping all defaults except for one special
setting: in the network connection details, choose “Edit Binding settings” and
configure the interface not to bind to a particular MAC address or interface
name. This makes the system pick up whatever network device naming scheme is
applied during boot.</li>
  <li>After the installation has finished, shut down the system.</li>
</ul>
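<p>For the first step, a plain QEMU invocation is enough; a hedged example where the disk size, memory, and installer ISO filename are placeholders:</p>

```shell
# create the target disk and boot the SLES 16 installer from the ISO
# (disk size, memory, and ISO filename are placeholders)
qemu-img create -f qcow2 sles16.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 \
    -drive file=sles16.qcow2,if=virtio,format=qcow2 \
    -cdrom SLES-16.iso -boot d
```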

<p>Two guestfs-tools can now be used to modify the created qcow2 image:</p>

<ul>
  <li>run virt-sysprep on the image to wipe settings that might cause trouble:</li>
</ul>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> virt-sysprep <span class="nt">-a</span> sles16.qcow2</code></pre></figure>

<ul>
  <li>create a simple shell script that sets up all vagrant-related settings:</li>
</ul>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#!/bin/bash</span>
useradd vagrant
<span class="nb">mkdir</span> <span class="nt">-p</span> /home/vagrant/.ssh/
<span class="nb">chmod </span>0700 /home/vagrant/.ssh/
<span class="nb">echo</span> <span class="s2">"ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIF
o9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9W
hQ== vagrant insecure public key"</span> <span class="o">&gt;</span> /home/vagrant/.ssh/authorized_keys
<span class="nb">chmod </span>0600 /home/vagrant/.ssh/authorized_keys
<span class="nb">chown</span> <span class="nt">-R</span> vagrant:vagrant /home/vagrant/
<span class="c"># apply recommended ssh settings for vagrant boxes</span>
<span class="nv">SSHD_CONFIG</span><span class="o">=</span>/etc/ssh/sshd_config.d/99-vagrant.conf
<span class="k">if</span> <span class="o">[[</span> <span class="o">!</span> <span class="nt">-d</span> <span class="s2">"</span><span class="si">$(</span><span class="nb">dirname</span> <span class="k">${</span><span class="nv">SSHD_CONFIG</span><span class="k">}</span><span class="si">)</span><span class="s2">"</span> <span class="o">]]</span><span class="p">;</span> <span class="k">then
    </span><span class="nv">SSHD_CONFIG</span><span class="o">=</span>/etc/ssh/sshd_config
    <span class="c"># prepend the settings, so that they take precedence</span>
    <span class="nb">echo</span> <span class="nt">-e</span> <span class="s2">"UseDNS no</span><span class="se">\n</span><span class="s2">GSSAPIAuthentication no</span><span class="se">\n</span><span class="si">$(</span><span class="nb">cat</span> <span class="k">${</span><span class="nv">SSHD_CONFIG</span><span class="k">}</span><span class="si">)</span><span class="s2">"</span> <span class="o">&gt;</span> <span class="k">${</span><span class="nv">SSHD_CONFIG</span><span class="k">}</span>
<span class="k">else
    </span><span class="nb">echo</span> <span class="nt">-e</span> <span class="s2">"UseDNS no</span><span class="se">\n</span><span class="s2">GSSAPIAuthentication no"</span> <span class="o">&gt;</span> <span class="k">${</span><span class="nv">SSHD_CONFIG</span><span class="k">}</span>
<span class="k">fi
</span><span class="nv">SUDOERS_LINE</span><span class="o">=</span><span class="s2">"vagrant ALL=(ALL) NOPASSWD: ALL"</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-d</span> /etc/sudoers.d <span class="o">]</span><span class="p">;</span> <span class="k">then
    </span><span class="nb">echo</span> <span class="s2">"</span><span class="nv">$SUDOERS_LINE</span><span class="s2">"</span> <span class="o">&gt;</span>| /etc/sudoers.d/vagrant
    visudo <span class="nt">-cf</span> /etc/sudoers.d/vagrant
    <span class="nb">chmod </span>0440 /etc/sudoers.d/vagrant
<span class="k">else
    </span><span class="nb">echo</span> <span class="s2">"</span><span class="nv">$SUDOERS_LINE</span><span class="s2">"</span> <span class="o">&gt;&gt;</span> /etc/sudoers
    visudo <span class="nt">-cf</span> /etc/sudoers
<span class="k">fi
 
</span><span class="nb">mkdir</span> <span class="nt">-p</span> /vagrant
<span class="nb">chown</span> <span class="nt">-R</span> vagrant:vagrant /vagrant
systemctl <span class="nb">enable </span>sshd</code></pre></figure>

<ul>
  <li>use virt-customize to upload the script into the qcow image:</li>
</ul>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> virt-customize <span class="nt">-a</span> sle16.qcow2 <span class="nt">--upload</span> vagrant.sh:/tmp/vagrant.sh</code></pre></figure>

<ul>
  <li>execute the script via:</li>
</ul>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> virt-customize <span class="nt">-a</span> sle16.qcow2 <span class="nt">--run-command</span> <span class="s2">"/tmp/vagrant.sh"</span></code></pre></figure>

<p>After this, use the create_box.sh script from the vagrant-libvirt project
to create a box image:</p>

<p>https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh</p>

<p>and add the image to your environment:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> create_box.sh sle16.qcow2 sle16.box
 vagrant box add <span class="nt">--name</span> my/sles16 sle16.box</code></pre></figure>

<p>The resulting box works well within my CI environment, as far as I can
tell.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[SLES 16 has been released. In the past, SUSE offered ready built vagrant images. Unfortunately that’s not the case anymore, as with more recent SLES15 releases the official images were gone.]]></summary></entry><entry><title type="html">qmpbackup and proxmox 9</title><link href="https://abbbi.github.io//pve9-qmpbackup/" rel="alternate" type="text/html" title="qmpbackup and proxmox 9" /><published>2025-09-12T00:00:00+00:00</published><updated>2025-09-12T00:00:00+00:00</updated><id>https://abbbi.github.io//pve9-qmpbackup</id><content type="html" xml:base="https://abbbi.github.io//pve9-qmpbackup/"><![CDATA[<p>The latest Proxmox release introduces a new Qemu machine version that seems to
behave differently in how it addresses the virtual disk configuration.</p>

<p>Also, the regular “query-block” QMP command doesn’t list the created bitmaps
as usual.</p>

<p>If the virtual machine version is set to “9.2+pve”, everything seems to work
out of the box.</p>

<p>I’ve released <a href="https://github.com/abbbi/qmpbackup/releases/tag/v0.50">Version
0.50</a> with some small
changes so it’s compatible with the newer machine versions.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The latest Proxmox release introduces a new Qemu machine version that seems to behave differently in how it addresses the virtual disk configuration.]]></summary></entry><entry><title type="html">Vagrant images for trixie</title><link href="https://abbbi.github.io//vagrant/" rel="alternate" type="text/html" title="Vagrant images for trixie" /><published>2025-09-08T00:00:00+00:00</published><updated>2025-09-08T00:00:00+00:00</updated><id>https://abbbi.github.io//vagrant</id><content type="html" xml:base="https://abbbi.github.io//vagrant/"><![CDATA[<p>It’s no news that the vagrant license has changed a while ago, which resulted
in less motivation to maintain it in Debian (understandably).</p>

<p>Unfortunately this means there are currently no official vagrant images for Debian
trixie, <a href="https://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg2049694.html">for reasons</a></p>

<p>Of course there are various boxes floating around on HashiCorp’s Vagrant Cloud,
but they either do not fit my needs (too big) or I don’t consider them trustworthy
enough…</p>

<p>Building the images using the existing toolset is quite straightforward. The
required scripts are maintained in the <a href="https://salsa.debian.org/debian/debian-vagrant-images">Debian Vagrant
images</a> repository.</p>

<p>With <a href="https://salsa.debian.org/cloud-team/debian-vagrant-images/-/merge_requests/18">a few additional changes applied</a>
and following the instructions of the README, you can build the images yourself.</p>

<p>For me, the built images work as expected.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[It’s no news that the vagrant license has changed a while ago, which resulted in less motivation to maintain it in Debian (understandably).]]></summary></entry><entry><title type="html">PVE 9.0 - Snapshots for LVM</title><link href="https://abbbi.github.io//pve9/" rel="alternate" type="text/html" title="PVE 9.0 - Snapshots for LVM" /><published>2025-08-05T00:00:00+00:00</published><updated>2025-08-05T00:00:00+00:00</updated><id>https://abbbi.github.io//pve9</id><content type="html" xml:base="https://abbbi.github.io//pve9/"><![CDATA[<p>The new Proxmox release advertises a new feature for easier snapshot handling of
virtual machines whose disks are stored on LVM volumes. I wondered: what’s the
deal?</p>

<p>To be able to use the new feature, you need to enable a special flag for the
LVM volume group. This example shows the general workflow for a fresh setup.</p>

<p>1) Add the LVM storage with the <code class="language-plaintext highlighter-rouge">snapshot-as-volume-chain</code> feature turned on:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> pvesm add lvm lvmthick <span class="nt">--content</span> images <span class="nt">--vgname</span> lvm <span class="nt">--snapshot-as-volume-chain</span> 1</code></pre></figure>

<p>2) From this point on, you can create virtual machines right away, BUT those
   virtual machines’ disks must use the QCOW2 image format for their disk
   volumes. If you use the raw format, you still won’t be able to create
   snapshots.</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> <span class="nv">VMID</span><span class="o">=</span>401
 qm create <span class="nv">$VMID</span> <span class="nt">--name</span> vm-lvmthick
 qm <span class="nb">set</span> <span class="nv">$VMID</span> <span class="nt">-scsi1</span> lvmthick:2,format<span class="o">=</span>qcow2</code></pre></figure>

<p>So, why would it make sense to format the LVM volume as QCOW?</p>

<p>Snapshots on LVM thick-provisioned devices are, as everybody knows, a very
I/O-intensive task. For each snapshot, a special -cow device is created
that tracks the changed block regions and keeps the original block data for
each change to the active volume. This wastes quite some space within your
volume group for each snapshot.</p>

<p>Formatting the LVM volume as a QCOW2 image makes it possible to use the QCOW2
backing-image option for these devices; this is how PVE 9 handles these
kinds of snapshots.</p>
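<p>The backing-image mechanism itself can be illustrated with qemu-img on plain files; a minimal sketch where the image names and size are placeholders:</p>

```shell
# create a base image and an overlay that records only subsequent changes;
# reads of unchanged clusters fall through to the backing file
qemu-img create -f qcow2 base.qcow2 1G
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
# the overlay's metadata reports base.qcow2 as its backing file
qemu-img info overlay.qcow2
```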

<p>Creating a snapshot looks like this:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> qm snapshot <span class="nv">$VMID</span> <span class="nb">id
 </span>snapshotting <span class="s1">'drive-scsi1'</span> <span class="o">(</span>lvmthick3:vm-401-disk-0.qcow2<span class="o">)</span>
 Renamed <span class="s2">"vm-401-disk-0.qcow2"</span> to <span class="s2">"snap_vm-401-disk-0_id.qcow2"</span> <span class="k">in </span>volume group <span class="s2">"lvm"</span>
 Rounding up size to full physical extent 1.00 GiB
 Logical volume <span class="s2">"vm-401-disk-0.qcow2"</span> created.
 Formatting <span class="s1">'/dev/lvm/vm-401-disk-0.qcow2'</span>, <span class="nb">fmt</span><span class="o">=</span>qcow2 <span class="nv">cluster_size</span><span class="o">=</span>131072 <span class="nv">extended_l2</span><span class="o">=</span>on <span class="nv">preallocation</span><span class="o">=</span>metadata <span class="nv">compression_type</span><span class="o">=</span>zlib <span class="nv">size</span><span class="o">=</span>1073741824 <span class="nv">backing_file</span><span class="o">=</span>snap_vm-401-disk-0_id.qcow2 <span class="nv">backing_fmt</span><span class="o">=</span>qcow2 <span class="nv">lazy_refcounts</span><span class="o">=</span>off <span class="nv">refcount_bits</span><span class="o">=</span>16</code></pre></figure>

<p>So it renames the currently active disk and creates another QCOW2-formatted LVM
volume that points to the snapshot image via the backing_file option.</p>

<p>Neat.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[The new Proxmox release advertises a new feature for easier snapshot handling of virtual machines whose disks are stored on LVM volumes. I wondered: what’s the deal?]]></summary></entry><entry><title type="html">libvirt - incremental backups for raw devices</title><link href="https://abbbi.github.io//datafile/" rel="alternate" type="text/html" title="libvirt - incremental backups for raw devices" /><published>2025-07-31T00:00:00+00:00</published><updated>2025-07-31T00:00:00+00:00</updated><id>https://abbbi.github.io//datafile</id><content type="html" xml:base="https://abbbi.github.io//datafile/"><![CDATA[<p>Skimming through the latest libvirt releases, to my surprise, I found that
the latest versions (&gt;= v10.10.0) have added support for the QCOW data-file
setting.</p>

<p>Usually, the incremental backup feature using bitmaps was limited to qcow2-based
images, as there was no way to store the bitmaps persistently within raw
devices. This basically ruled out proper incremental backups for directly
attached LUNs, etc.</p>

<p><a href="https://lists.gnu.org/archive/html/qemu-devel/2021-03/msg07448.html">In the
past</a>,
there were some discussions how to implement this, mostly by using a separate
metadata qcow image, holding the bitmap information persistently.</p>

<p>These approaches have been discussed again lately, <a href="https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/JCO233PHT3TSC2IJCI5G4NIZZEKKGS2T/#VLSGER5NI3XLJIUKGTFCUUEO3CJOHH2J">and the required features were
implemented</a>.</p>

<p>In order to use the feature, you need to configure the virtual
machine and its disks in a special way:</p>

<p>Let’s assume you have a virtual machine that uses a raw device,
<code class="language-plaintext highlighter-rouge">/tmp/datafile.raw</code>.</p>

<p>1) Create a qcow2 image (same size as the raw image):</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> <span class="c"># point the data-file to a temporary file, as create will overwrite whatever it finds here</span>
 qemu-img create <span class="nt">-f</span> qcow2 /tmp/metadata.qcow2 <span class="nt">-o</span> <span class="nv">data_file</span><span class="o">=</span>/tmp/TEMPFILE,data_file_raw<span class="o">=</span><span class="nb">true</span> ..
 <span class="nb">rm</span> <span class="nt">-f</span> /tmp/TEMPFILE</code></pre></figure>

<p>2) Now use the amend option to point the qcow2 image to the right raw device
   using the data-file option:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> qemu-img amend /tmp/metadata.qcow2 <span class="nt">-o</span> <span class="nv">data_file</span><span class="o">=</span>/tmp/datafile.raw,data_file_raw<span class="o">=</span><span class="nb">true</span></code></pre></figure>

<p>3) Reconfigure the virtual machine configuration to look like this:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash">    &lt;disk <span class="nb">type</span><span class="o">=</span><span class="s1">'file'</span> <span class="nv">device</span><span class="o">=</span><span class="s1">'disk'</span><span class="o">&gt;</span>
      &lt;driver <span class="nv">name</span><span class="o">=</span><span class="s1">'qemu'</span> <span class="nb">type</span><span class="o">=</span><span class="s1">'qcow2'</span> <span class="nv">cache</span><span class="o">=</span><span class="s1">'none'</span> <span class="nv">io</span><span class="o">=</span><span class="s1">'native'</span> <span class="nv">discard</span><span class="o">=</span><span class="s1">'unmap'</span>/&gt;
      &lt;<span class="nb">source </span><span class="nv">file</span><span class="o">=</span><span class="s1">'/tmp/metadata.qcow2'</span><span class="o">&gt;</span>
        &lt;dataStore <span class="nb">type</span><span class="o">=</span><span class="s1">'file'</span><span class="o">&gt;</span>
          &lt;format <span class="nb">type</span><span class="o">=</span><span class="s1">'raw'</span>/&gt;
          &lt;<span class="nb">source </span><span class="nv">file</span><span class="o">=</span><span class="s1">'/tmp/datafile.raw'</span>/&gt;
        &lt;/dataStore&gt;
      &lt;/source&gt;
      &lt;target <span class="nv">dev</span><span class="o">=</span><span class="s1">'vda'</span> <span class="nv">bus</span><span class="o">=</span><span class="s1">'virtio'</span>/&gt;
    &lt;/disk&gt;</code></pre></figure>

<p>Now it’s possible to create persistent checkpoints:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> virsh checkpoint-create-as vm6 <span class="nt">--name</span> <span class="nb">test</span> <span class="nt">--diskspec</span> vda,bitmap<span class="o">=</span><span class="nb">test
 </span>Domain checkpoint <span class="nb">test </span>created</code></pre></figure>

<p>and the persistent bitmap will be stored within the metadata image:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"> qemu-img info  /tmp/tmp.16TRBzeeQn/vm6-sda.qcow2
 <span class="o">[</span>..]
    bitmaps:
        <span class="o">[</span>0]:
            flags:
                <span class="o">[</span>0]: auto
            name: <span class="nb">test
            </span>granularity: 65536</code></pre></figure>

<p>Hoooray.</p>

<p><a href="https://github.com/abbbi/virtnbdbackup/releases/tag/v2.33">Added support for this in virtnbdbackup v2.33</a></p>]]></content><author><name></name></author><summary type="html"><![CDATA[Skimming through the latest libvirt releases, to my surprise, i found that latest versions (&gt;= v10.10.0) have added support for the QCOW data-file setting.]]></summary></entry><entry><title type="html">qmpbackup 0.46 - add image fleecing</title><link href="https://abbbi.github.io//fleece/" rel="alternate" type="text/html" title="qmpbackup 0.46 - add image fleecing" /><published>2025-04-01T00:00:00+00:00</published><updated>2025-04-01T00:00:00+00:00</updated><id>https://abbbi.github.io//fleece</id><content type="html" xml:base="https://abbbi.github.io//fleece/"><![CDATA[<p>I’ve released <a href="https://github.com/abbbi/qmpbackup/">qmpbackup 0.46</a> which now
utilizes the image fleecing technique for backup.</p>

<p>Usually, during backup, <code class="language-plaintext highlighter-rouge">Qemu</code> will use a so-called copy-before-write filter so
that data for new guest writes is sent to the backup target first; the guest
write blocks until this operation is finished.</p>

<p>If the backup target is flaky, or becomes unavailable during backup operation,
this could lead to high I/O wait times or even complete VM lockups.</p>

<p>To fix this, a so-called “fleecing” image is introduced during backup, used
as a temporary cache for guest write operations. This image can be placed on
the same storage as the virtual machine disks, so it is independent of the
backup target’s performance.</p>

<p>The documentation on which steps are required to get this going using the Qemu
QMP protocol is, let’s say… lacking.</p>

<p>The following examples show the general functionality, but should be enhanced
to use transactions where possible. All commands are in <code class="language-plaintext highlighter-rouge">qmp-shell</code> command
format.</p>
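<p>To follow along, the qmp-shell tool (shipped in the QEMU sources under scripts/qmp/) can be attached to a VM started with a QMP server socket; the socket path here is just an assumption:</p>

```shell
# start a VM with a QMP server socket (add your usual disk/machine options)
qemu-system-x86_64 -display none -qmp unix:/tmp/qmp.sock,server,nowait &
# attach the interactive shell that accepts the command format used here
qmp-shell /tmp/qmp.sock
```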

<p>Let’s start with a full backup:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># create a new bitmap</span>
block-dirty-bitmap-add <span class="nv">node</span><span class="o">=</span>disk1 <span class="nv">name</span><span class="o">=</span>bitmap <span class="nv">persistent</span><span class="o">=</span><span class="nb">true</span>
<span class="c"># add the fleece image to the virtual machine (same size as original disk required)</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>qcow2 node-name<span class="o">=</span>fleecie <span class="nv">file</span><span class="o">={</span><span class="s2">"driver"</span>:<span class="s2">"file"</span>,<span class="s2">"filename"</span>:<span class="s2">"/tmp/fleece.qcow2"</span><span class="o">}</span>
<span class="c"># add the backup target file to the virtual machine</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>qcow2 node-name<span class="o">=</span>backup-target-file <span class="nv">file</span><span class="o">={</span><span class="s2">"driver"</span>:<span class="s2">"file"</span>,<span class="s2">"filename"</span>:<span class="s2">"/tmp/backup.qcow2"</span><span class="o">}</span>
<span class="c"># enable the copy-before-writer for the first disk attached, utilizing the fleece image</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>copy-before-write node-name<span class="o">=</span>cbw <span class="nv">file</span><span class="o">=</span>disk1 <span class="nv">target</span><span class="o">=</span>fleecie
<span class="c"># "blockdev-replace": make the copy-before-writer filter the major device (use "query-block" to get path parameter value, qdev node)</span>
qom-set <span class="nv">path</span><span class="o">=</span>/machine/unattached/device[20] <span class="nv">property</span><span class="o">=</span>drive <span class="nv">value</span><span class="o">=</span>cbw
<span class="c"># add the snapshot-access filter backing the copy-before-writer</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>snapshot-access <span class="nv">file</span><span class="o">=</span>cbw node-name<span class="o">=</span>snapshot-backup-source
<span class="c"># create a full backup</span>
blockdev-backup <span class="nv">device</span><span class="o">=</span>snapshot-backup-source <span class="nv">target</span><span class="o">=</span>backup-target-file <span class="nb">sync</span><span class="o">=</span>full job-id<span class="o">=</span><span class="nb">test</span>

<span class="o">[</span> <span class="nb">wait </span><span class="k">until </span>block job finishes]

<span class="c"># remove the snapshot access filter from the virtual machine</span>
blockdev-del node-name<span class="o">=</span>snapshot-backup-source
<span class="c"># switch back to the regular disk</span>
qom-set <span class="nv">path</span><span class="o">=</span>/machine/unattached/device[20] <span class="nv">property</span><span class="o">=</span>drive <span class="nv">value</span><span class="o">=</span>node-disk1
<span class="c"># remove the copy-before-writer</span>
blockdev-del node-name<span class="o">=</span>cbw
<span class="c"># remove the backup-target-file</span>
blockdev-del node-name<span class="o">=</span>backup-target-file
<span class="c"># detach the fleecing image</span>
blockdev-del node-name<span class="o">=</span>fleecie</code></pre></figure>

<p>After this process, the temporary fleecing image can be deleted/recreated. Now
let’s go for an incremental backup:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c"># add the fleecing and backup target image, like before</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>qcow2 node-name<span class="o">=</span>fleecie <span class="nv">file</span><span class="o">={</span><span class="s2">"driver"</span>:<span class="s2">"file"</span>,<span class="s2">"filename"</span>:<span class="s2">"/tmp/fleece.qcow2"</span><span class="o">}</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>qcow2 node-name<span class="o">=</span>backup-target-file <span class="nv">file</span><span class="o">={</span><span class="s2">"driver"</span>:<span class="s2">"file"</span>,<span class="s2">"filename"</span>:<span class="s2">"/tmp/backup-incremental.qcow2"</span><span class="o">}</span>
<span class="c"># add the copy-before-write filter, but utilize the bitmap created during full backup</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>copy-before-write node-name<span class="o">=</span>cbw <span class="nv">file</span><span class="o">=</span>disk1 <span class="nv">target</span><span class="o">=</span>fleecie <span class="nv">bitmap</span><span class="o">={</span><span class="s2">"node"</span>:<span class="s2">"disk1"</span>,<span class="s2">"name"</span>:<span class="s2">"bitmap"</span><span class="o">}</span>
<span class="c"># switch device to the copy-before-write filter</span>
qom-set <span class="nv">path</span><span class="o">=</span>/machine/unattached/device[20] <span class="nv">property</span><span class="o">=</span>drive <span class="nv">value</span><span class="o">=</span>cbw
<span class="c"># add the snapshot-access filter</span>
blockdev-add <span class="nv">driver</span><span class="o">=</span>snapshot-access <span class="nv">file</span><span class="o">=</span>cbw node-name<span class="o">=</span>snapshot-backup-source
<span class="c"># merge the bitmap created during the full backup into the snapshot-access device so</span>
<span class="c"># the backup operation can access it (you should use a transaction here)</span>
block-dirty-bitmap-add <span class="nv">node</span><span class="o">=</span>snapshot-backup-source <span class="nv">name</span><span class="o">=</span>bitmap
block-dirty-bitmap-merge <span class="nv">node</span><span class="o">=</span>snapshot-backup-source <span class="nv">target</span><span class="o">=</span>bitmap <span class="nv">bitmaps</span><span class="o">=[{</span><span class="s2">"node"</span>:<span class="s2">"disk1"</span>,<span class="s2">"name"</span>:<span class="s2">"bitmap"</span><span class="o">}]</span>
<span class="c"># create the incremental backup (you should use a transaction here)</span>
blockdev-backup <span class="nv">device</span><span class="o">=</span>snapshot-backup-source <span class="nv">target</span><span class="o">=</span>backup-target-file job-id<span class="o">=</span><span class="nb">test sync</span><span class="o">=</span>incremental <span class="nv">bitmap</span><span class="o">=</span>bitmap

 <span class="o">[</span> <span class="nb">wait </span><span class="k">until </span>backup has finished <span class="o">]</span>
 <span class="o">[</span> cleanup like before <span class="o">]</span>

<span class="c"># clear the dirty bitmap (you should use a transaction here)</span>
block-dirty-bitmap-clear <span class="nv">node</span><span class="o">=</span>disk1 <span class="nv">name</span><span class="o">=</span>bitmap</code></pre></figure>
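<p>The bitmap handling above has a fixed ordering: expose a bitmap on the
snapshot-access node, merge the persistent bitmap into it, then start the
incremental backup job referencing it. As a minimal sketch (not part of
qmpbackup itself; node and job names match the walkthrough above and are
placeholders for your own blockdev layout), the sequence can be built as
plain QMP payloads:</p>

```python
import json

def incremental_backup_cmds(disk_node="disk1",
                            snap_node="snapshot-backup-source",
                            target_node="backup-target-file",
                            bitmap="bitmap"):
    """Build the QMP command sequence for the incremental step.

    In a real client you would wrap these in a 'transaction' command
    so bitmap state and the backup job stay consistent on failure.
    """
    return [
        # expose a bitmap on the snapshot-access node
        {"execute": "block-dirty-bitmap-add",
         "arguments": {"node": snap_node, "name": bitmap}},
        # merge the persistent bitmap from the disk node into it
        {"execute": "block-dirty-bitmap-merge",
         "arguments": {"node": snap_node, "target": bitmap,
                       "bitmaps": [{"node": disk_node, "name": bitmap}]}},
        # copy only the clusters marked dirty since the full backup
        {"execute": "blockdev-backup",
         "arguments": {"device": snap_node, "target": target_node,
                       "job-id": "incr", "sync": "incremental",
                       "bitmap": bitmap}},
    ]

if __name__ == "__main__":
    # each line is one QMP command, ready to feed to a QMP socket
    for cmd in incremental_backup_cmds():
        print(json.dumps(cmd))
```

<p>Feeding these lines to a QMP monitor (after <code>qmp_capabilities</code>)
reproduces the steps shown above; only the ordering matters here, the
concrete node names depend on your setup.</p>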

<p>Or use a simple reproducer by passing QMP commands directly via stdio:</p>

<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="c">#!/usr/bin/bash</span>
qemu-img create <span class="nt">-f</span> raw disk 1M
qemu-img create <span class="nt">-f</span> raw fleece 1M
qemu-img create <span class="nt">-f</span> raw backup 1M
qemu-system-x86_64 <span class="nt">-drive</span> node-name<span class="o">=</span>disk,file<span class="o">=</span>disk,format<span class="o">=</span>file <span class="nt">-qmp</span> stdio <span class="nt">-nographic</span> <span class="nt">-nodefaults</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
{"execute": "qmp_capabilities"}
{"execute": "block-dirty-bitmap-add", "arguments": {"node": "disk", "name": "bitmap"}}
{"execute": "blockdev-add", "arguments": {"node-name": "fleece", "driver": "file", "filename": "fleece"}}
{"execute": "blockdev-add", "arguments": {"node-name": "backup", "driver": "file", "filename": "backup"}}
{"execute": "blockdev-add", "arguments": {"node-name": "cbw", "driver": "copy-before-write", "file": "disk", "target": "fleece", "bitmap": {"node": "disk", "name": "bitmap"}}}
{"execute": "query-block"}
{"execute": "qom-set", "arguments": {"path": "/machine/unattached/device[4]", "property": "drive", "value": "cbw"}}
{"execute": "blockdev-add", "arguments": {"node-name": "snapshot", "driver": "snapshot-access", "file": "cbw"}}
{"execute": "block-dirty-bitmap-add", "arguments": {"node": "snapshot", "name": "tbitmap"}}
{"execute": "block-dirty-bitmap-merge", "arguments": {"node": "snapshot", "target": "tbitmap", "bitmaps": [{"node": "disk", "name": "bitmap"}]}}
[..]
{"execute": "quit"}
EOF</span></code></pre></figure>]]></content><author><name></name></author><summary type="html"><![CDATA[I’ve released qmpbackup 0.46 which now utilizes the image fleecing technique for backup.]]></summary></entry><entry><title type="html">pbsav - scan backups on proxmox backup server via clamav</title><link href="https://abbbi.github.io//pbsav/" rel="alternate" type="text/html" title="pbsav - scan backups on proxmox backup server via clamav" /><published>2025-03-01T00:00:00+00:00</published><updated>2025-03-01T00:00:00+00:00</updated><id>https://abbbi.github.io//pbsav</id><content type="html" xml:base="https://abbbi.github.io//pbsav/"><![CDATA[<p>Little side project this weekend:</p>

<p><a href="https://github.com/abbbi/pbsav">pbsav</a></p>

<p>Small utility to scan virtual machine backups on PBS via clamav.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Little side project this weekend:]]></summary></entry><entry><title type="html">proxmox backup nbdkit plugin round 2</title><link href="https://abbbi.github.io//nbdkit2/" rel="alternate" type="text/html" title="proxmox backup nbdkit plugin round 2" /><published>2025-02-28T00:00:00+00:00</published><updated>2025-02-28T00:00:00+00:00</updated><id>https://abbbi.github.io//nbdkit2</id><content type="html" xml:base="https://abbbi.github.io//nbdkit2/"><![CDATA[<p>I re-implemented the <a href="https://abbbi.github.io/nbdkit-pbs/">proxmox backup nbdkit
plugin</a> in C.</p>

<p>It seems golang shared libraries <a href="https://github.com/golang/go/issues/15538">don’t play
well</a> with programs that fork().</p>

<p>As a result, the plugin was only usable if nbdkit was run in foreground
mode (-f), making it impossible to use nbdkit’s captive modes, which are
quite useful. Lessons learned.</p>

<p><a href="https://github.com/abbbi/cpbsnbd">Here is the C version</a></p>]]></content><author><name></name></author><summary type="html"><![CDATA[I re-implemented the proxmox backup nbdkit plugin in C.]]></summary></entry></feed>