Date:   Sun, 2 Oct 2022 07:56:50 -0600
From:   Keith Busch <kbusch@...nel.org>
To:     Maxim Levitsky <mlevitsk@...hat.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Christoph Hellwig <hch@....de>, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org, qemu-devel@...gnu.org,
        kvm@...r.kernel.org, Kevin Wolf <kwolf@...hat.com>,
        Michael Roth <mdroth@...ux.vnet.ibm.com>
Subject: Re: Commit 'iomap: add support for dma aligned direct-io' causes
 qemu/KVM boot failures

On Sun, Oct 02, 2022 at 11:59:42AM +0300, Maxim Levitsky wrote:
> On Thu, 2022-09-29 at 19:35 +0200, Paolo Bonzini wrote:
> > On 9/29/22 18:39, Christoph Hellwig wrote:
> > > On Thu, Sep 29, 2022 at 10:37:22AM -0600, Keith Busch wrote:
> > > > > I am aware, and I've submitted the fix to qemu here:
> > > > > 
> > > > >   https://lists.nongnu.org/archive/html/qemu-block/2022-09/msg00398.html
> > > > 
> > > > I don't think so. Memory alignment and length granularity are two
> > > > completely different concepts. If anything, the kernel's ABI had been
> > > > that the length granularity was also required for the memory alignment,
> > > > not the other way around. That usage will continue to work with this
> > > > kernel patch.
> 
> Yes, this is how I also understand it - for example, for O_DIRECT on a file that
> resides on a 4K block device, you have to use page-aligned buffers.
> 
> But here, after the patch, a 512-byte-aligned buffer starts working as well. If I
> understand you correctly, the ABI didn't guarantee that such usage would fail,
> only that it might fail.

The kernel patch will allow buffer alignment to work with whatever the hardware
reports it can support. It could even be as low as byte-aligned, if that's what
the hardware can use.

The patch aligns direct-io with the same criteria blk_rq_map_user() has always
used to decide whether a user-space buffer is compatible with the hardware's DMA
requirements. Prior to this patch, the direct-io memory alignment was an
artificial software constraint, and that constraint created a lot of
unnecessary memory pressure.

As has always been the case, each segment's length needs to be a multiple of the
logical block size. QEMU assumed a buffer's page offset also defined the logical
block size, instead of using the actual logical block size it had previously
discovered directly.
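For illustration, here is a minimal userspace sketch of the intended pattern (not
qemu's actual code; the device path is just an example): ask the block device for
its logical block size via the BLKSSZGET ioctl and derive the O_DIRECT length and
a safe alignment from that, rather than inferring anything from the buffer's page
offset.

/* Minimal sketch, not qemu's code: size O_DIRECT I/O from the device's
 * reported logical block size instead of assuming one from the buffer. */
#define _GNU_SOURCE          /* O_DIRECT */
#include <fcntl.h>
#include <linux/fs.h>        /* BLKSSZGET */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        const char *dev = argc > 1 ? argv[1] : "/dev/nvme0n1";  /* example device */
        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        int lbs = 0;
        if (ioctl(fd, BLKSSZGET, &lbs) < 0) { perror("BLKSSZGET"); return 1; }

        /* Aligning to the logical block size is always sufficient; with the
         * patch, the queue's dma_alignment may permit something smaller. */
        void *buf;
        if (posix_memalign(&buf, lbs, lbs)) { perror("posix_memalign"); return 1; }

        ssize_t n = pread(fd, buf, lbs, 0);  /* length: one logical block */
        if (n < 0) perror("pread");
        else printf("read %zd bytes (logical block size %d)\n", n, lbs);

        free(buf);
        close(fd);
        return 0;
}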

> If I understand that correctly, after the patch in question,
> qemu is able to use just a 512-byte-aligned buffer to read a single 4K block from the disk,
> which was supposed to fail but wasn't guaranteed to fail.
> 
> Later qemu submits an iovec which also reads a 4K block, but in two parts,
> and if I understand that correctly, each part (iov) is considered
> to be a separate IO operation, and thus each has to be, in my case, 4K in size,
> and its memory buffer *should* also be 4K aligned
>
> (but it can work with smaller alignment as well).

Right. The iov length needs to match the logical block size. The iov's memory
offset needs to align to the queue's dma_alignment attribute. The memory
alignment may be smaller than a block size.
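A sketch of that rule with made-up numbers (a 4K logical block size and a 512-byte
memory alignment requirement are assumptions, as is the device path): each iov_len
is a full logical block, while each iov_base only has to satisfy the smaller memory
alignment.

/* Sketch with assumed values: logical block size 4K, memory alignment 512.
 * Each iov covers a whole logical block; the buffers are only 512-aligned. */
#define _GNU_SOURCE          /* O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
        const size_t lbs = 4096;  /* logical block size (assumed) */
        const size_t dma = 512;   /* required memory alignment (assumed) */

        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);  /* example device */
        if (fd < 0) { perror("open"); return 1; }

        /* One allocation, only dma-aligned; the second segment is deliberately
         * offset so its base is 512-aligned but usually not 4K-aligned. */
        void *raw;
        if (posix_memalign(&raw, dma, 2 * lbs + dma)) { perror("posix_memalign"); return 1; }

        struct iovec iov[2] = {
                { .iov_base = raw,                     .iov_len = lbs },
                { .iov_base = (char *)raw + lbs + dma, .iov_len = lbs },
        };

        ssize_t n = preadv(fd, iov, 2, 0);  /* reads two logical blocks */
        if (n < 0) perror("preadv");
        else printf("read %zd bytes\n", n);

        free(raw);
        close(fd);
        return 0;
}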
 
> Assuming that I understand all of this correctly, I agree with Paolo that this is a qemu
> bug, but I do fear that it can cause quite a few problems for users,
> especially users running an outdated qemu version.
> 
> It might be too much to ask, but maybe add a Kconfig option to keep the legacy behavior
> for those that need it?

Kconfig doesn't sound right.

The block layer exports all the attributes user space needs to know about for
direct I/O:

  iov length:    /sys/block/<block-dev>/queue/logical_block_size
  iov mem align: /sys/block/<block-dev>/queue/dma_alignment
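For completeness, a small sketch that reads those two attributes for a given disk;
the device name "nvme0n1" is only an example, and any entry under /sys/block works.

/* Sketch: print the raw sysfs values the block layer exports for one device. */
#include <stdio.h>

static long read_queue_attr(const char *dev, const char *attr)
{
        char path[256];
        long val = -1;

        snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, attr);
        FILE *f = fopen(path, "r");
        if (f) {
                if (fscanf(f, "%ld", &val) != 1)
                        val = -1;
                fclose(f);
        }
        return val;
}

int main(void)
{
        const char *dev = "nvme0n1";  /* example device */

        printf("queue/logical_block_size: %ld\n", read_queue_attr(dev, "logical_block_size"));
        printf("queue/dma_alignment:      %ld\n", read_queue_attr(dev, "dma_alignment"));
        return 0;
}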

If you really want to change the behavior, I think maybe we could make the
dma_alignment attribute writeable (or perhaps add a new attribute specifically
for dio_alignment) so the user can request something larger.
