Date:   Sun, 02 Oct 2022 11:59:42 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Paolo Bonzini <pbonzini@...hat.com>,
        Christoph Hellwig <hch@....de>, Keith Busch <kbusch@...nel.org>
Cc:     linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        qemu-devel@...gnu.org, kvm@...r.kernel.org,
        Kevin Wolf <kwolf@...hat.com>,
        Michael Roth <mdroth@...ux.vnet.ibm.com>
Subject: Re: Commit 'iomap: add support for dma aligned direct-io' causes
 qemu/KVM boot failures

On Thu, 2022-09-29 at 19:35 +0200, Paolo Bonzini wrote:
> On 9/29/22 18:39, Christoph Hellwig wrote:
> > On Thu, Sep 29, 2022 at 10:37:22AM -0600, Keith Busch wrote:
> > > > I am aware, and I've submitted the fix to qemu here:
> > > > 
> > > >   https://lists.nongnu.org/archive/html/qemu-block/2022-09/msg00398.html
> > > 
> > > I don't think so. Memory alignment and length granularity are two completely
> > > different concepts. If anything, the kernel's ABI had been that the length
> > > requirement was also required for the memory alignment, not the other way
> > > around. That usage will continue working with this kernel patch.

Yes, this is how I understand it as well - for example, for O_DIRECT on a file which
resides on a 4K block device, you have to use page-aligned buffers.

But here, after the patch, a 512-byte-aligned buffer starts working as well - if I
understand you correctly, the ABI never guaranteed that such usage would fail,
only that it might fail.
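
Just to spell out what I mean, a rough sketch (the device path is made up, and I am
only describing the rules as I understand them):

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	void *buf;
	/* example device with 4K logical blocks */
	int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);

	if (fd < 0)
		return 1;
	/* traditional expectation: buffer address, file offset and length
	 * are all multiples of the 4K logical block size */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	if (pread(fd, buf, 4096, 0) != 4096)
		return 1;
	free(buf);
	close(fd);
	return 0;
}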

> > 
> > Well, Linus does treat anything that breaks significant userspace
> > as a regression.  Qemu certainly is significant, but that might depend a bit
> > on how common the configurations hitting this issue are.
> 
> Seeing the QEMU patch, I agree that it's a QEMU bug though.  I'm 
> surprised it has ever worked.
> 
> It requires 4K sectors in the host but not in the guest, and can be 
> worked around (if not migrating) by disabling O_DIRECT.  I think it's 
> not that awful, but we probably should do some extra releases of QEMU 
> stable branches.
> 
> Paolo
> 

I must admit I am out of the loop on the exact requirements of O_DIRECT.


If I understand it correctly, after the patch in question, qemu is able to use a
buffer that is only 512-byte aligned to read a single 4K block from the disk -
something that was supposed to fail but was never guaranteed to fail.
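
In code terms, roughly (a sketch only, reusing the includes and the fd from the
sketch above; this is my understanding, not a statement of the intended ABI):

/* Only 512-byte memory alignment, but still a full 4K transfer.  Before
 * the iomap change this failed with -EINVAL on a disk with 4K logical
 * blocks; after it, the memory alignment requirement is relaxed and the
 * read is accepted. */
static int read_block_512_aligned(int fd)
{
	void *buf;

	if (posix_memalign(&buf, 512, 4096))
		return -1;
	return pread(fd, buf, 4096, 0) == 4096 ? 0 : -1;
}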



Later qemu submits an iovec which also reads a 4K block, but in two parts, and if I
understand it correctly, each part (iov) is considered to be a separate IO operation,
and thus each has to be (in my case) 4K in size, and its memory buffer *should* also
be 4K aligned.

(but it can work with smaller alignment as well).
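
Roughly like this (again just a sketch reusing the setup above; the 2K split is only
an example of splitting one block across iovs):

#include <sys/uio.h>

/* The same 4K block, read as two 2K iovec segments.  As I understand it,
 * each segment must individually satisfy the logical block size
 * requirement, so on a 4K-sector disk this is rejected even though the
 * total length is a full block. */
static int read_block_split(int fd)
{
	struct iovec iov[2];
	void *a, *b;

	if (posix_memalign(&a, 4096, 2048) || posix_memalign(&b, 4096, 2048))
		return -1;

	iov[0].iov_base = a;
	iov[0].iov_len  = 2048;
	iov[1].iov_base = b;
	iov[1].iov_len  = 2048;

	/* expected to fail with EINVAL on a 4K logical block device */
	return preadv(fd, iov, 2, 0) == 4096 ? 0 : -1;
}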


Assuming that I understand all of this correctly, I agree with Paolo that this is a
qemu bug, but I do fear that it can cause quite a few problems for users,
especially those running outdated qemu versions.

It might be too much to ask, but maybe add a Kconfig option to keep the legacy
behavior for those who need it?

Best regards,
	Maxim Levitsky
