Date:   Fri, 3 Feb 2017 08:54:15 +0000
From:   Al Viro <viro@...IV.linux.org.uk>
To:     Christoph Hellwig <hch@...radead.org>
Cc:     Jeff Layton <jlayton@...hat.com>, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org,
        ceph-devel@...r.kernel.org, lustre-devel@...ts.lustre.org,
        v9fs-developer@...ts.sourceforge.net,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v3 0/2] iov_iter: allow iov_iter_get_pages_alloc to
 allocate more pages per call

On Thu, Feb 02, 2017 at 11:49:01PM -0800, Christoph Hellwig wrote:
> On Thu, Feb 02, 2017 at 08:00:52AM -0500, Jeff Layton wrote:
> > Yeah, that might work. You could kmalloc the buffer array according to
> > the maxsize value. For small ones we could even consider using an on-
> > stack buffer.
> 
> For the block direct I/O code we definitely want to avoid any
> allocations for small I/O, as that shows up in the performance numbers.
> And we'd like to reuse the on-stack bio_vec so that the definition of
> a small I/O can be as big as possible without blowing up the stack.

Hmm...  The reuse part is really nasty ;-/  OTOH, it might make sense to have
a "fill bio_vec array" operation as a separate primitive - having that sucker
come from a bio looks like an artificial restriction.

OK, next question, seeing that you've dealt with O_DIRECT guts more than
I have.  When we have iov_iter_get_pages() fail on a do_direct_IO() write
with some blocks already allocated, we pick the zero page as the data source.
So far, so good, but:
	* should we bother zeroing anything unless buffer_new() is true?
	* why, in the case of more than a page's worth of pending allocated
blocks, do we bother with calling iov_iter_get_pages() again and again?
We *do* take care not to allocate anything else after that point, but
dio_get_page() will be calling iov_iter_get_pages() every time in that
case - there's only one page in the queue (see the sketch below).
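
Roughly what I'm looking at, in case it helps - a simplified paraphrase
from memory, not a verbatim excerpt of fs/direct-io.c:

/*
 * Paraphrase of the behaviour in question; names follow fs/direct-io.c,
 * but this is a simplified sketch, not the actual source.
 */
static int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
{
	ssize_t ret = iov_iter_get_pages(sdio->iter, dio->pages,
					 LONG_MAX, DIO_PAGES, &sdio->from);

	if (ret < 0 && sdio->blocks_available && dio->op == REQ_OP_WRITE) {
		/* fault on a write with blocks already allocated:
		 * the zero page becomes the data source ... */
		get_page(ZERO_PAGE(0));
		dio->pages[0] = ZERO_PAGE(0);
		sdio->head = 0;
		sdio->tail = 1;		/* ... but only ONE page gets queued */
		return 0;
	}
	if (ret < 0)
		return ret;

	/* normal case: queue what we got (iterator advance, etc. elided) */
	sdio->head = 0;
	sdio->tail = DIV_ROUND_UP(ret + sdio->from, PAGE_SIZE);
	return 0;
}

static struct page *dio_get_page(struct dio *dio, struct dio_submit *sdio)
{
	if (dio_pages_present(sdio) == 0) {
		/* queue empty - with the one-page zero-page queue above,
		 * that means yet another iov_iter_get_pages() call for
		 * every page of pending allocated blocks */
		int ret = dio_refill_pages(dio, sdio);

		if (ret)
			return ERR_PTR(ret);
	}
	return dio->pages[sdio->head];
}

With tail == 1 each zero page drains the queue immediately, so N pages'
worth of pending allocated blocks means N more iov_iter_get_pages() calls
that are all going to fail the same way.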
