Date:   Thu, 2 Feb 2017 15:48:17 +0100
From:   Jan Kara <jack@...e.cz>
To:     Al Viro <viro@...IV.linux.org.uk>
Cc:     Jeff Layton <jlayton@...hat.com>, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org,
        ceph-devel@...r.kernel.org, lustre-devel@...ts.lustre.org,
        v9fs-developer@...ts.sourceforge.net,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Jan Kara <jack@...e.cz>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v3 0/2] iov_iter: allow iov_iter_get_pages_alloc to
 allocate more pages per call

On Thu 02-02-17 09:51:25, Al Viro wrote:
> I'm massaging that code (along with a lot of RTFS); the interesting
> questions related to the VM side of things are
> 	* what are the relative costs of doing small vs. large batches?  Some
> of the get_user_pages_fast() instances have comments along the lines of
> "we ought to limit the batch size, but then nobody's doing more than 64
> pages at a time anyway".

Well, I believe it's only about the cost of setting up the page table walk
and walking down those several levels of page tables. It is cheaper to copy
several PTE entries from the leaf level than to do the full walk every
time. But I never profiled it to get actual numbers.
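
For illustration, a simplified sketch of that leaf-level copy; this is
not the real mm/gup.c code, just the shape of it (the real code uses
READ_ONCE() and page_cache_get_speculative(), among other things):

#include <linux/mm.h>

static int gup_pte_range_sketch(pmd_t pmd, unsigned long addr,
				unsigned long end, struct page **pages,
				int *nr)
{
	pte_t *ptep, *ptem;
	int ret = 0;

	/* one walk down to the PTE level... */
	ptem = ptep = pte_offset_map(&pmd, addr);
	do {
		pte_t pte = *ptep;
		struct page *page;

		if (!pte_present(pte))
			goto unmap;		/* fall back to the slow path */
		page = pte_page(pte);
		get_page(page);
		pages[(*nr)++] = page;		/* ...then many PTEs copied */
	} while (ptep++, addr += PAGE_SIZE, addr != end);
	ret = 1;
unmap:
	pte_unmap(ptem);
	return ret;
}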

AFAIK large batches are only a problem because of possible soft-lockups
and irq latencies. So the batch size just should not be insanely large;
a caller with a huge range should split it into chunks...
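
Something like this hypothetical caller-side helper (the name and the
exact cap are made up, just to illustrate the shape):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>

#define GUP_BATCH	64	/* arbitrary cap, per those "64 pages" comments */

static long pin_user_range_batched(unsigned long start, long nr_pages,
				   int write, struct page **pages)
{
	long done = 0;

	while (done < nr_pages) {
		int chunk = min_t(long, nr_pages - done, GUP_BATCH);
		int got = get_user_pages_fast(start + done * PAGE_SIZE,
					      chunk, write, pages + done);

		if (got <= 0)
			return done ? done : got;
		done += got;
		cond_resched();	/* bound soft-lockup / irq latency */
	}
	return done;
}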

> 	* Not a question: any ->fault() that returns VM_FAULT_RETRY when
> *not* passed FAULT_FLAG_ALLOW_RETRY in flags ought to be shot.  The
> cxlflash one sure as hell is.

Yep.
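
The retry protocol is: return VM_FAULT_RETRY only if the core passed
FAULT_FLAG_ALLOW_RETRY, and drop mmap_sem before returning unless
FAULT_FLAG_RETRY_NOWAIT was also set.  A sketch with a made-up helper,
not from any real driver:

#include <linux/mm.h>

static bool example_page_ready(struct vm_fault *vmf);	/* hypothetical */

static int example_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	if (!example_page_ready(vmf)) {
		if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
			if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT))
				up_read(&vma->vm_mm->mmap_sem);
			return VM_FAULT_RETRY;
		}
		return VM_FAULT_SIGBUS;	/* no retry allowed: fail the fault */
	}
	return 0;	/* vmf->page was set up by the helper */
}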

> 	* drivers/gpu/drm/vgem/vgem_drv.c:vgem_gem_fault() is bloody odd -
> shmem_read_mapping_page() can't return -EBUSY, AFAICS.  vm_insert_page()
> used to (and back then vgem_gem_fault() used to be broken), but these days
> it looks like dead code...

Looks like it.
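
For reference, the error mapping there has roughly this shape
(paraphrased, not the exact driver code); the -EBUSY arm is the dead
one:

#include <linux/err.h>
#include <linux/mm.h>
#include <linux/shmem_fs.h>

static int vgem_fault_sketch(struct address_space *mapping, pgoff_t off,
			     struct vm_fault *vmf)
{
	struct page *page = shmem_read_mapping_page(mapping, off);

	if (IS_ERR(page)) {
		switch (PTR_ERR(page)) {
		case -EBUSY:
			/* dead: shmem_read_mapping_page() never returns
			 * -EBUSY; only the old vm_insert_page() based
			 * path could */
			return VM_FAULT_RETRY;
		case -ENOMEM:
			return VM_FAULT_OOM;
		default:
			return VM_FAULT_SIGBUS;
		}
	}
	vmf->page = page;
	return 0;
}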

> 	* ->page_mkwrite() instances sometimes return VM_FAULT_RETRY; AFAICS,
> it's only (ab)used there as 'not zero, but doesn't contain any error bits';
> VM_FAULT_RETRY from that source does *not* reach handle_mm_fault() callers,
> right?

I can see only Lustre doing it, and IMHO it is abuse. VM_FAULT_RETRY is
meant for reducing mmap_sem hold times when paging in pages, so not
everybody handles it. If a handler simply wants the fault retried,
returning VM_FAULT_NOPAGE is the more common way to do that...
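
For comparison, the common pattern (cf. filemap_page_mkwrite()) looks
roughly like this; a sketch, not from any particular filesystem:

#include <linux/mm.h>
#include <linux/pagemap.h>

static int example_page_mkwrite(struct vm_area_struct *vma,
				struct vm_fault *vmf)
{
	struct page *page = vmf->page;

	lock_page(page);
	if (page->mapping != vma->vm_file->f_mapping) {
		/* truncated under us: drop the page and have the whole
		 * fault retried from scratch */
		unlock_page(page);
		return VM_FAULT_NOPAGE;
	}
	/* ... make the page writable for this filesystem ... */
	return VM_FAULT_LOCKED;	/* page stays locked for the caller */
}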

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
