Message-ID: <20200613221654.GO8681@bombadil.infradead.org>
Date:   Sat, 13 Jun 2020 15:16:54 -0700
From:   Matthew Wilcox <willy@...radead.org>
To:     Arnd Bergmann <arnd@...db.de>
Cc:     afzal mohammed <afzal.mohd.ma@...il.com>,
        Russell King - ARM Linux admin <linux@...linux.org.uk>,
        Linus Walleij <linus.walleij@...aro.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Nicolas Pitre <nico@...xnic.net>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>
Subject: Re: [RFC 1/3] lib: copy_{from,to}_user using gup & kmap_atomic()

On Sat, Jun 13, 2020 at 10:45:33PM +0200, Arnd Bergmann wrote:
> On Sat, Jun 13, 2020 at 2:04 PM afzal mohammed <afzal.mohd.ma@...il.com> wrote:
> > The observation is that the maximum number of pages reaching
> > copy_{from,to}_user() is 2, with the observed maximum of n (number
> > of bytes) being one page size. I think the C library cuts any read
> > or write down to page size (if it exceeds that) and then invokes
> > the system call. The count reaches 2 pages when 'n' crosses a page
> > boundary; this has been observed with small-size requests as well
> > as with ones of exactly page size (but not page aligned).
> 
> Right, this is apparently because tmpfs uses shmem_file_read_iter() to
> copy the file pages one at a time. generic_file_buffered_read() seems
> similar, so copying between an aligned kernel page and an address in
> user space that is not page aligned would be an important case to
> optimize for.
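
To make the two-page case concrete, here is a minimal user-space sketch
(mine, not from the thread): it issues a page-sized read() into a
deliberately misaligned buffer, so the kernel's copy to user space has
to cross a page boundary even though each source page-cache page is
aligned.  The file path is an arbitrary placeholder; any file at least
one page long will do.

/* Hypothetical demo: a page-sized read() landing at buf+1 makes the
 * kernel's copy to user space span two destination pages. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	char *buf;

	/* Page-aligned allocation, then offset by one byte so the copy
	 * destination crosses a page boundary. */
	if (posix_memalign((void **)&buf, pagesz, 2 * pagesz))
		return 1;

	int fd = open("/boot/vmlinuz", O_RDONLY);	/* placeholder path */
	if (fd < 0)
		return 1;

	/* One page of data written to [buf+1, buf+1+pagesz): the kernel
	 * touches both user pages of the allocation. */
	ssize_t n = read(fd, buf + 1, pagesz);
	printf("read %zd bytes at offset 1 into a page-aligned buffer\n", n);

	close(fd);
	free(buf);
	return 0;
}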

This is kind of the nature of the page cache.  The kernel doesn't
necessarily have contiguous memory in the page cache, so it's going to
be split on page boundaries.  This is going to change with my THP series
(I haven't actually changed generic_file_buffered_read(), but it'll
come later).
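
As an illustration of that splitting (my sketch, not kernel code), the
per-page chopping in the buffered read path can be modelled like this,
mirroring the shape of the generic_file_buffered_read() loop; the 4096
byte page size is an assumption:

/* Model: a read of `count` bytes at file position `pos` becomes one
 * copy per page-cache page it overlaps. */
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed page size */

int main(void)
{
	unsigned long pos = 4090;	/* unaligned file offset */
	unsigned long count = 100;	/* small request crossing a boundary */

	while (count) {
		unsigned long offset = pos & (PAGE_SIZE - 1);
		unsigned long chunk = PAGE_SIZE - offset;

		if (chunk > count)
			chunk = count;

		/* In the kernel, this is one per-page copy to user space. */
		printf("copy %4lu bytes from page %lu, offset %lu\n",
		       chunk, pos / PAGE_SIZE, offset);

		pos += chunk;
		count -= chunk;
	}
	return 0;
}

A 100-byte request at offset 4090 yields two copies (6 + 94 bytes),
which matches the two-page maximum observed above.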

> > Quickly comparing boot time on BeagleBone White, boot time increases
> > by only 4%; perhaps this worry is irrelevant, but I just thought I
> > would put it across.
> 
> A 4% boot time increase sounds like a lot, especially if that is only
> for copy_from_user/copy_to_user. In the end it really depends on how
> well get_user()/put_user() and small copies can be optimized.

The write path also deserves attention.  Look at
generic_perform_write(), which first calls iov_iter_fault_in_readable()
(for the entire length of the write) and then actually does the copy
later with iov_iter_copy_from_user_atomic().  So you're going to want
to optimise the case where you access the same pages multiple times.
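
To spell out that double access (a simplified user-space model, not the
kernel source), the loop shape is roughly: fault in every page of the
source for the whole length first, then copy chunk by chunk, so each
user page is touched at least twice:

/* Model of the generic_perform_write() pattern. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096UL	/* assumed page size */

/* Stand-in for iov_iter_fault_in_readable(): read one byte per page so
 * the whole source range is faulted in before the copy loop. */
static void fault_in_readable(const char *buf, size_t len)
{
	volatile char sink;

	for (size_t i = 0; i < len; i += PAGE_SIZE)
		sink = buf[i];
	if (len)
		sink = buf[len - 1];
	(void)sink;
}

int main(void)
{
	static char src[3 * 4096], dst[3 * 4096];
	size_t len = sizeof(src);

	fault_in_readable(src, len);	/* pass 1: entire write length */

	/* Pass 2: per-page copies, standing in for
	 * iov_iter_copy_from_user_atomic(). */
	for (size_t done = 0; done < len; done += PAGE_SIZE)
		memcpy(dst + done, src + done, PAGE_SIZE);

	printf("copied %zu bytes; every source page was accessed twice\n",
	       len);
	return 0;
}

For a gup-based copy_{from,to}_user(), that two-pass pattern means
resolving the same user pages repeatedly, which is why this case is
worth optimising.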
