Message-ID: <ZP/akhSImBVxff0k@casper.infradead.org>
Date:   Tue, 12 Sep 2023 04:27:14 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Steven Rostedt <rostedt@...dmis.org>,
        Ankur Arora <ankur.a.arora@...cle.com>,
        Peter Zijlstra <peterz@...radead.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org,
        akpm@...ux-foundation.org, luto@...nel.org, bp@...en8.de,
        dave.hansen@...ux.intel.com, hpa@...or.com, mingo@...hat.com,
        juri.lelli@...hat.com, vincent.guittot@...aro.org, mgorman@...e.de,
        tglx@...utronix.de, jon.grimm@....com, bharata@....com,
        raghavendra.kt@....com, boris.ostrovsky@...cle.com,
        konrad.wilk@...cle.com
Subject: Re: [PATCH v2 7/9] sched: define TIF_ALLOW_RESCHED

On Mon, Sep 11, 2023 at 01:50:53PM -0700, Linus Torvalds wrote:
> Another example of this is just plain read/write. It's not a
> problem in practice right now, because large pages are effectively
> never used.
> 
> But just imagine what happens once filemap_read() actually does big folios?
> 
> Do you really want this code:
> 
>         copied = copy_folio_to_iter(folio, offset, bytes, iter);
> 
> to forever use the artificial chunking it does now?
> 
> And yes, right now it will still do things in one-page chunks in
> copy_page_to_iter(). It doesn't even have cond_resched() - it's
> currently in the caller, in filemap_read().

Ah, um.  If you take a look in fs/iomap/buffered-io.c, you'll
see ...

iomap_write_iter:
        size_t chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER;
        ...
                struct folio *folio;
                ...
                bytes = min(chunk - offset, iov_iter_count(i));
                ...
                if (unlikely(fault_in_iov_iter_readable(i, bytes) == bytes)) {
                ...
                copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);

So we do still cond_resched(), but we might go up to PMD_SIZE
between calls.  This is new code in 6.6, so it hasn't seen much use
yet, but it's certainly bigger than the 16 pages used by
copy_chunked_from_user().  I honestly hadn't thought about preemption
latency.
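
To make the latency trade-off concrete, here's a minimal userspace sketch
(not kernel code; chunk_copy, copy_all, and the simulated sizes are all
illustrative stand-ins) of the pattern both loops use: copy in bounded
chunks and offer a reschedule point between chunks, so the longest
non-preemptible stretch is one chunk rather than the whole request.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative values, not the kernel's: with 4 KiB pages and a
 * max pagecache order of 9, one chunk would be PMD-sized (2 MiB). */
#define PAGE_SIZE_SIM   4096UL
#define MAX_ORDER_SIM   9

/* Hypothetical helper: copy at most one chunk, then return so the
 * caller can hit a reschedule point -- the analogue of the
 * cond_resched() between iterations in filemap_read() or
 * iomap_write_iter(). */
static size_t chunk_copy(char *dst, const char *src, size_t len)
{
        size_t chunk = PAGE_SIZE_SIM << MAX_ORDER_SIM;  /* 2 MiB */
        size_t n = len < chunk ? len : chunk;

        memcpy(dst, src, n);
        return n;  /* caller advances and may reschedule here */
}

/* Drive the loop: the whole buffer gets copied, but preemption
 * latency is bounded by one chunk per step, not by total length. */
static size_t copy_all(char *dst, const char *src, size_t len,
                       unsigned *steps)
{
        size_t done = 0;

        while (done < len) {
                done += chunk_copy(dst + done, src + done, len - done);
                (*steps)++;  /* a cond_resched() would go here */
        }
        return done;
}
```

The point of the sketch: growing the chunk (PAGE_SIZE -> PMD_SIZE) shrinks
the loop's overhead but stretches the gap between reschedule points by the
same factor, which is exactly the preemption-latency question above.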
