Date:   Thu, 20 Oct 2022 22:52:14 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Dave Chinner <david@...morbit.com>
Cc:     Zhaoyang Huang <huangzhaoyang@...il.com>,
        "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, ke.wang@...soc.com,
        steve.kang@...soc.com, baocong.liu@...soc.com,
        linux-fsdevel@...r.kernel.org
Subject: Re: [RFC PATCH] mm: move xa forward when run across zombie page

On Thu, Oct 20, 2022 at 09:04:24AM +1100, Dave Chinner wrote:
> On Wed, Oct 19, 2022 at 04:23:10PM +0100, Matthew Wilcox wrote:
> > On Wed, Oct 19, 2022 at 09:30:42AM +1100, Dave Chinner wrote:
> > > This is reading and writing the same amount of file data at the
> > > application level, but once the data has been written and kicked out
> > > of the page cache it seems to require an awful lot more read IO to
> > > get it back to the application. i.e. this looks like mmap() is
> > > readahead thrashing severely, and eventually it livelocks with this
> > > sort of report:
> > > 
> > > [175901.982484] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> > > [175901.985095] rcu:    Tasks blocked on level-1 rcu_node (CPUs 0-15): P25728
> > > [175901.987996]         (detected by 0, t=97399871 jiffies, g=15891025, q=1972622 ncpus=32)
> > > [175901.991698] task:test_write      state:R  running task     stack:12784 pid:25728 ppid: 25696 flags:0x00004002
> > > [175901.995614] Call Trace:
> > > [175901.996090]  <TASK>
> > > [175901.996594]  ? __schedule+0x301/0xa30
> > > [175901.997411]  ? sysvec_apic_timer_interrupt+0xb/0x90
> > > [175901.998513]  ? sysvec_apic_timer_interrupt+0xb/0x90
> > > [175901.999578]  ? asm_sysvec_apic_timer_interrupt+0x16/0x20
> > > [175902.000714]  ? xas_start+0x53/0xc0
> > > [175902.001484]  ? xas_load+0x24/0xa0
> > > [175902.002208]  ? xas_load+0x5/0xa0
> > > [175902.002878]  ? __filemap_get_folio+0x87/0x340
> > > [175902.003823]  ? filemap_fault+0x139/0x8d0
> > > [175902.004693]  ? __do_fault+0x31/0x1d0
> > > [175902.005372]  ? __handle_mm_fault+0xda9/0x17d0
> > > [175902.006213]  ? handle_mm_fault+0xd0/0x2a0
> > > [175902.006998]  ? exc_page_fault+0x1d9/0x810
> > > [175902.007789]  ? asm_exc_page_fault+0x22/0x30
> > > [175902.008613]  </TASK>
> > > 
> > > Given that filemap_fault on XFS is probably trying to map large
> > > folios, I do wonder if this is a result of some kind of race with
> > > teardown of a large folio...
> > 
> > It doesn't matter whether we're trying to map a large folio; it
> > matters whether a large folio was previously created in the cache.
> > Through the magic of readahead, it may well have been.  I suspect
> > it's not teardown of a large folio, but splitting.  Removing a
> > page from the page cache stores to the pointer in the XArray
> > first (either NULL or a shadow entry), then decrements the refcount.
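
Schematically (a sketch of the ordering, not the kernel code verbatim),
the removal path does:

	xas_store(&xas, shadow);	/* publish NULL or a shadow entry first */
	...
	folio_put_refs(folio, nr);	/* only then drop the refcount */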
> > 
> > We must be observing a frozen folio.  There are a number of places
> > in the MM which freeze a folio, but the obvious one is splitting.
> > That looks like this:
> > 
> >         local_irq_disable();
> >         if (mapping) {
> >                 xas_lock(&xas);
> > (...)
> >         if (folio_ref_freeze(folio, 1 + extra_pins)) {
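
(folio_ref_freeze() is, at its core, something like

	atomic_cmpxchg(&folio->_refcount, 1 + extra_pins, 0) == 1 + extra_pins

so once the freeze succeeds, every concurrent folio_try_get_rcu() keeps
failing until the refcount is unfrozen.)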
> 
> But the lookup is not doing anything to prevent the split on the
> frozen page from making progress, right? It's not holding any folio
> references, and it's not holding the mapping tree lock, either. So
> how does the lookup in progress prevent the page split from making
> progress?

My thinking was that the lookup keeps hammering the ->refcount field
in struct folio.  That might prevent a thread on a different socket
from making forward progress.  In contrast, spinlocks are designed
to be fair under contention, so by spinning on an actual lock, we'd
remove contention on the folio.
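
To make that concrete, the lookup side is essentially this loop (a
paraphrase of mapping_get_entry(), not the exact code):

repeat:
	xas_reset(&xas);
	folio = xas_load(&xas);
	if (!folio_try_get_rcu(folio))	/* atomic RMW on ->refcount */
		goto repeat;		/* fails for as long as the folio is frozen */

Every failed attempt is another atomic op on the frozen folio's
refcount cacheline, bouncing it between sockets.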

But I think the tests you've done refute that theory.  I'm all out of
ideas at the moment.  Either somebody is freezing a folio without
holding the lock, or somebody has left a frozen folio sitting in the
page cache.  I'm leaning towards the latter explanation at the moment,
but I don't have a good suggestion for debugging it.

Perhaps a bad suggestion for debugging would be to call dump_page(),
wrapped in __ratelimit() so we're not overwhelmed with information?
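
Something like this, perhaps (untested; assuming the failure point is
the folio_try_get_rcu() in the lookup loop):

	if (!folio_try_get_rcu(folio)) {
		static DEFINE_RATELIMIT_STATE(rs, HZ, 3);

		if (__ratelimit(&rs))
			dump_page(&folio->page, "failed to get ref");
		goto repeat;
	}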

> I would have thought:
> 
> 	if (!folio_try_get_rcu(folio)) {
> 		rcu_read_unlock();
> 		cond_resched();
> 		rcu_read_lock();
> 		goto repeat;
> 	}
> 
> Would be the right way to yield the CPU to avoid priority
> inversion related livelocks here...

I'm not sure we're allowed to schedule here.  We might be under another
spinlock?
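
If sleeping turns out to be off the table, one alternative (just a
sketch, untested) would be to at least stop hammering the refcount
while we wait for the freeze to end:

	if (!folio_try_get_rcu(folio)) {
		xas_reset(&xas);
		cpu_relax();	/* back off without another RMW on ->refcount */
		goto repeat;
	}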
