Message-ID: <CAGudoHESB-+kHPJO+4MHnUDPJXGP87=yJ2QrW3q8pkO5z7OLRw@mail.gmail.com>
Date: Wed, 17 Jul 2024 13:29:31 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: Bharata B Rao <bharata@....com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
nikunj@....com, "Upadhyay, Neeraj" <Neeraj.Upadhyay@....com>,
Andrew Morton <akpm@...ux-foundation.org>, David Hildenbrand <david@...hat.com>, willy@...radead.org,
yuzhao@...gle.com, kinseyho@...gle.com, Mel Gorman <mgorman@...e.de>
Subject: Re: Hard and soft lockups with FIO and LTP runs on a large system
On Wed, Jul 17, 2024 at 11:42 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 7/3/24 5:11 PM, Bharata B Rao wrote:
> > The general observation is that the problem usually surfaces when the
> > system free memory goes very low and page cache/buffer consumption hits
> > the ceiling. Most of the time the two contended locks are the lruvec and
> > inode->i_lock spinlocks.
> >
[snip mm stuff]
There are numerous avoidable i_lock acquires (including some only
showing up under load), but I don't know if they play any role in this
particular test.
Collecting all the contention traces would definitely help, whether the
system ends up locking up or not, for example:
bpftrace -e 'kprobe:queued_spin_lock_slowpath { @[kstack()] = count(); }' -o traces
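To keep the collection bounded on a busy box, something along these lines
should also work (just a sketch, assuming a bpftrace new enough for
interval probes and the kstack depth argument; the 60 second window and
stack depth of 8 are arbitrary):
bpftrace -e 'kprobe:queued_spin_lock_slowpath { @[kstack(8)] = count(); }
             interval:s:60 { exit(); }' -o traces
The per-stack counts get dumped to the output file on exit, so the
hottest call chains can be picked out from there.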
As for clear_shadow_entry mentioned in the opening mail, the content is:
spin_lock(&mapping->host->i_lock);
xa_lock_irq(&mapping->i_pages);
__clear_shadow_entry(mapping, index, entry);
xa_unlock_irq(&mapping->i_pages);
if (mapping_shrinkable(mapping))
        inode_add_lru(mapping->host);
spin_unlock(&mapping->host->i_lock);
so for all I know it's all about the xarray thing, not the i_lock per se.
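If it helps, how hot that path itself is can be eyeballed with something
like the below (again just a sketch; clear_shadow_entry is static, so this
only works if it did not get inlined and still shows up in kallsyms):
bpftrace -e 'kprobe:clear_shadow_entry { @calls = count(); }
             interval:s:1 { print(@calls); clear(@calls); }'
A high per-second rate there, combined with the slowpath stacks from the
trace above, would point at the per-entry lock/unlock cycle in this spot.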
--
Mateusz Guzik <mjguzik gmail.com>