[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YZ/QFWzt/XbsLCqR@casper.infradead.org>
Date: Thu, 25 Nov 2021 18:04:05 +0000
From: Matthew Wilcox <willy@...radead.org>
To: Hao Lee <haolee.swjtu@...il.com>
Cc: Michal Hocko <mhocko@...e.com>, Linux MM <linux-mm@...ck.org>,
Johannes Weiner <hannes@...xchg.org>, vdavydov.dev@...il.com,
Shakeel Butt <shakeelb@...gle.com>, cgroups@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: reduce spinlock contention in release_pages()
On Thu, Nov 25, 2021 at 08:02:38AM +0000, Hao Lee wrote:
> On Thu, Nov 25, 2021 at 03:30:44AM +0000, Matthew Wilcox wrote:
> > On Thu, Nov 25, 2021 at 11:24:02AM +0800, Hao Lee wrote:
> > > On Thu, Nov 25, 2021 at 12:31 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > We do batch currently so no single task should be
> > > > able to monopolize the cpu for too long. Why this is not sufficient?
> > >
> > > uncharge and unref indeed take advantage of the batch process, but
> > > del_from_lru needs more time to complete. Several tasks will contend
> > > spinlock in the loop if nr is very large.
> >
> > Is SWAP_CLUSTER_MAX too large? Or does your architecture's spinlock
> > implementation need to be fixed?
> >
>
> My testing server is x86_64 with 5.16-rc2. The spinlock should be normal.
>
> I think lock_batch is not the point. lock_batch only break spinning time
> into small parts, but it doesn't reduce spinning time. The thing may get
> worse if lock_batch is very small.
OK. So if I understand right, you've got a lot of processes all
calling exit_mmap() at the same time, which eventually becomes calls to
unmap_vmas(), unmap_single_vma(), unmap_page_range(), zap_pte_range(),
tlb_flush_mmu(), tlb_batch_pages_flush(), free_pages_and_swap_cache(),
release_pages(), and then you see high contention on the LRU lock.
Your use-case doesn't seem to mind sleeping (after all, these processes
are exiting). So we could put a semaphore in exit_mmap() to limit the
number of simultaneous callers to unmap_vmas(). Do you want to try
that out and see if it solves your problem? You might want to make it
a counting semaphore (eg permit two tasks to exit at once) rather than
a mutex. But maybe a mutex is just fine.
Powered by blists - more mailing lists