Message-ID: <CAKEwX=MBts2mGgTE__VP-ZVMrMFTzQnbTAkMPTJs3KNRQ2QDjg@mail.gmail.com>
Date: Tue, 9 Apr 2024 10:52:40 -0700
From: Nhat Pham <nphamcs@...il.com>
To: Zhaoyu Liu <liuzhaoyu.zackary@...edance.com>
Cc: "Huang, Ying" <ying.huang@...el.com>, Andrew Morton <akpm@...ux-foundation.org>, ryncsn@...il.com,
songmuchun@...edance.com, david@...hat.com, chrisl@...nel.org,
guo.ziliang@....com.cn, yosryahmed@...gle.com, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH v2] mm: swap: prejudgement swap_has_cache to avoid page allocation
On Tue, Apr 9, 2024 at 7:57 AM Zhaoyu Liu
<liuzhaoyu.zackary@...edance.com> wrote:
>
> On Tue, Apr 09, 2024 at 09:07:29AM +0800, Huang, Ying wrote:
> > Andrew Morton <akpm@...ux-foundation.org> writes:
> >
> > > On Mon, 8 Apr 2024 20:14:39 +0800 Zhaoyu Liu <liuzhaoyu.zackary@...edance.com> wrote:
> > >
> > >> Based on qemu arm64 - latest kernel + 100M memory + 1024M swapfile.
> > >> Create a 1G anonymous mmap, set it to shared, and have two processes
> > >> randomly access the shared memory. When they race on the swap cache,
> > >> on average, each "alloc_pages_mpol + swapcache_prepare + folio_put"
> > >> took about 1475 us.
> > >
> > > And what effect does this patch have upon the measured time? And upon
> > > overall runtime?
> >
> > And the patch will cause increased lock contention; please test with
> > more processes, and perhaps with an HDD swap device too.
>
> Hi Ying,
>
> Thank you for your suggestion.
> It may indeed cause some lock contention, as Kairui mentioned earlier.
>
> If so, would something like the following be acceptable?
> ---
> unsigned char swap_map, mapcount, hascache;
> ...
> /* Read the raw value of si->swap_map[offset] */
> swap_map = __swap_map(si, entry);
> mapcount = swap_map & ~SWAP_HAS_CACHE;
> if (!mapcount && swap_slot_cache_enabled)
> ...
> hascache = swap_map & SWAP_HAS_CACHE;
> /* The entry is very likely being added to the swap cache */
> if (mapcount && hascache)
>         goto skip_alloc;
> ...
> ---
> This way, no additional locks are taken.
>
Hmm, so this is a lockless check now? Could someone with more
expertise in the Linux kernel memory model double-check that the state
we're observing here is even valid? It looks like we're performing an
unguarded, unsynchronized, non-atomic read with the possibility of a
concurrent write - is there a chance we might see partial or invalid
results?
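
If the race is intentional, I'd at least expect the read to be
annotated. A minimal sketch of what I have in mind - purely
illustrative, reusing the __swap_map() name from your snippet:

---
/*
 * Illustrative sketch only, not the actual patch: annotate the racy
 * read of the swap_map byte. A single byte load cannot tear, but
 * READ_ONCE() documents the intentional data race and keeps KCSAN
 * quiet.
 */
static inline unsigned char __swap_map(struct swap_info_struct *si,
				       swp_entry_t entry)
{
	return READ_ONCE(si->swap_map[swp_offset(entry)]);
}
---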
Could you also test with zswap enabled (and perhaps with zswap
shrinker enabled)?
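(Assuming a recent enough kernel, both can be toggled at runtime via
/sys/module/zswap/parameters/enabled and
/sys/module/zswap/parameters/shrinker_enabled.)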