Message-ID: <87ilcl2m21.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Mon, 22 May 2023 09:26:46 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Chris Li <chrisl@...nel.org>
Cc: David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Hugh Dickins <hughd@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Minchan Kim <minchan@...nel.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Yang Shi <shy828301@...il.com>, Yu Zhao <yuzhao@...gle.com>
Subject: Re: [PATCH] swap: cleanup get/put_swap_device usage

Chris Li <chrisl@...nel.org> writes:
> On Wed, May 17, 2023 at 08:23:18AM +0800, Huang, Ying wrote:
>> David Hildenbrand <david@...hat.com> writes:
>>
>> > On 16.05.23 07:29, Huang Ying wrote:
>> >> The general rule for using a swap entry is as follows.
>> >> When we get a swap entry, if there is no other way to prevent
>> >> swapoff, such as the page lock for the swap cache, the page table
>> >> lock, etc., the swap entry may become invalid because of swapoff.
>> >> Then, we need to enclose all swap-related function calls with
>> >> get_swap_device() and put_swap_device(), unless the swap functions
>> >> call get/put_swap_device() by themselves.
>> >> Add the rule as a comment of get_swap_device(), and clean up some
>> >> functions which call get/put_swap_device().
>> >> 1. Enlarge the get/put_swap_device() protection range in
>> >> __read_swap_cache_async(). This makes the function a little easier
>> >> to understand because we don't need to consider swapoff. And it
>> >> makes it possible to remove the get/put_swap_device() calls in some
>> >> functions called by __read_swap_cache_async().
>> >> 2. Remove get/put_swap_device() in __swap_count(). It is called
>> >> only from do_swap_page(), which already encloses the call with
>> >> get/put_swap_device().
>> >> 3. Remove get/put_swap_device() in __swp_swapcount(). It is called
>> >> only from __read_swap_cache_async(), which already encloses the
>> >> call with get/put_swap_device().
>> >> 4. Remove get/put_swap_device() in __swap_duplicate(). It is
>> >> called by
>> >> - swap_shmem_alloc(): the swap cache is locked.
>> >> - copy_nonpresent_pte() -> swap_duplicate() and
>> >> try_to_unmap_one() -> swap_duplicate(): the page table lock is
>> >> held.
>> >> - __read_swap_cache_async() -> swapcache_prepare(): enclosed with
>> >> get/put_swap_device() already.
>> >> Other get/put_swap_device() usages are checked too.
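
The usage pattern the rule describes looks roughly like below (a
simplified sketch for illustration, not the exact kernel code; error
handling elided):

	struct swap_info_struct *si;

	si = get_swap_device(entry);
	if (!si)
		return;		/* raced with swapoff, entry is invalid */
	/* ... entry is stable here, swap functions may be called ... */
	put_swap_device(si);
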
>> >
>> > I suggest splitting this patch up into the logical pieces you
>> > already outlined here.
>
> Agree with David here.
>
>>
>> OK. Will do that in the next version.
>
> Your patch makes sense to me.
>
> Looking forward to your next version.
>
> BTW, this is not related to your patch, but while looking at it I
> noticed that we have too many swap count functions.
> The naming scheme is very confusing.
>
> 1) swap_count(): just masks out SWAP_HAS_CACHE.
>
> 2) __swap_count(): the underscore prefix suggests it is more
> internal, but __swap_count() calls swap_count(). It is basically
> swap_count() with a device lookup.
>
> 3) swap_swapcount(): similar to __swap_count(), but with cluster
> level locking if possible; otherwise it falls back to device level
> locking.
>
> 4) __swp_swapcount(): swap_swapcount() with a device lookup; does
> not consider count continuation. Again, this function is more
> external while swap_swapcount() is more internal.
>
> 5) swp_swapcount(): similar to __swp_swapcount(), but returns the
> exact count, taking count continuation into account.
>
> We should have a more consistent naming scheme for the swap count
> functions: device level, then cluster level, then entry level.
Yes. The original naming is confusing.
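
For reference, the first two look roughly like this in mm/swapfile.c
(simplified; the version before this cleanup):

	/* 1) mask out SWAP_HAS_CACHE from a swap_map slot */
	static inline unsigned char swap_count(unsigned char ent)
	{
		return ent & ~SWAP_HAS_CACHE;
	}

	/* 2) swap_count() plus the device lookup */
	int __swap_count(swp_entry_t entry)
	{
		struct swap_info_struct *si;
		pgoff_t offset = swp_offset(entry);
		int count = 0;

		si = get_swap_device(entry);
		if (si) {
			count = swap_count(si->swap_map[offset]);
			put_swap_device(si);
		}
		return count;
	}
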
> Also, I consider count continuation internal to the current
> swap index implementation. If we had an alternative swap file
> implementation, we might not have count continuation at all.
There are some difficulties in hiding continuation completely. For
example, we want to call add_swap_count_continuation() in a non-atomic
context in copy_pte_range(), while the fast path calls swap_duplicate()
in an atomic context (via copy_nonpresent_pte()).
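
Roughly (a simplified sketch of the relevant mm/memory.c logic, not
the exact code):

	/* copy_nonpresent_pte(), under the page table lock (atomic): */
	if (swap_duplicate(entry) < 0)
		return -EIO;	/* count overflow, continuation needed */

	/* copy_pte_range(), after the page table lock is dropped: */
	if (ret == -EIO) {
		/* may sleep: allocates a continuation page */
		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
			return -ENOMEM;
		/* then retry copying this pte */
	}
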
Best Regards,
Huang, Ying