[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <877cgy2ifu.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Tue, 16 Apr 2024 09:40:53 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Barry Song <21cnbao@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org,
baolin.wang@...ux.alibaba.com, chrisl@...nel.org, david@...hat.com,
hanchuanhua@...o.com, hannes@...xchg.org, hughd@...gle.com,
kasong@...cent.com, ryan.roberts@....com, surenb@...gle.com,
v-songbaohua@...o.com, willy@...radead.org, xiang@...nel.org,
yosryahmed@...gle.com, yuzhao@...gle.com, ziy@...dia.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 1/5] mm: swap: introduce swap_free_nr() for batched
swap_free()
Barry Song <21cnbao@...il.com> writes:
> On Mon, Apr 15, 2024 at 8:53 PM Huang, Ying <ying.huang@...el.com> wrote:
>>
>> Barry Song <21cnbao@...il.com> writes:
>>
>> > On Mon, Apr 15, 2024 at 8:21 PM Huang, Ying <ying.huang@...el.com> wrote:
>> >>
>> >> Barry Song <21cnbao@...il.com> writes:
>> >>
>> >> > On Mon, Apr 15, 2024 at 6:19 PM Huang, Ying <ying.huang@...el.com> wrote:
>> >> >>
>> >> >> Barry Song <21cnbao@...il.com> writes:
>> >> >>
>> >> >> > From: Chuanhua Han <hanchuanhua@...o.com>
>> >> >> >
>> >> >> > While swapping in a large folio, we need to free swaps related to the whole
>> >> >> > folio. To avoid frequently acquiring and releasing swap locks, it is better
>> >> >> > to introduce an API for batched free.
>> >> >> >
>> >> >> > Signed-off-by: Chuanhua Han <hanchuanhua@...o.com>
>> >> >> > Co-developed-by: Barry Song <v-songbaohua@...o.com>
>> >> >> > Signed-off-by: Barry Song <v-songbaohua@...o.com>
>> >> >> > ---
>> >> >> > include/linux/swap.h | 5 +++++
>> >> >> > mm/swapfile.c | 51 ++++++++++++++++++++++++++++++++++++++++++++
>> >> >> > 2 files changed, 56 insertions(+)
>> >> >> >
>> >> >> > diff --git a/include/linux/swap.h b/include/linux/swap.h
>> >> >> > index 11c53692f65f..b7a107e983b8 100644
>> >> >> > --- a/include/linux/swap.h
>> >> >> > +++ b/include/linux/swap.h
>> >> >> > @@ -483,6 +483,7 @@ extern void swap_shmem_alloc(swp_entry_t);
>> >> >> > extern int swap_duplicate(swp_entry_t);
>> >> >> > extern int swapcache_prepare(swp_entry_t);
>> >> >> > extern void swap_free(swp_entry_t);
>> >> >> > +extern void swap_free_nr(swp_entry_t entry, int nr_pages);
>> >> >> > extern void swapcache_free_entries(swp_entry_t *entries, int n);
>> >> >> > extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
>> >> >> > int swap_type_of(dev_t device, sector_t offset);
>> >> >> > @@ -564,6 +565,10 @@ static inline void swap_free(swp_entry_t swp)
>> >> >> > {
>> >> >> > }
>> >> >> >
>> >> >> > +void swap_free_nr(swp_entry_t entry, int nr_pages)
>> >> >> > +{
>> >> >> > +}
>> >> >> > +
>> >> >> > static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
>> >> >> > {
>> >> >> > }
>> >> >> > diff --git a/mm/swapfile.c b/mm/swapfile.c
>> >> >> > index 28642c188c93..f4c65aeb088d 100644
>> >> >> > --- a/mm/swapfile.c
>> >> >> > +++ b/mm/swapfile.c
>> >> >> > @@ -1356,6 +1356,57 @@ void swap_free(swp_entry_t entry)
>> >> >> > __swap_entry_free(p, entry);
>> >> >> > }
>> >> >> >
>> >> >> > +/*
>> >> >> > + * Free up the maximum number of swap entries at once to limit the
>> >> >> > + * maximum kernel stack usage.
>> >> >> > + */
>> >> >> > +#define SWAP_BATCH_NR (SWAPFILE_CLUSTER > 512 ? 512 : SWAPFILE_CLUSTER)
>> >> >> > +
>> >> >> > +/*
>> >> >> > + * Called after swapping in a large folio,
>> >> >>
>> >> >> IMHO, it's not good to document the caller in the function definition.
>> >> >> Because this will discourage function reusing.
>> >> >
>> >> > ok. right now there is only one user that is why it is added. but i agree
>> >> > we can actually remove this.
>> >> >
>> >> >>
>> >> >> > batched free swap entries
>> >> >> > + * for this large folio, entry should be for the first subpage and
>> >> >> > + * its offset is aligned with nr_pages
>> >> >>
>> >> >> Why do we need this?
>> >> >
>> >> > This is a fundamental requirement for the existing kernel, folio's
>> >> > swap offset is naturally aligned from the first moment add_to_swap
>> >> > to add swapcache's xa. so this comment is describing the existing
>> >> > fact. In the future, if we want to support swap-out folio to discontiguous
>> >> > and not-aligned offsets, we can't pass entry as the parameter, we should
>> >> > instead pass ptep or another different data struct which can connect
>> >> > multiple discontiguous swap offsets.
>> >> >
>> >> > I feel like we only need "for this large folio, entry should be for
>> >> > the first subpage" and drop "and its offset is aligned with nr_pages",
>> >> > the latter is not important to this context at all.
>> >>
>> >> IIUC, all these are requirements of the only caller now, not the
>> >> function itself. If only part of the all swap entries of a mTHP are
>> >> called with swap_free_nr(), can swap_free_nr() still do its work? If
>> >> so, why not make swap_free_nr() as general as possible?
>> >
>> > right , i believe we can make swap_free_nr() as general as possible.
>> >
>> >>
>> >> >>
>> >> >> > + */
>> >> >> > +void swap_free_nr(swp_entry_t entry, int nr_pages)
>> >> >> > +{
>> >> >> > + int i, j;
>> >> >> > + struct swap_cluster_info *ci;
>> >> >> > + struct swap_info_struct *p;
>> >> >> > + unsigned int type = swp_type(entry);
>> >> >> > + unsigned long offset = swp_offset(entry);
>> >> >> > + int batch_nr, remain_nr;
>> >> >> > + DECLARE_BITMAP(usage, SWAP_BATCH_NR) = { 0 };
>> >> >> > +
>> >> >> > + /* all swap entries are within a cluster for mTHP */
>> >> >> > + VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);
>> >> >> > +
>> >> >> > + if (nr_pages == 1) {
>> >> >> > + swap_free(entry);
>> >> >> > + return;
>> >> >> > + }
>> >> >>
>> >> >> Is it possible to unify swap_free() and swap_free_nr() into one function
>> >> >> with acceptable performance? IIUC, the general rule in mTHP effort is
>> >> >> to avoid duplicate functions between mTHP and normal small folio.
>> >> >> Right?
>> >> >
>> >> > I don't see why.
>> >>
>> >> Because duplicated implementation are hard to maintain in the long term.
>> >
>> > sorry, i actually meant "I don't see why not", for some reason, the "not"
>> > was missed. Obviously I meant "why not", there was a "but" after it :-)
>> >
>> >>
>> >> > but we have lots of places calling swap_free(), we may
>> >> > have to change them all to call swap_free_nr(entry, 1); the other possible
>> >> > way is making swap_free() a wrapper of swap_free_nr() always using
>> >> > 1 as the argument. In both cases, we are changing the semantics of
>> >> > swap_free_nr() to partially freeing large folio cases and have to drop
>> >> > "entry should be for the first subpage" then.
>> >> >
>> >> > Right now, the semantics is
>> >> > * swap_free_nr() for an entire large folio;
>> >> > * swap_free() for one entry of either a large folio or a small folio
>> >>
>> >> As above, I don't think the these semantics are important for
>> >> swap_free_nr() implementation.
>> >
>> > right. I agree. If we are ready to change all those callers, nothing
>> > can stop us from removing swap_free().
>> >
>> >>
>> >> >>
>> >> >> > +
>> >> >> > + remain_nr = nr_pages;
>> >> >> > + p = _swap_info_get(entry);
>> >> >> > + if (p) {
>> >> >> > + for (i = 0; i < nr_pages; i += batch_nr) {
>> >> >> > + batch_nr = min_t(int, SWAP_BATCH_NR, remain_nr);
>> >> >> > +
>> >> >> > + ci = lock_cluster_or_swap_info(p, offset);
>> >> >> > + for (j = 0; j < batch_nr; j++) {
>> >> >> > + if (__swap_entry_free_locked(p, offset + i * SWAP_BATCH_NR + j, 1))
>> >> >> > + __bitmap_set(usage, j, 1);
>> >> >> > + }
>> >> >> > + unlock_cluster_or_swap_info(p, ci);
>> >> >> > +
>> >> >> > + for_each_clear_bit(j, usage, batch_nr)
>> >> >> > + free_swap_slot(swp_entry(type, offset + i * SWAP_BATCH_NR + j));
>> >> >> > +
>> >> >> > + bitmap_clear(usage, 0, SWAP_BATCH_NR);
>> >> >> > + remain_nr -= batch_nr;
>> >> >> > + }
>> >> >> > + }
>> >> >> > +}
>> >> >> > +
>> >> >> > /*
>> >> >> > * Called after dropping swapcache to decrease refcnt to swap entries.
>> >> >> > */
>> >> >>
>> >> >> put_swap_folio() implements batching in another method. Do you think
>> >> >> that it's good to use the batching method in that function here? It
>> >> >> avoids to use bitmap operations and stack space.
>> >> >
>> >> > Chuanhua has strictly limited the maximum stack usage to several
>> >> > unsigned long,
>> >>
>> >> 512 / 8 = 64 bytes.
>> >>
>> >> So, not trivial.
>> >>
>> >> > so this should be safe. on the other hand, i believe this
>> >> > implementation is more efficient, as put_swap_folio() might lock/
>> >> > unlock much more often whenever __swap_entry_free_locked returns
>> >> > 0.
>> >>
>> >> There are 2 most common use cases,
>> >>
>> >> - all swap entries have usage count == 0
>> >> - all swap entries have usage count != 0
>> >>
>> >> In both cases, we only need to lock/unlock once. In fact, I didn't
>> >> find possible use cases other than above.
>> >
>> > i guess the point is free_swap_slot() shouldn't be called within
>> > lock_cluster_or_swap_info? so when we are freeing nr_pages slots,
>> > we'll have to unlock and lock nr_pages times? and this is the most
>> > common scenario.
>>
>> No. In put_swap_folio(), free_entries is either SWAPFILE_CLUSTER (that
>> is, nr_pages) or 0. These are the most common cases.
>>
>
> i am actually talking about the below code path,
>
> void put_swap_folio(struct folio *folio, swp_entry_t entry)
> {
> ...
>
> ci = lock_cluster_or_swap_info(si, offset);
> ...
> for (i = 0; i < size; i++, entry.val++) {
> if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
> unlock_cluster_or_swap_info(si, ci);
> free_swap_slot(entry);
> if (i == size - 1)
> return;
> lock_cluster_or_swap_info(si, offset);
> }
> }
> unlock_cluster_or_swap_info(si, ci);
> }
>
> but i guess you are talking about the below code path:
>
> void put_swap_folio(struct folio *folio, swp_entry_t entry)
> {
> ...
>
> ci = lock_cluster_or_swap_info(si, offset);
> if (size == SWAPFILE_CLUSTER) {
> map = si->swap_map + offset;
> for (i = 0; i < SWAPFILE_CLUSTER; i++) {
> val = map[i];
> VM_BUG_ON(!(val & SWAP_HAS_CACHE));
> if (val == SWAP_HAS_CACHE)
> free_entries++;
> }
> if (free_entries == SWAPFILE_CLUSTER) {
> unlock_cluster_or_swap_info(si, ci);
> spin_lock(&si->lock);
> mem_cgroup_uncharge_swap(entry, SWAPFILE_CLUSTER);
> swap_free_cluster(si, idx);
> spin_unlock(&si->lock);
> return;
> }
> }
> }
I am talking about both code paths. In 2 most common cases,
__swap_entry_free_locked() will return 0 or !0 for all entries in range.
> we are mTHP, so we can't assume our size is SWAPFILE_CLUSTER?
> or you want to check free_entries == "1 << swap_entry_order(folio_order(folio))"
> instead of SWAPFILE_CLUSTER for the "for (i = 0; i < size; i++, entry.val++)"
> path?
Just replace SWAPFILE_CLUSTER with "nr_pages" in your code.
>
>> >>
>> >> And, we should add batching in __swap_entry_free(). That will help
>> >> free_swap_and_cache_nr() too.
>
> Chris Li and I actually discussed it before, while I completely
> agree this can be batched. but i'd like to defer this as an incremental
> patchset later to keep this swapcache-refault small.
OK.
>>
>> Please consider this too.
--
Best Regards,
Huang, Ying
Powered by blists - more mailing lists