Message-ID: <CAF8kJuOwPMJSsR2+q53gQTjWT9b0XS+AMZGJMOq1veut1OogWA@mail.gmail.com>
Date: Tue, 21 Nov 2023 00:02:14 -0800
From: Chris Li <chrisl@...nel.org>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: Kairui Song <ryncsn@...il.com>, linux-mm@...ck.org,
Kairui Song <kasong@...cent.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 09/24] mm/swap: inline __swap_count
On Sun, Nov 19, 2023 at 11:43 PM Huang, Ying <ying.huang@...el.com> wrote:
>
> Kairui Song <ryncsn@...il.com> writes:
>
> > From: Kairui Song <kasong@...cent.com>
> >
> > There is only one caller left in the swap subsystem, so it can be
> > inlined smoothly, avoiding the extra memory access and function call
> > overhead.
> >
> > Signed-off-by: Kairui Song <kasong@...cent.com>
> > ---
> > include/linux/swap.h | 6 ------
> > mm/swap_state.c | 6 +++---
> > mm/swapfile.c | 8 --------
> > 3 files changed, 3 insertions(+), 17 deletions(-)
> >
> > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > index 2401990d954d..64a37819a9b3 100644
> > --- a/include/linux/swap.h
> > +++ b/include/linux/swap.h
> > @@ -485,7 +485,6 @@ int swap_type_of(dev_t device, sector_t offset);
> > int find_first_swap(dev_t *device);
> > extern unsigned int count_swap_pages(int, int);
> > extern sector_t swapdev_block(int, pgoff_t);
> > -extern int __swap_count(swp_entry_t entry);
> > extern int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry);
> > extern int swp_swapcount(swp_entry_t entry);
> > extern struct swap_info_struct *page_swap_info(struct page *);
> > @@ -559,11 +558,6 @@ static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
> > {
> > }
> >
> > -static inline int __swap_count(swp_entry_t entry)
> > -{
> > - return 0;
> > -}
> > -
> > static inline int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry)
> > {
> > return 0;
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index fb78f7f18ed7..d87c20f9f7ec 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -316,9 +316,9 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
> > release_pages(pages, nr);
> > }
> >
> > -static inline bool swap_use_no_readahead(struct swap_info_struct *si, swp_entry_t entry)
> > +static inline bool swap_use_no_readahead(struct swap_info_struct *si, pgoff_t offset)
> > {
> > - return data_race(si->flags & SWP_SYNCHRONOUS_IO) && __swap_count(entry) == 1;
> > + return data_race(si->flags & SWP_SYNCHRONOUS_IO) && swap_count(si->swap_map[offset]) == 1;
> > }
> >
> > static inline bool swap_use_vma_readahead(struct swap_info_struct *si)
> > @@ -928,7 +928,7 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> >
> > si = swp_swap_info(entry);
> > mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> > - if (swap_use_no_readahead(si, entry)) {
> > + if (swap_use_no_readahead(si, swp_offset(entry))) {
> > page = swapin_no_readahead(entry, gfp_mask, mpol, ilx, vmf->vma->vm_mm);
> > cached = false;
> > } else if (swap_use_vma_readahead(si)) {
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index a8ae472ed2b6..e15a6c464a38 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -1431,14 +1431,6 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
> > spin_unlock(&p->lock);
> > }
> >
> > -int __swap_count(swp_entry_t entry)
> > -{
> > - struct swap_info_struct *si = swp_swap_info(entry);
> > - pgoff_t offset = swp_offset(entry);
> > -
> > - return swap_count(si->swap_map[offset]);
> > -}
> > -
>
> I'd rather keep __swap_count() in its original place, together with the
> other swap-count-related functions. Also, si->swap_map[] was hidden
> inside swapfile.c before. I don't think this change will bring any real
> performance improvement.
I agree with Ying here. This does not seem to have enough value to
justify a patch by itself.
Chris