Message-ID: <Z71kB6iC50io8zwS@MiWiFi-R3L-srv>
Date: Tue, 25 Feb 2025 14:32:39 +0800
From: Baoquan He <bhe@...hat.com>
To: Kairui Song <kasong@...cent.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Chris Li <chrisl@...nel.org>, Barry Song <v-songbaohua@...o.com>,
Hugh Dickins <hughd@...gle.com>,
Yosry Ahmed <yosryahmed@...gle.com>,
"Huang, Ying" <ying.huang@...ux.alibaba.com>,
Nhat Pham <nphamcs@...il.com>, Johannes Weiner <hannes@...xchg.org>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kalesh Singh <kaleshsingh@...gle.com>,
Matthew Wilcox <willy@...radead.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 4/7] mm, swap: don't update the counter up-front
On 02/25/25 at 02:02am, Kairui Song wrote:
> From: Kairui Song <kasong@...cent.com>
>
> Updating the counter before the allocation was useful for avoiding an
> unnecessary scan when the device is full: the allocation aborts early
> if the counter indicates the device is full. But that is an uncommon
> case, and scanning a full device is now very fast, so the up-front
> update is no longer helpful.
>
> Remove it and simplify the slot allocation logic.
>
> Signed-off-by: Kairui Song <kasong@...cent.com>
> ---
> mm/swapfile.c | 18 ++----------------
> 1 file changed, 2 insertions(+), 16 deletions(-)
Reviewed-by: Baoquan He <bhe@...hat.com>
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 6f2de59c6355..db836670c334 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1201,22 +1201,10 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
> int order = swap_entry_order(entry_order);
> unsigned long size = 1 << order;
> struct swap_info_struct *si, *next;
> - long avail_pgs;
> int n_ret = 0;
> int node;
>
> spin_lock(&swap_avail_lock);
> -
> - avail_pgs = atomic_long_read(&nr_swap_pages) / size;
> - if (avail_pgs <= 0) {
> - spin_unlock(&swap_avail_lock);
> - goto noswap;
> - }
> -
> - n_goal = min3((long)n_goal, (long)SWAP_BATCH, avail_pgs);
> -
> - atomic_long_sub(n_goal * size, &nr_swap_pages);
> -
> start_over:
> node = numa_node_id();
> plist_for_each_entry_safe(si, next, &swap_avail_heads[node], avail_lists[node]) {
> @@ -1250,10 +1238,8 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
> spin_unlock(&swap_avail_lock);
>
> check_out:
> - if (n_ret < n_goal)
> - atomic_long_add((long)(n_goal - n_ret) * size,
> - &nr_swap_pages);
> -noswap:
> + atomic_long_sub(n_ret * size, &nr_swap_pages);
> +
> return n_ret;
> }
>
> --
> 2.48.1
>
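
As an aside for anyone following the thread: the change boils down to
subtracting only what was actually allocated, after the allocation,
instead of reserving n_goal up front and refunding the shortfall at
check_out. A tiny userspace model of the two schemes, just to make the
difference explicit (nr_pages, device_alloc and the numbers are made up
for illustration, this is not the kernel code):

  #include <stdatomic.h>
  #include <stdio.h>

  /* Global free-slot estimate, standing in for nr_swap_pages. */
  static atomic_long nr_pages = 100;

  /* Pretend the device can actually satisfy this many slots right now. */
  static long device_alloc(long want, long device_free)
  {
          return want < device_free ? want : device_free;
  }

  /* Old scheme: reserve n_goal up front, refund whatever was not allocated. */
  static long get_pages_old(long n_goal, long device_free)
  {
          long n_ret;

          if (atomic_load(&nr_pages) <= 0)
                  return 0;                       /* early abort on "full" */

          atomic_fetch_sub(&nr_pages, n_goal);    /* optimistic reservation */
          n_ret = device_alloc(n_goal, device_free);
          if (n_ret < n_goal)
                  atomic_fetch_add(&nr_pages, n_goal - n_ret); /* refund */
          return n_ret;
  }

  /* New scheme: allocate first, then subtract only what was handed out. */
  static long get_pages_new(long n_goal, long device_free)
  {
          long n_ret = device_alloc(n_goal, device_free);

          atomic_fetch_sub(&nr_pages, n_ret);
          return n_ret;
  }

  int main(void)
  {
          long got;

          got = get_pages_old(8, 5);
          printf("old: got %ld, counter now %ld\n", got,
                 atomic_load(&nr_pages));

          got = get_pages_new(8, 5);
          printf("new: got %ld, counter now %ld\n", got,
                 atomic_load(&nr_pages));
          return 0;
  }

The diff above is exactly that simplification: drop the reservation and
refund pair plus the noswap early exit, and do a single
atomic_long_sub(n_ret * size, &nr_swap_pages) at check_out.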