Message-ID: <bf10dcfb-8f21-4d0d-82fb-63efb167d169@bytedance.com>
Date: Mon, 22 Jan 2024 21:17:43 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Yosry Ahmed <yosryahmed@...gle.com>,
 Andrew Morton <akpm@...ux-foundation.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Nhat Pham <nphamcs@...il.com>,
 Chris Li <chrisl@...nel.org>, Huang Ying <ying.huang@...el.com>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: swap: update inuse_pages after all cleanups are
 done

On 2024/1/20 10:40, Yosry Ahmed wrote:
> In swap_range_free(), we update inuse_pages then do some cleanups (arch
> invalidation, zswap invalidation, swap cache cleanups, etc). During
> swapoff, try_to_unuse() uses inuse_pages to make sure all swap entries
> are freed. Make sure we only update inuse_pages after we are done with
> the cleanups.
> 
> In practice, this shouldn't matter, because swap_range_free() is called
> with the swap info lock held, and the swapoff code will spin for that
> lock after try_to_unuse() anyway.
> 
> The goal is to make it obvious and more future-proof that once
> try_to_unuse() returns, all cleanups are done. This also facilitates a
> following zswap cleanup patch which uses this fact to simplify
> zswap_swapoff().
> 
> Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>

Reviewed-by: Chengming Zhou <zhouchengming@...edance.com>

Thanks.
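
For anyone following along, here is a minimal userspace sketch of the
ordering concern (plain C11 atomics and pthreads, not kernel code; the
names are illustrative stand-ins, not the real mm/swapfile.c symbols):
if inuse_pages is published before the cleanups, a waiter in the
try_to_unuse() role can observe zero and return while cleanup work is
still in flight.

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <unistd.h>

	/* Hypothetical stand-ins for si->inuse_pages and cleanup state. */
	static atomic_long inuse_pages = 1;
	static atomic_int cleanups_done = 0;

	/* Models swap_range_free() before the patch: count, then clean. */
	static void *range_free(void *arg)
	{
		atomic_fetch_sub(&inuse_pages, 1); /* waiter may proceed now */
		usleep(1000);                      /* arch/zswap/cache cleanups */
		atomic_store(&cleanups_done, 1);
		return NULL;
	}

	/* Models try_to_unuse(): wait until all entries appear freed. */
	static void *unuse(void *arg)
	{
		while (atomic_load(&inuse_pages) != 0)
			;	/* spin */
		if (!atomic_load(&cleanups_done))
			puts("returned with cleanups still in flight");
		return NULL;
	}

	int main(void)
	{
		pthread_t waiter, freer;

		pthread_create(&waiter, NULL, unuse, NULL);
		pthread_create(&freer, NULL, range_free, NULL);
		pthread_join(waiter, NULL);
		pthread_join(freer, NULL);
		return 0;
	}

Moving the update after clear_shadow_from_swap_cache(), as the hunks
below do, closes that window.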

> ---
>  mm/swapfile.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 556ff7347d5f0..2fedb148b9404 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -737,8 +737,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
>  		if (was_full && (si->flags & SWP_WRITEOK))
>  			add_to_avail_list(si);
>  	}
> -	atomic_long_add(nr_entries, &nr_swap_pages);
> -	WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>  	if (si->flags & SWP_BLKDEV)
>  		swap_slot_free_notify =
>  			si->bdev->bd_disk->fops->swap_slot_free_notify;
> @@ -752,6 +750,8 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
>  		offset++;
>  	}
>  	clear_shadow_from_swap_cache(si->type, begin, end);
> +	atomic_long_add(nr_entries, &nr_swap_pages);
> +	WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
>  }
>  
>  static void set_cluster_next(struct swap_info_struct *si, unsigned long next)
