Message-ID: <19e1d864-a0b5-4854-9f10-56cf6be7638d@linux.alibaba.com>
Date: Mon, 30 Jun 2025 15:24:44 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Kairui Song <kasong@...cent.com>, linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>, Hugh Dickins
 <hughd@...gle.com>, Matthew Wilcox <willy@...radead.org>,
 Kemeng Shi <shikemeng@...weicloud.com>, Chris Li <chrisl@...nel.org>,
 Nhat Pham <nphamcs@...il.com>, Baoquan He <bhe@...hat.com>,
 Barry Song <baohua@...nel.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 5/7] mm/shmem, swap: never use swap cache and readahead
 for SWP_SYNCHRONOUS_IO



On 2025/6/27 14:20, Kairui Song wrote:
> From: Kairui Song <kasong@...cent.com>
> 
> Currently, if THP swapin fails for reasons such as a partially
> conflicting swap cache or ZSWAP being enabled, it falls back to cached
> swapin.
> 
> Right now the swap cache has a non-trivial overhead, and readahead is
> not helpful for SWP_SYNCHRONOUS_IO devices, so we should always skip
> the readahead and swap cache even if the swapin falls back to order 0.
> 
> So handle the fallback logic without falling back to the cached read.
> 
> As a side effect, also slightly tweak the behavior when the WARN_ON is
> triggered (shmem mapping is corrupted or the code is buggy): just
> return -EINVAL. This should be OK, as things are already wrong beyond
> recovery at that point.
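
IIUC, with this change even the order-0 fallback stays on the direct
swapin path, roughly like the sketch below (illustrative only; in the
patch the retry may live inside shmem_swapin_direct() itself rather than
in the caller):

	folio = shmem_swapin_direct(inode, vma, index, entry, order, gfp);
	if (IS_ERR(folio) && order > 0) {
		/*
		 * Large folio swapin failed: retry at order 0, still on
		 * the direct path (no readahead, no swap cache).
		 */
		folio = shmem_swapin_direct(inode, vma, index, entry, 0,
					    gfp);
	}

so the swap cache and readahead overhead described above is avoided on
both paths.
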
> 
> Signed-off-by: Kairui Song <kasong@...cent.com>
> ---
>   mm/shmem.c | 68 ++++++++++++++++++++++++++++++------------------------
>   1 file changed, 38 insertions(+), 30 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 5be9c905396e..5f2641fd1be7 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1975,13 +1975,15 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>   	return ERR_PTR(error);
>   }
>   
> -static struct folio *shmem_swap_alloc_folio(struct inode *inode,
> +static struct folio *shmem_swapin_direct(struct inode *inode,
>   		struct vm_area_struct *vma, pgoff_t index,
>   		swp_entry_t entry, int order, gfp_t gfp)
>   {
>   	struct shmem_inode_info *info = SHMEM_I(inode);
>   	int nr_pages = 1 << order;
>   	struct folio *new;
> +	pgoff_t offset;
> +	gfp_t swap_gfp;

Nit: The term 'swap' always reminds me of swap allocation :) But here
it's actually about allocating a folio. Would 'alloc_gfp' be a better
name? Otherwise, looks good to me.
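
With that rename, the declarations in the hunk above would read something
like this (just to illustrate the naming, based only on the context lines
quoted here):

	struct shmem_inode_info *info = SHMEM_I(inode);
	int nr_pages = 1 << order;
	struct folio *new;
	pgoff_t offset;
	gfp_t alloc_gfp;	/* gfp for the folio allocation, not for swap */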
