Message-ID: <8344980d-4c22-4694-9a76-2e5a7ada50cb@linux.alibaba.com>
Date: Mon, 6 Jan 2025 11:46:04 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: akpm@...ux-foundation.org, hughd@...gle.com, david@...hat.com,
wangkefeng.wang@...wei.com, kasong@...cent.com,
ying.huang@...ux.alibaba.com, 21cnbao@...il.com, ryan.roberts@....com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm: shmem: skip swapcache for swapin of synchronous
swap device
On 2025/1/2 21:10, Matthew Wilcox wrote:
> On Thu, Jan 02, 2025 at 04:40:17PM +0800, Baolin Wang wrote:
>> With fast swap devices (such as zram), swapin latency is crucial to applications.
>> For shmem swapin, similar to anonymous memory swapin, we can skip the swapcache
>> operation to improve swapin latency.
>
> OK, but now we have more complexity. Why can't we always skip the
> swapcache on swapin?
Skipping the swapcache is used to swap in shmem large folios, avoiding
splitting the large folios. Meanwhile, since the IO latency of
synchronous swap devices is relatively small, it won't cause an IO
latency amplification issue.

But for async swap devices, if we swap in a large folio in one go, I am
afraid the IO latency can be amplified. And I remember we still haven't
reached an agreement on this[1], so let's go step by step and start with
the sync swap devices first.
[1]
https://lore.kernel.org/linux-mm/874j7zfqkk.fsf@yhuang6-desk2.ccr.corp.intel.com/
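To illustrate the idea, a rough sketch of the swapin decision (not the
actual patch; the fallback helper shmem_swapin_cluster() and the exact
condition are assumptions based on the current mm/shmem.c and the
anonymous do_swap_page() path):

	si = get_swap_device(swap);
	if (si && data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
			__swap_count(swap) == 1) {
		/*
		 * Synchronous device and non-shared entry: allocate a
		 * (possibly large) folio and read it in directly,
		 * bypassing the swapcache so the large folio is not split.
		 */
		folio = shmem_swap_alloc_folio(inode, vma, index, swap,
				order, gfp);
	} else {
		/* Async device or shared entry: keep the swapcache path. */
		folio = shmem_swapin_cluster(swap, gfp, info, index);
	}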
> (Actually, I think we can always skip the
> swapcache on swapout too, but that's a different matter).
Good suggestion. I will take a detailed look.
>> +static struct folio *shmem_swap_alloc_folio(struct inode *inode, struct vm_area_struct *vma,
>> + pgoff_t index, swp_entry_t entry, int order, gfp_t gfp)
>
> Please wrap at 80 columns and use two tabs for indenting subsequent
> lines. ie:
>
> static struct folio *shmem_swap_alloc_folio(struct inode *inode,
> struct vm_area_struct *vma, pgoff_t index, swp_entry_t entry,
> int order, gfp_t gfp)
Sure. Thanks.