Message-ID: <87vakopk22.fsf@yhuang-dev.intel.com>
Date: Tue, 12 Sep 2017 13:23:01 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, kernel-team <kernel-team@....com>,
Ilya Dryomov <idryomov@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	"Huang, Ying" <ying.huang@...el.com>
Subject: Re: [PATCH 4/5] mm:swap: respect page_cluster for readahead
Minchan Kim <minchan@...nel.org> writes:
> page_cluster 0 means "we don't want readahead" so in the case,
> let's skip the readahead detection logic.
>
> Cc: "Huang, Ying" <ying.huang@...el.com>
> Signed-off-by: Minchan Kim <minchan@...nel.org>
> ---
> include/linux/swap.h | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 0f54b491e118..739d94397c47 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -427,7 +427,8 @@ extern bool has_usable_swap(void);
>
> static inline bool swap_use_vma_readahead(void)
> {
> - return READ_ONCE(swap_vma_readahead) && !atomic_read(&nr_rotate_swap);
> + return page_cluster > 0 && READ_ONCE(swap_vma_readahead)
> + && !atomic_read(&nr_rotate_swap);
> }
>
> /* Swap 50% full? Release swapcache more aggressively.. */
Now the readahead window size of the VMA based swap readahead is
controlled by /sys/kernel/mm/swap/vma_ra_max_order, while that of the
original swap readahead is controlled by the sysctl page_cluster.  It is
possible for anonymous memory to use VMA based swap readahead while
tmpfs uses the original swap readahead algorithm at the same time.  So I
think it is necessary to use different control knobs for these two
algorithms.  For example, if we want to disable readahead for tmpfs but
keep it for VMA based readahead, we can set page_cluster to 0 and
/sys/kernel/mm/swap/vma_ra_max_order to a non-zero value.  With your
change, this would become impossible.
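To illustrate the configuration I mean (a sketch only, using the sysctl
path and the sysfs knob as they exist in this patchset; not tested):

```shell
# Disable readahead for the original, swap-offset based path
# (used e.g. for tmpfs/shmem): its window is 2^page_cluster pages,
# so 0 means a single page, i.e. no readahead.
echo 0 > /proc/sys/vm/page_cluster

# Independently keep VMA based readahead enabled for anonymous
# pages, with a maximum window of 2^3 = 8 pages.
echo 3 > /sys/kernel/mm/swap/vma_ra_max_order
```

With the proposed change, page_cluster == 0 would also make
swap_use_vma_readahead() return false, so the second setting above
would have no effect.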
Best Regards,
Huang, Ying