Message-ID: <87a73f7d71.fsf@yhuang-dev.intel.com>
Date: Mon, 13 Apr 2020 21:00:34 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Andrea Righi <andrea.righi@...onical.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan@...nel.org>,
Anchal Agarwal <anchalag@...zon.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] mm: swap: use fixed-size readahead during swapoff
Andrea Righi <andrea.righi@...onical.com> writes:
[snip]
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index ebed37bbf7a3..c71abc8df304 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -20,6 +20,7 @@
> #include <linux/migrate.h>
> #include <linux/vmalloc.h>
> #include <linux/swap_slots.h>
> +#include <linux/oom.h>
> #include <linux/huge_mm.h>
>
> #include <asm/pgtable.h>
> @@ -507,6 +508,14 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> max_pages = 1 << READ_ONCE(page_cluster);
> if (max_pages <= 1)
> return 1;
> + /*
> + * If the current task is using too much memory or swapoff is running,
> + * simply use the max readahead size. Since we likely want to load a
> + * lot of pages back into memory, using a fixed-size max readahead can
> + * give better performance in this case.
> + */
> + if (oom_task_origin(current))
> + return max_pages;
>
> hits = atomic_xchg(&swapin_readahead_hits, 0);
> pages = __swapin_nr_pages(prev_offset, offset, hits, max_pages,
Think about this again. If my understanding is correct, the access
pattern during swapoff is sequential, so why doesn't swap readahead
work well here? Could you root-cause that first?
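For reference, the existing heuristic is supposed to ramp the window up
quickly when readahead pages are actually used. Below is a rough Python
model of that hits-based window logic (an assumption on my part,
paraphrased from __swapin_nr_pages() in mm/swap_state.c; the real kernel
code differs in detail, e.g. how hits are accounted per fault):

```python
# Rough model of the swapin readahead window heuristic (assumption:
# paraphrased from mm/swap_state.c __swapin_nr_pages(); not the exact
# kernel code).
def swapin_nr_pages_model(prev_offset, offset, hits, max_pages, prev_win):
    pages = hits + 2
    if pages == 2:
        # No hits to judge by: only read ahead for adjacent offsets.
        if offset not in (prev_offset + 1, prev_offset - 1):
            pages = 1
    else:
        # Round the window up to a power of two (at least 4).
        roundup = 4
        while roundup < pages:
            roundup <<= 1
        pages = roundup
    pages = min(pages, max_pages)
    # Don't shrink the window too fast.
    return max(pages, prev_win // 2)

# Fully sequential scan where every page of the previous window is hit:
# the window should ramp to max_pages within a few faults, so readahead
# ought to cover a truly sequential swapoff pattern.
win, offset = 1, 0
for _ in range(6):
    hits = win - 1          # every readahead page of the last window was hit
    win = swapin_nr_pages_model(offset, offset + win, hits, 8, win)
    offset += win
print(win)  # → 8, the max window with page_cluster = 3
```

If the model is right, a sequential swapoff should already reach the
max window after a handful of faults, which is why the question is
whether the hits accounting, not the window sizing, is what breaks down.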
Best Regards,
Huang, Ying