Message-ID: <87eeffufst.fsf@yhuang6-desk1.ccr.corp.intel.com>
Date: Tue, 13 Apr 2021 09:36:02 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Miaohe Lin <linmiaohe@...wei.com>
Cc: <akpm@...ux-foundation.org>, <hannes@...xchg.org>,
<mhocko@...e.com>, <iamjoonsoo.kim@....com>, <vbabka@...e.cz>,
<alex.shi@...ux.alibaba.com>, <willy@...radead.org>,
<minchan@...nel.org>, <richard.weiyang@...il.com>,
<hughd@...gle.com>, <tim.c.chen@...ux.intel.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH 5/5] mm/swap_state: fix swap_cluster_readahead() race
with swapoff
Miaohe Lin <linmiaohe@...wei.com> writes:
> swap_cluster_readahead() could race with swapoff and might dereference
> si->swap_file after it's released by swapoff. Close this race window by
> using get/put_swap_device() pair.
I think we should fix the callers instead, to avoid adding this overhead
to every readahead. do_swap_page() has already been fixed; we still need
to fix shmem_swapin().
Best Regards,
Huang, Ying
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
> ---
> mm/swap_state.c | 11 +++++++++--
> 1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 3bf0d0c297bc..eba6b0cf6cf9 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -626,12 +626,17 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> unsigned long offset = entry_offset;
> unsigned long start_offset, end_offset;
> unsigned long mask;
> - struct swap_info_struct *si = swp_swap_info(entry);
> + struct swap_info_struct *si;
> struct blk_plug plug;
> bool do_poll = true, page_allocated;
> struct vm_area_struct *vma = vmf->vma;
> unsigned long addr = vmf->address;
>
> + si = get_swap_device(entry);
> + /* In case we raced with swapoff. */
> + if (!si)
> + return NULL;
> +
> mask = swapin_nr_pages(offset) - 1;
> if (!mask)
> goto skip;
> @@ -673,7 +678,9 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
>
> lru_add_drain(); /* Push any new pages onto the LRU now */
> skip:
> - return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll);
> + page = read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll);
> + put_swap_device(si);
> + return page;
> }
>
> int init_swap_address_space(unsigned int type, unsigned long nr_pages)