Message-ID: <67107cc6-cc8a-c072-a323-b5c417fb45c6@nvidia.com>
Date: Tue, 16 Jul 2019 18:51:11 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Ralph Campbell <rcampbell@...dia.com>, <linux-mm@...ck.org>
CC: <linux-kernel@...r.kernel.org>,
Jérôme Glisse <jglisse@...hat.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Christoph Hellwig <hch@....de>,
Jason Gunthorpe <jgg@...lanox.com>, <stable@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 3/3] mm/hmm: Fix bad subpage pointer in try_to_unmap_one
On 7/16/19 5:14 PM, Ralph Campbell wrote:
> When migrating an anonymous private page to a ZONE_DEVICE private page,
> the source page->mapping and page->index fields are copied to the
> destination ZONE_DEVICE struct page and the page_mapcount() is increased.
> This is so rmap_walk() can be used to unmap and migrate the page back to
> system memory. However, try_to_unmap_one() computes the subpage pointer
> from a swap pte which computes an invalid page pointer and a kernel panic
> results such as:
>
> BUG: unable to handle page fault for address: ffffea1fffffffc8
>
> Currently, only single pages can be migrated to device private memory so
> no subpage computation is needed and it can be set to "page".
>
> Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
> Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
> Cc: "Jérôme Glisse" <jglisse@...hat.com>
> Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
> Cc: Mike Kravetz <mike.kravetz@...cle.com>
> Cc: Christoph Hellwig <hch@....de>
> Cc: Jason Gunthorpe <jgg@...lanox.com>
> Cc: <stable@...r.kernel.org>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> ---
> mm/rmap.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index e5dfe2ae6b0d..ec1af8b60423 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1476,6 +1476,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> * No need to invalidate here it will synchronize on
> * against the special swap migration pte.
> */
> + subpage = page;
> goto discard;
> }
The problem is clear, but the solution still leaves the code ever so slightly
more confusing, and it was already pretty difficult to begin with.
I still hold out hope for some comment documentation at least, and maybe
even just removing the subpage variable (as Jerome mentioned, offline) as
well.
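To make that concrete, here is roughly the kind of comment I have in mind,
shown in place in the hunk above. The surrounding lines (the guard condition
in particular) are paraphrased from the quoted context and from memory of
mm/rmap.c around this point, not copied from the tree, so treat this as a
sketch only:

	if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
	    is_zone_device_page(page)) {
		/* ... install the special device-private migration pte ... */

		/*
		 * No need to invalidate here it will synchronize on
		 * against the special swap migration pte.
		 *
		 * Only single (order-0) pages are migrated to device
		 * private memory, and the pte here is a swap entry, so
		 * computing subpage from the pte as we do for normal
		 * mappings would yield a bogus pointer.  The subpage is
		 * simply the page itself.
		 */
		subpage = page;
		goto discard;
	}
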
Jerome?
thanks,
--
John Hubbard
NVIDIA