Message-ID: <20180902065859.GE28695@350D>
Date: Sun, 2 Sep 2018 16:58:59 +1000
From: Balbir Singh <bsingharora@...il.com>
To: Jerome Glisse <jglisse@...hat.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Ralph Campbell <rcampbell@...dia.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
stable@...r.kernel.org
Subject: Re: [PATCH 3/7] mm/rmap: map_pte() was not handling private
ZONE_DEVICE page properly v2
On Fri, Aug 31, 2018 at 12:19:35PM -0400, Jerome Glisse wrote:
> On Fri, Aug 31, 2018 at 07:27:24PM +1000, Balbir Singh wrote:
> > On Thu, Aug 30, 2018 at 10:41:56AM -0400, jglisse@...hat.com wrote:
> > > From: Ralph Campbell <rcampbell@...dia.com>
> > >
> > > Private ZONE_DEVICE pages use a special pte entry and thus are not
> > > present. Properly handle this case in map_pte(); it is already handled
> > > in check_pte(), so the map_pte() part was most probably lost in a rebase.
> > >
> > > Without this patch the slow migration path cannot migrate private
> > > ZONE_DEVICE memory back to regular memory. This was found after stress
> > > testing migration back to system memory. It can ultimately lead the
> > > CPU into an infinite page fault loop on the special swap entry.
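For anyone following along: the "special pte entry" above is a
swap-style entry, which is why pte_present() is false for these pages.
Roughly, the helpers involved, abridged from include/linux/swapops.h of
this era (an illustrative sketch, not the verbatim source):

static inline swp_entry_t make_device_private_entry(struct page *page,
						    bool write)
{
	/* Encode the pfn under a device-private swap type */
	return swp_entry(write ? SWP_DEVICE_WRITE : SWP_DEVICE_READ,
			 page_to_pfn(page));
}

static inline bool is_device_private_entry(swp_entry_t entry)
{
	int type = swp_type(entry);

	return type == SWP_DEVICE_READ || type == SWP_DEVICE_WRITE;
}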
> > >
> > > Changes since v1:
> > > - properly lock pte directory in map_pte()
> > >
> > > Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
> > > Signed-off-by: Jérôme Glisse <jglisse@...hat.com>
> > > Cc: Andrew Morton <akpm@...ux-foundation.org>
> > > Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> > > Cc: Balbir Singh <bsingharora@...il.com>
> > > Cc: stable@...r.kernel.org
> > > ---
> > > mm/page_vma_mapped.c | 9 ++++++++-
> > > 1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > > index ae3c2a35d61b..bd67e23dce33 100644
> > > --- a/mm/page_vma_mapped.c
> > > +++ b/mm/page_vma_mapped.c
> > > @@ -21,7 +21,14 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
> > >  		if (!is_swap_pte(*pvmw->pte))
> > >  			return false;
> > >  	} else {
> > > -		if (!pte_present(*pvmw->pte))
> > > +		if (is_swap_pte(*pvmw->pte)) {
> > > +			swp_entry_t entry;
> > > +
> > > +			/* Handle un-addressable ZONE_DEVICE memory */
> > > +			entry = pte_to_swp_entry(*pvmw->pte);
> > > +			if (!is_device_private_entry(entry))
> > > +				return false;
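The quoted hunk is trimmed above; going by the diffstat's 8 insertions,
it falls back to the existing pte_present() check for everything else.
Reconstructed, the patched map_pte() reads roughly like this (a sketch
of the intent, not the verbatim v2 patch):

static bool map_pte(struct page_vma_mapped_walk *pvmw)
{
	pvmw->pte = pte_offset_map(pvmw->pmd, pvmw->address);
	if (!(pvmw->flags & PVMW_SYNC)) {
		if (pvmw->flags & PVMW_MIGRATION) {
			if (!is_swap_pte(*pvmw->pte))
				return false;
		} else {
			if (is_swap_pte(*pvmw->pte)) {
				swp_entry_t entry;

				/* Handle un-addressable ZONE_DEVICE memory */
				entry = pte_to_swp_entry(*pvmw->pte);
				if (!is_device_private_entry(entry))
					return false;
			} else if (!pte_present(*pvmw->pte))
				return false;
		}
	}
	/*
	 * Common exit: take the pte lock. Per the v2 changelog, the
	 * device-private case must fall through to here as well.
	 */
	pvmw->ptl = pte_lockptr(pvmw->vma->vm_mm, pvmw->pmd);
	spin_lock(pvmw->ptl);
	return true;
}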
> >
> > OK, so we skip unmapping this pte since it's effectively already
> > unmapped? That prevents try_to_unmap() from touching it, and it then
> > gets restored with the MIGRATE_PFN_MIGRATE flag cleared?
> >
> > Sounds like the right thing, if I understand it correctly.
>
> Well, not exactly: we do not skip it, we replace it with a migration
I think I overlooked the ! in !is_device_private_entry, so that seems
reasonable.
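For the archives, the "replace it with a migration ..." path lives in
try_to_unmap_one(). Heavily abridged from mm/rmap.c of this era (a
sketch only; TLB/dirty handling and error paths elided):

	if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
	    is_zone_device_page(page)) {
		swp_entry_t entry;
		pte_t swp_pte;

		pteval = ptep_get_and_clear(mm, pvmw.address, pvmw.pte);

		/*
		 * Store the pfn of the page in a special migration pte.
		 * do_swap_page() will wait until the migration pte is
		 * removed and then restart fault handling.
		 */
		entry = make_migration_entry(page, 0);
		swp_pte = swp_entry_to_pte(entry);
		if (pte_soft_dirty(pteval))
			swp_pte = pte_swp_mksoft_dirty(swp_pte);
		set_pte_at(mm, pvmw.address, pvmw.pte, swp_pte);
		goto discard;
	}

So if map_pte() refuses the special swap pte, page_vma_mapped_walk()
never gets here, the page stays mapped, and migrate_vma() clears
MIGRATE_PFN_MIGRATE for it; the CPU then keeps faulting on the entry,
which matches the infinite page fault loop described in the changelog.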
Reviewed-by: Balbir Singh <bsingharora@...il.com>