[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YwaOpj54/qUb5fXa@xz-m1.local>
Date: Wed, 24 Aug 2022 16:48:38 -0400
From: Peter Xu <peterx@...hat.com>
To: Alistair Popple <apopple@...dia.com>
Cc: "Huang, Ying" <ying.huang@...el.com>,
Nadav Amit <nadav.amit@...il.com>,
huang ying <huang.ying.caritas@...il.com>,
Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
"Sierra Guiza, Alejandro (Alex)" <alex.sierra@....com>,
Felix Kuehling <Felix.Kuehling@....com>,
Jason Gunthorpe <jgg@...dia.com>,
John Hubbard <jhubbard@...dia.com>,
David Hildenbrand <david@...hat.com>,
Ralph Campbell <rcampbell@...dia.com>,
Matthew Wilcox <willy@...radead.org>,
Karol Herbst <kherbst@...hat.com>,
Lyude Paul <lyude@...hat.com>, Ben Skeggs <bskeggs@...hat.com>,
Logan Gunthorpe <logang@...tatee.com>, paulus@...abs.org,
linuxppc-dev@...ts.ozlabs.org, stable@...r.kernel.org
Subject: Re: [PATCH v2 1/2] mm/migrate_device.c: Copy pte dirty bit to page
On Wed, Aug 24, 2022 at 04:25:44PM -0400, Peter Xu wrote:
> On Wed, Aug 24, 2022 at 11:56:25AM +1000, Alistair Popple wrote:
> > >> Still I don't know whether there'll be any side effect of having stall tlbs
> > >> in !present ptes because I'm not familiar enough with the private dev swap
> > >> migration code. But I think having them will be safe, even if redundant.
> >
> > What side-effect were you thinking of? I don't see any issue with not
> > TLB flushing stale device-private TLBs prior to the migration because
> > they're not accessible anyway and shouldn't be in any TLB.
>
> Sorry to be misleading, I never meant we must add them. As I said it's
> just that I don't know the code well so I don't know whether it's safe to
> not have it.
>
> IIUC it's about whether having stall system-ram stall tlb in other
> processor would matter or not here. E.g. some none pte that this code
> collected (boosted both "cpages" and "npages" for a none pte) could have
> stall tlb in other cores that makes the page writable there.
For this one, let me give a more detailed example.
It's about whether below could happen:
thread 1 thread 2 thread 3
-------- -------- --------
write to page P (data=P1)
(cached TLB writable)
zap_pte_range()
pgtable lock
clear pte for page P
pgtable unlock
...
migrate_vma_collect
pte none, npages++, cpages++
allocate device page
copy data (with P1)
map pte as device swap
write to page P again
(data updated from P1->P2)
flush tlb
Then at last from processor side P should have data P2 but actually from
device memory it's P1. Data corrupt.
>
> When I said I'm not familiar with the code, it's majorly about one thing I
> never figured out myself, in that migrate_vma_collect_pmd() has this
> optimization to trylock on the page, collect if it succeeded:
>
> /*
> * Optimize for the common case where page is only mapped once
> * in one process. If we can lock the page, then we can safely
> * set up a special migration page table entry now.
> */
> if (trylock_page(page)) {
> ...
> } else {
> put_page(page);
> mpfn = 0;
> }
>
> But it's kind of against a pure "optimization" in that if trylock failed,
> we'll clear the mpfn so the src[i] will be zero at last. Then will we
> directly give up on this page, or will we try to lock_page() again
> somewhere?
>
> The future unmap op is also based on this "cpages", not "npages":
>
> if (args->cpages)
> migrate_vma_unmap(args);
>
> So I never figured out how this code really works. It'll be great if you
> could shed some light to it.
>
> Thanks,
>
> --
> Peter Xu
--
Peter Xu
Powered by blists - more mailing lists