Message-ID: <CAJuCfpHSe8+h+wG6RxepiqxZiGDoyQJcMnZ7kkSeNbAzgEYROQ@mail.gmail.com>
Date: Tue, 27 Jun 2023 08:35:35 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Alistair Popple <apopple@...dia.com>
Cc: akpm@...ux-foundation.org, willy@...radead.org, hannes@...xchg.org,
mhocko@...e.com, josef@...icpanda.com, jack@...e.cz,
ldufour@...ux.ibm.com, laurent.dufour@...ibm.com,
michel@...pinasse.org, liam.howlett@...cle.com, jglisse@...gle.com,
vbabka@...e.cz, minchan@...gle.com, dave@...olabs.net,
punit.agrawal@...edance.com, lstoakes@...il.com, hdanton@...a.com,
peterx@...hat.com, ying.huang@...el.com, david@...hat.com,
yuzhao@...gle.com, dhowells@...hat.com, hughd@...gle.com,
viro@...iv.linux.org.uk, brauner@...nel.org,
pasha.tatashin@...een.com, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v3 7/8] mm: drop VMA lock before waiting for migration
On Tue, Jun 27, 2023 at 1:06 AM Alistair Popple <apopple@...dia.com> wrote:
>
>
> Suren Baghdasaryan <surenb@...gle.com> writes:
>
> > migration_entry_wait does not need VMA lock, therefore it can be
> > dropped before waiting.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > ---
> > mm/memory.c | 14 ++++++++++++--
> > 1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 5caaa4c66ea2..bdf46fdc58d6 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3715,8 +3715,18 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > entry = pte_to_swp_entry(vmf->orig_pte);
> > if (unlikely(non_swap_entry(entry))) {
> > if (is_migration_entry(entry)) {
> > - migration_entry_wait(vma->vm_mm, vmf->pmd,
> > - vmf->address);
> > + /* Save mm in case VMA lock is dropped */
> > + struct mm_struct *mm = vma->vm_mm;
> > +
> > + if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > + /*
> > + * No need to hold VMA lock for migration.
> > + * WARNING: vma can't be used after this!
> > + */
> > + vma_end_read(vma);
> > + ret |= VM_FAULT_COMPLETED;
>
> Doesn't this need to also set FAULT_FLAG_LOCK_DROPPED to ensure we don't
> call vma_end_read() again in __handle_mm_fault()?
Uh, right. Got lost during the last refactoring. Thanks for flagging!
>
> > + }
> > + migration_entry_wait(mm, vmf->pmd, vmf->address);
> > } else if (is_device_exclusive_entry(entry)) {
> > vmf->page = pfn_swap_entry_to_page(entry);
> > ret = remove_device_exclusive_entry(vmf);
>