Message-ID: <20210604204934.sbspsmwdqdtmz73d@revolver>
Date: Fri, 4 Jun 2021 20:49:39 +0000
From: Liam Howlett <liam.howlett@...cle.com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Alistair Popple <apopple@...dia.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"nouveau@...ts.freedesktop.org" <nouveau@...ts.freedesktop.org>,
"bskeggs@...hat.com" <bskeggs@...hat.com>,
"rcampbell@...dia.com" <rcampbell@...dia.com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"jhubbard@...dia.com" <jhubbard@...dia.com>,
"bsingharora@...il.com" <bsingharora@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
"hch@...radead.org" <hch@...radead.org>,
"jglisse@...hat.com" <jglisse@...hat.com>,
"willy@...radead.org" <willy@...radead.org>,
"jgg@...dia.com" <jgg@...dia.com>,
"peterx@...hat.com" <peterx@...hat.com>,
"hughd@...gle.com" <hughd@...gle.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v9 03/10] mm/rmap: Split try_to_munlock from try_to_unmap
* Shakeel Butt <shakeelb@...gle.com> [210525 19:45]:
> On Tue, May 25, 2021 at 11:40 AM Liam Howlett <liam.howlett@...cle.com> wrote:
> >
> [...]
> > >
> > > +/*
> > > + * Walks the vma's mapping a page and mlocks the page if any locked vma's are
> > > + * found. Once one is found the page is locked and the scan can be terminated.
> > > + */
> >
> > Can you please add that this requires the mmap_sem() lock to the
> > comments?
> >
>
> Why does this require mmap_sem() lock? Also mmap_sem() lock of which mm_struct?
Doesn't mlock_vma_page() require the mmap_sem for reading? The
mm_struct is the one in vma->vm_mm.
From what I can see, at least the following paths have mmap_lock held
for writing:

munlock_vma_pages_range() from __do_munmap()
munlock_vma_pages_range() from remap_file_pages()
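If the lock really is required on these paths, one way to document and
enforce it (just a sketch, not part of the posted patch; it assumes the
usual lockdep helpers from include/linux/mmap_lock.h are usable at this
call site) would be an assertion at the top of the walk:

```c
static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
			   unsigned long address, void *unused)
{
	/*
	 * Sketch only: mmap_assert_locked() complains under lockdep if
	 * vma->vm_mm's mmap_lock is not held for read or write, which
	 * would both document and verify the locking requirement.
	 */
	mmap_assert_locked(vma->vm_mm);
	...
}
```

That would also answer the "which mm_struct?" question in the code
itself rather than only in the comment.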
>
> > > +static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
> > > + unsigned long address, void *unused)
> > > +{
> > > + struct page_vma_mapped_walk pvmw = {
> > > + .page = page,
> > > + .vma = vma,
> > > + .address = address,
> > > + };
> > > +
> > > + /* An un-locked vma doesn't have any pages to lock, continue the scan */
> > > + if (!(vma->vm_flags & VM_LOCKED))
> > > + return true;
> > > +
> > > + while (page_vma_mapped_walk(&pvmw)) {
> > > + /* PTE-mapped THP are never mlocked */
> > > + if (!PageTransCompound(page))
> > > + mlock_vma_page(page);
> > > + page_vma_mapped_walk_done(&pvmw);
> > > +
> > > + /*
> > > + * no need to continue scanning other vma's if the page has
> > > + * been locked.
> > > + */
> > > + return false;
> > > + }
> > > +
> > > + return true;
> > > +}
Also, the comment above munlock_vma_pages_range() still references
try_to_{munlock|unmap}.