Message-ID: <20191106151429.swqtq2dt4uelhjzn@macbook-pro-91.dhcp.thefacebook.com>
Date: Wed, 6 Nov 2019 10:14:31 -0500
From: Josef Bacik <josef@...icpanda.com>
To: Jan Kara <jack@...e.cz>
Cc: Josef Bacik <josef@...icpanda.com>, snazy@...zy.de,
Johannes Weiner <hannes@...xchg.org>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Randy Dunlap <rdunlap@...radead.org>,
linux-kernel@...r.kernel.org, Linux MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"Potyra, Stefan" <Stefan.Potyra@...ktrobit.com>
Subject: Re: mlockall(MCL_CURRENT) blocking infinitely
On Wed, Nov 06, 2019 at 04:05:24PM +0100, Jan Kara wrote:
> On Wed 06-11-19 09:56:09, Josef Bacik wrote:
> > On Wed, Nov 06, 2019 at 02:45:43PM +0100, Robert Stupp wrote:
> > > On Wed, 2019-11-06 at 13:03 +0100, Jan Kara wrote:
> > > > On Tue 05-11-19 13:22:11, Johannes Weiner wrote:
> > > > > What I don't quite understand yet is why the fault path doesn't
> > > > > make progress eventually. We must drop the mmap_sem without
> > > > > changing the state in any way. How can we keep looping on the same
> > > > > page?
> > > >
> > > > That may be a slight suboptimality with Josef's patches. If the page
> > > > is marked as PageReadahead, we always drop mmap_sem if we can and
> > > > start readahead without checking whether that makes sense or not in
> > > > do_async_mmap_readahead(). OTOH page_cache_async_readahead() then
> > > > clears PageReadahead, so the only way I can see us looping like this
> > > > is when file->ra->ra_pages is 0. Not sure if that's what's happening
> > > > though. We'd need to find which of the paths in filemap_fault()
> > > > calls maybe_unlock_mmap_for_io() to tell more.
> > >
> > > Yes, ra_pages==0
> > > Attached the dmesg + smaps outputs
> > >
> > >
> >
> > Ah ok, I see what's happening: __get_user_pages() returns 0 if we get
> > -EBUSY from faultin_page(), and then __mm_populate() does
> > nend = nstart + ret * PAGE_SIZE, which just leaves us where we are.
> >
> > We need to handle the non-blocking and the locking separately in __mm_populate
> > so we know what's going on. Jan's fix for the readahead thing is definitely
> > valid as well, but this will keep us from looping forever in other retry cases.
>
> I don't think this will work. AFAICS faultin_page() just checks whether
> 'nonblocking' is != NULL but doesn't ever look at its value... Honestly,
> the whole interface is rather weird, like lots of things around gup().
>
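Right, and with Robert confirming ra_pages == 0 that closes the loop on the
readahead side too: page_cache_async_readahead() bails out before it ever
clears the flag, roughly like this (from mm/readahead.c, quoting from
memory):

	/* no read-ahead */
	if (!ra->ra_pages)
		return;
	...
	ClearPageReadahead(page);

So filemap_fault() keeps dropping mmap_sem for the same page, while on the
gup side __mm_populate() does roughly this (heavily abridged sketch, not
the verbatim source):

	for (nstart = start; nstart < end; nstart = nend) {
		/* ... look up the vma, clamp nend to the vma/range end ... */
		ret = populate_vma_page_range(vma, nstart, nend, &locked);
		if (ret < 0)
			break;
		/*
		 * faultin_page() returning -EBUSY makes __get_user_pages()
		 * return 0, so ret can be 0 here, nend stays equal to
		 * nstart, and we take the exact same fault forever.
		 */
		nend = nstart + ret * PAGE_SIZE;
		ret = 0;
	}
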
Oh what the hell, yeah this is super bonkers. The whole fault path probably
should be cleaned up to handle retry better. The patch at the bottom will do
the trick I think?
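For reference, the retry handling Jan is describing currently looks roughly
like this (abridged from faultin_page() in mm/gup.c, quoted from memory, so
check the real source for the full version):

	if (nonblocking)
		fault_flags |= FAULT_FLAG_ALLOW_RETRY;
	...
	ret = handle_mm_fault(vma, address, fault_flags);
	...
	if (ret & VM_FAULT_RETRY) {
		if (nonblocking && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
			*nonblocking = 0;
		return -EBUSY;
	}

*nonblocking is only ever written, never read, so the first hunk below makes
the value we pass in actually mean something.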
Josef
diff --git a/mm/gup.c b/mm/gup.c
index 8f236a335ae9..2468789298e6 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -628,7 +628,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
 		fault_flags |= FAULT_FLAG_REMOTE;
-	if (nonblocking)
+	if (nonblocking && *nonblocking != 0)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
@@ -1237,6 +1237,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 	unsigned long end, nstart, nend;
 	struct vm_area_struct *vma = NULL;
 	int locked = 0;
+	int nonblocking = 1;
 	long ret = 0;
 
 	end = start + len;
@@ -1268,7 +1269,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 		 * double checks the vma flags, so that it won't mlock pages
 		 * if the vma was already munlocked.
 		 */
-		ret = populate_vma_page_range(vma, nstart, nend, &locked);
+		ret = populate_vma_page_range(vma, nstart, nend, &nonblocking);
 		if (ret < 0) {
 			if (ignore_errors) {
 				ret = 0;
@@ -1276,6 +1277,14 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
 			}
 			break;
 		}
+
+		/*
+		 * We dropped the mmap_sem, so we need to re-lock, and the next
+		 * loop around we won't drop because nonblocking is now 0.
+		 */
+		if (!nonblocking)
+			locked = 0;
+
 		nend = nstart + ret * PAGE_SIZE;
 		ret = 0;
 	}
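
With this, the first pass through the loop still allows the fault handler to
drop mmap_sem, but once that has happened nonblocking is 0, we re-take the
lock, and every later fault is synchronous, so we can't spin on the same
address any more.

In case anyone wants to poke at it, here is an untested sketch of a
reproducer along the lines of Robert's report. It assumes the mapped file
sits on a device whose read_ahead_kb has been zeroed (which is where
f_ra.ra_pages == 0 comes from at open time), that the pages aren't in the
page cache yet, and that you run it with a big enough RLIMIT_MEMLOCK:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		int fd;
		off_t len;
		void *p;

		if (argc < 2)
			return 1;
		fd = open(argv[1], O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		len = lseek(fd, 0, SEEK_END);
		p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/*
		 * On an unpatched kernel this should never return once the
		 * fault path hits a PageReadahead page with ra_pages == 0.
		 */
		if (mlockall(MCL_CURRENT)) {
			perror("mlockall");
			return 1;
		}
		puts("mlockall returned");
		return 0;
	}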