Message-ID: <a547b066-6960-4411-1f3d-2bc3f15b6a73@google.com>
Date: Mon, 7 Jun 2021 23:53:14 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Yu Xu <xuyu@...ux.alibaba.com>
cc: Hugh Dickins <hughd@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
gavin.dg@...ux.alibaba.com, Greg Thelen <gthelen@...gle.com>,
Wei Xu <weixugc@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Nicholas Piggin <npiggin@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH] mm, thp: relax migration wait when failed to get tail
page

On Tue, 8 Jun 2021, Yu Xu wrote:
> On 6/8/21 12:44 PM, Hugh Dickins wrote:
> > On Mon, 7 Jun 2021, Yu Xu wrote:
> >> On 6/2/21 11:57 PM, Hugh Dickins wrote:
> >>> On Wed, 2 Jun 2021, Yu Xu wrote:
> >>>> On 6/2/21 12:55 AM, Hugh Dickins wrote:
> >>>>> On Wed, 2 Jun 2021, Xu Yu wrote:
> >>>>>
> >>>>>> We notice that a hung task happens in a corner but practical scenario
> >>>>>> when CONFIG_PREEMPT_NONE is enabled, as follows.
> >>>>>>
> >>>>>> Process 0                       Process 1                     Process 2..Inf
> >>>>>> split_huge_page_to_list
> >>>>>>     unmap_page
> >>>>>>         split_huge_pmd_address
> >>>>>>                                 __migration_entry_wait(head)
> >>>>>>                                                               __migration_entry_wait(tail)
> >>>>>>     remap_page (roll back)
> >>>>>>         remove_migration_ptes
> >>>>>>             rmap_walk_anon
> >>>>>>                 cond_resched
> >>>>>>
> >>>>>> Here __migration_entry_wait(tail) occurs in kernel space, e.g. during
> >>>>>> copy_to_user, which immediately faults again without rescheduling,
> >>>>>> and thus occupies the CPU fully.
> >>>>>>
> >>>>>> When there are too many processes performing __migration_entry_wait on
> >>>>>> the tail page, remap_page will never be done after cond_resched.
> >>>>>>
> >>>>>> This relaxes __migration_entry_wait on the tail page, thus giving
> >>>>>> remap_page a chance to complete.
> >>>>>>
> >>>>>> Signed-off-by: Gang Deng <gavin.dg@...ux.alibaba.com>
> >>>>>> Signed-off-by: Xu Yu <xuyu@...ux.alibaba.com>
> >>>>>
> >>>>> Well caught: you're absolutely right that there's a bug there.
> >>>>> But isn't cond_resched() just papering over the real bug, and
> >>>>> what it should do is a "page = compound_head(page);" before the
> >>>>> get_page_unless_zero()? How does that work out in your testing?
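
Spelled out, that suggestion is a one-line change in __migration_entry_wait()
in mm/migrate.c; the context lines below are paraphrased from that era's
source and may differ in detail between kernel versions:

	entry = pte_to_swp_entry(pte);
	if (!is_migration_entry(entry))
		goto out;

	page = migration_entry_to_page(entry);
	page = compound_head(page);	/* wait on the head page, not the tail */

	/*
	 * References of a THP are held on its head page, so
	 * get_page_unless_zero() on a tail page always fails: the faulting
	 * task returns and refaults immediately instead of sleeping.
	 * Taking the reference on the head lets it wait for the page lock
	 * as intended, until the split (or its rollback) has finished.
	 */
	if (!get_page_unless_zero(page))
		goto out;
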
> >>>>
> >>>> compound_head works. The patched kernel has been alive for hours under
> >>>> our reproducer, which usually makes the vanilla kernel hang after
> >>>> tens of minutes at most.
> >>>
> >>> Oh, that's good news, thanks.
> >>>
> >>> (It's still likely that a well-placed cond_resched() somewhere in
> >>> mm/gup.c would also be a good idea, but none of us have yet got
> >>> around to identifying where.)
> >>
> >> Neither have we. If we really have to do it outside of
> >> __migration_entry_wait, the return value of __migration_entry_wait is
> >> needed, and many related functions would have to be updated, which may
> >> be undesirable.
> >
> > No, it would not be necessary to plumb through a return value from
> > __migration_entry_wait(): I didn't mean that this GUP cond_resched()
> > should be done only for the migration case, but (I guess) on any path
> > where handle_mm_fault() returns "success" for a retry, yet the retry
> > of follow_page_mask() fails.
> >
> > But now that I look, I see there is already a cond_resched() there!
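
(For reference, the retry path in __get_user_pages() looks roughly like the
sketch below -- paraphrased and condensed, so details such as the exact
error handling vary by kernel version:)

retry:
		if (fatal_signal_pending(current)) {
			ret = -EINTR;
			goto out;
		}
		cond_resched();		/* scheduling point taken on every retry */

		page = follow_page_mask(vma, start, foll_flags, &ctx);
		if (!page) {
			ret = faultin_page(vma, start, &foll_flags, locked);
			if (ret == 0)
				goto retry;	/* fault "succeeded": look the page up again */
		}
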
>
> Do you mean might_sleep in mmap_read_trylock within do_user_addr_fault?
>
> If so, our environment has CONFIG_PREEMPT_NONE enabled, and the
> __migration_entry_wait happens in the kernel when doing something like
> copy_to_user (e.g., fstat).

Oh, I am sorry: now I see that you did mention copy_to_user() in your
original post, but I'm afraid I was fixated on get_user_pages() all
along: a different way in which the kernel handles a fault on user
address space without returning to userspace immediately afterwards.
So, the GUP case has its cond_resched() and is okay, but the
arch/whatever/mm/fault.c case is the one which probably deserves a
cond_resched() somewhere (on the architecture in question anyway - x86?).
I was reluctant to suggest where to place it in GUP, and I am even more
reluctant to say where in arch/whatever/mm/fault.c: I haven't thought
through that code in years. x86, somewhere in do_user_addr_fault(),
probably yes; but it's better to cond_resched() without holding a
lock; and better to avoid it on first entry too.
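
If one were wanted, a purely hypothetical shape (untested, not a proposal,
using the x86 do_user_addr_fault() names only for illustration) that respects
both of those constraints might be:

	if (unlikely(fault & VM_FAULT_RETRY)) {
		/*
		 * handle_mm_fault() has already dropped mmap_lock before
		 * returning VM_FAULT_RETRY, so no lock is held here, and
		 * this is not the first pass through the handler.
		 */
		cond_resched();
		flags |= FAULT_FLAG_TRIED;
		goto retry;
	}
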
But we don't need to decide that, if the compound_head() is a
satisfactory solution for you in practice. Sorry for confusing
you with my own confusion, and thank you for clearing it up.
Hugh