Message-ID: <20141021105542.GU23531@worktop.programming.kicks-ass.net>
Date:	Tue, 21 Oct 2014 12:55:42 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Alex Thorlton <athorlton@....com>
Cc:	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Ingo Molnar <mingo@...nel.org>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Hugh Dickins <hughd@...gle.com>, Bob Liu <lliubbo@...il.com>,
	Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org
Subject: Re: [BUG] mm, thp: khugepaged can't allocate on requested node when
 confined to a cpuset

On Fri, Oct 10, 2014 at 01:56:20PM -0500, Alex Thorlton wrote:
> On Fri, Oct 10, 2014 at 11:20:52AM +0200, Peter Zijlstra wrote:
> > So for the numa thing we do everything from the affected tasks context.
> > There were a lot of arguments early on that that could never really work,
> > but here we are.
> >
> > Should we convert khugepaged to the same? Drive the whole thing from
> > task_work? That would make this issue naturally go away.
> 
> That seems like a reasonable idea to me, but that will change the way
> that the compaction scans work right now, by quite a bit.  As I'm sure
> you're aware, the way it works now is we tack our mm onto the
> khugepaged_scan list in do_huge_pmd_anonymous_page (there might be some
> other ways to get on there - I can't remember), then when khugepaged
> wakes up it scans through each mm on the list until it hits the maximum
> number of pages to scan on each pass.
> 
> If we move the compaction scan over to a task_work style function, we'll
> only be able to scan the one task's mm at a time.  While the underlying
> compaction infrastructure can function more or less the same, the timing
> of when these scans occur, and exactly what the scans cover, will have
> to change.  If we go for the most rudimentary approach, the scans will
> occur each time a thread is about to return to userland after faulting
> in a THP (we'll just replace the khugepaged_enter call with a
> task_work_add), and will cover the mm for the current task.  A slightly
> more advanced approach would involve a timer to ensure that scans don't
> occur too often, as is currently handled by
> khugepaged_scan_sleep_millisecs. In any case, I don't see a way around
> the fact that we'll lose the multi-mm scanning functionality our
> khugepaged_scan list provides, but maybe that's not a huge issue.

Right, I can't see that being a problem; in fact it's a bonus, because
we'll stop scanning completely if there are no tasks running whatsoever,
so we get the power-aware behaviour for free.

Also, if you drive it like the numa scanning, you end up scanning tasks
proportional to their runtime; there's no point scanning and collapsing
pages for tasks that hardly ever run anyhow, that only sucks up limited
resources (large page allocations) for no win.

> Before I run off and start writing patches, here's a brief summary of
> what I think we could do here:
> 
> 1) Dissolve the khugepaged thread and related structs/timers (I'm
>    expecting some backlash on this one).
> 2) Replace khugepaged_enter calls with calls to task_work_add(work,
>    our_new_scan_function) - new scan function will look almost exactly
>    like khugepaged_scan_mm_slot.
> 3) Set up a timer similar to khugepaged_scan_sleep_millisecs that gets
>    checked during/before our_new_scan_function to ensure that we're not
>    scanning more often than necessary.  Also, set up progress markers to
>    limit the number of pages scanned in a single pass.
> 
> By doing this, scans will get triggered each time a thread that has
> faulted THPs is about to return to userland execution, throttled by our
> new timer/progress indicators.  The major benefit here is that scans
> will now occur in the desired task's context.
> 
> Let me know if anybody sees any major flaws in this approach.

I would suggest you have a look at task_tick_numa(), maybe we can make
something that works for both.
