Date:	Fri, 10 Oct 2014 13:56:20 -0500
From:	Alex Thorlton <athorlton@....com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Alex Thorlton <athorlton@....com>, linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Ingo Molnar <mingo@...nel.org>,
	"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
	Hugh Dickins <hughd@...gle.com>, Bob Liu <lliubbo@...il.com>,
	Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org
Subject: Re: [BUG] mm, thp: khugepaged can't allocate on requested node when
 confined to a cpuset

On Fri, Oct 10, 2014 at 11:20:52AM +0200, Peter Zijlstra wrote:
> So for the numa thing we do everything from the affected tasks context.
> There were a lot of arguments early on that that could never really work,
> but here we are.
>
> Should we convert khugepaged to the same? Drive the whole thing from
> task_work? That would make this issue naturally go away.

That seems like a reasonable idea to me, but it will change the way the
collapse scans work right now by quite a bit.  As I'm sure you're aware,
the way it works now is that we tack our mm onto the khugepaged_scan
list in do_huge_pmd_anonymous_page (there might be some other ways to
get on there - I can't remember), then when khugepaged wakes up it scans
through each mm on the list until it hits the maximum number of pages to
scan for that pass.
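
For reference, the current flow is roughly this (heavily condensed and
paraphrased from mm/huge_memory.c, not the literal code):

    /*
     * Fault path: do_huge_pmd_anonymous_page() calls khugepaged_enter(vma),
     * which links vma->vm_mm onto the khugepaged_scan list.
     */

    /* khugepaged thread: one pass walks the registered mms, bounded by
     * khugepaged_pages_to_scan and resuming wherever the last pass left
     * off. */
    static void khugepaged_do_scan(void)
    {
        unsigned int progress = 0;

        while (progress < khugepaged_pages_to_scan)
            progress += khugepaged_scan_mm_slot(
                    khugepaged_pages_to_scan - progress);
    }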

If we move the collapse scan over to a task_work-style function, we'll
only be able to scan one task's mm at a time.  While the underlying
collapse infrastructure can function more or less the same, the timing
of when these scans occur, and exactly what they cover, will have to
change.  In the most rudimentary approach, the scans would occur each
time a thread is about to return to userland after faulting in a THP
(we'd just replace the khugepaged_enter call with a task_work_add), and
would cover the current task's mm.  A slightly more advanced approach
would add a timer to ensure that scans don't occur too often, as
khugepaged_scan_sleep_millisecs currently handles.  In any case, I don't
see a way around the fact that we'll lose the multi-mm scanning that the
khugepaged_scan list provides, but maybe that's not a huge issue.
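
Concretely, the rudimentary version would be something like the sketch
below.  This is untested; thp_scan_work is an invented mm_struct field,
and our_new_scan_function is the khugepaged_scan_mm_slot-alike described
in the list that follows:

    #include <linux/sched.h>
    #include <linux/task_work.h>

    /* Runs via task_work just before the task returns to userland, so
     * the scan naturally happens in the faulting task's context (cpuset,
     * mempolicy, etc. all apply). */
    static void thp_scan_work_fn(struct callback_head *work)
    {
        our_new_scan_function(current->mm);
    }

    /* In the fault path, replacing the khugepaged_enter(vma) call: */
    static void thp_queue_scan(struct mm_struct *mm)
    {
        init_task_work(&mm->thp_scan_work, thp_scan_work_fn);
        task_work_add(current, &mm->thp_scan_work, true);
    }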

Before I run off and start writing patches, here's a brief summary of
what I think we could do here:

1) Dissolve the khugepaged thread and related structs/timers (I'm
   expecting some backlash on this one).
2) Replace khugepaged_enter calls with calls to task_work_add(work,
   our_new_scan_function) - the new scan function will look almost
   exactly like khugepaged_scan_mm_slot.
3) Set up a timer similar to khugepaged_scan_sleep_millisecs that gets
   checked during/before our_new_scan_function to ensure that we're not
   scanning more often than necessary (rough sketch below).  Also, set
   up progress markers to limit the number of pages scanned in a single
   pass.
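
For the throttle in 3), a per-mm timestamp might be enough (again just a
sketch; thp_last_scan is an invented mm_struct field, and I'm reusing
the existing khugepaged_scan_sleep_millisecs tunable as the minimum
interval between scans of a given mm):

    #include <linux/jiffies.h>

    /* Return true if this mm is due for another scan, false if it was
     * scanned too recently. */
    static bool thp_scan_due(struct mm_struct *mm)
    {
        unsigned long next = mm->thp_last_scan +
            msecs_to_jiffies(khugepaged_scan_sleep_millisecs);

        if (time_before(jiffies, next))
            return false;

        mm->thp_last_scan = jiffies;
        return true;
    }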

By doing this, scans will get triggered each time a thread that has
faulted in THPs is about to return to userland execution, throttled by our
new timer/progress indicators.  The major benefit here is that scans
will now occur in the desired task's context.

Let me know if anybody sees any major flaws in this approach.

Thanks a lot for your input, Peter!

- Alex
