Message-ID: <1296593801.27022.3920.camel@nimitz>
Date:	Tue, 01 Feb 2011 12:56:41 -0800
From:	Dave Hansen <dave@...ux.vnet.ibm.com>
To:	Andrea Arcangeli <aarcange@...hat.com>
Cc:	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Michael J Wolf <mjwolf@...ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC][PATCH 0/6] more detailed per-process transparent
 hugepage statistics

On Tue, 2011-02-01 at 21:39 +0100, Andrea Arcangeli wrote:
> So now the speedup
> from hugepages needs to also offset the cost of the more frequent
> split/collapse events that didn't happen before.

My concern here is the downward slope.  I read that as saying we'll
eventually end up with _zero_ THPs.  Plus, the benefits are constantly
decreasing, even though the scanning overhead is fixed (or even
increasing).

> So I guess, considering that the time is on the order of 2-3 hours and
> there are "only" 88G of memory, speeding up khugepaged is going to be
> beneficial, considering how big a boost hugepages give to the guest
> with NPT/EPT, and an even bigger one with regular shadow paging, but it
> also depends on the guest.  In short, khugepaged is tuned by default so
> that it can't get in the way of the CPU.

I guess we could also try and figure out whether the khugepaged CPU
overhead really comes from the scanning or the collapsing operations
themselves.  Should be as easy as some oprofiling.

If it really is the scanning, I bet we could make khugepaged a lot more
efficient as well.  In the case of KVM guests, the virtual addresses and
processes where collapsing can take place are going to be pretty much
fixed.

It might make sense to just have split_huge_page() stick the vaddr and
the mm in a queue.  khugepaged could scan those addresses first instead
of just going after the system as a whole.
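
To make the idea concrete, here's a minimal user-space sketch (not real
kernel code; names like record_split() and struct split_hint are made up
for illustration) of the split path feeding a small FIFO of (mm, vaddr)
hints that a khugepaged-style scanner drains before falling back to its
normal full pass:

/*
 * Hypothetical sketch: split sites record where they happened so a
 * khugepaged-like scanner can revisit those addresses first.
 */
#include <stdio.h>
#include <stdint.h>

#define SPLIT_QUEUE_MAX 64

struct split_hint {
	uintptr_t mm_id;	/* stand-in for a struct mm_struct pointer */
	uintptr_t vaddr;	/* virtual address of the split huge page  */
};

static struct split_hint split_queue[SPLIT_QUEUE_MAX];
static unsigned int split_head, split_tail;

/* Called from the split path: remember where the split happened. */
static void record_split(uintptr_t mm_id, uintptr_t vaddr)
{
	unsigned int next = (split_tail + 1) % SPLIT_QUEUE_MAX;

	if (next == split_head)
		return;		/* queue full: drop the hint, full scan catches it */
	split_queue[split_tail] = (struct split_hint){ mm_id, vaddr };
	split_tail = next;
}

/* khugepaged-like pass: try the recorded addresses first. */
static void scan_pass(void)
{
	while (split_head != split_tail) {
		struct split_hint *h = &split_queue[split_head];

		printf("re-check mm %#lx vaddr %#lx for collapse\n",
		       (unsigned long)h->mm_id, (unsigned long)h->vaddr);
		split_head = (split_head + 1) % SPLIT_QUEUE_MAX;
	}
	printf("fall back to the normal full scan\n");
}

int main(void)
{
	record_split(0x1, 0x7f0000200000);
	record_split(0x1, 0x7f0000400000);
	scan_pass();
	return 0;
}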

For cases where the page got split, but wasn't modified, should we have
a non-copying, non-allocating fastpath to re-merge it?
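
Reading "wasn't modified" as "the page-table entries for the range
weren't changed after the split", a rough model of what such a fastpath
might check (purely hypothetical, nothing here corresponds to an
existing kernel interface) is: every subpage still present, still
mapping the original contiguous pfns, with unchanged protections, so the
PMD could in principle be rebuilt in place with no new huge page
allocation and no 2M copy:

#include <stdbool.h>
#include <stdio.h>

#define SUBPAGES 512	/* 4K subpages in a 2M huge page */

struct fake_pte {
	unsigned long pfn;
	unsigned int  prot;	/* stand-in for the protection bits */
	bool present;
};

static bool can_remerge_in_place(const struct fake_pte pte[SUBPAGES],
				 unsigned long base_pfn,
				 unsigned int huge_prot)
{
	for (int i = 0; i < SUBPAGES; i++) {
		if (!pte[i].present)		/* subpage unmapped or freed */
			return false;
		if (pte[i].pfn != base_pfn + i)	/* no longer the original pages */
			return false;
		if (pte[i].prot != huge_prot)	/* mprotect()/COW changed it */
			return false;
	}
	return true;
}

int main(void)
{
	struct fake_pte ptes[SUBPAGES];

	for (int i = 0; i < SUBPAGES; i++)
		ptes[i] = (struct fake_pte){ .pfn = 0x1000 + i,
					     .prot = 0x3, .present = true };

	printf("untouched split: %s\n",
	       can_remerge_in_place(ptes, 0x1000, 0x3) ?
	       "fast re-merge" : "normal collapse");

	ptes[7].prot = 0x1;	/* one subpage was write-protected since */
	printf("changed subpage: %s\n",
	       can_remerge_in_place(ptes, 0x1000, 0x3) ?
	       "fast re-merge" : "normal collapse");
	return 0;
}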

-- Dave

