Message-ID: <alpine.DEB.2.02.1306181654350.4503@chino.kir.corp.google.com>
Date: Tue, 18 Jun 2013 17:01:23 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Alex Thorlton <athorlton@....com>
cc: linux-kernel@...r.kernel.org, Li Zefan <lizefan@...wei.com>,
Rob Landley <rob@...dley.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Johannes Weiner <hannes@...xchg.org>,
Xiao Guangrong <xiaoguangrong@...ux.vnet.ibm.com>,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
Robin Holt <holt@....com>
Subject: Re: [PATCH v2] Make transparent hugepages cpuset aware
On Tue, 18 Jun 2013, Alex Thorlton wrote:
> Thanks for your input; however, I believe the method of using a malloc
> hook falls apart when it comes to static binaries, since we won't have
> any shared libraries to hook into. Although using a malloc hook is a
> perfectly suitable solution for most cases, we're looking to implement a
> solution that can be used in all situations.
>
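For reference, the hook approach under discussion is something like the
sketch below: an LD_PRELOAD wrapper, built with -ldl, that opts large
allocations out of thp with MADV_NOHUGEPAGE. It's simplified (a real
interposer has to bootstrap around dlsym() potentially allocating memory
itself), but it shows why the scheme depends on the dynamic linker and
therefore can't touch statically linked binaries:

	/*
	 * Sketch only: interpose malloc() and advise the kernel not to
	 * back large allocations with huge pages.  Requires the dynamic
	 * linker, so it does nothing for statically linked binaries.
	 */
	#define _GNU_SOURCE
	#include <dlfcn.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	void *malloc(size_t size)
	{
		static void *(*real_malloc)(size_t);
		void *p;

		if (!real_malloc)
			real_malloc = dlsym(RTLD_NEXT, "malloc");

		p = real_malloc(size);
		if (p && size >= 2UL << 20) {
			unsigned long page = sysconf(_SC_PAGESIZE);
			unsigned long addr = (unsigned long)p & ~(page - 1);

			/* madvise() wants a page-aligned start address */
			madvise((void *)addr, size, MADV_NOHUGEPAGE);
		}
		return p;
	}
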
I guess the question would be why you don't want your malloc memory backed
by thp pages for certain static binaries and not others. Is it because of
increased rss caused by khugepaged collapsing memory under its default
max_ptes_none value?
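
If increased rss is the concern, note that it's tunable today: khugepaged's
collapse behavior is governed by
/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none (default 511
on x86_64, i.e. collapse even when only a single pte in the range is
populated). A minimal sketch of turning that down so only fully populated
ranges get collapsed:

	/* Sketch: set max_ptes_none to 0 so khugepaged only collapses
	 * ranges with no unpopulated ptes, avoiding the rss growth from
	 * backing empty ptes with real memory.
	 */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/"
				"khugepaged/max_ptes_none", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fputs("0", f);
		return fclose(f) ? 1 : 0;
	}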
> Aside from that particular shortcoming of the malloc hook solution,
> there are some other situations where having a cpuset-based option is a
> much simpler and more efficient solution than the alternatives.
Sure, but why should this be a cpuset-based solution? What is special
about cpusets that makes certain statically linked binaries not want
memory backed by thp while others do? This still seems based solely on
convenience rather than any hard requirement.
> One
> such situation that comes to mind would be an environment where a batch
> scheduler is in use to ration system resources. If an administrator
> determines that a user's jobs run more efficiently with thp always on,
> the administrator can simply set the user's jobs to always run with that
> setting, instead of having to coordinate with that user to get them to
> run their jobs in a different way. I feel that, for cases such as this,
> this additional flag is in line with the other capabilities that
> cgroups and cpusets provide.
>
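For concreteness, here's roughly the workflow being described, assuming
the cpuset filesystem is mounted at /dev/cpuset and treating the
per-cpuset knob name ("thp_enabled") as purely illustrative, standing in
for whatever file this patch actually adds:

	/* Hypothetical sketch of the batch-scheduler flow: force thp on
	 * for a "batch" cpuset and migrate a job into it.  The knob name
	 * and the pid are illustrative, not taken from the patch.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	static void write_str(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f || fputs(val, f) == EOF || fclose(f)) {
			perror(path);
			exit(1);
		}
	}

	int main(void)
	{
		write_str("/dev/cpuset/batch/thp_enabled", "1"); /* hypothetical file */
		write_str("/dev/cpuset/batch/tasks", "1234");    /* illustrative pid */
		return 0;
	}
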
That sounds like a memcg, i.e. container, type of issue, not a cpuset
issue, which is more geared toward NUMA optimizations. User jobs should
always run more efficiently with thp always on; the worst-case scenario
should be that they run with the same performance as with thp set to
never. In other words, there shouldn't be any performance regression
that requires certain cpusets to disable thp. If there are any, we'd
like to investigate that separately from this patch.
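
To be clear about the baseline here: the system-wide policy is already
selectable via /sys/kernel/mm/transparent_hugepage/enabled (always,
madvise, never). A trivial check of the current mode:

	/* Print the current system-wide thp mode; the active setting is
	 * shown in brackets, e.g. "always [madvise] never".
	 */
	#include <stdio.h>

	int main(void)
	{
		char buf[128];
		FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/enabled", "r");

		if (!f) {
			perror("fopen");
			return 1;
		}
		if (fgets(buf, sizeof(buf), f))
			fputs(buf, stdout);
		fclose(f);
		return 0;
	}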