Message-ID: <20110208181709.GL3347@random.random>
Date: Tue, 8 Feb 2011 19:17:09 +0100
From: Andrea Arcangeli <aarcange@...hat.com>
To: Dave Hansen <dave@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Michael J Wolf <mjwolf@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC][PATCH 0/6] more detailed per-process transparent
hugepage statistics
Hello,
On Tue, Feb 08, 2011 at 09:54:34AM -0800, Dave Hansen wrote:
> Just FYI, I did some profiling on a workload that constantly split and
> joined pages. Very little of the overhead was in the scanning itself,
> so I think you're dead-on here.
Yep, my way to deduce it has been to set both to 100% CPU and to
compare the rate of increase of
/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans vs
/sys/kernel/mm/ksm/full_scans, and the difference is enormous. So
whatever a 100% CPU ksmd scan can keep up with can probably be
followed more than well by a 1% CPU khugepaged scan, which will
probably achieve the exact same hugepage ratio as a 100% khugepaged
scan. The default khugepaged scan rate is super paranoid (it has to
be, considering the default ksm scan rate is zero). Maybe we can
still increase the default pages_to_scan a bit. I suspect most of the
current cost is in the scheduler, and that only accounts for one
kthread wakeup every 10 sec.
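For reference, a rough way to reproduce that comparison is to sample
the two full_scans counters over an interval and see how fast each
daemon completes scans. This is only a sketch: it assumes the default
sysfs paths above, that both khugepaged and ksmd are running, and the
read_counter() helper is just something made up for illustration.

#include <stdio.h>
#include <unistd.h>

/* Read a single long from a sysfs file, -1 on error (hypothetical helper). */
static long read_counter(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	const char *khuge =
		"/sys/kernel/mm/transparent_hugepage/khugepaged/full_scans";
	const char *ksm = "/sys/kernel/mm/ksm/full_scans";
	const int secs = 60;
	long k0, k1, s0, s1;

	/* Sample both counters, wait, sample again. */
	k0 = read_counter(khuge);
	s0 = read_counter(ksm);
	sleep(secs);
	k1 = read_counter(khuge);
	s1 = read_counter(ksm);

	if (k0 < 0 || s0 < 0 || k1 < 0 || s1 < 0) {
		fprintf(stderr, "failed to read full_scans counters\n");
		return 1;
	}
	printf("khugepaged: %ld full scans in %d sec\n", k1 - k0, secs);
	printf("ksmd:       %ld full scans in %d sec\n", s1 - s0, secs);
	return 0;
}

The gap between the two printed rates (with both tuned to scan
aggressively) is what the paragraph above is describing.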