Message-ID: <87r30mha41.fsf@yhuang-dev.intel.com>
Date: Fri, 21 Apr 2017 08:34:22 +0800
From: "Huang\, Ying" <ying.huang@...el.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: "Huang\, Ying" <ying.huang@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH -mm -v9 2/3] mm, THP, swap: Check whether THP can be split firstly
Johannes Weiner <hannes@...xchg.org> writes:
> On Thu, Apr 20, 2017 at 08:50:43AM +0800, Huang, Ying wrote:
>> Johannes Weiner <hannes@...xchg.org> writes:
>> > On Wed, Apr 19, 2017 at 03:06:24PM +0800, Huang, Ying wrote:
>> >> With the patchset, the swap-out throughput improves by 3.6% (from
>> >> about 4.16GB/s to about 4.31GB/s) in the vm-scalability swap-w-seq
>> >> test case with 8 processes. The test is done on a Xeon E5 v3
>> >> system. The swap device used is a RAM-simulated PMEM (persistent
>> >> memory) device. To test sequential swap-out, the test case creates
>> >> 8 processes, each of which sequentially allocates and writes to
>> >> anonymous pages until RAM and part of the swap device are used up
>> >> (a sketch of such a workload appears below).
>> >>
>> >> Cc: Johannes Weiner <hannes@...xchg.org>
>> >> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
>> >> Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com> [for can_split_huge_page()]
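
For reference, here is a minimal stand-in for that kind of workload (an
illustrative sketch, not the actual vm-scalability case file; the chunk
size and count are assumptions to be tuned so the total exceeds RAM):

#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define NR_PROCS   8             /* matches the 8-process test */
#define CHUNK      (1UL << 30)   /* 1 GiB per mapping; illustrative */
#define NR_CHUNKS  16            /* per process; tune to exceed RAM */

int main(void)
{
	for (int i = 0; i < NR_PROCS; i++) {
		if (fork() == 0) {
			for (int j = 0; j < NR_CHUNKS; j++) {
				char *p = mmap(NULL, CHUNK,
					       PROT_READ | PROT_WRITE,
					       MAP_PRIVATE | MAP_ANONYMOUS,
					       -1, 0);
				if (p == MAP_FAILED)
					break;
				/* Sequential writes fault in anonymous
				 * pages and force swap-out once RAM
				 * fills up. */
				memset(p, 0x5a, CHUNK);
			}
			_exit(0);
		}
	}
	while (wait(NULL) > 0)
		;	/* reap all children */
	return 0;
}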
>> >
>> > How often does this actually happen in practice? Because all that
>> > this protects us from is trying to allocate a swap cluster - which,
>> > with the si->free_clusters list, really isn't all that expensive -
>> > and returning it again. Unless this happens all the time in
>> > practice, this optimization seems misplaced.
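
As a reference point, here is a simplified fragment of the reordering
being discussed (an illustrative assumption shaped after this series,
not a verbatim hunk from the -v9 patch):

	/*
	 * Run the cheap reference-count check can_split_huge_page()
	 * before get_swap_page(), so that a THP which can never be
	 * split does not allocate a swap cluster from si->free_clusters
	 * only to hand it back after the split fails.
	 */
	if (PageTransHuge(page) && !can_split_huge_page(page, NULL))
		goto activate_locked;	/* skip the swap allocator */

	/* Only now reserve swap space (a whole cluster for a THP). */
	swp_entry_t entry = get_swap_page(page);
	if (!entry.val)
		goto activate_locked;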
>>
>> To my own surprise as well, I found that this patch has a measurable
>> impact in my test: the swap-out throughput improves by 3.6% in the
>> vm-scalability swap-w-seq test case with 8 processes. Details are in
>> the original patch description.
>
> Yeah I think that justifies it.
>
> The changelog says "the patchset"; I didn't realize this is the gain
> from this patch alone. Care to update that?
Sorry for the confusion; I will update it in the next version.

> Thanks!

Best Regards,
Huang, Ying