Message-ID: <20120921093530.GS11266@suse.de>
Date: Fri, 21 Sep 2012 10:35:30 +0100
From: Mel Gorman <mgorman@...e.de>
To: Richard Davies <richard@...chsys.com>
Cc: Shaohua Li <shli@...nel.org>, Rik van Riel <riel@...hat.com>,
Avi Kivity <avi@...hat.com>,
QEMU-devel <qemu-devel@...gnu.org>, KVM <kvm@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/6] Reduce compaction scanning and lock contention
On Fri, Sep 21, 2012 at 10:13:33AM +0100, Richard Davies wrote:
> Hi Mel,
>
> Thank you for this series. I have applied it on a clean 3.6-rc5 and
> tested, and it works well for me - the lock contention is (still) gone
> and isolate_freepages_block is much reduced.
>
Excellent!
> Here is a typical test with these patches:
>
> # grep -F '[k]' report | head -8
>    65.20%  qemu-kvm  [kernel.kallsyms]  [k] clear_page_c
>     2.18%  qemu-kvm  [kernel.kallsyms]  [k] isolate_freepages_block
>     1.56%  qemu-kvm  [kernel.kallsyms]  [k] _raw_spin_lock
>     1.40%  qemu-kvm  [kernel.kallsyms]  [k] svm_vcpu_run
>     1.38%  swapper   [kernel.kallsyms]  [k] default_idle
>     1.35%  qemu-kvm  [kernel.kallsyms]  [k] get_page_from_freelist
>     0.74%  ksmd      [kernel.kallsyms]  [k] memcmp
>     0.72%  qemu-kvm  [kernel.kallsyms]  [k] free_pages_prepare
>
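For anyone wanting to reproduce a profile like the above, something
along these lines should do it -- I'm guessing at the exact invocation
here as it was not posted in the thread:

# perf record -a -g -- sleep 60    (sample all CPUs while the guest boots)
# perf report --stdio > report     (write the flat report to a file)
# grep -F '[k]' report | head -8   (pick out the top kernel symbols)
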
Ok, so that is more or less acceptable. I would like to reduce the
scanning even further but I'll take this as a start -- largely because
I do not have any good new ideas on how to reduce it further without
incurring a large cost in the page allocator :)
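
To make the trade-off concrete: most of the isolate_freepages_block cost
comes from the free scanner re-walking pageblocks from the end of the
zone on every compaction attempt. Below is a toy userspace sketch of the
idea of caching the scanner position between attempts. It illustrates
the concept only -- it is not the kernel implementation, and the names
pfn_is_free and cached_free_pfn are made up for the example:

#include <stdio.h>
#include <stdbool.h>

#define ZONE_START 0UL
#define ZONE_END   (1UL << 20)         /* pretend the zone spans 1M pfns */

/* pretend only the lowest ~1% of pfns are free pages */
static bool pfn_is_free(unsigned long pfn)
{
        return pfn < ZONE_END / 100;
}

/*
 * Free scanner: walk downwards from 'start' until a free pfn is found.
 * Returns the number of pfns scanned and stores the free pfn in *found.
 */
static unsigned long scan_free(unsigned long start, unsigned long *found)
{
        unsigned long pfn, scanned = 0;

        for (pfn = start; pfn > ZONE_START; pfn--, scanned++) {
                if (pfn_is_free(pfn)) {
                        *found = pfn;
                        break;
                }
        }
        return scanned;
}

int main(void)
{
        unsigned long found = 0, cached_free_pfn = ZONE_END;
        unsigned long total_naive = 0, total_cached = 0;
        int attempt;

        for (attempt = 0; attempt < 4; attempt++) {
                /* naive scanner: restart from the zone end every attempt */
                total_naive += scan_free(ZONE_END, &found);

                /*
                 * cached scanner: resume just below the last free pfn,
                 * as if that page had just been isolated and used
                 */
                total_cached += scan_free(cached_free_pfn, &found);
                cached_free_pfn = found ? found - 1 : ZONE_END;
        }

        printf("pfns scanned restarting from zone end: %lu\n", total_naive);
        printf("pfns scanned with cached position:     %lu\n", total_cached);
        return 0;
}

The cached scanner pays for the full walk once and then resumes cheaply
on later attempts, which is roughly why restarting the scanners is where
the time goes in the profiles above.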
> I did manage to get a couple of runs which were slightly worse, but
> nothing like as bad as before. Here are the results:
>
> # grep -F '[k]' report | head -8
>    45.60%  qemu-kvm  [kernel.kallsyms]  [k] clear_page_c
>    11.26%  qemu-kvm  [kernel.kallsyms]  [k] isolate_freepages_block
>     3.21%  qemu-kvm  [kernel.kallsyms]  [k] _raw_spin_lock
>     2.27%  ksmd      [kernel.kallsyms]  [k] memcmp
>     2.02%  swapper   [kernel.kallsyms]  [k] default_idle
>     1.58%  qemu-kvm  [kernel.kallsyms]  [k] svm_vcpu_run
>     1.30%  qemu-kvm  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
>     1.09%  qemu-kvm  [kernel.kallsyms]  [k] get_page_from_freelist
>
> # grep -F '[k]' report | head -8
>    61.29%  qemu-kvm  [kernel.kallsyms]  [k] clear_page_c
>     4.52%  qemu-kvm  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
>     2.64%  qemu-kvm  [kernel.kallsyms]  [k] copy_page_c
>     1.61%  swapper   [kernel.kallsyms]  [k] default_idle
>     1.57%  qemu-kvm  [kernel.kallsyms]  [k] _raw_spin_lock
>     1.18%  qemu-kvm  [kernel.kallsyms]  [k] get_page_from_freelist
>     1.18%  qemu-kvm  [kernel.kallsyms]  [k] isolate_freepages_block
>     1.11%  qemu-kvm  [kernel.kallsyms]  [k] svm_vcpu_run
>
>
Were the boot times acceptable even when these slightly worse figures
were recorded?
> I will follow up with the detailed traces for these three tests.
>
> Thank you!
>
Thank you for the detailed reporting and the testing; it's much
appreciated. I've already rebased the patches onto Andrew's tree,
tested them overnight, and the figures look good on my side. I'll
update the changelog and push them shortly.
--
Mel Gorman
SUSE Labs