Message-ID: <AANLkTi=gDnMjTfC756wABD_K6evk+hEOtp_7JVvnwjki@mail.gmail.com>
Date: Fri, 3 Sep 2010 18:12:03 -0700
From: Venkatesh Pallipadi <venki@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Ying Han <yinghan@...gle.com>, Minchan Kim <minchan.kim@...il.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Rik van Riel <riel@...hat.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Johannes Weiner <hannes@...xchg.org>
Subject: Re: [PATCH] vmscan: prevent background aging of anon page in no swap system
On Fri, Sep 3, 2010 at 2:56 PM, Andrew Morton <akpm@...ux-foundation.org> wrote:
> On Fri, 3 Sep 2010 14:47:03 -0700
> Ying Han <yinghan@...gle.com> wrote:
>
>> > We don't have any quantitative data on the effect of these excess tlb
>> > flushes, which makes it difficult to decide which kernel versions
>> > should receive this patch.
>> >
>> > Help?
>>
>> Andrew:
>>
>> We observed the degradation on 2.6.34 compared to 2.6.26 kernel. The
>> workload we are running is doing 4k-random-write which runs about 3-4
>> minutes. We captured the TLB shootdowns before/after:
>>
>> Before the change:
>> TLB: 29435 22208 37146 25332 47952 43698 43545 40297 49043 44843 46127
>> 50959 47592 46233 43698 44690 TLB shootdowns [HSUM = 662798 ]
>>
>> After the change:
>> TLB: 2340 3113 1547 1472 2944 4194 2181 1212 2607 4373 1690 1446 2310
>> 3784 1744 1134 TLB shootdowns [HSUM = 38091 ]
>
> Do you have data on how much additional CPU time (and/or wall time) was
> consumed?
>
Just reran the workload to get this data:
- after - before of /proc/interrupts:TLB
- after - before of /proc/stat:cpu
(output is: "cpu" user nice sys idle iowait irq softirq steal guest guestnice)
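The before/after subtraction described above could be sketched roughly as below. This is a hypothetical userspace helper, not part of the patch; `parse_tlb_counts`, `parse_cpu_jiffies`, and `delta` are made-up names, and the sample lines are illustrative:

```python
# Hypothetical sketch of the "after - before" delta calculation on
# /proc/interrupts:TLB and /proc/stat:cpu. Helper names are illustrative.

def parse_tlb_counts(line):
    """Extract the per-CPU counts from a /proc/interrupts 'TLB:' line."""
    fields = line.split()
    assert fields[0] == "TLB:"
    counts = []
    for f in fields[1:]:
        if not f.isdigit():
            break  # stop at the trailing "TLB shootdowns" description
        counts.append(int(f))
    return counts

def parse_cpu_jiffies(line):
    """Extract the aggregate jiffy counters from the /proc/stat 'cpu' line."""
    fields = line.split()
    assert fields[0] == "cpu"
    return [int(f) for f in fields[1:]]

def delta(after, before):
    """Per-field difference of two snapshots taken around the workload."""
    return [a - b for a, b in zip(after, before)]

# Example with made-up before/after /proc/stat snapshots:
before = parse_cpu_jiffies("cpu 1000 10 500 8000 100 5 50 0 0 0")
after  = parse_cpu_jiffies("cpu 1100 12 560 8800 110 6 55 0 0 0")
print(delta(after, before))  # prints [100, 2, 60, 800, 10, 1, 5, 0, 0, 0]
```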
Without this change
TLB: 28550 21232 33876 14300 40661 43118 38227 34887 34376 38208 35735
33591 36305 43649 36558 42013 TLB shootdowns [HSUM = 555286 ]
cpu 41056 381 17945 308706 26447 39 9713 0 0 0
With this change
TLB: 660 1088 761 474 778 1050 697 551 712 1353 651 730 788 1419 574
521 TLB shootdowns [HSUM = 12807 ]
cpu 40375 231 16622 204115 19317 36 9464 0 0 0
This is on a 16-way system, so 16 * 100 counts in the cpu line above correspond to 1s of wall time.
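The arithmetic above can be checked with a short script using the cpu-line deltas quoted earlier. USER_HZ = 100 is an assumption (the typical value for counters exported via /proc/stat); the numbers are the ones from this run:

```python
# Convert the /proc/stat jiffy deltas quoted above into seconds of CPU time.
# Assumes USER_HZ = 100, the usual unit for /proc/stat counters.
USER_HZ = 100
FIELDS = ["user", "nice", "sys", "idle", "iowait",
          "irq", "softirq", "steal", "guest", "guestnice"]

without_patch = [41056, 381, 17945, 308706, 26447, 39, 9713, 0, 0, 0]
with_patch    = [40375, 231, 16622, 204115, 19317, 36, 9464, 0, 0, 0]

# Seconds of CPU time saved per category, summed over all 16 CPUs.
saved = {name: (w - p) / USER_HZ
         for name, w, p in zip(FIELDS, without_patch, with_patch)}

for name in ("user", "sys", "irq", "softirq"):
    print(f"{name}: {saved[name]:.2f}s less CPU time")
# prints:
# user: 6.81s less CPU time
# sys: 13.23s less CPU time
# irq: 0.03s less CPU time
# softirq: 2.49s less CPU time
```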
I don't think all the reduction in CPU time (especially idle time!)
can be attributed to this change. There is some run-to-run variation,
especially with the setup and teardown of the tests. But there is a
notable reduction in user, system and irq time. For what it's worth,
for this particular workload, the throughput number reported by the run
is up 4%.
Thanks,
Venki
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/