Message-ID: <5197EF76.2070504@bitsync.net>
Date: Sat, 18 May 2013 23:15:34 +0200
From: Zlatko Calusic <zcalusic@...sync.net>
To: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>
CC: Jiri Slaby <jslaby@...e.cz>,
Valdis Kletnieks <Valdis.Kletnieks@...edu>,
Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
dormando <dormando@...ia.net>, Michal Hocko <mhocko@...e.cz>,
Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/9] Reduce system disruption due to kswapd V4
On 15.05.2013 22:37, Andrew Morton wrote:
>>
>>                                    3.10.0-rc1      3.10.0-rc1
>>                                       vanilla  lessdisrupt-v4
>> Page Ins                              1234608          101892
>> Page Outs                            12446272        11810468
>> Swap Ins                               283406               0
>> Swap Outs                              698469           27882
>> Direct pages scanned                        0          136480
>> Kswapd pages scanned                  6266537         5369364
>> Kswapd pages reclaimed                1088989          930832
>> Direct pages reclaimed                      0          120901
>> Kswapd efficiency                         17%             17%
>> Kswapd velocity                      5398.371        4635.115
>> Direct efficiency                        100%             88%
>> Direct velocity                         0.000         117.817
>> Percentage direct scans                    0%              2%
>> Page writes by reclaim                1655843         4009929
>> Page writes file                       957374         3982047
>> Page writes anon                       698469           27882
>> Page reclaim immediate                   5245            1745
>> Page rescued immediate                      0               0
>> Slabs scanned                           33664           25216
>> Direct inode steals                         0               0
>> Kswapd inode steals                     19409             778
>
> The reduction in inode steals might be a significant thing?
> prune_icache_sb() does invalidate_mapping_pages() and can have the bad
> habit of shooting down a vast number of pagecache pages (for a large
> file) in a single hit. Did this workload use large (and clean) files?
> Did you run any test which would expose this effect?
>
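(For anyone following along: the path Andrew describes looks roughly
like this. It is heavily simplified and not the literal fs/inode.c
code, but invalidate_mapping_pages() over the inode's whole range is
the real call:)

	static void prune_icache_sb(struct super_block *sb, int nr_to_scan)
	{
		while (nr_to_scan--) {
			/* take an unused inode from the tail of the LRU */
			struct inode *inode = lru_tail(sb);	/* pseudo */

			/*
			 * If the inode still has pagecache attached, all
			 * of it is invalidated before the inode can be
			 * freed - for a large clean file that can be a
			 * huge number of pages gone in a single hit.
			 */
			if (inode_has_buffers(inode) || inode->i_data.nrpages)
				invalidate_mapping_pages(&inode->i_data, 0, -1);

			/* ...then evict and free the inode itself. */
		}
	}
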
I did not run specific tests, but I believe I observed exactly this
issue on a real workload, where sudden frees of pagecache happen quite
often even under moderate load. I've attached a small graph where this
can easily be seen; the snapshot was taken while the server was running
an unpatched Linus kernel. With Mel's patch series applied I can't see
anything similar, so the issue seems to be completely gone. Mel's done
a wonderful job.
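In case anyone wants to watch for the same pattern: the drops are
obvious even if you just periodically sample the Cached: line of
/proc/meminfo and plot it. A minimal sketch of one way to collect the
data (not necessarily how my graph was produced):

	/* cached-watch.c - print a timestamped Cached: sample once a
	 * minute; sudden pagecache frees show up as large drops between
	 * consecutive samples.  Build with: cc -o cached-watch cached-watch.c
	 */
	#include <stdio.h>
	#include <string.h>
	#include <time.h>
	#include <unistd.h>

	int main(void)
	{
		char line[128];

		for (;;) {
			FILE *f = fopen("/proc/meminfo", "r");

			if (!f)
				return 1;
			while (fgets(line, sizeof(line), f)) {
				if (!strncmp(line, "Cached:", 7)) {
					printf("%ld %s", (long)time(NULL), line);
					fflush(stdout);
					break;
				}
			}
			fclose(f);
			sleep(60);
		}
	}
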
And BTW, V4 continues to be rock solid, running here on many different
machines, so I look forward to seeing this code merged in 3.11.
--
Zlatko
[Attachment: "memory-hourly.png" (image/png, 11951 bytes)]