Message-Id: <DEB7E312-8DF9-4923-B427-CCDE6B2A6298@gmail.com>
Date: Fri, 26 Apr 2013 09:03:00 +0300
From: Alexey Lyahkov <alexey.lyashkov@...il.com>
To: Mel Gorman <mgorman@...e.de>
Cc: Theodore Ts'o <tytso@....edu>, Andrew Perepechko <anserper@...ru>,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Bernd Schubert <bernd.schubert@...tmail.fm>,
Will Huck <will.huckk@...il.com>, linux-ext4@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: page eviction from the buddy cache
On Apr 26, 2013, at 01:40, Mel Gorman wrote:
> No, I would prefer if this was not fixed within ext4. I need confirmation
> that fixing mark_page_accessed() addresses the performance problem you
> encounter. The two-line check for PageLRU() followed by a lru_add_drain()
> is meant to check that. That is still not my preferred fix because even
> if you do not encounter higher LRU contention, other workloads would be
> at risk. The likely fix will involve converting pagevecs to using a single
> list and then selecting what LRU to put a page on at drain time but I
> want to know that it's worthwhile.
>
> Using shake_page() in ext4 is certainly overkill.
Agreed, but that was just my proof-of-concept patch :) to verify the finding.
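
For reference, I read the two-line check you mention as roughly the following (just my sketch of your description, not an actual hunk, and where it would sit relative to the mark_page_accessed() call is a guess on my part):

/*
 * If the page is still sitting in a per-CPU pagevec and has not been
 * put on an LRU list yet, drain the pagevecs so that the subsequent
 * mark_page_accessed() can actually promote it.
 */
if (!PageLRU(page))
	lru_add_drain();
mark_page_accessed(page);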
>
>>> Andrew, can you try the following patch please? Also, is there any chance
>>> you can describe in more detail what the workload does?
>>
>> lustre OSS node + IOR with file size twice more then OSS memory.
>>
>
> Ok, no way I'll be reproducing that workload. Thanks.
>
I think you should try several processes doing DIO (so no pages go through the LRU pagevecs on that path), each with a file size twice or more the available memory.
The main idea is to have new pages being read into the buddy cache (i.e. block allocation in progress) while a large memory allocation is happening at the same time.
The DIO chunk size should be large enough to trigger streaming allocation.
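
A single writer can be as simple as the sketch below (the 1 MiB chunk size and the 4096-byte O_DIRECT alignment are assumptions, adjust for the device); run several instances in parallel so the combined file size is at least twice the available memory:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1 << 20)		/* 1 MiB: large enough to trigger streaming allocation */

int main(int argc, char **argv)
{
	long long i, nchunks;
	void *buf;
	int fd;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <file> <size-in-MiB>\n", argv[0]);
		return 1;
	}
	nchunks = atoll(argv[2]);

	fd = open(argv[1], O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (posix_memalign(&buf, 4096, CHUNK)) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0xab, CHUNK);

	/*
	 * Streaming DIO writes: block allocation pulls bitmap/buddy pages
	 * into the buddy cache, while the data itself bypasses the LRU
	 * pagevecs.
	 */
	for (i = 0; i < nchunks; i++) {
		if (write(fd, buf, CHUNK) != CHUNK) {
			perror("write");
			break;
		}
	}

	free(buf);
	close(fd);
	return 0;
}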
You may also use the attached jprobe module to hit a BUG() if a buddy page is removed from memory by the shrinker.
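
For readers without the attachment, the idea is a jprobe along these lines (a simplified sketch, not the attached file itself; the probed symbol __delete_from_page_cache() and the buddy_ino parameter used to identify ext4's s_buddy_cache inode are illustrative choices):

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/mm.h>
#include <linux/fs.h>

/*
 * Inode number of ext4's internal s_buddy_cache inode, passed in by hand
 * when loading the module; 0 disables the check.
 */
static unsigned long buddy_ino;
module_param(buddy_ino, ulong, 0644);

/*
 * Handler with the same signature as the probed function.  On ~3.x kernels
 * __delete_from_page_cache() takes only the page; it is reached from both
 * reclaim and truncation, so this fires on any removal of a buddy page.
 */
static void jp_delete_from_page_cache(struct page *page)
{
	if (buddy_ino && page->mapping && page->mapping->host &&
	    page->mapping->host->i_ino == buddy_ino)
		BUG();

	jprobe_return();	/* mandatory: jump back to the real function */
}

static struct jprobe jp = {
	.entry	= jp_delete_from_page_cache,
	.kp	= {
		.symbol_name = "__delete_from_page_cache",
	},
};

static int __init jp_init(void)
{
	return register_jprobe(&jp);
}

static void __exit jp_exit(void)
{
	unregister_jprobe(&jp);
}

module_init(jp_init);
module_exit(jp_exit);
MODULE_LICENSE("GPL");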
[Attachment: "jprobe-1.c" (application/octet-stream, 2944 bytes)]