Message-ID: <d52c53fc-60c7-21ca-08ab-f58cd4b403f1@suse.cz>
Date: Tue, 13 Dec 2016 13:32:58 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH] mm: fadvise: avoid expensive remote LRU cache draining
after FADV_DONTNEED

On 12/12/2016 04:55 PM, Johannes Weiner wrote:
> On Mon, Dec 12, 2016 at 10:21:24AM +0100, Vlastimil Babka wrote:
>> On 12/10/2016 06:26 PM, Johannes Weiner wrote:
>>> When FADV_DONTNEED cannot drop all pages in the range, it observes
>>> that some pages might still be on per-cpu LRU caches after recent
>>> instantiation and so initiates remote calls to all CPUs to flush their
>>> local caches. However, in most cases, the fadvise happens from the
>>> same context that instantiated the pages, and any pre-LRU pages in the
>>> specified range are most likely sitting on the local CPU's LRU cache,
>>> and so in many cases this results in unnecessary remote calls, which,
>>> in a loaded system, can hold up the fadvise() call significantly.
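
For context, the change boils down to draining only the local CPU's LRU
cache in the FADV_DONTNEED path instead of scheduling drain work on every
CPU. A rough sketch of that path with the patch applied (simplified from
mm/fadvise.c; the writeback step and partial-page handling are elided, so
treat the exact variable names as illustrative):

    count = invalidate_mapping_pages(mapping, start_index, end_index);

    /*
     * If not every page could be invalidated, some are likely still
     * sitting on this CPU's LRU cache from the recent instantiation.
     * Drain only the local cache and retry, rather than calling
     * lru_add_drain_all(), which queues drain work on every online CPU.
     */
    if (count < (end_index - start_index + 1)) {
        lru_add_drain();    /* was: lru_add_drain_all() */
        invalidate_mapping_pages(mapping, start_index, end_index);
    }
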
>>
>> Got any numbers for this part?
>
> I didn't record it in the extreme case we observed, unfortunately. We
> had a slow-to-respond system and noticed it spending seconds in
> lru_add_drain_all() after fadvise calls, and this patch came out of
> thinking about the code and how we commonly call FADV_DONTNEED.
>
> FWIW, I wrote a silly directory tree walker/searcher that recurses
> through /usr to read and FADV_DONTNEED each file it finds. On a 2
> socket 40 ht machine, over 1% is spent in lru_add_drain_all(). With
> the patch, that cost is gone; the local drain cost shows at 0.09%.
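
The walker itself wasn't posted, but an equivalent workload is basically
just reading each regular file under a tree and immediately
FADV_DONTNEEDing it. A minimal stand-in could look like the following
(untested sketch; file name and buffer size are arbitrary):

    /* fadvise-walk.c: read + FADV_DONTNEED every regular file in a tree.
     * Build: gcc -O2 -o fadvise-walk fadvise-walk.c
     * Run:   ./fadvise-walk /usr
     */
    #define _XOPEN_SOURCE 700
    #include <fcntl.h>
    #include <ftw.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static char buf[1 << 20];

    static int visit(const char *path, const struct stat *sb,
                     int typeflag, struct FTW *ftwbuf)
    {
        int fd;

        if (typeflag != FTW_F)
            return 0;           /* regular files only */

        fd = open(path, O_RDONLY);
        if (fd < 0)
            return 0;           /* skip unreadable files */

        /* Instantiate page cache pages for the whole file... */
        while (read(fd, buf, sizeof(buf)) > 0)
            ;

        /* ...then immediately ask the kernel to drop them again. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

        close(fd);
        return 0;
    }

    int main(int argc, char **argv)
    {
        /* Recurse the tree, calling visit() for every entry. */
        if (nftw(argc > 1 ? argv[1] : "/usr", visit, 64, FTW_PHYS) < 0) {
            perror("nftw");
            return 1;
        }
        return 0;
    }
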
Thanks, worth adding to changelog :)

Vlastimil