Message-ID: <CALvZod53p3P_cM9bZVtjgQ34Y5+d+bF__sOpgUfofMS7n8Ugog@mail.gmail.com>
Date: Thu, 19 Oct 2017 12:19:20 -0700
From: Shakeel Butt <shakeelb@...gle.com>
To: "Kirill A. Shutemov" <kirill@...temov.name>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...e.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Minchan Kim <minchan@...nel.org>,
Yisheng Xie <xieyisheng1@...wei.com>,
Ingo Molnar <mingo@...nel.org>,
Greg Thelen <gthelen@...gle.com>,
Hugh Dickins <hughd@...gle.com>, Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: mlock: remove lru_add_drain_all()
On Thu, Oct 19, 2017 at 3:18 AM, Kirill A. Shutemov
<kirill@...temov.name> wrote:
> On Wed, Oct 18, 2017 at 04:17:30PM -0700, Shakeel Butt wrote:
>> Recently we observed high mlock() latency in our generic library and
>> noticed that users have started mlocking tmpfs files even on systems
>> without swap; the latency was due to the expensive remote LRU cache
>> draining.
>
> Hm. Isn't the point of mlock() to pay the price upfront and make
> execution smoother afterwards?
>
> With this you shift latency onto reclaim (and future memory allocation).
>
> I'm not sure if it's a win.
>
It will not shift latency onto the fast path of memory allocation.
Reclaim runs on the slow path, and only reclaim may encounter more
mlocked pages.
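For reference, reclaim already handles such pages lazily. A heavily
condensed sketch, paraphrased from the rmap walk in try_to_unmap_one()
rather than quoted verbatim: when reclaim finds a page mapped into a
VM_LOCKED vma, it mlocks it on the spot and gives up on reclaiming it,
so the page lands on the unevictable LRU:

	/* inside the rmap walk over the page's mappings */
	if (vma->vm_flags & VM_LOCKED) {
		/* fix the page up lazily: set PageMlocked so that
		 * putback moves it to the unevictable LRU */
		mlock_vma_page(page);
		return false;	/* abort reclaim of this page */
	}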
Please also note that the truly antagonistic workload, i.e. one where,
for each mlock() on a CPU, the pages being mlocked happen to be sitting
in the per-CPU LRU caches of other CPUs, is very hard to trigger.
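To put the removed cost in context: lru_add_drain_all() queues work on
every CPU holding pages in its per-CPU LRU pagevecs and then waits for
all of it, so mlock() stalls behind every remote CPU. Roughly, a
simplified sketch of the mm/swap.c implementation, with the has_work
cpumask and locking dropped:

	void lru_add_drain_all(void)
	{
		int cpu;

		/* flush each CPU's LRU pagevecs via a per-cpu work item */
		for_each_online_cpu(cpu)
			queue_work_on(cpu, mm_percpu_wq,
				      &per_cpu(lru_add_drain_work, cpu));

		/* the caller stalls here until every CPU has drained */
		for_each_online_cpu(cpu)
			flush_work(&per_cpu(lru_add_drain_work, cpu));
	}

With the drain gone, the only pages reclaim can later stumble over are
the ones still sitting in those remote pagevecs, which is exactly the
antagonistic case above.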