Message-ID: <57598E3E.3010705@intel.com>
Date: Thu, 9 Jun 2016 08:41:50 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: "Odzioba, Lukasz" <lukasz.odzioba@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
"mhocko@...e.com" <mhocko@...e.com>,
"aarcange@...hat.com" <aarcange@...hat.com>,
"vdavydov@...allels.com" <vdavydov@...allels.com>,
"mingli199x@...com" <mingli199x@...com>,
"minchan@...nel.org" <minchan@...nel.org>
Cc: "Anaczkowski, Lukasz" <lukasz.anaczkowski@...el.com>
Subject: Re: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival
On 06/09/2016 01:50 AM, Odzioba, Lukasz wrote:
> On 08-06-16 17:31:00, Dave Hansen wrote:
>> Do we have any statistics that tell us how many pages are sitting in
>> the lru pvecs? Although this helps the problem overall, don't we still
>> have a problem with memory being held in such an opaque place?
>
> From what I observed, the problem is mainly with lru_add_pvec; the
> rest are nearly empty most of the time. I added debug code to
> lru_add_drain_all() to see the sizes of the lru pvecs when I debugged this.
>
> Among lru_add_pvec, lru_rotate_pvecs, lru_deactivate_file_pvecs,
> lru_deactivate_pvecs, activate_page_pvecs almost all (3-4GB) of the
> missing memory was in lru_add_pvec, the rest was almost always empty.
Does your workload put large pages in and out of those pvecs, though?
If your system doesn't have any activity, then all we've shown is that
they're not a problem when not in use. But what about when we use them?
Have you, for instance, tried this on a system with memory pressure?