Message-ID: <57583A49.30809@intel.com>
Date: Wed, 8 Jun 2016 08:31:21 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Lukasz Odzioba <lukasz.odzioba@...el.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
mhocko@...e.com, aarcange@...hat.com, vdavydov@...allels.com,
mingli199x@...com, minchan@...nel.org
Cc: lukasz.anaczkowski@...el.com
Subject: Re: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival

On 06/08/2016 07:35 AM, Lukasz Odzioba wrote:
> diff --git a/mm/swap.c b/mm/swap.c
> index 9591614..3fe4f18 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -391,9 +391,8 @@ static void __lru_cache_add(struct page *page)
> struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
>
> get_page(page);
> - if (!pagevec_space(pvec))
> + if (!pagevec_add(pvec, page) || PageCompound(page))
> __pagevec_lru_add(pvec);
> - pagevec_add(pvec, page);
> put_cpu_var(lru_add_pvec);
> }

Lukasz,

Do we have any statistics that tell us how many pages are sitting in the
lru pvecs?  Although this helps the problem overall, don't we still have
a problem with memory being held in such an opaque place?

I think if we're going to be hacking around this area, we should also
add something to vmstat or zoneinfo to spell out how many of these
things there are.
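
Something along these lines could do the summing (purely a sketch: the
helper name is made up, and it would have to live in mm/swap.c where the
per-cpu lru_add_pvec is defined):

/*
 * Hypothetical helper: count pages currently parked in the per-cpu
 * lru_add pvecs.  The result is only approximate, since the pvecs can
 * be drained or refilled while we walk the CPUs.
 */
static unsigned long lru_add_pvec_pages(void)
{
	unsigned long nr = 0;
	int cpu;

	for_each_online_cpu(cpu)
		nr += pagevec_count(&per_cpu(lru_add_pvec, cpu));

	return nr;
}

That count could then be hooked up to a vmstat counter or a line in
/proc/zoneinfo so it shows up alongside the other per-zone numbers.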