Message-ID: <20160608160653.GB21838@dhcp22.suse.cz>
Date: Wed, 8 Jun 2016 18:06:53 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Lukasz Odzioba <lukasz.odzioba@...el.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
akpm@...ux-foundation.org, kirill.shutemov@...ux.intel.com,
aarcange@...hat.com, vdavydov@...allels.com, mingli199x@...com,
minchan@...nel.org, lukasz.anaczkowski@...el.com
Subject: Re: [PATCH 1/1] mm/swap.c: flush lru_add pvecs on compound page arrival

On Wed 08-06-16 08:31:21, Dave Hansen wrote:
> On 06/08/2016 07:35 AM, Lukasz Odzioba wrote:
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 9591614..3fe4f18 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -391,9 +391,8 @@ static void __lru_cache_add(struct page *page)
> > struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
> >
> > get_page(page);
> > - if (!pagevec_space(pvec))
> > + if (!pagevec_add(pvec, page) || PageCompound(page))
> > __pagevec_lru_add(pvec);
> > - pagevec_add(pvec, page);
> > put_cpu_var(lru_add_pvec);
> > }
>
> Lukasz,
>
> Do we have any statistics that tell us how many pages are sitting in the
> lru pvecs? Although this helps the problem overall, don't we still have
> a problem with memory being held in such an opaque place?
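
[Editor's note: a minimal userspace sketch of the flow the quoted hunk
produces, for readers following along. The struct names and the helpers
fake_pvec, pvec_add(), drain() and lru_cache_add() are stand-ins invented
here for illustration; the real kernel code uses struct pagevec,
pagevec_add(), PageCompound() and __pagevec_lru_add().]

    /*
     * Userspace model of the patched __lru_cache_add() logic: pages are
     * cached in a small per-CPU vector and pushed to the LRU when the
     * vector fills up, or immediately when a compound (huge) page arrives.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define PAGEVEC_SZ 14                   /* matches PAGEVEC_SIZE upstream */

    struct fake_page { bool compound; };

    struct fake_pvec {
            unsigned int nr;
            struct fake_page *pages[PAGEVEC_SZ];
    };

    /* Like pagevec_add(): returns the space left after adding the page. */
    static unsigned int pvec_add(struct fake_pvec *pvec, struct fake_page *page)
    {
            pvec->pages[pvec->nr++] = page;
            return PAGEVEC_SZ - pvec->nr;
    }

    /* Stand-in for __pagevec_lru_add(): move everything to the "LRU". */
    static void drain(struct fake_pvec *pvec)
    {
            printf("draining %u page(s) to the LRU\n", pvec->nr);
            pvec->nr = 0;
    }

    static void lru_cache_add(struct fake_pvec *pvec, struct fake_page *page)
    {
            /* Flush when the pvec fills up or when a compound page arrives. */
            if (!pvec_add(pvec, page) || page->compound)
                    drain(pvec);
    }

    int main(void)
    {
            struct fake_pvec pvec = { .nr = 0 };
            struct fake_page base = { .compound = false };
            struct fake_page thp  = { .compound = true };

            lru_cache_add(&pvec, &base);    /* stays cached in the pvec */
            lru_cache_add(&pvec, &thp);     /* compound page: drained at once */
            return 0;
    }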
Is it really worth bothering when we are talking about 56kB per CPU
(after this patch)?
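
[Editor's note: a back-of-envelope check of the "56kB per CPU" figure,
assuming PAGEVEC_SIZE of 14 and 4kB base pages (and 2MB THPs for the
pre-patch worst case); the numbers are not from the thread itself.]

    #include <stdio.h>

    int main(void)
    {
            unsigned long slots = 14;               /* PAGEVEC_SIZE */
            unsigned long base  = 4UL << 10;        /* 4kB base page */
            unsigned long thp   = 2UL << 20;        /* 2MB THP */

            /* With the patch, compound pages are drained immediately, so the
             * lru_add pvec can only hold base pages: 14 * 4kB = 56kB per CPU. */
            printf("after patch:  %lu kB per CPU\n", slots * base >> 10);

            /* Without it, a pvec full of THPs could sit per CPU: 14 * 2MB = 28MB. */
            printf("before patch: %lu MB per CPU\n", slots * thp >> 20);
            return 0;
    }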
--
Michal Hocko
SUSE Labs