Message-ID: <20090421085007.GG12713@csn.ul.ie>
Date: Tue, 21 Apr 2009 09:50:08 +0100
From: Mel Gorman <mel@....ul.ie>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Linux Memory Management List <linux-mm@...ck.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Christoph Lameter <cl@...ux-foundation.org>,
Nick Piggin <npiggin@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Lin Ming <ming.m.lin@...el.com>,
Zhang Yanmin <yanmin_zhang@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 18/25] Do not disable interrupts in free_page_mlock()
On Tue, Apr 21, 2009 at 10:55:07AM +0300, Pekka Enberg wrote:
> On Mon, 2009-04-20 at 23:20 +0100, Mel Gorman wrote:
> > free_page_mlock() tests and clears PG_mlocked using locked versions of the
> > bit operations. If the bit was set, it disables interrupts to update the
> > counters, and this happens on every page free even though interrupts are
> > disabled again very shortly afterwards. This is wasteful.
> >
> > This patch splits what free_page_mlock() does. The bit check is still
> > made. However, the update of counters is delayed until the interrupts are
> > disabled and the non-lock version for clearing the bit is used. One potential
> > weirdness with this split is that the counters do not get updated if the
> > bad_page() check is triggered, but a system showing bad pages is in
> > trouble already.
> >
> > Signed-off-by: Mel Gorman <mel@....ul.ie>
> > Reviewed-by: Christoph Lameter <cl@...ux-foundation.org>
>
> Reviewed-by: Pekka Enberg <penberg@...helsinki.fi>
>
> > ---
> > mm/internal.h | 11 +++--------
> > mm/page_alloc.c | 8 +++++++-
> > 2 files changed, 10 insertions(+), 9 deletions(-)
> >
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 987bb03..58ec1bc 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -157,14 +157,9 @@ static inline void mlock_migrate_page(struct page *newpage, struct page *page)
> > */
> > static inline void free_page_mlock(struct page *page)
> > {
> > - if (unlikely(TestClearPageMlocked(page))) {
> > - unsigned long flags;
> > -
> > - local_irq_save(flags);
> > - __dec_zone_page_state(page, NR_MLOCK);
> > - __count_vm_event(UNEVICTABLE_MLOCKFREED);
> > - local_irq_restore(flags);
> > - }
>
> Maybe add a VM_BUG_ON(!PageMlocked(page))?
>
We always check in the caller, and I don't see the callers of this function
expanding. I can add it if you insist, but I don't think it will catch
anything in this case.
> > + __ClearPageMlocked(page);
> > + __dec_zone_page_state(page, NR_MLOCK);
> > + __count_vm_event(UNEVICTABLE_MLOCKFREED);
> > }
> >
> > #else /* CONFIG_HAVE_MLOCKED_PAGE_BIT */
>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab