Message-ID: <20150415114220.GG17717@twins.programming.kicks-ass.net>
Date: Wed, 15 Apr 2015 13:42:20 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Mel Gorman <mgorman@...e.de>
Cc: Linux-MM <linux-mm@...ck.org>, Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Dave Hansen <dave.hansen@...el.com>,
Andi Kleen <andi@...stfloor.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/4] mm: Gather more PFNs before sending a TLB to flush
unmapped pages
On Wed, Apr 15, 2015 at 11:42:55AM +0100, Mel Gorman wrote:
> +/*
> + * Use a page to store as many PFNs as possible for batch unmapping. Adjusting
> + * this trades memory usage for number of IPIs sent
> + */
> +#define BATCH_TLBFLUSH_SIZE \
> + ((PAGE_SIZE - sizeof(struct cpumask) - sizeof(unsigned long)) / sizeof(unsigned long))
>
> /* Track pages that require TLB flushes */
> struct unmap_batch {
> +	/* Update BATCH_TLBFLUSH_SIZE when adjusting this structure */
> 	struct cpumask cpumask;
> 	unsigned long nr_pages;
> 	unsigned long pfns[BATCH_TLBFLUSH_SIZE];
The alternative is something like:
struct unmap_batch {
	struct cpumask cpumask;
	unsigned long nr_pages;
	unsigned long pfns[0];
};

#define BATCH_TLBFLUSH_SIZE \
	((PAGE_SIZE - sizeof(struct unmap_batch)) / sizeof(unsigned long))
and unconditionally allocate 1 page. This saves you from having to worry
about the layout of struct unmap_batch.
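
As an illustration only (alloc_unmap_batch()/free_unmap_batch() are made-up
helper names, not anything in the patch), the single-page allocation could
look like:

	#include <linux/cpumask.h>
	#include <linux/gfp.h>

	static struct unmap_batch *alloc_unmap_batch(void)
	{
		/* One page holds the header plus BATCH_TLBFLUSH_SIZE pfns. */
		struct unmap_batch *batch = (void *)__get_free_page(GFP_KERNEL);

		if (batch) {
			cpumask_clear(&batch->cpumask);
			batch->nr_pages = 0;
		}
		return batch;
	}

	static void free_unmap_batch(struct unmap_batch *batch)
	{
		free_page((unsigned long)batch);
	}

Each unmapped page then lands in batch->pfns[batch->nr_pages++], with the
flush presumably sent once nr_pages reaches BATCH_TLBFLUSH_SIZE.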