Message-ID: <alpine.DEB.1.10.0810281337530.6470@chino.kir.corp.google.com>
Date: Tue, 28 Oct 2008 13:48:53 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Peter Zijlstra <peterz@...radead.org>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux-foundation.org>,
Nick Piggin <npiggin@...e.de>, Paul Menage <menage@...gle.com>,
Derek Fults <dfults@....com>, linux-kernel@...r.kernel.org
Subject: Re: [patch 1/7] cpusets: add dirty map to struct address_space
On Tue, 28 Oct 2008, Peter Zijlstra wrote:
> > This patch implements the management of dirty node maps for an address
> > space through the following functions:
> >
> > cpuset_clear_dirty_nodes(mapping) Clear the map of dirty nodes
> >
> > cpuset_update_nodes(mapping, page) Record a node in the dirty nodes
> > map
> >
> > cpuset_init_dirty_nodes(mapping) Initialization of the map
> >
> >
> > The dirty map may be stored either directly in the mapping (for NUMA
> > systems with fewer than BITS_PER_LONG nodes) or separately allocated for
> > systems with a large number of nodes (e.g. ia64 with 1024 nodes).
> >
> > Updating the dirty map may involve allocating it first for large
> > configurations. Therefore, we protect the allocation and setting of a
> > node in the map through the tree_lock. The tree_lock is already taken
> > when a page is dirtied so there is no additional locking overhead if we
> > insert the updating of the nodemask there.
>
> I find this usage of tree lock most bothersome, as my concurrent
> pagecache patches take the lock out. In which case this _does_ cause
> extra locking overhead.
>
Yeah, if we don't serialize with tree_lock then we'll need to protect the
attachment of mapping->dirty_nodes with a new spinlock in struct
address_space (and only for configs where MAX_NUMNODES > BITS_PER_LONG).
That locking overhead is negligible when mapping->dirty_nodes is non-NULL
since there's no requirement to protect the setting of the node in the
nodemask.
Are your concurrent pagecache patches in the latest mmotm? If so, I can
rebase this entire patchset off that.