Message-ID: <6599ad831003050403v2e988723k1b6bf38d48707ab1@mail.gmail.com>
Date: Fri, 5 Mar 2010 04:03:26 -0800
From: Paul Menage <menage@...gle.com>
To: miaox@...fujitsu.com
Cc: David Rientjes <rientjes@...gle.com>,
Lee Schermerhorn <lee.schermerhorn@...com>,
Nick Piggin <npiggin@...e.de>,
Linux-Kernel <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 4/4] cpuset,mm: use rwlock to protect task->mempolicy and mems_allowed
On Wed, Mar 3, 2010 at 2:52 AM, Miao Xie <miaox@...fujitsu.com> wrote:
> if MAX_NUMNODES > BITS_PER_LONG, loading/storing task->mems_allowed or mems_allowed in
> task->mempolicy are not atomic operations, and the kernel page allocator can see an empty
> mems_allowed while task->mems_allowed or mems_allowed in task->mempolicy is being updated.
> So we use a rwlock to protect them to fix this problem.
Rather than adding locks, if the intention is just to avoid the
allocator seeing an empty nodemask couldn't we instead do the
equivalent of:
    current->mems_allowed |= new_mask;
    current->mems_allowed = new_mask;
i.e. effectively set all new bits in the nodemask first, and then
clear all old bits that are no longer in the new mask. The only
downside of this is that a page allocation that races with the update
could potentially allocate from any node in the union of the old and
new nodemasks - but that's the case anyway for an allocation that
races with an update, so I don't see that it's any worse.
Paul
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/