Message-ID: <20061016192615.GA3746@localhost.localdomain>
Date: Mon, 16 Oct 2006 12:26:15 -0700
From: Ravikiran G Thirumalai <kiran@...lex86.org>
To: Andrew Morton <akpm@...l.org>
Cc: linux-kernel@...r.kernel.org,
Christoph Lameter <clameter@...r.sgi.com>,
Alok Kataria <alok.kataria@...softinc.com>,
"Shai Fultheim (Shai@...lex86.org)" <shai@...lex86.org>,
"Benzi Galili (Benzi@...leMP.com)" <benzi@...lemp.com>
Subject: Re: [patch] slab: Fix a cpu hotplug race condition while tuning slab cpu caches
On Mon, Oct 16, 2006 at 11:15:11AM -0700, Andrew Morton wrote:
> On Mon, 16 Oct 2006 01:54:39 -0700
> Ravikiran G Thirumalai <kiran@...lex86.org> wrote:
>
> The problem is obvious: we have some data (the array caches) and we have a
> data structure which is used to look up that data (cpu_online_map). But
> we're releasing the lock while these two things are in an inconsistent
> state.
>
> So you could have fixed this by taking cache_chain_mutex in CPU_UP_PREPARE
> and releasing it in CPU_ONLINE and CPU_UP_CANCELED.
Hmm, yes. I suppose so. Maybe we can do away with the other uses of
lock_cpu_hotplug() in slab.c as well then! Will give it a shot. The slab
locking might end up looking even uglier than it already is, though, no?
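
For reference, a rough sketch (not the actual mm/slab.c code, just my
reading of the suggestion) of what moving the mutex into the notifier
might look like -- hold cache_chain_mutex from CPU_UP_PREPARE until
CPU_ONLINE or CPU_UP_CANCELED, so cpu_online_map and the per-cpu array
caches can never be observed in an inconsistent state:

static int cpuup_callback(struct notifier_block *nfb,
                          unsigned long action, void *hcpu)
{
        struct kmem_cache *cachep;

        switch (action) {
        case CPU_UP_PREPARE:
                /* Take the mutex here and do NOT drop it before returning. */
                mutex_lock(&cache_chain_mutex);
                list_for_each_entry(cachep, &cache_chain, next) {
                        /* allocate and install the new cpu's array caches */
                }
                break;
        case CPU_ONLINE:
                /*
                 * The cpu is now in cpu_online_map and its array caches
                 * are in place, so the two are consistent again: unlock.
                 */
                mutex_unlock(&cache_chain_mutex);
                break;
        case CPU_UP_CANCELED:
                /* Bringup failed: tear down what UP_PREPARE set up, then unlock. */
                mutex_unlock(&cache_chain_mutex);
                break;
        }
        return NOTIFY_OK;
}

The real callback obviously handles more cases (CPU_DEAD and the error
paths), but the point is that the lock/unlock now straddles the window
in which cpu_online_map and the array caches disagree.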
>
> > list_for_each_entry(cachep, &cache_chain, next) {
> > @@ -4087,6 +4088,7 @@ ssize_t slabinfo_write(struct file *file
> > }
> > }
> > mutex_unlock(&cache_chain_mutex);
> > + unlock_cpu_hotplug();
> > if (res >= 0)
> > res = count;
> > return res;
>
> Given that this lock_cpu_hotplug() happens at a high level I guess it'll
> avoid the usual lock_cpu_hotplug() horrors and we can live with it. I
> assume lockdep was enabled when you were testing this?
Not when I tested it. I just retested with lockdep enabled and things
seemed fine on an SMP box.
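
For completeness, the locking order the patch ends up with in
slabinfo_write() looks roughly like this (sketch only, assuming the
matching lock_cpu_hotplug() is taken just before cache_chain_mutex, as
in the full patch):

ssize_t slabinfo_write(struct file *file, const char __user *buffer,
                       size_t count, loff_t *ppos)
{
        struct kmem_cache *cachep;
        int res = -EINVAL;

        /* ... copy in and parse "name limit batchcount shared" ... */

        lock_cpu_hotplug();             /* pin cpu_online_map first */
        mutex_lock(&cache_chain_mutex);
        list_for_each_entry(cachep, &cache_chain, next) {
                /* on a name match, res = do_tune_cpucache(...) */
        }
        mutex_unlock(&cache_chain_mutex);
        unlock_cpu_hotplug();           /* release in reverse order */

        if (res >= 0)
                res = count;
        return res;
}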
Thanks,
Kiran