Message-ID: <Pine.LNX.4.44L0.0611172318180.8754-100000@netrider.rowland.org>
Date: Fri, 17 Nov 2006 23:33:45 -0500 (EST)
From: Alan Stern <stern@...land.harvard.edu>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
cc: Jens Axboe <jens.axboe@...cle.com>,
Linus Torvalds <torvalds@...l.org>,
Thomas Gleixner <tglx@...esys.com>,
Ingo Molnar <mingo@...e.hu>,
LKML <linux-kernel@...r.kernel.org>,
john stultz <johnstul@...ibm.com>,
David Miller <davem@...emloft.net>,
Arjan van de Ven <arjan@...radead.org>,
Andrew Morton <akpm@...l.org>, Andi Kleen <ak@...e.de>,
<manfred@...orfullife.com>, <oleg@...sign.ru>
Subject: Re: [patch] cpufreq: mark cpufreq_tsc() as core_initcall_sync
On Fri, 17 Nov 2006, Paul E. McKenney wrote:
> > Perhaps a better approach to the initialization problem would be to assume
> > that either:
> >
> > 1. The srcu_struct will be initialized before it is used, or
> >
> > 2. When it is used before initialization, the system is running
> > only one thread.
>
> Are these assumptions valid? If so, they would indeed simplify things
> a bit.
I don't know. Maybe Andrew can tell us -- is it true that the kernel runs
only one thread up through the time the core_initcalls are finished?
If not, can we create another initcall level that is guaranteed to run
before any threads are spawned?
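
To make assumption 2 concrete: if early boot really is single-threaded,
an srcu_struct could initialize itself lazily on first use, with no
locking around the check.  A rough, untested sketch (field names follow
the srcu_struct in Paul's patches; lazy_srcu_read_lock is just an
illustrative name, not anything posted):

	/*
	 * Safe only under assumption 2 above: any use before
	 * init_srcu_struct() runs happens while the kernel is still
	 * single-threaded, so the unlocked check cannot race.
	 */
	static int lazy_srcu_read_lock(struct srcu_struct *sp)
	{
		if (unlikely(!sp->per_cpu_ref))
			init_srcu_struct(sp);
		return srcu_read_lock(sp);
	}

Once multiple threads exist, that unlocked check is no longer safe,
which is why the question above matters.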
> For the moment, I cheaped out and used a mutex_trylock. If this can block,
> I will need to add a separate spinlock to guard per_cpu_ref allocation.
I haven't looked at your revised patch yet... But it's important to keep
things as simple as possible.
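
For what it's worth, the usual shape of the "separate spinlock"
alternative is to allocate outside the lock (alloc_percpu() can sleep)
and take the lock only to publish the pointer.  A sketch, assuming the
srcu_struct_array type from Paul's patch; srcu_init_lock is a made-up
global:

	static DEFINE_SPINLOCK(srcu_init_lock);

	static void srcu_alloc_per_cpu_ref(struct srcu_struct *sp)
	{
		struct srcu_struct_array *p;

		p = alloc_percpu(struct srcu_struct_array);
		if (!p)
			return;
		spin_lock(&srcu_init_lock);
		if (!sp->per_cpu_ref) {
			sp->per_cpu_ref = p;	/* won the race */
			p = NULL;
		}
		spin_unlock(&srcu_init_lock);
		if (p)
			free_percpu(p);		/* lost the race */
	}

That keeps the lock hold time trivial, at the cost of a wasted
allocation when two initializers race.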
> Hmmm... How to test this? Time for the wrapper around alloc_percpu()
> that randomly fails, I guess. ;-)
Do you really want things to continue in a highly degraded mode when
percpu allocation fails? Maybe it would be better just to pass the
failure back to the caller.
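
That said, if you do want a randomly failing wrapper for testing, it
can be as dumb as this (alloc_percpu_flaky is a hypothetical name, and
the low bits of jiffies stand in for real randomness):

	#include <linux/jiffies.h>

	/* Debug-only shim: fail roughly one call in eight. */
	#define alloc_percpu_flaky(type) \
		((jiffies & 7) ? alloc_percpu(type) : NULL)

That would exercise the failure path either way -- whether it ends up
degrading gracefully or just returning the error to the caller.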
Alan Stern