Message-ID: <1272893189.5605.119.camel@twins>
Date: Mon, 03 May 2010 15:26:29 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: mingo@...e.hu, linux-kernel@...r.kernel.org, x86@...nel.org,
oleg@...hat.com, rusty@...tcorp.com.au, sivanich@....com,
heiko.carstens@...ibm.com, dipankar@...ibm.com,
josh@...edesktop.org, paulmck@...ux.vnet.ibm.com,
akpm@...ux-foundation.org, arjan@...ux.intel.com,
torvalds@...ux-foundation.org
Subject: Re: [PATCH 1/4] cpu_stop: implement stop_cpu[s]()
On Thu, 2010-04-22 at 18:09 +0200, Tejun Heo wrote:
> +static int cpu_stopper_thread(void *data)
> +{
> +	struct cpu_stopper *stopper = data;
BUG_ON(stopper != &__get_cpu_var(cpu_stopper)); ?
> +	work = NULL;
> +	spin_lock_irq(&stopper->lock);
> +	if (!list_empty(&stopper->works)) {
> +		work = list_first_entry(&stopper->works,
> +					struct cpu_stop_work, list);
> +		list_del_init(&work->list);
> +	}
> +	spin_unlock_irq(&stopper->lock);
Not sure if it's worth the hassle, but you could list_splice_init() the
complete pending list onto a local list, possibly avoiding some lock
acquisitions. But since this isn't supposed to be used much, I doubt
we'll ever see the difference.
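Something like this, purely as an untested sketch (reusing the stopper
and work names from the patch; the actual execution of the work item is
elided):

	LIST_HEAD(todo);

	spin_lock_irq(&stopper->lock);
	list_splice_init(&stopper->works, &todo);
	spin_unlock_irq(&stopper->lock);

	/* run everything we grabbed without re-taking stopper->lock */
	while (!list_empty(&todo)) {
		work = list_first_entry(&todo, struct cpu_stop_work, list);
		list_del_init(&work->list);
		/* ... execute work->fn(work->arg) and complete as before ... */
	}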
> +	/* restore preemption and check it's still balanced */
> +	preempt_enable();
> +	WARN_ON_ONCE(preempt_count());
You could use WARN_ONCE() and print the function that last ran and
leaked the preempt count.
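Roughly (untested, and assuming the fn/arg locals from earlier in the
function are still in scope at this point):

	/* restore preemption and check it's still balanced */
	preempt_enable();
	WARN_ONCE(preempt_count(),
		  "cpu_stop: %pF(%p) leaked preempt count\n", fn, arg);

%pF makes printk resolve the function name via kallsyms, so the
offender shows up directly in the warning.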