Message-ID: <20090130123858.GQ30821@kernel.dk>
Date: Fri, 30 Jan 2009 13:38:58 +0100
From: Jens Axboe <jens.axboe@...cle.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Rusty Russell <rusty@...tcorp.com.au>, npiggin@...e.de,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [PATCH -v3] use per cpu data for single cpu ipi calls
On Fri, Jan 30 2009, Peter Zijlstra wrote:
> On Fri, 2009-01-30 at 12:23 +0100, Jens Axboe wrote:
>
> > Peter, can you post a final complete patch for review and acks?
>
> Sure,
Looks good, you can add my acked-by to that. One small comment:
> if (!wait) {
> + /*
> + * We are calling a function on a single CPU
> + * and we are not going to wait for it to finish.
> + * We first try to allocate the data, but if we
> + * fail, we fall back to using per-cpu data to pass
> + * the information to that CPU. Since all callers
> + * of this code will use the same data, we must
> + * synchronize the callers to prevent a new caller
> + * from corrupting the data before the callee
> + * can access it.
> + *
> + * The CSD_FLAG_LOCK is used to let us know when
> + * the IPI handler is done with the data.
> + * The first caller will set it, and the callee
> + * will clear it. The next caller must wait for
> + * it to clear before setting it again. This
> + * makes sure the callee is done with the
> + * data before a new caller uses it.
> + * We use spinlocks to manage the callers.
> + */
That last sentence appears stale now, since there's no locking involved
for the per-cpu csd.
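
For anyone reading along, the flag protocol the comment describes
boils down to roughly the below. Just a sketch, not the actual patch:
csd_acquire() is a made-up helper, and the real code open-codes this
in smp_call_function_single() with preemption already disabled.

/* Rough sketch of the CSD_FLAG_LOCK protocol; illustrative only. */
static DEFINE_PER_CPU(struct call_single_data, csd_data);

static struct call_single_data *csd_acquire(void)
{
	struct call_single_data *data = &__get_cpu_var(csd_data);

	/*
	 * Spin until the IPI handler on the target CPU has cleared
	 * the flag left by the previous caller of this code.
	 */
	while (data->flags & CSD_FLAG_LOCK)
		cpu_relax();

	data->flags = CSD_FLAG_LOCK;
	return data;
}

/*
 * The IPI handler on the target CPU runs the callback first and
 * only then releases the slot, so the caller-side spin above is
 * what keeps a new caller from scribbling over in-flight data:
 */
data->func(data->info);
smp_wmb();	/* order the callback's stores before the release */
data->flags &= ~CSD_FLAG_LOCK;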
--
Jens Axboe