Message-ID: <Pine.LNX.4.58.0807241821580.19888@gandalf.stny.rr.com>
Date: Thu, 24 Jul 2008 18:22:22 -0400 (EDT)
From: Steven Rostedt <rostedt@...dmis.org>
To: Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>
cc: akpm@...ux-foundation.org, Ingo Molnar <mingo@...e.hu>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Masami Hiramatsu <mhiramat@...hat.com>,
"Frank Ch. Eigler" <fche@...hat.com>,
Hideo AOKI <haoki@...hat.com>,
Takashi Nishiie <t-nishiie@...css.fujitsu.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Eduard - Gabriel Munteanu <eduard.munteanu@...ux360.ro>
Subject: Re: [patch 02/17] Kernel Tracepoints
On Thu, 24 Jul 2008, Mathieu Desnoyers wrote:
> > > +
> > > +	if (nr_probes - nr_del == 0) {
> > > +		/* N -> 0, (N > 1) */
> > > +		entry->funcs = NULL;
> > > +		entry->refcount = 0;
> > > +		debug_print_probes(entry);
> > > +		return old;
> > > +	} else {
> > > +		int j = 0;
> > > +		/* N -> M, (N > 1, M > 0) */
> > > +		/* + 1 for NULL */
> > > +		new = kzalloc((nr_probes - nr_del + 1)
> > > +			* sizeof(void *), GFP_KERNEL);
> > > +		if (new == NULL)
> > > +			return ERR_PTR(-ENOMEM);
> >
> > Hmm, on failure of allocating a new array, we could simply use the
> > old array, and remove the one probe from it instead of just failing.
> >
>
> Nay, because of RCU constraints. Readers in the current RCU grace
> period need to see the old version, while readers in the following
> grace period need to see the new one, and both can be live on the
> system at the same time. We cannot reuse the same memory to shrink
> the array in place without corrupting the data seen by the earlier
> readers. We really have to perform a copy here.
Ah, good point. I forgot the whole RCU factor here.
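
For reference, here is a minimal sketch of the copy-on-remove pattern
Mathieu describes, written against the usual RCU primitives. The names
(probe_fn_t, struct my_entry, remove_probe) are illustrative only, and
the synchronize_rcu()-before-kfree() at the end stands in for however
the real patch defers freeing the old array.

/*
 * Sketch (not the actual tracepoint code): readers walk the probe
 * array under rcu_read_lock(), so the writer builds a smaller copy,
 * publishes it with rcu_assign_pointer(), and frees the old array
 * only after a grace period.  Updates are assumed to be serialized
 * by a mutex held by the caller.
 */
#include <linux/slab.h>
#include <linux/rcupdate.h>
#include <linux/errno.h>

typedef void (*probe_fn_t)(void *data);

struct my_entry {
	probe_fn_t *funcs;	/* NULL-terminated, RCU-protected */
};

static int remove_probe(struct my_entry *entry, probe_fn_t probe)
{
	probe_fn_t *old = entry->funcs, *new;
	int nr_probes, nr_del = 0, i, j = 0;

	if (!old)
		return -ENOENT;

	/* Count how many entries survive and how many go away. */
	for (nr_probes = 0; old[nr_probes]; nr_probes++)
		if (old[nr_probes] == probe)
			nr_del++;
	if (!nr_del)
		return -ENOENT;	/* probe not found */

	if (nr_probes - nr_del == 0) {
		/* N -> 0: nothing left, just unpublish the array. */
		rcu_assign_pointer(entry->funcs, NULL);
	} else {
		/* N -> M (M > 0): copy, never shrink in place. */
		new = kzalloc((nr_probes - nr_del + 1) * sizeof(*new),
			      GFP_KERNEL);	/* + 1 for NULL */
		if (!new)
			return -ENOMEM;
		for (i = 0; old[i]; i++)
			if (old[i] != probe)
				new[j++] = old[i];
		rcu_assign_pointer(entry->funcs, new);
	}

	/*
	 * Readers from the previous grace period may still be walking
	 * 'old'; wait for them before freeing.
	 */
	synchronize_rcu();
	kfree(old);
	return 0;
}

The key point is the last three lines: the old array is freed only
after every pre-existing reader is guaranteed to have left its
rcu_read_lock() section, which is exactly why an in-place shrink
cannot work.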
-- Steve