Message-ID: <20060829002302.GC32697@in.ibm.com>
Date: Tue, 29 Aug 2006 05:53:02 +0530
From: Dipankar Sarma <dipankar@...ibm.com>
To: Andrew Morton <akpm@...l.org>
Cc: Alan Stern <stern@...land.harvard.edu>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Paul E McKenney <paulmck@...ibm.com>
Subject: Re: [PATCH 0/4] RCU: various merge candidates
On Mon, Aug 28, 2006 at 12:40:58PM -0700, Andrew Morton wrote:
> On Tue, 29 Aug 2006 00:46:42 +0530
> Dipankar Sarma <dipankar@...ibm.com> wrote:
>
> >
> > rcutorture fix patches independent of rcu implementation changes
> > in this patchset.
>
> So this patchset is largely orthogonal to the presently-queued stuff?
Yes, it should be.
> > > Now what?
> >
> > Heh. I can always re-submit against -mm after I wait for a day or two
> > for comments :)
>
> That would be good, thanks. We were seriously considering merging all the
> SRCU stuff for 2.6.18, because
I think the non-SRCU rcutorture patches can be merged in 2.6.19. SRCU
is a tossup. Perhaps SRCU and this patchset could be merge candidates
for 2.6.20, should things go well in review and testing. Should I re-submit
against 2.6.18-mm1 or so (after your patchset shrinks)?
What would be a convenient time?
> cpufreq-make-the-transition_notifier-chain-use-srcu.patch fixes a cpufreq
> down()-in-irq-disabled warning at suspend time.
>
> But that's a lot of new stuff just to fix a warning about something which
> won't actually cause any misbehaviour. We could just as well do
>
> 	if (irqs_disabled())
> 		down_read_trylock(...);		/* suspend */
> 	else
> 		down_read(...);
>
> in cpufreq to temporarily shut the thing up.
Gah! cpufreq. I am already having to look at all of cpufreq and the
cpufreq drivers, changing the notifiers and the whole locking model
for the other (hotplug) cleanup. I will keep this in mind.
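
For reference, here is roughly how I read that trylock suggestion -- a
minimal sketch only, with an illustrative lock name standing in for
whatever rwsem ends up guarding the transition notifier chain, not the
actual cpufreq code:

	#include <linux/rwsem.h>
	#include <linux/irqflags.h>

	/* Illustrative only: stands in for the rwsem protecting the
	 * cpufreq transition notifier chain. */
	static DECLARE_RWSEM(example_transition_sem);

	static void example_notify_transition(void)
	{
		int locked = 1;

		if (irqs_disabled())
			/* suspend path: trylock never sleeps, so no
			 * down()-with-irqs-disabled warning */
			locked = down_read_trylock(&example_transition_sem);
		else
			down_read(&example_transition_sem);

		/* ... invoke the notifier chain here ... */

		if (locked)
			up_read(&example_transition_sem);
	}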
Thanks
Dipankar