Message-ID: <1276103149.4343.350.camel@mulgrave.site>
Date: Wed, 09 Jun 2010 13:05:49 -0400
From: James Bottomley <James.Bottomley@...e.de>
To: Florian Mickler <florian@...kler.org>
Cc: pm list <linux-pm@...ts.linux-foundation.org>,
markgross@...gnar.org, mgross@...ux.intel.com,
linville@...driver.com, Frederic Weisbecker <fweisbec@...il.com>,
Jonathan Corbet <corbet@....net>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4] pm_qos: make update_request non blocking
On Wed, 2010-06-09 at 18:32 +0200, Florian Mickler wrote:
> On Wed, 09 Jun 2010 12:07:25 -0400
> James Bottomley <James.Bottomley@...e.de> wrote:
>
> > On Wed, 2010-06-09 at 18:00 +0200, Florian Mickler wrote:
> > > On Wed, 09 Jun 2010 11:37:12 -0400
> > > James Bottomley <James.Bottomley@...e.de> wrote:
> > >
> > >
> > > > This still isn't resilient against two successive calls to update. If
> > > > the second one gets to schedule_work() before the work of the first one
> > > > has finished, you'll corrupt the workqueue.
> > >
> > > Sorry, I don't see it. Can you elaborate?
> > >
> > > In run_workqueue() we do a list_del_init(), which resets the
> > > work item's list pointers, and only after that clear the
> > > WORK_STRUCT_PENDING bit of said structure.
> > >
> > >
> > > schedule_work() does a queue_work_on(), which does a test_and_set_bit()
> > > on the WORK_STRUCT_PENDING bit of the work and only afterwards queues
> > > the work via list_add_tail() in insert_work().
> > >
> > > Where is my thinko? Or was this fixed while you weren't looking?
> > >
> > > So what _can_ happen is that we miss a new notification while the old
> > > notification is still in the queue. But I don't think this is a problem.
> >
> > OK, so the expression of the race is that the latest notification gets
> > lost. If something is tracking values, you'd really like to lose the
> > previous one (which is now irrelevant) not the latest one. The point is
> > there's still a race.
> >
> > James
> >
>
> Yeah, but for blocking notification it is not that bad.
The network latency notifier uses the value to recalculate something.
Losing the last value will mean it's using stale data.
> Doesn't using blocking notifiers mean that you always need to check
> via pm_qos_request() to get the latest value anyway? I.e. the
> notification is just a hint that something changed, so you can act on
> it. But if you act (be it because of a notification or because of
> something else), you have to get the current value anyway.
Well, the network notifier assumes the notifier value is the latest ...
> I think there are 2 schemes of operation: one that needs atomic
> notifiers, where it would be bad if we lost any notification, and
> another where notification is just a way of getting some work done in
> a timely fashion but it isn't critical that it happens right away.
>
> pm_qos was the second kind of operation up until now, because the
> notifiers may have got scheduled away while blocked.
Actually, no ... the current API preserves ordering semantics. If a
notifier sleeps and another change comes along, the update callsite will
sleep on the notifier's mutex ... so ordering is always preserved.
> I think we should allow for both kinds of operation simultaneously. But
> since I got pushback on the change from John Linville when I touched his
> code, I got a bit reluctant and just did the simple variant. :)
The code I proposed does ... but it relies on the callsite supplying the
necessary context.
James
--