Message-ID: <1276266352.2862.70.camel@mulgrave.site>
Date: Fri, 11 Jun 2010 09:25:52 -0500
From: James Bottomley <James.Bottomley@...e.de>
To: Florian Mickler <florian@...kler.org>
Cc: Jonathan Corbet <corbet@....net>,
Frederic Weisbecker <fweisbec@...il.com>,
markgross@...gnar.org, linville@...driver.com,
linux-kernel@...r.kernel.org,
pm list <linux-pm@...ts.linux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [linux-pm] [PATCH v4] pm_qos: make update_request non blocking
On Thu, 2010-06-10 at 16:41 +0200, Florian Mickler wrote:
> > > So the notified value is always the latest or there is another
> > > notification underway.
> >
> > Well, no ... it's a race, and like all good races the winner is
> > non-deterministic.
>
> Can you point out where I'm wrong?
>
> U1. update_request gets called
> U2. new extreme value gets calculated under spinlock
> U3. notify gets queued if its WORK_PENDING_BIT is not set.
>
> run_workqueue() does the following:
> R1. clears the WORK_PENDING_BIT
> R2. calls update_notify()
> R3. reads the current extreme value
> R4. notification gets called with that value
>
>
> If another update_request() reaches schedule_work() before
> run_workqueue() has cleared the WORK_PENDING_BIT, the work will not
> be requeued, but R3 has not yet executed either, so the notifiers
> will still get the latest value.
So the race now only causes lost older notifications ... as long as the
consumers are OK with that (it is an API change), this should work.
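
To make that concrete, something like the following (untested; the
struct and names are illustrative, and the aggregation and notifier
chain are elided) captures the scheme Florian describes:

#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct qos_object {
	spinlock_t lock;
	s32 extreme_value;		/* current aggregate target */
	struct work_struct notify_work;
};

static void update_notify(struct work_struct *work)
{
	struct qos_object *o = container_of(work, struct qos_object,
					    notify_work);
	unsigned long flags;
	s32 value;

	/* R1: the workqueue core cleared WORK_STRUCT_PENDING before
	 * calling us, so a concurrent schedule_work() requeues the
	 * work instead of being dropped. */
	spin_lock_irqsave(&o->lock, flags);
	value = o->extreme_value;	/* R3: read the latest value */
	spin_unlock_irqrestore(&o->lock, flags);

	/* R4: run the notifier chain with 'value' (elided). */
}

void qos_update_request(struct qos_object *o, s32 new_value)
{
	unsigned long flags;

	spin_lock_irqsave(&o->lock, flags);	/* U2 */
	o->extreme_value = new_value;	/* real code recomputes the extreme */
	spin_unlock_irqrestore(&o->lock, flags);

	/* U3: no-op if WORK_PENDING_BIT is already set; but the
	 * pending handler has not reached R3 yet, so it will pick up
	 * the value we just wrote. */
	schedule_work(&o->notify_work);
}
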
You're still not taking advantage of the user context passed in, though,
so this needlessly delays notifications in that case.
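
Building on the sketch above, something like the below (again untested;
the can_sleep flag and qos_notifiers head are hypothetical, not part of
the posted patch) would notify synchronously when the caller's context
allows it and only fall back to the workqueue otherwise:

#include <linux/notifier.h>

static BLOCKING_NOTIFIER_HEAD(qos_notifiers);	/* illustrative */

void qos_update_request_ctx(struct qos_object *o, s32 new_value,
			    bool can_sleep)
{
	unsigned long flags;

	spin_lock_irqsave(&o->lock, flags);
	o->extreme_value = new_value;	/* aggregation elided */
	spin_unlock_irqrestore(&o->lock, flags);

	if (can_sleep)
		/* caller may sleep: notify now, no workqueue latency */
		blocking_notifier_call_chain(&qos_notifiers,
					     (unsigned long)new_value, NULL);
	else
		schedule_work(&o->notify_work);
}
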
Actually, pm_qos_remove now needs a flush_scheduled_work(), since you
don't want to return until the list is clear (the next action may be to
free the object).
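
i.e. roughly (untested, same illustrative names as above):

void qos_remove_request(struct qos_object *o)
{
	unsigned long flags;

	spin_lock_irqsave(&o->lock, flags);
	/* drop this request, recompute the extreme value (elided) */
	spin_unlock_irqrestore(&o->lock, flags);

	/* Don't return while update_notify() may still be queued or
	 * running: the caller's next action may be to free the
	 * object, and the work must not touch freed memory. */
	flush_scheduled_work();
}
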
James