Message-ID: <4476185.bf2jyafmHt@barack>
Date: Tue, 14 Jun 2011 10:51:35 +0300
From: Péter Ujfalusi <peter.ujfalusi@...com>
To: Tejun Heo <tj@...nel.org>
CC: Dmitry Torokhov <dmitry.torokhov@...il.com>,
"Girdwood, Liam" <lrg@...com>, Tony Lindgren <tony@...mide.com>,
Mark Brown <broonie@...nsource.wolfsonmicro.com>,
Samuel Ortiz <sameo@...ux.intel.com>,
"linux-input@...r.kernel.org" <linux-input@...r.kernel.org>,
"linux-omap@...r.kernel.org" <linux-omap@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"alsa-devel@...a-project.org" <alsa-devel@...a-project.org>,
"Lopez Cruz, Misael" <misael.lopez@...com>
Subject: Re: [PATCH v4 11/18] input: Add initial support for TWL6040 vibrator
On Tuesday 14 June 2011 09:31:30 Tejun Heo wrote:
> Thanks for the explanation. I have a couple more questions.
>
> * While transferring data from I2C, I suppose the work item is fully
> occupying the CPU?
Not exactly, on OMAP platforms at least. We do not have busy looping in the
low-level driver (we wait with wait_for_completion_timeout for the transfer to
be done), so scheduling on the same CPU is possible.
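For reference, the wait in the low-level driver follows roughly this pattern
(a simplified sketch, not the actual i2c-omap.c code; the device struct,
completion field and timeout value are illustrative):

	/* Illustrative sketch: sleep until the transfer IRQ handler
	 * calls complete(), instead of busy-polling a status register. */
	unsigned long left;

	left = wait_for_completion_timeout(&dev->cmd_complete,
					   msecs_to_jiffies(1000));
	if (left == 0)
		return -ETIMEDOUT;	/* transfer did not finish in time */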
> If so, how long delay are we talking about?
> Millisecs?
It is hard to predict, but it can be a few tens of ms for one device. If we
have several devices on the same bus (which we tend to have) and they want to
read/write something at the same time, we can see hundred(s) of ms in total.
It is rare and hard to reproduce, but it does happen for sure.
> * You said that if one task is accessing the I2C bus, the other would
> wait even if scheduled on a different CPU. Is access to the I2C bus
> protected with a spinlock?
At the bottom it is using rt_mutex_lock/unlock to protect the bus.
And yes, the others need to wait until the ongoing transfer has finished.
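The serialisation happens in the I2C core; stripped down, it looks like this
(simplified from i2c_transfer(), error handling and the atomic/trylock path
omitted):

	/* Sketch: the adapter-wide rt_mutex makes concurrent transfers
	 * from different tasks queue up behind each other. */
	rt_mutex_lock(&adap->bus_lock);
	ret = adap->algo->master_xfer(adap, msgs, num);
	rt_mutex_unlock(&adap->bus_lock);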
> Also, as it's currently implemented, single-threaded wqs effectively
> bypass concurrency level control. This is an implementation detail
> which may change in the future, so even if you're seeing lower latency
> by using a separate single-threaded wq, it's an accident, and if you
> require lower latency you should be expressing the requirement
> explicitly.
I see. Do you have a suggestion as to which would be best for this kind of
scenario?
In most cases the global wq would be OK for this, but from time to time we
face sudden latency spikes, which make the user experience really bad.
Currently, with the singlethreaded wq, we can avoid these spikes.
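Would something along these lines be the way to express the requirement
explicitly (assuming WQ_HIGHPRI is the mechanism you have in mind; the queue
name and work item here are just illustrative)?

	/* Hypothetical: a dedicated high-priority workqueue for the
	 * vibrator work instead of a singlethreaded one. */
	struct workqueue_struct *wq;

	wq = alloc_workqueue("twl6040-vibra", WQ_HIGHPRI, 0);
	if (!wq)
		return -ENOMEM;
	queue_work(wq, &vibra_work);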
Thank you,
Péter