Message-ID: <20110614081836.GG8141@htj.dyndns.org>
Date:	Tue, 14 Jun 2011 10:18:36 +0200
From:	Tejun Heo <tj@...nel.org>
To:	Péter Ujfalusi <peter.ujfalusi@...com>
Cc:	Dmitry Torokhov <dmitry.torokhov@...il.com>,
	"Girdwood, Liam" <lrg@...com>, Tony Lindgren <tony@...mide.com>,
	Mark Brown <broonie@...nsource.wolfsonmicro.com>,
	Samuel Ortiz <sameo@...ux.intel.com>,
	"linux-input@...r.kernel.org" <linux-input@...r.kernel.org>,
	"linux-omap@...r.kernel.org" <linux-omap@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"alsa-devel@...a-project.org" <alsa-devel@...a-project.org>,
	"Lopez Cruz, Misael" <misael.lopez@...com>
Subject: Re: Re: Re: Re: [PATCH v4 11/18] input: Add initial support for
 TWL6040 vibrator

Hello,

On Tue, Jun 14, 2011 at 10:51:35AM +0300, Péter Ujfalusi wrote:
> On Tuesday 14 June 2011 09:31:30 Tejun Heo wrote:
> > Thanks for the explanation.  I have a couple more questions.
> > 
> > * While transferring data from I2C, I suppose the work item is fully
> >   occupying the CPU?
> 
> Not exactly, on OMAP platforms at least. We do not busy-loop in the 
> low-level driver (we wait with wait_for_completion_timeout for the transfer 
> to finish), so scheduling on the same CPU is possible.
> 
> >   If so, how long delay are we talking about?
> >   Millisecs?
> 
> It is hard to predict, but it can be a few tens of ms for one device. If we 
> have several devices on the same bus (which we tend to have) and they want to 
> read/write something at the same time, we can see hundred(s) of ms in 
> total - it is rare and hard to reproduce, but it does happen for sure.
>  
> > * You said that if one task is accessing the I2C bus, the other would
> >   wait even if scheduled on a different CPU.  Is access to the I2C bus
> >   protected with a spinlock?
> 
> At the bottom it is using rt_mutex_lock/unlock to protect the bus.
> And yes, the others need to wait until the ongoing transfer has finished.

I see, so IIUC,

* If it's using a mutex and not holding the CPU for the whole duration,
  you shouldn't need to do anything special about latency for other work
  items.  Workqueue code will start executing other work items as soon
  as the I2C work item goes to sleep.

* If the I2C work item is burning CPU cycles for the whole duration,
  which may stretch to tens or a few hundred millisecs, then (1) it's
  doing something quite wrong, and (2) it should be marked
  WQ_CPU_INTENSIVE.
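For the second case, a kernel-side fragment (not from the actual driver, and not standalone-runnable; "my_i2c_wq" and xfer_work are illustrative names) shows what marking the work WQ_CPU_INTENSIVE would look like: work queued there is exempted from concurrency management, so a CPU-bound item does not stall other work items bound to the same CPU.

```c
/* Sketch only: queue a CPU-bound work item on a WQ_CPU_INTENSIVE
 * workqueue so it isn't counted as a running work item by the
 * workqueue concurrency manager. */
struct workqueue_struct *wq;

wq = alloc_workqueue("my_i2c_wq", WQ_CPU_INTENSIVE, 0);
if (!wq)
	return -ENOMEM;

queue_work(wq, &xfer_work);	/* xfer_work: the CPU-burning work item */
```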

So, if something needs to be modified, it's the I2C stuff, not the
vibrator driver.  If the I2C stuff isn't doing something wonky, there
shouldn't be a latency problem to begin with.

Thank you.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
