Date:	Wed, 15 Jun 2011 10:18:58 +0200
From:	Tejun Heo <tj@...nel.org>
To:	Péter Ujfalusi <peter.ujfalusi@...com>
Cc:	Dmitry Torokhov <dmitry.torokhov@...il.com>,
	"Girdwood, Liam" <lrg@...com>, Tony Lindgren <tony@...mide.com>,
	Mark Brown <broonie@...nsource.wolfsonmicro.com>,
	Samuel Ortiz <sameo@...ux.intel.com>,
	"linux-input@...r.kernel.org" <linux-input@...r.kernel.org>,
	"linux-omap@...r.kernel.org" <linux-omap@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"alsa-devel@...a-project.org" <alsa-devel@...a-project.org>,
	"Lopez Cruz, Misael" <misael.lopez@...com>
Subject: Re: Re: Re: Re: Re: [PATCH v4 11/18] input: Add initial support for
 TWL6040 vibrator

Hello,

On Tue, Jun 14, 2011 at 01:22:45PM +0300, Péter Ujfalusi wrote:
> However, I did run a short experiment regarding latencies:
> With create_singlethread_workqueue :
> Jun 14 12:54:30 omap-gentoo kernel: [  211.269531] vibra scheduling time: 30 usec
> Jun 14 12:54:30 omap-gentoo kernel: [  211.300811] vibra scheduling time: 30 usec
> Jun 14 12:54:33 omap-gentoo kernel: [  214.419006] vibra scheduling time: 31 usec
> Jun 14 12:54:34 omap-gentoo kernel: [  214.980987] vibra scheduling time: 30 usec
> Jun 14 12:54:35 omap-gentoo kernel: [  215.762115] vibra scheduling time: 30 usec
> Jun 14 12:54:35 omap-gentoo kernel: [  215.816650] vibra scheduling time: 30 usec
> Jun 14 12:54:35 omap-gentoo kernel: [  215.871337] vibra scheduling time: 61 usec
> Jun 14 12:54:35 omap-gentoo kernel: [  215.926025] vibra scheduling time: 61 usec
> Jun 14 12:54:35 omap-gentoo kernel: [  215.980743] vibra scheduling time: 61 usec
> Jun 14 12:54:35 omap-gentoo kernel: [  216.035430] vibra scheduling time: 61 usec
> Jun 14 12:54:38 omap-gentoo kernel: [  219.425659] vibra scheduling time: 31 usec
> Jun 14 12:54:40 omap-gentoo kernel: [  220.981658] vibra scheduling time: 31 usec
> Jun 14 12:54:44 omap-gentoo kernel: [  224.692504] vibra scheduling time: 30 usec
> Jun 14 12:54:44 omap-gentoo kernel: [  225.067138] vibra scheduling time: 30 usec
> 
> With create_workqueue :
> Jun 14 12:05:00 omap-gentoo kernel: [  304.965393] vibra scheduling time: 183 usec
> Jun 14 12:05:01 omap-gentoo kernel: [  305.964996] vibra scheduling time: 61 usec
> Jun 14 12:05:03 omap-gentoo kernel: [  307.684082] vibra scheduling time: 152 usec
> Jun 14 12:05:06 omap-gentoo kernel: [  310.972778] vibra scheduling time: 30 usec
> Jun 14 12:05:08 omap-gentoo kernel: [  312.683715] vibra scheduling time: 61 usec
> Jun 14 12:05:10 omap-gentoo kernel: [  314.785675] vibra scheduling time: 183 usec
> Jun 14 12:05:15 omap-gentoo kernel: [  319.800903] vibra scheduling time: 61 usec
> Jun 14 12:05:16 omap-gentoo kernel: [  320.738403] vibra scheduling time: 30 usec
> Jun 14 12:05:16 omap-gentoo kernel: [  320.793090] vibra scheduling time: 61 usec
> Jun 14 12:05:16 omap-gentoo kernel: [  320.847778] vibra scheduling time: 61 usec
> Jun 14 12:05:16 omap-gentoo kernel: [  320.902465] vibra scheduling time: 61 usec
> Jun 14 12:05:16 omap-gentoo kernel: [  320.957153] vibra scheduling time: 61 usec
> Jun 14 12:05:16 omap-gentoo kernel: [  320.996185] vibra scheduling time: 31 usec
> 
> This is on a system where I do not have any other drivers on the I2C bus, and I have
> generated some load with this command:
> grep -r generate_load /*
> 
> So I have some CPU and IO load as well.
> 
> In the end the differences are not that big, but with create_singlethread_workqueue
> I see fewer spikes.
> 
> This is with the 3.0-rc2 kernel.
> 
> I still think that there is a place for create_singlethread_workqueue, and that
> tactile feedback needs such a thing.

So, yes, WQ_HIGHPRI would make a difference in latency, but as can be
seen above it is in the usecs range, not the msecs range, and you're
trading off batch execution and processing locality for it.
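
For reference, if HIGHPRI really were warranted, the explicit way to
ask for it would be roughly the following sketch (made-up names, not
the patch's actual code):

#include <linux/errno.h>
#include <linux/workqueue.h>

/* hypothetical example names, not from the twl6040 driver */
static struct workqueue_struct *vibra_wq;
static struct work_struct vibra_play_work;

static void vibra_play_fn(struct work_struct *work)
{
	/* program the vibrator (e.g. over I2C); may sleep here */
}

static int vibra_example_setup(void)
{
	/* WQ_HIGHPRI trades batching/locality for lower queueing latency */
	vibra_wq = alloc_workqueue("vibra", WQ_HIGHPRI, 0);
	if (!vibra_wq)
		return -ENOMEM;

	INIT_WORK(&vibra_play_work, vibra_play_fn);
	queue_work(vibra_wq, &vibra_play_work);
	return 0;
}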

> If I recall correctly, this was the reason to use create_singlethread_workqueue
> in the twl4030-vibra driver as well (there were latency issues without it).

Before cmwq, the latency induced by using the system workqueue could
easily be in the seconds range.  With cmwq, it should be in the usecs
to a few millisecs range.  If that's not enough, the use case probably
calls for a dedicated RT thread or a threaded IRQ handler.
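
A rough sketch of the threaded IRQ option (hypothetical names, just an
illustration): the hard handler only acknowledges and wakes the thread,
and the latency-sensitive work runs in the IRQ thread, which is a
SCHED_FIFO kthread by default.

#include <linux/interrupt.h>

static irqreturn_t vibra_hardirq(int irq, void *dev_id)
{
	/* only acknowledge here; defer the real work to the IRQ thread */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t vibra_irq_thread(int irq, void *dev_id)
{
	/* sleepable context; runs as an RT (SCHED_FIFO) kernel thread */
	return IRQ_HANDLED;
}

static int vibra_example_request_irq(int irq, void *dev)
{
	return request_threaded_irq(irq, vibra_hardirq, vibra_irq_thread,
				    IRQF_ONESHOT, "twl6040-vibra", dev);
}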

No human being can feel a 120 usec difference, and I can't see how
using HIGHPRI is justified here (which is what the code is doing
_accidentally_ by using create_singlethread_workqueue).
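
For comparison, just relying on the shared cmwq pool, which the numbers
above suggest is already fine, amounts to the following sketch (again,
made-up names for illustration):

#include <linux/workqueue.h>

static struct work_struct vibra_work;

static void vibra_work_fn(struct work_struct *work)
{
	/* drive the vibrator */
}

static void vibra_example_init(void)
{
	INIT_WORK(&vibra_work, vibra_work_fn);
}

static void vibra_example_play(void)
{
	/* queues on the shared system workqueue managed by cmwq */
	schedule_work(&vibra_work);
}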

Thank you.

-- 
tejun
