Date:	Tue, 18 Nov 2008 09:34:16 -0500 (EST)
From:	Mikulas Patocka <mpatocka@...hat.com>
To:	Alan Cox <alan@...rguk.ukuu.org.uk>
cc:	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, mingo@...e.hu, rml@...h9.net,
	Alasdair G Kergon <agk@...hat.com>,
	Milan Broz <mbroz@...hat.com>
Subject: Re: Active waiting with yield()

On Mon, 17 Nov 2008, Alan Cox wrote:

> > --- so if the driver processes more than 100000 requests between reboots, 
> > wait queues actually slow things down.
> 
> Versus power consumption and virtualisation considerations. Plus your
> numbers are wrong. You seem terribly keen to ignore the fact that the
> true cost is a predicted branch and usually a predicted branch of a
> cached variable and you'll only touch the wait queue in rare cases.

You will always touch the wait queue when finishing the last pending 
request --- just to find out that there is no one waiting on it.

And besides the cache line, there is coding and testing overhead with wait 
queues. If the programmer forgets to decrement the number of pending 
requests, he finds out pretty quickly (the driver won't unload). If he 
forgets to wake up the queue, the code can run with this bug for a long 
time without anyone noticing it --- unless someone tries to unload the 
driver at a specific point --- I have seen this happen too.
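
To make this concrete, the pattern I am talking about is roughly the 
following --- a sketch only, with made-up names, and with the wait queue 
initialized by init_waitqueue_head() somewhere in the probe path:

struct my_dev {
	atomic_t pending;		/* outstanding requests */
	wait_queue_head_t unload_wait;	/* woken when pending drops to 0 */
};

static void my_end_request(struct my_dev *d)
{
	if (atomic_dec_and_test(&d->pending))
		wake_up(&d->unload_wait);	/* the wake_up that is easy to forget */
}

static void my_unload(struct my_dev *d)
{
	wait_event(d->unload_wait, atomic_read(&d->pending) == 0);
}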

> I'd also note as an aside modern drivers usually run off krefs so
> destruction and thus closedown is refcounted and comes off the last kref
> destruct.
> 
> Alan
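
For reference, my reading of the kref pattern you mention --- again only a 
sketch with made-up names:

struct my_obj {
	struct kref ref;
	/* ... */
};

static void my_obj_release(struct kref *kref)
{
	struct my_obj *o = container_of(kref, struct my_obj, ref);

	kfree(o);			/* final teardown runs from the last kref_put() */
}

/* kref_init() at creation, kref_get() per outstanding reference,
 * kref_put(&o->ref, my_obj_release) when each reference is dropped. */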

So what are the reasons why you (and others) are against active waiting? 
All you are saying is that my reasons are wrong, but you haven't given a 
single example where active waiting causes trouble. If there is a workload 
where waiting 1ms-to-10ms with mdelay(1) on driver unload would cause 
discomfort to the user, describe it.
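
Concretely, all I am proposing for the unload path is something like this 
(same made-up names as the sketch above; mdelay() comes from 
<linux/delay.h>):

static void my_unload(struct my_dev *d)
{
	while (atomic_read(&d->pending))
		mdelay(1);	/* busy-wait; bounded by the few in-flight requests */
}

No wake_up() is needed anywhere in the request completion path.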

Mikulas
