Date:	Mon, 23 Nov 2009 15:31:40 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Matthew Garrett <mjg59@...f.ucam.org>
Cc:	Pavel Machek <pavel@....cz>, Kay Sievers <kay.sievers@...y.org>,
	David Zeuthen <david@...ar.dk>, linux-kernel@...r.kernel.org,
	linux-hotplug@...r.kernel.org
Subject: Re: [PATCH] [RFC] Add support for uevents on block device idle
	changes

On Mon, Nov 23 2009, Matthew Garrett wrote:
> On Mon, Nov 23, 2009 at 03:17:54PM +0100, Jens Axboe wrote:
> 
> > I have to agree, doing a mod_timer() on every single IO is going to suck
> > big time. I went to great lengths to avoid doing that even for timeout
> > detection. So that's pretty much a non-starter to begin with.
> 
> It's conditional on a (default off) setting, so it's not a hit unless 
> the user requests it. But yeah, the performance hit is obviously a 
> concern. It may be that polling is the least bad way of doing this.

Even if it's off by default, doesn't mean we shouldn't make the
implementation correct or fast :-)

> > Additionally, as Bart also wrote, you are not doing this in the right
> > place. You want to do this post-merge, not for each incoming IO. Have
> > you looked at laptop mode? Looks like you are essentially re-inventing
> > that, but in a bad way.
> 
> Right, that's mostly down to my having no familiarity with the block 
> layer at all :) I can fix that up easily enough, but if a deferrable 
> timer is going to be too expensive then it'll need some rethinking 
> anyway.

Well, take a look at laptop mode. A timer per-io is probably
unavoidable, but doing it at IO completion could mean a big decrease in
timer activity as opposed to doing it for each incoming IO. And since
you are looking at when the disk is idle, it makes a lot more sense to
me to do that when we complete a request (and have no further
pending IO) rather than on incoming IO.

Your biggest performance issue here is going to be sync IO, since the
disk will go idle very briefly before being kicked into action again.

-- 
Jens Axboe

