Message-Id: <200712100120.22657.phillips@phunq.net>
Date:	Mon, 10 Dec 2007 01:20:21 -0800
From:	Daniel Phillips <phillips@...nq.net>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	davidsen@....com, linux-kernel@...r.kernel.org,
	peterz@...radead.org
Subject: Re: [RFC] [PATCH] A clean approach to writeout throttling

Hi Andrew,

Unfortunately, I agreed with your suggestion too hastily.  Not only 
would it be complex to implement, it does not work.  It took me several 
days to put my finger on exactly why.  Here it is in a nutshell: 
resources may be consumed _after_ the gatekeeper makes its "go, no go" 
throttling decision.  To illustrate, throw 10,000 bios simultaneously 
at a block stack that is supposed to allow only about 1,000 in flight 
at a time.  If the block stack allocates memory somewhat late in its 
servicing scheme (for example, when it sends a network message), then 
it is possible that no actual resource consumption will have taken 
place before all 10,000 bios are allowed past the gatekeeper, and 
deadlock is sure to occur sooner or later.
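
To make the failure mode concrete, here is a toy userspace model (not 
kernel code; the names and numbers are made up for illustration) of a 
gate that decides based on measured consumption while the stack only 
allocates later in its servicing path:

#include <stdio.h>

#define LIMIT   1000            /* intended in-flight cap             */
#define NBIOS   10000           /* bios thrown at the stack at once   */
#define PER_BIO 1               /* units each bio eventually consumes */

static long measured;           /* what the gatekeeper can observe    */

static int gatekeeper_admit(void)
{
        /* "go, no go" based on consumption observed so far */
        return measured < LIMIT;
}

int main(void)
{
        long admitted = 0;

        /* Admission pass: nothing has been allocated yet. */
        for (int i = 0; i < NBIOS; i++)
                if (gatekeeper_admit())
                        admitted++;

        /* Servicing pass: consumption happens only now. */
        measured += admitted * PER_BIO;

        printf("admitted %ld bios, eventual consumption %ld (limit %d)\n",
               admitted, measured, LIMIT);
        return 0;
}

Every one of the 10,000 bios sails past the gate because the 
measurement is still zero at decision time, so the eventual 
consumption ends up at ten times the intended limit.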

In general, we must throttle against the maximum requirement of 
in-flight bios rather than against measured consumption.  This 
achieves the invariant I have touted, namely that memory consumption 
on the block writeout path must be bounded.  We could therefore 
possibly use your suggestion or something resembling it to implement a 
debug check that the programmer did in fact do their bounds arithmetic 
correctly, but it is not useful for enforcing the bound itself.
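
By way of contrast, here is a sketch of throttling against the maximum 
requirement: each bio charges its worst-case unit count against a 
fixed reserve before it may proceed, and credits it back at 
completion.  Again this is only a userspace model of the idea (the 
names and the pthread plumbing are illustrative, not the patch posted 
at the head of the thread):

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  room = PTHREAD_COND_INITIALIZER;
static long reserve = 1000;  /* total units the writeout path may pin */

/* Block until 'units' (the bio's worst-case need) can be charged. */
void throttle_charge(long units)
{
        pthread_mutex_lock(&lock);
        while (reserve < units)
                pthread_cond_wait(&room, &lock);
        reserve -= units;
        pthread_mutex_unlock(&lock);
}

/* Completion path: return the charge and wake any waiters. */
void throttle_release(long units)
{
        pthread_mutex_lock(&lock);
        reserve += units;
        pthread_cond_broadcast(&room);
        pthread_mutex_unlock(&lock);
}

Because the charge is the worst case, computed before submission, 
actual consumption can never exceed the reserve no matter how late in 
the servicing path the allocations actually happen.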

In case that coffin needs more nails in it, consider that we would not 
only need to account page allocations, but frees as well.  So what 
tells us that a page has returned to the reserve pool?  Oops, tough 
one.  The page may have been returned to a slab and thus not actually 
freed, though it remains available for satisfying new bio transactions.  
Because of such caching, your algorithm would quickly lose track of 
available resources and grind to a halt.

Never mind that keeping track of page frees is a nasty problem in 
itself.  They can occur in interrupt context, so forget the current-> 
idea.  Even keeping track of page allocations for bio transactions in 
normal context will be a mess, and that is the easy part.  I can just 
imagine the code attempting to implement this approach accreting into 
a monster that gets confusingly close to working without ever actually 
getting there.

We do have a simple, elegant solution posted at the head of this thread, 
which is known to work.

Regards,

Daniel
