Message-ID: <4e5e476b0911260547r33424098v456ed23203a61dd@mail.gmail.com>
Date:	Thu, 26 Nov 2009 14:47:10 +0100
From:	Corrado Zoccolo <czoccolo@...il.com>
To:	Mel Gorman <mel@....ul.ie>
Cc:	Jens Axboe <jens.axboe@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Frans Pop <elendil@...net.nl>, Jiri Kosina <jkosina@...e.cz>,
	Sven Geggus <lists@...hsschwanzdomain.de>,
	Karol Lewandowski <karol.k.lewandowski@...il.com>,
	Tobias Oetiker <tobi@...iker.ch>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Rik van Riel <riel@...hat.com>,
	Christoph Lameter <cl@...ux-foundation.org>,
	Stephan von Krawczynski <skraw@...net.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [PATCH-RFC] cfq: Disable low_latency by default for 2.6.32

On Thu, Nov 26, 2009 at 1:19 PM, Mel Gorman <mel@....ul.ie> wrote:
> (cc'ing the people from the page allocator failure thread as this might be
> relevant to some of their problems)
>
> I know this is very last minute but I believe we should consider disabling
> the "low_latency" tunable for block devices by default for 2.6.32.  There was
> evidence that low_latency was a problem last week for page allocation failure
> reports, but the reproduction case was unusual and involved high-order atomic
> allocations in low-memory conditions. It took another few days to accurately
> show the problem for more normal workloads, and it's a bit more widespread
> than just allocation failures.
>
> Basically, low_latency looks great as long as you have plenty of memory,
> but in low-memory situations it appears to cause problems that manifest
> as reduced performance, desktop stalls and, in some cases, page allocation
> failures. I think most kernel developers are not seeing the problem as they
> tend to test on beefier machines and, for the most part, without hitting
> swap or low-memory situations. When they do hit low-memory situations, it
> tends to be in stress tests where stalls and low performance are expected.

The low_latency tunable controls several policies inside cfq.
The one that can affect memory reclaim is:
        /*
         * Async queues must wait a bit before being allowed dispatch.
         * We also ramp up the dispatch depth gradually for async IO,
         * based on the last sync IO we serviced
         */
        if (!cfq_cfqq_sync(cfqq) && cfqd->cfq_latency) {
                unsigned long last_sync = jiffies - cfqd->last_end_sync_rq;
                unsigned int depth;

                depth = last_sync / cfqd->cfq_slice[1];
                if (!depth && !cfqq->dispatched)
                        depth = 1;
                if (depth < max_dispatch)
                        max_dispatch = depth;
        }

Here the async queues' maximum dispatch depth is limited to 1 for up to
200 ms after a sync I/O completes: with the default 100 ms sync slice
(cfq_slice[1]), depth = last_sync / cfq_slice[1] only reaches 2 once
200 ms have elapsed.
Note: dirty page writeback goes through an async queue, so it is
penalized by this.
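
For illustration, here is a small user-space sketch of that ramp. It
assumes HZ=1000 and the default 100 ms sync slice; the constants and
the max_dispatch value are mine, not taken from a live system:

#include <stdio.h>

int main(void)
{
	const unsigned long cfq_slice_sync = 100; /* jiffies; HZ=1000, default sync slice */
	const unsigned int hw_max_dispatch = 31;  /* e.g. an NCQ-capable SATA disk */
	unsigned long last_sync;                  /* jiffies since the last sync completion */

	for (last_sync = 0; last_sync <= 400; last_sync += 50) {
		unsigned int depth = last_sync / cfq_slice_sync;

		if (!depth)	/* the !cfqq->dispatched case: allow a single request */
			depth = 1;
		if (depth > hw_max_dispatch)
			depth = hw_max_dispatch;
		printf("%3lu ms after sync I/O -> async depth %u\n",
		       last_sync, depth);
	}
	return 0;
}

The output shows the depth staying at 1 until 200 ms have passed, then
growing by one step per 100 ms.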

This can affect both low-end and high-end hardware. My non-NCQ SATA
disk can handle a depth of 2 when writing; NCQ SATA disks can handle
depths up to 31. Limiting the depth to 1 can therefore cause a write
performance drop, which in turn slows down dirty page reclaim and can
cause allocation failures.

It would be good to re-test the OOM conditions with that code commented out.
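
A coarser runtime check (it disables all of the low_latency policies,
not just the dispatch-depth limit) is to flip the sysfs tunable instead
of recompiling. A minimal sketch, assuming the disk under test is sda
and is using cfq:

#include <stdio.h>

int main(void)
{
	/* per-device cfq tunable; adjust "sda" for the device under test */
	const char *path = "/sys/block/sda/queue/iosched/low_latency";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fputs("0\n", f);	/* 0 disables low_latency, 1 re-enables it */
	return fclose(f) ? 1 : 0;
}

(Equivalent to echo 0 > /sys/block/sda/queue/iosched/low_latency from a
root shell.)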

>
> To show the problem, I used an x86-64 machine booted with 512MB of
> memory. This is a small amount of RAM but the bug reports related to page
> allocation failures were on smallish machines and the disks in the system
> are not very high-performance.
>
> I used three tests. The first was sysbench on postgres running an IO-heavy
> test against a large database with 10,000,000 rows. The second was IOZone
> running most of the automatic tests with a record length of 4KB and the
> last was a simulated launch of gitk with a music player running in the
> background to act as a desktop-like scenario. That final test was similar
> to the one described at http://lwn.net/Articles/362184/ except that
> dm-crypt was not used as it has its own problems.

low_latency was tested in other scenarios:
http://lkml.indiana.edu/hypermail/linux/kernel/0910.0/01410.html
http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-11/msg04855.html
where it improved both actual and perceived performance, so disabling
it completely may not be a good idea.

Thanks,
Corrado
--
