Message-ID: <x491tub65t9.fsf@segfault.boston.devel.redhat.com>
Date:	Thu, 26 Jun 2014 11:36:50 -0400
From:	Jeff Moyer <jmoyer@...hat.com>
To:	Mel Gorman <mgorman@...e.de>
Cc:	Linux Kernel <linux-kernel@...r.kernel.org>,
	Linux-MM <linux-mm@...ck.org>,
	Linux-FSDevel <linux-fsdevel@...r.kernel.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Jens Axboe <axboe@...nel.dk>,
	Dave Chinner <david@...morbit.com>
Subject: Re: [PATCH 6/6] cfq: Increase default value of target_latency

Mel Gorman <mgorman@...e.de> writes:

> The existing CFQ default target_latency results in very poor performance
> for larger numbers of threads doing sequential reads. While this can be
> easily described as a tuning problem for users, it is one that is tricky
> to detect. This patch updates the default to benefit smaller machines.
> Dave Chinner points out that it is dangerous to assume that people know
> how to tune their IO scheduler. Jeff Moyer asked what workloads even
> care about threaded readers, but it's reasonable to assume that file,
> media, database and multi-user servers all see large sequential reads
> from multiple sources at the same time.

Right, and I guess I hadn't considered that case, as I assumed folks
used more than one spinning disk for such workloads.

My main reservation about this change is that you've only provided
numbers for one benchmark.  Before bumping the default target_latency,
ideally we'd know how it affects other workloads.  However, I'm having
a hard time justifying putting any time into this, for a couple of
reasons:
1) blk-mq pretty much does away with the I/O scheduler, and that is the
   future
2) there is work in progress to convert cfq into bfq, and that will
   essentially make any effort put into this irrelevant (so it might be
   interesting to test your workload with bfq)
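
For anyone who wants to poke at this in the meantime, target_latency is
exposed per-device through sysfs, so no kernel rebuild is needed to try
other values.  A minimal sketch in C, assuming /dev/sda sits on cfq and
the stock 300ms default (the device name and the doubling are just for
illustration, untested):

/*
 * Read cfq's target_latency for one disk and double it.
 * Assumes /dev/sda is managed by cfq; needs root to write.
 * The shell equivalent would be:
 *   echo 600 > /sys/block/sda/queue/iosched/target_latency
 */
#include <stdio.h>

int main(void)
{
        const char *path = "/sys/block/sda/queue/iosched/target_latency";
        FILE *f = fopen(path, "r");
        int latency_ms;

        if (!f || fscanf(f, "%d", &latency_ms) != 1) {
                perror("read target_latency");
                return 1;
        }
        fclose(f);
        printf("current target_latency: %d ms\n", latency_ms);

        f = fopen(path, "w");
        if (!f) {
                perror("write target_latency");
                return 1;
        }
        fprintf(f, "%d\n", latency_ms * 2);     /* e.g. 300 to 600 */
        fclose(f);
        return 0;
}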

Cheers,
Jeff