Date:	Sat, 31 May 2014 00:01:21 +0200
From:	Paolo Valente <paolo.valente@...more.it>
To:	Tejun Heo <tj@...nel.org>
Cc:	Jens Axboe <axboe@...nel.dk>, Li Zefan <lizefan@...wei.com>,
	Fabio Checconi <fchecconi@...il.com>,
	Arianna Avanzini <avanzini.arianna@...il.com>,
	linux-kernel@...r.kernel.org,
	containers@...ts.linux-foundation.org, cgroups@...r.kernel.org
Subject: Re: [PATCH RFC - TAKE TWO - 11/12] block, bfq: boost the throughput on NCQ-capable flash-based devices


On 30 May 2014, at 17:46, Tejun Heo <tj@...nel.org> wrote:

> On Thu, May 29, 2014 at 11:05:42AM +0200, Paolo Valente wrote:
>> This patch boosts the throughput on NCQ-capable flash-based devices,
>> while still preserving latency guarantees for interactive and soft
>> real-time applications. The throughput is boosted by just not idling
>> the device when the in-service queue remains empty, even if the queue
>> is sync and has a non-null idle window. This helps to keep the drive's
>> internal queue full, which is necessary to achieve maximum
>> performance. This solution to boost the throughput is a port of
>> commits a68bbdd and f7d7b7a for CFQ.
>> 
>> As already highlighted in patch 10, allowing the device to prefetch
>> and internally reorder requests trivially causes loss of control on
>> the request service order, and hence on service guarantees.
>> Fortunately, as discussed in detail in the comments to the function
>> bfq_bfqq_must_not_expire(), if every process has to receive the same
>> fraction of the throughput, then the service order enforced by the
>> internal scheduler of a flash-based device is relatively close to that
>> enforced by BFQ. In particular, it is close enough to let service
>> guarantees be substantially preserved.
>> 
>> Things change in an asymmetric scenario, i.e., if not every process
>> has to receive the same fraction of the throughput. In this case, to
>> guarantee the desired throughput distribution, the device must be
>> prevented from prefetching requests. This is exactly what this patch
>> does in asymmetric scenarios.
> 
> Does it even make sense to use this type of heavy iosched on ssds?
> It's highly likely that ssds will soon be served through blk-mq,
> bypassing all these.  I don't feel too enthused about adding code to
> ioscheds to support ssds.  A much better approach would be to just
> default to deadline for them anyway.
> 

This was basically my opinion too, before I started running tests with SSDs as well. As you can see from, e.g., Figure 8 in
http://algogroup.unimore.it/people/paolo/disk_sched/extra_results.php
or Figure 9 in
http://algogroup.unimore.it/people/paolo/disk_sched/results.php
with deadline, as with NOOP, and even worse with CFQ, start-up times become unbearably high while some files are being read sequentially. I have measured even higher latencies on Intel SSDs.

One of the main reasons is that these schedulers allow the drive to queue more than one request at a time. Adding some of the low-latency heuristics of bfq to deadline might help, but that would need to be investigated.
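
To make the idling rule from the patch description above more concrete, here is a minimal sketch of the decision it applies. This is illustrative only, not the actual BFQ code; all type and function names below are placeholders introduced for this example.

#include <stdbool.h>

/*
 * Illustrative sketch only (not the actual BFQ code): whether to keep
 * the device idle when the in-service queue becomes empty.
 */
struct queue_state {
	bool sync;             /* the queue issues synchronous requests       */
	bool has_idle_window;  /* the queue has earned a non-null idle window */
};

struct device_state {
	bool ncq_flash;        /* NCQ-capable flash-based device              */
	bool symmetric;        /* every process must get the same share       */
};

static bool keep_device_idle(const struct device_state *dev,
                             const struct queue_state *q)
{
	/* Idling only ever makes sense for sync queues with an idle window. */
	if (!q->sync || !q->has_idle_window)
		return false;

	/*
	 * On NCQ flash, idling costs throughput: keep the drive's internal
	 * queue full instead, unless the scenario is asymmetric, in which
	 * case idling is still needed to keep the device from prefetching
	 * and reordering freely, and hence to preserve the desired
	 * throughput distribution.
	 */
	if (dev->ncq_flash)
		return !dev->symmetric;

	/* On rotational or non-NCQ devices, idle as before. */
	return true;
}

Roughly, on NCQ flash the device is idled only when that is needed to enforce an asymmetric throughput distribution; otherwise the internal queue is kept full.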

Paolo


> Thanks.
> 
> -- 
> tejun


--
Paolo Valente                                                 
Algogroup
Dipartimento di Fisica, Informatica e Matematica		
Via Campi, 213/B
41125 Modena - Italy        				  
homepage:  http://algogroup.unimore.it/people/paolo/
