Date:	Mon, 30 Nov 2009 19:58:32 +0100
From:	Corrado Zoccolo <czoccolo@...il.com>
To:	Vivek Goyal <vgoyal@...hat.com>
Cc:	Jens Axboe <jens.axboe@...cle.com>,
	Linux-Kernel <linux-kernel@...r.kernel.org>,
	Jeff Moyer <jmoyer@...hat.com>, mel@....ul.ie, efault@....de
Subject: Re: [RFC,PATCH] cfq-iosched: improve async queue ramp up formula

Hi Vivek,
On Mon, Nov 30, 2009 at 6:06 PM, Vivek Goyal <vgoyal@...hat.com> wrote:
> Got a query. The assumption here is that async queues are not being
> preempted. So driving shallower queue depths at the end of the slice
> will help in terms of max latencies, and driving deeper queue depths at
> the beginning of the slice will help get more out of the disk, without
> increasing max latencies.
>
> But if we allow deeper queue depths at the beginning of the async slice,
> and the async queue is then preempted, we are back to the old problem of
> the first request taking more time to complete.
Nevertheless, the problem should be solved.
First, the deeper queue depth only starts after the first 100ms.
Moreover, we will still dispatch fewer requests than before, and the
max delay is now bounded by the time slice.
Since the async time slice is also reduced in proportion to the
competing sync processes, it will not hurt the latency seen by the
other processes.
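
To make the intuition concrete, here is a minimal sketch of such a
depth formula (the names and constants are made up for illustration;
this is not the actual RFC patch):

/*
 * Illustrative only: stay shallow for the first 100ms of the async
 * slice, then let the allowed depth scale with the fraction of the
 * slice remaining, so requests dispatched late in the slice cannot
 * pile up past the slice boundary.
 */
static unsigned int async_max_depth(unsigned long slice_start,
				    unsigned long slice_end,
				    unsigned int quantum)
{
	unsigned long total = slice_end - slice_start;
	unsigned long left;

	if (time_before(jiffies, slice_start + msecs_to_jiffies(100)))
		return 1;
	if (!total)
		return 1;

	left = time_before(jiffies, slice_end) ? slice_end - jiffies : 0;
	return max(1U, (unsigned int)(quantum * left / total));
}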

> But I guess that problem will be less severe this time, as we will idle
> for the sync-noidle workload. So ideally we will experience the higher
> delay only for the first request and not on subsequent requests.
> Previously, we did not enable idling on random seeky queues, so after
> one dispatch from the queue we would run the async queue again, and
> there was a high delay after every read request.
Yes.
> This is assuming that, upon preemption, we start running the
> sync-noidle workload and do not continue to dispatch from the async
> workload.
>
> Corrado, you can clear the air here. What's the policy w.r.t.
> preemption of async queues and the workload slice?
Ok. The workload slice works as follows (this applies not only to the
async workload, but to all workloads).
A new queue slice within the workload can be started only if the
workload slice has not yet expired and there is a ready queue.
Once a queue is active, it will still finish its complete slice even
if the workload slice expires meanwhile, unless it runs out of
requests and its idle timer fires.
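
In pseudo-C, the two rules read roughly like this (all the names
below are invented for illustration; this is not the actual
cfq-iosched code):

/* a new queue slice may start only while the workload slice is
 * still running and some queue in the workload is ready */
static bool may_start_queue_slice(struct wl_state *wl)
{
	return !wl_slice_expired(wl) && wl_has_ready_queue(wl);
}

/* an active queue keeps its full queue slice even if the workload
 * slice expires meanwhile; it only stops early when it has no more
 * requests and its idle timer fires */
static bool must_stop_active_queue(struct queue_state *q)
{
	return queue_slice_expired(q) ||
	       (!queue_has_requests(q) && idle_timer_fired(q));
}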

Now, preemption of sync vs. async works in two cases (sketched below):
* if the sync request arrives while the async workload slice has not
expired, we just change the rb_key of the preempting queue (and
workload), ensuring that the next selected workload will be the
preempting one; the actual preemption is delayed until the workload
slice ends
* if the sync request arrives after the workload slice has expired,
but the async queue still has some remaining slice, it preempts the
async queue immediately.
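
The same two cases in sketch form (again with invented names, not the
real code):

static void sync_arrives(struct sched_state *sd,
			 struct queue_state *sync_q)
{
	if (!wl_slice_expired(sd->async_wl)) {
		/* case 1: only reposition the preempting queue (and
		 * its workload) by rewriting its rb_key, so it is
		 * picked next; the switch waits for the async
		 * workload slice to end */
		requeue_at_front(sync_q);
	} else {
		/* case 2: the workload slice is gone and only some
		 * residual queue slice remains: preempt right away */
		preempt_now(sd, sync_q);
	}
}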

Basically, we guarantee that the async workload gets to do some work,
even if minimal (the workload slice can become very small, since the
async workload usually contains only 1 queue, and its slice is scaled
against all the queues in the other workloads).
This has been shown to improve the situation when memory pressure is
high and we need writeback to free some of it.
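
As a rough worked example of that scaling (the exact cfq formula and
constants may differ): with 1 async queue competing against 9 busy
sync queues, the async workload slice comes out around

	wl_slice = base_slice * async_queues / total_busy_queues
	         = 100ms * 1 / 10
	         = 10ms

small, but never zero, which is what lets writeback keep making
progress under memory pressure.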

Thanks,
Corrado
