Message-ID: <20080129181915.GD15220@kernel.dk>
Date:	Tue, 29 Jan 2008 19:19:15 +0100
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Nikanth Karthikesan <knikanth@...e.de>
Cc:	linux-kernel@...r.kernel.org, nickpiggin@...oo.com.au
Subject: Re: [PATCH] [RFC] block/elevator: change nr_requests handling

On Tue, Jan 29 2008, Nikanth Karthikesan wrote:
> The /sys/block/whateverdisk/queue/nr_requests is used to limit the
> number of requests allocated per queue. There can be at most
> nr_requests read requests and nr_requests write requests allocated.
> 
> But this can lead to starvation when there are many tasks doing heavy
> IO. And the ioc_batching code lets those who once failed to allocate
> a request allocate up to 150% of nr_requests in the next round
> (sketched below). This just makes the limit a little higher when
> there are too many tasks doing heavy IO. IO schedulers have no
> control over which task gets the requests. In such situations even
> ionice cannot help, as CFQ cannot serve a task which cannot allocate
> any requests. Setting nr_requests to a higher value effectively
> disables this queue depth check altogether.
> 
> This patch defers the nr_requests check until dispatching the
> requests instead of limiting while creating them. More than
> nr_requests may be allocated, but only up to nr_requests, chosen by
> the io scheduler, will be dispatched at a time. This lets the io
> scheduler guarantee some sort of interactivity even when there are
> many batchers, if it wants to.
> 
> This patch removes the ioc_batching code. Also, a scheduler stops
> dispatching when it chooses a read request while the read queue is
> full, and vice versa. On top of this patch, the individual schedulers
> can be changed to take advantage of knowing the number of read and
> write requests that can be dispatched, or at least to dispatch until
> both the read and write queues are full.
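
(For context, the 150% above comes from the allocation path in
ll_rw_blk.c. Roughly, and simplified from memory with the surrounding
variables elided, get_request() does:

	struct request_list *rl = &q->rq;

	/* queue full: only batchers and ELV_MQUEUE_MUST get through */
	if (rl->count[rw] + 1 >= q->nr_requests) {
		ioc = current_io_context(GFP_ATOMIC, q->node);
		if (may_queue != ELV_MQUEUE_MUST && !ioc_batching(q, ioc))
			goto out;	/* caller sleeps in get_request_wait() */
	}

	/* hard cap: even batchers stop at 3 * nr_requests / 2 */
	if (rl->count[rw] >= (3 * q->nr_requests / 2))
		goto out;

so a batcher can exceed nr_requests, but never by more than 50%.)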

This is all a bit backwards, I think. The io schedulers CAN choose which
processes get to allocate requests, through the ->may_queue() hook.
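
For illustration, a ->may_queue() hook has roughly this shape. This is
a toy sketch against the 2.6.24-ish elevator API, not CFQ's actual
policy; the nice-level thresholds here are made up:

	#include <linux/blkdev.h>
	#include <linux/elevator.h>
	#include <linux/sched.h>

	static int example_may_queue(struct request_queue *q, int rw)
	{
		/* The hook runs in the context of the allocating task,
		 * so per-process policy can be applied before a request
		 * even exists. Toy policy: favour high-priority tasks,
		 * throttle heavily niced ones. */
		if (task_nice(current) < -10)
			return ELV_MQUEUE_MUST;	/* allocate past the limits */
		if (task_nice(current) > 10)
			return ELV_MQUEUE_NO;	/* refuse this allocation */
		return ELV_MQUEUE_MAY;		/* normal nr_requests rules */
	}

It gets wired up through the elevator_may_queue_fn member of the
scheduler's elevator_ops and is consulted from elv_may_queue() in the
request allocation path.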

I definitely think the batching logic could do with some improvements,
so I'd encourage you to try and fix that instead. It'd be nice if it
did honor the max number of requests limit. The current batching works
well for allowing a process to queue some IO when it gets the
allocation 'token'; that behaviour should be retained.
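
The guts of that token logic, for reference (simplified from
ll_rw_blk.c, from memory, so don't quote me on the details):

	static int ioc_batching(struct request_queue *q,
				struct io_context *ioc)
	{
		if (!ioc)
			return 0;

		/* A full, unused batch always counts, so a batcher can
		 * make at least one allocation even if the window has
		 * lapsed; otherwise the batch is live for BLK_BATCH_TIME
		 * after the task last had to wait. */
		return ioc->nr_batch_requests == q->nr_batching ||
			(ioc->nr_batch_requests > 0 &&
			 time_before(jiffies,
				     ioc->last_waited + BLK_BATCH_TIME));
	}

	static void ioc_set_batching(struct request_queue *q,
				     struct io_context *ioc)
	{
		if (!ioc || ioc_batching(q, ioc))
			return;

		/* hand out a fresh token of q->nr_batching requests */
		ioc->nr_batch_requests = q->nr_batching;
		ioc->last_waited = jiffies;
	}

ioc_set_batching() is only invoked for a task that has actually run
into a full queue, which is why the token goes to the processes that
were throttled rather than to everyone.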

Did you run any performance numbers with your patch, btw?

-- 
Jens Axboe

