Message-ID: <1491089381.9734.2.camel@sandisk.com>
Date: Sat, 1 Apr 2017 23:29:55 +0000
From: Bart Van Assche <Bart.VanAssche@...disk.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"osandov@...ndov.com" <osandov@...ndov.com>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"axboe@...com" <axboe@...com>
CC: "kernel-team@...com" <kernel-team@...com>
Subject: Re: [PATCH] blk-mq: add random early detection I/O scheduler
On Sat, 2017-04-01 at 16:07 -0600, Jens Axboe wrote:
> On 04/01/2017 01:55 PM, Omar Sandoval wrote:
> > From: Omar Sandoval <osandov@...com>
> >
> > This patch introduces a new I/O scheduler based on the classic random
> > early detection active queue management algorithm [1]. Random early
> > detection is one of the simplest and most studied AQM algorithms for
> > networking, but until now, it hasn't been applied to disk I/O
> > scheduling.
> >
> > When applied to network routers, RED probabilistically either marks
> > packets with ECN or drops them, depending on the configuration. When
> > dealing with disk I/O, POSIX does not have any mechanism with which to
> > notify the caller that the disk is congested, so we instead only provide
> > the latter strategy. Included in this patch is a minor change to the
> > blk-mq to support this.
>
> This is great work. If we combine this with a thin provisioning target,
> we can even use this to save space on the backend. Better latencies,
> AND lower disk utilization.
>
> I'm tempted to just queue this up for this cycle and make it the default.
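[For context, the drop decision the quoted description refers to is the
classic RED calculation: keep an exponentially weighted moving average of
the queue depth and drop with a probability that rises linearly between two
thresholds. The sketch below is an illustrative userspace rendering of that
textbook algorithm, not code from Omar's patch; the struct, function name
and parameter values are invented for the example.

/*
 * Minimal sketch of the classic RED drop decision (illustrative only;
 * names and thresholds are not taken from the patch).
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct red_state {
	double avg;	/* EWMA of the queue depth */
	double wq;	/* EWMA weight, e.g. 0.002 */
	double min_th;	/* below this average: always admit */
	double max_th;	/* above this average: always drop */
	double max_p;	/* drop probability as avg approaches max_th */
};

/* Update the average and decide whether to drop the incoming request. */
static bool red_should_drop(struct red_state *s, unsigned int queue_depth)
{
	s->avg = (1.0 - s->wq) * s->avg + s->wq * queue_depth;

	if (s->avg < s->min_th)
		return false;
	if (s->avg >= s->max_th)
		return true;

	/* Probability grows linearly between min_th and max_th. */
	double p = s->max_p * (s->avg - s->min_th) / (s->max_th - s->min_th);
	return (double)rand() / RAND_MAX < p;
}

int main(void)
{
	/* Seed the average at the current depth so the example is active. */
	struct red_state s = { .avg = 24, .wq = 0.002,
			       .min_th = 8, .max_th = 32, .max_p = 0.1 };

	for (int i = 0; i < 10; i++)
		printf("request %d: %s\n", i,
		       red_should_drop(&s, 24) ? "drop" : "admit");
	return 0;
}

Since there is no ECN-like way to mark disk I/O as congested, a blk-mq
version of this would have to fail the request in the "drop" branch instead
of dispatching it, which is what the quoted description means by only
providing the latter strategy.]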
Hello Jens,

Did you mean making this the default scheduler for SSDs only, or for all
types of block devices? Our (Western Digital) experience is that any I/O
scheduler that limits the queue depth reduces throughput for at least
data-center-style workloads when using hard disks. This is why Adam is
working on improving I/O priority support for the Linux block layer. That
approach allows the latency of certain requests to be reduced without
significantly impacting average latency and throughput.

Bart.