Message-ID: <20170113150511.GD23338@kernel.dk>
Date: Fri, 13 Jan 2017 08:05:11 -0700
From: Jens Axboe <axboe@...com>
To: Hannes Reinecke <hare@...e.de>
CC: <linux-kernel@...r.kernel.org>, <linux-block@...r.kernel.org>,
<osandov@...ndov.com>, <bart.vanassche@...disk.com>
Subject: Re: [PATCHSET v6] blk-mq scheduling framework
On Fri, Jan 13 2017, Hannes Reinecke wrote:
> On 01/13/2017 12:04 PM, Hannes Reinecke wrote:
> > On 01/13/2017 09:15 AM, Hannes Reinecke wrote:
> >> On 01/11/2017 10:39 PM, Jens Axboe wrote:
> >>> Another year, another posting of this patchset. The previous posting
> >>> was here:
> >>>
> >>> https://www.spinics.net/lists/kernel/msg2406106.html
> >>>
> >>> (yes, I've skipped v5; it was fixes on top of v4, not the rework).
> >>>
> >>> I've reworked bits of this to get rid of the shadow requests, thanks
> >>> to Bart for the inspiration. The missing piece, for me, was the fact
> >>> that we have the tags->rqs[] indirection array already. I've done this
> >>> somewhat differently, though, by having the internal scheduler tag
> >>> map be allocated/torn down when an IO scheduler is attached or
> >>> detached. This also means that when we run without a scheduler, we
> >>> don't have to do double tag allocations; it'll work like before.
> >>>
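[Editor's note: the attach/detach lifecycle described above can be pictured
with a small sketch. Everything here -- struct names, the sched_attach()/
sched_detach() helpers -- is a hypothetical stand-in for illustration, not
the actual patch code: the scheduler tag map only exists while a scheduler
is attached, so the no-scheduler path keeps its single tag allocation.]

	/*
	 * Illustrative sketch only; all names are hypothetical.
	 * Compiles as a standalone userspace program.
	 */
	#include <errno.h>
	#include <stdlib.h>

	struct tag_map {
		unsigned int nr_tags;
		/* bitmap, rqs[] indirection array, etc. elided */
	};

	struct hctx {
		struct tag_map *tags;       /* driver tags, always present */
		struct tag_map *sched_tags; /* only while a scheduler is attached */
	};

	static struct tag_map *tag_map_alloc(unsigned int nr_tags)
	{
		struct tag_map *map = calloc(1, sizeof(*map));

		if (map)
			map->nr_tags = nr_tags;
		return map;
	}

	static int sched_attach(struct hctx *hctx, unsigned int nr_tags)
	{
		/* the second tag space comes into existence with the scheduler */
		hctx->sched_tags = tag_map_alloc(nr_tags);
		return hctx->sched_tags ? 0 : -ENOMEM;
	}

	static void sched_detach(struct hctx *hctx)
	{
		/* tear it down again: back to a single (driver) tag allocation */
		free(hctx->sched_tags);
		hctx->sched_tags = NULL;
	}

	int main(void)
	{
		struct hctx hctx = { .tags = tag_map_alloc(64) };

		if (sched_attach(&hctx, 64) == 0)
			sched_detach(&hctx);
		free(hctx.tags);
		return 0;
	}
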
> >>> The patchset applies on top of 4.10-rc3, or can be pulled here:
> >>>
> >>> git://git.kernel.dk/linux-block blk-mq-sched.6
> >>>
> >> Well ... something's wrong here on my machine:
> >>
> [ .. ]
>
> Turns out that selecting CONFIG_DEFAULT_MQ_DEADLINE is the culprit;
> switching to CONFIG_DEFAULT_MQ_NONE and manually selecting mq-deadline
> after boot makes the problem go away.
>
> So there is a race condition between device initialization and
> switching the I/O scheduler.
>
> But the results from using mq-deadline are promising; the performance
> drop I've seen on older hardware seems to be resolved:
>
> mq iosched:
>   seq read  : io=13383MB, bw=228349KB/s, iops=57087
>   rand read : io=12876MB, bw=219709KB/s, iops=54927
>   seq write : io=14532MB, bw=247987KB/s, iops=61996
>   rand write: io=13779MB, bw=235127KB/s, iops=58781
> mq default:
>   seq read  : io=13056MB, bw=222588KB/s, iops=55647
>   rand read : io=12908MB, bw=220069KB/s, iops=55017
>   seq write : io=13986MB, bw=238444KB/s, iops=59611
>   rand write: io=13733MB, bw=234128KB/s, iops=58532
> sq default:
>   seq read  : io=10240MB, bw=194787KB/s, iops=48696
>   rand read : io=10240MB, bw=191374KB/s, iops=47843
>   seq write : io=10240MB, bw=245333KB/s, iops=61333
>   rand write: io=10240MB, bw=228239KB/s, iops=57059
>
> measured on mpt2sas with SSD devices.
Perfect! Straight on the path to killing off non-scsi-mq, then.
I'll fix up the async scan issue. The new mq schedulers don't really
behave differently in this regard, so I'm a bit puzzled. Hopefully it
reproduces here.
--
Jens Axboe