Date:   Fri, 19 Jan 2018 09:19:24 -0700
From:   Jens Axboe <axboe@...nel.dk>
To:     Ming Lei <ming.lei@...hat.com>
Cc:     Bart Van Assche <Bart.VanAssche@....com>,
        "snitzer@...hat.com" <snitzer@...hat.com>,
        "dm-devel@...hat.com" <dm-devel@...hat.com>,
        "hch@...radead.org" <hch@...radead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
        "osandov@...com" <osandov@...com>
Subject: Re: [RFC PATCH] blk-mq: fixup RESTART when queue becomes idle

On 1/19/18 9:05 AM, Ming Lei wrote:
> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
>> On 1/19/18 8:40 AM, Ming Lei wrote:
>>>>>> Where does the dm STS_RESOURCE error usually come from - what exact
>>>>>> resource are we running out of?
>>>>>
>>>>> It is from blk_get_request(underlying queue), see
>>>>> multipath_clone_and_map().
>>>>
>>>> That's what I thought. So for a low queue depth underlying queue, it's
>>>> quite possible that this situation can happen. Two potential solutions
>>>> I see:
>>>>
>>>> 1) As described earlier in this thread, having a mechanism for being
>>>>    notified when the scarce resource becomes available. It would not
>>>>    be hard to tap into the existing sbitmap wait queue for that.
>>>>
>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>>>    allocation. I haven't read the dm code to know if this is a
>>>>    possibility or not.
>>>>
>>>> I'd probably prefer #1. It's a classic case of trying to get the
>>>> request, and if it fails, add ourselves to the sbitmap tag wait
>>>> queue head, retry, and bail if that also fails. Connecting the
>>>> scarce resource and the consumer is the only way to really fix
>>>> this, without bogus arbitrary delays.
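
To make #1 concrete, a rough and completely untested sketch of that
try / add-to-waitqueue / retry / bail sequence could look like the below.
dm_try_get_clone_rq() and scarce_resource_waitqueue() are made-up
placeholder names, not existing APIs; the rest is the stock <linux/wait.h>
and blk-mq machinery (a real version would also need locking around
'armed'):

#include <linux/wait.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/* Placeholders for illustration only: */
static struct request *dm_try_get_clone_rq(struct request_queue *q);
static wait_queue_head_t *scarce_resource_waitqueue(struct request_queue *q);

struct clone_wait {
	struct wait_queue_entry	wait;
	struct request_queue	*dm_q;		/* dm queue to re-run on wakeup */
	bool			armed;
};

/* Wake callback: the underlying queue freed the scarce resource. */
static int clone_wait_wake(struct wait_queue_entry *wait, unsigned mode,
			   int flags, void *key)
{
	struct clone_wait *cw = container_of(wait, struct clone_wait, wait);

	list_del_init(&wait->entry);
	cw->armed = false;
	blk_mq_run_hw_queues(cw->dm_q, true);	/* retry the dm dispatch */
	return 1;
}

static blk_status_t clone_or_arm_wait(struct request_queue *under_q,
				      struct clone_wait *cw,
				      struct request **clone)
{
	wait_queue_head_t *wq = scarce_resource_waitqueue(under_q);

	*clone = dm_try_get_clone_rq(under_q);
	if (*clone)
		return BLK_STS_OK;

	/* Lost the race: hook into the waitqueue that gets woken when a
	 * request on the underlying queue completes. */
	if (!cw->armed) {
		init_waitqueue_func_entry(&cw->wait, clone_wait_wake);
		add_wait_queue(wq, &cw->wait);
		cw->armed = true;
	}

	/* Retry once, in case the resource was freed before we queued up. */
	*clone = dm_try_get_clone_rq(under_q);
	if (*clone) {
		remove_wait_queue(wq, &cw->wait);
		cw->armed = false;
		return BLK_STS_OK;
	}

	/* Bail; the wake callback above will re-run the dm queue. */
	return BLK_STS_RESOURCE;
}
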
>>>
>>> Right, as I replied to Bart, using mod_delayed_work_on() and
>>> returning BLK_STS_NO_DEV_RESOURCE (or some such name) for the scarce
>>> resource should fix this issue.
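
If I follow, the shape of that on the blk-mq side would roughly be the
below (untested; BLK_STS_NO_DEV_RESOURCE doesn't exist yet, so it's defined
as a placeholder here, and the 3 msec delay is an arbitrary example).
blk_mq_delay_run_hw_queue() is the existing helper that, as far as I can
tell, ends up in kblockd_mod_delayed_work_on()/mod_delayed_work_on()
underneath:

#include <linux/blk_types.h>
#include <linux/blk-mq.h>

/* Placeholder for the proposed new status value: */
#define BLK_STS_NO_DEV_RESOURCE	((__force blk_status_t)13)

#define BLK_MQ_RESOURCE_DELAY	3	/* msecs, arbitrary */

static void handle_dispatch_status(struct blk_mq_hw_ctx *hctx,
				   blk_status_t ret)
{
	if (ret == BLK_STS_NO_DEV_RESOURCE) {
		/*
		 * The driver ran out of a resource that isn't tied to a tag
		 * on this queue, so no completion on this queue will come
		 * along to trigger a RESTART.  Re-run the hw queue after a
		 * bounded delay so the queue can't stall forever.
		 */
		blk_mq_delay_run_hw_queue(hctx, BLK_MQ_RESOURCE_DELAY);
	}
}
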
>>
>> It'll fix the forever stall, but it won't really fix the underlying
>> problem, as we'll slow down the dm device by some random amount.
>>
>> A simple test case would be to have a null_blk device with a queue depth
>> of one, and dm on top of that. Start a fio job that runs two jobs: one
>> that does IO to the underlying device, and one that does IO to the dm
>> device. If the job on the dm device runs substantially slower than the
>> one to the underlying device, then the problem isn't really fixed.
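
For reference, one way to set that up (module parameter names from memory,
device paths will differ, and the dm table itself is left out since the
exact target isn't the point):

  # single hw queue, queue depth 1
  modprobe null_blk queue_mode=2 submit_queues=1 hw_queue_depth=1

  # put a request-based dm device (e.g. dm-mpath) on top of /dev/nullb0,
  # then run one fio job against each device:
  fio --name=underlying --filename=/dev/nullb0 --ioengine=libaio \
      --direct=1 --rw=randread --iodepth=32 --runtime=30 --time_based \
      --name=dm --filename=/dev/dm-0 --ioengine=libaio \
      --direct=1 --rw=randread --iodepth=32 --runtime=30 --time_based
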
> 
> I remember trying this test on scsi-debug & dm-mpath over scsi-debug and
> I did not observe this issue. Could you explain a bit why IO over dm-mpath
> may be slower? Both IO contexts call the same get_request(), and in theory
> dm-mpath should be a bit quicker since it uses direct issue for the
> underlying queue, without an io scheduler involved.

Because if you lose the race for getting the request, you'll potentially
have some arbitrary delay before trying again. Compare that to the direct
user of the underlying device, who will simply sleep on the resource and
get woken the instant it's available.

-- 
Jens Axboe
