Message-ID: <d6af2344-11f7-5862-daed-e21cbd496d92@kernel.dk>
Date:   Tue, 31 Mar 2020 12:26:01 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Paolo Valente <paolo.valente@...aro.org>,
        Ming Lei <ming.lei@...hat.com>
Cc:     Douglas Anderson <dianders@...omium.org>, jejb@...ux.ibm.com,
        "Martin K. Petersen" <martin.petersen@...cle.com>,
        linux-block <linux-block@...r.kernel.org>,
        Guenter Roeck <groeck@...omium.org>,
        linux-scsi@...r.kernel.org, sqazi@...gle.com,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] scsi: core: Fix stall if two threads request budget
 at the same time

On 3/31/20 12:07 PM, Paolo Valente wrote:
>> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@...hat.com> wrote:
>>
>> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>>> It is possible for two threads to be running
>>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
>>> This is because there can be more than one caller to
>>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
>>> prevent more than one thread from entering.
>>>
>>> If more than one thread is running blk_mq_do_dispatch_sched() at the
>>> same time with the same "hctx", they may have contention acquiring
>>> budget.  The blk_mq_get_dispatch_budget() can eventually translate
>>> into scsi_mq_get_budget().  If the device's "queue_depth" is 1 (not
>>> uncommon) then only one of the two threads will be the one to
>>> increment "device_busy" to 1 and get the budget.
>>>
>>> The losing thread will break out of blk_mq_do_dispatch_sched() and
>>> will stop dispatching requests.  The assumption is that when more
>>> budget is available later (when existing transactions finish) the
>>> queue will be kicked again, perhaps in scsi_end_request().
>>>
>>> The winning thread now has budget and can go on to call
>>> dispatch_request().  If dispatch_request() returns NULL here then we
>>> have a potential problem.  Specifically we'll now call
>>
>> I guess this problem is BFQ specific. There are definitely
>> requests in the BFQ queue w.r.t. this hctx. However, it looks like this
>> request is only visible to the other, losing, thread, and it won't be
>> retrieved by the winning thread via e->type->ops.dispatch_request().
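
For reference, the interleaving being described is roughly the
following (thread labels are illustrative; scsi_device queue_depth == 1):

	Thread A                         Thread B
	--------                         --------
	has_work() -> true               has_work() -> true
	get_budget() -> device_busy=1    get_budget() -> fails, gives up
	dispatch_request() -> NULL
	put_budget() -> device_busy=0

Neither thread has dispatched the request BFQ is holding back, and
nothing is scheduled to run this hctx again.
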
>>
>> Just wondering why BFQ is implemented in this way?
>>
> 
> BFQ inherited this powerful non-working scheme from CFQ, ages ago.
> 
> In more detail: if BFQ has at least one non-empty internal queue, then
> it says, of course, that there is work to do.  But if the currently
> in-service queue is empty, and is expected to receive new I/O, then
> BFQ plugs I/O dispatch to enforce service guarantees for the
> in-service queue, i.e., BFQ responds NULL to a dispatch request.
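
Roughly, paraphrasing the current code in block/bfq-iosched.c (not
verbatim):

	static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
	{
		struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;

		/* claims there is work whenever any bfq_queue is busy... */
		return !list_empty_careful(&bfqd->dispatch) ||
			bfq_tot_busy_queues(bfqd) > 0;
	}

...while bfq_dispatch_request() can keep returning NULL for as long as
the in-service queue stays plugged.
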

What BFQ is doing is fine, IFF it always ensures that the queue is run
at some later time whenever it returns "yep I have work" yet returns
NULL when asked to retrieve that work. Generally that re-run should come
from a subsequent IO completion, or from whatever other condition
resolves the issue that is currently preventing dispatch of that
request. The last resort would be a timer, but needing one is plausible
if you're slicing your scheduling somehow.
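
For that timer fallback the machinery already exists; a minimal sketch
(the 3 msec delay is arbitrary, not a recommendation):

	/* safety net: re-run this hctx a bit later, in case no
	 * completion or other event comes along to kick it */
	blk_mq_delay_run_hw_queue(hctx, 3);
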

> It would be very easy to change bfq_has_work so that it returns false
> in case the in-service queue is empty, even if there is I/O
> backlogged.  My only concern is: since everything has worked with the
> current scheme for probably 15 years, are we sure that everything is
> still ok after we change this scheme?
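
Concretely, that change would be something like the sketch below, where
bfq_in_service_queue_empty() is a hypothetical helper meaning "there is
an in-service queue and it currently has no queued I/O":

	static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
	{
		struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;

		/* requests already on the dispatch list always count */
		if (!list_empty_careful(&bfqd->dispatch))
			return true;

		/* don't claim work while dispatch is plugged waiting
		 * for I/O on an empty in-service queue */
		return bfq_tot_busy_queues(bfqd) > 0 &&
			!bfq_in_service_queue_empty(bfqd);
	}
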

You're comparing apples to oranges: CFQ never worked within the blk-mq
scheduling framework.

That said, I don't think such a change is needed. If we currently have a
hang due to this discrepancy between has_work and gets_work, then it
sounds like we're not always re-running the queue as we should be.
Judging from the original patch, putting the budget back is not
something the scheduler is involved with. Do we just need to ensure that
if we put budget without having dispatched a request, we kick off
dispatching again?
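
Something along these lines in blk_mq_do_dispatch_sched(), i.e. treat a
NULL dispatch after a successful budget grab as "re-run me shortly"
(sketch only; the delay value is arbitrary):

	rq = e->type->ops.dispatch_request(hctx);
	if (!rq) {
		blk_mq_put_dispatch_budget(hctx);
		/*
		 * has_work said there was work, but we couldn't get
		 * it; another thread may have lost the budget race
		 * and given up, so nothing else will re-run this
		 * hctx.  Schedule a delayed re-run ourselves.
		 */
		blk_mq_delay_run_hw_queue(hctx, 3);
		break;
	}
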


-- 
Jens Axboe
