Date:   Tue, 7 Apr 2020 12:43:58 +0200
From:   Paolo Valente <paolo.valente@...aro.org>
To:     Doug Anderson <dianders@...omium.org>
Cc:     Ming Lei <ming.lei@...hat.com>, Jens Axboe <axboe@...nel.dk>,
        "James E.J. Bottomley" <jejb@...ux.ibm.com>,
        "Martin K. Petersen" <martin.petersen@...cle.com>,
        Salman Qazi <sqazi@...gle.com>,
        linux-block <linux-block@...r.kernel.org>,
        linux-scsi@...r.kernel.org, Guenter Roeck <groeck@...omium.org>,
        Ajay Joshi <ajay.joshi@....com>, Arnd Bergmann <arnd@...db.de>,
        Bart Van Assche <bvanassche@....org>,
        Chaitanya Kulkarni <chaitanya.kulkarni@....com>,
        Damien Le Moal <damien.lemoal@....com>,
        Hou Tao <houtao1@...wei.com>,
        Pavel Begunkov <asml.silence@...il.com>,
        Tejun Heo <tj@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 2/2] blk-mq: Rerun dispatching in the case of budget
 contention



> Il giorno 5 apr 2020, alle ore 18:16, Doug Anderson <dianders@...omium.org> ha scritto:
> 
> Hi,
> 
> On Sun, Apr 5, 2020 at 7:55 AM Paolo Valente <paolo.valente@...aro.org> wrote:
>> 
>>> Il giorno 5 apr 2020, alle ore 16:00, Doug Anderson <dianders@...omium.org> ha scritto:
>>> 
>>> Hi,
>>> 
>>> On Sun, Apr 5, 2020 at 2:15 AM Ming Lei <ming.lei@...hat.com> wrote:
>>>> 
>>>> OK, looks like it isn't specific to BFQ any more.
>>>> 
>>>> Here is another candidate approach for this issue. Given it is so hard
>>>> to trigger, we can make it more reliable by rerunning the queue when
>>>> has_work() returns true after ops->dispatch_request() returns NULL.
>>>> 
>>>> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
>>>> index 74cedea56034..4408e5d4fcd8 100644
>>>> --- a/block/blk-mq-sched.c
>>>> +++ b/block/blk-mq-sched.c
>>>> @@ -80,6 +80,7 @@ void blk_mq_sched_restart(struct blk_mq_hw_ctx *hctx)
>>>>       blk_mq_run_hw_queue(hctx, true);
>>>> }
>>>> 
>>>> +#define BLK_MQ_BUDGET_DELAY    3               /* ms units */
>>>> /*
>>>> * Only SCSI implements .get_budget and .put_budget, and SCSI restarts
>>>> * its queue by itself in its completion handler, so we don't need to
>>>> @@ -103,6 +104,9 @@ static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
>>>>               rq = e->type->ops.dispatch_request(hctx);
>>>>               if (!rq) {
>>>>                       blk_mq_put_dispatch_budget(hctx);
>>>> +
>>>> +                       if (e->type->ops.has_work && e->type->ops.has_work(hctx))
>>>> +                               blk_mq_delay_run_hw_queue(hctx, BLK_MQ_BUDGET_DELAY);
>>> 
>>> I agree that your patch should solve the race.  With BFQ's current
>>> has_work(), though, it's a bit of a disaster.  It will essentially put
>>> blk-mq into a busy-wait loop (with a 3 ms delay between each poll)
>>> while BFQ's has_work() says "true" but BFQ doesn't dispatch anything.
>>> 
>>> ...so I guess the question that still needs to be answered is: does
>>> has_work() need to be exact?  If so, then we need the patch you propose
>>> plus one to BFQ.  If not, we should continue along the lines of my
>>> patch.
>>> 
>> 
>> Some more comments.  BFQ's I/O plugging lasts 9 ms by default.  So,
>> with this latest patch of Ming's, BFQ may be polled every 3 ms, at
>> most three times (the 9 ms plugging window divided by the 3 ms delay).
> 
> Ah!  I did not know this.  OK, then Ming's patch seems like it should
> work.  If nothing else it should fix the problem.  If this ends up
> making BFQ chew up too much CPU time then presumably someone will
> notice and BFQ's has_work() can be improved.
> 
> Ming: how do you want to proceed?  Do you want to formally post the
> patch?  Do you want me to post a v3 of my series where I replace patch
> #2 with your patch?  Do you want authorship (which implies adding your
> Signed-off-by)?
> 
> 
>> On the opposite end, making bfq_has_work plugging-aware costs more
>> complexity, and possibly one more lock.  While it would avoid the
>> occasional polling above, it may imply a lot of overhead or CPU stalls
>> on every dispatch.
> 
> I still think it would be interesting to run performance tests with my
> proof-of-concept solution for has_work().  Even if it's not ideal,
> knowing whether performance increased, decreased, or stayed the same
> would give information about how much more effort should be put into
> this.
> 

Why not?  It is hard to hope, however, that we would add only
negligible overhead and CPU stalls if we move from one lock-protected
section per dispatched I/O request to two or more lock-protected
sections per request (has_work may be invoked several times per
request).
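
Just to make the concern concrete, here is a rough, purely
illustrative sketch of what a plugging-aware has_work might end up
doing (the bfqd->num_plugged_queued field and the exact locking are
placeholders of mine, not actual BFQ code and not Doug's proof of
concept):

static bool bfq_has_work_plug_aware(struct blk_mq_hw_ctx *hctx)
{
	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
	bool dispatchable;

	/*
	 * Answering "is there work that could actually be dispatched
	 * now, or is every pending queue plugged?" needs a consistent
	 * view of the scheduler state, hence the scheduler lock,
	 * taken here on every has_work invocation in addition to the
	 * one already taken in dispatch_request().
	 */
	spin_lock_irq(&bfqd->lock);
	dispatchable = bfqd->queued > bfqd->num_plugged_queued;
	spin_unlock_irq(&bfqd->lock);

	return dispatchable;
}

Whether the extra lock/unlock per invocation is actually measurable
is exactly what a throughput test should tell us.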

At any rate, if useful, one of the scripts in my S benchmark suite can
also measure max IOPS (when limited only by I/O processing) [1].  The
script is for Linux distros; I don't know whether it works in your
environments of interest, Doug.

Paolo

[1] https://github.com/Algodev-github/S/blob/master/throughput-sync/throughput-sync.sh

> -Doug
