Message-ID: <20180802165831.GB8928@ming.t460p>
Date:   Fri, 3 Aug 2018 00:58:32 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Bart Van Assche <Bart.VanAssche@....com>
Cc:     "jianchao.w.wang@...cle.com" <jianchao.w.wang@...cle.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
        "axboe@...nel.dk" <axboe@...nel.dk>
Subject: Re: [RFC] blk-mq: clean up the hctx restart

On Thu, Aug 02, 2018 at 03:52:00PM +0000, Bart Van Assche wrote:
> On Wed, 2018-08-01 at 16:58 +0800, Ming Lei wrote:
> > On Wed, Aug 01, 2018 at 10:17:30AM +0800, jianchao.wang wrote:
> > > However, due to the limit in hctx_may_queue, q_b still cannot get the
> > > tags, and the RR restart will not wake up q_a either.
> > > This is unfair for q_a.
> > > 
> > > When we remove the RR restart fashion, at least q_a will be woken up
> > > by the hctx restart.
> > > Is this the fairness improvement in driver tag allocation that you mentioned?
> > 
> > I mean the fairness is now entirely covered by the general tag allocation
> > algorithm, which is roughly FIFO because of the wait queue, whereas the RR
> > restart wakes up queues in request-queue order.
> 
> From sbitmap.h:
> 
> #define SBQ_WAIT_QUEUES 8
> 
> What do you think is the effect of your patch if more than eight LUNs are
> active and the SCSI queue is full?

Jens introduced multiple wait queues to scatter the atomic operations
over multiple counters; conceptually the whole thing is easiest to
understand if you just treat it as one single queue.
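
Just to illustrate the idea, below is a toy user-space model (made-up
names, not the actual sbitmap code): spreading the waiters over several
wait heads only scatters the atomic bookkeeping; logically it still
behaves like one queue.

/* Toy model, user space only: waiters are spread over 8 wait heads so
 * the atomic counters are scattered, but logically it is one queue.
 * All names here are made up for illustration.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_WAIT_HEADS	8			/* like SBQ_WAIT_QUEUES */

static atomic_uint nr_waiters[NR_WAIT_HEADS];	/* one counter per head */
static atomic_uint wait_index;			/* round-robin head pick */

/* A queue that failed to get a driver tag parks itself on the next
 * head, so concurrent waiters mostly touch different counters.
 */
static void park_waiter(int queue_id)
{
	unsigned int idx = atomic_fetch_add(&wait_index, 1) % NR_WAIT_HEADS;
	unsigned int n = atomic_fetch_add(&nr_waiters[idx], 1) + 1;

	printf("queue %d -> wait head %u (%u waiters on this head)\n",
	       queue_id, idx, n);
}

int main(void)
{
	for (int q = 0; q < 20; q++)
		park_waiter(q);
	return 0;
}

Each concurrent waiter mostly hits a different counter/cacheline, which
is the point of having SBQ_WAIT_QUEUES heads; ordering-wise nothing
changes.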

As for the situation you mentioned, there is nothing special about it
compared with the normal case, or even with thousands of LUNs: a batch
of queues is woken up from one single wait queue (sbq_wait_state), and
inside each wait queue the waiters are handled in FIFO order.
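
For the wake side, a minimal sketch (again hypothetical names, not the
kernel implementation) of what I mean by a batch being woken up from
one single wait head, FIFO inside that head:

/* Toy model of the wake side: one "tags freed" event picks the next
 * wait head round-robin and wakes a batch from it, oldest first.
 * User-space sketch with made-up names, not the sbitmap code.
 */
#include <stdio.h>

#define NR_WAIT_HEADS	8	/* like SBQ_WAIT_QUEUES */
#define WAKE_BATCH	4	/* waiters woken per event */

struct wait_head {
	int fifo[32];		/* parked queue ids, FIFO order */
	unsigned int head, tail;
};

static struct wait_head ws[NR_WAIT_HEADS];

static void park(int queue_id)
{
	static unsigned int wait_index;
	struct wait_head *w = &ws[wait_index++ % NR_WAIT_HEADS];

	w->fifo[w->tail++ % 32] = queue_id;
}

static void wake_batch(void)
{
	static unsigned int wake_index;
	struct wait_head *w = &ws[wake_index++ % NR_WAIT_HEADS];

	for (int n = 0; n < WAKE_BATCH && w->head != w->tail; n++)
		printf("waking queue %d\n", w->fifo[w->head++ % 32]);
}

int main(void)
{
	for (int q = 0; q < 20; q++)
		park(q);
	for (int e = 0; e < 5; e++)
		wake_batch();
	return 0;
}

Running it shows each wake event draining one head in arrival order,
which is all I meant by FIFO handling inside a wait queue.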

Or what ideal behaviour would you expect with respect to fairness?

thanks, 
Ming
