Message-ID: <20180627192046.ieqncfl6ioy37mof@destiny>
Date: Wed, 27 Jun 2018 15:20:48 -0400
From: Josef Bacik <josef@...icpanda.com>
To: Jens Axboe <axboe@...nel.dk>
Cc: Josef Bacik <josef@...icpanda.com>, linux-block@...r.kernel.org,
kernel-team@...com, akpm@...ux-foundation.org, hannes@...xchg.org,
linux-kernel@...r.kernel.org, tj@...nel.org,
linux-fsdevel@...r.kernel.org, Josef Bacik <jbacik@...com>
Subject: Re: [PATCH 12/15] block: introduce blk-iolatency io controller
On Wed, Jun 27, 2018 at 01:06:31PM -0600, Jens Axboe wrote:
> On 6/25/18 9:12 AM, Josef Bacik wrote:
> > +static void __blkcg_iolatency_throttle(struct rq_qos *rqos,
> > +                                       struct iolatency_grp *iolat,
> > +                                       spinlock_t *lock, bool issue_as_root,
> > +                                       bool use_memdelay)
> > +        __releases(lock)
> > +        __acquires(lock)
> > +{
> > +        struct rq_wait *rqw = &iolat->rq_wait;
> > +        unsigned use_delay = atomic_read(&lat_to_blkg(iolat)->use_delay);
> > +        DEFINE_WAIT(wait);
> > +        bool first_block = true;
> > +
> > +        if (use_delay)
> > +                blkcg_schedule_throttle(rqos->q, use_memdelay);
> > +
> > +        /*
> > +         * To avoid priority inversions we want to just take a slot if we are
> > +         * issuing as root. If we're being killed off there's no point in
> > +         * delaying things, we may have been killed by OOM so throttling may
> > +         * make recovery take even longer, so just let the IO's through so the
> > +         * task can go away.
> > +         */
> > +        if (issue_as_root || fatal_signal_pending(current)) {
> > +                atomic_inc(&rqw->inflight);
> > +                return;
> > +        }
> > +
> > +        if (iolatency_may_queue(iolat, &wait, first_block))
> > +                return;
> > +
> > +        do {
> > +                prepare_to_wait_exclusive(&rqw->wait, &wait,
> > +                                          TASK_UNINTERRUPTIBLE);
> > +
> > +                iolatency_may_queue(iolat, &wait, first_block);
> > +                first_block = false;
> > +
> > +                if (lock) {
> > +                        spin_unlock_irq(lock);
> > +                        io_schedule();
> > +                        spin_lock_irq(lock);
> > +                } else {
> > +                        io_schedule();
> > +                }
> > +        } while (1);
>
> So how does this wait loop ever exit?
>
Sigh, I cleaned this up from what we're using in production and did it poorly;
I'll fix it up.  The loop needs to break out once iolatency_may_queue() succeeds,
roughly as in the untested sketch below.  Thanks,
Josef
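
Something along these lines is what I have in mind; it's a rough, untested sketch
rather than the final fix, and it also adds the finish_wait() that should follow
the loop:

        do {
                prepare_to_wait_exclusive(&rqw->wait, &wait,
                                          TASK_UNINTERRUPTIBLE);

                /* Stop waiting once we actually grabbed an inflight slot. */
                if (iolatency_may_queue(iolat, &wait, first_block))
                        break;
                first_block = false;

                /* Drop the queue lock, if held, while we sleep. */
                if (lock) {
                        spin_unlock_irq(lock);
                        io_schedule();
                        spin_lock_irq(lock);
                } else {
                        io_schedule();
                }
        } while (1);

        finish_wait(&rqw->wait, &wait);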