Message-ID: <20190620164229.GK657710@devbig004.ftw2.facebook.com>
Date: Thu, 20 Jun 2019 09:42:29 -0700
From: Tejun Heo <tj@...nel.org>
To: Jan Kara <jack@...e.cz>
Cc: dsterba@...e.com, clm@...com, josef@...icpanda.com,
axboe@...nel.dk, linux-btrfs@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH 4/9] blkcg: implement REQ_CGROUP_PUNT

Hello, Jan.

On Thu, Jun 20, 2019 at 05:37:33PM +0200, Jan Kara wrote:
> > +bool __blkcg_punt_bio_submit(struct bio *bio)
> > +{
> > +	struct blkcg_gq *blkg = bio->bi_blkg;
> > +
> > +	/* consume the flag first */
> > +	bio->bi_opf &= ~REQ_CGROUP_PUNT;
> > +
> > +	/* never bounce for the root cgroup */
> > +	if (!blkg->parent)
> > +		return false;
> > +
> > +	spin_lock_bh(&blkg->async_bio_lock);
> > +	bio_list_add(&blkg->async_bios, bio);
> > +	spin_unlock_bh(&blkg->async_bio_lock);
> > +
> > +	queue_work(blkcg_punt_bio_wq, &blkg->async_bio_work);
> > +	return true;
> > +}
> > +
>
> So does this mean that if there is some inode with lots of dirty data for a
> blkcg that is heavily throttled, that blkcg can occupy a ton of workers all
> being throttled in submit_bio()? Or what is constraining the number of
> workers one blkcg can consume?

There's only one work item per blkcg-device pair, so the maximum
number of kthreads a blkcg can occupy on a filesystem would be one.
It's the same scheme as writeback work items.
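
For reference, here's a minimal sketch of the worker side of that
scheme, i.e. the single work item which drains the punted bios.  The
function name blkg_async_bio_workfn and its exact body are assumptions
for illustration, not part of the quoted hunk; only the async_bio_lock,
async_bios and async_bio_work fields come from the patch.

static void blkg_async_bio_workfn(struct work_struct *work)
{
	/* one work item per blkcg-device pair, embedded in blkcg_gq */
	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
					     async_bio_work);
	struct bio_list bios = BIO_EMPTY_LIST;
	struct bio *bio;

	/* splice out everything queued so far */
	spin_lock_bh(&blkg->async_bio_lock);
	bio_list_merge(&bios, &blkg->async_bios);
	bio_list_init(&blkg->async_bios);
	spin_unlock_bh(&blkg->async_bio_lock);

	/* resubmit from the kthread; throttling blocks only this worker */
	while ((bio = bio_list_pop(&bios)))
		submit_bio(bio);
}

Because there's a single async_bio_work per blkg, at most one such
worker can be blocked in submit_bio() for a given blkcg-device pair at
any time.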

Thanks.

-- 
tejun