Message-ID: <20190620153733.GM30243@quack2.suse.cz>
Date: Thu, 20 Jun 2019 17:37:33 +0200
From: Jan Kara <jack@...e.cz>
To: Tejun Heo <tj@...nel.org>
Cc: dsterba@...e.com, clm@...com, josef@...icpanda.com,
axboe@...nel.dk, jack@...e.cz, linux-btrfs@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
kernel-team@...com
Subject: Re: [PATCH 4/9] blkcg: implement REQ_CGROUP_PUNT
On Sat 15-06-19 11:24:48, Tejun Heo wrote:
> When a shared kthread needs to issue a bio for a cgroup, doing so
> synchronously can lead to priority inversions as the kthread can be
> trapped waiting for that cgroup. This patch implements
> REQ_CGROUP_PUNT flag which makes submit_bio() punt the actual issuing
> to a dedicated per-blkcg work item to avoid such priority inversions.
>
> This will be used to fix priority inversions in btrfs compression and
> should be generally useful as we grow filesystem support for
> comprehensive IO control.
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Reviewed-by: Josef Bacik <josef@...icpanda.com>
> Cc: Chris Mason <clm@...com>
...
> +bool __blkcg_punt_bio_submit(struct bio *bio)
> +{
> +	struct blkcg_gq *blkg = bio->bi_blkg;
> +
> +	/* consume the flag first */
> +	bio->bi_opf &= ~REQ_CGROUP_PUNT;
> +
> +	/* never bounce for the root cgroup */
> +	if (!blkg->parent)
> +		return false;
> +
> +	spin_lock_bh(&blkg->async_bio_lock);
> +	bio_list_add(&blkg->async_bios, bio);
> +	spin_unlock_bh(&blkg->async_bio_lock);
> +
> +	queue_work(blkcg_punt_bio_wq, &blkg->async_bio_work);
> +	return true;
> +}
> +
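[For reference, the other half of the mechanism is the work function that
drains blkg->async_bios from the dedicated workqueue. A rough sketch of
what that side looks like, reconstructed from the hunk above (the function
name and exact details are assumptions, not quoted from the patch):

static void blkg_async_bio_workfn(struct work_struct *work)
{
	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
					     async_bio_work);
	struct bio_list bios = BIO_EMPTY_LIST;
	struct bio *bio;

	/* grab everything queued so far; new bios requeue the work item */
	spin_lock_bh(&blkg->async_bio_lock);
	bio_list_merge(&bios, &blkg->async_bios);
	bio_list_init(&blkg->async_bios);
	spin_unlock_bh(&blkg->async_bio_lock);

	/* issue outside the lock; any throttling now blocks the worker,
	 * not the shared kthread that originally built the bio */
	while ((bio = bio_list_pop(&bios)))
		submit_bio(bio);
}

It is this worker context that my question below is about.]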
So does this mean that if there is some inode with lots of dirty data for a
blkcg that is heavily throttled, that blkcg can occupy a ton of workers all
being throttled in submit_bio()? Or what constrains the number of workers
one blkcg can consume?
Honza
--
Jan Kara <jack@...e.com>
SUSE Labs, CR