Message-ID: <20190620170141.GO30243@quack2.suse.cz>
Date:   Thu, 20 Jun 2019 19:01:41 +0200
From:   Jan Kara <jack@...e.cz>
To:     Tejun Heo <tj@...nel.org>
Cc:     Jan Kara <jack@...e.cz>, dsterba@...e.com, clm@...com,
        josef@...icpanda.com, axboe@...nel.dk, linux-btrfs@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-block@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH 4/9] blkcg: implement REQ_CGROUP_PUNT

On Thu 20-06-19 09:42:29, Tejun Heo wrote:
> Hello, Jan.
> 
> On Thu, Jun 20, 2019 at 05:37:33PM +0200, Jan Kara wrote:
> > > +bool __blkcg_punt_bio_submit(struct bio *bio)
> > > +{
> > > +	struct blkcg_gq *blkg = bio->bi_blkg;
> > > +
> > > +	/* consume the flag first */
> > > +	bio->bi_opf &= ~REQ_CGROUP_PUNT;
> > > +
> > > +	/* never bounce for the root cgroup */
> > > +	if (!blkg->parent)
> > > +		return false;
> > > +
> > > +	spin_lock_bh(&blkg->async_bio_lock);
> > > +	bio_list_add(&blkg->async_bios, bio);
> > > +	spin_unlock_bh(&blkg->async_bio_lock);
> > > +
> > > +	queue_work(blkcg_punt_bio_wq, &blkg->async_bio_work);
> > > +	return true;
> > > +}
> > > +
> > 
> > So does this mean that if there is some inode with lots of dirty data for a
> > blkcg that is heavily throttled, that blkcg can occupy a ton of workers all
> > being throttled in submit_bio()? Or what constrains the number of
> > workers one blkcg can consume?
> 
> There's only one work item per blkcg-device pair, so the maximum
> number of kthreads a blkcg can occupy on a filesystem would be one.
> It's the same scheme as writeback work items.
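
A minimal sketch of the worker side implied above, assuming only the
async_bios/async_bio_lock/async_bio_work fields visible in the quoted hunk;
the function name is illustrative and this shows the single-work-item drain
pattern rather than reproducing the exact code in the series:

/* depends on blk-cgroup internals: struct blkcg_gq, bio_list helpers */
static void blkg_async_bio_workfn(struct work_struct *work)
{
	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
					     async_bio_work);
	struct bio_list bios = BIO_EMPTY_LIST;
	struct bio *bio;

	/* take everything queued so far; submitters may keep adding */
	spin_lock_bh(&blkg->async_bio_lock);
	bio_list_merge(&bios, &blkg->async_bios);
	bio_list_init(&blkg->async_bios);
	spin_unlock_bh(&blkg->async_bio_lock);

	/* submit from the one worker this blkcg-device pair gets */
	while ((bio = bio_list_pop(&bios)))
		submit_bio(bio);
}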

OK, I'd missed the fact that although the work item can get queued while it
is still running, it cannot execute more than once at a time (which is kind
of obvious but I got confused). Thanks for the explanation!
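
For reference, the property relied on here can be shown with a stand-alone
(hypothetical, not from the patch) work item: queue_work() on an
already-pending item is a no-op that returns false, and the workqueue core
never runs the same work item concurrently with itself, so producers can
re-queue freely while the handler drains a shared list:

#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/list.h>

static DEFINE_SPINLOCK(demo_lock);
static LIST_HEAD(demo_items);
static void demo_workfn(struct work_struct *work);
static DECLARE_WORK(demo_work, demo_workfn);

static void demo_workfn(struct work_struct *work)
{
	LIST_HEAD(local);

	/* splice out whatever has been queued so far */
	spin_lock_bh(&demo_lock);
	list_splice_init(&demo_items, &local);
	spin_unlock_bh(&demo_lock);

	/* process @local; items added after the splice make the producer
	 * re-queue @demo_work, yielding one more execution later */
}

static void demo_add(struct list_head *item)
{
	spin_lock_bh(&demo_lock);
	list_add_tail(item, &demo_items);
	spin_unlock_bh(&demo_lock);

	/* returns false if already pending; at most one instance of
	 * demo_workfn() runs at any time */
	queue_work(system_wq, &demo_work);
}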

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
