Date:   Thu, 4 May 2023 23:08:53 +0800
From:   hanjinke <hanjinke.666@...edance.com>
To:     Andrea Righi <andrea.righi@...onical.com>
Cc:     tj@...nel.org, josef@...icpanda.com, axboe@...nel.dk,
        cgroups@...r.kernel.org, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH v2] blk-throttle: Fix io statistics for
 cgroup v1

Hi

Sorry for the delay (Chinese Labor Day holiday).

On 2023/4/29 3:05 AM, Andrea Righi wrote:
> On Sat, Apr 01, 2023 at 05:47:08PM +0800, Jinke Han wrote:
>> From: Jinke Han <hanjinke.666@...edance.com>
>>
>> After commit f382fb0bcef4 ("block: remove legacy IO schedulers"),
>> blkio.throttle.io_serviced and blkio.throttle.io_service_bytes become
>> the only stable io stats interface of cgroup v1, and these statistics
>> are done in the blk-throttle code. But the current code only counts the
>> bios that are actually throttled. When the user does not set any throttle
>> limit, the io stats for cgroup v1 show nothing. Fix it according to the
>> statistical method of v2 and make it count all ios accurately.
>>
>> Fixes: a7b36ee6ba29 ("block: move blk-throtl fast path inline")
>> Signed-off-by: Jinke Han <hanjinke.666@...edance.com>
> 
> Thanks for fixing this!
> 
> The code looks correct to me, but this seems to report io statistics
> only if at least one throttling limit is defined. IIRC with cgroup v1 it
> was possible to see the io statistics inside a cgroup even with no
> throttling limits configured.
> 
> Basically to restore the old behavior we would need to drop the
> cgroup_subsys_on_dfl() check, something like the following (on top of
> your patch).
> 
> But I'm not sure if we're breaking other behaviors in this way...
> opinions?
> 
>   block/blk-cgroup.c   |  3 ---
>   block/blk-throttle.h | 12 +++++-------
>   2 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index 79138bfc6001..43af86db7cf3 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -2045,9 +2045,6 @@ void blk_cgroup_bio_start(struct bio *bio)
>   	struct blkg_iostat_set *bis;
>   	unsigned long flags;
>   
> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
> -		return;
> -
>   	/* Root-level stats are sourced from system-wide IO stats */
>   	if (!cgroup_parent(blkcg->css.cgroup))
>   		return;
> diff --git a/block/blk-throttle.h b/block/blk-throttle.h
> index d1ccbfe9f797..bcb40ee2eeba 100644
> --- a/block/blk-throttle.h
> +++ b/block/blk-throttle.h
> @@ -185,14 +185,12 @@ static inline bool blk_should_throtl(struct bio *bio)
>   	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);
>   	int rw = bio_data_dir(bio);
>   
> -	if (!cgroup_subsys_on_dfl(io_cgrp_subsys)) {
> -		if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> -			bio_set_flag(bio, BIO_CGROUP_ACCT);
> -			blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> -					bio->bi_iter.bi_size);
> -		}
> -		blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);
> +	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
> +		bio_set_flag(bio, BIO_CGROUP_ACCT);
> +		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
> +				bio->bi_iter.bi_size);
>   	}
> +	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);

It seems that the statistics would then be accounted in both v1 and v2. Since
we can already get the v2 statistics from io.stat, is it necessary to count
for v2 here?
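
Just to illustrate what I mean, below is a rough sketch (my own illustration,
not a tested patch; names are taken from the hunks quoted above) that keeps
the two accounting paths exclusive: the v1 rwstat accounting always runs,
while blk_cgroup_bio_start() keeps its cgroup_subsys_on_dfl() check so the
v2 rstat accounting stays on the default hierarchy only.

/*
 * Rough sketch only: account every bio for the v1 files
 * (blkio.throttle.io_serviced / io_service_bytes), but do not also feed
 * it into the v2 io.stat accounting when running on cgroup v1.
 */
static inline bool blk_should_throtl(struct bio *bio)
{
	struct throtl_grp *tg = blkg_to_tg(bio->bi_blkg);

	/* v1 stats: account once per bio, guarded by BIO_CGROUP_ACCT */
	if (!bio_flagged(bio, BIO_CGROUP_ACCT)) {
		bio_set_flag(bio, BIO_CGROUP_ACCT);
		blkg_rwstat_add(&tg->stat_bytes, bio->bi_opf,
				bio->bi_iter.bi_size);
	}
	blkg_rwstat_add(&tg->stat_ios, bio->bi_opf, 1);

	/* ... the existing limit checks stay unchanged ... */
}

void blk_cgroup_bio_start(struct bio *bio)
{
	/* v2 stats: only on the default hierarchy, readable via io.stat */
	if (!cgroup_subsys_on_dfl(io_cgrp_subsys))
		return;

	/* ... existing rstat accounting unchanged ... */
}

With something like this, v1 users would keep reading
blkio.throttle.io_serviced and blkio.throttle.io_service_bytes, and io.stat
would stay a v2-only interface.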

