Date:   Sat, 24 Jul 2021 09:12:40 +0200
From:   Paolo Valente <paolo.valente@...aro.org>
To:     Yu Kuai <yukuai3@...wei.com>
Cc:     Jens Axboe <axboe@...nel.dk>, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, yi.zhang@...wei.com
Subject: Re: [PATCH 1/3] block, bfq: do not idle if only one cgroup is
 activated



> On 14 Jul 2021, at 11:45, Yu Kuai <yukuai3@...wei.com> wrote:
> 
> If only one group is activated, that is,
> 'bfqd->num_groups_with_pending_reqs == 1', there is no need to
> guarantee the same share of the throughput for queues in the same
> group.
> 
> Thus change the condition from '> 0' to '> 1' in
> bfq_asymmetric_scenario().

I see your point, and I agree with your goal.  Yet, your change seems
to suffer from the following problem.

In addition to the groups that are created explicitly, there is the
implicit root group.  So, when bfqd->num_groups_with_pending_reqs ==
1, there may be both active processes in the root group and active
processes in the only group created explicitly.  In this case, idling
is needed to preserve service guarantees.

Your idea should probably be refined by making sure, in addition, that
the pending I/O comes only from either the root group or the
explicitly created group, not from both.
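
To make this concrete, here is a (completely untested) sketch of the
kind of check I have in mind.  It relies on a hypothetical helper,
bfq_root_group_has_pending_reqs(), which would report whether any
queue attached directly to the root group still has pending I/O; no
such helper exists in the current code:

#ifdef CONFIG_BFQ_GROUP_IOSCHED
	/* Two or more explicit groups: the scenario is asymmetric. */
	if (bfqd->num_groups_with_pending_reqs > 1)
		return true;

	/*
	 * Exactly one explicit group: the scenario is still
	 * asymmetric if the root group has pending I/O too.
	 * (bfq_root_group_has_pending_reqs() is hypothetical.)
	 */
	if (bfqd->num_groups_with_pending_reqs == 1 &&
	    bfq_root_group_has_pending_reqs(bfqd))
		return true;
#endif

Of course, maintaining that extra piece of state cheaply is the hard
part.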

Thanks,
Paolo

> By the way, if 'num_groups_with_pending_reqs' is greater than 1,
> there is no need to check 'varied_queue_weights' or
> 'multiple_classes_busy', so that check is moved to the beginning of
> the function.
> 
> Test procedure:
> run "fio -numjobs=1 -ioengine=psync -bs=4k -direct=1 -rw=randread..." multiple
> times in the same cgroup (not the root group).
> 
> Test result: total bandwidth (MiB/s)
> | total jobs | before this patch | after this patch      |
> | ---------- | ----------------- | --------------------- |
> | 1          | 33.8              | 33.8                  |
> | 2          | 33.8              | 65.4 (32.7 each job)  |
> | 4          | 33.8              | 106.8 (26.7 each job) |
> | 8          | 33.8              | 126.4 (15.8 each job) |
> 
> By the way, if I test with "fio -numjobs=1/2/4/8 ...", the results
> are the same with or without this patch.
> 
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> ---
> block/bfq-iosched.c | 25 ++++++++++++++++---------
> 1 file changed, 16 insertions(+), 9 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 727955918563..2768a4c1cc45 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -709,7 +709,9 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
>  * much easier to maintain the needed state:
>  * 1) all active queues have the same weight,
>  * 2) all active queues belong to the same I/O-priority class,
> - * 3) there are no active groups.
> + * 3) there is at most one active group.
> + * When only one group is active, there is no need to guarantee the
> + * same share of the throughput for queues in the same group.
>  * In particular, the last condition is always true if hierarchical
>  * support or the cgroups interface are not enabled, thus no state
>  * needs to be maintained in this case.
> @@ -717,7 +719,16 @@ bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
> static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
> 				   struct bfq_queue *bfqq)
> {
> -	bool smallest_weight = bfqq &&
> +	bool smallest_weight;
> +	bool varied_queue_weights;
> +	bool multiple_classes_busy;
> +
> +#ifdef CONFIG_BFQ_GROUP_IOSCHED
> +	if (bfqd->num_groups_with_pending_reqs > 1)
> +		return true;
> +#endif
> +
> +	smallest_weight = bfqq &&
> 		bfqq->weight_counter &&
> 		bfqq->weight_counter ==
> 		container_of(
> @@ -729,21 +740,17 @@ static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
> 	 * For queue weights to differ, queue_weights_tree must contain
> 	 * at least two nodes.
> 	 */
> -	bool varied_queue_weights = !smallest_weight &&
> +	varied_queue_weights = !smallest_weight &&
> 		!RB_EMPTY_ROOT(&bfqd->queue_weights_tree.rb_root) &&
> 		(bfqd->queue_weights_tree.rb_root.rb_node->rb_left ||
> 		 bfqd->queue_weights_tree.rb_root.rb_node->rb_right);
> 
> -	bool multiple_classes_busy =
> +	multiple_classes_busy =
> 		(bfqd->busy_queues[0] && bfqd->busy_queues[1]) ||
> 		(bfqd->busy_queues[0] && bfqd->busy_queues[2]) ||
> 		(bfqd->busy_queues[1] && bfqd->busy_queues[2]);
> 
> -	return varied_queue_weights || multiple_classes_busy
> -#ifdef CONFIG_BFQ_GROUP_IOSCHED
> -	       || bfqd->num_groups_with_pending_reqs > 0
> -#endif
> -		;
> +	return varied_queue_weights || multiple_classes_busy;
> }
> 
> /*
> -- 
> 2.31.1
> 
