Date:   Mon, 1 Feb 2021 15:32:29 +0800
From:   Lin Feng <linf@...gsu.com>
To:     axboe@...nel.dk, paolo.valente@...aro.org, jack@...e.cz
Cc:     linux-kernel@...r.kernel.org, linux-block@...r.kernel.org
Subject: Re: [PATCH] Revert "bfq: Fix computation of shallow depth"

Hi, it seems that this patch was blocked by the linux mailing list servers, so pinging again.

Based on https://patchwork.kernel.org/project/linux-block/patch/20201210094433.25491-1-jack@suse.cz/,
it looks like we have reached a consensus about changing bfqd->word_depths[2][2], so the
computation code for bfq's word_depths array is no longer necessary and a single variable is enough.

But IMHO an async depth limitation for slow drives is essential; it is what we always had in the CFQ era.
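
To make the numbers concrete, here is a minimal userspace sketch (not kernel
code; 'depth' and 'shift' just mimic the sbitmap fields from the example in
the quoted message below) contrasting the patched formula, which scales by the
whole sbitmap depth, with the reverted per-word formula that matches the
percentages in the comments:

#include <stdio.h>

int main(void)
{
	unsigned int depth = 256, shift = 6;
	unsigned int word_depth = 1U << shift;	/* 64 bits per word */

	/* Formula from the patch being reverted: scales by total depth. */
	unsigned int patched[4] = {
		depth >> 1,		/* 128 */
		(depth * 3) >> 2,	/* 192 */
		(depth * 3) >> 4,	/*  48 */
		(depth * 6) >> 4,	/*  96 */
	};

	/* Reverted per-word formula, matching the comments' percentages. */
	unsigned int reverted[4] = {
		word_depth >> 1,	/* 32 -> 50% of a word */
		(word_depth * 3) >> 2,	/* 48 -> 75% */
		(word_depth * 3) >> 4,	/* 12 -> ~18% */
		(word_depth * 6) >> 4,	/* 24 -> ~37% */
	};

	for (int i = 0; i < 4; i++)
		printf("patched %3u = %5.1f%% of a word, reverted %2u = %5.1f%%\n",
		       patched[i], patched[i] * 100.0 / word_depth,
		       reverted[i], reverted[i] * 100.0 / word_depth);
	return 0;
}

Three of the four patched values come out above 100% of a word, which is the
"limit nothing" situation described below.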

On 1/29/21 19:18, Lin Feng wrote:
> This reverts commit 6d4d273588378c65915acaf7b2ee74e9dd9c130a.
> 
> bfq.limit_depth passes word_depths[] as shallow_depth down to the sbitmap core
> function sbitmap_get_shallow, which uses just that number to limit the scan depth
> of each bitmap word, via the formula:
> scan_percentage_for_each_word = shallow_depth / (1 << sbitmap->shift) * 100%
> 
> That means the percentages 50%, 75%, 18%, 37% in bfq's comments are correct.
> But after the commit 'bfq: Fix computation of shallow depth', we use
> sbitmap.depth instead. As an example, consider the following case:
> 
> sbitmap.depth = 256, map_nr = 4, shift = 6; sbitmap_word.depth = 64.
> The results of the computed bfqd->word_depths[] are {128, 192, 48, 96}; three
> of those numbers exceed the core driver's 'sbitmap_word.depth=64' and so limit
> nothing. Do we really not want to limit depth for such workloads, or do we just
> want to bump the percentages up to 100%?
> 
> Please correct me if I missed something, thanks.
> 
> Signed-off-by: Lin Feng <linf@...gsu.com>
> ---
>   block/bfq-iosched.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
> index 9e4eb0fc1c16..9e81d1052091 100644
> --- a/block/bfq-iosched.c
> +++ b/block/bfq-iosched.c
> @@ -6332,13 +6332,13 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
>   	 * limit 'something'.
>   	 */
>   	/* no more than 50% of tags for async I/O */
> -	bfqd->word_depths[0][0] = max(bt->sb.depth >> 1, 1U);
> +	bfqd->word_depths[0][0] = max((1U << bt->sb.shift) >> 1, 1U);
>   	/*
>   	 * no more than 75% of tags for sync writes (25% extra tags
>   	 * w.r.t. async I/O, to prevent async I/O from starving sync
>   	 * writes)
>   	 */
> -	bfqd->word_depths[0][1] = max((bt->sb.depth * 3) >> 2, 1U);
> +	bfqd->word_depths[0][1] = max(((1U << bt->sb.shift) * 3) >> 2, 1U);
>   
>   	/*
>   	 * In-word depths in case some bfq_queue is being weight-
> @@ -6348,9 +6348,9 @@ static unsigned int bfq_update_depths(struct bfq_data *bfqd,
>   	 * shortage.
>   	 */
>   	/* no more than ~18% of tags for async I/O */
> -	bfqd->word_depths[1][0] = max((bt->sb.depth * 3) >> 4, 1U);
> +	bfqd->word_depths[1][0] = max(((1U << bt->sb.shift) * 3) >> 4, 1U);
>   	/* no more than ~37% of tags for sync writes (~20% extra tags) */
> -	bfqd->word_depths[1][1] = max((bt->sb.depth * 6) >> 4, 1U);
> +	bfqd->word_depths[1][1] = max(((1U << bt->sb.shift) * 6) >> 4, 1U);
>   
>   	for (i = 0; i < 2; i++)
>   		for (j = 0; j < 2; j++)
> 
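
Referring back to the scan-depth formula in the quoted message: per my reading
of the sbitmap code (this is a rough sketch, not the actual kernel source),
each bitmap word is scanned over at most min(word_depth, shallow_depth) bits,
so any shallow_depth at or above the word depth limits nothing:

static unsigned int effective_scan_depth(unsigned int word_depth,
					 unsigned int shallow_depth)
{
	return word_depth < shallow_depth ? word_depth : shallow_depth;
}

With word_depth = 64 as in the example, the patched values 128, 192 and 96 all
clamp to 64, and only the 48 ("~18%") entry actually limits anything.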
