Date:	Tue, 30 Nov 2010 09:57:12 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Hillf Danton <dhillf@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] maximize dispatching in block throttle

On Fri, Nov 26, 2010 at 10:46:01PM +0800, Hillf Danton wrote:
> When dispatching bios, the quantum is divided into read and write
> budgets; write dispatch cannot exceed the write budget even if the
> read budget is not exhausted, and likewise for read dispatch.
> 
> This patch changes the dispatch logic to exhaust the full quantum
> whenever possible.
> 
> Though it is not obvious why a 50/50 division was not chosen, the
> particular division should matter little once dispatch is allowed to
> use as much of the quantum as possible.
> 
> Signed-off-by: Hillf Danton <dhillf@...il.com>
> ---

Hi Hillf,

Even if there are not enough READs/WRITEs to consume the quantum, I don't
think this changes anything much. The next dispatch round will be
scheduled almost immediately (if there are bios ready to be
dispatched). Look at throtl_schedule_next_dispatch().
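
To illustrate that point with a toy model (this is not the kernel code;
the struct and field names below are invented for the sketch): the next
round is armed for the group's next allowed dispatch time, which is
already in the past whenever ready bios remain queued, so leftover
quantum costs at most one nearly back-to-back round.

#include <stdio.h>

/*
 * Toy model of the scheduling argument, not throtl_schedule_next_dispatch()
 * itself: each group records the earliest time its next queued bio may be
 * dispatched, and the dispatch work is armed for that time.
 */
struct group {
	int has_queued_bios;
	unsigned long disptime;		/* jiffies-like timestamp */
};

static unsigned long next_dispatch_delay(const struct group *tg,
					 unsigned long now)
{
	if (!tg->has_queued_bios)
		return ~0UL;			/* nothing to arm */
	if (tg->disptime <= now)
		return 0;			/* ready bios: run again at once */
	return tg->disptime - now;		/* else wait for the limit */
}

int main(void)
{
	struct group ready = { 1, 100 }, limited = { 1, 350 };
	unsigned long now = 200;

	/* Ready within the rate limit: the next round follows immediately. */
	printf("ready group:   delay %lu\n", next_dispatch_delay(&ready, now));
	/* Over the limit: the wait comes from the limit, not from the quantum. */
	printf("limited group: delay %lu\n", next_dispatch_delay(&limited, now));
	return 0;
}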

Have you noticed some issues/improvements with this patch?

Generally, READs are more latency sensitive than WRITEs, hence my
choice to dispatch more READs per quantum.
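
To make the split concrete: with a per-group quantum of 8 (a value
assumed here just for the example), the intended 75/25 split gives
max_nr_reads = 8*3/4 = 6 and max_nr_writes = 2, so a round with only
WRITEs queued dispatches at most 2 bios, while your patch would let it
use all 8 slots. A minimal userspace sketch of the two behaviours
(GRP_QUANTUM and dispatch_round are made-up names, not kernel symbols):

#include <stdio.h>

/* Per-group quantum assumed for the example. */
#define GRP_QUANTUM 8

/*
 * Simulate one dispatch round.  reads_ready/writes_ready are the bios of
 * each direction that are both queued and allowed by the throttle limits
 * in this round; 'redistribute' models the proposed behaviour of handing
 * unused budget to the other direction.
 */
static unsigned int dispatch_round(unsigned int reads_ready,
				   unsigned int writes_ready,
				   int redistribute)
{
	unsigned int max_nr_reads = GRP_QUANTUM * 3 / 4;	 /* 6 */
	unsigned int max_nr_writes = GRP_QUANTUM - max_nr_reads; /* 2 */
	unsigned int nr_reads, nr_writes, left;

	nr_reads = reads_ready < max_nr_reads ? reads_ready : max_nr_reads;
	nr_writes = writes_ready < max_nr_writes ? writes_ready : max_nr_writes;

	if (redistribute) {
		unsigned int extra_w = writes_ready - nr_writes;
		unsigned int extra_r = reads_ready - nr_reads;

		left = GRP_QUANTUM - nr_reads - nr_writes;
		nr_writes += extra_w < left ? extra_w : left;
		left = GRP_QUANTUM - nr_reads - nr_writes;
		nr_reads += extra_r < left ? extra_r : left;
	}

	return nr_reads + nr_writes;
}

int main(void)
{
	/* Only writes ready: the fixed split caps the round at 2 bios ... */
	printf("writes only, fixed split:   %u\n", dispatch_round(0, 8, 0));
	/* ... while exhausting the quantum dispatches all 8. */
	printf("writes only, redistributed: %u\n", dispatch_round(0, 8, 1));
	return 0;
}

Running it prints 2 for the fixed split and 8 with redistribution; the
question is whether that gap is visible in practice, given that the next
round is rescheduled almost immediately anyway.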

Thanks
Vivek

> 
> --- a/block/blk-throttle.c	2010-11-01 19:54:12.000000000 +0800
> +++ b/block/blk-throttle.c	2010-11-26 21:49:00.000000000 +0800
> @@ -647,11 +647,16 @@ static int throtl_dispatch_tg(struct thr
>  	unsigned int max_nr_reads = throtl_grp_quantum*3/4;
>  	unsigned int max_nr_writes = throtl_grp_quantum - nr_reads;
>  	struct bio *bio;
> +	int read_throttled = 0, write_throttled = 0;
> 
>  	/* Try to dispatch 75% READS and 25% WRITES */
> -
> + try_read:
>  	while ((bio = bio_list_peek(&tg->bio_lists[READ]))
> -		&& tg_may_dispatch(td, tg, bio, NULL)) {
> +		&& ! read_throttled) {
> +		if (! tg_may_dispatch(td, tg, bio, NULL)) {
> +			read_throttled = 1;
> +			break;
> +		}
> 
>  		tg_dispatch_one_bio(td, tg, bio_data_dir(bio), bl);
>  		nr_reads++;
> @@ -659,9 +664,15 @@ static int throtl_dispatch_tg(struct thr
>  		if (nr_reads >= max_nr_reads)
>  			break;
>  	}
> -
> +	if (! bio)
> +		read_throttled = 1;
> + try_write:
>  	while ((bio = bio_list_peek(&tg->bio_lists[WRITE]))
> -		&& tg_may_dispatch(td, tg, bio, NULL)) {
> +		&& ! write_throttled) {
> +		if (! tg_may_dispatch(td, tg, bio, NULL)) {
> +			write_throttled = 1;
> +			break;
> +		}
> 
>  		tg_dispatch_one_bio(td, tg, bio_data_dir(bio), bl);
>  		nr_writes++;
> @@ -669,7 +680,23 @@ static int throtl_dispatch_tg(struct thr
>  		if (nr_writes >= max_nr_writes)
>  			break;
>  	}
> +	if (! bio)
> +		write_throttled = 1;
> +
> +	if (write_throttled && read_throttled)
> +		goto out;
> 
> +	if (! (throtl_grp_quantum > nr_writes + nr_reads))
> +		goto out;
> +		
> +	if (read_throttled) {
> +		max_nr_writes = throtl_grp_quantum - nr_reads;
> +		goto try_write;
> +	} else {
> +		max_nr_reads = throtl_grp_quantum - nr_writes;
> +		goto try_read;
> +	}
> + out:
>  	return nr_reads + nr_writes;
>  }
