Message-ID: <20110202215152.GC12559@redhat.com>
Date:	Wed, 2 Feb 2011 16:51:52 -0500
From:	Vivek Goyal <vgoyal@...hat.com>
To:	Mike Snitzer <snitzer@...hat.com>
Cc:	Tejun Heo <tj@...nel.org>, Jens Axboe <jaxboe@...ionio.com>,
	tytso@....edu, djwong@...ibm.com, shli@...nel.org, neilb@...e.de,
	adilger.kernel@...ger.ca, jack@...e.cz,
	linux-kernel@...r.kernel.org, kmannth@...ibm.com, cmm@...ibm.com,
	linux-ext4@...r.kernel.org, rwheeler@...hat.com, hch@....de,
	josef@...hat.com, jmoyer@...hat.com
Subject: Re: [PATCH v2 1/2] block: skip elevator initialization for flush
 requests

On Tue, Feb 01, 2011 at 05:46:12PM -0500, Mike Snitzer wrote:
> Skip elevator initialization during request allocation if REQ_SORTED
> is not set in the @rw_flags passed to the request allocator.
> 
> Set REQ_SORTED for all requests that may be put on IO scheduler.  Flush
> requests are not put on IO scheduler so REQ_SORTED is not set for
> them.

So we are doing all this so that elevator private data and flush data can
share the space through a union, and we avoid increasing the size
of struct request by 1 pointer (4 or 8 bytes depending on arch)?

Looks good to me. One minor comment inline.

Acked-by: Vivek Goyal <vgoyal@...hat.com>

Vivek

> 
> Signed-off-by: Mike Snitzer <snitzer@...hat.com>
> ---
>  block/blk-core.c |   24 +++++++++++++++++++-----
>  1 files changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 72dd23b..f6fcc64 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -764,7 +764,7 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
>  	struct request_list *rl = &q->rq;
>  	struct io_context *ioc = NULL;
>  	const bool is_sync = rw_is_sync(rw_flags) != 0;
> -	int may_queue, priv;
> +	int may_queue, priv = 0;
>  
>  	may_queue = elv_may_queue(q, rw_flags);
>  	if (may_queue == ELV_MQUEUE_NO)
> @@ -808,9 +808,14 @@ static struct request *get_request(struct request_queue *q, int rw_flags,
>  	rl->count[is_sync]++;
>  	rl->starved[is_sync] = 0;
>  
> -	priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
> -	if (priv)
> -		rl->elvpriv++;
> +	/*
> +	 * Only initialize elevator data if REQ_SORTED is set.
> +	 */
> +	if (rw_flags & REQ_SORTED) {
> +		priv = !test_bit(QUEUE_FLAG_ELVSWITCH, &q->queue_flags);
> +		if (priv)
> +			rl->elvpriv++;
> +	}
>  
>  	if (blk_queue_io_stat(q))
>  		rw_flags |= REQ_IO_STAT;
> @@ -1197,6 +1202,7 @@ static int __make_request(struct request_queue *q, struct bio *bio)
>  	const unsigned short prio = bio_prio(bio);
>  	const bool sync = !!(bio->bi_rw & REQ_SYNC);
>  	const bool unplug = !!(bio->bi_rw & REQ_UNPLUG);
> +	const bool flush = !!(bio->bi_rw & (REQ_FLUSH | REQ_FUA));
>  	const unsigned long ff = bio->bi_rw & REQ_FAILFAST_MASK;
>  	int where = ELEVATOR_INSERT_SORT;
>  	int rw_flags;
> @@ -1210,7 +1216,7 @@ static int __make_request(struct request_queue *q, struct bio *bio)
>  
>  	spin_lock_irq(q->queue_lock);
>  
> -	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
> +	if (flush) {
>  		where = ELEVATOR_INSERT_FLUSH;
>  		goto get_rq;
>  	}
> @@ -1293,6 +1299,14 @@ get_rq:
>  		rw_flags |= REQ_SYNC;
>  
>  	/*
> +	 * Set REQ_SORTED for all requests that may be put on IO scheduler.
> +	 * The request allocator's IO scheduler initialization will be skipped
> +	 * if REQ_SORTED is not set.
> +	 */

Do you want to mention here why we want to avoid IO scheduler
initialization? Specifically, mention that set_request() is avoided so
that elevator_private[*] is not initialized and that space can be
used by the flush request data.

> +	if (!flush)
> +		rw_flags |= REQ_SORTED;
> +
> +	/*
>  	 * Grab a free request. This is might sleep but can not fail.
>  	 * Returns with the queue unlocked.
>  	 */
> -- 
> 1.7.3.4
