Message-ID: <20100901135120.GA25251@redhat.com>
Date:	Wed, 1 Sep 2010 09:51:21 -0400
From:	Mike Snitzer <snitzer@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	jaxboe@...ionio.com, k-ueda@...jp.nec.com, j-nomura@...jp.nec.com,
	jamie@...reable.org, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-raid@...r.kernel.org,
	hch@....de, dm-devel@...hat.com
Subject: Re: [PATCH 3/5] dm: relax ordering of bio-based flush implementation

On Mon, Aug 30 2010 at  5:58am -0400,
Tejun Heo <tj@...nel.org> wrote:

> Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA doesn't mandate any ordering
> against other bio's.  This patch relaxes ordering around flushes.
> 
> * A flush bio is no longer deferred directly to the workqueue.  It's
>   processed like other bio's, but __split_and_process_bio() uses
>   md->flush_bio as the clone source.  md->flush_bio is initialized as
>   an empty flush during md initialization and shared for all flushes.
> 
> * When dec_pending() detects that a flush has completed, it checks
>   whether the original bio has data.  If so, the bio is queued to the
>   deferred list w/ REQ_FLUSH cleared; otherwise, it's completed.
> 
> * As flush sequencing is handled in the usual issue/completion path,
>   dm_wq_work() no longer needs to handle flushes differently.  Now its
>   only responsibility is re-issuing deferred bio's the same way as
>   _dm_request() would.  REQ_FLUSH handling logic, including
>   process_flush(), is dropped.
> 
> * There's no reason for queue_io() and dm_wq_work() to write-lock
>   md->io_lock.  queue_io() now only uses md->deferred_lock, and
>   dm_wq_work() only read-locks md->io_lock.
> 
> * bio's no longer need to be queued on the deferred list while a flush
>   is in progress, making DMF_QUEUE_IO_TO_THREAD unnecessary.  Drop it.
> 
> This avoids stalling the device during flushes and simplifies the
> implementation.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>

Looks good overall.
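
Just to confirm I'm reading the new flow correctly, here is a rough
pseudo-code sketch of my understanding (not the actual diff; helper and
field names are approximate):

	/* __split_and_process_bio(): a flush is now cloned from the
	 * preallocated, empty md->flush_bio instead of being deferred
	 * to the workqueue */
	if (bio->bi_rw & REQ_FLUSH) {
		ci.bio = &md->flush_bio;	/* shared empty flush */
		ci.sector_count = 0;
	} else {
		ci.bio = bio;
		ci.sector_count = bio_sectors(bio);
	}

	/* dec_pending(): once the flush itself completes, any data
	 * payload is re-queued to the deferred list with REQ_FLUSH
	 * cleared; a dataless flush is completed directly */
	if ((bio->bi_rw & REQ_FLUSH) && bio_has_data(bio)) {
		bio->bi_rw &= ~REQ_FLUSH;
		queue_io(md, bio);
	} else
		bio_endio(bio, error);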

> @@ -144,11 +143,6 @@ struct mapped_device {
>  	spinlock_t deferred_lock;
>  
>  	/*
> -	 * An error from the flush request currently being processed.
> -	 */
> -	int flush_error;
> -
> -	/*
>  	 * Protect barrier_error from concurrent endio processing
>  	 * in request-based dm.
>  	 */

Could you please document in the patch header why it is OK to remove
'flush_error'?  The -EOPNOTSUPP handling removal (done in patch 2)
obviously helps enable this, but it is not clear why the
'num_flush_requests' flushes that __clone_and_map_flush() generates do
not need explicit DM error handling.
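
My assumption is that any error from the individual flush clones now
just flows back through the normal per-io accounting and is surfaced by
dec_pending(), roughly (again pseudo-code, field names from memory):

	/* clone endio eventually calls dec_pending(io, error) */
	if (error)
		io->error = error;	/* record it for the original bio */
	if (atomic_dec_and_test(&io->io_count))
		bio_endio(io->bio, io->error);	/* or requeue the data part */

If that is indeed why the dedicated 'flush_error' field can go away, a
sentence to that effect in the header would be enough.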

Other than that:

Acked-by: Mike Snitzer <snitzer@...hat.com>
