Message-ID: <20151103005113.GN10656@dastard>
Date:	Tue, 3 Nov 2015 11:51:13 +1100
From:	Dave Chinner <david@...morbit.com>
To:	Dan Williams <dan.j.williams@...el.com>
Cc:	axboe@...com, jack@...e.cz, linux-nvdimm@...ts.01.org,
	linux-kernel@...r.kernel.org, Jeff Moyer <jmoyer@...hat.com>,
	Jan Kara <jack@...e.com>, ross.zwisler@...ux.intel.com,
	hch@....de
Subject: Re: [PATCH v3 02/15] dax: increase granularity of dax_clear_blocks()
 operations

On Sun, Nov 01, 2015 at 11:29:53PM -0500, Dan Williams wrote:
> dax_clear_blocks is currently performing a cond_resched() after every
> PAGE_SIZE memset.  We need not check so frequently, for example md-raid
> only calls cond_resched() at stripe granularity.  Also, in preparation
> for introducing a dax_map_atomic() operation that temporarily pins a dax
> mapping, move the call to cond_resched() to the outer loop.
> 
> Reviewed-by: Jan Kara <jack@...e.com>
> Reviewed-by: Jeff Moyer <jmoyer@...hat.com>
> Signed-off-by: Dan Williams <dan.j.williams@...el.com>
> ---
>  fs/dax.c |   27 ++++++++++++---------------
>  1 file changed, 12 insertions(+), 15 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 5dc33d788d50..f8e543839e5c 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -28,6 +28,7 @@
>  #include <linux/sched.h>
>  #include <linux/uio.h>
>  #include <linux/vmstat.h>
> +#include <linux/sizes.h>
>  
>  int dax_clear_blocks(struct inode *inode, sector_t block, long size)
>  {
> @@ -38,24 +39,20 @@ int dax_clear_blocks(struct inode *inode, sector_t block, long size)
>  	do {
>  		void __pmem *addr;
>  		unsigned long pfn;
> -		long count;
> +		long count, sz;
>  
> -		count = bdev_direct_access(bdev, sector, &addr, &pfn, size);
> +		sz = min_t(long, size, SZ_1M);
> +		count = bdev_direct_access(bdev, sector, &addr, &pfn, sz);
>  		if (count < 0)
>  			return count;
> -		BUG_ON(size < count);
> -		while (count > 0) {
> -			unsigned pgsz = PAGE_SIZE - offset_in_page(addr);
> -			if (pgsz > count)
> -				pgsz = count;
> -			clear_pmem(addr, pgsz);
> -			addr += pgsz;
> -			size -= pgsz;
> -			count -= pgsz;
> -			BUG_ON(pgsz & 511);
> -			sector += pgsz / 512;
> -			cond_resched();
> -		}
> +		if (count < sz)
> +			sz = count;
> +		clear_pmem(addr, sz);
> +		addr += sz;
> +		size -= sz;
> +		BUG_ON(sz & 511);
> +		sector += sz / 512;
> +		cond_resched();
>  	} while (size);
>  
>  	wmb_pmem();
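
The restructured loop in the patch can be simulated in plain userspace C
(my sketch, not kernel code: memset() stands in for clear_pmem(), a
counter stands in for cond_resched(), and bdev_direct_access() is modelled
as always mapping the full requested length):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SZ_1M (1UL << 20)

static unsigned long resched_calls;	/* cond_resched() stand-in */

/* Zero `size` bytes starting at `addr`, at most 1MiB per iteration,
 * "rescheduling" once per chunk instead of once per page. */
static void clear_blocks_sim(char *addr, long size)
{
	do {
		long sz = size < (long)SZ_1M ? size : (long)SZ_1M;

		memset(addr, 0, sz);	/* clear_pmem() analogue */
		addr += sz;
		size -= sz;
		resched_calls++;	/* cond_resched() analogue */
	} while (size);
}
```

Clearing 3MiB + 512 bytes takes four iterations (three 1MiB chunks plus a
512-byte tail), so the scheduler check fires four times rather than once
per page.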

dax_clear_blocks() needs to go away and be replaced by a driver-level
implementation of blkdev_issue_zeroout(). This is effectively a
block device operation (we're taking sector addresses and zeroing
them), so it really belongs in the pmem drivers rather than the DAX
code.

I suspect a REQ_WRITE_SAME implementation is the way to go here, as
then the filesystems can just call sb_issue_zeroout() and the block
layer zeroing will work on all types of storage without the
filesystem having to care whether DAX is in use or not.
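
From the filesystem side, that call pattern might look roughly like the
userspace mock below (my construction: the real sb_issue_zeroout() takes a
struct super_block and a gfp mask, but the block-to-sector conversion it
performs is simulated here against a flat buffer, with the zeroing itself
standing in for whatever the pmem driver implements underneath):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t sector_t;

/* Mock "device": 1MiB of storage addressed in 512-byte sectors. */
static unsigned char disk[1 << 20];

/* Stand-in for blkdev_issue_zeroout(): zero nr_sects sectors starting
 * at `sector`.  In the kernel this would be the driver's job (e.g. via
 * REQ_WRITE_SAME), so the caller never cares whether DAX is in use. */
static int issue_zeroout_sim(sector_t sector, sector_t nr_sects)
{
	memset(disk + sector * 512, 0, nr_sects * 512);
	return 0;
}

/* sb_issue_zeroout() analogue: convert filesystem blocks to 512-byte
 * sectors, assuming a 4KiB block size (blkbits = 12). */
static int sb_issue_zeroout_sim(sector_t block, sector_t nr_blocks)
{
	int blkbits = 12;

	return issue_zeroout_sim(block << (blkbits - 9),
				 nr_blocks << (blkbits - 9));
}
```

Zeroing blocks 1-2 touches exactly bytes 4096-12287 of the mock device,
leaving the neighbouring blocks untouched.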

Putting the implementation of the zeroing in the pmem drivers will
enable the drivers to optimise the caching behaviour of block
zeroing.  The synchronous cache flushing behaviour of this function
is a performance killer: we now zero blocks at allocation time,
which results in two synchronous data writes per page (zero on
alloc, commit; write data, commit).

The zeroing (and the data, for that matter) doesn't need to be
committed to persistent store until the allocation is written and
committed to the journal - that will happen with a REQ_FLUSH|REQ_FUA
write, so it makes sense to deploy the big hammer and delay the
blocking CPU cache flushes until the last possible moment in cases
like this.
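
The cost difference can be illustrated with a toy counting model (my
sketch, not kernel code: each increment stands in for one blocking CPU
cache flush that makes pmem data durable):

```c
#include <assert.h>

static unsigned long flush_count;	/* blocking cache flushes issued */

/* Eager strategy: flush after every page zeroed, as synchronous
 * zero-on-allocation does today. */
static void zero_pages_eager(int npages)
{
	for (int i = 0; i < npages; i++)
		flush_count++;		/* one blocking flush per page */
}

/* Deferred strategy: zeroes stay in the CPU cache, and a single
 * REQ_FLUSH|REQ_FUA-style flush is issued when the allocating
 * transaction commits to the journal. */
static void zero_pages_deferred(int npages)
{
	(void)npages;			/* zeroing happens, but stays cached */
	flush_count++;			/* one flush at commit time */
}
```

For a 64-page allocation the eager model issues 64 blocking flushes where
the deferred model issues one, which is the "big hammer at the last
possible moment" trade-off being argued for.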

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com