Message-ID: <579F20D9.80107@plexistor.com>
Date:	Mon, 01 Aug 2016 13:13:45 +0300
From:	Boaz Harrosh <boaz@...xistor.com>
To:	Dave Chinner <david@...morbit.com>,
	Dan Williams <dan.j.williams@...el.com>
CC:	Jan Kara <jack@...e.cz>,
	"linux-nvdimm@...ts.01.org" <linux-nvdimm@...ts.01.org>,
	XFS Developers <xfs@....sgi.com>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: Subtle races between DAX mmap fault and write path

On 07/30/2016 03:12 AM, Dave Chinner wrote:
<>
> 
> If we track the dirty blocks from write in the radix tree like we
> for mmap, then we can just use a normal memcpy() in dax_do_io(),
> getting rid of the slow cache bypass that is currently run. Radix
> tree updates are much less expensive than a slow memcpy of large
> amounts of data, and fsync can then take care of persistence, just
> like we do for mmap.
> 

No! 

The mov_nt instructions, that "slow cache bypass that is currently run" above,
are actually faster than cached writes by 20%, and if you add the dirty
tracking and clflush instructions it becomes 2x slower in the most
optimal case and 3 times slower in the DAX case.
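
To make the comparison concrete, here is a minimal userspace sketch
(assuming x86-64 and the <immintrin.h> intrinsics; the function names are
illustrative, this is not the kernel code path) of the two copy strategies:
a non-temporal movnt copy, and a cached memcpy that then has to clflush
every line it touched.

#include <immintrin.h>
#include <stddef.h>
#include <string.h>

/* Non-temporal copy: movnt stores bypass the CPU cache, so only a final
 * sfence is needed to order the stores; no per-line flushing.
 * Assumes dst is 16-byte aligned and len is a multiple of 16. */
static void copy_nt(void *dst, const void *src, size_t len)
{
	const __m128i *s = src;
	__m128i *d = dst;
	size_t i;

	for (i = 0; i < len / sizeof(__m128i); i++)
		_mm_stream_si128(&d[i], _mm_loadu_si128(&s[i]));
	_mm_sfence();
}

/* Cached copy plus explicit flush: the memcpy lands in the CPU cache, so
 * every touched cache line must be flushed afterwards to reach the media. */
static void copy_cached_flush(void *dst, const void *src, size_t len)
{
	size_t off;

	memcpy(dst, src, len);
	for (off = 0; off < len; off += 64)	/* 64 = cache line size */
		_mm_clflush((char *)dst + off);
	_mm_sfence();
}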

The network guys had noticed the mov_nt instructions' superior performance
for years before we pushed DAX into the tree. Look for users of copy_from_iter_nocache
and the comments from when they were introduced; those were used before DAX and
had nothing at all to do with persistence.
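
For reference, a hedged sketch of how a pmem write path consumes user data
through that helper (illustrative only, not the actual dax_do_io() code;
the function name pmem_copy_write is made up):

#include <linux/uio.h>

/* Copy user data into a pmem mapping with the non-temporal helper, so on
 * x86 the copy bypasses the CPU cache on its way to the mapping. */
static size_t pmem_copy_write(void *pmem_addr, size_t len, struct iov_iter *from)
{
	return copy_from_iter_nocache(pmem_addr, len, from);
}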

So what you are suggesting is fine, only it is 3 times slower than the
current implementation.

> We should just make the design assumption that all persistent memory
> is volatile, track where we dirty it in all paths, and use the
> fastest volatile memcpy primitives available to us in the IO path.

The "fastest volatile memcpy primitives available" is what we do
today with the mov_nt instructions.

> We'll end up with a faster fastpath that if we use CPU cache bypass
> copies, dax_do_io() and mmap will be coherent and synchronised, and
> fsync() will have the same requirements and overhead regardless of
> the way the application modifies the pmem or the hardware platform
> used to implement the pmem.
> 

I measured; there are tests running in our labs every night. Your
suggestion, on an ADR system, is 3 times slower to reach persistence.

This is why I was pushing for MMAP_PMEM_AWARE: a smart mmap application in
user-mode uses mov_nt anyway, because it wants that 20% gain regardless
of what the kernel will do. Then it calls fsync() and the kernel burns
2x more CPU, just for the sake of burning CPU, because the data is already
persistent from the get-go.
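
A hedged userspace sketch of that pattern (the file path and size are made
up, and MMAP_PMEM_AWARE itself is only the proposal under discussion, so
nothing here uses it): map a DAX file, write with a non-temporal store,
and fence. On an ADR platform the data is then already in the persistence
domain before fsync() is ever called.

#include <fcntl.h>
#include <immintrin.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/dax/file", O_RDWR);	/* file on a DAX-mounted fs */
	long long *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			    MAP_SHARED, fd, 0);

	_mm_stream_si64(p, 42);	/* movnti: store bypasses the CPU cache */
	_mm_sfence();		/* order/drain the non-temporal store */
	/* On ADR hardware the store is now persistent; the fsync() below
	 * only makes the kernel burn CPU flushing caches that hold nothing
	 * dirty for this data. */
	fsync(fd);
	return 0;
}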

> Cheers,
> Dave.

Like you, I do not care for DAX very much, but please let's keep the
physical facts straight.

Cheers indeed
Boaz

