Message-ID: <20170104164051.GA32600@linux.intel.com>
Date:   Wed, 4 Jan 2017 09:40:51 -0700
From:   Ross Zwisler <ross.zwisler@...ux.intel.com>
To:     Xiong Zhou <xzhou@...hat.com>
Cc:     Ross Zwisler <ross.zwisler@...ux.intel.com>,
        Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
        linux-nvdimm@...1.01.org, linux-kernel@...r.kernel.org
Subject: Re: LTP rwtest01 blocks on DAX mountpoint

On Wed, Jan 04, 2017 at 05:48:34PM +0800, Xiong Zhou wrote:
> On Tue, Jan 03, 2017 at 09:57:10AM -0700, Ross Zwisler wrote:
> > On Tue, Jan 03, 2017 at 02:49:22PM +0800, Xiong Zhou wrote:
> > > On Mon, Jan 02, 2017 at 02:49:41PM -0700, Ross Zwisler wrote:
> > > > On Mon, Jan 02, 2017 at 06:16:17PM +0100, Jan Kara wrote:
> > > > > On Fri 30-12-16 17:33:53, Xiong Zhou wrote:
> > > > > > On Sat, Dec 24, 2016 at 07:07:14PM +0800, Xiong Zhou wrote:
> > > > > > > Hi lists,
> > > snip
> > > > > I was trying to reproduce this, but for me rwtest01 completes just fine on
> > > > > a DAX mountpoint (I used your reproducer). So can you sample several
> > > > > kernel stack traces to get a rough idea of where the kernel is spending
> > > > > its time? Thanks!
> > > > > 
> > > > > 								Honza
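For reference, two standard ways to sample the kernel stack traces of a stuck
task, as Jan suggests above (the process name used for the PID lookup is only
illustrative; the actual test may run under a different name):

  # Sample the kernel stack of the presumed-hung test process; repeat a few times
  cat /proc/$(pgrep -f rwtest01 | head -n1)/stack

  # Or dump the stacks of all uninterruptible (D-state) tasks via sysrq
  echo w > /proc/sysrq-trigger
  dmesg | tail -n 100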
> > > > 
> > > > I'm also unable to reproduce this issue.  I've tried with both the blamed
> > > > commit:
> > > > 4b4bb46 (HEAD) dax: clear dirty entry tags on cache flush
> > > > and with v4.9-rc2.  Both pass the test in my setup.
> > > > Perhaps the variable is the size of your PMEM partitions?
> > > > # fdisk -l /dev/pmem0
> > > > Disk /dev/pmem0: 16 GiB, 17179869184 bytes, 33554432 sectors
> > > > Units: sectors of 1 * 512 = 512 bytes
> > > > Sector size (logical/physical): 512 bytes / 4096 bytes
> > > > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > > > Disklabel type: dos
> > > > Disk identifier: 0xfe50c900
> > > > Device       Boot    Start      End  Sectors Size Id Type
> > > > /dev/pmem0p1          4096 25165823 25161728  12G 83 Linux
> > > > /dev/pmem0p2      25165824 33550335  8384512   4G 83 Linux
> > > > 
> > > > What does your setup look like?
> > > > I'm using the current tip of the LTP tree:
> > > > 8cc4165  waitid02: define _XOPEN_SOURCE 500
> > > > Thanks,
> > > > - Ross
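As a point of comparison for the setup question above, here is a minimal
sketch of partitioning a pmem device and mounting it with DAX; the device
name, partition boundaries, filesystem, and mountpoint are assumptions, not
details taken from the thread:

  # Partition /dev/pmem0 roughly as in the fdisk listing above (sizes illustrative)
  parted --script /dev/pmem0 mklabel msdos \
      mkpart primary 4096s 25165823s \
      mkpart primary 25165824s 33550335s

  # Create a filesystem and mount it with DAX enabled
  mkfs.ext4 /dev/pmem0p1
  mkdir -p /mnt/dax
  mount -o dax /dev/pmem0p1 /mnt/dax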
> > > 
> > > Thanks all for looking into it.
> > > 
> > > It turns out the rc2-related updates fix this issue, and they also fix
> > > an old issue I reported a while ago:
> > > multi-threads libvmmalloc fork test hang
> > > https://lists.01.org/pipermail/linux-nvdimm/2016-October/007602.html
> > > 
> > > I was able to reproduce these issues before rc2; now they pass
> > > on the current Linus tree:
> > > c8b4ec8 Merge tag 'fscrypt-for-stable'
> > 
> > Hmm...I'm able to reproduce the other libvmmalloc issue with both v4.10-rc2
> > and with "c8b4ec8 Merge tag 'fscrypt-for-stable'".  I'm debugging that issue
> > today.
> > 
> > It's interesting that both tests started passing for you.  Did you change
> > something in your test setup?
> 
> Hi,
> 
> Quick update:
>   Ross's new patch fixed the vmmalloc_fork issue, not the rc2 update.
>   Regression tests are still running; so far so good.
> 
> I was able to reproduce the vmmalloc_fork issue on the rc2 kernel
> 	c8b4ec8 Merge tag 'fscrypt-for-stable'
> with nvml commit to
> 	77c2a5a Merge pull request #1554 from krzycz/win-libvmem_rc
> 
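For completeness, a hypothetical sketch of checking out that nvml commit and
running the vmmalloc_fork test; the repository URL and test-runner invocation
are assumptions based on the nvml tree layout, and src/test/testconfig.sh
still needs to point at a DAX-mounted directory:

  git clone https://github.com/pmem/nvml.git && cd nvml
  git checkout 77c2a5a
  make -j"$(nproc)"
  # RUNTESTS drives the per-test scripts under src/test/
  (cd src/test && ./RUNTESTS vmmalloc_fork)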
> My previous statement that rc2 fixed the old vmmalloc_fork issue
> was wrong; my mistake. I had changed my test setup.
> 
> Now, after some testing, Ross's patch
> 	[PATCH] dax: fix deadlock with DAX 4k holes
> applied on top of Linus tree c8b4ec8 has fixed this vmmalloc_fork issue.
> My DAX regression tests are still running and look good so far. I'll
> update once they have finished.
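For reference, a DAX regression pass with LTP along these lines might look
like the following; the install prefix and mountpoint are assumptions:

  # From an installed LTP tree (default prefix /opt/ltp); /mnt/dax is the
  # DAX-mounted filesystem under test
  cd /opt/ltp
  ./runltp -f fs -s rwtest01 -d /mnt/dax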

Cool, thanks for the update.  If you're still able to reproduce this second
issue after my patch, we can dig into the differences between your test setup
and mine so that I can reproduce and debug it.

Thanks for the reports!
