Message-ID: <alpine.LSU.2.11.1510191852460.5432@eggly.anvils>
Date:	Mon, 19 Oct 2015 19:22:15 -0700 (PDT)
From:	Hugh Dickins <hughd@...gle.com>
To:	Mike Kravetz <mike.kravetz@...cle.com>
cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Dave Hansen <dave.hansen@...ux.intel.com>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH 2/3] mm/hugetlb: Setup hugetlb_falloc during fallocate
 hole punch

On Mon, 19 Oct 2015, Mike Kravetz wrote:
> On 10/19/2015 04:16 PM, Andrew Morton wrote:
> > On Fri, 16 Oct 2015 15:08:29 -0700 Mike Kravetz <mike.kravetz@...cle.com> wrote:
> 
> >>  		mutex_lock(&inode->i_mutex);
> >> +
> >> +		spin_lock(&inode->i_lock);
> >> +		inode->i_private = &hugetlb_falloc;
> >> +		spin_unlock(&inode->i_lock);
> > 
> > Locking around a single atomic assignment is a bit peculiar.  I can
> > kinda see that it protects the logic in hugetlb_fault(), but I would
> > like to hear (in comment form) your description of how this logic
> > works.
> 
> To be honest, this code/scheme was copied from shmem, as it addresses
> the same situation there.  I did not notice how strange it looks until
> you pointed it out.  At first glance, the locking does appear to be
> unnecessary: the fault code initially checks this value outside the
> lock.  However, the fault code (on another CPU) will then take the
> lock and access values within the structure.  Without the locking, or
> some other kind of memory barrier here, there is no guarantee that the
> structure's contents will be visible before the store to i_private is,
> so the faulting code could see the pointer yet read invalid values
> from the structure.
> 
> Hugh, is that accurate?  You provided the shmem code.

Yes, I think that's accurate; but I confess I'm replying now for the
sake of replying in a rare timely fashion, before having spent any
time looking through your hugetlbfs reimplementation of the same.
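
To make the ordering argument concrete, here is a minimal sketch of
the publishing side; the field names (waitq, start, end) are my
guesses from the rest of your series, and hugetlb_falloc_publish()
is only an illustrative name, nothing in your patch:

/*
 * Sketch only: the lock pair is what orders the stores.  The
 * spin_unlock() here is a release, and the spin_lock() taken by the
 * fault path is an acquire, so a fault that reads i_private under
 * i_lock is guaranteed to see the structure's contents as well.
 */
struct hugetlb_falloc {
	wait_queue_head_t *waitq;	/* faults on the range wait here */
	pgoff_t start;			/* first pgoff of the hole */
	pgoff_t end;			/* one past the last pgoff */
};

static void hugetlb_falloc_publish(struct inode *inode,
				   struct hugetlb_falloc *falloc)
{
	/* falloc's fields are fully initialized before this point */
	spin_lock(&inode->i_lock);
	inode->i_private = falloc;
	spin_unlock(&inode->i_lock);
}

A bare assignment, even an atomic one, would leave the fault side
free to see the pointer before it sees the fields, which is exactly
the hazard you describe above.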

The peculiar thing in the shmem case was that the structure being
pointed to is on the kernel stack of the fallocating task (with
i_mutex guaranteeing that only one task at a time per file could be
doing this): so the faulting end has to be careful not to access
that now-stale memory after the fallocator has retreated back up
its stack.
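
Here, for symmetry, is the consuming side: again only a sketch, with
hugetlb_fault_wait() a hypothetical name and the requeueing of the
fault itself elided, to show where that care has to be taken:

/*
 * Sketch of the fault side.  The struct lives on the fallocator's
 * stack, so it may only be dereferenced under i_lock, and only
 * after rechecking i_private: once the fallocator clears the
 * pointer and unwinds, that memory is stack garbage.
 */
static void hugetlb_fault_wait(struct inode *inode, pgoff_t pgoff)
{
	struct hugetlb_falloc *falloc;
	wait_queue_head_t *waitq;
	DEFINE_WAIT(wait);

	if (likely(!inode->i_private))	/* cheap unlocked peek */
		return;

	spin_lock(&inode->i_lock);
	falloc = inode->i_private;	/* recheck under the lock */
	if (!falloc || pgoff < falloc->start || pgoff >= falloc->end) {
		spin_unlock(&inode->i_lock);
		return;
	}
	waitq = falloc->waitq;	/* copy out before dropping the lock */
	prepare_to_wait(waitq, &wait, TASK_UNINTERRUPTIBLE);
	spin_unlock(&inode->i_lock);
	schedule();
	/*
	 * Not a byte of *falloc may be touched from here on: the
	 * fallocator may already have returned and reused its stack.
	 * finish_wait() stays safe because the waker's autoremove
	 * already took us off the (possibly stale) queue.
	 */
	finish_wait(waitq, &wait);
}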

And in the shmem case, this "temporary inode extension" also had to
communicate with shmem_writepage(), the swapout end of things, which
is not a complication you have with hugetlbfs: perhaps the scheme
could be simpler when it is just between fallocate and fault, or
perhaps not.

Whilst it does all work for tmpfs, it looks as if tmpfs was ahead of
the pack (or trinity was attacking tmpfs before other filesystems),
and the issue of faulting versus holepunching (and DAX) has captured
wider interest recently, with Dave Chinner formulating answers in XFS,
and hoping to set an example for other filesystems.

If that work were further along, and if I had had time to digest any
of what he is doing about it, I would point you in his direction rather
than this; but since this does work for tmpfs, I shouldn't discourage you.

I'll try to take a look through yours in the coming days, but there
are several other patchsets I need to look through too, plus a few
more patches from me, if I can find time to send them in: juggling
priorities.

Hugh