Message-ID: <20190924214337.GA17405@cmpxchg.org>
Date:   Tue, 24 Sep 2019 17:43:37 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Josef Bacik <josef@...icpanda.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: drop mmap_sem before calling balance_dirty_pages()
 in write fault

On Tue, Sep 24, 2019 at 01:46:08PM -0700, Matthew Wilcox wrote:
> On Tue, Sep 24, 2019 at 03:42:38PM -0400, Johannes Weiner wrote:
> > > I'm not a fan of moving file_update_time() to _before_ the
> > > balance_dirty_pages call.
> > 
> > Can you elaborate why? If the filesystem has a page_mkwrite op, it
> > will have already called file_update_time() before this function is
> > entered. If anything, this change makes the sequence more consistent.
> 
> Oh, that makes sense.  I thought it should be updated after all the data
> was written, but it probably doesn't make much difference.
> 
> > > Also, this is now the third place that needs
> > > maybe_unlock_mmap_for_io, see
> > > https://lore.kernel.org/linux-mm/20190917120852.x6x3aypwvh573kfa@box/
> > 
> > Good idea, I moved the helper to internal.h and converted to it.
> > 
> > I left the shmem site alone, though. It doesn't require the file
> > pinning, so it shouldn't pointlessly bump the file refcount and
> > suggest such a dependency - that could cost somebody later quite a bit
> > of time trying to understand the code.
> 
> The problem for shmem is this:
> 
>                         spin_unlock(&inode->i_lock);
>                         schedule();
> 
>                         spin_lock(&inode->i_lock);
>                         finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
>                         spin_unlock(&inode->i_lock);
> 
> While scheduled, the VMA can go away and the inode be reclaimed, making
> this a use-after-free.  The initial suggestion was an increment on
> the inode refcount, but since we already have a pattern which involves
> pinning the file, I thought that was a better way to go.

I completely missed the context of that email you linked - that there
is a bug in the existing code - and read it as a mere refactoring
patch. My apologies.

Switching that shmem site to maybe_unlock_mmap_for_io() to indirectly
pin the inode (in a separate bug fix patch) indeed makes sense to me.
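For reference, the helper as it stands in mm/filemap.c looks roughly like
the sketch below (written from memory, so treat the exact
FAULT_FLAG_RETRY_NOWAIT handling as approximate): it takes a reference on
vma->vm_file, which in turn keeps the inode alive, and drops mmap_sem only
when the fault is allowed to be retried.

	static struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
						     struct file *fpin)
	{
		int flags = vmf->flags;

		if (fpin)
			return fpin;

		/*
		 * Only pin the file and drop mmap_sem when we may return
		 * VM_FAULT_RETRY, i.e. FAULT_FLAG_ALLOW_RETRY is set and
		 * the caller did not ask for NOWAIT semantics.
		 */
		if ((flags & (FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT)) ==
		    FAULT_FLAG_ALLOW_RETRY) {
			fpin = get_file(vmf->vma->vm_file);
			up_read(&vmf->vma->vm_mm->mmap_sem);
		}
		return fpin;
	}

The shmem wait site would then take something like the following
hypothetical shape - pin the file before going to sleep, and if mmap_sem
was dropped, put the file and return VM_FAULT_RETRY:

		struct file *fpin;

		/* inside shmem_fault(), before sleeping on the falloc waitqueue */
		fpin = maybe_unlock_mmap_for_io(vmf, NULL);
		spin_unlock(&inode->i_lock);
		schedule();

		spin_lock(&inode->i_lock);
		finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
		spin_unlock(&inode->i_lock);

		if (fpin) {
			fput(fpin);
			return VM_FAULT_RETRY;
		}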

> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@...radead.org>

Thanks, Matthew.
