Date:   Fri, 22 Oct 2021 12:32:08 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Suren Baghdasaryan <surenb@...gle.com>, akpm@...ux-foundation.org,
        rientjes@...gle.com, hannes@...xchg.org, guro@...com,
        riel@...riel.com, minchan@...nel.org, christian@...uner.io,
        hch@...radead.org, oleg@...hat.com, david@...hat.com,
        jannh@...gle.com, shakeelb@...gle.com, luto@...nel.org,
        christian.brauner@...ntu.com, fweimer@...hat.com, jengelh@...i.de,
        linux-api@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH 1/1] mm: prevent a race between process_mrelease and
 exit_mmap

On Fri, Oct 22, 2021 at 10:03:29AM +0200, Michal Hocko wrote:
> On Thu 21-10-21 18:46:58, Suren Baghdasaryan wrote:
> > A race between process_mrelease and exit_mmap, where free_pgtables is
> > called while __oom_reap_task_mm is in progress, leads to a kernel crash
> > during the pte_offset_map_lock call. The oom-reaper avoids this race by
> > setting the MMF_OOM_VICTIM flag and causing exit_mmap to take and
> > release mmap_write_lock, which blocks it until the oom-reaper releases
> > mmap_read_lock. Reusing MMF_OOM_VICTIM for process_mrelease would be
> > the simplest way to fix this race; however, that would be considered a
> > hack. Fix this race by elevating mm->mm_users and preventing exit_mmap
> > from executing until process_mrelease is finished. The patch slightly
> > refactors the code to handle a possible mmget_not_zero failure.
> > This fix has a considerable negative impact on process_mrelease
> > performance and will likely need later optimization.
> 
> I am not sure there is any promise that process_mrelease will run in
> parallel with the exiting process. In fact, the primary purpose of this
> syscall is to provide a reliable way to oom-kill from user space. If you
> want to optimize process exit, or rather its exit_mmap part, then you
> should be using other means. So I would be careful about calling this a
> regression.
> 
> I do agree that taking the reference count is the right approach here. I
> was wrong previously [1] when saying that pinning the mm struct is
> sufficient. I had completely forgotten about the subtle sync in exit_mmap.
> One way we could approach that would be to take the exclusive mmap_sem
> throughout exit_mmap unconditionally. There was pushback against that,
> though, so the arguments would have to be re-evaluated.
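
For anyone following along, the approach described in the patch boils
down to something like the sketch below.  This is only an illustration
of the mmget_not_zero()/mmput() pattern, not the actual patch; the pidfd
lookup and task_lock handling of the real syscall are elided, and
release_task_mm() is a made-up helper name.

	static int release_task_mm(struct task_struct *task)
	{
		struct mm_struct *mm = task->mm;	/* real code takes task_lock first */
		int ret = 0;

		/*
		 * Elevate mm_users so exit_mmap() cannot start
		 * free_pgtables() while we are reaping.
		 */
		if (!mm || !mmget_not_zero(mm))
			return -ESRCH;

		if (mmap_read_lock_killable(mm)) {
			ret = -EINTR;
			goto out_mmput;
		}

		/* Reap under mmap_read_lock, same as the oom reaper. */
		if (!__oom_reap_task_mm(mm))
			ret = -EAGAIN;

		mmap_read_unlock(mm);
	out_mmput:
		mmput(mm);	/* may end up doing the final exit_mmap() work */
		return ret;
	}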

I have another reason for wanting to take the mmap_sem throughout
exit_mmap.  Liam and I are working on using the Maple tree to replace
the rbtree & VMA linked list.  It uses lockdep to check that you haven't
forgotten to take a lock (as of two days ago, that means the mmap_sem
or the RCU read lock) when walking the tree.
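
The check amounts to something along these lines (a hypothetical helper
for illustration, not the actual maple tree code):

	/*
	 * Illustration only: complain unless the caller holds the mmap
	 * lock or is inside an RCU read-side critical section.
	 */
	static inline void mt_assert_locked(struct mm_struct *mm)
	{
	#ifdef CONFIG_LOCKDEP
		WARN_ON_ONCE(!lockdep_is_held(&mm->mmap_lock) &&
			     !rcu_read_lock_held());
	#endif
	}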

So I'd like to hold it over:

 - unlock_range()
 - unmap_vmas()
 - free_pgtables()
 - while (vma) remove_vma()

Which is basically the whole of exit_mmap().  I'd like to know more
about why there was pushback on holding the mmap_lock across this
-- we're exiting, so nobody else should have a reference to the mm?
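
In other words, something roughly like this -- a sketch of the shape I
have in mind, with the oom_kill hooks from the current code elided, not
a drop-in replacement:

	void exit_mmap(struct mm_struct *mm)
	{
		struct mmu_gather tlb;
		struct vm_area_struct *vma;

		/*
		 * Hold the write lock across the entire teardown so
		 * lockdep (and the maple tree) can rely on it.
		 */
		mmap_write_lock(mm);
		vma = mm->mmap;
		if (!vma) {
			mmap_write_unlock(mm);
			return;
		}

		if (mm->locked_vm)
			unlock_range(mm->mmap, ULONG_MAX);

		lru_add_drain();
		flush_cache_mm(mm);
		tlb_gather_mmu_fullmm(&tlb, mm);
		unmap_vmas(&tlb, vma, 0, -1);
		free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
		tlb_finish_mmu(&tlb);

		while (vma)
			vma = remove_vma(vma);
		mm->mmap = NULL;
		mmap_write_unlock(mm);
	}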
