Date:   Thu, 7 Apr 2022 15:32:01 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Nico Pache <npache@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Rafael Aquini <aquini@...hat.com>,
        Waiman Long <longman@...hat.com>, Baoquan He <bhe@...hat.com>,
        Christoph von Recklinghausen <crecklin@...hat.com>,
        Don Dutile <ddutile@...hat.com>,
        "Herton R . Krzesinski" <herton@...hat.com>,
        David Rientjes <rientjes@...gle.com>,
        Michal Hocko <mhocko@...e.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Joel Savitz <jsavitz@...hat.com>,
        Darren Hart <dvhart@...radead.org>
Subject: Re: [PATCH v6] oom_kill.c: futex: Don't OOM reap the VMA containing
 the robust_list_head

On Thu, 7 Apr 2022 17:52:31 -0400 Nico Pache <npache@...hat.com> wrote:

> >>
> >> The following case can still fail:
> >> robust head (skipped) -> private lock (reaped) -> shared lock (skipped)
> >>
> >> Reproducer: https://gitlab.com/jsavitz/oom_futex_reproducer
> > 
> > Should this fix be backported into -stable kernels?
> 
> Yes, I believe so. This is caused by the commit marked under 'Fixes:', which is
> in the stable branch.

OK.  The MM team don't like the promiscuous backporting of things which
we didn't ask to be backported.  So -stable maintainers have been
trained (we hope) to only backport things which we explicitly marked
cc:stable.

> >> --- a/include/linux/oom.h
> >> +++ b/include/linux/oom.h
> >> @@ -106,7 +106,8 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
> >>  	return 0;
> >>  }
> >>  
> >> -bool __oom_reap_task_mm(struct mm_struct *mm);
> >> +bool __oom_reap_task_mm(struct mm_struct *mm, struct robust_list_head
> >> +		__user *robust_list);
> > 
> > Should explicitly include futex.h
> Good point. On second thought, I think we also need to surround some of the
> changes with an #ifdef CONFIG_FUTEX. current->robust_list is undefined if we
> turn that config option off.

Ah.
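
(A minimal sketch of the CONFIG_FUTEX guard being discussed; the helper name
futex_robust_list() is illustrative only and not part of the patch:)

#ifdef CONFIG_FUTEX
/* With CONFIG_FUTEX=y, task_struct has a robust_list member. */
static inline struct robust_list_head __user *futex_robust_list(struct task_struct *tsk)
{
	return tsk->robust_list;
}
#else
/* Without CONFIG_FUTEX, task_struct::robust_list does not exist at all. */
static inline struct robust_list_head __user *futex_robust_list(struct task_struct *tsk)
{
	return NULL;
}
#endif

/* The exit_mmap()/oom-reaper call site could then stay unconditional:
 *	(void)__oom_reap_task_mm(mm, futex_robust_list(current));
 */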

> > 
> >>  long oom_badness(struct task_struct *p,
> >>  		unsigned long totalpages);
> >> diff --git a/mm/mmap.c b/mm/mmap.c
> >> index 3aa839f81e63..c14fe6f8e9a5 100644
> >> --- a/mm/mmap.c
> >> +++ b/mm/mmap.c
> >> @@ -3126,7 +3126,8 @@ void exit_mmap(struct mm_struct *mm)
> >>  		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
> >>  		 * __oom_reap_task_mm() will not block.
> >>  		 */
> >> -		(void)__oom_reap_task_mm(mm);
> >> +		(void)__oom_reap_task_mm(mm, current->robust_list);
> >> +
> >>  		set_bit(MMF_OOM_SKIP, &mm->flags);
> >>  	}
> >>  
> >> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> >> index 7ec38194f8e1..727cfc3bd284 100644
> >> --- a/mm/oom_kill.c
> >> +++ b/mm/oom_kill.c
> >> @@ -509,9 +509,11 @@ static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait);
> >>  static struct task_struct *oom_reaper_list;
> >>  static DEFINE_SPINLOCK(oom_reaper_lock);
> >>  
> >> -bool __oom_reap_task_mm(struct mm_struct *mm)
> >> +bool __oom_reap_task_mm(struct mm_struct *mm, struct robust_list_head
> >> +		__user *robust_list)
> >>  {
> > 
> > It's pretty sad to make such a low-level function aware of futex
> > internals.  How about making it a more general `void *skip_area'?
> Yes, we can make this change. My concern is that the caller may now have to cast
> the type: __oom_reap_task_mm(mm_struct, (void *) current->robust_list). But I
> doubt that is a big concern.

No cast needed - the compiler will happily convert any object pointer to a void *.
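
(Sketch only, following the suggestion above; keeping the __user annotation on
the generic parameter would also keep sparse happy:)

/* Hypothetical generic prototype -- not the actual patch: */
bool __oom_reap_task_mm(struct mm_struct *mm, void __user *skip_area);

/* Call site: any pointer type converts to void * implicitly, no cast needed: */
(void)__oom_reap_task_mm(mm, current->robust_list);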

> > 
> >>  	struct vm_area_struct *vma;
> >> +	unsigned long head = (unsigned long) robust_list;
> >>  	bool ret = true;
> >>  
> >>  	/*
> >> @@ -526,6 +528,11 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
> >>  		if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
> >>  			continue;
> >>  
> >> +		if (vma->vm_start <= head && vma->vm_end > head) {
> > 
> > This check as you have it is making assumptions about the length of the
> > area at *robust_list and about that area's relation to the area
> > represented by the vma.
> > 
> > So if this is to be made more generic, we'd also need skip_area_len so
> > we can perform a full overlap check.
> I'm not sure I follow here. Can a single mmap() call span multiple VMAs? The
> address would be part of the pthread_t struct, which is mmapped by the userspace
> code. We are simply looking for that VMA and skipping the OOM reaping of it. It
> does not try to find the individual locks (allocated separately and represented
> in a linked list); it just prevents the reaping of the robust_list_head (part of
> pthread_t), which stores the start of that linked list. If some of the locks are
> private (shared locks are not reaped), we may run into a case where this still
> fails; however, we haven't been able to reproduce it.
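
(For reference, the robust list head under discussion is the userspace
structure from include/uapi/linux/futex.h, registered per thread via the
set_robust_list() syscall:)

struct robust_list {
	struct robust_list __user *next;
};

struct robust_list_head {
	struct robust_list list;	/* head of the per-thread lock list */
	long futex_offset;		/* offset from a list entry to its futex word */
	struct robust_list __user *list_op_pending;	/* lock/unlock in progress */
};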

Well, I was thinking that if it were to become a generic "ignore this
region" then the code would need to be taught to skip any vma which has
any form of overlap with that region.  Which sounds like overdesign
until there's a demonstrated need.
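
(If that generalization were ever needed, the full overlap test would look
something like this sketch; skip_start and skip_len are hypothetical
parameters, not part of the patch:)

/* Hypothetical full-overlap check inside the reaper's vma walk: */
unsigned long skip_end = skip_start + skip_len;

if (vma->vm_start < skip_end && vma->vm_end > skip_start)
	continue;	/* vma overlaps the region to preserve -- skip reaping it */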

