Message-Id: <20220407131809.f2d256541e2c039c434c0d72@linux-foundation.org>
Date:   Thu, 7 Apr 2022 13:18:09 -0700
From:   Andrew Morton <akpm@...ux-foundation.org>
To:     Nico Pache <npache@...hat.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Rafael Aquini <aquini@...hat.com>,
        Waiman Long <longman@...hat.com>, Baoquan He <bhe@...hat.com>,
        Christoph von Recklinghausen <crecklin@...hat.com>,
        Don Dutile <ddutile@...hat.com>,
        "Herton R . Krzesinski" <herton@...hat.com>,
        David Rientjes <rientjes@...gle.com>,
        Michal Hocko <mhocko@...e.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Davidlohr Bueso <dave@...olabs.net>,
        Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Joel Savitz <jsavitz@...hat.com>,
        Darren Hart <dvhart@...radead.org>
Subject: Re: [PATCH v6] oom_kill.c: futex: Don't OOM reap the VMA containing
 the robust_list_head

On Thu,  7 Apr 2022 14:42:54 -0400 Nico Pache <npache@...hat.com> wrote:

> The pthread struct is allocated on PRIVATE|ANONYMOUS memory [1], which can
> be targeted by the oom reaper. This mapping is used to store the futex
> robust list head; the kernel does not keep a copy of the robust list and
> instead references a userspace address to maintain robustness during
> process death. A race can occur between exit_mm and the oom reaper that
> allows the oom reaper to free the memory of the futex robust list before
> the exit path has handled the futex death:
> 
>     CPU1                               CPU2
> ------------------------------------------------------------------------
>     page_fault
>     do_exit "signal"
>     wake_oom_reaper
>                                         oom_reaper
>                                         oom_reap_task_mm (invalidates mm)
>     exit_mm
>     exit_mm_release
>     futex_exit_release
>     futex_cleanup
>     exit_robust_list
>     get_user (EFAULT - can't access memory)
> 
> If the get_user() call returns EFAULT, the kernel is unable to recover
> the waiters on the robust_list, leaving userspace mutexes hung
> indefinitely.
> 
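For context: the exit path walks the robust list entirely through
userspace pointers. Roughly, and much simplified from the kernel's
exit_robust_list()/fetch_robust_entry():

	struct robust_list_head __user *head = curr->robust_list;
	struct robust_list __user *entry;

	/* first dereference of the userspace list head */
	if (get_user(entry, &head->list.next))
		return;	/* mapping already reaped: all waiters are lost */

Once the reaper has unmapped the VMA backing *head, that first
get_user() faults and nothing behind it can be recovered.
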
> Use the robust_list address stored in the kernel to skip the VMA that holds
> it, allowing a successful futex_cleanup.
> 
> Theoretically a failure can still occur if there are locks mapped as
> PRIVATE|ANON; however, robust futexes are a best-effort approach, and
> this patch only strengthens that best effort.
> 
> The following case can still fail:
> robust head (skipped) -> private lock (reaped) -> shared lock (skipped)
> 
> Reproducer: https://gitlab.com/jsavitz/oom_futex_reproducer

Should this fix be backported into -stable kernels?

> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -106,7 +106,8 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
>  	return 0;
>  }
>  
> -bool __oom_reap_task_mm(struct mm_struct *mm);
> +bool __oom_reap_task_mm(struct mm_struct *mm, struct robust_list_head
> +		__user *robust_list);

Should explicitly include futex.h
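
Presumably something like this near the top of oom.h (or a bare
"struct robust_list_head;" forward declaration would be enough):

	#include <linux/futex.h>	/* struct robust_list_head */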

>  long oom_badness(struct task_struct *p,
>  		unsigned long totalpages);
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 3aa839f81e63..c14fe6f8e9a5 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3126,7 +3126,8 @@ void exit_mmap(struct mm_struct *mm)
>  		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
>  		 * __oom_reap_task_mm() will not block.
>  		 */
> -		(void)__oom_reap_task_mm(mm);
> +		(void)__oom_reap_task_mm(mm, current->robust_list);
> +
>  		set_bit(MMF_OOM_SKIP, &mm->flags);
>  	}
>  
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 7ec38194f8e1..727cfc3bd284 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -509,9 +509,11 @@ static DECLARE_WAIT_QUEUE_HEAD(oom_reaper_wait);
>  static struct task_struct *oom_reaper_list;
>  static DEFINE_SPINLOCK(oom_reaper_lock);
>  
> -bool __oom_reap_task_mm(struct mm_struct *mm)
> +bool __oom_reap_task_mm(struct mm_struct *mm, struct robust_list_head
> +		__user *robust_list)
>  {

It's pretty sad to make such a low-level function aware of futex
internals.  How about making it a more general `void *skip_area'?
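
Sketch of what I mean (whether the __user annotation belongs here
depends on how generic we want this to be):

	bool __oom_reap_task_mm(struct mm_struct *mm, void __user *skip_area);

Callers would then pass tsk->robust_list (or whatever else needs
protecting) without oom_kill.c knowing what it points at.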

>  	struct vm_area_struct *vma;
> +	unsigned long head = (unsigned long) robust_list;
>  	bool ret = true;
>  
>  	/*
> @@ -526,6 +528,11 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
>  		if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
>  			continue;
>  
> +		if (vma->vm_start <= head && vma->vm_end > head) {

This check as you have it is making assumptions about the length of the
area at *robust_list and about that area's relation to the area
represented by the vma.

So if this is to be made more generic, we'd also need skip_area_len so
we can perform a full overlap check.

I dunno, maybe not worth it at this time; what do others think?

But the special-casing in here is pretty painful.
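
If we did generalize it, the test could become a plain interval
overlap check, along these lines (using the skip_area/skip_area_len
names from above):

	unsigned long skip_start = (unsigned long)skip_area;
	unsigned long skip_end   = skip_start + skip_area_len;

	/* skip any vma overlapping [skip_start, skip_end) */
	if (vma->vm_start < skip_end && skip_start < vma->vm_end)
		continue;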

> +			pr_info("oom_reaper: skipping vma, contains robust_list\n");
> +			continue;
> +		}
> +
>  		/*
>  		 * Only anonymous pages have a good chance to be dropped
>  		 * without additional steps which we cannot afford as we
> @@ -587,7 +594,7 @@ static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
>  	trace_start_task_reaping(tsk->pid);
>  
>  	/* failed to reap part of the address space. Try again later */
> -	ret = __oom_reap_task_mm(mm);
> +	ret = __oom_reap_task_mm(mm, tsk->robust_list);
>  	if (!ret)
>  		goto out_finish;
>  
> @@ -1190,7 +1197,8 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
>  	 * Check MMF_OOM_SKIP again under mmap_read_lock protection to ensure
>  	 * possible change in exit_mmap is seen
>  	 */
> -	if (!test_bit(MMF_OOM_SKIP, &mm->flags) && !__oom_reap_task_mm(mm))
> +	if (!test_bit(MMF_OOM_SKIP, &mm->flags) &&
> +			!__oom_reap_task_mm(mm, p->robust_list))
>  		ret = -EAGAIN;
>  	mmap_read_unlock(mm);
>  
> -- 
> 2.35.1
