Date:   Thu, 7 Dec 2017 13:22:30 -0800 (PST)
From:   David Rientjes <rientjes@...gle.com>
To:     Michal Hocko <mhocko@...nel.org>
cc:     Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Andrea Arcangeli <aarcange@...hat.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: Multiple oom_reaper BUGs: unmap_page_range racing with
 exit_mmap

On Thu, 7 Dec 2017, Michal Hocko wrote:

> Very well spotted! It could in fact be any task (e.g. somebody reading
> from a /proc/<pid> file, which requires the mm_struct).
> 
> oom_reaper		oom_victim		task
> 						mmget_not_zero
> 			exit_mmap
> 			  mmput
> __oom_reap_task_mm				mmput
>   						  __mmput
> 						    exit_mmap
> 						      remove_vma
>   unmap_page_range
> 
> So we need a more robust test for the oom victim. Your suggestion is
> basically what I came up with originally [1], which was deemed
> ineffective because it took the mmap_sem even for regular paths, and
> Kirill was afraid this would add unnecessary cycles to the exit path,
> which is quite hot.
> 

Yes, I can confirm that in all the crashes we have analyzed so far, 
MMF_OOM_SKIP is actually set at the time the oom_reaper triggers BUGs 
with various stack traces, all originating from unmap_page_range(), 
which is certainly not supposed to happen.

> So I guess we have to do something else instead. We have to store the
> oom flag in the mm struct as well. Something like the patch below.
> 
> [1] http://lkml.kernel.org/r/20170724072332.31903-1-mhocko@kernel.org
> ---
> diff --git a/include/linux/oom.h b/include/linux/oom.h
> index 27cd36b762b5..b7668b5d3e14 100644
> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -77,6 +77,11 @@ static inline bool tsk_is_oom_victim(struct task_struct * tsk)
>  	return tsk->signal->oom_mm;
>  }
>  
> +static inline bool mm_is_oom_victim(struct mm_struct *mm)
> +{
> +	return test_bit(MMF_OOM_VICTIM, &mm->flags);
> +}
> +
>  /*
>   * Checks whether a page fault on the given mm is still reliable.
>   * This is no longer true if the oom reaper started to reap the
> diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
> index 9c8847395b5e..da673ca66e7a 100644
> --- a/include/linux/sched/coredump.h
> +++ b/include/linux/sched/coredump.h
> @@ -68,8 +68,9 @@ static inline int get_dumpable(struct mm_struct *mm)
>  #define MMF_RECALC_UPROBES	20	/* MMF_HAS_UPROBES can be wrong */
>  #define MMF_OOM_SKIP		21	/* mm is of no interest for the OOM killer */
>  #define MMF_UNSTABLE		22	/* mm is unstable for copy_from_user */
> -#define MMF_HUGE_ZERO_PAGE	23      /* mm has ever used the global huge zero page */
> -#define MMF_DISABLE_THP		24	/* disable THP for all VMAs */
> +#define MMF_OOM_VICTIM		23	/* mm is the oom victim */
> +#define MMF_HUGE_ZERO_PAGE	24      /* mm has ever used the global huge zero page */
> +#define MMF_DISABLE_THP		25	/* disable THP for all VMAs */
>  #define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)
>  
>  #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\

Could we avoid adjusting the existing bit values and simply add a new one 
for MMF_OOM_VICTIM?  We have automated tools that look at specific bits 
in mm->flags, and it would be nice for those values not to change between 
kernel versions.  Not absolutely required, but the churn would be nice to 
avoid.
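
Something like the following (untested, just to show the intent) would
keep the existing bit values stable and only append the new flag:

	#define MMF_RECALC_UPROBES	20	/* MMF_HAS_UPROBES can be wrong */
	#define MMF_OOM_SKIP		21	/* mm is of no interest for the OOM killer */
	#define MMF_UNSTABLE		22	/* mm is unstable for copy_from_user */
	#define MMF_HUGE_ZERO_PAGE	23	/* mm has ever used the global huge zero page */
	#define MMF_DISABLE_THP		24	/* disable THP for all VMAs */
	#define MMF_OOM_VICTIM		25	/* mm is the oom victim */
	#define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)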

> diff --git a/mm/mmap.c b/mm/mmap.c
> index 476e810cf100..d00a06248ef1 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3005,7 +3005,7 @@ void exit_mmap(struct mm_struct *mm)
>  	unmap_vmas(&tlb, vma, 0, -1);
>  
>  	set_bit(MMF_OOM_SKIP, &mm->flags);
> -	if (unlikely(tsk_is_oom_victim(current))) {
> +	if (unlikely(mm_is_oom_victim(mm))) {
>  		/*
>  		 * Wait for oom_reap_task() to stop working on this
>  		 * mm. Because MMF_OOM_SKIP is already set before
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 3b0d0fed8480..e4d290b6804b 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -666,8 +666,10 @@ static void mark_oom_victim(struct task_struct *tsk)
>  		return;
>  
>  	/* oom_mm is bound to the signal struct life time. */
> -	if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm))
> +	if (!cmpxchg(&tsk->signal->oom_mm, NULL, mm)) {
>  		mmgrab(tsk->signal->oom_mm);
> +		set_bit(MMF_OOM_VICTIM, &mm->flags);
> +	}
>  
>  	/*
>  	 * Make sure that the task is woken up from uninterruptible sleep

Looks good.  I see the other email with the same functional change plus a 
follow-up based on a suggestion by Tetsuo; I'll test it alongside a 
change that leaves the existing MMF_* bit numbers untouched.
