Date:	Thu, 18 Oct 2012 13:06:18 +0900
From:	Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	David Rientjes <rientjes@...gle.com>
CC:	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Dave Jones <davej@...hat.com>,
	KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	bhutchings@...arflare.com,
	Konstantin Khlebnikov <khlebnikov@...nvz.org>,
	Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
	Hugh Dickins <hughd@...gle.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [patch for-3.7 v2] mm, mempolicy: avoid taking mutex inside spinlock
 when reading numa_maps

(2012/10/18 6:31), David Rientjes wrote:
> As a result of commit 32f8516a8c73 ("mm, mempolicy: fix printing stack
> contents in numa_maps"), the mutex protecting a shared policy can be
> inadvertently taken while holding task_lock(task).
>
> Recently, commit b22d127a39dd ("mempolicy: fix a race in
> shared_policy_replace()") switched the spinlock within a shared policy to
> a mutex so sp_alloc() could block.  Thus, a refcount must be grabbed on
> all mempolicies returned by get_vma_policy() so it isn't freed while being
> passed to mpol_to_str() when reading /proc/pid/numa_maps.
>
> This patch only takes task_lock() while dereferencing task->mempolicy in
> get_vma_policy() if it's non-NULL in the lockless check to increment its
> refcount.  This ensures it will remain in memory until dropped by
> __mpol_put() after mpol_to_str() is called.
>
> Refcounts of shared policies are grabbed by the ->get_policy() function of
> the vma, all others will be grabbed directly in get_vma_policy().  Now
> that this is done, all callers unconditionally drop the refcount.
>

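(For readers of the archive, my reading of the approach described above, as a rough
sketch. This is an illustration only, not the actual mm/mempolicy.c code: the helper
name is made up, and the real get_vma_policy() also handles vma->vm_ops->get_policy()
and the fallback to default_policy.)

/* Illustration only: take a stable reference on the task policy. */
static struct mempolicy *task_policy_get(struct task_struct *task)
{
	struct mempolicy *pol = NULL;

	if (task->mempolicy) {			/* lockless check */
		task_lock(task);
		pol = task->mempolicy;		/* re-read under task_lock() */
		if (pol)
			mpol_get(pol);		/* pinned until mpol_put() after mpol_to_str() */
		task_unlock(task);
	}
	return pol;
}
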
Please add the original problem description from your 1st patch:
> When reading /proc/pid/numa_maps, it's possible to return the contents of
> the stack where the mempolicy string should be printed if the policy gets
> freed from beneath us.
>
> This happens because mpol_to_str() may return an error; the
> stack-allocated buffer is then printed without ever being stored.
.....
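
(To make the original bug concrete for the archive: a simplified illustration, not the
exact fs/proc/task_mmu.c code. The function name and buffer size are made up, and the
mpol_to_str() arguments are simplified.)

static void show_numa_policy(struct seq_file *m, struct mempolicy *pol)
{
	char buffer[64];	/* stack-allocated, never initialized */

	/*
	 * If the policy is freed under us, mpol_to_str() can return an error.
	 * Ignoring that return value means 'buffer' still holds whatever was
	 * on the stack, and that garbage is what gets printed.
	 */
	mpol_to_str(buffer, sizeof(buffer), pol);
	seq_printf(m, "%s", buffer);
}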

Hmm, I've read the whole thread again... and I'm sorry if I'm misunderstanding something.

I think Kosaki was referring to commit 52cd3b0740, which avoids refcounting in
get_vma_policy() because it is called from alloc_pages_vma() on every page fault.
So it seems he doesn't agree with this fix because of the performance concern on
big NUMA machines.
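
(Rough sketch of that concern, for the archive: illustration only, not the actual
mm/mempolicy.c code, and the function name is made up. The point is just that an
unconditional get/put in get_vma_policy() adds atomic ops on the page-fault hot
path reached via alloc_pages_vma(); the default_policy fallback is omitted here.)

/* Illustration only: what a refcounted get_vma_policy() would look like. */
static struct mempolicy *get_vma_policy_refcounted(struct task_struct *task,
						   struct vm_area_struct *vma,
						   unsigned long addr)
{
	struct mempolicy *pol = task->mempolicy;		/* fallback: task policy */

	if (vma && vma->vm_ops && vma->vm_ops->get_policy)
		pol = vma->vm_ops->get_policy(vma, addr);	/* shared/vma policy */

	/*
	 * This runs on every fault through alloc_pages_vma(), so an
	 * unconditional get/put is two atomic ops per fault on a refcount
	 * shared by all threads of the task -- a contended cacheline on a
	 * big NUMA machine.
	 */
	mpol_get(pol);
	return pol;
}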


Can't we fix this another way, like the patch below? Or is it too ugly?
Again, I'm sorry if I've misunderstood the point.

==

From bfe7e2ab1c1375b134ec12efce6517149318f75d Mon Sep 17 00:00:00 2001
From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Date: Thu, 18 Oct 2012 13:17:25 +0900
Subject: [PATCH] hold task->mempolicy while numa_maps scans.

  /proc/<pid>/numa_maps scans VMAs and shows their mempolicy under
  mmap_sem. It sometimes accesses task->mempolicy, which can be freed
  without mmap_sem held, so numa_maps can show garbage while scanning.

This patch takes a reference on task->mempolicy when numa_maps is read,
before get_vma_policy() is called. With this, task->mempolicy will not be
freed until numa_maps reaches its end.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
---
  fs/proc/task_mmu.c |   20 ++++++++++++++++++++
  1 file changed, 20 insertions(+)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 14df880..d92e868 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -94,6 +94,11 @@ static void vma_stop(struct proc_maps_private *priv, struct vm_area_struct *vma)
  {
  	if (vma && vma != priv->tail_vma) {
  		struct mm_struct *mm = vma->vm_mm;
+#ifdef CONFIG_NUMA
+		task_lock(priv->task);
+		__mpol_put(priv->task->mempolicy);
+		task_unlock(priv->task);
+#endif
  		up_read(&mm->mmap_sem);
  		mmput(mm);
  	}
@@ -130,6 +135,16 @@ static void *m_start(struct seq_file *m, loff_t *pos)
  		return mm;
  	down_read(&mm->mmap_sem);
  
+	/*
+	 * task->mempolicy can be freed even while mmap_sem is held
+	 * (see kernel/exit.c). We grab a refcount for stable access.
+	 * Replacement of task->mempolicy is guarded by mmap_sem.
+	 */
+#ifdef CONFIG_NUMA
+	task_lock(priv->task);
+	mpol_get(priv->task->mempolicy);
+	task_unlock(priv->task);
+#endif
  	tail_vma = get_gate_vma(priv->task->mm);
  	priv->tail_vma = tail_vma;
  
@@ -161,6 +176,11 @@ out:
  
  	/* End of vmas has been reached */
  	m->version = (tail_vma != NULL)? 0: -1UL;
+#ifdef CONFIG_NUMA
+	task_lock(priv->task);
+	__mpol_put(priv->task->mempolicy);
+	task_unlock(priv->task);
+#endif
  	up_read(&mm->mmap_sem);
  	mmput(mm);
  	return tail_vma;
-- 
1.7.10.2