Date:	Wed, 16 Jul 2008 21:15:38 +0400
From:	Oleg Nesterov <oleg@...sign.ru>
To:	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org
Subject: PATCH? exit_mm: update the comments, move atomic_inc(mm_count) outside of mmap_sem

(textually depends on
 "[PATCH 2/2] coredump: exit_mm: clear ->mm first, then play with ->core_state")

This is more of an "is my understanding correct?" question than a patch. Assuming it is correct:

	- explain the subtle atomic_inc(&mm->mm_count).

	- move it outside of mm->mmap_sem, otherwise it is not clear why
	  we need this lock to bump ->mm_count.

	- "fix" the comment above task_lock(). This lock is really needed
	  for get_task_mm/etc. Yes, we could change the code to use smp_mb()
	  + spin_unlock_wait(), but I guess there is also another reason.

Signed-off-by: Oleg Nesterov <oleg@...sign.ru>

--- 26-rc2/kernel/exit.c~EXIT_MM	2008-07-15 20:24:50.000000000 +0400
+++ 26-rc2/kernel/exit.c	2008-07-16 20:59:34.000000000 +0400
@@ -680,6 +680,14 @@ static void exit_mm(struct task_struct *
 	mm_release(tsk, mm);
 	if (!mm)
 		return;
+
+	/*
+	 * We are going to clear ->mm. The next schedule() after that
+	 * will assume we did context_switch()->atomic_inc(mm_count).
+	 * finish_task_switch()->mmdrop() will "fix" the counter.
+	 */
+	atomic_inc(&mm->mm_count);
+	BUG_ON(mm != tsk->active_mm);
 	/*
 	 * Serialize with any possible pending coredump.
 	 * We must hold mmap_sem around checking core_state
@@ -689,9 +697,7 @@ static void exit_mm(struct task_struct *
 	 */
 	down_read(&mm->mmap_sem);
 	core_state = mm->core_state;
-	atomic_inc(&mm->mm_count);
-	BUG_ON(mm != tsk->active_mm);
-	/* more a memory barrier than a real lock */
+	/* disables preemption, we must not schedule before enter_lazy_tlb() */
 	task_lock(tsk);
 	tsk->mm = NULL;
 	up_read(&mm->mmap_sem);
