To avoid a possible deadlock. Proposed by Nick Piggin:

  You have tasklist_lock(R) nesting outside i_mmap_lock, and inside
  anon_vma lock. And anon_vma lock nests inside i_mmap_lock. This seems
  fragile. If rwlocks ever become FIFO or tasklist_lock changes type
  (maybe -rt kernels do it), then you could have a task holding anon_vma
  lock and waiting for tasklist_lock, and another holding tasklist_lock
  and waiting for i_mmap_lock, and another holding i_mmap_lock and
  waiting for anon_vma lock.

CC: Nick Piggin
Signed-off-by: Wu Fengguang
---
 mm/memory-failure.c |    9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--- sound-2.6.orig/mm/memory-failure.c
+++ sound-2.6/mm/memory-failure.c
@@ -215,12 +215,14 @@ static void collect_procs_anon(struct pa
 {
 	struct vm_area_struct *vma;
 	struct task_struct *tsk;
-	struct anon_vma *av = page_lock_anon_vma(page);
+	struct anon_vma *av;
 
+	read_lock(&tasklist_lock);
+
+	av = page_lock_anon_vma(page);
 	if (av == NULL)	/* Not actually mapped anymore */
-		return;
+		goto out;
 
-	read_lock(&tasklist_lock);
 	for_each_process (tsk) {
 		if (!tsk->mm)
 			continue;
@@ -230,6 +232,7 @@ static void collect_procs_anon(struct pa
 		}
 	}
 	page_unlock_anon_vma(av);
+out:
 	read_unlock(&tasklist_lock);
 }