Message-ID: <1270121264.1653.205.camel@laptop>
Date: Thu, 01 Apr 2010 13:27:44 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Avi Kivity <avi@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
aarcange@...hat.com, akpm@...ux-foundation.org,
Kent Overstreet <kent.overstreet@...il.com>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [COUNTERPATCH] mm: avoid overflowing preempt_count() in
mmu_take_all_locks()
On Thu, 2010-04-01 at 14:17 +0300, Avi Kivity wrote:
> On 04/01/2010 02:13 PM, Avi Kivity wrote:
> >
> >> Anyway, I don't see a reason why we can't convert those locks to
> >> mutexes and get rid of the whole preempt disabled region.
> >
> > If someone is willing to audit all code paths to make sure these locks
> > are always taken in schedulable context I agree that's a better fix.
> >
>
> From mm/rmap.c:
>
> > /*
> > * Lock ordering in mm:
> > *
> > * inode->i_mutex (while writing or truncating, not reading or faulting)
> > * inode->i_alloc_sem (vmtruncate_range)
> > * mm->mmap_sem
> > * page->flags PG_locked (lock_page)
> > * mapping->i_mmap_lock
> > * anon_vma->lock
> ...
> > *
> > * (code doesn't rely on that order so it could be switched around)
> > * ->tasklist_lock
> > * anon_vma->lock (memory_failure, collect_procs_anon)
> > * pte map lock
> > */
>
> i_mmap_lock is a spinlock, and tasklist_lock is an rwlock, so some
> changes will be needed.
i_mmap_lock will need to change just as well: mm_take_all_locks() uses
both anon_vma->lock and mapping->i_mmap_lock.
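
For reference, the pattern there is roughly the following (a from-memory
sketch, not the literal mm/mmap.c source): one spin_lock_nest_lock() per
anon_vma/address_space, each of which bumps preempt_count, which is how
a task with enough VMAs overflows it.

/*
 * Rough sketch of the existing mm_take_all_locks() pattern; the real
 * code also sets an AS_MM_ALL_LOCKS-style bit so the same anon_vma or
 * mapping isn't locked twice, which is elided here.
 */
static void take_all_locks_sketch(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (vma->vm_file && vma->vm_file->f_mapping)
			spin_lock_nest_lock(
				&vma->vm_file->f_mapping->i_mmap_lock,
				&mm->mmap_sem);		/* preempt_count++ */
		if (vma->anon_vma)
			spin_lock_nest_lock(&vma->anon_vma->lock,
					    &mm->mmap_sem);	/* preempt_count++ */
	}
}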
I've almost got a patch done that converts those two; I still need to
look at where that tasklist_lock muck happens.
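
FWIW, the shape of it is something like the below (untested sketch, not
the actual patch; a mutex counterpart to spin_lock_nest_lock() would
need adding for the lockdep annotation):

/*
 * Untested sketch of the conversion: anon_vma->lock (and likewise
 * mapping->i_mmap_lock) becomes a mutex, so taking one lock per VMA no
 * longer touches preempt_count and the whole region stays schedulable.
 * mutex_lock_nest_lock() is assumed here as the mutex counterpart of
 * spin_lock_nest_lock().
 */
struct anon_vma {
	struct mutex lock;		/* was: spinlock_t lock */
	struct list_head head;
};

static void take_all_locks_mutex_sketch(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (vma->anon_vma)
			mutex_lock_nest_lock(&vma->anon_vma->lock,
					     &mm->mmap_sem);
	}
}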