Message-ID: <CANN689GQLMKztfhymtE-NFvmOxMsf6UB6XssdoBVv17tUv_Qww@mail.gmail.com>
Date: Fri, 4 Jan 2013 14:58:12 -0800
From: Michel Lespinasse <walken@...gle.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: Ingo Molnar <mingo@...nel.org>, Al Viro <viro@...iv.linux.org.uk>,
Hugh Dickins <hughd@...gle.com>, Jörn Engel <joern@...fs.org>,
Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/9] Avoid populating unbounded num of ptes with mmap_sem held
On Fri, Jan 4, 2013 at 10:16 AM, Andy Lutomirski <luto@...capital.net> wrote:
> I still have quite a few instances of 2-6 ms of latency due to
> "call_rwsem_down_read_failed __do_page_fault do_page_fault
> page_fault". Any idea why? I don't know any great way to figure out
> who is holding mmap_sem at the time. Given what my code is doing, I
> suspect the contention is due to mmap or munmap on a file. MCL_FUTURE
> is set, and MAP_POPULATE is not set.
>
> It could be the other thread calling mmap and getting preempted (or
> otherwise calling schedule()). Grr.
The simplest way to find out who's holding the lock too long might be
to enable CONFIG_LOCK_STAT. It will slow things down a little, but it
gives you a lot of useful information, including which threads hold
mmap_sem the longest and the call stacks where they acquire it.
See Documentation/lockstat.txt.
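For reference, the knobs look roughly like this (going from
Documentation/lockstat.txt; assumes a kernel built with
CONFIG_LOCK_STAT=y):

  echo 1 >/proc/sys/kernel/lock_stat   # start collecting
  cat /proc/lock_stat                  # per-lock contention/hold times
  echo 0 >/proc/lock_stat              # clear counters between runs

Sorting the mmap_sem entries by waittime-max and holdtime-max should
point straight at the guilty call stack.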
I think munmap is a likely culprit, as it still runs with mmap_sem
held for write (I do plan to work on this next). But it's hard to be
sure without lock stats :)
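If you want to see the effect in isolation first, here is a minimal
userspace sketch of the pattern I suspect (my illustration only, not
from the patch series; sizes and the 1 ms reporting threshold are
arbitrary, and it uses anonymous memory where your workload is
file-backed): one thread churns map/populate/unmap cycles, so each
munmap() holds mmap_sem for write for a while, and the main thread
measures how long its page faults stall behind it.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define CHURN_SZ (512UL << 20)  /* region the churn thread unmaps */
#define TOUCH_SZ (64UL << 20)   /* region the main thread faults in */

/* Repeatedly map, populate, and unmap a large region; each munmap()
 * of the populated range holds mmap_sem for write while the kernel
 * zaps ptes and frees the pages. */
static void *churn(void *arg)
{
        (void)arg;
        for (;;) {
                void *p = mmap(NULL, CHURN_SZ, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) { perror("mmap"); exit(1); }
                memset(p, 1, CHURN_SZ);  /* populate so munmap has work */
                munmap(p, CHURN_SZ);
        }
        return NULL;
}

static double now_ms(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void)
{
        pthread_t t;
        long pg = sysconf(_SC_PAGESIZE);

        pthread_create(&t, NULL, churn, NULL);
        for (;;) {
                char *q = mmap(NULL, TOUCH_SZ, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (q == MAP_FAILED) { perror("mmap"); exit(1); }
                for (size_t i = 0; i < TOUCH_SZ; i += pg) {
                        double t0 = now_ms();
                        q[i] = 1;  /* fault: down_read(&mm->mmap_sem) */
                        double dt = now_ms() - t0;
                        if (dt > 1.0)
                                printf("fault stalled %.2f ms\n", dt);
                }
                munmap(q, TOUCH_SZ);
        }
}

Build with something like 'gcc -O2 -pthread repro.c'; faults that
stall behind the churn thread's munmap() show up as multi-ms
outliers, which is the same signature as your traces.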
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.