Message-ID: <4533A411.2020207@yahoo.com.au>
Date: Tue, 17 Oct 2006 01:24:01 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC: Nick Piggin <npiggin@...e.de>,
Linux Memory Management <linux-mm@...ck.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...l.org>,
David Howells <dhowells@...hat.com>,
Ingo Molnar <mingo@...e.hu>
Subject: pagefault_disable (was Re: [patch 6/6] mm: fix pagecache write deadlocks)

(trimming cc list)

Peter Zijlstra wrote:
> On Sun, 2006-10-15 at 17:57 +0200, Nick Piggin wrote:
>>Hmm, but you may not be doing a copy*user within the kmap. And you may
>>want an atomic copy*user not within a kmap (maybe).
>>
>>I think it really would be more logical to do it in a wrapper function
>>pagefault_disable() pagefault_enable()? ;)
>
>
> I did that one first, but then noticed that most non trivial kmap_atomic
> implementations already did the inc_preempt_count()/dec_preempt_count()
> thing (except frv which did preempt_disable()/preempt_enable() ?)
>
> Anyway, here goes:
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>

I think this is a good approach. The missed preempt checks could easily
have been causing scheduling delays, because the usercopy can take up a
lot of kernel time.
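
To spell that out: a raw inc_preempt_count()/dec_preempt_count() pair,
which is what most of the kmap_atomic() implementations open-code, never
re-checks TIF_NEED_RESCHED on the way out, so a reschedule that became
pending during a long usercopy only happens at the next interrupt or
explicit preemption point. Rough illustration (not any arch's actual
code):

	/* what the kmap_atomic() paths effectively do: */
	inc_preempt_count();		/* enter atomic section */
	/* ... potentially long copy_*_user_inatomic() here ... */
	dec_preempt_count();		/* leave, but no resched check */

	/* what pagefault_enable() in the patch below adds: */
	dec_preempt_count();
	preempt_check_resched();	/* act on a pending reschedule now */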
I don't know that the function should go in filemap.h... uaccess.h seems
more appropriate, and I had thought pagefault_disable() would be called
directly from within the copy_*_user_inatomic functions themselves, not
the filemap helper.
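
Something like this in uaccess.h is what I had in mind -- rough sketch
only, and __arch_copy_from_user_inatomic is a made-up name for whatever
the arch actually provides:

	static inline unsigned long
	__copy_from_user_inatomic(void *to, const void __user *from,
				  unsigned long n)
	{
		unsigned long left;

		pagefault_disable();
		left = __arch_copy_from_user_inatomic(to, from, n);
		pagefault_enable();

		return left;	/* bytes left uncopied, 0 on success */
	}

That way every caller of the inatomic variants gets the atomicity
guarantee for free, rather than relying on each call site remembering
to disable pagefaults itself.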
Also, the rest of the kernel tree (mainly uaccess and futexes) should be
converted ;)
> Index: linux-2.6/mm/filemap.h
> ===================================================================
> --- linux-2.6.orig/mm/filemap.h 2006-10-14 20:20:20.000000000 +0200
> +++ linux-2.6/mm/filemap.h 2006-10-15 17:17:45.000000000 +0200
> @@ -21,6 +21,22 @@ __filemap_copy_from_user_iovec_inatomic(
> size_t bytes);
>
> /*
> + * By increasing the preempt_count we make sure the arch fault
> + * handler bails out early, before taking any locks, so that the copy
> + * operation gets terminated early.
> + */
> +pagefault_static inline void disable(void)
> +{
> + inc_preempt_count();
> +}
> +
> +pagefault_static inline void enable(void)
> +{
> + dec_preempt_count();
> + preempt_check_resched();
> +}
Interesting prototype ;)
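
FWIW, the reason raising preempt_count is enough is that the arch fault
handlers already bail out when in_atomic() and go straight to the
exception fixup instead of taking mmap_sem, so the inatomic copy just
returns short. Roughly (paraphrased from memory, not any particular
arch):

	void do_page_fault(struct pt_regs *regs, unsigned long error_code)
	{
		struct mm_struct *mm = current->mm;

		/*
		 * In an atomic region (preempt_count raised by kmap_atomic()
		 * or pagefault_disable()) or without an mm we must not take
		 * mmap_sem or sleep: punt to the exception table fixup so
		 * the copy_*_user_inatomic() caller sees a short copy.
		 */
		if (in_atomic() || !mm)
			goto no_context;	/* ends up in fixup_exception() */

		down_read(&mm->mmap_sem);
		/* ... normal fault handling ... */
	}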
--
SUSE Labs, Novell Inc.