Message-ID: <45BCD4C9.2030302@mbligh.org>
Date: Sun, 28 Jan 2007 08:52:25 -0800
From: "Martin J. Bligh" <mbligh@...igh.org>
To: Ingo Molnar <mingo@...e.hu>
Cc: Christoph Hellwig <hch@...radead.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7] breaking the global file_list_lock
Ingo Molnar wrote:
> * Christoph Hellwig <hch@...radead.org> wrote:
>
>> On Sun, Jan 28, 2007 at 12:51:18PM +0100, Peter Zijlstra wrote:
>>> This patch-set breaks up the global file_list_lock, which was found
>>> to be a severe contention point under basically any
>>> filesystem-intensive workload.
>> Benchmarks, please. Where exactly do you see contention for this?
>
> it's the most contended spinlock we have during a parallel kernel
> compile on an 8-way system. But it's pretty much common sense as well,
> even without doing any measurements: it's basically the only global
> lock left in just about every VFS workload that doesn't involve massive
> amounts of dentries being created/removed (which is still dominated by
> the dcache_lock).
>
>> filesystem-intensive workload apparently means
>> namespace-operation-heavy workload, right? The biggest bottleneck
>> I've seen with those is the dcache lock.
>
> the dcache lock is not a problem during kernel compiles (its
> RCU-ification works nicely in that workload).
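
(For context, the lock in question is the single global files_lock in
fs/file_table.c: every successful open hangs the new struct file off its
superblock's s_files list, and every final fput takes it back off, all
under that one spinlock. From memory -- so the details below may be
slightly off, check the tree for the exact form -- the relevant paths
look roughly like this:

__cacheline_aligned_in_smp DEFINE_SPINLOCK(files_lock);

#define file_list_lock()	spin_lock(&files_lock)
#define file_list_unlock()	spin_unlock(&files_lock)

/* open() path: put the new file on its sb->s_files list */
void file_move(struct file *file, struct list_head *list)
{
	if (!list)
		return;
	file_list_lock();
	list_move(&file->f_u.fu_list, list);
	file_list_unlock();
}

/* final fput(): take it back off again */
void file_kill(struct file *file)
{
	if (!list_empty(&file->f_u.fu_list)) {
		file_list_lock();
		list_del_init(&file->f_u.fu_list);
		file_list_unlock();
	}
}

So every open/close on every CPU bounces that one cacheline around,
which fits with file_move showing up in the profile further down.)
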
Mmm, I'm not wholly convinced the dcache side is a non-issue. Whilst I
don't have lockmeter stats to hand, the heavy time in __d_lookup
suggests to me that we may still have a problem there. I guess we could
fairly easily move the spinlocks out of line again to test this (or get
lockmeter upstream).
114076 total                    0.0545
 57766 default_idle           916.9206
 11869 prep_new_page           49.4542
  3830 find_trylock_page       67.1930
  2637 zap_pte_range            3.9125
  2486 strnlen_user            54.0435
  2018 do_page_fault            1.1941
  1940 do_wp_page               1.6973
  1869 __d_lookup               7.7231
  1331 page_remove_rmap         5.2196
  1287 do_no_page               1.6108
  1272 buffered_rmqueue         4.6423
  1160 __copy_to_user_ll       14.6835
  1027 _atomic_dec_and_lock    11.1630
   655 release_pages            1.9670
   644 do_path_lookup           1.6304
   630 schedule                 0.4046
   617 kunmap_atomic            7.7125
   573 __handle_mm_fault        0.7365
   548 free_hot_page           78.2857
   500 __copy_user_intel        3.3784
   483 copy_pte_range           0.5941
   482 page_address             2.9571
   478 file_move                9.1923
   441 do_anonymous_page        0.7424
   429 filemap_nopage           0.4450
   401 anon_vma_unlink          4.8902
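
(To illustrate why time in __d_lookup doesn't have to mean dcache_lock
contention: the lockless lookup path is roughly the sketch below --
again from memory, with the d_compare and revalidation details elided.
It walks the hash chain under rcu_read_lock() without touching
dcache_lock at all, but it still takes the per-dentry d_lock on a
candidate hit, and since spinlocks are currently inlined, any spinning
there gets charged straight to __d_lookup in a flat profile:

struct dentry *__d_lookup(struct dentry *parent, struct qstr *name)
{
	struct hlist_head *head = d_hash(parent, name->hash);
	struct dentry *found = NULL;
	struct hlist_node *node;
	struct dentry *dentry;

	rcu_read_lock();		/* no dcache_lock on this path */
	hlist_for_each_entry_rcu(dentry, node, head, d_hash) {
		if (dentry->d_name.hash != name->hash)
			continue;
		if (dentry->d_parent != parent)
			continue;

		/* still a per-dentry spinlock on the hit path */
		spin_lock(&dentry->d_lock);
		if (!d_unhashed(dentry) &&
		    dentry->d_name.len == name->len &&
		    !memcmp(dentry->d_name.name, name->name, name->len)) {
			atomic_inc(&dentry->d_count);
			found = dentry;
		}
		spin_unlock(&dentry->d_lock);
		if (found)
			break;
	}
	rcu_read_unlock();

	return found;
}

So those 1869 ticks could be d_lock spinning, or could just be cache
misses walking long hash chains -- a flat profile can't tell the two
apart, which is exactly why out-of-line spinlocks or lockmeter would
help here.)
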