Date: Sat, 25 Feb 2012 09:34:04 +0400
From: Konstantin Khlebnikov <khlebnikov@...nvz.org>
To: Tim Chen <tim.c.chen@...ux.intel.com>
CC: Hugh Dickins <hughd@...gle.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH v3 00/21] mm: lru_lock splitting

Tim Chen wrote:
> On Thu, 2012-02-23 at 17:51 +0400, Konstantin Khlebnikov wrote:
>> v3 changes:
>> * inactive-ratio reworked again, now it is always calculated from scratch
>> * hierarchical pte reference bits filter in the memory-cgroup reclaimer
>> * fixed two bugs in locking, found by Hugh Dickins
>> * locking functions slightly simplified
>> * new patch for isolated pages accounting
>> * new patch with lru interleaving
>>
>> This patchset is based on next-20120210
>>
>> git: https://github.com/koct9i/linux/commits/lruvec-v3
>>
>> ---
>
> I am seeing an improvement of about 7% in throughput in a workload where
> I am doing parallel reading of files that are mmaped. The contention on
> lru_lock used to be 13% in the cpu profile on the __pagevec_lru_add code
> path. Now lock contention on this path drops to about 0.6%. I have 40
> hyper-threading enabled cpu cores running 80 mmaped file reading
> processes.
>
> So initial testing of this patch set looks encouraging.

That's great!

>
> Tim
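[Editor's note: Tim's numbers come from a parallel mmap-read workload of the kind sketched below. This is an illustrative reconstruction, not Tim Chen's actual benchmark; the process count, the file arguments, and the helper name read_mapped_file are assumptions. The idea is simply that many processes faulting in pages of mapped files all funnel through __pagevec_lru_add, which takes the per-zone lru_lock that this patchset splits per lruvec.]

/*
 * Sketch of a parallel mmap-read stressor: NPROC forked readers each
 * mmap a file and touch one byte per page, so every page fault adds a
 * page to the LRU via the pagevec path and contends on lru_lock.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC 80	/* matches the 80 readers mentioned above */

static void read_mapped_file(const char *path)
{
	int fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		exit(1);
	}

	struct stat st;
	if (fstat(fd, &st) < 0) {
		perror("fstat");
		exit(1);
	}

	char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	/* Touch one byte per page so each page is faulted in and
	 * queued onto the LRU lists. */
	long page = sysconf(_SC_PAGESIZE);
	volatile char sum = 0;
	for (off_t off = 0; off < st.st_size; off += page)
		sum += map[off];

	munmap(map, st.st_size);
	close(fd);
}

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <file> [<file> ...]\n", argv[0]);
		return 1;
	}

	for (int i = 0; i < NPROC; i++) {
		if (fork() == 0) {
			/* Spread readers across the input files. */
			read_mapped_file(argv[1 + i % (argc - 1)]);
			_exit(0);
		}
	}

	while (wait(NULL) > 0)
		;
	return 0;
}

[Profiling such a run with a sampling profiler, e.g. "perf record -g" followed by "perf report", is one way to see what share of cycles is spent in __pagevec_lru_add spinning on lru_lock, which is presumably how the 13% vs. 0.6% figures above were obtained.]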