Message-ID: <51A665F7.6050208@hp.com>
Date: Wed, 29 May 2013 16:32:55 -0400
From: Waiman Long <waiman.long@...com>
To: Simo Sorce <simo@...hat.com>
CC: Dave Chinner <david@...morbit.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Jeff Layton <jlayton@...hat.com>,
Miklos Szeredi <mszeredi@...e.cz>, Ian Kent <raven@...maw.net>,
Sage Weil <sage@...tank.com>, Steve French <sfrench@...ba.org>,
Trond Myklebust <Trond.Myklebust@...app.com>,
Eric Paris <eparis@...hat.com>, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, autofs@...r.kernel.org,
ceph-devel@...r.kernel.org, linux-cifs@...r.kernel.org,
linux-nfs@...r.kernel.org,
"Chandramouleeswaran, Aswin" <aswin@...com>,
"Norton, Scott J" <scott.norton@...com>,
Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH 0/3 v3] dcache: make it more scalable on large system
On 05/29/2013 12:18 PM, Simo Sorce wrote:
> On Wed, 2013-05-29 at 11:55 -0400, Waiman Long wrote:
>
>> My patch set consists of 2 different changes. The first one is to avoid
>> taking the d_lock lock when updating the reference count in the
>> dentries. This particular change also benefits some other workloads that
>> are filesystem intensive. One particular example is the short workload
>> in the AIM7 benchmark. One of the job types in the short workload is
>> "misc_rtns_1", which calls security functions like getpwnam(),
>> getpwuid(), and getgrgid() a couple of times. These functions open the
>> /etc/passwd or /etc/group files, read their content and close the files.
>> It is the intensive open/read/close sequence from multiple threads that
>> is causing 80%+ contention in the d_lock on a system with a large
>> number of cores.
> To be honest, a workload based on /etc/passwd or /etc/group is completely
> artificial; in actual usage, if you really have such an access pattern,
> you use nscd or sssd with their shared memory caches to remove most
> of the file access entirely.
> I have no beef with the rest, but repeated access to nsswitch information
> is not something you need to optimize at the file system layer, and it
> should not be brought up as a point in favor.
The misc_rtns_1 workload that I described here is just part of a larger
workload involving other activities. It represents just 1/17 of the
total jobs that were spawned. This particular job type, however,
dominates the run time because of the lock contention that it creates. I
agree that it is an artificial workload, as most benchmarks are. It is
certainly an exaggeration of what a real workload may be, but that doesn't
mean that similar contention will not happen in the real world,
especially as the trend is toward packing more and more CPU cores into
the same machine.
Regards,
Longman
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/