Message-Id: <1470209710-30022-1-git-send-email-kernel@kyup.com>
Date: Wed, 3 Aug 2016 10:35:10 +0300
From: Nikolay Borisov <kernel@...p.com>
To: jlayton@...chiereds.net, bfields@...ldses.org
Cc: viro@...iv.linux.org.uk, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, ebiederm@...ssion.com,
containers@...ts.linux-foundation.org,
Nikolay Borisov <kernel@...p.com>
Subject: [PATCH v2] locks: Filter /proc/locks output on proc pid ns
On busy container servers, reading /proc/locks shows the locks
created by all clients. This can cause large latency spikes: in my
case I observed lsof taking up to 5-10 seconds while processing around
50k locks. Fix this by showing only the locks created in the same pid
namespace as the one in which that instance of proc was mounted. When
/proc/locks is read from the init_pid_ns, show everything.
Signed-off-by: Nikolay Borisov <kernel@...p.com>
---
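
For reference, a minimal userspace sketch (illustration only, not part of
the patch; the buffer size and timing approach are arbitrary) that times a
full sequential read of /proc/locks, which is roughly the work lsof has to
do before it even starts parsing the entries:

/*
 * Illustration only: time a full read of /proc/locks.
 */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	size_t total = 0;
	struct timespec start, end;
	int fd = open("/proc/locks", O_RDONLY);

	if (fd < 0) {
		perror("open /proc/locks");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		total += n;
	clock_gettime(CLOCK_MONOTONIC, &end);
	close(fd);

	printf("read %zu bytes in %.3f s\n", total,
	       (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9);
	return 0;
}
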
fs/locks.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/locks.c b/fs/locks.c
index ee1b15f6fc13..751673d7f7fc 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2648,9 +2648,15 @@ static int locks_show(struct seq_file *f, void *v)
 {
 	struct locks_iterator *iter = f->private;
 	struct file_lock *fl, *bfl;
+	struct pid_namespace *proc_pidns = file_inode(f->file)->i_sb->s_fs_info;
+	struct pid_namespace *current_pidns = task_active_pid_ns(current);
 
 	fl = hlist_entry(v, struct file_lock, fl_link);
 
+	if ((current_pidns != &init_pid_ns) && fl->fl_nspid
+	    && (proc_pidns != ns_of_pid(fl->fl_nspid)))
+		return 0;
+
 	lock_get_status(f, fl, iter->li_pos, "");
 
 	list_for_each_entry(bfl, &fl->fl_block, fl_block)
--
2.5.0