Message-ID: <456a056e-453e-71b0-0f9e-03511b9f56b1@google.com>
Date:   Wed, 15 Dec 2021 14:00:11 -0500
From:   Barret Rhoden <brho@...gle.com>
To:     "Eric W. Biederman" <ebiederm@...ssion.com>
Cc:     Christian Brauner <christian.brauner@...ntu.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Alexey Gladkov <legion@...nel.org>,
        William Cohen <wcohen@...hat.com>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Alexey Dobriyan <adobriyan@...il.com>,
        Chris Hyser <chris.hyser@...cle.com>,
        Peter Collingbourne <pcc@...gle.com>,
        Xiaofeng Cao <caoxiaofeng@...ong.com>,
        David Hildenbrand <david@...hat.com>,
        Cyrill Gorcunov <gorcunov@...il.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] rlimits: do not grab tasklist_lock for do_prlimit on
 current

Hi -

On 12/13/21 5:34 PM, Eric W. Biederman wrote:
> Do you have any numbers?  As the entire point of this change is
> performance it would be good to see how the performance changes.
> 
> Especially as a read_lock should not be too bad as it allows sharing,
> nor do I expect reading or writing the rlimit values to be particularly
> frequent.  So some insight into what kinds of userspace patterns make
> this a problem would be nice.

This was motivated by slowdowns we observed on a few machines running 
tests in a cluster.  AFAIK, there were a lot of small tests, many of 
which mucked with process-management syscalls while forking and joining 
other tasks.

Based on a cycles profile, it looked like ~87% of the time was spent in 
the kernel, ~42% of which was just trying to get *some* spinlock 
(queued_spin_lock_slowpath, not necessarily the tasklist_lock).

The big offenders (rough percentages of overall cycles in the trace):

- do_wait 11%
- setpriority 8% (potential future patch)
- kill 8%
- do_exit 5%
- clone 3%
- prlimit64 2%   (this patch)
- getrlimit 1%   (this patch)

Even though do_prlimit was using a read_lock, it was still contending on 
the internal queued_spin_lock.
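
For reference, that's because the qrwlock read path is only lock-free 
on its fast path.  A simplified sketch of the read-lock side 
(paraphrased from include/asm-generic/qrwlock.h and 
kernel/locking/qrwlock.c; details vary by kernel version):

static inline void queued_read_lock(struct qrwlock *lock)
{
	int cnts;

	/* fast path: bump the reader count; fine if no writer pending */
	cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts);
	if (likely(!(cnts & _QW_WMASK)))
		return;

	/*
	 * slow path: queue up on lock->wait_lock, an internal queued
	 * spinlock -- this is what shows up as
	 * queued_spin_lock_slowpath in the profile
	 */
	queued_read_lock_slowpath(lock);
}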

The prlimit calls were only ~3% of the total.  This patch was more of an 
"oh, this doesn't *need* the tasklist_lock for p == current - can we 
fix that?" thing.  I actually don't even know how often those prlimit64 
calls had p == current.
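
For concreteness, the shape of the conditional locking in question 
(a simplified sketch, not the actual patch; the validation checks and 
security hook are omitted):

static int do_prlimit(struct task_struct *tsk, unsigned int resource,
		      struct rlimit *new_rlim, struct rlimit *old_rlim)
{
	/*
	 * current can't race with its own exit, so tsk->signal is
	 * stable and tasklist_lock is only needed for tsk != current
	 */
	bool need_lock = (tsk != current);
	struct rlimit *rlim;

	if (need_lock)
		read_lock(&tasklist_lock);

	rlim = tsk->signal->rlim + resource;
	task_lock(tsk->group_leader);
	if (old_rlim)
		*old_rlim = *rlim;
	if (new_rlim)
		*rlim = *new_rlim;
	task_unlock(tsk->group_leader);

	if (need_lock)
		read_unlock(&tasklist_lock);
	return 0;
}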

setpriority was a bigger one too - is the tasklist_lock only needed for 
the PGRP ops?  (I thought so based on where the tasklist_lock is 
write-locked and the comment on task_pgrp().)  If so, I could do that 
in another patch.
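
Something like this for the PGRP arm of setpriority (a sketch adapted 
from kernel/sys.c, untested - today the read_lock is taken before the 
switch and would move into just this case):

	case PRIO_PGRP:
		if (who)
			pgrp = find_vpid(who);
		else
			pgrp = task_pgrp(current);
		/*
		 * only the pgrp walk needs the tasklist_lock; the
		 * PRIO_PROCESS and PRIO_USER cases can stay RCU-only
		 */
		read_lock(&tasklist_lock);
		do_each_pid_thread(pgrp, PIDTYPE_PGID, p) {
			error = set_one_prio(p, niceval, error);
		} while_each_pid_thread(pgrp, PIDTYPE_PGID, p);
		read_unlock(&tasklist_lock);
		break;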

> This change is a bit scary as it makes taking a lock conditional and
> increases the probability of causing a locking mistake.

I definitely see how making the code more brittle might not be worth the 
small win.  If this is more "damage" than "cleanup", then I can drop it.

> If you are going to make this change I would say that do_prlimit should
> become static and taking the tasklist_lock should move into prlimit64.
> 
> 
> Looking a little closer it looks like that update_rlimit_cpu should use
> lock_task_sighand, and once lock_task_sighand is used there is actually
> no need for the tasklist_lock at all.  As holding the reference to tsk
> guarantees that tsk->signal remains valid.

Maybe do both?  Unconditionally grab lock_task_sighand() (instead of 
the tasklist_lock) in prlimit64.
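
Something like this hypothetical wrapper (untested sketch; in this 
scheme do_prlimit would drop its own read_lock(&tasklist_lock)):

static int prlimit_locked(struct task_struct *tsk, unsigned int resource,
			  struct rlimit *new_rlim, struct rlimit *old_rlim)
{
	unsigned long flags;
	int ret;

	/* pins tsk->sighand; fails only if the task is exiting */
	if (!lock_task_sighand(tsk, &flags))
		return -ESRCH;

	/* tsk->signal stays valid here, so no tasklist_lock needed */
	ret = do_prlimit(tsk, resource, new_rlim, old_rlim);

	unlock_task_sighand(tsk, &flags);
	return ret;
}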

> So I completely agree there are cleanups that can happen in this area.
> Please make those and show numbers in how they improve things, instead
> of making the code worse with a conditional lock.

Unfortunately, I can't easily get a "before and after" on this change. 
The motivating issue popped up sporadically, but getting it to happen in 
a setup under *my* control is organizationally a pain.  So I understand 
if you wouldn't want the patch for that reason.  Ideally, the changes 
would make the code easier to follow and clearer about why we're locking.

If you're OK with two patches that 1) grab lock_task_sighand in 
prlimit64 and 2) move the read_lock in {set,get}priority into the PGRP 
cases (assuming I was correct on that), I can send them out.

If it's too much risk or ugliness for too little gain (in code quality 
or performance), I'm fine with dropping it.

Thanks for looking,

Barret
