Open Source and information security mailing list archives
 
Date:	Wed, 11 Jun 2014 17:14:55 -0700
From:	ebiederm@...ssion.com (Eric W. Biederman)
To:	paulmck@...ux.vnet.ibm.com
Cc:	chiluk@...onical.com, Rafael Tinoco <rafael.tinoco@...onical.com>,
	linux-kernel@...r.kernel.org, davem@...emloft.net,
	Christopher Arges <chris.j.arges@...onical.com>,
	Jay Vosburgh <jay.vosburgh@...onical.com>
Subject: Re: Possible netns creation and execution performance/scalability regression since v3.8 due to rcu callbacks being offloaded to multiple cpus

"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> writes:

> On Wed, Jun 11, 2014 at 04:12:15PM -0700, Eric W. Biederman wrote:
>> "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com> writes:
>> 
>> > On Wed, Jun 11, 2014 at 01:46:08PM -0700, Eric W. Biederman wrote:
>> >> On the chance it is dropping the old nsproxy, which calls synchronize_rcu
>> >> in switch_task_namespaces, that is causing you problems, I have attached
>> >> a patch that changes from rcu_read_lock to task_lock for code that
>> >> calls task_nsproxy from a different task.  The code should be safe,
>> >> and it should be an unquestioned performance improvement, but I have only
>> >> compile tested it.
>> >> 
>> >> If you can try the patch it will tell us if the problem is the rcu
>> >> access in switch_task_namespaces (the only one I am aware of in network
>> >> namespace creation) or if the problematic rcu case is somewhere else.
>> >> 
>> >> If nothing else, knowing which rcu accesses are causing the slowdown
>> >> seems important at the end of the day.
>> >> 
>> >> Eric
>> >> 
>> >
>> > If this is the culprit, another approach would be to use workqueues from
>> > RCU callbacks.  The following (untested, probably does not even build)
>> > patch illustrates one such approach.
>> 
>> For reference, the only reason we are using rcu_read_lock today for nsproxy
>> is an old lock ordering problem that no longer exists.
>> 
>> I can say that in some workloads setns is a bit heavy today because of
>> the synchronize_rcu, and setns is more important than I had previously
>> thought, because pthreads break the classic unix ability to do things in
>> your process after fork() (sigh).
>> 
>> Today daemonize is gone, and notifying the parent process with a signal
>> relies on task_active_pid_ns, which does not use nsproxy.  So the old
>> lock ordering problem/race is gone.
>> 
>> That is the description of what was happening when the code switched
>> from task_lock to rcu_read_lock to protect nsproxy.
>
> OK, never mind, then!  ;-)

I appreciate you posting your approach.  I just figured I should do
my homework, and verify my fuzzy memory.

Who knows, there might be different performance problems with my
approach.  But I am hoping this is one of those happy instances where we
can just make everything simpler.

Eric
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
