lists.openwall.net | Open Source and information security mailing list archives
Date: Sun, 24 May 2015 10:35:48 +0800
From: Zefan Li <lizefan@...wei.com>
To: Tejun Heo <tj@...nel.org>
CC: Peter Zijlstra <peterz@...radead.org>, <cgroups@...r.kernel.org>, <mingo@...hat.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/3] sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem

On 2015/5/22 4:39, Tejun Heo wrote:
> Hello, Li.
>
> On Wed, May 20, 2015 at 06:05:37PM +0800, Zefan Li wrote:
>>> The latency is bound by synchronize_sched_expedited(). Given the way
>>> cgroups are used in majority of setups (process migration happening
>>> only during service / session setups), I think this should be okay.
>>
>> Actually process migration can happen quite frequently, for example in
>> Android phones, and that's why Google had an out-of-tree patch to remove
>> the synchronize_rcu() in that path, which turned out to be buggy.
>
> It's still not a very frequent operation tho. We're talking about
> users switching fore/background jobs here and the expedited
> synchronization w/ preemption enabled doesn't take much time. In
> addition, as it currently stands, android is doing memory charge
> immigration on each fore/background switches. I'm pretty doubtful
> this would make any difference.

I did some testing on my laptop, moving a task between 2 cgroups
100,000 times with one or two threads:

                  1T       2T
  orig            3.36s    3.65s
  orig+tj         3.55s    6.31s
  orig+sync_rcu  16.69s   28.47s   (only 1,000 times)

The overhead looks acceptable.
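The benchmark described above can be sketched in userspace roughly as follows. This is a minimal illustration, not Zefan Li's actual test program: it assumes two pre-created cgroup v1 CPU cgroups at the hypothetical paths `/sys/fs/cgroup/cpu/a` and `/sys/fs/cgroup/cpu/b`, and migrates a task by writing its PID to `cgroup.procs`, which is the standard cgroup v1 migration interface. Running it for real requires root and an existing cgroup hierarchy.

```python
import os
import time

def migrate(pid, cgroup_procs_path):
    """Move a task into a cgroup by writing its PID to that
    cgroup's cgroup.procs file (the cgroup v1 migration interface)."""
    with open(cgroup_procs_path, "w") as f:
        f.write(str(pid))

def bench(fn, iterations):
    """Call fn() `iterations` times and return the elapsed wall time
    in seconds, using a monotonic clock."""
    start = time.monotonic()
    for _ in range(iterations):
        fn()
    return time.monotonic() - start

if __name__ == "__main__":
    # Hypothetical cgroup paths: both cgroups must already exist and the
    # caller needs write permission on cgroup.procs (typically root).
    a = "/sys/fs/cgroup/cpu/a/cgroup.procs"
    b = "/sys/fs/cgroup/cpu/b/cgroup.procs"
    pid = os.getpid()

    if os.path.exists(a) and os.path.exists(b):
        # 50,000 round trips = 100,000 migrations, matching the
        # "100,000 times" figure quoted in the test above.
        elapsed = bench(lambda: (migrate(pid, a), migrate(pid, b)), 50000)
        print("elapsed: %.2fs" % elapsed)
    else:
        print("cgroup paths not present; skipping migration loop")
```

Each `write()` to `cgroup.procs` enters the kernel's task-attach path, which is exactly where the proposed global percpu_rwsem (and, in the `sync_rcu` variant, the much slower `synchronize_rcu()`) sits, so a tight loop like this exposes the per-migration overhead the table compares.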