Message-ID: <661de9471001270534ve6f7bf0nb58ef58f9a6ff10e@mail.gmail.com>
Date:	Wed, 27 Jan 2010 19:04:55 +0530
From:	Balbir Singh <balbir@...ux.vnet.ibm.com>
To:	mingo@...hat.com, hpa@...or.com, anton@...ba.org,
	linux-kernel@...r.kernel.org, a.p.zijlstra@...llo.nl,
	tglx@...utronix.de, mingo@...e.hu, akpm@...ux-foundation.org
Cc:	linux-tip-commits@...r.kernel.org
Subject: Re: [tip:sched/urgent] sched: cpuacct: Use bigger percpu counter 
	batch values for stats counters

On Wed, Jan 27, 2010 at 6:45 PM, tip-bot for Anton Blanchard
<anton@...ba.org> wrote:
> Commit-ID:  43f85eab1411905afe5db510fbf9841b516e7e6a
> Gitweb:     http://git.kernel.org/tip/43f85eab1411905afe5db510fbf9841b516e7e6a
> Author:     Anton Blanchard <anton@...ba.org>
> AuthorDate: Mon, 18 Jan 2010 15:41:42 +1100
> Committer:  Ingo Molnar <mingo@...e.hu>
> CommitDate: Wed, 27 Jan 2010 08:34:38 +0100
>
> sched: cpuacct: Use bigger percpu counter batch values for stats counters
>
> When CONFIG_VIRT_CPU_ACCOUNTING and CONFIG_CGROUP_CPUACCT are enabled we
> can call cpuacct_update_stats with values much larger than
> percpu_counter_batch. This means the call to percpu_counter_add will
> always add to the global count which is protected by a spinlock and we
> end up with a global spinlock in the scheduler.
>
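(For anyone reading along without the percpu_counter source handy:
__percpu_counter_add() only touches the lock-free per-cpu counter while
the accumulated delta stays within the batch, and falls back to the
spinlock-protected global count otherwise. From memory, simplified, it
looks roughly like this:)

  void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
  {
          s64 count;
          s32 *pcount;
          int cpu = get_cpu();

          pcount = per_cpu_ptr(fbc->counters, cpu);
          count = *pcount + amount;
          if (count >= batch || count <= -batch) {
                  /* delta reached the batch: fold into the global count under the lock */
                  spin_lock(&fbc->lock);
                  fbc->count += count;
                  *pcount = 0;
                  spin_unlock(&fbc->lock);
          } else {
                  /* common case: update only the local per-cpu counter, no lock taken */
                  *pcount = count;
          }
          put_cpu();
  }

So when VIRT_CPU_ACCOUNTING feeds in cputime values that are always
larger than the default batch, every call takes the spin_lock() path.
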
> Based on an idea by KOSAKI Motohiro, this patch scales the batch value by
> cputime_one_jiffy such that we have the same batch limit as we would if
> CONFIG_VIRT_CPU_ACCOUNTING was disabled. His patch did this once at boot
> but that initialisation happened too early on PowerPC (before time_init)
> and it was never updated at runtime as a result of a hotplug cpu
> add/remove.
>
> This patch instead scales percpu_counter_batch by cputime_one_jiffy at
> runtime, which keeps the batch correct even after cpu hotplug operations.
> We cap it at INT_MAX in case of overflow.
>
> For architectures that do not support CONFIG_VIRT_CPU_ACCOUNTING,
> cputime_one_jiffy is the constant 1 and gcc is smart enough to optimise
> min(s32 percpu_counter_batch, INT_MAX) to just percpu_counter_batch at
> least on x86 and PowerPC. So there is no need to add an #ifdef.
>
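(Illustrative aside, not part of the patch: with cputime_one_jiffy being
the compile-time constant 1, the expression the compiler sees is

  batch = min_t(long, percpu_counter_batch * 1, INT_MAX);

and since percpu_counter_batch is an int it can never exceed INT_MAX, so
the min_t() folds away to plain percpu_counter_batch.)
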
> On a 64 thread PowerPC box with CONFIG_VIRT_CPU_ACCOUNTING and
> CONFIG_CGROUP_CPUACCT enabled, a context switch microbenchmark is 234x
> faster and almost matches a CONFIG_CGROUP_CPUACCT disabled kernel:
>
> CONFIG_CGROUP_CPUACCT disabled:         16906698 ctx switches/sec
> CONFIG_CGROUP_CPUACCT enabled:             61720 ctx switches/sec
> CONFIG_CGROUP_CPUACCT + patch:          16663217 ctx switches/sec
>
> Tested with:
>
>  wget http://ozlabs.org/~anton/junkcode/context_switch.c
>  make context_switch
>  for i in `seq 0 63`; do taskset -c $i ./context_switch & done
>  vmstat 1
>
> Signed-off-by: Anton Blanchard <anton@...ba.org>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> LKML-Reference: <20100118044142.GS12666@...ten>
> Signed-off-by: Ingo Molnar <mingo@...e.hu>
> ---
>  kernel/sched.c |    4 +++-
>  1 files changed, 3 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 3a8fb30..8f94138 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -10906,6 +10906,7 @@ static void cpuacct_update_stats(struct task_struct *tsk,
>                enum cpuacct_stat_index idx, cputime_t val)
>  {
>        struct cpuacct *ca;
> +       int batch;
>
>        if (unlikely(!cpuacct_subsys.active))
>                return;
> @@ -10913,8 +10914,9 @@ static void cpuacct_update_stats(struct task_struct *tsk,
>        rcu_read_lock();
>        ca = task_ca(tsk);
>
> +       batch = min_t(long, percpu_counter_batch * cputime_one_jiffy, INT_MAX);
>        do {
> -               percpu_counter_add(&ca->cpustat[idx], val);
> +               __percpu_counter_add(&ca->cpustat[idx], val, batch);
>                ca = ca->parent;
>        } while (ca);
>        rcu_read_unlock();
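
(Putting the two hunks together, the resulting function reads roughly as
below; it is reconstructed from the context lines in the diff, so treat
it as a sketch rather than a verbatim copy of kernel/sched.c:)

  static void cpuacct_update_stats(struct task_struct *tsk,
                  enum cpuacct_stat_index idx, cputime_t val)
  {
          struct cpuacct *ca;
          int batch;

          if (unlikely(!cpuacct_subsys.active))
                  return;

          rcu_read_lock();
          ca = task_ca(tsk);

          /* scale the default batch by one jiffy's worth of cputime, cap at INT_MAX */
          batch = min_t(long, percpu_counter_batch * cputime_one_jiffy, INT_MAX);
          do {
                  /* pass the scaled batch so large cputime deltas still stay per-cpu */
                  __percpu_counter_add(&ca->cpustat[idx], val, batch);
                  ca = ca->parent;
          } while (ca);
          rcu_read_unlock();
  }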

IIRC, Andrew picked up this patch as well and applied some checkpatch
fixes too.

Balbir
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
