Message-ID: <YFhLCPzKlE2uk46k@hirez.programming.kicks-ass.net>
Date: Mon, 22 Mar 2021 08:45:12 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] psi: reduce calls to sched_clock() in psi
On Sun, Mar 21, 2021 at 01:51:56PM -0700, Shakeel Butt wrote:
> We noticed that the cost of psi increases with the number of cgroup
> levels. In particular, the cost of cpu_clock() sticks out, as the
> kernel calls it multiple times while traversing up the cgroup tree.
> This patch reduces the calls to cpu_clock().
>
> Performed perf bench on Intel Broadwell with 3 levels of cgroup.
>
> Before the patch:
>
> $ perf bench sched all
> # Running sched/messaging benchmark...
> # 20 sender and receiver processes per group
> # 10 groups == 400 processes run
>
> Total time: 0.747 [sec]
>
> # Running sched/pipe benchmark...
> # Executed 1000000 pipe operations between two processes
>
> Total time: 3.516 [sec]
>
> 3.516689 usecs/op
> 284358 ops/sec
>
> After the patch:
>
> $ perf bench sched all
> # Running sched/messaging benchmark...
> # 20 sender and receiver processes per group
> # 10 groups == 400 processes run
>
> Total time: 0.640 [sec]
>
> # Running sched/pipe benchmark...
> # Executed 1000000 pipe operations between two processes
>
> Total time: 3.329 [sec]
>
> 3.329820 usecs/op
> 300316 ops/sec
>
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
Thanks!
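The optimization described in the patch message above follows a general pattern: hoist a repeated clock read out of a per-level hierarchy walk and reuse one timestamp. Below is a minimal, hypothetical userspace sketch of that pattern; the names (fake_clock, struct group, update_per_level, update_hoisted) are illustrative stand-ins, not the kernel's psi code or its actual cpu_clock() interface.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for cpu_clock(); counts invocations so the savings show up. */
static int clock_calls;
static uint64_t fake_clock(void)
{
	return (uint64_t)++clock_calls;
}

/* Illustrative stand-in for a cgroup's psi state. */
struct group {
	struct group *parent;
	uint64_t last_update;
};

/* Before: one clock read per cgroup level (O(depth) clock calls). */
static void update_per_level(struct group *g)
{
	for (; g; g = g->parent)
		g->last_update = fake_clock();
}

/* After: read the clock once and reuse the timestamp up the tree. */
static void update_hoisted(struct group *g)
{
	uint64_t now = fake_clock();

	for (; g; g = g->parent)
		g->last_update = now;
}
```

With a three-level hierarchy, update_per_level() reads the clock three times while update_hoisted() reads it once; the deeper the cgroup tree, the larger the saving, which matches the reported scaling of psi cost with cgroup depth.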