Message-ID: <YT+ptg1Lf1kGLyUX@slm.duckdns.org>
Date: Mon, 13 Sep 2021 09:42:46 -1000
From: Tejun Heo <tj@...nel.org>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Feng Tang <feng.tang@...el.com>, Hillf Danton <hdanton@...a.com>,
LKML <linux-kernel@...r.kernel.org>,
Xing Zhengjun <zhengjun.xing@...ux.intel.com>,
Linux MM <linux-mm@...ck.org>
Subject: Re: [memcg] 45208c9105: aim7.jobs-per-min -14.0% regression

Hello,

On Mon, Sep 13, 2021 at 12:40:06PM -0700, Shakeel Butt wrote:
> I did one more experiment with same workload but with system_wq
> instead system_unbound_wq and there is clear difference in profile:
>
> With system_unbound_wq:
> - 4.63% 0.33% mmap [kernel.kallsyms] [k] queue_work_on
> 4.29% queue_work_on
> - __queue_work
> - 3.45% wake_up_process
> - try_to_wake_up
> - 2.46% ttwu_queue
> - 1.66% ttwu_do_activate
> - 1.14% activate_task
> - 0.97% enqueue_task_fair
> enqueue_entity
>
> With system_wq:
> - 1.36% 0.06% mmap [kernel.kallsyms] [k] queue_work_on
> 1.30% queue_work_on
> - __queue_work
> - 1.03% wake_up_process
> - try_to_wake_up
> - 0.97% ttwu_queue
> 0.66% ttwu_do_activate
>
> Tejun, is this expected? i.e. queuing work on system_wq has a
> different performance impact than on system_unbound_wq?
Yes, system_unbound_wq puts work items on globally shared worker pools
while system_wq is backed by per-cpu pools, so on a loaded system it
isn't too surprising that the overhead difference shows up.
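
For illustration, a minimal sketch of the two queueing paths being
compared (my_work/my_work_fn/queue_examples are made-up names, not from
the patch under test):

#include <linux/workqueue.h>

/* Hypothetical work function, purely for illustration. */
static void my_work_fn(struct work_struct *work)
{
	/* ... deferred work runs here ... */
}

static DECLARE_WORK(my_work, my_work_fn);

static void queue_examples(void)
{
	/*
	 * system_wq is backed by per-cpu worker pools: the item goes
	 * on the local CPU's pool, so the worker wakeup tends to stay
	 * on the queueing CPU.
	 */
	queue_work(system_wq, &my_work);

	/*
	 * system_unbound_wq is backed by shared unbound pools: the
	 * wakeup can land on any CPU in the pool, which is where the
	 * extra try_to_wake_up/ttwu_queue cost in the profiles above
	 * comes from.  (Either call alone; queueing an already-pending
	 * item again is a no-op.)
	 */
	queue_work(system_unbound_wq, &my_work);
}
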
Thanks.
--
tejun