Message-ID: <20230131055743.tsilxx5vfl6gx4dj@google.com>
Date: Tue, 31 Jan 2023 05:57:43 +0000
From: Shakeel Butt <shakeelb@...gle.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: kernel test robot <oliver.sang@...el.com>, oe-lkp@...ts.linux.dev,
lkp@...el.com, linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Marek Szyprowski <m.szyprowski@...sung.com>,
linux-mm@...ck.org, linux-trace-kernel@...r.kernel.org,
ying.huang@...el.com, feng.tang@...el.com,
zhengjun.xing@...ux.intel.com, fengwei.yin@...el.com
Subject: Re: [linus:master] [mm] f1a7941243: unixbench.score -19.2% regression
On Tue, Jan 31, 2023 at 05:45:21AM +0000, Matthew Wilcox wrote:
[...]
> > I ran perf and it seems like the percpu counter allocation is the
> > additional cost with this patch; see the report below. However, when I
> > made spawn a bit more sophisticated by adding an mmap() of a GiB, the
> > page table copy became the significant cost and there was no difference
> > with or without the given patch.
> >
> > I am now wondering whether this fork ping pong is an important enough
> > workload that we should revert the patch, or whether we should ignore it
> > for now and instead work on improving the performance of the
> > __alloc_percpu_gfp() code.
> >
> >
> > - 90.97% 0.06% spawn [kernel.kallsyms] [k] entry_SYSCALL_64_after_hwframe
> > - 90.91% entry_SYSCALL_64_after_hwframe
> > - 90.86% do_syscall_64
> > - 80.03% __x64_sys_clone
> > - 79.98% kernel_clone
> > - 75.97% copy_process
> > + 46.04% perf_event_init_task
> > - 21.50% copy_mm
> > - 10.05% mm_init
> > ----------------------> - 8.92% __percpu_counter_init
> > - 8.67% __alloc_percpu_gfp
> > - 5.70% pcpu_alloc
>
> 5.7% of our time spent in pcpu_alloc seems excessive. Are we contending
> on pcpu_alloc_mutex perhaps? Also, are you doing this on a 4-socket
> machine like the kernel test robot ran on?
I ran this on a 2-socket machine. I am not sure about pcpu_alloc_mutex
contention, but I doubt it, because I ran a single instance of the spawn
test, i.e. a single fork ping pong. For reference, a rough sketch of
where those per-fork percpu allocations come from is below.
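
A minimal sketch of the path the profile above is hitting, paraphrasing
the post-patch rss_stat setup in kernel/fork.c (the helper name is
illustrative; in the real tree this loop sits inline in mm_init()). It
shows why every fork() now pays for NR_MM_COUNTERS (4) separate percpu
allocations:

/*
 * Sketch only: paraphrases the rss_stat setup added by f1a7941243.
 * Each percpu_counter_init() ends up in __alloc_percpu_gfp(), so one
 * fork() does NR_MM_COUNTERS separate percpu allocations.
 */
#include <linux/gfp.h>
#include <linux/mm_types.h>
#include <linux/percpu_counter.h>

static int mm_init_rss_counters(struct mm_struct *mm)
{
	int i;

	for (i = 0; i < NR_MM_COUNTERS; i++) {
		if (percpu_counter_init(&mm->rss_stat[i], 0,
					GFP_KERNEL_ACCOUNT))
			goto fail;
	}
	return 0;

fail:
	/* Tear down whatever was initialized before the failure. */
	while (i-- > 0)
		percpu_counter_destroy(&mm->rss_stat[i]);
	return -ENOMEM;
}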
>
> We could cut down the number of calls to pcpu_alloc() by a factor of 4
> by having a pcpu_alloc_bulk() that would allocate all four RSS counters
> at once.
>
> Just throwing out ideas ...
Thanks, I will take a stab at pcpu_alloc_bulk() and will share the
results tomorrow.
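
For what it's worth, here is a rough sketch of the shape I have in mind:
a bulk initializer that does a single __alloc_percpu_gfp() covering all
the counters and then hands out one per-cpu slot to each counter. The
name is illustrative, and lockdep class setup and the CPU-hotplug list
handling are omitted for brevity:

/*
 * Hypothetical sketch of a bulk percpu_counter initializer: one
 * __alloc_percpu_gfp() call serves @nr_counters counters instead of
 * one allocation per counter.  The destroy path would need to know
 * that only the first counter owns the underlying percpu allocation.
 */
#include <linux/percpu.h>
#include <linux/percpu_counter.h>
#include <linux/spinlock.h>

static int percpu_counter_init_bulk(struct percpu_counter *fbc, s64 amount,
				    gfp_t gfp, u32 nr_counters)
{
	s32 __percpu *counters;
	u32 i;

	/* Single percpu allocation covering all counters' slots. */
	counters = __alloc_percpu_gfp(nr_counters * sizeof(*counters),
				      sizeof(*counters), gfp);
	if (!counters)
		return -ENOMEM;

	for (i = 0; i < nr_counters; i++) {
		raw_spin_lock_init(&fbc[i].lock);
		fbc[i].count = amount;
		fbc[i].counters = &counters[i];
	}
	return 0;
}

With something like this, mm_init() could set up all NR_MM_COUNTERS
rss_stat counters with one call and a single underlying percpu
allocation instead of four.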
thanks,
Shakeel