Message-ID: <ZOPMNyZ3gKb/bdjO@casper.infradead.org>
Date: Mon, 21 Aug 2023 21:42:31 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: linux-kernel@...r.kernel.org, dennis@...nel.org, tj@...nel.org,
cl@...ux.com, akpm@...ux-foundation.org, shakeelb@...gle.com,
linux-mm@...ck.org
Subject: Re: [PATCH 0/2] execve scalability issues, part 1
On Mon, Aug 21, 2023 at 10:28:27PM +0200, Mateusz Guzik wrote:
> > While these allocations remain a significant problem even with the
> > patch, the primary bottleneck shifts to:
> >
> > __pv_queued_spin_lock_slowpath+1
> > _raw_spin_lock_irqsave+57
> > folio_lruvec_lock_irqsave+91
> > release_pages+590
> > tlb_batch_pages_flush+61
> > tlb_finish_mmu+101
> > exit_mmap+327
> > __mmput+61
> > begin_new_exec+1245
> > load_elf_binary+712
> > bprm_execve+644
> > do_execveat_common.isra.0+429
> > __x64_sys_execve+50
> > do_syscall_64+46
> > entry_SYSCALL_64_after_hwframe+110
>
> I intend to do more work in this area to mostly sort it out, but I
> would not mind if someone else took a hammer to folio. :)
Funny you should ask ... these patches are from ~3 weeks ago. They may
or may not apply to a current tree, but I'm working on this area.
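
A minimal reproducer sketch for the contention in the trace above (the
/bin/true path, iteration count, and one-worker-per-CPU layout are just
illustrative choices, not anything from the original patches): each
worker fork+execs in a loop, so execve() and exit_mmap() run
concurrently on every CPU.

/*
 * exec-stress: one worker per online CPU, each repeatedly fork+exec'ing
 * /bin/true, to hammer the execve()/exit_mmap() path in parallel.
 * Illustrative only; the target binary and ITERS are arbitrary.
 */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERS 10000

static void worker(void)
{
	char *argv[] = { "/bin/true", NULL };

	for (int i = 0; i < ITERS; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			execv(argv[0], argv);
			_exit(1);	/* exec failed */
		}
		waitpid(pid, NULL, 0);
	}
	_exit(0);
}

int main(void)
{
	long ncpus = sysconf(_SC_NPROCESSORS_ONLN);

	for (long i = 0; i < ncpus; i++)
		if (fork() == 0)
			worker();
	while (wait(NULL) > 0)
		;	/* reap all workers */
	return 0;
}

Running something like that under perf on a many-core box should put
folio_lruvec_lock_irqsave near the top, matching the backtrace quoted
above.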