Message-ID: <YrC9CyOPamPneUOT@bombadil.infradead.org>
Date: Mon, 20 Jun 2022 11:31:39 -0700
From: Luis Chamberlain <mcgrof@...nel.org>
To: Aaron Lu <aaron.lu@...el.com>, Davidlohr Bueso <dave@...olabs.net>
Cc: Song Liu <song@...nel.org>, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, linux-mm@...ck.org, ast@...nel.org,
daniel@...earbox.net, peterz@...radead.org,
torvalds@...ux-foundation.org, rick.p.edgecombe@...el.com,
kernel-team@...com
Subject: Re: [PATCH v4 bpf-next 0/8] bpf_prog_pack followup

On Mon, Jun 20, 2022 at 07:11:45PM +0800, Aaron Lu wrote:
> Hi Song,
>
> On Fri, May 20, 2022 at 04:57:50PM -0700, Song Liu wrote:
>
> ... ...
>
> > The primary goal of bpf_prog_pack is to reduce iTLB miss rate and reduce
> > direct memory mapping fragmentation. This leads to non-trivial performance
> > improvements.
> >
> > For our web service production benchmark, bpf_prog_pack on 4kB pages
> > gives 0.5% to 0.7% more throughput than not using bpf_prog_pack.
> > bpf_prog_pack on 2MB pages gives 0.6% to 0.9% more throughput than
> > not using bpf_prog_pack. Note that 0.5% is a huge improvement for our
> > fleet. I believe this is also significant for other companies with
> > many thousands of servers.
> >
>
> I'm evaluating the performance impact of direct memory mapping
> fragmentation
BTW how exactly are you doing this?
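
The only low-tech approach I'm aware of is watching the DirectMap
counters in /proc/meminfo on x86, which report how much of the direct
map is currently backed by 4kB, 2MB and 1GB pages, so I'm curious if
you have something more precise. A before/after snapshot around the
load could look like this (load_bpf_progs is just a stand-in for
whatever loads the programs under test):

  # Snapshot the direct-map page-size breakdown before and after the
  # load; growth in DirectMap4k at the expense of DirectMap2M/1G means
  # large mappings were split, i.e. the direct map got more fragmented.
  grep DirectMap /proc/meminfo > before.txt
  ./load_bpf_progs    # stand-in: load the BPF programs under test
  grep DirectMap /proc/meminfo > after.txt
  diff before.txt after.txt
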
Luis
> and, seeing the above, I wonder: is the performance improvement
> mostly due to prog pack and hugepages rather than less direct mapping
> fragmentation?
>
> I can understand that when progs are packed together, the iTLB miss
> rate will be reduced and thus performance can be improved. But I don't
> immediately see how direct mapping fragmentation can impact
> performance, since the bpf code runs from the module alias addresses,
> not the direct mapping addresses, IIUC?
>
> I would appreciate it if you could shed some light on the performance
> impact direct mapping fragmentation can cause, thanks.