Message-ID: <YrBV8darrlmUnrHR@ziqianlu-Dell-Optiplex7000>
Date: Mon, 20 Jun 2022 19:11:45 +0800
From: Aaron Lu <aaron.lu@...el.com>
To: Song Liu <song@...nel.org>
CC: <linux-kernel@...r.kernel.org>, <bpf@...r.kernel.org>,
<linux-mm@...ck.org>, <ast@...nel.org>, <daniel@...earbox.net>,
<peterz@...radead.org>, <mcgrof@...nel.org>,
<torvalds@...ux-foundation.org>, <rick.p.edgecombe@...el.com>,
<kernel-team@...com>
Subject: Re: [PATCH v4 bpf-next 0/8] bpf_prog_pack followup

Hi Song,

On Fri, May 20, 2022 at 04:57:50PM -0700, Song Liu wrote:
... ...
> The primary goal of bpf_prog_pack is to reduce iTLB miss rate and reduce
> direct memory mapping fragmentation. This leads to non-trivial performance
> improvements.
>
> For our web service production benchmark, bpf_prog_pack on 4kB pages
> gives 0.5% to 0.7% more throughput than not using bpf_prog_pack.
> bpf_prog_pack on 2MB pages 0.6% to 0.9% more throughput than not using
> bpf_prog_pack. Note that 0.5% is a huge improvement for our fleet. I
> believe this is also significant for other companies with many thousand
> servers.
>

I'm evaluating the performance impact of direct memory mapping
fragmentation and, seeing the above, I wonder: is the performance
improvement mostly due to prog pack and hugepages rather than less
direct mapping fragmentation?
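
(For reference, the split can be observed from the DirectMap4k/2M/1G
counters in /proc/meminfo; below is a minimal userspace sketch, assuming
an x86_64 kernel that exposes these fields, to dump them before and
after loading the progs.)

#include <stdio.h>
#include <string.h>

/*
 * Dump the x86 direct map counters from /proc/meminfo. Comparing
 * DirectMap4k vs DirectMap2M/1G before and after loading BPF programs
 * gives a rough view of how much the direct mapping has been split.
 */
int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];

	if (!f) {
		perror("fopen /proc/meminfo");
		return 1;
	}

	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "DirectMap", strlen("DirectMap")))
			fputs(line, stdout);
	}

	fclose(f);
	return 0;
}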

I can understand that when progs are packed together, the iTLB miss rate
will be reduced and thus performance can be improved. But I don't
immediately see how direct mapping fragmentation can impact performance,
since the bpf code runs from the module alias addresses, not the direct
mapping addresses, IIUC?
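
(To separate the iTLB effect from the direct mapping effect, something
like the following could be used to count iTLB misses around a workload;
this is a minimal perf_event_open sketch with the kernel side included,
since the bpf progs run in the kernel. Treat it as an illustration, not
my exact setup.)

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
			    int cpu, int group_fd, unsigned long flags)
{
	return syscall(SYS_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HW_CACHE;
	/* iTLB read misses: cache-id | (op << 8) | (result << 16) */
	attr.config = PERF_COUNT_HW_CACHE_ITLB |
		      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
		      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
	attr.disabled = 1;
	/* count kernel side too, since bpf progs run in the kernel */
	attr.exclude_kernel = 0;

	fd = perf_event_open(&attr, 0 /* this task */, -1 /* any cpu */, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	ioctl(fd, PERF_EVENT_IOC_RESET, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

	/* ... run the workload of interest here ... */

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("iTLB misses: %llu\n", (unsigned long long)count);

	close(fd);
	return 0;
}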

I would appreciate it if you could shed some light on the performance
impact direct mapping fragmentation can cause, thanks.