Message-ID: <20191220171004.GA8596@dennisz-mbp.dhcp.thefacebook.com>
Date: Fri, 20 Dec 2019 11:10:04 -0600
From: Dennis Zhou <dennis@...nel.org>
To: Christopher Lameter <cl@...ux.com>
Cc: Tejun Heo <tj@...nel.org>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Björn Töpel <bjorn.topel@...il.com>,
bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: Percpu variables, benchmarking, and performance weirdness
On Fri, Dec 20, 2019 at 03:36:51PM +0000, Christopher Lameter wrote:
> On Fri, 20 Dec 2019, Tejun Heo wrote:
>
> > On Fri, Dec 20, 2019 at 10:34:20AM +0100, Jesper Dangaard Brouer wrote:
> > > > So, my question to the uarch/percpu folks out there: Why are percpu
> > > > accesses (%gs segment register) more expensive than regular global
> > > > variables in this scenario.
> > >
> > > I'm also VERY interested in knowing the answer to above question!?
> > > (Adding LKML to reach more people)
> >
> > No idea. One difference is that percpu accesses are through the vmap area,
> > which is mapped using 4k pages, while a global variable would be accessed
> > through the default linear mapping. Maybe you're getting hit by TLB
> > pressure?

bpf_redirect_info is a static percpu variable, so it should be accessed
via the linear mapping as well if we're embedding the first chunk.
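For reference, the pattern in question looks roughly like this (just a
sketch; the struct fields here are illustrative, not the exact layout in
net/core/filter.c):

	#include <linux/percpu.h>
	#include <linux/types.h>

	/* Illustrative stand-in for the real struct in net/core/filter.c. */
	struct bpf_redirect_info {
		u32 flags;
		u32 tgt_index;
		/* ... */
	};

	/* Statically defined percpu variable: it lives in the first percpu
	 * chunk, which the embed first-chunk allocator places in the linear
	 * mapping rather than in a vmalloc'd chunk. */
	DEFINE_PER_CPU(struct bpf_redirect_info, bpf_redirect_info);

	static void example_access(void)
	{
		/* Local access goes through the %gs segment base on x86-64. */
		struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info);

		ri->flags = 0;
	}
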
>
> And there are also some accesses from remote processors to the per cpu
> areas of other cpus. If those fall in the same cacheline, they will cause
> additional latencies.
>
I guess we could pad out certain structs like bpf_redirect_info, but
that isn't really ideal.
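For concreteness, what I mean by padding is something like this (just a
sketch; DEFINE_PER_CPU_SHARED_ALIGNED is one way to do it, explicit
padding of the struct would be the other):

	#include <linux/percpu.h>

	/* Cacheline-align the percpu variable so it doesn't share a
	 * cacheline with neighbouring percpu data that remote CPUs may be
	 * reading, at the cost of some wasted percpu space. */
	DEFINE_PER_CPU_SHARED_ALIGNED(struct bpf_redirect_info,
				      bpf_redirect_info);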