Message-ID: <CANn89iJ=D8o2kNRf6aL=Pa=V6m_fOr6bPBY67yjXFgwTCEAHag@mail.gmail.com>
Date: Thu, 1 Sep 2022 14:30:43 -0700
From: Eric Dumazet <edumazet@...gle.com>
To: Kuniyuki Iwashima <kuniyu@...zon.com>
Cc: Paolo Abeni <pabeni@...hat.com>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Kuniyuki Iwashima <kuni1840@...il.com>,
netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH v3 net-next 3/5] tcp: Access &tcp_hashinfo via net.
On Thu, Sep 1, 2022 at 2:25 PM Kuniyuki Iwashima <kuniyu@...zon.com> wrote:
>
> From: Paolo Abeni <pabeni@...hat.com>
> > /Me is thinking aloud...
> >
> > I'm wondering if the above has some measurable negative effect for
> > large deployments using only the main netns?
> >
> > Specifically, are net->ipv4.tcp_death_row and
> > net->ipv4.tcp_death_row->hashinfo already part of the working set
> > data for established sockets?
> > Would the above increase the WSS by 2 cache lines?
>
> Currently, the death_row and hashinfo are touched around tw sockets or
> connect(). If connections on the deployment are short-lived or frequently
> initiated by the host itself, they would be hot and included in the WSS.
>
> If the workload is a server with no active-close() sockets, or the
> connections are long-lived, then they might not be included in the WSS.
> But I think that is less likely than the former case if the deployment
> is large enough.
>
> If this change had a large impact, we could revert fbb8295248e1, which
> converted net->ipv4.tcp_death_row into a pointer for 0dad4087a86a, the
> change that tried to fire a TW timer after the netns is freed; but
> 0dad4087a86a has already been reverted.
The concern was the fast path.
Each incoming packet does a socket lookup.
Fetching hashinfo (instead of &tcp_hashinfo) through a dereference of a
field in 'struct net' might incur a new cache line miss.
Previously, the first cache line of tcp_hashinfo was enough to bring a
lot of fields into the cpu cache.
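
Roughly, the established lookup on receive goes from something like the
following (a simplified sketch based on this thread, not the exact diff;
the surrounding code and local variable setup are omitted):

    /* Old: &tcp_hashinfo is a link-time constant address, so no extra
     * load is needed to find the hash tables.
     */
    sk = __inet_lookup_established(net, &tcp_hashinfo,
                                   saddr, sport, daddr, hnum, dif, sdif);

to:

    /* New: one more pointer chase through 'struct net' on every
     * incoming packet, potentially touching an extra cache line.
     */
    struct inet_hashinfo *hinfo = net->ipv4.tcp_death_row->hashinfo;

    sk = __inet_lookup_established(net, hinfo,
                                   saddr, sport, daddr, hnum, dif, sdif);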