Message-ID: <20230111110043.036409d0@kernel.org>
Date: Wed, 11 Jan 2023 11:00:43 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: "Arinzon, David" <darinzon@...zon.com>
Cc: David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"Machulsky, Zorik" <zorik@...zon.com>,
"Matushevsky, Alexander" <matua@...zon.com>,
"Bshara, Saeed" <saeedb@...zon.com>,
"Bshara, Nafea" <nafea@...zon.com>,
"Saidi, Ali" <alisaidi@...zon.com>,
"Kiyanovski, Arthur" <akiyano@...zon.com>,
"Dagan, Noam" <ndagan@...zon.com>,
"Agroskin, Shay" <shayagr@...zon.com>,
"Itzko, Shahar" <itzko@...zon.com>,
"Abboud, Osama" <osamaabb@...zon.com>
Subject: Re: [PATCH V1 net-next 0/5] Add devlink support to ena
On Wed, 11 Jan 2023 08:58:46 +0000 Arinzon, David wrote:
> > I read it again - and I still don't know what you're doing.
> > It sounds like inline header length configuration, yet you also use LLQ
> > all over the place. And LLQ for ENA is documented as basically tx_push:
> >
> > - **Low Latency Queue (LLQ) mode or "push-mode":**
> >
> > Please explain this in a way which assumes zero Amazon-specific
> > knowledge :(
>
> Low Latency Queues (LLQ) is a mode of operation where the packet headers
> (up to a defined length) are written directly to device memory.
> So you are right that the description is similar to tx_push. However,
> in the ena driver this is not a configurable option, whereas
> ETHTOOL_A_RINGS_TX_PUSH configures whether to work in such a mode at all.
> If I understand the intent behind ETHTOOL_A_RINGS_TX_PUSH and the
> implementation in the driver that introduced the feature correctly, it
> refers to pushing the whole packet, not just the headers, which is not
> what the ena driver does.
>
> In this patchset, we allow configuring an extended entry size for the
> Low Latency Queue, i.e., enabling another, larger, pre-defined size to
> be used as the maximum length of packet headers pushed directly to
> device memory. The size itself is not configurable as an arbitrary
> value, therefore it is referred to as large LLQ.
>
> I hope this provides more clarification; if not, I'll be happy to elaborate further.
Thanks, the large missing piece in my understanding is still what
the user-visible impact of this change is. Without increasing
the LLQ entry size, a user who sends packets with long headers will:
a) see higher latency thru the NIC, but everything else is the same
b) see higher latency and lower overall throughput in terms of PPS
c) have limited access to offloads, because the device requires
   full access to headers via LLQ for some offloads
Which one of the three is the closest?