Message-ID: <ZzTeLA8BqGHTvUIQ@lizhi-Precision-Tower-5810>
Date: Wed, 13 Nov 2024 12:13:16 -0500
From: Frank Li <Frank.li@....com>
To: Ioana Ciornei <ioana.ciornei@....com>
Cc: Wei Fang <wei.fang@....com>, claudiu.manoil@....com,
vladimir.oltean@....com, xiaoning.wang@....com,
andrew+netdev@...n.ch, davem@...emloft.net, edumazet@...gle.com,
kuba@...nel.org, pabeni@...hat.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, imx@...ts.linux.dev
Subject: Re: [PATCH v3 net-next 4/5] net: enetc: add LSO support for i.MX95
ENETC PF
On Wed, Nov 13, 2024 at 04:39:11PM +0200, Ioana Ciornei wrote:
> On Tue, Nov 12, 2024 at 05:14:46PM +0800, Wei Fang wrote:
> > ENETC rev 4.1 supports large send offload (LSO), segmenting large TCP
> > and UDP transmit units into multiple Ethernet frames. To support LSO,
> > software needs to fill some auxiliary information in Tx BD, such as LSO
> > header length, frame length, LSO maximum segment size, etc.
> >
> > At 1Gbps link rate, TCP segmentation was tested using iperf3, and the
> > CPU performance before and after applying the patch was compared through
> > the top command. It can be seen that LSO saves a significant amount of
> > CPU cycles compared to software TSO.
> >
> > Before applying the patch:
> > %Cpu(s): 0.1 us, 4.1 sy, 0.0 ni, 85.7 id, 0.0 wa, 0.5 hi, 9.7 si
> >
> > After applying the patch:
> > %Cpu(s): 0.1 us, 2.3 sy, 0.0 ni, 94.5 id, 0.0 wa, 0.4 hi, 2.6 si
> >
> > Signed-off-by: Wei Fang <wei.fang@....com>
> > Reviewed-by: Frank Li <Frank.Li@....com>
> > ---
> > v2: no changes
> > v3: use enetc_skb_is_ipv6() helper function which is added in patch 2
> > ---
> > drivers/net/ethernet/freescale/enetc/enetc.c | 266 +++++++++++++++++-
> > drivers/net/ethernet/freescale/enetc/enetc.h | 15 +
> > .../net/ethernet/freescale/enetc/enetc4_hw.h | 22 ++
> > .../net/ethernet/freescale/enetc/enetc_hw.h | 15 +-
> > .../freescale/enetc/enetc_pf_common.c | 3 +
> > 5 files changed, 311 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
> > index 7c6b844c2e96..91428bb99f6d 100644
> > --- a/drivers/net/ethernet/freescale/enetc/enetc.c
> > +++ b/drivers/net/ethernet/freescale/enetc/enetc.c
> > @@ -527,6 +527,233 @@ static void enetc_tso_complete_csum(struct enetc_bdr *tx_ring, struct tso_t *tso
> > }
> > }
> >
> > +static inline int enetc_lso_count_descs(const struct sk_buff *skb)
> > +{
> > +	/* 4 BDs: 1 BD for the LSO header + 1 extended BD + 1 BD for
> > +	 * the linear area data excluding the LSO header, namely
> > +	 * skb_headlen(skb) - lso_hdr_len, + 1 BD for the gap.
> > +	 */
> > + return skb_shinfo(skb)->nr_frags + 4;
> > +}
>
> Why not move this static inline helper into the header?
>
> > +
> > +static int enetc_lso_get_hdr_len(const struct sk_buff *skb)
> > +{
> > + int hdr_len, tlen;
> > +
> > + tlen = skb_is_gso_tcp(skb) ? tcp_hdrlen(skb) : sizeof(struct udphdr);
> > + hdr_len = skb_transport_offset(skb) + tlen;
> > +
> > + return hdr_len;
> > +}
> > +
> > +static void enetc_lso_start(struct sk_buff *skb, struct enetc_lso_t *lso)
> > +{
> > + lso->lso_seg_size = skb_shinfo(skb)->gso_size;
> > + lso->ipv6 = enetc_skb_is_ipv6(skb);
> > + lso->tcp = skb_is_gso_tcp(skb);
> > + lso->l3_hdr_len = skb_network_header_len(skb);
> > + lso->l3_start = skb_network_offset(skb);
> > + lso->hdr_len = enetc_lso_get_hdr_len(skb);
> > + lso->total_len = skb->len - lso->hdr_len;
> > +}
> > +
> > +static void enetc_lso_map_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
> > + int *i, struct enetc_lso_t *lso)
> > +{
> > + union enetc_tx_bd txbd_tmp, *txbd;
> > + struct enetc_tx_swbd *tx_swbd;
> > + u16 frm_len, frm_len_ext;
> > + u8 flags, e_flags = 0;
> > + dma_addr_t addr;
> > + char *hdr;
> > +
> > + /* Get the fisrt BD of the LSO BDs chain */
>
> s/fisrt/first/
Wei Fang:
next time run
./scripts/checkpatch.pl --strict --codespell
Frank
>