Message-ID: <CAKgT0Uc66PS67HvrT8jzW0tCnzjRqaD1Hnm9-1YZ0XncTh_3BA@mail.gmail.com>
Date: Thu, 3 Dec 2020 10:16:11 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: David Awogbemila <awogbemila@...gle.com>
Cc: Netdev <netdev@...r.kernel.org>, Saeed Mahameed <saeed@...nel.org>,
Eric Dumazet <eric.dumazet@...il.com>,
Catherine Sullivan <csully@...gle.com>,
Yangchun Fu <yangchun@...gle.com>
Subject: Re: [PATCH net-next v9 4/4] gve: Add support for raw addressing in
the tx path

On Wed, Dec 2, 2020 at 10:24 AM David Awogbemila <awogbemila@...gle.com> wrote:
>
> From: Catherine Sullivan <csully@...gle.com>
>
> During TX, skbs' data addresses are dma_map'ed and passed to the NIC.
> This means that the device can perform DMA directly from these addresses
> and the driver does not have to copy the buffer content into
> pre-allocated buffers/qpls (as in qpl mode).
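Just to restate the mechanics for anyone following along: with raw
addressing the driver maps the skb data with the regular DMA API and
hands that address to the NIC instead of copying bytes into the FIFO,
i.e. roughly (generic sketch using the standard DMA API, not the
driver's actual code):

        dma_addr_t addr;

        addr = dma_map_single(tx->dev, skb->data, skb_headlen(skb),
                              DMA_TO_DEVICE);
        if (unlikely(dma_mapping_error(tx->dev, addr))) {
                /* drop the skb and bump the new dma_mapping_error counter */
                tx->dma_mapping_error++;
                dev_kfree_skb_any(skb);
                return NETDEV_TX_OK;
        }
        /* hand addr + len to the NIC via the pkt_desc */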
>
> Reviewed-by: Yangchun Fu <yangchun@...gle.com>
> Signed-off-by: Catherine Sullivan <csully@...gle.com>
> Signed-off-by: David Awogbemila <awogbemila@...gle.com>
> ---
> drivers/net/ethernet/google/gve/gve.h | 16 +-
> drivers/net/ethernet/google/gve/gve_adminq.c | 4 +-
> drivers/net/ethernet/google/gve/gve_desc.h | 8 +-
> drivers/net/ethernet/google/gve/gve_ethtool.c | 2 +
> drivers/net/ethernet/google/gve/gve_tx.c | 197 ++++++++++++++----
> 5 files changed, 185 insertions(+), 42 deletions(-)
>
> diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
> index 8aad4af2aa2b..9888fa92be86 100644
> --- a/drivers/net/ethernet/google/gve/gve.h
> +++ b/drivers/net/ethernet/google/gve/gve.h
> @@ -112,12 +112,20 @@ struct gve_tx_iovec {
> u32 iov_padding; /* padding associated with this segment */
> };
>
> +struct gve_tx_dma_buf {
> + DEFINE_DMA_UNMAP_ADDR(dma);
> + DEFINE_DMA_UNMAP_LEN(len);
> +};
> +
> /* Tracks the memory in the fifo occupied by the skb. Mapped 1:1 to a desc
> * ring entry but only used for a pkt_desc not a seg_desc
> */
> struct gve_tx_buffer_state {
> struct sk_buff *skb; /* skb for this pkt */
> - struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
> + union {
> + struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */
> + struct gve_tx_dma_buf buf;
> + };
> };
>
> /* A TX buffer - each queue has one */
> @@ -140,19 +148,23 @@ struct gve_tx_ring {
> __be32 last_nic_done ____cacheline_aligned; /* NIC tail pointer */
> u64 pkt_done; /* free-running - total packets completed */
> u64 bytes_done; /* free-running - total bytes completed */
> + u32 dropped_pkt; /* free-running - total packets dropped */
Generally I would probably use a u64 for any counter values. I'm not
sure what rate you will be moving packets at, but if something goes
wrong you are better off with the counter not rolling over every few
minutes.
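For reference, at 10 Mpps a u32 wraps after 2^32 / 10^7 ~= 430 seconds,
i.e. about 7 minutes. Just making it:

        u64 dropped_pkt; /* free-running - total packets dropped */

would keep it consistent with pkt_done/bytes_done above.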
> /* Cacheline 2 -- Read-mostly fields */
> union gve_tx_desc *desc ____cacheline_aligned;
> struct gve_tx_buffer_state *info; /* Maps 1:1 to a desc */
> struct netdev_queue *netdev_txq;
> struct gve_queue_resources *q_resources; /* head and tail pointer idx */
> + struct device *dev;
> u32 mask; /* masks req and done down to queue size */
> + u8 raw_addressing; /* use raw_addressing? */
>
> /* Slow-path fields */
> u32 q_num ____cacheline_aligned; /* queue idx */
> u32 stop_queue; /* count of queue stops */
> u32 wake_queue; /* count of queue wakes */
> u32 ntfy_id; /* notification block index */
> + u32 dma_mapping_error; /* count of dma mapping errors */
Since this is a counter, wouldn't it make more sense to place it up
with the other counters?
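i.e. something along these lines (sketch only, also picking up the u64
change from my earlier comment):

        u64 pkt_done; /* free-running - total packets completed */
        u64 bytes_done; /* free-running - total bytes completed */
        u64 dropped_pkt; /* free-running - total packets dropped */
        u64 dma_mapping_error; /* free-running - total dma mapping errors */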
Looking over the rest of the patch, it seems fine to me. The counters
were the only thing that had me a bit concerned.