Message-Id: <20200914173224.692707-3-anthony.l.nguyen@intel.com>
Date: Mon, 14 Sep 2020 10:32:21 -0700
From: Tony Nguyen <anthony.l.nguyen@...el.com>
To: davem@...emloft.net
Cc: Li RongQing <lirongqing@...du.com>, netdev@...r.kernel.org,
nhorman@...hat.com, sassmann@...hat.com,
jeffrey.t.kirsher@...el.com, anthony.l.nguyen@...el.com,
kernel test robot <lkp@...el.com>,
Jakub Kicinski <kuba@...nel.org>,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
Aaron Brown <aaron.f.brown@...el.com>
Subject: [net-next v2 2/5] i40e: optimise prefetch page refcount
From: Li RongQing <lirongqing@...du.com>
Originally, the refcount of the rx_buffer page was incremented here, so
prefetchw was needed. Since commit 1793668c3b8c ("i40e/i40evf: Update
code to better handle incrementing page count"), the refcount is no
longer incremented every time, so change prefetchw to prefetch.
Now the prefetch mainly serves page_address(), which only accesses
struct page when WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL is defined;
otherwise it computes the address from the page's offset, so prefetch
the struct page conditionally.
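For context, a rough sketch of that distinction (the name
page_address_sketch is hypothetical and the body is simplified from the
include/linux/mm.h behaviour, not the exact kernel code):

        /* Sketch only, assumes kernel headers; not the real page_address(). */
        static inline void *page_address_sketch(struct page *page)
        {
        #if defined(WANT_PAGE_VIRTUAL)
                /* Address is stored in struct page itself, so the
                 * struct page cache line is read -- prefetching helps.
                 */
                return page->virtual;
        #else
                /* Address is computed from the page's position in the
                 * memmap array; struct page memory is never touched,
                 * so prefetching it would be wasted work.
                 */
                return __va(PFN_PHYS(page_to_pfn(page)));
        #endif
        }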
Jakub suggested defining prefetch_page_address() in a common header.
Reported-by: kernel test robot <lkp@...el.com>
Suggested-by: Jakub Kicinski <kuba@...nel.org>
Signed-off-by: Li RongQing <lirongqing@...du.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@...el.com>
Tested-by: Aaron Brown <aaron.f.brown@...el.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
---
drivers/net/ethernet/intel/i40e/i40e_txrx.c | 2 +-
include/linux/prefetch.h | 8 ++++++++
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 91ab824926b9..8500e1c1a16b 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1953,7 +1953,7 @@ static struct i40e_rx_buffer *i40e_get_rx_buffer(struct i40e_ring *rx_ring,
struct i40e_rx_buffer *rx_buffer;
rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
- prefetchw(rx_buffer->page);
+ prefetch_page_address(rx_buffer->page);
/* we are reusing so sync this buffer for CPU use */
dma_sync_single_range_for_cpu(rx_ring->dev,
diff --git a/include/linux/prefetch.h b/include/linux/prefetch.h
index 13eafebf3549..b83a3f944f28 100644
--- a/include/linux/prefetch.h
+++ b/include/linux/prefetch.h
@@ -15,6 +15,7 @@
#include <asm/processor.h>
#include <asm/cache.h>
+struct page;
/*
prefetch(x) attempts to pre-emptively get the memory pointed to
by address "x" into the CPU L1 cache.
@@ -62,4 +63,11 @@ static inline void prefetch_range(void *addr, size_t len)
#endif
}
+static inline void prefetch_page_address(struct page *page)
+{
+#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
+ prefetch(page);
+#endif
+}
+
#endif
--
2.26.2