Message-ID: <202212132111.Ir8h6DpT-lkp@intel.com>
Date: Tue, 13 Dec 2022 21:14:38 +0800
From: kernel test robot <lkp@...el.com>
To: Tirthendu Sarkar <tirthendu.sarkar@...el.com>, tirtha@...il.com,
jesse.brandeburg@...el.com, anthony.l.nguyen@...el.com,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, ast@...nel.org, daniel@...earbox.net,
hawk@...nel.org, john.fastabend@...il.com,
intel-wired-lan@...ts.osuosl.org
Cc: llvm@...ts.linux.dev, oe-kbuild-all@...ts.linux.dev,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, magnus.karlsson@...el.com,
maciej.fijalkowski@...el.com
Subject: Re: [PATCH intel-next 5/5] i40e: add support for XDP multi-buffer Rx
Hi Tirthendu,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on next-20221213]
[also build test WARNING on linus/master v6.1]
[cannot apply to tnguy-next-queue/dev-queue v6.1 v6.1-rc8 v6.1-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Tirthendu-Sarkar/i40e-support-XDP-multi-buffer/20221213-190636
patch link: https://lore.kernel.org/r/20221213105023.196409-6-tirthendu.sarkar%40intel.com
patch subject: [PATCH intel-next 5/5] i40e: add support for XDP multi-buffer Rx
config: i386-randconfig-a013
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/c3a93fb1727340a33e806c2bf38f54ea24975863
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Tirthendu-Sarkar/i40e-support-XDP-multi-buffer/20221213-190636
git checkout c3a93fb1727340a33e806c2bf38f54ea24975863
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/net/ethernet/intel/i40e/
If you fix the issue, kindly add the following tag where applicable:
| Reported-by: kernel test robot <lkp@...el.com>
All warnings (new ones prefixed by >>):
>> drivers/net/ethernet/intel/i40e/i40e_txrx.c:2639:46: warning: variable 'skb' is uninitialized when used here [-Wuninitialized]
i40e_trace(clean_rx_irq, rx_ring, rx_desc, skb);
^~~
drivers/net/ethernet/intel/i40e/i40e_trace.h:48:69: note: expanded from macro 'i40e_trace'
#define i40e_trace(trace_name, args...) I40E_TRACE_NAME(trace_name)(args)
^~~~
drivers/net/ethernet/intel/i40e/i40e_txrx.c:2602:22: note: initialize the variable 'skb' to silence this warning
struct sk_buff *skb;
^
= NULL
1 warning generated.
vim +/skb +2639 drivers/net/ethernet/intel/i40e/i40e_txrx.c
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2564
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2565 /**
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2566 * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2567 * @rx_ring: rx descriptor ring to transact packets on
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2568 * @budget: Total limit on number of packets to process
717b5bc43c1fe7 Joe Damato 2022-10-07 2569 * @rx_cleaned: Out parameter of the number of packets processed
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2570 *
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2571 * This function provides a "bounce buffer" approach to Rx interrupt
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2572 * processing. The advantage to this is that on systems that have
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2573 * expensive overhead for IOMMU access this provides a means of avoiding
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2574 * it by maintaining the mapping of the page to the system.
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2575 *
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2576 * Returns amount of work completed
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2577 **/
717b5bc43c1fe7 Joe Damato 2022-10-07 2578 static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget,
717b5bc43c1fe7 Joe Damato 2022-10-07 2579 unsigned int *rx_cleaned)
a132af24e8d45e Mitch Williams 2015-01-24 2580 {
43b5169d8355cc Lorenzo Bianconi 2020-12-22 2581 unsigned int total_rx_bytes = 0, total_rx_packets = 0, frame_sz = 0;
f7bb0d71d65862 Maciej Fijalkowski 2021-01-18 2582 unsigned int offset = rx_ring->rx_offset;
231f67e69c5cf3 Tirthendu Sarkar 2022-12-13 2583 u16 ntp = rx_ring->next_to_process;
32038c2a499add Tirthendu Sarkar 2022-12-13 2584 u16 ntc = rx_ring->next_to_clean;
32038c2a499add Tirthendu Sarkar 2022-12-13 2585 u16 rmax = rx_ring->count - 1;
2e6893123830d0 Jesper Dangaard Brouer 2018-06-26 2586 unsigned int xdp_xmit = 0;
78f319315764e6 Ciara Loftus 2022-06-23 2587 struct bpf_prog *xdp_prog;
2e6893123830d0 Jesper Dangaard Brouer 2018-06-26 2588 bool failure = false;
871288248de23d Jesper Dangaard Brouer 2018-01-03 2589 struct xdp_buff xdp;
12738ac4754ec9 Arkadiusz Kubalewski 2021-03-26 2590 int xdp_res = 0;
871288248de23d Jesper Dangaard Brouer 2018-01-03 2591
24104024ce0553 Jesper Dangaard Brouer 2020-05-14 2592 #if (PAGE_SIZE < 8192)
43b5169d8355cc Lorenzo Bianconi 2020-12-22 2593 frame_sz = i40e_rx_frame_truesize(rx_ring, 0);
24104024ce0553 Jesper Dangaard Brouer 2020-05-14 2594 #endif
43b5169d8355cc Lorenzo Bianconi 2020-12-22 2595 xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq);
a132af24e8d45e Mitch Williams 2015-01-24 2596
78f319315764e6 Ciara Loftus 2022-06-23 2597 xdp_prog = READ_ONCE(rx_ring->xdp_prog);
78f319315764e6 Ciara Loftus 2022-06-23 2598
b85c94b617c000 Jesse Brandeburg 2017-06-20 2599 while (likely(total_rx_packets < (unsigned int)budget)) {
9a064128fc8489 Alexander Duyck 2017-03-14 2600 struct i40e_rx_buffer *rx_buffer;
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2601 union i40e_rx_desc *rx_desc;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2602 struct sk_buff *skb;
d57c0e08c70162 Alexander Duyck 2017-03-14 2603 unsigned int size;
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2604 u64 qword;
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2605
231f67e69c5cf3 Tirthendu Sarkar 2022-12-13 2606 rx_desc = I40E_RX_DESC(rx_ring, ntp);
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2607
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2608 /* status_error_len will always be zero for unused descriptors
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2609 * because it's cleared in cleanup, and overlaps with hdr_addr
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2610 * which is always zero because packet split isn't used, if the
d57c0e08c70162 Alexander Duyck 2017-03-14 2611 * hardware wrote DD then the length will be non-zero
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2612 */
d57c0e08c70162 Alexander Duyck 2017-03-14 2613 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2614
a132af24e8d45e Mitch Williams 2015-01-24 2615 /* This memory barrier is needed to keep us from reading
d57c0e08c70162 Alexander Duyck 2017-03-14 2616 * any other fields out of the rx_desc until we have
d57c0e08c70162 Alexander Duyck 2017-03-14 2617 * verified the descriptor has been written back.
a132af24e8d45e Mitch Williams 2015-01-24 2618 */
67317166dd6e8e Alexander Duyck 2015-04-08 2619 dma_rmb();
a132af24e8d45e Mitch Williams 2015-01-24 2620
be1222b585fdc4 Björn Töpel 2020-05-20 2621 if (i40e_rx_is_programming_status(qword)) {
be1222b585fdc4 Björn Töpel 2020-05-20 2622 i40e_clean_programming_status(rx_ring,
be1222b585fdc4 Björn Töpel 2020-05-20 2623 rx_desc->raw.qword[0],
6d7aad1da2791c Björn Töpel 2018-08-28 2624 qword);
231f67e69c5cf3 Tirthendu Sarkar 2022-12-13 2625 rx_buffer = i40e_rx_bi(rx_ring, ntp);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2626 if (ntc == ntp)
231f67e69c5cf3 Tirthendu Sarkar 2022-12-13 2627 I40E_INC_NEXT(ntp, ntc, rmax);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2628 else
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2629 I40E_IDX_NEXT(ntp, rmax);
6d7aad1da2791c Björn Töpel 2018-08-28 2630 i40e_reuse_rx_page(rx_ring, rx_buffer);
0e626ff7ccbfc4 Alexander Duyck 2017-04-10 2631 continue;
0e626ff7ccbfc4 Alexander Duyck 2017-04-10 2632 }
6d7aad1da2791c Björn Töpel 2018-08-28 2633
0e626ff7ccbfc4 Alexander Duyck 2017-04-10 2634 size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
0e626ff7ccbfc4 Alexander Duyck 2017-04-10 2635 I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
0e626ff7ccbfc4 Alexander Duyck 2017-04-10 2636 if (!size)
0e626ff7ccbfc4 Alexander Duyck 2017-04-10 2637 break;
0e626ff7ccbfc4 Alexander Duyck 2017-04-10 2638
ed0980c4401a21 Scott Peterson 2017-04-13 @2639 i40e_trace(clean_rx_irq, rx_ring, rx_desc, skb);
9a064128fc8489 Alexander Duyck 2017-03-14 2640
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2641 if (i40e_is_non_eop(rx_ring, rx_desc)) {
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2642 I40E_IDX_NEXT(ntp, rmax);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2643 continue;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2644 }
be9df4aff65f18 Lorenzo Bianconi 2020-12-22 2645
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2646 /* retrieve EOP buffer from the ring */
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2647 rx_buffer = i40e_get_rx_buffer(rx_ring, size, ntp);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2648
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2649 if (likely(ntc == ntp))
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2650 i40e_build_xdp(rx_ring, offset, rx_buffer, size, &xdp);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2651 else
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2652 if (i40e_build_xdp_mb(rx_ring, offset, rx_buffer,
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2653 size, ntc, ntp, &xdp)) {
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2654 xdp_res = I40E_XDP_CONSUMED;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2655 goto process_frags;
0c8493d90b6bb0 Björn Töpel 2017-05-24 2656 }
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2657 xdp_res = i40e_run_xdp(rx_ring, &xdp, xdp_prog);
0c8493d90b6bb0 Björn Töpel 2017-05-24 2658
12738ac4754ec9 Arkadiusz Kubalewski 2021-03-26 2659 if (xdp_res) {
2e6893123830d0 Jesper Dangaard Brouer 2018-06-26 2660 if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
2e6893123830d0 Jesper Dangaard Brouer 2018-06-26 2661 xdp_xmit |= xdp_res;
74608d17fe29b2 Björn Töpel 2017-05-24 2662 i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
74608d17fe29b2 Björn Töpel 2017-05-24 2663 } else {
74608d17fe29b2 Björn Töpel 2017-05-24 2664 rx_buffer->pagecnt_bias++;
74608d17fe29b2 Björn Töpel 2017-05-24 2665 }
0c8493d90b6bb0 Björn Töpel 2017-05-24 2666 } else {
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2667 struct i40e_rx_buffer *rxb = i40e_rx_bi(rx_ring, ntc);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2668
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2669 if (ring_uses_build_skb(rx_ring))
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2670 skb = i40e_build_skb(rx_ring, rxb, &xdp);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2671 else
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2672 skb = i40e_construct_skb(rx_ring, rxb, &xdp);
fa2343e9034ce6 Alexander Duyck 2017-03-14 2673
fa2343e9034ce6 Alexander Duyck 2017-03-14 2674 /* exit if we failed to retrieve a buffer */
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2675 if (!skb) {
fa2343e9034ce6 Alexander Duyck 2017-03-14 2676 rx_ring->rx_stats.alloc_buff_failed++;
fa2343e9034ce6 Alexander Duyck 2017-03-14 2677 rx_buffer->pagecnt_bias++;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2678 if (ntc == ntp)
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2679 break;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2680 xdp_res = I40E_XDP_EXIT;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2681 goto process_frags;
fa2343e9034ce6 Alexander Duyck 2017-03-14 2682 }
a132af24e8d45e Mitch Williams 2015-01-24 2683
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2684 if (i40e_cleanup_headers(rx_ring, skb, rx_desc)) {
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2685 xdp.data = NULL;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2686 goto process_frags;
e72e56597ba15c Scott Peterson 2017-02-09 2687 }
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2688 /* populate checksum, VLAN, and protocol */
800b8f637d07cc Michał Mirosław 2018-12-04 2689 i40e_process_skb_fields(rx_ring, rx_desc, skb);
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2690
ed0980c4401a21 Scott Peterson 2017-04-13 2691 i40e_trace(clean_rx_irq_rx, rx_ring, rx_desc, skb);
2a508c64ad278d Michał Mirosław 2018-12-04 2692 napi_gro_receive(&rx_ring->q_vector->napi, skb);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2693 }
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2694
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2695 /* probably a little skewed due to removing CRC */
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2696 total_rx_bytes += size;
a132af24e8d45e Mitch Williams 2015-01-24 2697
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2698 /* update budget accounting */
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2699 total_rx_packets++;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2700
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2701 process_frags:
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2702 if (unlikely(xdp_buff_has_frags(&xdp))) {
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2703 i40e_process_rx_buffers(rx_ring, ntc, ntp, xdp_res,
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2704 &xdp, size, &total_rx_bytes);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2705 if (xdp_res == I40E_XDP_EXIT) {
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2706 /* Roll back ntp to first desc on the packet */
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2707 ntp = ntc;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2708 break;
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2709 }
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2710 }
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2711 i40e_put_rx_buffer(rx_ring, rx_buffer);
c3a93fb1727340 Tirthendu Sarkar 2022-12-13 2712 I40E_INC_NEXT(ntp, ntc, rmax);
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2713 }
231f67e69c5cf3 Tirthendu Sarkar 2022-12-13 2714 rx_ring->next_to_process = ntp;
32038c2a499add Tirthendu Sarkar 2022-12-13 2715 rx_ring->next_to_clean = ntc;
a132af24e8d45e Mitch Williams 2015-01-24 2716
feb9d432d64ff1 Tirthendu Sarkar 2022-12-13 2717 failure = i40e_alloc_rx_buffers(rx_ring, I40E_DESC_UNUSED(rx_ring));
feb9d432d64ff1 Tirthendu Sarkar 2022-12-13 2718
6d7aad1da2791c Björn Töpel 2018-08-28 2719 i40e_finalize_xdp_rx(rx_ring, xdp_xmit);
e72e56597ba15c Scott Peterson 2017-02-09 2720
6d7aad1da2791c Björn Töpel 2018-08-28 2721 i40e_update_rx_stats(rx_ring, total_rx_bytes, total_rx_packets);
fd0a05ce74efc9 Jesse Brandeburg 2013-09-11 2722
717b5bc43c1fe7 Joe Damato 2022-10-07 2723 *rx_cleaned = total_rx_packets;
717b5bc43c1fe7 Joe Damato 2022-10-07 2724
1a557afc4dd59b Jesse Brandeburg 2016-04-20 2725 /* guarantee a trip back through this routine if there was a failure */
b85c94b617c000 Jesse Brandeburg 2017-06-20 2726 return failure ? budget : (int)total_rx_packets;
fd0a05ce74efc9 Jesse Brandeburg 2013-09-11 2727 }
fd0a05ce74efc9 Jesse Brandeburg 2013-09-11 2728
--
0-DAY CI Kernel Test Service
https://01.org/lkp