Message-ID: <26bafed3-1ebc-234a-5e76-a6b9e1e0f32c@intel.com>
Date:   Wed, 6 May 2020 13:43:52 +0200
From:   Björn Töpel <bjorn.topel@...el.com>
To:     Maxim Mikityanskiy <maximmi@...lanox.com>,
        Björn Töpel <bjorn.topel@...il.com>
Cc:     ast@...nel.org, daniel@...earbox.net, davem@...emloft.net,
        kuba@...nel.org, hawk@...nel.org, john.fastabend@...il.com,
        netdev@...r.kernel.org, bpf@...r.kernel.org,
        magnus.karlsson@...el.com, jonathan.lemon@...il.com,
        maciej.fijalkowski@...el.com
Subject: Re: [RFC PATCH bpf-next 04/13] xsk: introduce AF_XDP buffer
 allocation API

On 2020-05-06 11:51, Maxim Mikityanskiy wrote:
> On 2020-05-04 14:37, Björn Töpel wrote:
[]
>> @@ -389,6 +390,11 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
>>           xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
>>           xa->zc_alloc->free(xa->zc_alloc, handle);
>>           rcu_read_unlock();
>> +        break;
>> +    case MEM_TYPE_XSK_BUFF_POOL:
>> +        /* NB! Only valid from an xdp_buff! */
>> +        xsk_buff_free(xdp);
>> +        break;
> 
> I remember I asked about it, but not sure what we decided here. 
> xdp_return_buff is the only way to get in this new case, and it's called 
> only from XSK flows. Maybe it would make sense to kill this case and 
> xdp_return_buff, and call xsk_buff_free directly? It'll save some time 
> that we waste in switch-case, a function call and two parameters of 
> __xdp_return - should make everything faster. Do you think it makes sense?
>

I forgot about this! Thanks for the reminder. Yeah, that makes sense. 
What do you think about the patch below?

From: =?UTF-8?q?Bj=C3=B6rn=20T=C3=B6pel?= <bjorn.topel@...el.com>
Date: Wed, 6 May 2020 13:39:05 +0200
Subject: [PATCH] xdp: simplify xdp_return_{frame,frame_rx_napi,buff}
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The xdp_return_{frame,frame_rx_napi,buff} functions are never used by
the MEM_TYPE_XSK_BUFF_POOL memory type, except in
xdp_convert_zc_to_xdp_frame().

To simplify and reduce code, make xdp_convert_zc_to_xdp_frame() call
xsk_buff_free() directly, since the type is known, and remove
MEM_TYPE_XSK_BUFF_POOL from the switch statement in the __xdp_return()
function.

Suggested-by: Maxim Mikityanskiy <maximmi@...lanox.com>
Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
---
  net/core/xdp.c | 21 +++++++++------------
  1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/net/core/xdp.c b/net/core/xdp.c
index 11273c976e19..7ab1f9014c5e 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -334,10 +334,11 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
   * scenarios (e.g. queue full), it is possible to return the xdp_frame
   * while still leveraging this protection.  The @napi_direct boolean
   * is used for those calls sites.  Thus, allowing for faster recycling
- * of xdp_frames/pages in those cases.
+ * of xdp_frames/pages in those cases. This path is never used by the
+ * MEM_TYPE_XSK_BUFF_POOL memory type, so it's explicitly not part of
+ * the switch-statement.
   */
-static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
-			 struct xdp_buff *xdp)
+static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct)
  {
  	struct xdp_mem_allocator *xa;
  	struct page *page;
@@ -359,33 +360,29 @@ static void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
  		page = virt_to_page(data); /* Assumes order0 page*/
  		put_page(page);
  		break;
-	case MEM_TYPE_XSK_BUFF_POOL:
-		/* NB! Only valid from an xdp_buff! */
-		xsk_buff_free(xdp);
-		break;
  	default:
  		/* Not possible, checked in xdp_rxq_info_reg_mem_model() */
+		WARN(1, "Incorrect XDP memory type (%d) usage", mem->type);
  		break;
  	}
  }

  void xdp_return_frame(struct xdp_frame *xdpf)
  {
-	__xdp_return(xdpf->data, &xdpf->mem, false, NULL);
+	__xdp_return(xdpf->data, &xdpf->mem, false);
  }
  EXPORT_SYMBOL_GPL(xdp_return_frame);

  void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
  {
-	__xdp_return(xdpf->data, &xdpf->mem, true, NULL);
+	__xdp_return(xdpf->data, &xdpf->mem, true);
  }
  EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);

  void xdp_return_buff(struct xdp_buff *xdp)
  {
-	__xdp_return(xdp->data, &xdp->rxq->mem, true, xdp);
+	__xdp_return(xdp->data, &xdp->rxq->mem, true);
  }
-EXPORT_SYMBOL_GPL(xdp_return_buff);

  /* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
  void __xdp_release_frame(void *data, struct xdp_mem_info *mem)
@@ -466,7 +463,7 @@ struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp)
  	xdpf->metasize = metasize;
  	xdpf->mem.type = MEM_TYPE_PAGE_ORDER0;

-	xdp_return_buff(xdp);
+	xsk_buff_free(xdp);
  	return xdpf;
  }
  EXPORT_SYMBOL_GPL(xdp_convert_zc_to_xdp_frame);
-- 
2.25.1
