Message-Id: <10fb2bc9-758f-4788-978a-819608688dac@yunsilicon.com>
Date: Wed, 26 Feb 2025 17:38:44 +0800
From: "Xin Tian" <tianx@...silicon.com>
To: "Joe Damato" <jdamato@...tly.com>, <netdev@...r.kernel.org>, 
	<leon@...nel.org>, <andrew+netdev@...n.ch>, <kuba@...nel.org>, 
	<pabeni@...hat.com>, <edumazet@...gle.com>, <davem@...emloft.net>, 
	<jeff.johnson@....qualcomm.com>, <przemyslaw.kitszel@...el.com>, 
	<weihg@...silicon.com>, <wanry@...silicon.com>, <horms@...nel.org>, 
	<parthiban.veerasooran@...rochip.com>, <masahiroy@...nel.org>
Subject: Re: [PATCH net-next v5 13/14] xsc: Add eth reception data path

On 2025/2/25 11:34, Joe Damato wrote:
> On Tue, Feb 25, 2025 at 01:24:44AM +0800, Xin Tian wrote:
>> rx data path:
> [...]
>
>> diff --git a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
>> index 72f33bb53..b87105c26 100644
>> --- a/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
>> +++ b/drivers/net/ethernet/yunsilicon/xsc/net/xsc_eth_rx.c
>> @@ -5,44 +5,594 @@
> [...]
>
>>   struct sk_buff *xsc_skb_from_cqe_linear(struct xsc_rq *rq,
>>   					struct xsc_wqe_frag_info *wi,
>>   					u32 cqe_bcnt, u8 has_pph)
>>   {
>> -	// TBD
>> -	return NULL;
>> +	int pph_len = has_pph ? XSC_PPH_HEAD_LEN : 0;
>> +	u16 rx_headroom = rq->buff.headroom;
>> +	struct xsc_dma_info *di = wi->di;
>> +	struct sk_buff *skb;
>> +	void *va, *data;
>> +	u32 frag_size;
>> +
>> +	va = page_address(di->page) + wi->offset;
>> +	data = va + rx_headroom + pph_len;
>> +	frag_size = XSC_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
>> +
>> +	dma_sync_single_range_for_cpu(rq->cq.xdev->device, di->addr, wi->offset,
>> +				      frag_size, DMA_FROM_DEVICE);
>> +	prefetchw(va); /* xdp_frame data area */
>> +	prefetch(data);
> net_prefetchw and net_prefetch, possibly?
>
> [...]
>
>>   struct sk_buff *xsc_skb_from_cqe_nonlinear(struct xsc_rq *rq,
>>   					   struct xsc_wqe_frag_info *wi,
>>   					   u32 cqe_bcnt, u8 has_pph)
>>   {
>> -	// TBD
>> -	return NULL;
>> +	struct xsc_rq_frag_info *frag_info = &rq->wqe.info.arr[0];
>> +	u16 headlen  = min_t(u32, XSC_RX_MAX_HEAD, cqe_bcnt);
>> +	struct xsc_wqe_frag_info *head_wi = wi;
>> +	struct xsc_wqe_frag_info *rx_wi = wi;
>> +	u16 head_offset = head_wi->offset;
>> +	u16 byte_cnt = cqe_bcnt - headlen;
>> +	u16 frag_consumed_bytes = 0;
>> +	u16 frag_headlen = headlen;
>> +	struct net_device *netdev;
>> +	struct xsc_channel *c;
>> +	struct sk_buff *skb;
>> +	struct device *dev;
>> +	u8 fragcnt = 0;
>> +	int i = 0;
>> +
>> +	c = rq->cq.channel;
>> +	dev = c->adapter->dev;
>> +	netdev = c->adapter->netdev;
>> +
>> +	skb = napi_alloc_skb(rq->cq.napi, ALIGN(XSC_RX_MAX_HEAD, sizeof(long)));
>> +	if (unlikely(!skb))
>> +		return NULL;
>> +
>> +	prefetchw(skb->data);
> Same as above: net_prefetchw ?
Sure, I'll update both call sites to net_prefetchw/net_prefetch.
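
For reference, roughly what I have in mind (untested sketch; net_prefetch()/net_prefetchw() are the helpers from include/linux/netdevice.h, which also prefetch a second cache line when L1_CACHE_BYTES < 128):

In xsc_skb_from_cqe_linear():

-	prefetchw(va); /* xdp_frame data area */
-	prefetch(data);
+	net_prefetchw(va); /* xdp_frame data area */
+	net_prefetch(data);

And in xsc_skb_from_cqe_nonlinear():

-	prefetchw(skb->data);
+	net_prefetchw(skb->data);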
