Date:   Tue, 19 Nov 2019 12:38:50 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Lorenzo Bianconi <lorenzo@...nel.org>
Cc:     netdev@...r.kernel.org, davem@...emloft.net,
        ilias.apalodimas@...aro.org, lorenzo.bianconi@...hat.com,
        mcroce@...hat.com, jonathan.lemon@...il.com, brouer@...hat.com
Subject: Re: [PATCH v4 net-next 3/3] net: mvneta: get rid of huge dma sync
 in mvneta_rx_refill

On Mon, 18 Nov 2019 15:33:46 +0200
Lorenzo Bianconi <lorenzo@...nel.org> wrote:

> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index f7713c2c68e1..a06d109c9e80 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
[...]
> @@ -2097,8 +2093,10 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  		err = xdp_do_redirect(pp->dev, xdp, prog);
>  		if (err) {
>  			ret = MVNETA_XDP_DROPPED;
> -			page_pool_recycle_direct(rxq->page_pool,
> -						 virt_to_head_page(xdp->data));
> +			__page_pool_put_page(rxq->page_pool,
> +					virt_to_head_page(xdp->data),
> +					xdp->data_end - xdp->data_hard_start,
> +					true);
>  		} else {
>  			ret = MVNETA_XDP_REDIR;
>  		}
> @@ -2107,8 +2105,10 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  	case XDP_TX:
>  		ret = mvneta_xdp_xmit_back(pp, xdp);
>  		if (ret != MVNETA_XDP_TX)
> -			page_pool_recycle_direct(rxq->page_pool,
> -						 virt_to_head_page(xdp->data));
> +			__page_pool_put_page(rxq->page_pool,
> +					virt_to_head_page(xdp->data),
> +					xdp->data_end - xdp->data_hard_start,
> +					true);
>  		break;
>  	default:
>  		bpf_warn_invalid_xdp_action(act);
> @@ -2117,8 +2117,10 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>  		trace_xdp_exception(pp->dev, prog, act);
>  		/* fall through */
>  	case XDP_DROP:
> -		page_pool_recycle_direct(rxq->page_pool,
> -					 virt_to_head_page(xdp->data));
> +		__page_pool_put_page(rxq->page_pool,
> +				     virt_to_head_page(xdp->data),
> +				     xdp->data_end - xdp->data_hard_start,
> +				     true);

This does beg the question: should we create an API wrapper for
this in the header file?

But what to name it?

I know Jonathan doesn't like the "direct" part of the previous function
name page_pool_recycle_direct.  (I did consider calling this 'napi'
instead, as it would be in line with the networking use-cases, but that
seemed too limiting if other subsystems end up using this.)

Does 'page_pool_put_page_len' sound better?

But I also want to hide the bool 'allow_direct' in the API name.
(As it makes it easier to identify users that call this from softirq
context.)

Going for 'page_pool_put_page_len_napi' starts to become rather long.
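
For illustration only, an untested sketch of what such a wrapper could
look like in include/net/page_pool.h (using 'page_pool_put_page_len'
purely as a placeholder, the name is exactly what is up for discussion):

  /* Recycle a page back to the pool from softirq/NAPI context
   * (allow_direct=true), limiting the DMA sync to the first
   * 'dma_sync_size' bytes that the device actually wrote.
   */
  static inline void page_pool_put_page_len(struct page_pool *pool,
                                            struct page *page,
                                            unsigned int dma_sync_size)
  {
          __page_pool_put_page(pool, page, dma_sync_size, true);
  }

The three mvneta call sites above would then collapse into something
like:

          page_pool_put_page_len(rxq->page_pool,
                                 virt_to_head_page(xdp->data),
                                 xdp->data_end - xdp->data_hard_start);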

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
