Date:   Thu, 1 Sep 2016 00:20:33 +0300
From:   Aaro Koskinen <aaro.koskinen@....fi>
To:     Ed Swierk <eswierk@...portsystems.com>,
        David Daney <ddaney@...iumnetworks.com>
Cc:     driverdev-devel <devel@...verdev.osuosl.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/9] staging: octeon: multi rx group (queue) support

Hi,

On Wed, Aug 31, 2016 at 09:20:07AM -0700, Ed Swierk wrote:
> I'm not using CONFIG_NET_POLL_CONTROLLER either; the problem is in the
> normal cvm_oct_napi_poll() path.
> 
> Here's my workaround:

[...]

> -static int cvm_oct_poll(struct oct_rx_group *rx_group, int budget)
> +static int cvm_oct_poll(int group, int budget)
>  {
>  	const int	coreid = cvmx_get_core_num();
>  	u64	old_group_mask;
> @@ -181,13 +181,13 @@ static int cvm_oct_poll(struct oct_rx_group *rx_group, int budget)
>  	if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
>  		old_group_mask = cvmx_read_csr(CVMX_SSO_PPX_GRP_MSK(coreid));
>  		cvmx_write_csr(CVMX_SSO_PPX_GRP_MSK(coreid),
> -			       BIT(rx_group->group));
> +			       BIT(group));
> @@ -447,7 +447,7 @@ static int cvm_oct_napi_poll(struct napi_struct *napi, int budget)
>  						     napi);
>  	int rx_count;
> 
> -	rx_count = cvm_oct_poll(rx_group, budget);
> +	rx_count = cvm_oct_poll(rx_group->group, budget);

I'm confused - there should be no difference?!

> > Can you see multiple ethernet IRQs in /proc/interrupts and their
> > counters increasing?
> > 
> > With receive_group_order=4 you should see 16 IRQs.
> 
> I see the 16 IRQs, and the first one does increase. But packets don't make
> it to the application.

Yeah, it turns out that CN68XX supports up to 64 receive groups, and the
reset value is such that all 64 groups are enabled by default in the
tag mask unless we explicitly disable them. So your packets probably
end up in one of the other 48 groups that have no handler. This should
be fixed in v2 (by limiting to 16 groups).
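
For illustration only, here is a minimal sketch (not the actual v2
patch) of the idea: program the per-core SSO group mask so that only
the 2^receive_group_order groups that actually have NAPI handlers are
enabled, leaving the remaining groups masked off. CVMX_SSO_PPX_GRP_MSK,
cvmx_write_csr() and BIT() are the symbols visible in the quoted diff
above; cvm_oct_limit_rx_groups() and its parameters are hypothetical
names used just for this sketch.

	static void cvm_oct_limit_rx_groups(int coreid, int receive_group_order)
	{
		/* e.g. receive_group_order=4 -> 16 groups -> mask 0xffff */
		u64 mask = BIT(1 << receive_group_order) - 1;

		/* Enable only the handled groups; the other groups stay disabled,
		 * so packets cannot be scheduled to a group without a handler.
		 */
		cvmx_write_csr(CVMX_SSO_PPX_GRP_MSK(coreid), mask);
	}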

A.
