Date:   Wed, 31 Aug 2016 09:29:15 +0300
From:   Aaro Koskinen <aaro.koskinen@....fi>
To:     Ed Swierk <eswierk@...portsystems.com>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        David Daney <ddaney@...iumnetworks.com>,
        driverdev-devel <devel@...verdev.osuosl.org>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/9] staging: octeon: multi rx group (queue) support

Hi,

On Tue, Aug 30, 2016 at 06:12:17PM -0700, Ed Swierk wrote:
> On Tue, Aug 30, 2016 at 11:47 AM, Aaro Koskinen <aaro.koskinen@....fi> wrote:
> > This series implements multiple RX group support that should improve
> > the networking performance on multi-core OCTEONs. Basically we register
> > IRQ and NAPI for each group, and ask the HW to select the group for
> > the incoming packets based on hash.
> >
> > Tested on EdgeRouter Lite with a simple forwarding test using two flows
> > and 16 RX groups distributed between two cores - the routing throughput
> > is roughly doubled.
> 
> I applied the series to my 4.4.19 tree, which involved backporting a
> bunch of other patches from master, most of them trivial.
> 
> When I test it on a Cavium Octeon 2 (CN6880) board, I get an immediate
> crash (bus error) in the netif_receive_skb() call from cvm_oct_poll().
> Replacing the rx_group argument to cvm_oct_poll() with int group, and
> dereferencing rx_group->group in the caller (cvm_oct_napi_poll())
> instead makes the crash disappear. Apparently there's some race in
> dereferencing rx_group from within cvm_oct_poll().

Oops, it looks like I tested without CONFIG_NET_POLL_CONTROLLER enabled,
and that code path seems to be broken. Sorry.
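
Just to make sure I read your workaround correctly, I think it amounts to
something like the sketch below (untested, and the oct_rx_group struct
name/layout here is only illustrative, not necessarily what is in the tree):

/* Rough sketch of the workaround: pass the plain group number to
 * cvm_oct_poll() and dereference rx_group only in the NAPI callback.
 * Assumes the usual octeon-ethernet includes (linux/netdevice.h etc.).
 */
struct oct_rx_group {
	int irq;
	int group;
	struct napi_struct napi;
};

/* was: static int cvm_oct_poll(struct oct_rx_group *rx_group, int budget) */
static int cvm_oct_poll(int group, int budget)
{
	int rx_count = 0;

	/*
	 * ... RX loop as before: use 'group' directly where the code
	 * previously read rx_group->group, and hand packets to
	 * netif_receive_skb() ...
	 */

	return rx_count;
}

static int cvm_oct_napi_poll(struct napi_struct *napi, int budget)
{
	struct oct_rx_group *rx_group = container_of(napi,
						     struct oct_rx_group,
						     napi);
	int rx_count;

	/* dereference in the caller, per your description */
	rx_count = cvm_oct_poll(rx_group->group, budget);

	if (rx_count < budget) {
		napi_complete(napi);
		/* re-enable the RX interrupt for this group here */
	}

	return rx_count;
}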

> With this workaround in place, I can send and receive on XAUI
> interfaces, but don't see any performance improvement. I'm guessing I
> need to set receive_group_order > 0. But any value between 1 and 4
> seems to break rx altogether. When I ping another host I see both
> request and response on the wire, and the interface counters increase,
> but the response doesn't make it back to ping.

Do you see multiple Ethernet IRQs in /proc/interrupts, and are their
counters increasing?

With receive_group_order=4 you should see 16 IRQs.
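(That is, the group count should be 2^receive_group_order, so order 4
means 16 groups, each with its own IRQ.)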

> Is some other configuration needed to make use of multiple rx groups?

Once the RX interrupts are working, you need to distribute them across
multiple cores using /proc/irq/<number>/smp_affinity, or use irqbalance
or a similar tool.
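For example, with two cores you could echo a hex CPU mask of 1 into one
group's /proc/irq/<number>/smp_affinity and 2 into another's, so those
groups are serviced on CPU 0 and CPU 1 respectively; the IRQ numbers are
the ones listed in /proc/interrupts.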

A.
