Message-ID: <CAO_EM_=iHOFYWubZVaKTzUs8DYKjqQgK7TjtOm=xK9RB3mDKVw@mail.gmail.com>
Date:   Tue, 30 Aug 2016 18:12:17 -0700
From:   Ed Swierk <eswierk@...portsystems.com>
To:     Aaro Koskinen <aaro.koskinen@....fi>
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        David Daney <ddaney@...iumnetworks.com>,
        driverdev-devel <devel@...verdev.osuosl.org>,
        lkml <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/9] staging: octeon: multi rx group (queue) support

Hi Aaro,

On Tue, Aug 30, 2016 at 11:47 AM, Aaro Koskinen <aaro.koskinen@....fi> wrote:
> This series implements multiple RX group support that should improve
> the networking performance on multi-core OCTEONs. Basically we register
> IRQ and NAPI for each group, and ask the HW to select the group for
> the incoming packets based on hash.
>
> Tested on EdgeRouter Lite with a simple forwarding test using two flows
> and 16 RX groups distributed between two cores - the routing throughput
> is roughly doubled.
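
Just so we're on the same page, my mental model of the per-group setup
is roughly the sketch below. The struct and function names are my own
paraphrase for illustration (not necessarily the names the series
actually uses), and the HW-side hash/group steering isn't shown:

#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Hypothetical per-group receive context -- my paraphrase of the
 * approach, not the actual patch code.
 */
struct oct_rx_group {
	int irq;			/* one IRQ per group */
	int group;			/* POW/SSO group number */
	struct napi_struct napi;	/* one NAPI context per group */
};

static irqreturn_t oct_rx_group_irq(int irq, void *data)
{
	struct oct_rx_group *rx_group = data;

	/* mask this group's interrupt and hand the work to NAPI */
	disable_irq_nosync(irq);
	napi_schedule(&rx_group->napi);
	return IRQ_HANDLED;
}

static int oct_rx_group_register(struct net_device *dev,
				 struct oct_rx_group *rx_group,
				 int (*poll)(struct napi_struct *, int))
{
	int ret;

	/* each group gets its own NAPI context... */
	netif_napi_add(dev, &rx_group->napi, poll, NAPI_POLL_WEIGHT);
	napi_enable(&rx_group->napi);

	/* ...and its own interrupt line */
	ret = request_irq(rx_group->irq, oct_rx_group_irq, 0,
			  "octeon-ethernet-rx", rx_group);
	if (ret) {
		napi_disable(&rx_group->napi);
		netif_napi_del(&rx_group->napi);
	}
	return ret;
}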

I applied the series to my 4.4.19 tree, which involved backporting a
bunch of other patches from master, most of them trivial.

When I test it on a Cavium Octeon II (CN6880) board, I get an immediate
crash (bus error) in the netif_receive_skb() call from cvm_oct_poll().
Replacing cvm_oct_poll()'s rx_group argument with a plain int group, and
dereferencing rx_group->group in the caller (cvm_oct_napi_poll())
instead, makes the crash disappear. Apparently there's some race in
dereferencing rx_group from within cvm_oct_poll().
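
For reference, the workaround I'm running looks roughly like this
(hand-written sketch, struct and field names paraphrased from memory
rather than copied from my tree):

/* Pass the group number by value so cvm_oct_poll() never has to
 * dereference the rx_group pointer; the only dereference happens in
 * the NAPI callback, before the receive loop starts.
 */
static int cvm_oct_poll(int group, int budget)
{
	int rx_count = 0;

	/* ... receive loop unchanged, except that it uses 'group'
	 * wherever it previously used rx_group->group ...
	 */

	return rx_count;
}

static int cvm_oct_napi_poll(struct napi_struct *napi, int budget)
{
	struct oct_rx_group *rx_group =
		container_of(napi, struct oct_rx_group, napi);
	int rx_count = cvm_oct_poll(rx_group->group, budget);

	if (rx_count < budget) {
		napi_complete(napi);
		/* interrupt re-enabling etc. unchanged */
	}
	return rx_count;
}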

With this workaround in place, I can send and receive on XAUI
interfaces, but I don't see any performance improvement. I'm guessing I
need to set receive_group_order > 0, but any value between 1 and 4
seems to break rx altogether: when I ping another host I see both the
request and the response on the wire, and the interface counters
increase, but the response never makes it back to ping.

Is some other configuration needed to make use of multiple rx groups?

--Ed
