Message-ID: <CAA93jw5=Dh9w6x_EQtuWdAbWVUF00M+5x3idFz-XOvAzG5dMQw@mail.gmail.com>
Date:   Tue, 10 May 2022 07:09:56 -0700
From:   Dave Taht <dave.taht@...il.com>
To:     Rafał Miłecki <zajec5@...il.com>
Cc:     Andrew Lunn <andrew@...n.ch>, Felix Fietkau <nbd@....name>,
        Arnd Bergmann <arnd@...db.de>,
        Alexander Lobakin <alexandr.lobakin@...el.com>,
        Network Development <netdev@...r.kernel.org>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        Russell King <linux@...linux.org.uk>,
        "openwrt-devel@...ts.openwrt.org" <openwrt-devel@...ts.openwrt.org>,
        Florian Fainelli <f.fainelli@...il.com>
Subject: Re: Optimizing kernel compilation / alignments for network performance

I might have mentioned this before, but I'm really big on using the
flent tool to drive test runs. The comparison plots are to die for,
and it can also sample CPU and other statistics over time. I'm also
big on testing bidirectional functionality.

client$ flent -H server -t what_test_conditions_you_have
--step-size=.05 --te=upload_streams=4 -x --socket-stats tcp_nup

This gathers a lot of data about everything. The rrul test is one of
my favorites for creating a bittorrent-like load.

flent is usually available via apt/rpm/etc., and there are scripts
that can run on routers; OpenWrt has "opkg install flent-tools". You
use ssh to fire these off.

There are a few Python dependencies for flent-gui that aren't needed
for the flent server or client. Sometimes you have to download and
compile netperf on your own with ./configure --enable-demo.
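A netperf build with demo mode enabled might look like the following sketch (the git URL and autogen step are assumptions based on the current upstream repository; adjust for your system):

```shell
# Hypothetical build sketch: flent relies on netperf's demo-mode
# output for its time-series samples, and many distro packages
# ship with that mode disabled.
git clone https://github.com/HewlettPackard/netperf.git
cd netperf
./autogen.sh                 # requires autoconf/automake
./configure --enable-demo    # the flag flent needs
make
sudo make install
```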

Please see flent.org for more details, and/or hit the flent-users list
for questions.

On Tue, May 10, 2022 at 5:03 AM Rafał Miłecki <zajec5@...il.com> wrote:
>
> On 6.05.2022 14:42, Andrew Lunn wrote:
> >>> I just took a quick look at the driver. It allocates and maps rx buffers that can cover a packet size of BGMAC_RX_MAX_FRAME_SIZE = 9724.
> >>> This seems rather excessive, especially since most people are going to use a MTU of 1500.
> >>> My proposal would be to add support for making rx buffer size dependent on MTU, reallocating the ring on MTU changes.
> >>> This should significantly reduce the time spent on flushing caches.
> >>
> >> Oh, that's important too, it was changed by commit 8c7da63978f1 ("bgmac:
> >> configure MTU and add support for frames beyond 8192 byte size"):
> >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8c7da63978f1672eb4037bbca6e7eac73f908f03
> >>
> >> It lowered NAT speed with bgmac by 60% (362 Mbps → 140 Mbps).
> >>
> >> I do all my testing with
> >> #define BGMAC_RX_MAX_FRAME_SIZE                      1536
> >
> > That helps show that cache operations are part of your bottleneck.
> >
> > Taking a quick look at the driver. On the receive side:
> >
> >                         /* Unmap buffer to make it accessible to the CPU */
> >                          dma_unmap_single(dma_dev, dma_addr,
> >                                           BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
> >
> > Here the data is mapped, ready for the CPU to use it.
> >
> >                       /* Get info from the header */
> >                          len = le16_to_cpu(rx->len);
> >                          flags = le16_to_cpu(rx->flags);
> >
> >                          /* Check for poison and drop or pass the packet */
> >                          if (len == 0xdead && flags == 0xbeef) {
> >                                  netdev_err(bgmac->net_dev, "Found poisoned packet at slot %d, DMA issue!\n",
> >                                             ring->start);
> >                                  put_page(virt_to_head_page(buf));
> >                                  bgmac->net_dev->stats.rx_errors++;
> >                                  break;
> >                          }
> >
> >                          if (len > BGMAC_RX_ALLOC_SIZE) {
> >                                  netdev_err(bgmac->net_dev, "Found oversized packet at slot %d, DMA issue!\n",
> >                                             ring->start);
> >                                  put_page(virt_to_head_page(buf));
> >                                  bgmac->net_dev->stats.rx_length_errors++;
> >                                  bgmac->net_dev->stats.rx_errors++;
> >                                  break;
> >                          }
> >
> >                          /* Omit CRC. */
> >                          len -= ETH_FCS_LEN;
> >
> >                          skb = build_skb(buf, BGMAC_RX_ALLOC_SIZE);
> >                          if (unlikely(!skb)) {
> >                                  netdev_err(bgmac->net_dev, "build_skb failed\n");
> >                                  put_page(virt_to_head_page(buf));
> >                                  bgmac->net_dev->stats.rx_errors++;
> >                                  break;
> >                          }
> >                          skb_put(skb, BGMAC_RX_FRAME_OFFSET +
> >                                  BGMAC_RX_BUF_OFFSET + len);
> >                          skb_pull(skb, BGMAC_RX_FRAME_OFFSET +
> >                                   BGMAC_RX_BUF_OFFSET);
> >
> >                          skb_checksum_none_assert(skb);
> >                          skb->protocol = eth_type_trans(skb, bgmac->net_dev);
> >
> > and this is the first access of the actual data. You can make the
> > cache actually work for you, rather than against you, by adding a call to
> >
> >       prefetch(buf);
> >
> > just after the dma_unmap_single(). That will start getting the frame
> > header from DRAM into cache, so hopefully it is available by the time
> > eth_type_trans() is called and you don't have a cache miss.
>
>
> I don't think that analysis is correct.
>
> Please take a look at following lines:
> struct bgmac_rx_header *rx = slot->buf + BGMAC_RX_BUF_OFFSET;
> void *buf = slot->buf;
>
> The first thing we do after the dma_unmap_single() call is read
> rx->len. That actually points to DMA data. There is nothing we could
> keep the CPU busy with while prefetching data.
>
> FWIW I tried adding prefetch(buf); anyway. It didn't change NAT speed
> by a single Mb/s. Speed was exactly the same as without the
> prefetch() call.



-- 
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC
