Date:   Wed, 12 Dec 2018 09:22:57 +0100
From:   Marcin Wojtas <mw@...ihalf.com>
To:     Jisheng.Zhang@...aptics.com
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-arm-kernel@...ts.infradead.org,
        netdev <netdev@...r.kernel.org>,
        Antoine Tenart <antoine.tenart@...tlin.com>,
        Grzegorz Jaszczyk <jaz@...ihalf.com>,
        Grégory Clement <gregory.clement@...tlin.com>,
        Russell King - ARM Linux <linux@...linux.org.uk>,
        Maxime Chevallier <maxime.chevallier@...tlin.com>,
        nadavh@...vell.com,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
        Stefan Chulski <stefanc@...vell.com>,
        "David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

Hi Jisheng,

On Wed, 12 Dec 2018 at 03:48, Jisheng Zhang <Jisheng.Zhang@...aptics.com> wrote:
>
> Hi,
>
> On Tue, 11 Dec 2018 13:56:49 +0100 Marcin Wojtas wrote:
>
> > Recent changes in the mvneta driver reworked the allocation
> > and handling of ingress buffers to use entire pages.
> > Apart from that, in the SW BM scenario the HW must be informed
> > via PRXDQS about the biggest possible incoming buffer
> > that can be propagated by the RX descriptors.
> >
> > The BufferSize field was filled according to the MTU-dependent
> > pkt_size value. A later change to PAGE_SIZE broke RX operation
> > when using 64K pages, as the field is simply too small.
> >
> > This patch conditionally limits the value passed to the BufferSize
> > field of the PRXDQS register, depending on the PAGE_SIZE in use.
> > On this occasion, also remove the now-unused frag_size field of
> > the mvneta_port structure.
> >
> > Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation
> > method for SWBM")
>
> IMHO, we'd better revert 562e2f467e71 and 7e47fd84b56bb
>
> The issue that commit 562e2f467e71 wants to solve is caused by commit
> 7e47fd84b56bb. It looks a bit weird to introduce a regression and then
> submit another commit (in the same patch set) to solve it.
>
> Per my test, after reverting 562e2f467e71 and 7e47fd84b56bb, I can't reproduce
> what's claimed in commit 562e2f467e71 -- "With system having a small memory
> (around 256MB), the state "cannot allocate memory to refill with new buffer"
> is reached pretty quickly."

I am not the one to decide about reverting patches. From what I
understand, commit 7e47fd84b56bb was introduced in order to increase
performance by replacing mvneta_frag_alloc/free with entire pages
for the RX buffers. I have two questions:
- without reverting anything, do you observe memory allocation
problems during refill?
- are you able to check L2 forwarding numbers on top of the pure
mainline branch and after reverting the mentioned patches? I'm
wondering what the performance penalty (if any) would be.
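
For reference, the reason the full 64K value cannot be programmed: the
BufferSize field of PRXDQS is narrower than 16 bits. Below is a minimal
user-space sketch of that arithmetic, assuming the 13-bit field with
8-byte granularity that mvneta_rxq_buf_size_set() appears to use (the
width and granularity are my reading of the driver, not values quoted
from the datasheet):

/* Sketch: largest buffer size representable in the PRXDQS BufferSize
 * field, assuming a 13-bit field programmed in units of 8 bytes
 * (i.e. the driver writes buf_size >> 3 into the register).
 */
#include <stdio.h>

#define BUF_SIZE_FIELD_BITS  13   /* assumed field width */
#define BUF_SIZE_GRANULARITY  8   /* assumed 8-byte units */
#define BUF_SIZE_MAX \
	(((1 << BUF_SIZE_FIELD_BITS) - 1) * BUF_SIZE_GRANULARITY)

int main(void)
{
	printf("max programmable: %d bytes\n", BUF_SIZE_MAX); /* 65528 */
	printf("64K page:         %d bytes\n", 64 * 1024);    /* 65536 */
	return 0;
}

A 64K value therefore cannot be represented in the field, which matches
the broken RX behavior the patch describes.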

Best regards,
Marcin

>
>
> >
> > Signed-off-by: Marcin Wojtas <mw@...ihalf.com>
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index e5397c8..61b2349 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -408,7 +408,6 @@ struct mvneta_port {
> >       struct mvneta_pcpu_stats __percpu       *stats;
> >
> >       int pkt_size;
> > -     unsigned int frag_size;
> >       void __iomem *base;
> >       struct mvneta_rx_queue *rxqs;
> >       struct mvneta_tx_queue *txqs;
> > @@ -2905,7 +2904,9 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
> >       if (!pp->bm_priv) {
> >               /* Set Offset */
> >               mvneta_rxq_offset_set(pp, rxq, 0);
> > -             mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
> > +             mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> > +                                     PAGE_SIZE :
> > +                                     MVNETA_RX_BUF_SIZE(pp->pkt_size));
> >               mvneta_rxq_bm_disable(pp, rxq);
> >               mvneta_rxq_fill(pp, rxq, rxq->size);
> >       } else {
> > @@ -3760,7 +3761,6 @@ static int mvneta_open(struct net_device *dev)
> >       int ret;
> >
> >       pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
> > -     pp->frag_size = PAGE_SIZE;
> >
> >       ret = mvneta_setup_rxqs(pp);
> >       if (ret)
>
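
As a side note on the hunk above, here is a quick user-space sketch of
which value ends up in BufferSize on 4K- and 64K-page systems. The
simplified RX_PKT_SIZE/RX_BUF_SIZE macros are hypothetical stand-ins
that only mirror the shape of the driver's MVNETA_RX_PKT_SIZE() and
MVNETA_RX_BUF_SIZE(), not their exact definitions:

#include <stdio.h>

#define SZ_64K            (64 * 1024)
#define NET_SKB_PAD_GUESS 64                       /* assumed headroom */
/* Rough stand-in for MVNETA_RX_PKT_SIZE(): MTU + Ethernet header (14),
 * VLAN tag (4), FCS (4) and Marvell header (2); alignment omitted. */
#define RX_PKT_SIZE(mtu)  ((mtu) + 14 + 4 + 4 + 2)
/* Rough stand-in for MVNETA_RX_BUF_SIZE(): packet size + headroom. */
#define RX_BUF_SIZE(pkt)  ((pkt) + NET_SKB_PAD_GUESS)

static long buf_size(long page_size, int mtu)
{
	/* Mirrors the patched condition in mvneta_rxq_hw_init(). */
	return page_size < SZ_64K ? page_size
				  : RX_BUF_SIZE(RX_PKT_SIZE(mtu));
}

int main(void)
{
	printf("4K pages,  MTU 1500: %ld bytes\n", buf_size(4096, 1500));
	printf("64K pages, MTU 1500: %ld bytes\n", buf_size(SZ_64K, 1500));
	return 0;
}

On 4K pages the whole page still fits in the field, so behavior is
unchanged there; only the 64K case falls back to the MTU-derived size.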
