Message-Id: <20181216.124120.731491122771817496.davem@davemloft.net>
Date: Sun, 16 Dec 2018 12:41:20 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: mw@...ihalf.com
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
netdev@...r.kernel.org, linux@...linux.org.uk,
maxime.chevallier@...tlin.com, thomas.petazzoni@...tlin.com,
gregory.clement@...tlin.com, antoine.tenart@...tlin.com,
stefanc@...vell.com, nadavh@...vell.com, jaz@...ihalf.com
Subject: Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE
From: Marcin Wojtas <mw@...ihalf.com>
Date: Tue, 11 Dec 2018 13:56:49 +0100
> Recent changes in the mvneta driver reworked the allocation
> and handling of ingress buffers to use entire pages.
> Apart from that, in the SW BM scenario the HW must be informed
> via PRXDQS about the biggest possible incoming buffer
> that can be propagated by the RX descriptors.
>
> The BufferSize field was filled according to the MTU-dependent
> pkt_size value. A later change to PAGE_SIZE broke RX operation
> when using 64K pages, as the field is simply too small.
>
> This patch conditionally limits the value passed to the BufferSize
> field of the PRXDQS register, depending on the PAGE_SIZE used
> (a sketch follows the quoted message). On this occasion, also
> remove the now unused frag_size field of the mvneta_port structure.
>
> Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation method for SWBM")
> Signed-off-by: Marcin Wojtas <mw@...ihalf.com>
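For readers following along, here is a minimal sketch of the
conditional cap the commit message describes. The helper name and the
field limit are assumptions for illustration, not the actual mvneta
diff:

    /* Sketch only: MVNETA_MAX_RX_BUF_SIZE stands in for whatever the
     * PRXDQS BufferSize field can actually encode; this is an assumed
     * limit, not taken from the datasheet.
     */
    #define MVNETA_MAX_RX_BUF_SIZE	(SZ_64K - 1)

    static u32 mvneta_rx_buf_size(void)
    {
    	/* With 4K/8K/16K pages the whole page fits in BufferSize;
    	 * with 64K pages it does not, so clamp the value that gets
    	 * programmed into PRXDQS.
    	 */
    	return min_t(u32, PAGE_SIZE, MVNETA_MAX_RX_BUF_SIZE);
    }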
The discussion died on this, but the bug should be fixed.
So in the short term I am applying this and queueing it up for v4.19 -stable.
Thanks.