Message-ID: <09cc2500-5e6d-0b44-28f4-113ed88cf128@bitwise.fi>
Date:   Wed, 5 Dec 2018 15:58:43 +0200
From:   Anssi Hannula <anssi.hannula@...wise.fi>
To:     Claudiu.Beznea@...rochip.com
Cc:     Nicolas.Ferre@...rochip.com, davem@...emloft.net,
        netdev@...r.kernel.org, harini.katakam@...inx.com,
        michal.simek@...inx.com
Subject: Re: [PATCH 1/3] net: macb: fix random memory corruption on RX with
 64-bit DMA

On 5.12.2018 14:37, Claudiu.Beznea@...rochip.com wrote:
> Hi Anssi,

Hi, and thanks for looking at these.

> A few comments... Otherwise I tested this series on SAMA5D2 Xplained and
> SAMA5D4 Xplained boards under heavy traffic and it seems to behave OK.
>
> Thank you,
> Claudiu Beznea
>
> On 30.11.2018 20:21, Anssi Hannula wrote:
>> 64-bit DMA addresses are split in upper and lower halves that are
>> written in separate fields on GEM. For RX, bit 0 of the address is used
>> as the ownership bit (RX_USED). When the RX_USED bit is unset the
>> controller is allowed to write data to the buffer.
>>
>> The driver does not guarantee that the controller already sees the upper
>> half when the RX_USED bit is cleared, possibly resulting in the
>> controller writing an incoming frame to an address with an incorrect
>> upper half and therefore possibly corrupting unrelated system memory.
>>
>> Fix that by adding the necessary DMA memory barrier between the writes.
>>
>> This corruption was observed on a ZynqMP based system.
>>
>> Signed-off-by: Anssi Hannula <anssi.hannula@...wise.fi>
>> Fixes: fff8019a08b6 ("net: macb: Add 64 bit addressing support for GEM")
>> Cc: Nicolas Ferre <nicolas.ferre@...rochip.com>
>> Cc: Harini Katakam <harini.katakam@...inx.com>
>> Cc: Michal Simek <michal.simek@...inx.com>
>> ---
>>  drivers/net/ethernet/cadence/macb_main.c | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
>> index d8c7ca037ae3..0bc2aab7be40 100644
>> --- a/drivers/net/ethernet/cadence/macb_main.c
>> +++ b/drivers/net/ethernet/cadence/macb_main.c
>> @@ -682,6 +682,11 @@ static void macb_set_addr(struct macb *bp, struct macb_dma_desc *desc, dma_addr_
>>  	if (bp->hw_dma_cap & HW_DMA_CAP_64B) {
>>  		desc_64 = macb_64b_desc(bp, desc);
>>  		desc_64->addrh = upper_32_bits(addr);
>> +		/* The low bits of RX address contain the RX_USED bit, clearing
>> +		 * of which allows packet RX. Make sure the high bits are also
>> +		 * visible to HW at that point.
>> +		 */
>> +		dma_wmb();
> I think a wmb() would fit better here, so that on ARM it also forces the
> flushing of caches not affected by dmb(), by calling arm_heavy_mb().

Hmm, if we want to simply ensure ordering of the two writes (upper half,
lower half) to DMA memory, isn't dma_wmb() exactly for that purpose?
This situation seems to match the dma_wmb() example in
Documentation/memory-barriers.txt where data is updated before the
ownership bit.
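
Roughly the pattern I mean, as a minimal sketch along the lines of that
example (reproduced from memory, so it may not match the file word for
word; desc, DEVICE_OWN and the data variables are just the names used in
that kind of illustration):

	if (desc->status != DEVICE_OWN) {
		/* Do not read the data until we own the descriptor. */
		dma_rmb();

		/* Read/modify the descriptor data. */
		read_data = desc->data;
		desc->data = write_data;

		/* Flush the data write before the status update. */
		dma_wmb();

		/* Hand the descriptor back to the device. */
		desc->status = DEVICE_OWN;
	}

i.e. dma_wmb() alone is what orders the data write against the
ownership update for consistent DMA memory.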

If dma_wmb() = dmb(oshst) is not enough on ARM for this purpose,
shouldn't dma_wmb() itself be fixed instead?
Or maybe I'm missing some reason why dma_wmb() is not enough here but
would be OK in some other case (e.g. in the memory-barriers.txt
example)?

> Thank you,
> Claudiu Beznea
>
>>  	}
>>  #endif
>>  	desc->addr = lower_32_bits(addr);
>>
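
For clarity, the failure mode described in the commit message is roughly
the following interleaving (my reading of the GEM ownership handling;
the controller-side steps are illustrative, not taken from the GEM spec):

	/*
	 * Without a barrier between the two descriptor writes, the
	 * controller may observe them in either order:
	 *
	 *   CPU                              GEM controller
	 *   ---                              --------------
	 *   desc_64->addrh = new upper half
	 *   desc->addr = new lower half      sees RX_USED cleared
	 *   (addrh not yet visible)          reads the stale upper half
	 *                                    DMA-writes the frame to
	 *                                    {stale upper, new lower}
	 *
	 * The dma_wmb() added by the patch ensures the controller cannot
	 * see the cleared RX_USED bit before the new addrh value.
	 */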

-- 
Anssi Hannula / Bitwise Oy
+358 503803997
