Open Source and information security mailing list archives
Date:   Sun, 5 Jan 2020 12:22:29 +0200
From:   Liran Alon <>
To:     "Bshara, Saeed" <>
Cc:     "Machulsky, Zorik" <>,
        "Belgazal, Netanel" <>,
        "" <>,
        "" <>,
        "Jubran, Samih" <>,
        "Chauskin, Igor" <>,
        "Kiyanovski, Arthur" <>,
        "Schmeilin, Evgeny" <>,
        "Tzalik, Guy" <>,
        "Dagan, Noam" <>,
        "Matushevsky, Alexander" <>,
        "Pressman, Gal" <>,
        Håkon Bugge <>
Subject: Re: [PATCH 2/2] net: AWS ENA: Flush WCBs before writing new SQ tail
 to doorbell

Hi Saeed,

If I understand correctly, the device only becomes aware of new descriptors once the tail is updated by ena_com_write_sq_doorbell() using writel().
If that’s the case, then writel() guarantees that all previous writes to WB/UC memory are visible to the device before the write done by writel() itself.

If the device is allowed to fetch the packet payload at the moment the transmit descriptor is written into device memory using LLQ,
then ena_com_write_bounce_buffer_to_dev() should issue dma_wmb() before __iowrite64_copy(), instead of wmb(), and the
now-incorrect comment should be updated accordingly.
For example, this would allow x86 to use only a compiler barrier instead of executing an SFENCE.

Can you clarify the device's behaviour regarding when it is allowed to read the packet payload?
i.e. is it only after the write to the doorbell, or from the moment the transmit descriptor is written to the LLQ?


> On 5 Jan 2020, at 11:53, Bshara, Saeed <> wrote:
> Thanks Liran,
> I think we missed the payload visibility. The LLQ descriptor contains the header part of the packet; in theory we also need to make sure that all CPU writes to the packet payload are visible to the device. I bet that in practice those stores will be visible without an explicit barrier, but we had better stick to the rules.
> So we still need dma_wmb(). Also, that means the first patch can't simply remove the wmb(), as it may actually be taking care of the payload visibility.
> saeed
> From: Machulsky, Zorik
> Sent: Saturday, January 4, 2020 6:55 AM
> To: Liran Alon
> Cc: Belgazal, Netanel;;; Bshara, Saeed; Jubran, Samih; Chauskin, Igor; Kiyanovski, Arthur; Schmeilin, Evgeny; Tzalik, Guy; Dagan, Noam; Matushevsky, Alexander; Pressman, Gal; Håkon Bugge
> Subject: Re: [PATCH 2/2] net: AWS ENA: Flush WCBs before writing new SQ tail to doorbell
> On 1/3/20, 1:47 PM, "Liran Alon" <> wrote:
>     > On 2 Jan 2020, at 20:08, Liran Alon <> wrote:
>     > 
>     > AWS ENA NIC supports Tx SQ in Low Latency Queue (LLQ) mode (Also
>     > referred to as "push-mode"). In this mode, the driver pushes the
>     > transmit descriptors and the first 128 bytes of the packet directly
>     > to the ENA device memory space, while the rest of the packet payload
>     > is fetched by the device from host memory. For this operation mode,
>     > the driver uses a dedicated PCI BAR which is mapped as WC memory.
>     > 
>     > The function ena_com_write_bounce_buffer_to_dev() is responsible
>     > to write to the above mentioned PCI BAR.
>     > 
>     > When the write of new SQ tail to doorbell is visible to device, device
>     > expects to be able to read relevant transmit descriptors and packets
>     > headers from device memory. Therefore, driver should ensure
>     > write-combined buffers (WCBs) are flushed before the write to doorbell
>     > is visible to the device.
>     > 
>     > For some CPUs, this will be taken care of by writel(). For example,
>     > x86 Intel CPUs flush write-combined buffers when a read or write
>     > is done to UC memory (in our case, the doorbell). See the Intel SDM:
>     > "If the WC buffer is partially filled, the writes may be delayed until
>     > the next occurrence of a serializing event; such as, an SFENCE or MFENCE
>     > instruction, CPUID execution, a read or write to uncached memory, an
>     > interrupt occurrence, or a LOCK instruction execution."
>     > 
>     > However, other CPUs do not provide this guarantee. For example, x86
>     > AMD CPUs flush write-combined buffers only on a read from UC memory,
>     > not on a write to UC memory. See the AMD Software Optimization Guide
>     > for AMD Family 17h Processors, section 2.13.3 Write-Combining Operations.
>     Actually... After re-reading the AMD Optimization Guide, I see it is guaranteed that:
>     “Write-combining is closed if all 64 bytes of the write buffer are valid”.
>     And this is indeed always the case for AWS ENA LLQ, because as can be seen in
>     ena_com_config_llq_info(), desc_list_entry_size is either 128, 192 or 256, i.e. always
>     a multiple of 64 bytes.
>     So this patch could in theory be dropped, as for x86 Intel & AMD, and for ARM64 with the
>     current desc_list_entry_size values, it isn’t strictly necessary to guarantee that WC buffers are flushed.
>     I will let the AWS folks decide whether they prefer to apply this patch anyway, to make the WC flush explicit
>     and avoid hard-to-debug issues in case a non-multiple-of-64 size appears in the future, or
>     to drop this patch and instead add a WARN_ON() to ena_com_config_llq_info() in case desc_list_entry_size
>     is not a multiple of 64 bytes, to avoid taking a perf hit for no real value.
> Liran, thanks for this important info. If this is the case, I believe we should drop this patch, as it introduces an unnecessary branch
> in the data path. Agree with your WARN_ON() suggestion.
>     -Liran
