Date:   Tue, 18 Aug 2020 18:37:39 +0900
From:   Cho KyongHo <pullip.cho@...sung.com>
To:     Will Deacon <will@...nel.org>
Cc:     joro@...tes.org, catalin.marinas@....com,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, m.szyprowski@...sung.com,
        robin.murphy@....com, janghyuck.kim@...sung.com,
        hyesoo.yu@...sung.com
Subject: Re: [PATCH 1/2] dma-mapping: introduce relaxed version of dma sync

On Tue, Aug 18, 2020 at 09:28:53AM +0100, Will Deacon wrote:
> On Tue, Aug 18, 2020 at 04:43:10PM +0900, Cho KyongHo wrote:
> > Cache maintenance operations in most CPU architectures need a
> > memory barrier after the maintenance so that DMA devices view the
> > memory region correctly. The problem is that a memory barrier is
> > very expensive, and dma_[un]map_sg() and dma_sync_sg_for_{device|cpu}()
> > issue a memory barrier for every single sg entry. In some CPU
> > micro-architectures, a single memory barrier takes more time than a
> > cache clean on 4KiB. The cost becomes more serious as the number of
> > CPU cores grows.
> 
> Have you got higher-level performance data for this change? It's more likely
> that the DSB is what actually forces the prior cache maintenance to
> complete,

This patch does not skip the necessary DSB after cache maintenance. It
just removes the repeated DSB per sg entry and issues a single DSB once
cache maintenance on all sg entries is completed.
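
To illustrate the idea (the helper names below are hypothetical
stand-ins, not the actual functions this series adds; on arm64 the
barrier would be the DSB issued after the cache maintenance ops):

	/* Before: each per-entry helper does maintenance plus a barrier. */
	for_each_sg(sgl, sg, nents, i)
		arch_sync_dma_for_device(sg_phys(sg), sg->length, dir); /* CMO + DSB */

	/* After: barrier-less maintenance per entry, one barrier at the end. */
	for_each_sg(sgl, sg, nents, i)
		arch_sync_dma_for_device_relaxed(sg_phys(sg), sg->length, dir); /* CMO only */
	arch_dma_barrier(); /* single DSB orders all prior maintenance */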

> so it's important to look at the bigger picture, not just the
> apparent relative cost of these instructions.
> 
If by the bigger picture you mean the performance impact of this patch
on a complete user scenario, we are evaluating it in some
latency-sensitive scenarios. But I wonder whether a performance gain in
a platform/SoC-specific scenario is also persuasive.

> Also, it's a miracle that non-coherent DMA even works,

I am sorry, Will. I don't understand this. Can you let me know what
you mean by the above sentence?

> so I'm not sure
> that we should be complicating the implementation like this to try to
> make it "fast".
> 
I agree that this patch makes the implementation of the DMA API a bit
more complex, but I don't think it complicates it seriously.

> Will
> 

Thank you.

