Date:   Mon, 28 Oct 2019 11:59:04 +0000
From:   Robin Murphy <robin.murphy@....com>
To:     Will Deacon <will@...nel.org>, Christoph Hellwig <hch@....de>
Cc:     isaacm@...eaurora.org, iommu@...ts.linux-foundation.org,
        linux-kernel@...r.kernel.org, joro@...tes.org,
        m.szyprowski@...sung.com, pratikp@...eaurora.org,
        lmark@...eaurora.org
Subject: Re: [PATCH] iommu/dma: Add support for DMA_ATTR_SYS_CACHE

On 28/10/2019 11:24, Will Deacon wrote:
> Hi Christoph,
> 
> On Mon, Oct 28, 2019 at 08:41:56AM +0100, Christoph Hellwig wrote:
>> On Sat, Oct 26, 2019 at 03:12:57AM -0700, isaacm@...eaurora.org wrote:
>>> On 2019-10-25 22:30, Christoph Hellwig wrote:
>>>> The definition makes very little sense.
>>> Can you please clarify what part doesn’t make sense, and why?
>>
>> It looks like complete garbage to me.  That might just be because it
>> uses tons of terms I've never heard of and which aren't used anywhere
>> in the DMA API.  It also might be because it doesn't explain how the
>> flag might actually be practically useful.
> 
> Agreed. The way I /think/ it works is that on many SoCs there is a
> system/last-level cache (LLC) which effectively sits in front of memory for
> all masters. Even if a device isn't coherent with the CPU caches, we still
> want to be able to allocate into the LLC. Why this doesn't happen
> automatically is beyond me, but it appears that on these Qualcomm designs
> you actually have to set the memory attributes up in the page-table to
> ensure that the resulting memory transactions are non-cacheable for the CPU
> but cacheable for the LLC. Without any changes, the transactions are
> non-cacheable in both of them, which presumably has a performance cost.
> 
> But you can see that I'm piecing things together myself here. Isaac?

FWIW, that's pretty much how Pratik and Jordan explained it to me - the 
LLC sits directly in front of memory and is more or less transparent, 
although it might treat CPU and device accesses slightly differently (I 
don't remember exactly how the inner cacheability attribute interacts). 
Certain devices don't get much benefit from the LLC, hence the desire 
for finer-grained control of their outer allocation policy to avoid more 
thrashing than necessary. Furthermore, for stuff in the 
video/GPU/display area certain jobs benefit more than others, hence the 
desire to go even finer-grained than a per-device control in order to 
maximise LLC effectiveness.
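
To make the per-buffer idea concrete, here is a rough, self-contained sketch 
of the kind of translation the patch is aiming at. The flag values and the 
helper below are illustrative stand-ins rather than the actual dma-iommu 
code; only the DMA_ATTR_SYS_CACHE and IOMMU_QCOM_SYS_CACHE names come from 
the patches under discussion:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel's IOMMU prot flags */
#define IOMMU_READ              (1 << 0)
#define IOMMU_WRITE             (1 << 1)
#define IOMMU_CACHE             (1 << 2)  /* device is coherent with CPU caches */
#define IOMMU_QCOM_SYS_CACHE    (1 << 6)  /* allocate in the system cache (LLC) */

/* Illustrative stand-in for the proposed per-buffer DMA attribute */
#define DMA_ATTR_SYS_CACHE      (1UL << 14)

/*
 * Sketch of an attrs-to-prot translation: a device that is not coherent
 * with the CPU caches can still have its transactions allocate into the
 * LLC when the caller asks for it on a per-buffer basis.
 */
static int attrs_to_iommu_prot(bool coherent, unsigned long attrs)
{
        int prot = IOMMU_READ | IOMMU_WRITE;

        if (coherent)
                prot |= IOMMU_CACHE;
        else if (attrs & DMA_ATTR_SYS_CACHE)
                prot |= IOMMU_QCOM_SYS_CACHE;

        return prot;
}

int main(void)
{
        printf("prot = %#x\n", attrs_to_iommu_prot(false, DMA_ATTR_SYS_CACHE));
        return 0;
}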

Robin.

>>> This is
>>> really just an extension of this patch that got mainlined, so that clients
>>> that use the DMA API can use IOMMU_QCOM_SYS_CACHE as well:
>>> https://patchwork.kernel.org/patch/10946099/
>>>>   And without a user in the same series it is a complete no-go anyway.
>>> IOMMU_QCOM_SYS_CACHE does not have any current users in mainline, nor did
>>> it have any in the patch series in which it got merged, yet it is still
>>> present? Furthermore, there are plans to upstream support for one of our
>>> SoCs that may benefit from this, as seen here:
>>> https://www.spinics.net/lists/iommu/msg39608.html.
>>
>> Which means it should have never been merged.  As a general policy we do
>> not add code to the Linux kernel without actual users.
> 
> Yes, in this case I was hoping a user would materialise via a different
> tree, but it didn't happen, hence my post last week about removing this
> altogether:
> 
> https://lore.kernel.org/linux-iommu/20191024153832.GA7966@jcrouse1-lnx.qualcomm.com/T/#t
> 
> which I suspect prompted this patch that unfortunately fails to solve the
> problem.
> 
> Will
> 
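
For reference, the already-merged IOMMU_QCOM_SYS_CACHE flag from the 
patchwork link quoted above boils down to selecting a memory attribute that 
is non-cacheable on the inner (CPU-visible) side but write-back cacheable on 
the outer side, so only the LLC allocates. A simplified, standalone sketch 
of that selection follows; the attribute byte encodings are the standard 
ARMv8 MAIR ones, while the function and macro names are modelled on, not 
copied from, the io-pgtable code:

#include <stdint.h>
#include <stdio.h>

#define IOMMU_CACHE             (1 << 2)
#define IOMMU_QCOM_SYS_CACHE    (1 << 6)

/* ARMv8 MAIR attribute bytes: inner attr in bits [3:0], outer in [7:4] */
#define MEMATTR_NC              0x44    /* inner + outer non-cacheable */
#define MEMATTR_WBRWA           0xff    /* inner + outer write-back, RW-allocate */
#define MEMATTR_INC_OWBRWA      0xf4    /* inner non-cacheable, outer write-back */

/* Pick the memory attribute byte to encode into the IOMMU page table entry */
static uint8_t prot_to_memattr(int prot)
{
        if (prot & IOMMU_CACHE)
                return MEMATTR_WBRWA;       /* coherent: cacheable everywhere */
        if (prot & IOMMU_QCOM_SYS_CACHE)
                return MEMATTR_INC_OWBRWA;  /* bypass CPU caches, allocate in LLC */
        return MEMATTR_NC;                  /* bypass all caches */
}

int main(void)
{
        printf("memattr = %#x\n", prot_to_memattr(IOMMU_QCOM_SYS_CACHE));
        return 0;
}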
