Message-ID: <70adc2cc-f7aa-d4b9-7d7a-71f3ae99f16c@gmail.com>
Date:   Thu, 29 Mar 2018 13:44:59 +0200
From:   Christian König <ckoenig.leichtzumerken@...il.com>
To:     Logan Gunthorpe <logang@...tatee.com>,
        Christian König <christian.koenig@....com>,
        Christoph Hellwig <hch@...radead.org>
Cc:     linaro-mm-sig@...ts.linaro.org, amd-gfx@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
        linux-media@...r.kernel.org
Subject: Re: [PATCH 2/8] PCI: Add pci_find_common_upstream_dev()

On 28.03.2018 at 21:53, Logan Gunthorpe wrote:
>
> On 28/03/18 01:44 PM, Christian König wrote:
>> Well, isn't that exactly what dma_map_resource() is good for? As far as
>> I can see it makes sure IOMMU is aware of the access route and
>> translates a CPU address into a PCI Bus address.
>> I'm using that with the AMD IOMMU driver and at least there it works
>> perfectly fine.
> Yes, it would be nice, but no arch has implemented this yet. We are just
> lucky in the x86 case because that arch is simple and doesn't need to do
> anything for P2P (partially due to the Bus and CPU addresses being the
> same). But in the general case, you can't rely on it.

Well, the fact that no arch has implemented it yet doesn't mean that we 
don't have the right interface to do it.
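
Just to illustrate the interface I mean, here is a rough sketch of 
mapping a peer device's BAR with dma_map_resource() (the helper name 
and BAR index are made up, not code from any existing driver; error 
handling trimmed):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/*
 * Sketch: map a BAR of a peer PCI device (peer_pdev) into the DMA
 * address space of the device that will issue the accesses (dma_pdev).
 * Without an IOMMU this typically just hands back the bus address;
 * with e.g. the AMD IOMMU it sets up a translation first.
 */
static dma_addr_t map_peer_bar(struct pci_dev *dma_pdev,
			       struct pci_dev *peer_pdev, int bar)
{
	phys_addr_t phys = pci_resource_start(peer_pdev, bar);
	resource_size_t size = pci_resource_len(peer_pdev, bar);
	dma_addr_t addr;

	addr = dma_map_resource(&dma_pdev->dev, phys, size,
				DMA_BIDIRECTIONAL, 0);
	if (dma_mapping_error(&dma_pdev->dev, addr))
		return 0; /* 0 used as "invalid" here for brevity */

	return addr;
}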

>>>> Yeah, but not for ours. See, if you want to do real peer 2 peer you need
>>>> to take both the operation as well as the direction into account.
>>> Not sure what you are saying here... I'm pretty sure we are doing "real"
>>> peer 2 peer...
>>>
>>>> For example, when you can do writes from A to B that doesn't mean
>>>> that writes from B to A work. And reads are generally less likely to
>>>> work than writes, etc...
>>> If both devices are behind a switch then the PCI spec guarantees that A
>>> can both read and write B and vice versa.
>> Sorry to say that, but I know a whole bunch of PCI devices which
>> horribly ignore that.
> Can you elaborate? As far as the device is concerned it shouldn't know
> whether a request comes from a peer or from the host. If it does do
> crazy stuff like that it's well out of spec. It's up to the switch (or
> root complex if good support exists) to route the request to the device,
> and it's the root complex that tends to be the one dropping the load
> requests, which causes the asymmetries.

Devices integrated into the CPU usually only "claim" to be PCIe devices. 
In reality their memory request paths go directly through the integrated 
northbridge. The reason for this is simply better throughput/latency.

That is hidden from the software; for example, the BIOS just allocates 
address space for the BARs as if it were a normal PCIe device.

The crux is that when you then do peer-to-peer, your requests simply go 
into nirvana and are not handled by anything, because the BARs are only 
visible from the CPU side of the northbridge.
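
To make that concrete, the kind of check this is about is roughly: only 
attempt peer-to-peer when both devices hang off a common upstream bridge, 
so the traffic never has to be routed through the root complex/northbridge 
at all. A rough sketch (helper name made up, not the code from the patch 
under discussion):

#include <linux/pci.h>

/*
 * Return true if 'a' and 'b' share an upstream PCI bridge, i.e. TLPs
 * between them can be routed below a common switch. A real check would
 * additionally have to verify that the common ancestor is actually a
 * switch port and not just a root port.
 */
static bool devs_share_upstream_bridge(struct pci_dev *a, struct pci_dev *b)
{
	struct pci_dev *up_a, *up_b;

	for (up_a = pci_upstream_bridge(a); up_a;
	     up_a = pci_upstream_bridge(up_a)) {
		for (up_b = pci_upstream_bridge(b); up_b;
		     up_b = pci_upstream_bridge(up_b)) {
			if (up_a == up_b)
				return true;
		}
	}

	return false;
}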

Regards,
Christian.

>
> Logan
