Message-ID: <CADnq5_PP_COGHxLdDtfnLrho8RNXLQFHc5s07+g55d9oXvB6rg@mail.gmail.com>
Date: Thu, 29 Mar 2018 10:37:44 -0400
From: Alex Deucher <alexdeucher@...il.com>
To: Logan Gunthorpe <logang@...tatee.com>
Cc: linaro-mm-sig@...ts.linaro.org,
amd-gfx list <amd-gfx@...ts.freedesktop.org>,
LKML <linux-kernel@...r.kernel.org>,
Maling list - DRI developers
<dri-devel@...ts.freedesktop.org>,
linux-media <linux-media@...r.kernel.org>
Subject: Re: [PATCH 2/8] PCI: Add pci_find_common_upstream_dev()
Sorry, didn't mean to drop the lists here. re-adding.
On Wed, Mar 28, 2018 at 4:05 PM, Alex Deucher <alexdeucher@...il.com> wrote:
> On Wed, Mar 28, 2018 at 3:53 PM, Logan Gunthorpe <logang@...tatee.com> wrote:
>>
>>
>> On 28/03/18 01:44 PM, Christian König wrote:
>>> Well, isn't that exactly what dma_map_resource() is good for? As far as
>>> I can see, it makes sure the IOMMU is aware of the access route and
>>> translates a CPU address into a PCI bus address.
>>
>>> I'm using that with the AMD IOMMU driver and at least there it works
>>> perfectly fine.
>>
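(For context, a minimal sketch of the dma_map_resource() usage described above; the helper name map_peer_bar() and the importer/peer naming are illustrative, not taken from this patch set.)

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Map a peer device's BAR through the importing device's IOMMU so the
 * importer can DMA to it.  Illustrative sketch only. */
static dma_addr_t map_peer_bar(struct pci_dev *importer, struct pci_dev *peer,
                               int bar)
{
        phys_addr_t phys = pci_resource_start(peer, bar); /* CPU address of the BAR */
        size_t size = pci_resource_len(peer, bar);
        dma_addr_t dma;

        dma = dma_map_resource(&importer->dev, phys, size,
                               DMA_BIDIRECTIONAL, 0);
        if (dma_mapping_error(&importer->dev, dma))
                return 0; /* mapping failed */

        return dma; /* bus address the importer's DMA engine should use */
}

/* Tear the mapping down again when done. */
static void unmap_peer_bar(struct pci_dev *importer, dma_addr_t dma, size_t size)
{
        dma_unmap_resource(&importer->dev, dma, size, DMA_BIDIRECTIONAL, 0);
}
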
>> Yes, it would be nice, but no arch has implemented this yet. We are just
>> lucky in the x86 case because that arch is simple and doesn't need to do
>> anything for P2P (partially due to the Bus and CPU addresses being the
>> same). But in the general case, you can't rely on it.
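(Purely as an illustration of the bus-vs-CPU address point: the PCI core already exposes both views of a BAR, and on x86 they normally match, which is why no extra translation is needed there. show_bar_addresses() below is a hypothetical example.)

#include <linux/pci.h>

static void show_bar_addresses(struct pci_dev *pdev, int bar)
{
        /* CPU physical address of the BAR, as the host sees it */
        resource_size_t cpu_addr = pci_resource_start(pdev, bar);
        /* Bus address of the same BAR, as other PCI devices see it */
        pci_bus_addr_t bus_addr = pci_bus_address(pdev, bar);

        /* On x86 these are typically identical; on other architectures
         * they can differ, which is where dma_map_resource() (or an
         * equivalent translation step) comes in. */
        dev_info(&pdev->dev, "BAR%d: cpu %pa, bus %llx\n",
                 bar, &cpu_addr, (unsigned long long)bus_addr);
}
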
>
> Could we do something for the arches where it works? I feel like
> peer-to-peer has dragged out for years because everyone is trying to boil
> the ocean for all arches. There are a huge number of use cases for
> peer-to-peer on these "simple" architectures, which actually represent
> a good deal of the users who want this.
>
> Alex
>
>>
>>>>> Yeah, but not for ours. See, if you want to do real peer-to-peer you
>>>>> need to take both the operation and the direction into account.
>>>> Not sure what you are saying here... I'm pretty sure we are doing "real"
>>>> peer-to-peer...
>>>>
>>>>> For example, just because writes from A to B work doesn't mean that
>>>>> writes from B to A work. And reads are generally less likely to work
>>>>> than writes, etc...
>>>> If both devices are behind a switch then the PCI spec guarantees that A
>>>> can both read and write B and vice versa.
>>>
>>> Sorry to say that, but I know a whole bunch of PCI devices which
>>> horribly ignore that.
>>
>> Can you elaborate? As far as the device is concerned, it shouldn't know
>> whether a request comes from a peer or from the host; if it does crazy
>> stuff like that, it's well out of spec. It's up to the switch (or the
>> root complex, if it has good support) to route the request to the device,
>> and it's the root complex that tends to drop the load requests, which
>> causes the asymmetries.
>>
>> Logan
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx@...ts.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
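
(For reference, the topology question debated above, whether two endpoints sit below a common upstream bridge such as the same switch, can be sketched with existing PCI core helpers. The hypothetical devs_share_upstream_bridge() below is illustrative only and is not the pci_find_common_upstream_dev() added by this series.)

#include <linux/pci.h>

/* Walk up from "a" and "b" looking for a shared ancestor bridge.
 * Note this also matches when the only common ancestor is a root port,
 * so a real implementation would additionally check the port type
 * (e.g. with pci_pcie_type()) before assuming switch-level P2P works. */
static bool devs_share_upstream_bridge(struct pci_dev *a, struct pci_dev *b)
{
        struct pci_dev *up_a, *up_b;

        for (up_a = pci_upstream_bridge(a); up_a; up_a = pci_upstream_bridge(up_a))
                for (up_b = pci_upstream_bridge(b); up_b; up_b = pci_upstream_bridge(up_b))
                        if (up_a == up_b)
                                return true;

        return false;
}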