Message-ID: <20181120091650.0000419a@huawei.com>
Date: Tue, 20 Nov 2018 09:16:50 +0000
From: Jonathan Cameron <jonathan.cameron@...wei.com>
To: Jason Gunthorpe <jgg@...pe.ca>
CC: Kenneth Lee <liguozhu@...ilicon.com>,
Leon Romanovsky <leon@...nel.org>,
Kenneth Lee <nek.in.cn@...il.com>,
Tim Sell <timothy.sell@...sys.com>,
<linux-doc@...r.kernel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Zaibo Xu <xuzaibo@...wei.com>, <zhangfei.gao@...mail.com>,
<linuxarm@...wei.com>, <haojian.zhuang@...aro.org>,
Christoph Lameter <cl@...ux.com>,
Hao Fang <fanghao11@...wei.com>,
Gavin Schenk <g.schenk@...elmann.de>,
"RDMA mailing list" <linux-rdma@...r.kernel.org>,
Zhou Wang <wangzhou1@...ilicon.com>,
"Doug Ledford" <dledford@...hat.com>,
Uwe Kleine-König
<u.kleine-koenig@...gutronix.de>,
David Kershner <david.kershner@...sys.com>,
Johan Hovold <johan@...nel.org>,
Cyrille Pitchen <cyrille.pitchen@...e-electrons.com>,
Sagar Dharia <sdharia@...eaurora.org>,
Jens Axboe <axboe@...nel.dk>, <guodong.xu@...aro.org>,
linux-netdev <netdev@...r.kernel.org>,
Randy Dunlap <rdunlap@...radead.org>,
<linux-kernel@...r.kernel.org>, Vinod Koul <vkoul@...nel.org>,
<linux-crypto@...r.kernel.org>,
Philippe Ombredanne <pombredanne@...b.com>,
Sanyog Kale <sanyog.r.kale@...el.com>,
"David S. Miller" <davem@...emloft.net>,
<linux-accelerators@...ts.ozlabs.org>,
"Jean-Philippe Brucker" <jean-philippe.brucker@....com>,
<iommu@...ts.linux-foundation.org>
Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce
+CC Jean-Philippe and iommu list.
On Mon, 19 Nov 2018 20:29:39 -0700
Jason Gunthorpe <jgg@...pe.ca> wrote:
> On Tue, Nov 20, 2018 at 11:07:02AM +0800, Kenneth Lee wrote:
> > On Mon, Nov 19, 2018 at 11:49:54AM -0700, Jason Gunthorpe wrote:
> > >
> > > On Mon, Nov 19, 2018 at 05:14:05PM +0800, Kenneth Lee wrote:
> > >
> > > > If the hardware cannot share a page table with the CPU, we then need some
> > > > way to update the device page table. This is what happens in ODP: it
> > > > invalidates the device's page table upon an mmu_notifier callback. But this
> > > > cannot solve the COW problem: say a user process A shares a page P with the
> > > > device, then A forks a new process B and continues to write to the page. By
> > > > COW, process B keeps the page P, while A gets a new page P'. But you have
> > > > no way to let the device know it should use P' rather than P.
> > >
> > > Is this true? I thought mmu_notifiers covered all these cases.
> > >
> > > The mm_notifier for A should fire if B causes the physical address of
> > > A's pages to change via COW.
> > >
> > > And this causes the device page tables to re-synchronize.
> >
> > I don't see such code. The current do_cow_fault() implementation has nothing to
> > do with mmu_notifier.
>
> Well, that sure sounds like it would be a bug in mmu_notifiers..
>
> But considering Jean's SVA stuff seems based on mmu notifiers, I have
> a hard time believing that it has any different behavior from RDMA's
> ODP, and if it does have different behavior, then it is probably just
> a bug in the ODP implementation.
>
> > > > In WarpDrive/uacce, we make this simple: if you have an IOMMU and it supports
> > > > SVM/SVA, everything will be fine, just like ODP implicit mode, and you don't
> > > > need to write any code for that, because it has been done by the IOMMU
> > > > framework. If it
> > >
> > > Looks like the IOMMU code uses mmu_notifier, so it is identical to
> > > IB's ODP. The only difference is that IB tends to have the IOMMU page
> > > table in the device, not in the CPU.
> > >
> > > The only case I know of that is different is the new-fangled CAPI
> > > stuff where the IOMMU can directly use the CPU's page table and the
> > > IOMMU page table (in device or CPU) is eliminated.
> >
> > Yes. We are not focusing on the current implementation. As mentioned in the
> > cover letter, we are expecting Jean-Philippe's SVA patches:
> > git://linux-arm.org/linux-jpb.
>
> This SVA stuff does not look comparable to CAPI, as it still requires
> maintaining separate IOMMU page tables.
>
> Also, those patches from Jean have a lot of references to
> mmu_notifiers (ie look at iommu_mmu_notifier).
>
> Are you really sure it is actually any different at all?
>
> > > Anyhow, I don't think a single instance of hardware should justify an
> > > entire new subsystem. Subsystems are hard to make and without multiple
> > > hardware examples there is no way to expect that it would cover any
> > > future use cases.
> >
> > Yes. That's our first expectation. We could keep it with our driver, but there
> > is no user-space driver support for any accelerator in the mainline kernel;
> > even the well-known QuickAssist has to be maintained out of tree. So we are
> > trying to see if people are interested in working together to solve the problem.
>
> Well, you should come with patches ack'ed by these other groups.
>
> > > If all your driver needs is to mmap some PCI bar space, route
> > > interrupts and do DMA mapping then mediated VFIO is probably a good
> > > choice.
> >
> > Yes. That is what was done in our RFCv1/v2, but we accepted Jerome's opinion
> > and are trying not to add complexity to the mm subsystem.
>
> Why would a mediated VFIO driver touch the mm subsystem? Sounds like
> you don't have a VFIO driver if it needs to do stuff like that...
>
> > > If it needs to do a bunch of other stuff, not related to PCI bar
> > > space, interrupts and DMA mapping (ie special code for compression,
> > > crypto, AI, whatever) then you should probably do what Jerome said and
> > > make a drivers/char/hisillicon_foo_bar.c that exposes just what your
> > > hardware does.
> >
> > Yes. If no other accelerator driver writers are interested, that is the
> > expectation :)
>
> I don't think it matters what other drivers do.
>
> If your driver does not need any other kernel code then VFIO is
> sensible. In this kind of world you will probably have an RDMA-like
> userspace driver that can bring this to a common user-space API, even
> if one driver uses VFIO and a different driver uses something else.
>
> > You create some connections (queues) to the NIC, RSA, and AI engines. Then you
> > get data directly from the NIC and pass the pointer to the RSA engine for
> > decryption. The CPU then finishes some data processing and passes the result
> > through to the AI engine for CNN calculation... This requires all the engines
> > to maintain the same address space by some means.
>
> How is this any different from what we have today?
>
> SVA is not something even remotely new, IB has been doing various
> versions of it for 20 years.
>
> Jason