Message-ID: <AM0PR04MB4481F3986CFB6D1EF26FA135889B0@AM0PR04MB4481.eurprd04.prod.outlook.com>
Date:   Fri, 25 Jan 2019 09:45:26 +0000
From:   Peng Fan <peng.fan@....com>
To:     "hch@...radead.org" <hch@...radead.org>,
        Stefano Stabellini <sstabellini@...nel.org>
CC:     "mst@...hat.com" <mst@...hat.com>,
        "jasowang@...hat.com" <jasowang@...hat.com>,
        "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
        "linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "virtualization@...ts.linux-foundation.org" 
        <virtualization@...ts.linux-foundation.org>,
        "luto@...nel.org" <luto@...nel.org>,
        "jgross@...e.com" <jgross@...e.com>,
        "boris.ostrovsky@...cle.com" <boris.ostrovsky@...cle.com>,
        Andy Duan <fugang.duan@....com>
Subject: RE: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain

Hi,

> -----Original Message-----
> From: hch@...radead.org [mailto:hch@...radead.org]
> Sent: January 24, 2019 5:14
> To: Stefano Stabellini <sstabellini@...nel.org>
> Cc: hch@...radead.org; Peng Fan <peng.fan@....com>; mst@...hat.com;
> jasowang@...hat.com; xen-devel@...ts.xenproject.org;
> linux-remoteproc@...r.kernel.org; linux-kernel@...r.kernel.org;
> virtualization@...ts.linux-foundation.org; luto@...nel.org; jgross@...e.com;
> boris.ostrovsky@...cle.com
> Subject: Re: [Xen-devel] [RFC] virtio_ring: check dma_mem for xen_domain
> 
> On Wed, Jan 23, 2019 at 01:04:33PM -0800, Stefano Stabellini wrote:
> > If vring_use_dma_api is actually supposed to return true when
> > dma_dev->dma_mem is set, then both Peng's patch and the patch I wrote
> > are not fixing the real issue here.
> >
> > I don't know enough about remoteproc to know where the problem
> > actually lies though.
> 
> The problem is the following:
> 
> Devices can declare a specific memory region that they want to use when
> the driver calls dma_alloc_coherent for the device.  This is done using
> the shared-dma-pool DT attribute, which comes in two variants that would
> be a little too much to explain here.
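
To illustrate for the list (my own minimal sketch, not from Christoph's
mail; the probe function name is a placeholder): the device's DT node
references the pool through the standard "memory-region" phandle, and
of_reserved_mem_device_init() attaches it so that dma_alloc_coherent()
allocates from that region:

#include <linux/dma-mapping.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

static int example_probe(struct platform_device *pdev)
{
        struct device *dev = &pdev->dev;
        dma_addr_t dma_handle;
        void *vaddr;
        int ret;

        /* Attach the memory-region referenced by the device's DT node. */
        ret = of_reserved_mem_device_init(dev);
        if (ret)
                return ret;

        /* Coherent allocations now come from the per-device pool. */
        vaddr = dma_alloc_coherent(dev, SZ_4K, &dma_handle, GFP_KERNEL);
        if (!vaddr) {
                of_reserved_mem_device_release(dev);
                return -ENOMEM;
        }

        /* ... use vaddr / dma_handle ... */
        return 0;
}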
> 
> remoteproc makes use of that because apparently the device can only
> communicate using that region.  But it then feeds memory obtained with
> dma_alloc_coherent back into the virtio code.  For that it calls
> vmalloc_to_page on the dma_alloc_coherent result, which is a huge no-go
> for the DMA API and only worked accidentally on a few platforms; arm64
> apparently just changed a few internals that made it stop working for
> remoteproc.
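
The problematic pattern, as I understand it (an illustrative fragment
only, not the actual remoteproc code):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

static void broken_pattern(struct device *dev, struct scatterlist *sg,
                           size_t size)
{
        dma_addr_t dma;
        void *va = dma_alloc_coherent(dev, size, &dma, GFP_KERNEL);

        /*
         * Broken: dma_alloc_coherent() may return an address that lives
         * in neither the linear map nor vmalloc space (e.g. a remapped
         * per-device pool), so vmalloc_to_page() can return a bogus
         * struct page that then ends up in the vring.
         */
        sg_set_page(sg, vmalloc_to_page(va), size, 0);
}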
> 
> The right answer is to not use the DMA API to allocate memory from a
> device-specific region, but to tie the driver directly into the DT
> reserved memory API in a way that allows it to easily obtain a struct
> device for it.
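
If I understand the suggestion, something like this (my sketch, with
placeholder names; "memory-region" is the standard reserved-memory
binding):

#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_reserved_mem.h>

static int example_map_region(struct device *dev)
{
        struct device_node *np;
        struct reserved_mem *rmem;
        void *va;

        np = of_parse_phandle(dev->of_node, "memory-region", 0);
        if (!np)
                return -ENODEV;

        rmem = of_reserved_mem_lookup(np);
        of_node_put(np);
        if (!rmem)
                return -ENODEV;

        /*
         * Manage rmem->base/rmem->size directly instead of going through
         * dma_alloc_coherent(), so whatever is handed to virtio is backed
         * by a mapping the driver controls.
         */
        va = memremap(rmem->base, rmem->size, MEMREMAP_WC);
        return va ? 0 : -ENOMEM;
}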

I just have a question.

Since vmalloc_to_page is fine for CMA areas, there is no need to take CMA
and per-device CMA into consideration, right?

We only need to implement a piece of code to handle the per-device
specific region using RESERVEDMEM_OF_DECLARE, like:

RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);

Then we implement the device_init callback and build a mapping between
pages and physical addresses. The scatterlists in the rpmsg driver could
then use struct page directly, with no need for vmalloc_to_page on the
per-device DMA memory; roughly as in the sketch below.
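(Placeholder names throughout, just to show the shape I have in mind.)

#include <linux/device.h>
#include <linux/init.h>
#include <linux/of_reserved_mem.h>

static int rpmsg_dma_device_init(struct reserved_mem *rmem,
                                 struct device *dev)
{
        /*
         * Hypothetical: stash the region so the rpmsg driver can build
         * its own page <-> phys mapping and fill scatterlists without
         * calling vmalloc_to_page() on coherent memory.
         */
        dev_set_drvdata(dev, rmem);
        return 0;
}

static const struct reserved_mem_ops rpmsg_dma_ops = {
        .device_init = rpmsg_dma_device_init,
};

static int __init rmem_rpmsg_dma_setup(struct reserved_mem *rmem)
{
        rmem->ops = &rpmsg_dma_ops;
        return 0;
}
RESERVEDMEM_OF_DECLARE(rpmsg_dma, "rpmsg-dma-pool", rmem_rpmsg_dma_setup);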

Is this the right way?

Thanks
Peng.

> 
> This is orthogonal to another issue: hardware virtio devices really
> always need to use the DMA API, otherwise we'll bypass features such as
> device-specific DMA pools, DMA offsets, cache flushing, and so on.
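
To illustrate what gets lost when the DMA API is bypassed (my fragment,
not from any patch in this thread):

#include <linux/dma-mapping.h>
#include <linux/io.h>

static int example_map(struct device *dev, void *buf, size_t len)
{
        dma_addr_t raw, mapped;

        /*
         * Bypassing the DMA API, as legacy virtio effectively does:
         * this ignores DMA offsets, IOMMUs, bounce buffers and any
         * cache maintenance the device needs.
         */
        raw = virt_to_phys(buf);
        (void)raw;

        /* Going through the DMA API applies all of the above per device. */
        mapped = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, mapped))
                return -ENOMEM;
        dma_unmap_single(dev, mapped, len, DMA_TO_DEVICE);
        return 0;
}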
