Date:   Mon, 09 Apr 2018 16:06:15 +0300
From:   Laurent Pinchart <laurent.pinchart@...asonboard.com>
To:     Robin Murphy <robin.murphy@....com>
Cc:     jacopo mondi <jacopo@...ndi.org>,
        Jacopo Mondi <jacopo+renesas@...ndi.org>,
        ysato@...rs.sourceforge.jp, dalias@...c.org, geert@...ux-m68k.org,
        linux-renesas-soc@...r.kernel.org,
        iommu@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
        linux-sh@...r.kernel.org
Subject: Re: [RFC v2 2/2] base: dma-mapping: Postpone page_to_pfn() on mmap()

Hello,

On Monday, 9 April 2018 14:11:22 EEST Robin Murphy wrote:
> On 09/04/18 08:25, jacopo mondi wrote:
> > Hi Robin, Laurent,
> > 
> >      a long time has passed, sorry about this.
> > 
> > On Wed, Nov 15, 2017 at 01:38:23PM +0000, Robin Murphy wrote:
> >> On 14/11/17 17:08, Jacopo Mondi wrote:
> >>> On the SH4 architecture, with the SPARSEMEM memory model, translating a
> >>> page to a pfn hangs the CPU. Postpone the translation to pfn until after
> >>> the dma_mmap_from_dev_coherent() call, which, when it succeeds, makes
> >>> the page translation unnecessary.
> >>> 
> >>> This patch was suggested by Laurent Pinchart, who is working on
> >>> submitting a proper fix mainline. Not sent for inclusion at the moment.
> >> 
> >> Y'know, I think this patch does have some merit by itself - until we know
> >> that cpu_addr *doesn't* represent some device-private memory which is not
> >> guaranteed to be backed by a struct page, calling virt_to_page() on it is
> >> arguably semantically incorrect, even if it might happen to be benign in
> >> most cases.
> > 
> > I still need to carry this patch in my trees to have a working DMA memory
> > mapping on SH4 platforms. My understanding from your comment is that
> > there may be a way forward for this patch; do you still think so?
> > Have you got any suggestions on how to improve it for eventual
> > inclusion?
> 
> As before, the change itself does seem reasonable; it might be worth
> rewording the commit message in more general terms rather than making it
> sound like an SH-specific workaround (which I really don't think it is),
> but otherwise I'd say just repost it as a non-RFC patch.

I can't think of a better fix, so this looks good to me :-) I agree with
Robin that the commit message should be reworded.
Robin's explanation of why virt_to_page() should be postponed until we know 
that cpu_addr represents memory that is guaranteed to be backed by a struct 
page is a good starting point. You can mention SH4 as an example of an 
architecture that will crash when calling virt_to_page() in such a case, but 
the fix isn't specific to SH4.
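
For reference, here is roughly how dma_common_mmap() should end up looking
with the change applied. This is only a sketch, reconstructed from the diff
quoted below and my recollection of drivers/base/dma-mapping.c, and hasn't
been tested:

int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
		    void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	int ret = -ENXIO;
#ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
	unsigned long user_count = vma_pages(vma);
	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned long off = vma->vm_pgoff;
	unsigned long pfn;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	/* Device-private memory is handled here, without ever touching a
	 * struct page. */
	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
		return ret;

	if (off < count && user_count <= (count - off)) {
		/* Only at this point do we know that cpu_addr is backed by
		 * a struct page, so virt_to_page() is safe to call. */
		pfn = page_to_pfn(virt_to_page(cpu_addr));
		ret = remap_pfn_range(vma, vma->vm_start,
				      pfn + off,
				      user_count << PAGE_SHIFT,
				      vma->vm_page_prot);
	}
#endif	/* !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */

	return ret;
}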

> >>> Suggested-by: Laurent Pinchart <laurent.pinchart@...asonboard.com>
> >>> Signed-off-by: Jacopo Mondi <jacopo+renesas@...ndi.org>
> >>> ---
> >>> 
> >>>   drivers/base/dma-mapping.c | 3 ++-
> >>>   1 file changed, 2 insertions(+), 1 deletion(-)
> >>> 
> >>> diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
> >>> index e584edd..73d64d3 100644
> >>> --- a/drivers/base/dma-mapping.c
> >>> +++ b/drivers/base/dma-mapping.c
> >>> @@ -227,8 +227,8 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> >>>   #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
> >>>   	unsigned long user_count = vma_pages(vma);
> >>>   	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> >>> -	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
> >>>   	unsigned long off = vma->vm_pgoff;
> >>> +	unsigned long pfn;
> >>> 
> >>>   	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> >>> 
> >>> @@ -236,6 +236,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> >>>   		return ret;
> >>>   	
> >>>   	if (off < count && user_count <= (count - off)) {
> >>> +		pfn = page_to_pfn(virt_to_page(cpu_addr));
> >>>   		ret = remap_pfn_range(vma, vma->vm_start,
> >>>   				      pfn + off,
> >>>   				      user_count << PAGE_SHIFT,
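
For completeness, the code path being fixed here is reached when a driver
maps a coherent buffer to userspace through dma_mmap_coherent(). A minimal,
purely illustrative example (the foo_* names and fields are made up for the
sake of the example; the buffer is assumed to come from dma_alloc_coherent()):

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct foo_device *foo = file->private_data;

	/* On architectures that use the generic implementation this ends
	 * up in dma_common_mmap(). */
	return dma_mmap_coherent(foo->dev, vma, foo->cpu_addr,
				 foo->dma_handle, foo->size);
}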

-- 
Regards,

Laurent Pinchart


