Message-ID: <4982C4D3.4040704@shipmail.org>
Date:	Fri, 30 Jan 2009 10:13:55 +0100
From:	Thomas Hellström <thomas@...pmail.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Jesse Barnes <jbarnes@...tuousgeek.org>,
	Dave Airlie <airlied@...ux.ie>, linux-kernel@...r.kernel.org,
	kerolasa@....fi, Laurent Pinchart <laurent.pinchart@...net.be>,
	Hugh Dickins <hugh@...itas.com>, dri-devel@...ts.sourceforge.net,
	kerolasa@...il.com
Subject: Re: PROBLEM: kernel BUG at drivers/gpu/drm/drm_fops.c:146!

Andrew Morton wrote:
> On Thu, 29 Jan 2009 19:50:17 -0800 Jesse Barnes <jbarnes@...tuousgeek.org> wrote:
>
>   
>> On Thursday, January 29, 2009 5:43 pm Dave Airlie wrote:
>>     
>>> On Fri, Jan 30, 2009 at 11:20 AM, Andrew Morton
>>>       
>>>> hm, I'm a bit surprised to see the drm code using `struct
>>>> address_space' and read_mapping_page() and unmap_mapping_range() and
>>>> such.  I thought those only worked with regular files and pagecache :)
>>>>
>>>> Is it possible to briefly explain what's going on there?
>>>>
>>>> What instance of address_space_operations does ->dev_mapping actually
>>>> point at?
>>>>         
>>> Okay, a bit tired and a headache coming on, but I'll try; maybe jbarnes
>>> can help out.
>>>
>>> We need to provide mappings to userspace that are backed by memory
>>> that can move around behind the mappings.
>>>
>>> So userspace wants a mapping for a GEM object via the AGP/GTT aperture
>>> instead of directly to the backing pages.
>>> Now, as the GEM object is backed by shmem, we can't tie the mapping to
>>> the shmem file descriptor we already have without hacking up the shmem
>>> mmap functionality, which seemed like a bad plan.
>>>
>>> So GEM uses the device inode to set up the mappings on. We just use a
>>> simple linear allocator to split up the device inode's address space
>>> and assign chunks to handles for different objects. The userspace app
>>> then uses the handle via mmap to get access to the VMAs. Now when GEM
>>> wants to move that object out of the GTT or to another area of the GTT,
>>> we need some way to invalidate it, so we use unmap_mapping_range(),
>>> which destroys all the mappings for the object in all the VMAs of all
>>> the processes currently mapping it.
>>>
>>> GEM's read_mapping_page() is distinct from this and has to do with the
>>> shmem interfacing.
>>>
>>> Not sure if this explains it or just makes it worse.
>>>       
>> Sounds right to me.  The offsets are just handles, not real file objects or 
>> backing store addresses.  We use them to take advantage of all the inode 
>> address mapping helpers, since they track stuff for us.
>>
>> That said, unmap_mapping_range may not be the best way to do this; basically 
>> we need a way to invalidate a given process's mapping of a GTT range (which 
>> in turn is backed by real RAM).  If there's some other way we should be doing 
>> this I'm all ears.
>>     
>
> Well, we'd need to call in the big guns on this one - I've already
> stirred Hugh ;)
>
> unmap_mapping_range() is basically a truncate thing - it shoots down
> all mappings of a range of a *file*.  Across all processes in the
> machine which map that file.
>
> If that isn't what you want to do (and it sounds that way) then you'd
> want to use something which is mm_struct (or vma) centric, rather than
> file-centric.  zap_page_range(), methinks.
>
>   
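
As a toy illustration of the linear-allocator idea Dave describes above:
a user-space sketch with made-up names, not the actual drm code (which
used a proper range manager). It just carves a fake device address space
into page-aligned chunks and hands back each chunk's start offset, which
is what user space would later pass to mmap() as the handle:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

/* Toy bump allocator over a fake device address space. */
static uint64_t next_offset;

/* Round the object size up to whole pages and hand back the chunk's
 * start offset; in drm that offset is the handle user space mmap()s. */
static uint64_t alloc_map_offset(uint64_t size)
{
	uint64_t offset = next_offset;

	size = (size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
	next_offset += size;
	return offset;
}

int main(void)
{
	printf("object A mapped at offset %#llx\n",
	       (unsigned long long)alloc_map_offset(8192));
	printf("object B mapped at offset %#llx\n",
	       (unsigned long long)alloc_map_offset(6000));
	return 0;
}
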
I guess I was the one who started using this function, so here is some 
explanation:

When the drm device is used to provide address space for buffers, 
user-space actually sees it as a file in which buffers are laid out 
linearly, each at a distinct offset. To access a certain buffer you need 
to lseek() to the correct offset and then read(), write() or, in the 
more common use, mmap() / munmap().
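
To illustrate that user-space view with a hedged sketch: assuming the
driver has already handed back a fake offset for the buffer (the
map_offset value below is made up; in practice it comes from a
driver-specific ioctl), mapping the buffer is an ordinary mmap() of the
device node:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* map_offset would come from a driver-specific ioctl; made up here */
	off_t map_offset = 0x100000;
	size_t size = 4096;

	int fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* mmap the buffer through the device node at its fake offset */
	void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			 fd, map_offset);
	if (buf == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	memset(buf, 0, size);	/* faults pages in via the driver's vm_ops */

	munmap(buf, size);
	close(fd);
	return 0;
}
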

When looking through its implementation, unmap_mapping_range() seemed to 
do exactly what I wanted, namely to kill all user-space mappings, in all 
vmas of all processes, of a part of the device address space. It also 
saves us from having to store, inside the drm device, a list of all vmas 
that map it.
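
For reference, a minimal sketch of that invalidation step. The object
layout below is hypothetical (the struct and field names are illustrative
only); unmap_mapping_range() itself is the real kernel function, taking
the mapping, the start offset, the length and an even_cows flag:

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical GEM-style object; field names are illustrative only. */
struct toy_gem_object {
	struct address_space *dev_mapping; /* the drm device inode's mapping */
	loff_t map_offset;	/* offset handed to user space for mmap() */
	size_t size;		/* object size, page aligned */
};

/*
 * Shoot down every user-space PTE covering this object, in every vma of
 * every process currently mapping it.  The next fault re-binds the pages
 * wherever the object has moved to.
 */
static void toy_gem_release_mmap(struct toy_gem_object *obj)
{
	unmap_mapping_range(obj->dev_mapping, obj->map_offset, obj->size, 1);
}
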

What makes using unmap_mapping_range() on a device node with a 
well-defined offset-to-data mapping different from using it on a file?
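
(For contrast, a sketch of the vma-centric route Andrew suggests, using
zap_page_range() as it looked in kernels of that era, with a zap_details
argument. The per-object vma list is hypothetical; drm would have to
maintain it itself, which is exactly the bookkeeping that
unmap_mapping_range() spares us:)

#include <linux/list.h>
#include <linux/mm.h>

/* Hypothetical per-object vma tracking; drm would have to maintain this. */
struct toy_vma_entry {
	struct list_head head;
	struct vm_area_struct *vma;
};

/*
 * vma-centric teardown: walk our own list of vmas and zap the PTEs
 * covering the object in each one, instead of going through the
 * file-centric unmap_mapping_range().
 */
static void toy_gem_zap_mappings(struct list_head *vma_list,
				 unsigned long size)
{
	struct toy_vma_entry *entry;

	list_for_each_entry(entry, vma_list, head)
		zap_page_range(entry->vma, entry->vma->vm_start, size, NULL);
}
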

/Thomas