Message-ID: <4982D9AF.10005@shipmail.org>
Date:	Fri, 30 Jan 2009 11:42:55 +0100
From:	Thomas Hellström <thomas@...pmail.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
CC:	Jesse Barnes <jbarnes@...tuousgeek.org>,
	Dave Airlie <airlied@...ux.ie>, linux-kernel@...r.kernel.org,
	kerolasa@....fi, Laurent Pinchart <laurent.pinchart@...net.be>,
	Hugh Dickins <hugh@...itas.com>, dri-devel@...ts.sourceforge.net,
	kerolasa@...il.com
Subject: Re: PROBLEM: kernel BUG at drivers/gpu/drm/drm_fops.c:146!

Andrew Morton wrote:
> On Fri, 30 Jan 2009 10:13:55 +0100 Thomas Hellström <thomas@...pmail.org> wrote:
>
>   
>>>> Sounds right to me.  The offsets are just handles, not real file objects or 
>>>> backing store addresses.  We use them to take advantage of all the inode 
>>>> address mapping helpers, since they track stuff for us.
>>>>
>>>> That said, unmap_mapping_range may not be the best way to do this; basically 
>>>> we need a way to invalidate a given process's mappings of a GTT range (which 
>>>> in turn is backed by real RAM).  If there's some other way we should be doing 
>>>> this I'm all ears.
>>>>     
>>>>         
>>> Well, we'd need to call in the big guns on this one - I've already
>>> stirred Hugh ;)
>>>
>>> unmap_mapping_range() is basically a truncate thing - it shoots down
>>> all mappings of a range of a *file*.  Across all processes in the
>>> machine which map that file.
>>>
>>> If that isn't what you want to do (and it sounds that way) then you'd
>>> want to use something which is mm_struct (or vma) centric, rather than
>>> file-centric.  zap_page_range(), methinks.
>>>
>>>   
>>>       
>> I guess I was the one who started using this function, so some explanation:
>>
>> When the drm device is used to provide address space for buffers, 
>> user-space actually sees it as a file in which each buffer lives at a 
>> distinct offset, laid out in a linear fashion. To access a certain 
>> buffer you need to lseek() to the correct offset and then read(), 
>> write() or, more commonly, mmap()/munmap().
>>
>> When looking through its implementation, unmap_mapping_range() seemed to 
>> do exactly what I wanted, namely to kill all user-space mappings, in all 
>> vmas of all processes, of a given part of the device address space. 
>>     
>
> That's different from what Jesse said.  That _is_ a more appropriate
> use of unmap_mapping_range().  Although all the futzing that function
> does with truncate_count is now looking inappropriately-placed.
>
>   
Hmm, yes. I guess that was to fix a race with the old do_nopage()?
Since GEM and similar managers use vm_insert_pfn(), I guess that's
not really needed.
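
For reference, a minimal sketch of the pattern I mean (my_obj and
my_obj_gtt_pfn are made-up names; the shape is what GEM-style drivers
do: resolve the faulting address to an aperture pfn and insert it
directly, so no struct page is involved):

#include <linux/mm.h>

struct my_obj;			/* driver-private buffer object */
/* hypothetical helper: page index within the mapping -> aperture pfn */
extern unsigned long my_obj_gtt_pfn(struct my_obj *obj, pgoff_t page);

/* .fault handler for a vma set up with VM_PFNMAP at mmap time */
static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct my_obj *obj = vma->vm_private_data;
	unsigned long address = (unsigned long)vmf->virtual_address;
	pgoff_t page_offset = (address - vma->vm_start) >> PAGE_SHIFT;
	int ret;

	/* insert the pfn directly; no struct page, so nothing for the
	 * old do_nopage() truncate_count dance to protect */
	ret = vm_insert_pfn(vma, address, my_obj_gtt_pfn(obj, page_offset));
	switch (ret) {
	case 0:
	case -EBUSY:	/* another thread raced us and inserted the pte */
		return VM_FAULT_NOPAGE;
	case -ENOMEM:
		return VM_FAULT_OOM;
	default:
		return VM_FAULT_SIGBUS;
	}
}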
>> And it saves us from storing a list of all vmas mapping the device 
>> within the drm device.
>>
>> What makes usage of unmap_mapping_range() on a device node with a 
>> well-defined offset-to-data mapping different from using it on a file?
>>     
>
> umm, nothing I guess, if the driver sufficiently imitates a regular
> file.  It's unexpected (by me).  I don't think we wrote that code with
> this application in mind ;)
>
>
>   
No. It's a little odd ;), but we had a thorough discussion at that time 
(some two years ago) on LKML.

What strikes me now, though, is that if the struct address_space * 
differs between device nodes pointing to the same underlying device, we 
might be in trouble (is that what started this thread in the first 
place?), and we might have to resort to keeping track of all VMAs anyway...
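
Concretely, the shoot-down assumes a single struct address_space per
device, roughly like this (obj->mmap_offset and obj->size are
illustrative names for where the buffer sits in the device address
space, and dev->dev_mapping stands for the mapping recorded at first
open):

	/* revoke every process's PTEs for this buffer, device-wide */
	unmap_mapping_range(dev->dev_mapping,
			    obj->mmap_offset,	/* byte offset of the buffer */
			    obj->size,		/* length in bytes */
			    1);			/* even_cows */

If a second device node comes with a different inode, and hence a
different i_mapping, mappings established through that node are simply
missed by the call above.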

/Thomas



