lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <01adb7dd-589e-2cde-4fa9-68baa44c0976@nvidia.com>
Date:   Thu, 16 Jan 2020 12:16:30 -0800
From:   Ralph Campbell <rcampbell@...dia.com>
To:     Jason Gunthorpe <jgg@...lanox.com>
CC:     "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "nouveau@...ts.freedesktop.org" <nouveau@...ts.freedesktop.org>,
        "linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
        Jerome Glisse <jglisse@...hat.com>,
        "John Hubbard" <jhubbard@...dia.com>,
        Christoph Hellwig <hch@....de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ben Skeggs <bskeggs@...hat.com>, Shuah Khan <shuah@...nel.org>
Subject: Re: [PATCH v6 5/6] nouveau: use new mmu interval notifiers


On 1/16/20 8:00 AM, Jason Gunthorpe wrote:
> On Wed, Jan 15, 2020 at 02:09:47PM -0800, Ralph Campbell wrote:
> 
>> I don't understand the lifetime/membership issue. The driver is the only thing
>> that allocates, inserts, or removes struct mmu_interval_notifier and thus
>> completely controls the lifetime.
> 
> If the returned value is on the defered list it could be freed at any
> moment. The existing locks do not prevent it.
> 
>>>> +		ret = nouveau_svmm_interval_find(svmm, &range);
>>>> +		if (ret) {
>>>> +			up_read(&mm->mmap_sem);
>>>> +			return ret;
>>>> +		}
>>>> +		range.notifier_seq = mmu_interval_read_begin(range.notifier);
>>>>    		ret = hmm_range_fault(&range, 0);
>>>>    		up_read(&mm->mmap_sem);
>>>>    		if (ret <= 0) {
>>>
>>> I'm still not sure this is a better approach than what ODP does. It
>>> looks very expensive on the fault path..
>>>
>>> Jason
>>>
>>
>> ODP doesn't have this problem because users have to call ib_reg_mr()
>> before any I/O can happen to the process address space.
> 
> ODP supports a single 'full VA' call at process startup, just like
> these cases.
> 
>> That is when mmu_interval_notifier_insert() /
>> mmu_interval_notifier_remove() can be called and the driver doesn't
>> have to worry about the interval changing sizes or being removed
>> while I/O is happening.
> 
> No, for the 'ODP full process VA' (aka implicit ODP) mode it
> dynamically maintains a list of intervals. ODP chooses the align the
> dynamic intervals to it's HW page table levels, and not to SW VMAs.
> This is much simpler to manage and faster to fault, at the cost of
> capturing more VA for invalidations which have to be probed against
> the HW shadow PTEs.
> 
>> It isn't that expensive, there is an extra driver lock/unlock as
>> part of the lookup and possibly a find_vma() and kmalloc(GFP_ATOMIC)
>> for new intervals. Also, the deferred interval updates for munmap().
>> Compared to the cost of updating PTEs in the device and GPU fault
>> handling, this is minimal overhead.
> 
> Well, compared to ODP which does a single xa lookup with no lock to
> find its interval, this looks very expensive and not parallel.
> 
> I think if there is merit in having ranges cover the vmas and track
> changes then there is probably merit in having the core code provide
> much of that logic, not the driver.
> 
> But it would be interesting to see some kind of analysis on the two
> methods to decide if the complexity is worthwhile.
> 
> Jason
> 

Can you point me to the latest ODP code? Seems like my understanding is
quite off.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ