Date:   Tue, 3 Mar 2020 13:15:21 -0800
From:   Ralph Campbell <rcampbell@...dia.com>
To:     Jason Gunthorpe <jgg@...lanox.com>
CC:     <dri-devel@...ts.freedesktop.org>, <linux-rdma@...r.kernel.org>,
        <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        <nouveau@...ts.freedesktop.org>,
        Jerome Glisse <jglisse@...hat.com>,
        "John Hubbard" <jhubbard@...dia.com>,
        Christoph Hellwig <hch@....de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Ben Skeggs <bskeggs@...hat.com>
Subject: Re: [PATCH v2] nouveau/hmm: map pages after migration


On 3/3/20 4:42 AM, Jason Gunthorpe wrote:
> On Mon, Mar 02, 2020 at 05:00:23PM -0800, Ralph Campbell wrote:
>> When memory is migrated to the GPU, it is likely to be accessed by GPU
>> code soon afterwards. Instead of waiting for a GPU fault, map the
>> migrated memory into the GPU page tables with the same access permissions
>> as the source CPU page table entries. This preserves copy on write
>> semantics.
>>
>> Signed-off-by: Ralph Campbell <rcampbell@...dia.com>
>> Cc: Christoph Hellwig <hch@....de>
>> Cc: Jason Gunthorpe <jgg@...lanox.com>
>> Cc: "Jérôme Glisse" <jglisse@...hat.com>
>> Cc: Ben Skeggs <bskeggs@...hat.com>
>> ---
>>
>> Originally this patch was targeted for Jason's rdma tree since other HMM
>> related changes were queued there. Now that those have been merged, this
>> patch just contains changes to nouveau so it could go through any tree.
>> I guess Ben Skeggs' tree would be appropriate.
> 
> Yep
> 
>> +static inline struct nouveau_pfnmap_args *
>> +nouveau_pfns_to_args(void *pfns)
> 
> don't use static inline inside C files

OK.

>> +{
>> +	struct nvif_vmm_pfnmap_v0 *p =
>> +		container_of(pfns, struct nvif_vmm_pfnmap_v0, phys);
>> +
>> +	return container_of(p, struct nouveau_pfnmap_args, p);
> 
> And this should just be
> 
>     return container_of(pfns, struct nouveau_pfnmap_args, p.phys);

Much simpler, thanks.
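
For reference, a minimal sketch of what the helper might look like with both
review comments applied (no "static inline" in the .c file, and a single
container_of() straight to the outer struct); the actual v3 may of course
differ:

static struct nouveau_pfnmap_args *
nouveau_pfns_to_args(void *pfns)
{
	/* phys lives in the nvif_vmm_pfnmap_v0 member p of nouveau_pfnmap_args */
	return container_of(pfns, struct nouveau_pfnmap_args, p.phys);
}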

>> +static struct nouveau_svmm *
>> +nouveau_find_svmm(struct nouveau_svm *svm, struct mm_struct *mm)
>> +{
>> +	struct nouveau_ivmm *ivmm;
>> +
>> +	list_for_each_entry(ivmm, &svm->inst, head) {
>> +		if (ivmm->svmm->notifier.mm == mm)
>> +			return ivmm->svmm;
>> +	}
>> +	return NULL;
>> +}
> 
> Is this re-implementing mmu_notifier_get() ?
> 
> Jason

Not quite. This is called from an ioctl() on the GPU device file, which ends up
in nouveau_svmm_bind(); that function takes mmap_sem for reading, walks the VMAs
covering the address range passed in the ioctl() data, and migrates those pages
to GPU memory.
mmu_notifier_get() would try to take mmap_sem for writing, so calling it from that path would deadlock.
It is similar, though, in that the GPU specific process context (nouveau_svmm) has
to be looked up for the ioctl caller.
If find_get_mmu_notifier() were exported, I think that could work.
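
To make the locking point concrete, here is a rough sketch of the call path
described above (illustrative only, not the literal driver code):

	/*
	 * ioctl() on the GPU device file
	 *   -> nouveau_svmm_bind()
	 *        down_read(&mm->mmap_sem);
	 *        // walk the VMAs in the requested range and migrate the
	 *        // pages to GPU memory; the per-process nouveau_svmm is
	 *        // looked up (nouveau_find_svmm()) in this context
	 *        up_read(&mm->mmap_sem);
	 *
	 * mmu_notifier_get() would need mmap_sem for write, so calling it
	 * anywhere under the read lock above would deadlock.
	 */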
Now that I look at this again, there is an easier way to find the svmm and I see
some other bugs that need fixing. I'll post a v3 as soon as I get those written
and tested.

Thanks for the review.
