Message-ID: <33ed8bff-97f4-16c0-e4cb-fec18ff843c0@huawei.com>
Date: Tue, 30 Aug 2022 10:34:56 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Mike Kravetz <mike.kravetz@...cle.com>
CC: Muchun Song <songmuchun@...edance.com>,
David Hildenbrand <david@...hat.com>,
Michal Hocko <mhocko@...e.com>, Peter Xu <peterx@...hat.com>,
Naoya Horiguchi <naoya.horiguchi@...ux.dev>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Davidlohr Bueso <dave@...olabs.net>,
Prakash Sangappa <prakash.sangappa@...cle.com>,
James Houghton <jthoughton@...gle.com>,
Mina Almasry <almasrymina@...gle.com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Ray Fucillo <Ray.Fucillo@...ersystems.com>,
Andrew Morton <akpm@...ux-foundation.org>,
<linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/8] hugetlb: add vma based lock for pmd sharing
On 2022/8/30 6:24, Mike Kravetz wrote:
> On 08/27/22 17:30, Miaohe Lin wrote:
>> On 2022/8/25 1:57, Mike Kravetz wrote:
>>> Allocate a rw semaphore and hang off vm_private_data for
>>> synchronization use by vmas that could be involved in pmd sharing. Only
>>> add infrastructure for the new lock here. Actual use will be added in
>>> subsequent patch.
>>>
>>> Signed-off-by: Mike Kravetz <mike.kravetz@...cle.com>
>>
>> <snip>
>>
>>> +static void hugetlb_vma_lock_free(struct vm_area_struct *vma)
>>> +{
>>> + /*
>>> + * Only present in sharable vmas. See comment in
>>> + * __unmap_hugepage_range_final about the neeed to check both
>>
>> s/neeed/need/
>>
>>> + * VM_SHARED and VM_MAYSHARE in free path
>>
>> I think some of the checks around this patch might be wrong. As the above comment
>> says, we need to check both flags, so should we do something like below instead?
>>
>> if ((vma->vm_flags & (VM_MAYSHARE | VM_SHARED)) != (VM_MAYSHARE | VM_SHARED))
>>
>>> + */
>
> Thanks. I will update.
>
>>> + if (!vma || !(vma->vm_flags & (VM_MAYSHARE | VM_SHARED)))
>>> + return;
>>> +
>>> + if (vma->vm_private_data) {
>>> + kfree(vma->vm_private_data);
>>> + vma->vm_private_data = NULL;
>>> + }
>>> +}
>>> +
>>> +static void hugetlb_vma_lock_alloc(struct vm_area_struct *vma)
>>> +{
>>> + struct rw_semaphore *vma_sema;
>>> +
>>> + /* Only establish in (flags) sharable vmas */
>>> + if (!vma || !(vma->vm_flags & VM_MAYSHARE))
>>> + return;
>>> +
>>> + /* Should never get here with non-NULL vm_private_data */
>>
>> We can get here with non-NULL vm_private_data when called from hugetlb_vm_op_open during fork?
>
> Right!
>
> In fork, we allocate a new semaphore in hugetlb_dup_vma_private, and then
> shortly afterwards call hugetlb_vm_op_open.
>
> It works as is, and I can update the comment. However, I wonder if we should
> just clear vm_private_data in hugetlb_dup_vma_private and let hugetlb_vm_op_open
> do the allocation.
I think it's a good idea. It would also avoid allocating memory for vma_lock in move_vma(), where
clear_vma_resv_huge_pages() is called and the corresponding vma is then freed right away via do_munmap(). But maybe I'm missing something.
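
Something like the below (completely untested) sketch is what I have in mind, assuming
hugetlb_vm_op_open() keeps calling hugetlb_vma_lock_alloc() as in this patch:

static void hugetlb_dup_vma_private(struct vm_area_struct *vma)
{
	/*
	 * Sketch only: clear whatever was inherited from the parent vma.
	 * For sharable vmas, hugetlb_vm_op_open() is then expected to
	 * allocate a fresh vma_lock via hugetlb_vma_lock_alloc().
	 */
	vma->vm_private_data = NULL;
}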
Thanks,
Miaohe Lin
>
>>
>> Also there's one missing change on comment:
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index d0617d64d718..4bc844a1d312 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -863,7 +863,7 @@ __weak unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
>> * faults in a MAP_PRIVATE mapping. Only the process that called mmap()
>> * is guaranteed to have their future faults succeed.
>> *
>> - * With the exception of reset_vma_resv_huge_pages() which is called at fork(),
>> + * With the exception of hugetlb_dup_vma_private() which is called at fork(),
>> * the reserve counters are updated with the hugetlb_lock held. It is safe
>> * to reset the VMA at fork() time as it is not in use yet and there is no
>> * chance of the global counters getting corrupted as a result of the values.
>>
>>
>> Otherwise this patch looks good to me. Thanks.
>
> Will update, Thank you!
>