Message-ID: <ded3c4dc-2df9-2ef2-add0-c17f0cdfaf32@redhat.com>
Date: Wed, 16 Aug 2023 11:49:03 +0200
From: David Hildenbrand <david@...hat.com>
To: Yan Zhao <yan.y.zhao@...el.com>
Cc: John Hubbard <jhubbard@...dia.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
pbonzini@...hat.com, seanjc@...gle.com, mike.kravetz@...cle.com,
apopple@...dia.com, jgg@...dia.com, rppt@...nel.org,
akpm@...ux-foundation.org, kevin.tian@...el.com,
Mel Gorman <mgorman@...hsingularity.net>
Subject: Re: [RFC PATCH v2 0/5] Reduce NUMA balance caused TLB-shootdowns in a
VM
On 16.08.23 11:06, Yan Zhao wrote:
> On Wed, Aug 16, 2023 at 09:43:40AM +0200, David Hildenbrand wrote:
>> On 15.08.23 04:34, John Hubbard wrote:
>>> On 8/14/23 02:09, Yan Zhao wrote:
>>> ...
>>>>> hmm_range_fault()-based memory management in particular might benefit
>>>>> from having NUMA balancing disabled entirely for the memremap_pages()
>>>>> region, come to think of it. That seems relatively easy and clean at
>>>>> first glance anyway.
>>>>>
>>>>> For other regions (allocated by the device driver), a per-VMA flag
>>>>> seems about right: VM_NO_NUMA_BALANCING ?
>>>>>
>>>> Thanks a lot for those good suggestions!
>>>> For VMs, when could a per-VMA flag be set?
>>>> It might be hard to do at mmap() time in QEMU, because a VMA may not be
>>>> used for DMA until after it's mapped into VFIO.
>>>> Then, should VFIO set this flag after it maps a range?
>>>> And could the flag be unset after device hot-unplug?
>>>>
>>>
>>> I'm hoping someone who thinks about VMs and VFIO often can chime in.
>>
>> At least QEMU could just set it on the applicable VMAs (as said by Yuan Yao,
>> using madvise).
>>
>> BUT, I do wonder what value there would be for autonuma to still be active
> Currently the highest MADV_* value is 25 (#define MADV_COLLAPSE 25), and the
> madvise() behavior argument is an "int", so there is still room for a new
> advice value.
>
> But vma->vm_flags is an "unsigned long", which is already full, at least on 32-bit platforms.
I remember there were discussions about widening vm_flags on 32-bit as well. If
that's required, we might want to go down that path.
But do 32bit architectures even care about NUMA hinting? If not, just
ignore them ...
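Just to sketch what the QEMU side could look like if we go the madvise()
route (MADV_NO_NUMA_BALANCING and the helper below are purely made up;
nothing like that exists today, and 26 is just the next free advice value):

#include <errno.h>
#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical advice value; the real number would be assigned if/when
 * such a patch lands. */
#ifndef MADV_NO_NUMA_BALANCING
#define MADV_NO_NUMA_BALANCING 26
#endif

/* Exempt one guest memory mapping from NUMA hinting faults. */
static int ramblock_disable_numa_hinting(void *host_addr, size_t size)
{
        if (madvise(host_addr, size, MADV_NO_NUMA_BALANCING))
                return -errno;
        return 0;
}

QEMU would call something like that on each applicable mapping right after
mmap()ing it, or once it learns that the memory may get longterm-pinned.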
>
>> for the remainder of the hypervisor. If there is none, a prctl() would be
>> better.
> So, add a new field to "struct vma_numab_state" in the VMA, and use prctl()
> to update that field?
Rather a global toggle per MM; no need to update individual VMAs if we go
down that prctl() path, and no need to consume more memory per VMA.
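If we end up with such a per-MM toggle, the QEMU side would be a one-liner,
something along these lines (PR_SET_NUMA_BALANCING is again made up here;
IIRC something similar was proposed on the list in the past):

#include <errno.h>
#include <stdbool.h>
#include <sys/prctl.h>

/* Hypothetical prctl option: arg2 == 0 disables NUMA hinting for the
 * whole MM, arg2 == 1 re-enables it. */
#ifndef PR_SET_NUMA_BALANCING
#define PR_SET_NUMA_BALANCING 0x4e554d41   /* made-up value */
#endif

static int mm_set_numa_balancing(bool enable)
{
        if (prctl(PR_SET_NUMA_BALANCING, enable ? 1 : 0, 0, 0, 0))
                return -errno;
        return 0;
}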
[...]
>> We already do have a mechanism in QEMU to get notified when longterm-pinning
>> in the kernel might happen (and, therefore, MADV_DONTNEED must not be used):
>> * ram_block_discard_disable()
>> * ram_block_uncoordinated_discard_disable()
> It looks like this ram_block_discard allow/disallow state is global rather
> than per-VMA in QEMU.
Yes. Once you transition into "discard of any kind disabled", you can go
over all guest memory VMAs (RAMBlock) and issue an madvise() for them
(or alternatively, do the prctl() once).
We'll also have to handle new guest memory being created afterwards, but
that is easy.
Once you transition back to "discarding no longer disabled", you can go over
all guest memory VMAs (RAMBlock) and issue an madvise() for them again (or
alternatively, do the prctl() once).
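Roughly, on the QEMU side (RAMBLOCK_FOREACH() and the RAMBlock fields are
quoted from memory, and the advice values are hypothetical, along the lines
of the made-up MADV_NO_NUMA_BALANCING above; this is only meant to show the
shape of it):

/* Toggle NUMA hinting for all guest memory when entering/leaving the
 * "discard disabled" state. MADV_NO_NUMA_BALANCING and its counterpart
 * MADV_NUMA_BALANCING do not exist (yet); purely illustrative. */
static void ram_block_update_numa_hinting(bool disable)
{
        RAMBlock *block;
        int advice = disable ? MADV_NO_NUMA_BALANCING
                             : MADV_NUMA_BALANCING;

        RAMBLOCK_FOREACH(block) {
                if (!block->host) {
                        continue;   /* no mapping in the QEMU address space */
                }
                madvise(block->host, block->used_length, advice);
        }
}

New RAMBlocks created while in the "discard disabled" state would simply get
the same madvise() in the RAMBlock setup path.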
> So, do you mean that the kernel should provide a per-VMA allow/disallow
> mechanism, and it's up to user space to choose between the per-VMA, more
> complex way and the global, simpler way?
QEMU could go either way. The question is whether a per-VMA setting makes
sense for NUMA hinting.
--
Cheers,
David / dhildenb