Message-ID: <08D418F5-0FAA-4544-B6DE-FA2371D3AAF7@fb.com>
Date: Wed, 18 May 2022 18:31:46 +0000
From: Song Liu <songliubraving@...com>
To: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"peterz@...radead.org" <peterz@...radead.org>,
"ast@...nel.org" <ast@...nel.org>,
"bpf@...r.kernel.org" <bpf@...r.kernel.org>,
"Torvalds, Linus" <torvalds@...ux-foundation.org>,
Kernel Team <Kernel-team@...com>,
"song@...nel.org" <song@...nel.org>,
"mcgrof@...nel.org" <mcgrof@...nel.org>
Subject: Re: [PATCH bpf-next 5/5] bpf: use module_alloc_huge for bpf_prog_pack
> On May 18, 2022, at 9:49 AM, Edgecombe, Rick P <rick.p.edgecombe@...el.com> wrote:
>
> On Wed, 2022-05-18 at 06:34 +0000, Song Liu wrote:
>>>> I am not quite sure about the exact work needed here. Rick, would
>>>> you have time to enable VM_FLUSH_RESET_PERMS for huge pages? Given
>>>> the merge window is coming soon, I guess we need the current
>>>> workaround in 5.19.
>>>
>>> I would have a hard time squeezing that in now. The vmalloc part is
>>> easy; I think I already posted a diff. But first hibernate needs to
>>> be changed to not care about direct map page sizes.
>>
>> I guess I missed the diff, could you please send a link to it?
>
> https://lore.kernel.org/lkml/5bd16e2c06a2df357400556c6ae01bb5d3c5c32a.camel@intel.com/
>
> The remaining problem is that hibernate may encounter not-present (NP)
> pages when saving memory to disk. It resets them with CPA calls 4k at
> a time. So if a page is NP, hibernate needs it to already be mapped at
> 4k granularity, or it might need to split. I think hibernate should
> just utilize a different mapping to get at the page when it encounters
> this rare scenario. In that diff I put some locking so that hibernate
> couldn't race with a huge NP page, but then I thought we should just
> change hibernate.
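IIUC hibernate would then do something like the following instead of the
CPA reset (rough sketch only; hibernate_copy_unmapped_page() is a name I
made up, not something from your diff):

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Copy a page that is not-present in the direct map through a
 * temporary vmap() alias, so the direct map entry can stay NP and
 * never needs a CPA call or a split.
 */
static int hibernate_copy_unmapped_page(struct page *page, void *dst)
{
        void *src;

        /* Map the single page at a vmalloc address with default perms. */
        src = vmap(&page, 1, VM_MAP, PAGE_KERNEL);
        if (!src)
                return -ENOMEM;

        copy_page(dst, src);
        vunmap(src);
        return 0;
}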
I am not quite sure how to test the hibernate path. Given the merge
window is coming soon, how about we ship this patch in 5.19, and fix
VM_FLUSH_RESET_PERMS in a later release?
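For the record, once VM_FLUSH_RESET_PERMS works for huge pages, I would
expect the caller side in bpf_prog_pack to stay as simple as it is
today, roughly like this (a sketch with error handling dropped;
module_alloc_huge() is the helper this series adds):

        void *image = module_alloc_huge(size);

        if (image) {
                /* Have vfree() reset direct map perms and flush TLBs. */
                set_vm_flush_reset_perms(image);
                set_memory_ro((unsigned long)image, size >> PAGE_SHIFT);
                set_memory_x((unsigned long)image, size >> PAGE_SHIFT);
        }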
>
>>> I'm also not clear why we wouldn't want to use the prog pack
>>> allocator even if vmalloc huge pages are disabled. Doesn't it
>>> improve performance even with small page sizes, per your
>>> benchmarks? What is the downside to just always using it?
>>
>> With the current version, when huge pages are disabled, the prog pack
>> allocator will use 4kB pages for each pack. We still get about a 0.5%
>> performance improvement with 4kB prog packs.
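To be concrete, the fallback logic is roughly the following (paraphrased
from the patch, not copied verbatim; BPF_HPAGE_SIZE is the 2MB pack-size
constant from the series):

        static size_t select_bpf_prog_pack_size(void)
        {
                size_t size;
                void *ptr;

                /* Probe whether vmalloc can back a huge allocation. */
                size = BPF_HPAGE_SIZE * num_online_nodes();
                ptr = module_alloc_huge(size);
                if (!ptr || !is_vm_area_hugepages(ptr))
                        size = PAGE_SIZE;       /* fall back to 4kB packs */

                vfree(ptr);     /* vfree(NULL) is a no-op */
                return size;
        }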
>
> Oh, I thought you were comparing a 2MB-sized, small-page-mapped
> allocation to a 2MB-sized, huge-page-mapped allocation.
>
> It looks like the logic is to free a pack once it is empty, so with
> smaller packs you are more likely to let the pages go back to the page
> allocator, and future allocations would then break more direct map
> pages.
That's correct. This is the behavior of the current version in 5.18-rc7.
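Each pack tracks its allocations in a bitmap, and the whole pack goes
back to vmalloc once the last chunk is freed. Roughly (a sketch; field
and function names are illustrative, not the exact kernel/bpf/core.c
layout):

        struct bpf_prog_pack {
                struct list_head list;
                void *ptr;
                unsigned long bitmap[];         /* one bit per chunk */
        };

        static void prog_pack_free_chunk(struct bpf_prog_pack *pack,
                                         unsigned int pos,
                                         unsigned int nbits,
                                         unsigned int chunks_per_pack)
        {
                bitmap_clear(pack->bitmap, pos, nbits);
                if (bitmap_empty(pack->bitmap, chunks_per_pack)) {
                        /* Last chunk gone: return the pages. With 4kB
                         * packs this happens often, so the next
                         * allocation may split the direct map again.
                         */
                        list_del(&pack->list);
                        vfree(pack->ptr);
                        kfree(pack);
                }
        }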
>
> So I think that is not a fully apples-to-apples test of huge mapping
> benefits. I'd be surprised if there really was no huge mapping
> benefit, since it's been seen with core kernel text. Did you notice if
> the direct map breakage was different between the tests?
I didn't check specifically, but it is expected that 4kB prog packs
will cause more direct map breakage.
Thanks,
Song