Message-ID: <6843bf1b-88cd-19fd-ddb0-5052af7ef2ac@linux.ibm.com>
Date:   Sat, 17 Apr 2021 12:38:08 +0530
From:   Pratik Sampat <psampat@...ux.ibm.com>
To:     Roman Gushchin <guro@...com>
Cc:     Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
        Christoph Lameter <cl@...ux.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, pratik.r.sampat@...il.com
Subject: Re: [PATCH v3 0/6] percpu: partial chunk depopulation



On 17/04/21 1:33 am, Roman Gushchin wrote:
> On Sat, Apr 17, 2021 at 01:14:03AM +0530, Pratik Sampat wrote:
>>
>> On 17/04/21 12:39 am, Roman Gushchin wrote:
>>> On Sat, Apr 17, 2021 at 12:11:37AM +0530, Pratik Sampat wrote:
>>>> On 17/04/21 12:04 am, Roman Gushchin wrote:
>>>>> On Fri, Apr 16, 2021 at 11:57:03PM +0530, Pratik Sampat wrote:
>>>>>> On 16/04/21 10:43 pm, Roman Gushchin wrote:
>>>>>>> On Fri, Apr 16, 2021 at 08:58:33PM +0530, Pratik Sampat wrote:
>>>>>>>> Hello Dennis,
>>>>>>>>
>>>>>>>> I apologize for the clutter of logs before. I'm pasting the logs from before and
>>>>>>>> after the percpu test, both for the patchset applied on 5.12-rc6 and for the
>>>>>>>> vanilla 5.12-rc6 kernel.
>>>>>>>>
>>>>>>>> On 16/04/21 7:48 pm, Dennis Zhou wrote:
>>>>>>>>> Hello,
>>>>>>>>>
>>>>>>>>> On Fri, Apr 16, 2021 at 06:26:15PM +0530, Pratik Sampat wrote:
>>>>>>>>>> Hello Roman,
>>>>>>>>>>
>>>>>>>>>> I've tried the v3 patch series on a POWER9 and an x86 KVM setup.
>>>>>>>>>>
>>>>>>>>>> My results of the percpu_test are as follows:
>>>>>>>>>> Intel KVM 4CPU:4G
>>>>>>>>>> Vanilla 5.12-rc6
>>>>>>>>>> # ./percpu_test.sh
>>>>>>>>>> Percpu:             1952 kB
>>>>>>>>>> Percpu:           219648 kB
>>>>>>>>>> Percpu:           219648 kB
>>>>>>>>>>
>>>>>>>>>> 5.12-rc6 with the patchset applied
>>>>>>>>>> # ./percpu_test.sh
>>>>>>>>>> Percpu:             2080 kB
>>>>>>>>>> Percpu:           219712 kB
>>>>>>>>>> Percpu:            72672 kB
>>>>>>>>>>
>>>>>>>>>> I'm able to see an improvement comparable to the one you're seeing too.
>>>>>>>>>>
>>>>>>>>>> However, on POWERPC I'm unable to reproduce these improvements with the patchset in the same configuration.
>>>>>>>>>>
>>>>>>>>>> POWER9 KVM 4CPU:4G
>>>>>>>>>> Vanilla 5.12-rc6
>>>>>>>>>> # ./percpu_test.sh
>>>>>>>>>> Percpu:             5888 kB
>>>>>>>>>> Percpu:           118272 kB
>>>>>>>>>> Percpu:           118272 kB
>>>>>>>>>>
>>>>>>>>>> 5.12-rc6 with the patchset applied
>>>>>>>>>> # ./percpu_test.sh
>>>>>>>>>> Percpu:             6144 kB
>>>>>>>>>> Percpu:           119040 kB
>>>>>>>>>> Percpu:           119040 kB
>>>>>>>>>>
>>>>>>>>>> I'm wondering if there's any architecture-specific code that needs plumbing
>>>>>>>>>> here?
>>>>>>>>>>
>>>>>>>>> There shouldn't be. Can you send me the percpu_stats debug output before
>>>>>>>>> and after?
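
The percpu_stats output requested here comes from the allocator's debugfs interface, which is
available when the kernel is built with CONFIG_PERCPU_STATS=y. A minimal way to capture a
before/after pair, assuming debugfs is mounted at /sys/kernel/debug and using placeholder
output file names:

# mount -t debugfs none /sys/kernel/debug
# cat /sys/kernel/debug/percpu_stats > percpu_stats_before.txt
# ./percpu_test.sh
# cat /sys/kernel/debug/percpu_stats > percpu_stats_after.txt
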
>>>>>>>> I'll paste the whole debug stats before and after here.
>>>>>>>> 5.12-rc6 + patchset
>>>>>>>> -----BEFORE-----
>>>>>>>> Percpu Memory Statistics
>>>>>>>> Allocation Info:
>>>>>>> Hm, this looks highly suspicious. Here are your stats in a more compact form:
>>>>>>>
>>>>>>> Vanilla (left column: before the test, right column: after the test)
>>>>>>>
>>>>>>> nr_alloc            :         9038         nr_alloc            :        97046
>>>>>>> nr_dealloc          :         6992	   nr_dealloc          :        94237
>>>>>>> nr_cur_alloc        :         2046	   nr_cur_alloc        :         2809
>>>>>>> nr_max_alloc        :         2178	   nr_max_alloc        :        90054
>>>>>>> nr_chunks           :            3	   nr_chunks           :           11
>>>>>>> nr_max_chunks       :            3	   nr_max_chunks       :           47
>>>>>>> min_alloc_size      :            4	   min_alloc_size      :            4
>>>>>>> max_alloc_size      :         1072	   max_alloc_size      :         1072
>>>>>>> empty_pop_pages     :            5	   empty_pop_pages     :           29
>>>>>>>
>>>>>>>
>>>>>>> Patched (left column: before the test, right column: after the test)
>>>>>>>
>>>>>>> nr_alloc            :         9040         nr_alloc            :        97048
>>>>>>> nr_dealloc          :         6994	   nr_dealloc          :        95002
>>>>>>> nr_cur_alloc        :         2046	   nr_cur_alloc        :         2046
>>>>>>> nr_max_alloc        :         2208	   nr_max_alloc        :        90054
>>>>>>> nr_chunks           :            3	   nr_chunks           :           48
>>>>>>> nr_max_chunks       :            3	   nr_max_chunks       :           48
>>>>>>> min_alloc_size      :            4	   min_alloc_size      :            4
>>>>>>> max_alloc_size      :         1072	   max_alloc_size      :         1072
>>>>>>> empty_pop_pages     :           12	   empty_pop_pages     :           61
>>>>>>>
>>>>>>>
>>>>>>> So it looks like the number of chunks got bigger, as well as the number of
>>>>>>> empty_pop_pages? This contradicts what you wrote, so can you please make
>>>>>>> sure that the data is correct and we're not mixing up two cases?
>>>>>>>
>>>>>>> So it looks like for some reason sidelined (depopulated) chunks are not getting
>>>>>>> freed completely. But I struggle to explain why the initial empty_pop_pages is
>>>>>>> bigger with the same number of chunks.
>>>>>>>
>>>>>>> So, can you please apply the following patch and provide updated statistics?
>>>>>> Unfortunately, I'm not completely well versed in this area, but yes, the
>>>>>> empty_pop_pages number doesn't make sense to me either.
>>>>>>
>>>>>> I re-ran the numbers, trying to make sure my experiment setup is sane, but the
>>>>>> results remain the same.
>>>>>>
>>>>>> Vanilla (left column: before the test, right column: after the test)
>>>>>> nr_alloc            :         9040         nr_alloc            :        97048
>>>>>> nr_dealloc          :         6994	   nr_dealloc          :        94404
>>>>>> nr_cur_alloc        :         2046	   nr_cur_alloc        :         2644
>>>>>> nr_max_alloc        :         2169	   nr_max_alloc        :        90054
>>>>>> nr_chunks           :            3	   nr_chunks           :           10
>>>>>> nr_max_chunks       :            3	   nr_max_chunks       :           47
>>>>>> min_alloc_size      :            4	   min_alloc_size      :            4
>>>>>> max_alloc_size      :         1072	   max_alloc_size      :         1072
>>>>>> empty_pop_pages     :            4	   empty_pop_pages     :           32
>>>>>>
>>>>>> With the patchset + debug patch the results are as follows:
>>>>>> Patched (left column: before the test, right column: after the test)
>>>>>>
>>>>>> nr_alloc            :         9040         nr_alloc            :        97048
>>>>>> nr_dealloc          :         6994	   nr_dealloc          :        94349
>>>>>> nr_cur_alloc        :         2046	   nr_cur_alloc        :         2699
>>>>>> nr_max_alloc        :         2194	   nr_max_alloc        :        90054
>>>>>> nr_chunks           :            3	   nr_chunks           :           48
>>>>>> nr_max_chunks       :            3	   nr_max_chunks       :           48
>>>>>> min_alloc_size      :            4	   min_alloc_size      :            4
>>>>>> max_alloc_size      :         1072	   max_alloc_size      :         1072
>>>>>> empty_pop_pages     :           12	   empty_pop_pages     :           54
>>>>>>
>>>>>> With the extra tracing I can see 39 entries of "Chunk (sidelined)"
>>>>>> after the test was run. I don't see any entries for "Chunk (to depopulate)".
>>>>>>
>>>>>> I've snipped the results of the sidelined chunks because they went on for ~600
>>>>>> lines; if you need the full logs, let me know.
>>>>> Yes, please! That's the most interesting part!
>>>> Got it. Pasting the full logs from after the percpu experiment was completed.
>>> Thanks!
>>>
>>> Would you mind applying the following patch and testing again?
>>>
>>> --
>>>
>>> diff --git a/mm/percpu.c b/mm/percpu.c
>>> index ded3a7541cb2..532c6a7ebdfd 100644
>>> --- a/mm/percpu.c
>>> +++ b/mm/percpu.c
>>> @@ -2296,6 +2296,9 @@ void free_percpu(void __percpu *ptr)
>>>                                   need_balance = true;
>>>                                   break;
>>>                           }
>>> +
>>> +               chunk->depopulated = false;
>>> +               pcpu_chunk_relocate(chunk, -1);
>>>           } else if (chunk != pcpu_first_chunk && chunk != pcpu_reserved_chunk &&
>>>                      !chunk->isolated &&
>>>                      (pcpu_nr_empty_pop_pages[pcpu_chunk_type(chunk)] >
>>>
>> Sure thing.
>>
>> I see far fewer sidelined chunks. In one such test run I saw zero occurrences
>> of sidelined chunks.
>>
> So looking at the stats, it now works properly. Do you see any savings in
> comparison to vanilla? The size of the savings can significantly depend on the exact
> size of cgroup-related objects, how many of them fit into a single chunk, etc.
> So you might want to play with the numbers in the test...
>
> Anyway, thank you very much for the report and your work on testing follow-up
> patches! It helped to reveal a serious bug in the implementation (completely
> empty sidelined chunks were not released in some cases), which by pure
> coincidence wasn't triggered on x86.
>
> Thanks!
>
Unfortunately not; I don't see any savings from the test.

# ./percpu_test_roman.sh
Percpu:             6144 kB
Percpu:           122880 kB
Percpu:           122880 kB
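
For context, the experiment essentially just watches the Percpu: line of /proc/meminfo while
a large number of memory cgroups are created and then removed; the sketch below uses assumed
cgroup paths and counts rather than the actual percpu_test.sh:

# grep Percpu /proc/meminfo
# for i in $(seq 1 1000); do mkdir /sys/fs/cgroup/memory/pcpu_test_$i; done
# grep Percpu /proc/meminfo
# for i in $(seq 1 1000); do rmdir /sys/fs/cgroup/memory/pcpu_test_$i; done
# sleep 60
# grep Percpu /proc/meminfo

The third reading is the interesting one: with the depopulation series applied it should drop
back down once the empty chunks are reclaimed, as it did in the x86 numbers above (the sleep
just gives the asynchronous balance work some time to run).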

I had assumed that because POWER has a larger page size, we would also see higher
fragmentation, which could possibly lead to much larger savings.
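
The page-size difference itself is easy to confirm on both guests; the values below are the
usual defaults and an assumption about these particular setups:

x86 KVM guest:
# getconf PAGESIZE
4096

POWER9 KVM guest:
# getconf PAGESIZE
65536

Since depopulation reclaims memory at page granularity, each populated-but-unused percpu page
on the POWER9 guest ties up 16x as much memory as on the x86 guest, so both the potential
savings and the cost of fragmentation are larger here.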

I'll dive deeper into the patches and tweak the setup to see if I can
understand this behavior.

Thanks for helping me understand this patchset a little better, and I'm glad we
found a bug with the sidelined chunks!

I'll get back to you if I do find something interesting and need help
understanding it.

Thank you again,
Pratik
