Message-ID: <060b2056-35c3-dbd3-e097-a53423737e45@shopee.com>
Date:   Thu, 9 Mar 2023 10:33:02 +0800
From:   Haifeng Xu <haifeng.xu@...pee.com>
To:     David Hildenbrand <david@...hat.com>,
        Matthew Wilcox <willy@...radead.org>
Cc:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: remove redundant check in handle_mm_fault



On 2023/3/8 17:13, David Hildenbrand wrote:
> On 08.03.23 10:03, Haifeng Xu wrote:
>>
>>
>> On 2023/3/7 10:48, Matthew Wilcox wrote:
>>> On Tue, Mar 07, 2023 at 10:36:55AM +0800, Haifeng Xu wrote:
>>>> On 2023/3/6 21:49, David Hildenbrand wrote:
>>>>> On 06.03.23 03:49, Haifeng Xu wrote:
>>>>>> mem_cgroup_oom_synchronize() has checked whether current memcg_in_oom is
>>>>>> set or not, so remove the check in handle_mm_fault().
>>>>>
>>>>> "mem_cgroup_oom_synchronize() will returned immediately if memcg_in_oom is not set, so remove the check from handle_mm_fault()".
>>>>>
>>>>> However, that now always requires an indirect function call -- do we care about dropping that optimization?
>>>>>
>>>>>
>>>>
>>>> If memcg_in_oom is set, we check it twice: once in handle_mm_fault() and again in mem_cgroup_oom_synchronize(). That seems a bit redundant.
>>>>
>>>> If memcg_in_oom is not set, mem_cgroup_oom_synchronize() returns immediately. Though it's an indirect function call, the time spent is negligible
>>>> compared to the whole mm user fault process, and it won't cause a stack overflow error.
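
For reference, the two checks being discussed look roughly like this. This is a paraphrased sketch of the relevant mm code, not the exact upstream source; task_in_memcg_oom() is an inline read of current->memcg_in_oom, while mem_cgroup_oom_synchronize() is an out-of-line call:

	/* mm/memory.c: tail of handle_mm_fault() (sketch) */
	if (flags & FAULT_FLAG_USER) {
		mem_cgroup_exit_user_fault();
		/* first check: inline read of current->memcg_in_oom */
		if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
			mem_cgroup_oom_synchronize(false);
	}

	/* mm/memcontrol.c (sketch) */
	bool mem_cgroup_oom_synchronize(bool handle)
	{
		struct mem_cgroup *memcg = current->memcg_in_oom;

		/* second check: OOM is global, nothing to do */
		if (!memcg)
			return false;
		...
	}

With the memcg_in_oom check removed from handle_mm_fault(), this early return is what keeps the common (non-OOM) path cheap.
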
>>>
>>> I suggest you measure it.
>>
>> test steps:
>> 1) Run command: ./mmap_anon_test (global allocation, so memcg_in_oom is not set)
>> 2) Divide the total cost time by the page-fault count; run 10 rounds and average the results.
>>
>> The test results show that with or without the indirect function call, the time spent per user fault
>> is almost the same, about 2.3ms.
> 
> I guess most of the benchmark time is consumed by allocating fresh pages in your test (also, why exactly do you use MAP_SHARED?).

Yes, most of the time is consumed by page allocation. MAP_SHARED or MAP_PRIVATE doesn't affect the result, so I just picked one of them at will,
although no other process shares the memory with it.
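
For context, here is a simplified sketch of the benchmark; the actual mmap_anon_test source isn't included in this thread, so the details below (page size, timing method) are assumptions. It maps 1 GiB of anonymous memory, writes one byte per 4 KiB page so that every write takes a page fault, and times the loop:

/* Hypothetical sketch of mmap_anon_test; the real test is not posted in this thread. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define MAP_SIZE (1UL << 30)	/* 1 GiB */
#define PG_SIZE  4096UL		/* assumes 4 KiB pages -> 262144 faults */

int main(void)
{
	struct timespec start, end;
	char *buf;
	unsigned long i, nfaults = MAP_SIZE / PG_SIZE;
	long long ns;

	buf = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < MAP_SIZE; i += PG_SIZE)
		buf[i] = 1;	/* first touch of each page triggers a fault */
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
	     (end.tv_nsec - start.tv_nsec);
	printf("cost time: %lld ms, page faults: %lu, per fault: %lld ns\n",
	       ns / 1000000, nfaults, ns / (long long)nfaults);

	munmap(buf, MAP_SIZE);
	return 0;
}

Running it outside any memory cgroup limit is what "global allocation" means above, so memcg_in_oom is never set on the faulting task.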

> 
> Is 2.3ms the total time for writing to that 1GiB of memory or how did you derive that number? Posting both results would be cleaner (with more digits ;) ).
> 

I'm sorry, I got the measurement unit wrong; it is actually 2.3us per page fault. The details are as follows.

without change
-------------------------------------------------------------------------------------------------
cost time (ms)			number of page faults			time per page fault (ns)
599				262144					2285
590				262144					2251
595				262144					2270
595				262144					2270
594				262144					2266
597				262144					2277
596				262144					2274
598				262144					2281
594				262144					2266
598				262144					2281
-------------------------------------------------------------------------------------------------
									average: 2272

with change
-------------------------------------------------------------------------------------------------
cost time (ms)			number of page faults			time per page fault (ns)
600				262144					2289
597				262144					2277
596				262144					2274
596				262144					2274
597				262144					2277
595				262144					2270
598				262144					2281
588				262144					2243
596				262144					2274
598				262144					2281
-------------------------------------------------------------------------------------------------
									average: 2274
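
(For each row, the last column is the total cost time divided by the fault count, e.g. 599 ms / 262144 faults ≈ 2285 ns per fault; 262144 faults at 4 KiB per page corresponds to the 1 GiB mapping.)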
