Message-ID: <d92912c7-511e-2ab5-39a6-38af3209fcaf@linux.alibaba.com>
Date:   Wed, 9 Jan 2019 12:36:11 -0800
From:   Yang Shi <yang.shi@...ux.alibaba.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     mhocko@...e.com, shakeelb@...gle.com, akpm@...ux-foundation.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC v3 PATCH 0/5] mm: memcontrol: do memory reclaim when
 offlining



On 1/9/19 11:32 AM, Johannes Weiner wrote:
> On Thu, Jan 10, 2019 at 03:14:40AM +0800, Yang Shi wrote:
>> We have some use cases that create and remove memcgs very frequently,
>> and the tasks in those memcgs mostly access files that are unlikely to
>> be accessed by anyone else.  So we prefer to force_empty the memcg
>> before rmdir'ing it, reclaiming its page cache so that it doesn't
>> accumulate and cause unnecessary memory pressure, since that pressure
>> may trigger direct reclaim and hurt latency-sensitive applications.
> We have kswapd for exactly this purpose. Can you lay out more details
> on why that is not good enough, especially in conjunction with tuning
> the watermark_scale_factor etc.?

watermark_scale_factor does help for some workloads in general. However, 
in some of our workloads, memcgs are created and allocate memory faster 
than kswapd can keep up with. And a tuning that works for one kind of 
machine or workload may not work for others, yet we may run different 
kinds of workloads (for example, latency-sensitive and batch jobs) on 
the same machine, so it is hard for us to guarantee that all the 
workloads behave well together by relying on kswapd and 
watermark_scale_factor alone.
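
For reference, this is roughly how watermark_scale_factor translates into
the distance kswapd tries to keep between the watermarks, modeled on
__setup_per_zone_wmarks() in mm/page_alloc.c. The helper name and the
numbers in main() below are made up for illustration; it is a simplified
sketch, not the kernel code itself:

#include <stdio.h>

/*
 * Illustrative sketch: the gap kswapd maintains above the min watermark,
 * following the logic of __setup_per_zone_wmarks().
 */
static unsigned long kswapd_gap_pages(unsigned long managed_pages,
                                      unsigned long min_wmark,
                                      unsigned long scale_factor)
{
        /* scale_factor is in units of 0.01% of the zone's managed pages */
        unsigned long gap = managed_pages * scale_factor / 10000;

        /* keep at least min_wmark/4 of headroom on small zones */
        if (gap < min_wmark / 4)
                gap = min_wmark / 4;

        /* low = min + gap, high = min + 2 * gap; kswapd works in between */
        return gap;
}

int main(void)
{
        /* e.g. a zone with 4M 4k pages (16GB), min watermark 16384 pages */
        unsigned long managed = 4UL << 20, min_wmark = 16384;

        printf("gap at scale 10:  %lu pages\n",
               kswapd_gap_pages(managed, min_wmark, 10));
        printf("gap at scale 200: %lu pages\n",
               kswapd_gap_pages(managed, min_wmark, 200));
        return 0;
}

Raising the factor wakes kswapd earlier and keeps it running longer, but
it applies zone-wide, so it can't distinguish the memcgs whose cache we
know is cold from everything else on the machine.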

Also, we know the page cache access pattern is one-off for some memcgs, 
and that page cache is unlikely to be shared by anyone else, so why not 
just drop it when the memcg is offlined? Reclaiming those cold page 
caches earlier also improves the efficiency of memcg creation in the 
long run.
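
For context, what we do today from userspace is roughly the following
(cgroup v1; the group path and the small program are only illustrative,
memory.force_empty and rmdir are the real interfaces):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Sketch of the current workaround: ask the kernel to reclaim the
 * memcg's charges via memory.force_empty, then remove the group.
 */
int main(void)
{
        const char *grp = "/sys/fs/cgroup/memory/example-job";
        char path[256];
        int fd;

        snprintf(path, sizeof(path), "%s/memory.force_empty", grp);

        fd = open(path, O_WRONLY);
        if (fd < 0) {
                perror("open memory.force_empty");
                return 1;
        }
        /* writing anything triggers a full reclaim attempt for this memcg */
        if (write(fd, "1", 1) < 0)
                perror("write memory.force_empty");
        close(fd);

        /* now the (hopefully empty) group can be removed */
        if (rmdir(grp) < 0)
                perror("rmdir");
        return 0;
}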

>
> We've been pretty adamant that users shouldn't use drop_caches for
> performance for example, and that the need to do this usually is
> indicative of a problem or suboptimal tuning in the VM subsystem.
>
> How is this different?

IMHO, that depends on the use cases and workloads. As I mentioned above, 
if we know the page cache in some memcgs is referenced only once and is 
unlikely to be shared, why keep it around just to increase memory 
pressure?
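
To be concrete, the idea is to do the same targeted reclaim that
memory.force_empty already does, just automatically at offline time. A
rough sketch modeled on the existing force_empty loop follows; it is
illustrative only, not the actual patches in this series, and the
function name is made up. It assumes it lives in mm/memcontrol.c, where
try_to_free_mem_cgroup_pages() and page_counter_read() are visible:

/*
 * Illustrative sketch: reclaim the memcg's remaining charges when the
 * group goes offline, giving up after a few fruitless passes, similar
 * to what mem_cgroup_force_empty() does today.
 */
static void memcg_reclaim_on_offline(struct mem_cgroup *memcg)
{
        int nr_retries = 5;     /* roughly MEM_CGROUP_RECLAIM_RETRIES */

        while (nr_retries && page_counter_read(&memcg->memory)) {
                /* target everything; allow swap so anon doesn't block us */
                if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, true))
                        nr_retries--;
        }
}

Unlike drop_caches, this only touches the pages charged to the memcg
being removed, so unrelated workloads keep their cache.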

Thanks,
Yang

