Message-ID: <566F8781.80108@jp.fujitsu.com>
Date:	Tue, 15 Dec 2015 12:22:41 +0900
From:	Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Michal Hocko <mhocko@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/7] mm: memcontrol: charge swap to cgroup2

On 2015/12/15 4:42, Vladimir Davydov wrote:
> On Mon, Dec 14, 2015 at 04:30:37PM +0100, Michal Hocko wrote:
>> On Thu 10-12-15 14:39:14, Vladimir Davydov wrote:
>>> In the legacy hierarchy we charge memsw, which is dubious, because:
>>>
>>>   - memsw.limit must be >= memory.limit, so it is impossible to limit
>>>     swap usage less than memory usage. Taking into account the fact that
>>>     the primary limiting mechanism in the unified hierarchy is
>>>     memory.high while memory.limit is either left unset or set to a very
>>>     large value, moving memsw.limit knob to the unified hierarchy would
>>>     effectively make it impossible to limit swap usage according to the
>>>     user preference.
>>>
>>>   - memsw.usage != memory.usage + swap.usage, because a page occupying
>>>     both swap entry and a swap cache page is charged only once to memsw
>>>     counter. As a result, it is possible to effectively eat up to
>>>     memory.limit of memory pages *and* memsw.limit of swap entries, which
>>>     looks unexpected.
>>>
>>> That said, we should provide a different swap limiting mechanism for
>>> cgroup2.
>>> This patch adds mem_cgroup->swap counter, which charges the actual
>>> number of swap entries used by a cgroup. It is only charged in the
>>> unified hierarchy, while the legacy hierarchy memsw logic is left
>>> intact.
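
(For illustration only -- a rough user-space sketch of the difference, not part of the patch. The v2 knob names memory.swap.max and memory.high are assumed from this series, and the mount paths /sys/fs/cgroup/... depend on the local setup:)

#!/usr/bin/env python3
# Rough sketch, assuming: cgroup v1 memory controller mounted at
# /sys/fs/cgroup/memory (with swapaccount=1), cgroup v2 mounted at
# /sys/fs/cgroup/unified with the memory controller enabled, and the
# proposed knob named memory.swap.max.
import os

V1 = "/sys/fs/cgroup/memory/demo"    # legacy hierarchy group (assumed path)
V2 = "/sys/fs/cgroup/unified/demo"   # unified hierarchy group (assumed path)
MB = 1024 * 1024

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

os.makedirs(V1, exist_ok=True)       # mkdir in cgroupfs creates the group
os.makedirs(V2, exist_ok=True)

# Legacy hierarchy: swap can only be limited together with memory, and
# memsw.limit must stay >= memory.limit, so swap alone cannot be capped
# below the memory limit.
write(f"{V1}/memory.limit_in_bytes", 512 * MB)
write(f"{V1}/memory.memsw.limit_in_bytes", 768 * MB)   # memory + swap

# Unified hierarchy (as proposed): swap entries get their own counter,
# so swap can be limited independently of memory, even to a small value.
write(f"{V2}/memory.high", 512 * MB)
write(f"{V2}/memory.swap.max", 128 * MB)
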
>>
>> I agree that the previous semantic was awkward. The problem I can see
>> with this approach is that once the swap limit is reached the anon
>> memory pressure might spill over to other and unrelated memcgs during
>> the global memory pressure. I guess this is what Kame referred to as
>> anon would become mlocked basically. This would be even more of an issue
>> with resource delegation to sub-hierarchies because nobody will prevent
>> setting the swap amount to a small value and use that as an anon memory
>> protection.
>
> AFAICS such anon memory protection has a side-effect: real-life
> workloads need page cache to run smoothly (at least for mapping
> executables). Disabling swapping would switch pressure to page caches,
> resulting in performance degradation. So, I don't think per memcg swap
> limit can be abused to boost your workload on an overcommitted system.
>
> If you mean malicious users, well, they already have plenty ways to eat
> all available memory up to the hard limit by creating unreclaimable
> kernel objects.
>
"protect anon" user's malicious degree is far lower than such cracker like users.

> Anyway, if you don't trust a container you'd better set the hard memory
> limit so that it can't hurt others no matter what it runs and how it
> tweaks its sub-tree knobs.
>

Limiting swap makes it easy to trigger the OOM killer even while swap is still available, through a simple misconfiguration. Could you add a "swap excess" sysctl switch that allows global memory reclaim to ignore the swap limit?
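
For example (purely illustrative: the memory.swap.* knob names are taken from this series, the cgroup path is an assumption, and the "swap excess" sysctl does not exist), a group that has hit its own swap limit while the host still has free swap can be spotted like this:

#!/usr/bin/env python3
# Illustration of the failure mode above: a group at its own swap limit
# while the host still has free swap, so its anon pages can no longer be
# reclaimed via swap even under global memory pressure.
CG = "/sys/fs/cgroup/unified/demo"   # assumed cgroup2 path

def read(path):
    with open(path) as f:
        return f.read().strip()

def meminfo(field):
    # /proc/meminfo lines look like "SwapFree:  12345678 kB"
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1]) * 1024
    raise KeyError(field)

swap_cur = int(read(f"{CG}/memory.swap.current"))
swap_max = read(f"{CG}/memory.swap.max")        # "max" means unlimited
host_swap_free = meminfo("SwapFree")

if swap_max != "max" and swap_cur >= int(swap_max) and host_swap_free > 0:
    print("group is at its swap limit: its anon pages are effectively "
          "unswappable although the host still has free swap")
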

Regards,
-Kame
