lists.openwall.net - Open Source and information security mailing list archives
Date: Wed, 23 Aug 2017 11:00:56 +0300
From: Kirill Tkhai <ktkhai@...tuozzo.com>
To: Vladimir Davydov <vdavydov.dev@...il.com>
Cc: apolyakov@...et.ru, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	aryabinin@...tuozzo.com, akpm@...ux-foundation.org
Subject: Re: [PATCH 3/3] mm: Count list_lru_one::nr_items lockless

On 22.08.2017 22:47, Vladimir Davydov wrote:
> On Tue, Aug 22, 2017 at 03:29:35PM +0300, Kirill Tkhai wrote:
>> While reclaiming slab of a memcg, shrink_slab iterates
>> over all registered shrinkers in the system and tries to count
>> and consume objects related to the cgroup. Under memory
>> pressure, this behaves badly: I observe high system time and
>> time spent in list_lru_count_one() for many processes on a RHEL7
>> kernel (collected via $perf record --call-graph fp -j k -a):
>>
>> 0,50%  nixstatsagent  [kernel.vmlinux]  [k] _raw_spin_lock              [k] _raw_spin_lock
>> 0,26%  nixstatsagent  [kernel.vmlinux]  [k] shrink_slab                 [k] shrink_slab
>> 0,23%  nixstatsagent  [kernel.vmlinux]  [k] super_cache_count           [k] super_cache_count
>> 0,15%  nixstatsagent  [kernel.vmlinux]  [k] __list_lru_count_one.isra.2 [k] _raw_spin_lock
>> 0,15%  nixstatsagent  [kernel.vmlinux]  [k] list_lru_count_one          [k] __list_lru_count_one.isra.2
>>
>> 0,94%  mysqld         [kernel.vmlinux]  [k] _raw_spin_lock              [k] _raw_spin_lock
>> 0,57%  mysqld         [kernel.vmlinux]  [k] shrink_slab                 [k] shrink_slab
>> 0,51%  mysqld         [kernel.vmlinux]  [k] super_cache_count           [k] super_cache_count
>> 0,32%  mysqld         [kernel.vmlinux]  [k] __list_lru_count_one.isra.2 [k] _raw_spin_lock
>> 0,32%  mysqld         [kernel.vmlinux]  [k] list_lru_count_one          [k] __list_lru_count_one.isra.2
>>
>> 0,73%  sshd           [kernel.vmlinux]  [k] _raw_spin_lock              [k] _raw_spin_lock
>> 0,35%  sshd           [kernel.vmlinux]  [k] shrink_slab                 [k] shrink_slab
>> 0,32%  sshd           [kernel.vmlinux]  [k] super_cache_count           [k] super_cache_count
>> 0,21%  sshd           [kernel.vmlinux]  [k] __list_lru_count_one.isra.2 [k] _raw_spin_lock
>> 0,21%  sshd           [kernel.vmlinux]  [k] list_lru_count_one          [k] __list_lru_count_one.isra.2
>
> It would be nice to see how this is improved by this patch.
> Can you try to record the traces on the vanilla kernel with
> and without this patch?

Sadly, this concerns a production node, and it's impossible to use a
vanilla kernel there.

>>
>> This patch aims to make super_cache_count() more effective. It
>> makes __list_lru_count_one() count nr_items locklessly to minimize
>> the overhead introduced by the locking operation, and to make
>> parallel reclaims more scalable.
>>
>> The lock won't be taken in shrinker::count_objects();
>> it will be taken only for the real shrink by the thread
>> that performs it.
>>
>
>> https://jira.sw.ru/browse/PSBM-69296
>
> Not relevant.
>