Message-ID: <9122972b-9a5b-db65-f145-585d6076b2ed@virtuozzo.com>
Date:   Thu, 26 Apr 2018 17:07:39 +0300
From:   Kirill Tkhai <ktkhai@...tuozzo.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     akpm@...ux-foundation.org, peterz@...radead.org, oleg@...hat.com,
        viro@...iv.linux.org.uk, mingo@...nel.org,
        paulmck@...ux.vnet.ibm.com, keescook@...omium.org, riel@...hat.com,
        tglx@...utronix.de, kirill.shutemov@...ux.intel.com,
        marcos.souza.org@...il.com, hoeun.ryu@...il.com,
        pasha.tatashin@...cle.com, gs051095@...il.com,
        ebiederm@...ssion.com, dhowells@...hat.com,
        rppt@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] exit: Make unlikely case in mm_update_next_owner()
 more scalable

On 26.04.2018 16:07, Michal Hocko wrote:
> On Thu 26-04-18 14:00:19, Kirill Tkhai wrote:
>> This function searches for a new mm owner among the children and siblings,
>> and then, in the unlikely case, iterates over all processes in the system.
>> Although the case is unlikely, its probability grows with the number of
>> processes in the system, and so does the time spent on the iteration.
>> I regularly observe mm_update_next_owner() in crash dumps (taken for
>> unrelated reasons) of nodes with many processes (20K+), so it looks
>> like the case is not so unlikely after all.
> 
> Did you manage to find the pattern that forces mm_update_next_owner into
> the slow path? This really shouldn't trigger very often. If we fall back
> to it that easily, then I suspect we would be better off reconsidering
> mm->owner and trying to come up with something more clever. I had a
> patch to remove the owner a few years back. It needed some work to
> finish, but maybe that would be better than trying to make a
> non-scalable thing suck less.

It's not easy to find a pattern with such a big number of processes, especially
since only the final result is visible in the crash dump. I assume it may be
connected with some unexpected signals received by the set of tasks in the
process topology, but I'm not sure. But even if this turns out to be a very
unlikely situation with only a small, non-zero probability, it still has to be
optimized so it does not cause surprises.
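
For reference, the slow path in question has roughly this shape (my paraphrase
of what mm_update_next_owner() does today, not the exact code):

	/*
	 * Paraphrased sketch of the current logic: the children and
	 * siblings of the exiting owner are checked first, and if nothing
	 * is found there, every process in the system gets walked.
	 */
	read_lock(&tasklist_lock);
	list_for_each_entry(c, &p->children, sibling)
		if (c->mm == mm)
			goto assign_new_owner;
	list_for_each_entry(c, &p->real_parent->children, sibling)
		if (c->mm == mm)
			goto assign_new_owner;
	/* The "unlikely" part: cost grows with the number of tasks */
	for_each_process(g)
		for_each_thread(g, c)
			if (c->mm == mm)
				goto assign_new_owner;
	read_unlock(&tasklist_lock);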

We can rework this simply by adding to the mm a list of the tasks using it,
reusing the task_lock() of the mm owner to protect additions. Something like:

static void assign_task_mm(struct mm_struct *mm, struct task_struct *task)
{
	struct task_struct *owner;
	int again;
again:
	again = 0;
	rcu_read_lock();
	owner = mm->owner;
	if (!owner)
		goto unlock;
	task_lock(owner);
	/* Recheck under the owner's task_lock that it is still the owner */
	if (mm->owner == owner)
		llist_add(&task->mm_list, &mm->task_list);
	else
		again = 1;
	task_unlock(owner);
unlock:
	rcu_read_unlock();
	if (again)
		goto again;
}
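
With such a list in place, the owner update on exit could stop scanning the
whole system and just walk the tasks actually sharing the mm. Just a sketch:
mm->task_list and task->mm_list are the new fields from above, the function
name is made up, and locking and list removal on exit are left out:

/*
 * Sketch only: pick a new owner by walking the hypothetical per-mm
 * task list instead of all processes in the system.
 */
static void mm_pick_next_owner(struct mm_struct *mm)
{
	struct task_struct *t;

	llist_for_each_entry(t, mm->task_list.first, mm_list) {
		if (t != current && t->mm == mm) {
			mm->owner = t;
			return;
		}
	}
	mm->owner = NULL;
}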

Kirill
