Date:   Thu, 7 Sep 2017 08:35:49 -0500
From:   Nathan Fontenot <nfont@...ux.vnet.ibm.com>
To:     Michael Bringmann <mwb@...ux.vnet.ibm.com>,
        linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Cc:     Michael Ellerman <mpe@...erman.id.au>,
        John Allen <jallen@...ux.vnet.ibm.com>
Subject: Re: [PATCH V13 4/4] powerpc/vphn: Fix numa update end-loop bug

On 09/06/2017 05:03 PM, Michael Bringmann wrote:
> 
> 
> On 09/06/2017 09:45 AM, Nathan Fontenot wrote:
>> On 09/01/2017 10:48 AM, Michael Bringmann wrote:
>>> powerpc/vphn: On Power systems with shared configurations of CPUs
>>> and memory, there are some issues with the association of additional
>>> CPUs and memory to nodes when hot-adding resources.  This patch
>>> fixes an end-of-updates processing problem observed occasionally
>>> in numa_update_cpu_topology().
>>>
>>> Signed-off-by: Michael Bringmann <mwb@...ux.vnet.ibm.com>
>>> ---
>>>  arch/powerpc/mm/numa.c |    7 +++++++
>>>  1 file changed, 7 insertions(+)
>>>
>>> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
>>> index 3a5b334..fccf23f 100644
>>> --- a/arch/powerpc/mm/numa.c
>>> +++ b/arch/powerpc/mm/numa.c
>>> @@ -1410,6 +1410,13 @@ int numa_update_cpu_topology(bool cpus_locked)
>>>  		cpu = cpu_last_thread_sibling(cpu);
>>>  	}
>>>
>>> +	/*
>>> +	 * Prevent processing of 'updates' from overflowing the array
>>> +	 * in cases where the last filled entry set a 'next' pointer.
>>> +	 */
>>> +	if (i)
>>> +		updates[i-1].next = NULL;
>>> +
>>
>> This really looks like the bug is in the code above this, where we
>> fill in the updates array for each of the sibling cpus. The code
>> there assumes that if the current update entry is not the last one,
>> there will be more updates, and it blindly sets the next pointer.
>>
>> Perhaps we should correct the logic in that code that sets the next
>> pointers: set the ud pointer to NULL before the outer for_each_cpu()
>> loop, then in the inner for_each_cpu(sibling,...) loop update the
>> ud->next pointer as the first operation.
>>
>> 		for_each_cpu(sibling, cpu_sibling_mask(cpu)) {
>> 			if (ud)
>> 				ud->next = &updates[i];
>> 			...
>> 		}
>>
>> Obviously untested, but I think this would prevent erroneously setting
>> the next pointer in the last update entry that is filled out.
> 
> The above fragment looks to skip initialization of the 'next' pointer
> in the first element of the 'updates' array.  That would abort subsequent
> evaluation of the array too soon, I believe.  I would like to take
> another look to see whether the current check 'if (i < weight)
> ud->next = &updates[i];' is having problems due to i being 0-relative
> and weight being 1-relative.

Another thing to keep in mind is that cpus can be skipped by checks earlier
in the loop. There is no guarantee that we will add 'weight' elements to
the ud list.
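
To make the intent concrete, here is a small, self-contained user-space
sketch of that linking scheme (purely illustrative and untested against the
kernel; the struct and field names only echo the fragments quoted above,
this is not the real numa_update_cpu_topology() code). Because the previous
entry's next pointer is only set once a new entry is actually created, the
last filled entry always ends with next == NULL, even when some cpus are
skipped:

#include <stdio.h>
#include <stdlib.h>

struct update {			/* stand-in for the kernel's update entries */
	struct update *next;
	int cpu;
	int new_nid;
};

int main(void)
{
	int weight = 8;		/* upper bound on entries, like cpumask_weight() */
	struct update *updates = calloc(weight, sizeof(*updates));
	struct update *ud = NULL;	/* last entry actually filled in */
	int i = 0, cpu;

	if (!updates)
		return 1;

	for (cpu = 0; cpu < weight; cpu++) {
		if (cpu % 3 == 0)	/* pretend earlier checks skip this cpu */
			continue;

		/* Link the previous entry only now that a new one exists. */
		if (ud)
			ud->next = &updates[i];
		ud = &updates[i++];
		ud->cpu = cpu;
		ud->new_nid = cpu / 2;
	}

	/*
	 * Traversal mirrors the pr_debug loop in the patch context; it
	 * stops at the last filled entry instead of walking into the
	 * unfilled tail of the array.
	 */
	if (i) {
		for (ud = &updates[0]; ud; ud = ud->next)
			printf("cpu %d -> node %d\n", ud->cpu, ud->new_nid);
	}

	free(updates);
	return 0;
}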

-Nathan
 
> 
>>   
>> -Nathan
> 
> Michael
> 
>>
>>>  	pr_debug("Topology update for the following CPUs:\n");
>>>  	if (cpumask_weight(&updated_cpus)) {
>>>  		for (ud = &updates[0]; ud; ud = ud->next) {
>>>
>>
> 
