Message-ID: <82dbb7de-8211-4bab-8289-eb2573d8ef1d@linux.intel.com>
Date: Mon, 5 Feb 2024 10:32:35 -0500
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Fedor Pchelkin <pchelkin@...ras.ru>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
 Arnaldo Carvalho de Melo <acme@...nel.org>, x86@...nel.org,
 Alexander Antonov <alexander.antonov@...ux.intel.com>,
 linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
 lvc-project@...uxtesting.org
Subject: Re: [PATCH] perf/x86/uncore: avoid null-ptr-deref on error in
 pmu_alloc_topology



On 2024-02-05 10:18 a.m., Fedor Pchelkin wrote:
> Hello,
> 
> On 24/02/05 10:08AM, Liang, Kan wrote:
>>
>>
>> On 2024-02-04 8:48 a.m., Fedor Pchelkin wrote:
>>> If the topology[die] array allocation fails, then the topology[die][idx]
>>> elements must not be accessed on the error path.
>>>
>>> Checking for this on the error path is probably more readable than
>>> decrementing the counter in the allocation loop.
>>>
>>> Found by Linux Verification Center (linuxtesting.org).
>>>
>>> Fixes: 4d13be8ab5d4 ("perf/x86/intel/uncore: Generalize IIO topology support")
>>> Signed-off-by: Fedor Pchelkin <pchelkin@...ras.ru>
>>> ---
>>
>> It seems the code just jumps to the wrong kfree on the error path.
>> Does the below patch work?
>>
>> diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
>> index 8250f0f59c2b..5481fd00d861 100644
>> --- a/arch/x86/events/intel/uncore_snbep.c
>> +++ b/arch/x86/events/intel/uncore_snbep.c
>> @@ -3808,7 +3808,7 @@ static int pmu_alloc_topology(struct intel_uncore_type *type, int topology_type)
>>  	for (die = 0; die < uncore_max_dies(); die++) {
>>  		topology[die] = kcalloc(type->num_boxes, sizeof(**topology), GFP_KERNEL);
>>  		if (!topology[die])
>> -			goto clear;
>> +			goto free_topology;
>>  		for (idx = 0; idx < type->num_boxes; idx++) {
>>  			topology[die][idx].untyped = kcalloc(type->num_boxes,
>>  							     topology_size[topology_type],
>> @@ -3827,6 +3827,7 @@ static int pmu_alloc_topology(struct intel_uncore_type *type, int topology_type)
>>  			kfree(topology[die][idx].untyped);
>>  		kfree(topology[die]);
>>  	}
>> +free_topology:
>>  	kfree(topology);
>>  err:
>>  	return -ENOMEM;
>>
>> Thanks,
>> Kan
>>
> 
> This way, the already-allocated topology[die] elements won't be freed.
>

Ah, right. The patch looks good to me.
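
For context, here is a minimal standalone sketch of the unwind pattern being
discussed. The names and structure are simplified and hypothetical (plain
userspace C); this is not the actual kernel code or the exact patch. The point
is that when topology[die] itself fails to allocate, the cleanup loop has to
skip that NULL row but still free every row allocated on earlier iterations,
along with their per-box buffers:

/*
 * Sketch only: hypothetical names, not the kernel implementation.
 */
#include <stdlib.h>

struct entry { void *untyped; };

static int alloc_topology(struct entry ***out, int num_dies, int num_boxes)
{
	struct entry **topology;
	int die, idx;

	topology = calloc(num_dies, sizeof(*topology));
	if (!topology)
		return -1;

	for (die = 0; die < num_dies; die++) {
		topology[die] = calloc(num_boxes, sizeof(**topology));
		if (!topology[die])
			goto clear;		/* topology[die] is NULL here */
		for (idx = 0; idx < num_boxes; idx++) {
			topology[die][idx].untyped = calloc(1, 64);
			if (!topology[die][idx].untyped)
				goto clear;	/* row exists, buffer missing */
		}
	}

	*out = topology;
	return 0;

clear:
	/*
	 * Unwind from the failing die back to 0.  The NULL check avoids
	 * dereferencing the row whose allocation just failed, while rows
	 * from earlier iterations are still freed (free(NULL) is a no-op
	 * for any untyped buffers that were never allocated).
	 */
	for (; die >= 0; die--) {
		if (!topology[die])
			continue;
		for (idx = 0; idx < num_boxes; idx++)
			free(topology[die][idx].untyped);
		free(topology[die]);
	}
	free(topology);
	return -1;
}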

Reviewed-by: Kan Liang <kan.liang@...ux.intel.com>

Thanks,
Kan
> --
> Fedor
> 
