Message-ID: <fda62d41-ad87-4e33-9662-11d397a26f7c@suse.com>
Date: Fri, 27 Jun 2025 14:14:40 +0300
From: Nikolay Borisov <nik.borisov@...e.com>
To: Yazen Ghannam <yazen.ghannam@....com>, x86@...nel.org,
 Tony Luck <tony.luck@...el.com>, "Rafael J. Wysocki" <rafael@...nel.org>,
 Len Brown <lenb@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-edac@...r.kernel.org,
 Smita.KoralahalliChannabasappa@....com, Qiuxu Zhuo <qiuxu.zhuo@...el.com>,
 linux-acpi@...r.kernel.org
Subject: Re: [PATCH v4 08/22] x86/mce/amd: Put list_head in threshold_bank



On 6/25/25 19:52, Nikolay Borisov wrote:
> 
> 
> On 6/24/25 17:16, Yazen Ghannam wrote:
>> The threshold_bank structure is a container for one or more
>> threshold_block structures. Currently, the container has a single
>> pointer to the 'first' threshold_block structure which then has a linked
>> list of the remaining threshold_block structures.
>>
>> This results in an extra level of indirection where the 'first' block is
>> checked before iterating over the remaining blocks.
>>
>> Remove the indirection by including the head of the block list in the
>> threshold_bank structure which already acts as a container for all the
>> bank's thresholding blocks.
>>
>> Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@...el.com>
>> Tested-by: Tony Luck <tony.luck@...el.com>
>> Reviewed-by: Tony Luck <tony.luck@...el.com>
>> Signed-off-by: Yazen Ghannam <yazen.ghannam@....com>
>> ---
>>
>> Notes:
>>      Link: https://lore.kernel.org/r/20250415-wip-mca-updates-v3-4-8ffd9eb4aa56@....com
>>      v3->v4:
>>      * No change.
>>      v2->v3:
>>      * Added tags from Qiuxu and Tony.
>>      v1->v2:
>>      * New in v2.
>>
>>   arch/x86/kernel/cpu/mce/amd.c | 43 ++++++++++++-------------------------------
>>   1 file changed, 12 insertions(+), 31 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
>> index 0ffbee329a8c..5d351ec863cd 100644
>> --- a/arch/x86/kernel/cpu/mce/amd.c
>> +++ b/arch/x86/kernel/cpu/mce/amd.c
>> @@ -241,7 +241,8 @@ struct threshold_block {
>>   struct threshold_bank {
>>       struct kobject        *kobj;
>> -    struct threshold_block    *blocks;
>> +    /* List of threshold blocks within this MCA bank. */
>> +    struct list_head    miscj;
>>   };
>>   static DEFINE_PER_CPU(struct threshold_bank **, threshold_banks);
>> @@ -900,9 +901,9 @@ static void log_and_reset_block(struct threshold_block *block)
>>    */
>>   static void amd_threshold_interrupt(void)
>>   {
>> -    struct threshold_block *first_block = NULL, *block = NULL, *tmp = NULL;
>> -    struct threshold_bank **bp = this_cpu_read(threshold_banks);
>> +    struct threshold_bank **bp = this_cpu_read(threshold_banks), *thr_bank;
>>       unsigned int bank, cpu = smp_processor_id();
>> +    struct threshold_block *block, *tmp;
>>       /*
>>        * Validate that the threshold bank has been initialized already. The
>> @@ -916,16 +917,11 @@ static void amd_threshold_interrupt(void)
>>           if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
>>               continue;
> 
> <slight off topic>
> 
> nit: I wonder whether, instead of using per_cpu and manual bit testing, a
> direct call to x86_this_cpu_test_bit wouldn't be a better solution. The
> assembly looks like:
> 
> [OLD]
> 
> xorl    %r14d, %r14d    # ivtmp.245
> movq    %rax, 8(%rsp)   # cpu, %sfp
> # arch/x86/kernel/cpu/mce/amd.c:917:        if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
> movq    $bank_map, %rax #, __ptr
> movq    %rax, (%rsp)    # __ptr, %sfp
> .L236:
> movq    8(%rsp), %rax   # %sfp, cpu
> # arch/x86/kernel/cpu/mce/amd.c:917:        if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
> movq    (%rsp), %rsi    # %sfp, __ptr
> # arch/x86/kernel/cpu/mce/amd.c:917:        if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
> movq    __per_cpu_offset(,%rax,8), %rax # __per_cpu_offset[cpu_23], __per_cpu_offset[cpu_23]
> # arch/x86/kernel/cpu/mce/amd.c:917:        if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
> movq    (%rax,%rsi), %rax
> # arch/x86/kernel/cpu/mce/amd.c:917:        if (!(per_cpu(bank_map, cpu) & BIT_ULL(bank)))
> btq %r14, %rax
> 
> [NEW]
> 
> xorl    %r15d, %r15d    # ivtmp.246
> .L236:
> # 917 "arch/x86/kernel/cpu/mce/amd.c" 1
> btl %r15d, %gs:bank_map(%rip)   # ivtmp.246, *_9
> 
> 
> That way you end up with a single btl (but I guess a version that uses
> btq should be added as well) inside the loop, rather than a bunch of
> instructions moving data around for per_cpu.
> 
> Alternatively, since this is running in interrupt context, can't you use
> this_cpu_read(bank_map) directly and eliminate the smp_processor_id()
> invocation?


Actually, the total number of banks is at most 128 as per the layout of
the MCG_CAP register, so using btl is fine. Also, I'm not sure why the
original code uses BIT_ULL rather than just BIT, since we can't have a
64-bit value.


> 
> </slight off topic>
> 
> 
> <snip>
> 

