Date:   Wed, 09 Aug 2023 01:20:09 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Sohil Mehta <sohil.mehta@...el.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
        Tom Lendacky <thomas.lendacky@....com>,
        Andrew Cooper <andrew.cooper3@...rix.com>,
        Arjan van de Ven <arjan@...ux.intel.com>,
        Huang Rui <ray.huang@....com>, Juergen Gross <jgross@...e.com>,
        Dimitri Sivanich <dimitri.sivanich@....com>,
        Michael Kelley <mikelley@...rosoft.com>,
        K Prateek Nayak <kprateek.nayak@....com>,
        Kan Liang <kan.liang@...ux.intel.com>,
        Zhang Rui <rui.zhang@...el.com>,
        "Paul E. McKenney" <paulmck@...nel.org>,
        Feng Tang <feng.tang@...el.com>,
        Andy Shevchenko <andy@...radead.org>
Subject: Re: [patch 00/53] x86/topology: The final installment

On Tue, Aug 08 2023 at 15:58, Sohil Mehta wrote:

> On 8/8/2023 3:10 PM, Peter Zijlstra wrote:
>> It works better if you move this hunk into acpi_parse_x2apic() instead.
>> Then I can indeed confirm it works as advertised -- I also have one of
>> those afflicted IVB-EP machines.
>> 
>
> I had a disappointed email typed up

Rightfully so, though, as I'm clearly too tired and too grumpy to
think straight.
 
> and was about to send it when I saw this.

:)

> The inconsistency and warning on my system resolve with this. I lost
> 120 imaginary hotpluggable CPUs, but other than that everything seems
> fine :)

Sorry about that loss. :)
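
For readers following the thread: the 120 phantom CPUs presumably come from
MADT x2APIC entries that are neither enabled nor online-capable being
registered as possible, hotpluggable processors when acpi_parse_x2apic()
accepts them. Below is a hedged, standalone sketch of the flag check
involved, modelled on the kernel's acpi_is_processor_usable() helper; it is
an illustration compiled as a userspace program, not the actual hunk under
discussion.

  /*
   * Standalone illustration only -- not the actual patch hunk. The two flag
   * bits correspond to the ACPI MADT local-APIC flags, and the helper is
   * modelled on the kernel's acpi_is_processor_usable() logic.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define ACPI_MADT_ENABLED         (1u << 0)  /* processor is enabled */
  #define ACPI_MADT_ONLINE_CAPABLE  (1u << 1)  /* can be onlined later */

  static bool processor_usable(unsigned int lapic_flags,
                               bool support_online_capable)
  {
          if (lapic_flags & ACPI_MADT_ENABLED)
                  return true;
          /*
           * Disabled entry: on firmware that does not advertise
           * online-capable support, keep treating it as usable;
           * otherwise require the online-capable flag.
           */
          if (!support_online_capable ||
              (lapic_flags & ACPI_MADT_ONLINE_CAPABLE))
                  return true;
          return false;
  }

  int main(void)
  {
          /* A disabled, not-online-capable x2APIC entry would be skipped
           * instead of being counted as a phantom hotplug CPU. */
          printf("disabled, not online-capable -> usable: %d\n",
                 processor_usable(0, true));
          printf("enabled                      -> usable: %d\n",
                 processor_usable(ACPI_MADT_ENABLED, true));
          return 0;
  }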

> CPU topo: Max. logical packages:   2
> CPU topo: Max. logical dies:       2
> CPU topo: Max. dies per package:   1
> CPU topo: Max. threads per core:   2
> CPU topo: Num. cores per package:    10
> CPU topo: Num. threads per package:  20
> CPU topo: Allowing 40 present CPUs plus 0 hotplug CPUs
> CPU topo: Thread    :    40
> CPU topo: Core      :    20
> CPU topo: Module    :     2
> CPU topo: Tile      :     2
> CPU topo: Die       :     2
> CPU topo: Package   :     2
>
> domain: Thread     shift: 1 dom_size:     2 max_threads:     2
> domain: Core       shift: 5 dom_size:    16 max_threads:    32
> domain: Module     shift: 5 dom_size:     1 max_threads:    32
> domain: Tile       shift: 5 dom_size:     1 max_threads:    32
> domain: Die        shift: 5 dom_size:     1 max_threads:    32
> domain: Package    shift: 5 dom_size:     1 max_threads:    32
>
> /sys/kernel/debug/x86/topo/cpus/39
> online:              1
> initial_apicid:      39
> apicid:              39
> pkg_id:              1
> die_id:              1
> cu_id:               255
> core_id:             12
> logical_pkg_id:      1
> logical_die_id:      1
> llc_id:              32
> l2c_id:              56
> amd_node_id:         0
> amd_nodes_per_pkg:   0
> num_threads:         20
> num_cores:           10
> max_dies_per_pkg:    1
> max_threads_per_core:2

That makes much more sense now.
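
As a quick cross-check on the dump (a minimal userspace sketch, not the
kernel code, and assuming the apicid field is printed in hex, i.e.
0x39 == 57): with the Thread domain at shift 1 and every wider domain
collapsed onto shift 5, the per-CPU IDs fall out of plain shift/mask
arithmetic.

  /* Hedged sketch: derive the per-CPU IDs shown above from the APIC ID and
   * the printed domain shifts. Assumes apicid is hex (0x39 == 57). */
  #include <stdio.h>

  int main(void)
  {
          unsigned int apicid       = 0x39; /* CPU 39 in the dump above  */
          unsigned int thread_shift = 1;    /* domain: Thread  shift: 1  */
          unsigned int pkg_shift    = 5;    /* domain: Package shift: 5  */

          unsigned int pkg_id  = apicid >> pkg_shift;                   /* 1  */
          unsigned int core_id = (apicid >> thread_shift) &
                                 ((1u << (pkg_shift - thread_shift)) - 1); /* 12 */
          unsigned int llc_id  = apicid & ~((1u << pkg_shift) - 1);     /* 32 */
          unsigned int l2c_id  = apicid & ~((1u << thread_shift) - 1);  /* 56 */

          printf("pkg_id=%u core_id=%u llc_id=%u l2c_id=%u\n",
                 pkg_id, core_id, llc_id, l2c_id);
          return 0;
  }

All four computed values match the debugfs entry for CPU 39, which is
consistent with Module/Tile/Die/Package sharing the same 5-bit boundary on
this machine.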

Zhang, can you please follow up on:

  https://lore.kernel.org/all/613df280116378115585d0c483f7e186cffaeb58.camel@intel.com/

otherwise I'll just polish up PeterZ's variant of it tomorrow.

Thanks,

        tglx
