Date:   Thu, 14 Nov 2019 15:43:17 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Shaokun Zhang <zhangshaokun@...ilicon.com>
Cc:     linux-kernel@...r.kernel.org, yuqi jin <jinyuqi@...wei.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Paul Burton <paul.burton@...s.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Anshuman Khandual <anshuman.khandual@....com>,
        netdev@...r.kernel.org
Subject: Re: [PATCH v3] lib: optimize cpumask_local_spread()

On Wed 13-11-19 10:46:05, Shaokun Zhang wrote:
[...]
> >> available: 4 nodes (0-3)
> >> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
> >> node 0 size: 63379 MB
> >> node 0 free: 61899 MB
> >> node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
> >> node 1 size: 64509 MB
> >> node 1 free: 63942 MB
> >> node 2 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
> >> node 2 size: 64509 MB
> >> node 2 free: 63056 MB
> >> node 3 cpus: 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
> >> node 3 size: 63997 MB
> >> node 3 free: 63420 MB
> >> node distances:
> >> node   0   1   2   3
> >>   0:  10  16  32  33
> >>   1:  16  10  25  32
> >>   2:  32  25  10  16
> >>   3:  33  32  16  10
[...]
> before patch
> Euler:/sys/bus/pci/devices/0000:7d:00.2 # cat numa_node
> 2
> Euler:/sys/bus/pci # cat /proc/irq/345/smp_affinity_list
> 48

node 2

> Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
> 0

node 0

> Euler:/sys/bus/pci # cat /proc/irq/393/smp_affinity_list
> 24

node 1

> Euler:/sys/bus/pci #
> 
> after patch
> Euler:/sys/bus/pci/devices/0000:7d:00.2 # cat numa_node
> 2
> Euler:/sys/bus/pci # cat /proc/irq/345/smp_affinity_list
> 48

node 2

> Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
> 72

node 3

> Euler:/sys/bus/pci # cat /proc/irq/393/smp_affinity_list
> 24

node 1

So, a few more questions. The only difference seems to be IRQ 369
moving from node 0 to node 3, which makes some sense for a device
with affinity to node 2, because node 3 is closer. So far so good.
I still have a large gap in the whole picture, though: why are the
other IRQs not using any of the remaining CPUs on node 2?
Could you explain that, please?
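
For my own understanding, here is a minimal userspace model of the
selection order I believe the patch implements (my reading, not the
kernel code): exhaust the local node's CPUs first, then spill over to
remote nodes in order of increasing NUMA distance rather than in
numeric node order. Topology and distances are taken from your
numactl output above.

#include <stdio.h>

#define NODES 4
#define CPUS_PER_NODE 24

/* node distance table from the numactl output above */
static const int distance[NODES][NODES] = {
	{ 10, 16, 32, 33 },
	{ 16, 10, 25, 32 },
	{ 32, 25, 10, 16 },
	{ 33, 32, 16, 10 },
};

/* CPUs are numbered node*24 .. node*24+23 on this machine */
static int spread_cpu(int dev_node, int i)
{
	int order[NODES], used = 0;

	/*
	 * Visit nodes in order of increasing distance from dev_node;
	 * dev_node itself comes first since its self-distance (10)
	 * is the minimum.
	 */
	for (int n = 0; n < NODES; n++)
		order[n] = n;
	for (int a = 0; a < NODES; a++)
		for (int b = a + 1; b < NODES; b++)
			if (distance[dev_node][order[b]] <
			    distance[dev_node][order[a]]) {
				int t = order[a];
				order[a] = order[b];
				order[b] = t;
			}

	/* hand out CPUs node by node in that order */
	for (int n = 0; n < NODES; n++) {
		if (i < used + CPUS_PER_NODE)
			return order[n] * CPUS_PER_NODE + (i - used);
		used += CPUS_PER_NODE;
	}
	return -1;
}

int main(void)
{
	/* device on node 2: where do the 1st, 25th and 49th IRQ land? */
	printf("%d %d %d\n", spread_cpu(2, 0), spread_cpu(2, 24),
	       spread_cpu(2, 48));
	return 0;
}

This prints "48 72 24", which matches the after-patch numbers above,
so I think I understand the spill-over order; it is the per-IRQ
placement within node 2 that I am still missing.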

Btw., all of this should be in the changelog.
-- 
Michal Hocko
SUSE Labs
