Message-ID: <9af13fea-95a6-30cb-2c0e-770aa649a549@hisilicon.com>
Date:   Fri, 15 Nov 2019 17:09:13 +0800
From:   Shaokun Zhang <zhangshaokun@...ilicon.com>
To:     Michal Hocko <mhocko@...nel.org>
CC:     <linux-kernel@...r.kernel.org>, yuqi jin <jinyuqi@...wei.com>,
        "Andrew Morton" <akpm@...ux-foundation.org>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        "Paul Burton" <paul.burton@...s.com>,
        Michael Ellerman <mpe@...erman.id.au>,
        Anshuman Khandual <anshuman.khandual@....com>,
        <netdev@...r.kernel.org>
Subject: Re: [PATCH v3] lib: optimize cpumask_local_spread()

Hi Michal,

On 2019/11/14 22:43, Michal Hocko wrote:
> On Wed 13-11-19 10:46:05, Shaokun Zhang wrote:
> [...]
>>>> available: 4 nodes (0-3)
>>>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
>>>> node 0 size: 63379 MB
>>>> node 0 free: 61899 MB
>>>> node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
>>>> node 1 size: 64509 MB
>>>> node 1 free: 63942 MB
>>>> node 2 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
>>>> node 2 size: 64509 MB
>>>> node 2 free: 63056 MB
>>>> node 3 cpus: 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
>>>> node 3 size: 63997 MB
>>>> node 3 free: 63420 MB
>>>> node distances:
>>>> node   0   1   2   3
>>>>   0:  10  16  32  33
>>>>   1:  16  10  25  32
>>>>   2:  32  25  10  16
>>>>   3:  33  32  16  10
> [...]
>> before patch
>> Euler:/sys/bus/pci/devices/0000:7d:00.2 # cat numa_node
>> 2
>> Euler:/sys/bus/pci # cat /proc/irq/345/smp_affinity_list
>> 48
> 
> node 2
> 
>> Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
>> 0
> 
> node 0
> 
>> Euler:/sys/bus/pci # cat /proc/irq/393/smp_affinity_list
>> 24
> 
> node 1
> 
>> Euler:/sys/bus/pci #
>>
>> after patch
>> Euler:/sys/bus/pci/devices/0000:7d:00.2 # cat numa_node
>> 2
>> Euler:/sys/bus/pci # cat /proc/irq/345/smp_affinity_list
>> 48
> 
> node 2
> 
>> Euler:/sys/bus/pci # cat /proc/irq/369/smp_affinity_list
>> 72
> 
> node 3
> 
>> Euler:/sys/bus/pci # cat /proc/irq/393/smp_affinity_list
>> 24
> 
> node 1
> 
> So, a few more questions. The only difference seems to be IRQ 369
> moving from node 0 to node 3. Since the device's affinity is to
> node 2, that makes some sense because node 3 is closer. So far so good.

Right, that is exactly what we want.
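
For reference, the core idea is roughly the following (a minimal
sketch only, not the actual patch code; node_distance(),
for_each_online_node() and the nodemask helpers from
linux/nodemask.h and linux/topology.h are existing kernel APIs,
the function itself is just for illustration):

static int nearest_unused_node(int from, const nodemask_t *used)
{
        int node, best = NUMA_NO_NODE;
        int best_dist = INT_MAX;

        /* Walk all online nodes and keep the not-yet-used one with
         * the smallest NUMA distance from 'from', instead of falling
         * back in numeric node-ID order. */
        for_each_online_node(node) {
                if (node_isset(node, *used))
                        continue;
                if (node_distance(from, node) < best_dist) {
                        best_dist = node_distance(from, node);
                        best = node;
                }
        }
        return best;
}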

> I am still missing a large part of the picture, namely why those
> other IRQs are not using any of the existing CPUs on node 2.
> Could you explain that, please?
> 

Oh, my mistake: in the previous example I didn't list all of the
IRQs, I just picked one IRQ from each NUMA node, which is why the
IRQ numbers are not consecutive :-).
IRQs 345 to 368 are each bound to one CPU core in NUMA node 2,
one IRQ per core; for example:

Euler:/sys/bus/pci # cat /proc/irq/346/smp_affinity_list
49

The other IRQs follow the same pattern.
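
For context, this is the usual driver pattern that ends up calling
cpumask_local_spread() (an illustrative sketch; cpumask_local_spread(),
dev_to_node(), cpumask_of() and irq_set_affinity_hint() are existing
kernel APIs, while 'nvec', 'irqs' and 'dev' are made-up names):

        /* Bind vector i to the i-th CPU that cpumask_local_spread()
         * picks for the device-local node, one IRQ per core. */
        for (i = 0; i < nvec; i++) {
                unsigned int cpu = cpumask_local_spread(i, dev_to_node(dev));

                irq_set_affinity_hint(irqs[i], cpumask_of(cpu));
        }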

> Btw. this all should be in the changelog.

OK, I will put all of this in the changelog in future versions.

Thanks,
Shaokun

> 
