Date:   Fri, 4 Aug 2023 09:19:45 -0600
From:   Keith Busch <kbusch@...nel.org>
To:     Pratyush Yadav <ptyadav@...zon.de>
Cc:     Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
        Jens Axboe <axboe@...nel.dk>, linux-nvme@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme-pci: do not set the NUMA node of device if it has
 none

On Fri, Aug 04, 2023 at 04:50:16PM +0200, Pratyush Yadav wrote:
> With this patch, I get the below affinities:

Something still seems off without effective_affinity set. That attribute
should always reflect one CPU from the smp_affinity_list.

At least with your patch, the smp_affinity_list looks as expected: every
CPU is accounted for, and no vector appears to be shared among CPUs in
different nodes.
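
If it helps, here is a rough way to eyeball that: dump each node's cpulist
next to each nvme0 vector's affinity. Just a sketch, not anything
authoritative (the sysfs node paths and the "nvme0" match are assumptions
about your box):

    # List each node's CPUs, then each nvme0 vector's smp_affinity_list,
    # so a vector that spans both nodes stands out.
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "$(basename "$n"): $(cat "$n/cpulist")"
    done
    for i in $(grep nvme0 /proc/interrupts | cut -d: -f1 | tr -d ' '); do
        echo "IRQ $i -> $(cat /proc/irq/$i/smp_affinity_list)"
    done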
 
>     $   for i in $(cat /proc/interrupts | grep nvme0 | sed "s/^ *//g" | cut -d":" -f 1); do \
>     >     cat /proc/irq/$i/{smp,effective}_affinity_list; \
>     >   done
>     8
>     8
>     16-17,48,65,67,69
> 
>     18-19,50,71,73,75
> 
>     20,52,77,79
> 
>     21,53,81,83
> 
>     22,54,85,87
> 
>     23,55,89,91
> 
>     24,56,93,95
> 
>     25,57,97,99
> 
>     26,58,101,103
> 
>     27,59,105,107
> 
>     28,60,109,111
> 
>     29,61,113,115
> 
>     30,62,117,119
> 
>     31,63,121,123
> 
>     49,51,125,127
> 
>     0,32,64,66
> 
>     1,33,68,70
> 
>     2,34,72,74
> 
>     3,35,76,78
> 
>     4,36,80,82
> 
>     5,37,84,86
> 
>     6,38,88,90
> 
>     7,39,92,94
> 
>     8,40,96,98
> 
>     9,41,100,102
> 
>     10,42,104,106
> 
>     11,43,108,110
> 
>     12,44,112,114
> 
>     13,45,116,118
> 
>     14,46,120,122
> 
>     15,47,124,126
> 
> The blank lines are because effective_affinity_list is blank for all but the first interrupt.
> 
> The problem is, even with this I still get the same performance
> difference when running on Node 1 vs Node 0. I am not sure why. Any
> pointers?

I suspect effective_affinity isn't getting set and interrupts are
triggering on unexpected CPUs. If you check /proc/interrupts, can you
confirm whether the interrupts are firing on CPUs within the
smp_affinity_list or on some other CPU?
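
Something like the below might make that easier to see. Just a sketch: it
assumes every CPU is online, so that column N+2 of /proc/interrupts maps to
CPU N, and the counts are cumulative since boot (sampling before and after a
short run of the workload and comparing is more telling):

    # For each nvme0 vector, print which per-CPU columns have a nonzero count.
    awk '/nvme0/ {
            irq = $1; sub(/:/, "", irq);
            hits = "";
            for (c = 2; c <= NF && $c ~ /^[0-9]+$/; c++)
                if ($c + 0 > 0) hits = hits " cpu" (c - 2);
            print "IRQ " irq ":" (hits == "" ? " none" : hits);
         }' /proc/interrupts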
