Date:   Thu, 20 Apr 2023 15:45:49 -0700
From:   Yury Norov <yury.norov@...il.com>
To:     Tariq Toukan <ttoukan.linux@...il.com>
Cc:     Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
        linux-rdma@...r.kernel.org, linux-kernel@...r.kernel.org,
        Saeed Mahameed <saeedm@...dia.com>,
        Pawel Chmielewski <pawel.chmielewski@...el.com>,
        Leon Romanovsky <leon@...nel.org>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Paolo Abeni <pabeni@...hat.com>,
        Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
        Rasmus Villemoes <linux@...musvillemoes.dk>,
        Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Steven Rostedt <rostedt@...dmis.org>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Daniel Bristot de Oliveira <bristot@...hat.com>,
        Valentin Schneider <vschneid@...hat.com>,
        Gal Pressman <gal@...dia.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Heiko Carstens <hca@...ux.ibm.com>,
        Barry Song <baohua@...nel.org>
Subject: Re: [PATCH v2 4/8] net: mlx5: switch comp_irqs_request() to using
 for_each_numa_cpu

On Thu, Apr 20, 2023 at 11:27:26AM +0300, Tariq Toukan wrote:
> I like this clean API.

Thanks :)
 
> nit:
> Previously cpu_online_mask was used here. Is this change intentional?
> We can fix it in a followup patch if this is the only comment on the series.
> 
> Reviewed-by: Tariq Toukan <tariqt@...dia.com>

The only CPUs listed in sched_domains_numa_masks are 'available', i.e.
online, CPUs. for_each_numa_cpu() ANDs the user-provided cpumask with the
mask associated with each hop, so even when we AND with the possible mask,
we end up walking online CPUs only.

To make sure, I experimented with the modified test:

diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
index 6becb044a66f..c8d557731080 100644
--- a/lib/test_bitmap.c
+++ b/lib/test_bitmap.c
@@ -760,8 +760,13 @@ static void __init test_for_each_numa(void)
                unsigned int hop, c = 0;

                rcu_read_lock();
-               for_each_numa_cpu(cpu, hop, node, cpu_online_mask)
+               pr_err("Node %d:\t", node);
+               for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
                        expect_eq_uint(cpumask_local_spread(c++, node), cpu);
+                       pr_cont("%3d", cpu);
+
+               }
+               pr_err("\n");
                rcu_read_unlock();
        }
 }
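That is, after this change the inner loop of test_for_each_numa() reads:

                rcu_read_lock();
                pr_err("Node %d:\t", node);
                for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
                        expect_eq_uint(cpumask_local_spread(c++, node), cpu);
                        pr_cont("%3d", cpu);
                }
                pr_err("\n");
                rcu_read_unlock();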

This is the NUMA topology of my test machine after the boot:

    root@...ian:~# numactl -H
    available: 4 nodes (0-3)
    node 0 cpus: 0 1 2 3
    node 0 size: 1861 MB
    node 0 free: 1792 MB
    node 1 cpus: 4 5
    node 1 size: 1914 MB
    node 1 free: 1823 MB
    node 2 cpus: 6 7
    node 2 size: 1967 MB
    node 2 free: 1915 MB
    node 3 cpus: 8 9 10 11 12 13 14 15
    node 3 size: 7862 MB
    node 3 free: 7259 MB
    node distances:
    node   0   1   2   3
      0:  10  50  30  70
      1:  50  10  70  30
      2:  30  70  10  50
      3:  70  30  50  10

And this is what the test prints (for each node, CPUs come out in order of
increasing node distance):

     root@...ian:~# insmod test_bitmap.ko
     test_bitmap: loaded.
     test_bitmap: parselist: 14: input is '0-2047:128/256' OK, Time: 472
     test_bitmap: bitmap_print_to_pagebuf: input is '0-32767
     ', Time: 2665
     test_bitmap: Node 0:	  0  1  2  3  6  7  4  5  8  9 10 11 12 13 14 15
     test_bitmap:
     test_bitmap: Node 1:	  4  5  8  9 10 11 12 13 14 15  0  1  2  3  6  7
     test_bitmap:
     test_bitmap: Node 2:	  6  7  0  1  2  3  8  9 10 11 12 13 14 15  4  5
     test_bitmap:
     test_bitmap: Node 3:	  8  9 10 11 12 13 14 15  4  5  6  7  0  1  2  3
     test_bitmap:
     test_bitmap: all 6614 tests passed

Now, disable a couple of CPUs:

     root@...ian:~# chcpu -d 1-2
     smpboot: CPU 1 is now offline
     CPU 1 disabled
     smpboot: CPU 2 is now offline
     CPU 2 disabled

And try again:

     root@...ian:~# rmmod test_bitmap
     rmmod: ERROR: ../libkmod/libkmod[  320.275904] test_bitmap: unloaded.
     root@...ian:~# numactl -H
     available: 4 nodes (0-3)
     node 0 cpus: 0 3
     node 0 size: 1861 MB
     node 0 free: 1792 MB
     node 1 cpus: 4 5
     node 1 size: 1914 MB
     node 1 free: 1823 MB
     node 2 cpus: 6 7
     node 2 size: 1967 MB
     node 2 free: 1915 MB
     node 3 cpus: 8 9 10 11 12 13 14 15
     node 3 size: 7862 MB
     node 3 free: 7259 MB
     node distances:
     node   0   1   2   3
       0:  10  50  30  70
       1:  50  10  70  30
       2:  30  70  10  50
       3:  70  30  50  10
     root@...ian:~# insmod test_bitmap.ko
     test_bitmap: loaded.
     test_bitmap: parselist: 14: input is '0-2047:128/256' OK, Time: 491
     test_bitmap: bitmap_print_to_pagebuf: input is '0-32767
     ', Time: 2174
     test_bitmap: Node 0:	  0  3  6  7  4  5  8  9 10 11 12 13 14 15
     test_bitmap:
     test_bitmap: Node 1:	  4  5  8  9 10 11 12 13 14 15  0  3  6  7
     test_bitmap:
     test_bitmap: Node 2:	  6  7  0  3  8  9 10 11 12 13 14 15  4  5
     test_bitmap:
     test_bitmap: Node 3:	  8  9 10 11 12 13 14 15  4  5  6  7  0  3
     test_bitmap:
     test_bitmap: all 6606 tests passed

I used cpu_possible_mask because I wanted to keep the patch consistent:
before, we traversed the NUMA hop masks; now we traverse the same hop
masks ANDed with a user-provided mask, so the latter should include all
possible CPUs.
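For context, with for_each_numa_cpu() the loop in comp_irqs_request() just
picks the first ncomp_eqs CPUs in NUMA-distance order, roughly like this
(a paraphrase, not the exact hunk from patch 4/8; cpus[], ncomp_eqs and
dev->priv.numa_node are meant to follow the existing mlx5 code):

    unsigned int cpu, hop;
    int i = 0;

    rcu_read_lock();
    for_each_numa_cpu(cpu, hop, dev->priv.numa_node, cpu_possible_mask) {
            /* cpu_online_mask would yield the same set here, since the
             * hop masks contain online CPUs only
             */
            cpus[i] = cpu;
            if (++i == ncomp_eqs)
                    break;
    }
    rcu_read_unlock();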

If you think it's better to have cpu_online_mask in the driver, let's do
that in a separate patch?

Thanks,
Yury
