Message-ID: <b0f6a58a-0078-05a2-a7f9-946b2ff21846@grimberg.me>
Date:   Wed, 1 Aug 2018 08:12:48 +0300
From:   Sagi Grimberg <sagi@...mberg.me>
To:     Max Gurtovoy <maxg@...lanox.com>,
        Jason Gunthorpe <jgg@...lanox.com>
Cc:     Steve Wise <swise@...ngridcomputing.com>,
        'Leon Romanovsky' <leon@...nel.org>,
        'Doug Ledford' <dledford@...hat.com>,
        'RDMA mailing list' <linux-rdma@...r.kernel.org>,
        'Saeed Mahameed' <saeedm@...lanox.com>,
        'linux-netdev' <netdev@...r.kernel.org>
Subject: Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

Hi Max,

> Yes, since nvmf is the only user of this function.
> Still waiting for comments on the suggested patch :)
> 

Sorry for the late response (but I'm on vacation so I have
an excuse ;))

I'm thinking that we should avoid trying to find an assignment
when something like the irqbalance daemon is running and changing
the IRQ affinity underneath us.

This extension was made to apply an optimal affinity assignment
when the device IRQ affinity is lined up as one vector per core.
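
To make that concrete: the layout I have in mind is each completion
vector pinned to exactly one distinct CPU. A purely illustrative debug
helper (not part of the patch, just a sketch built on
ib_get_vector_affinity()) to eyeball whether a device is still in that
state could look like:
--
static void dump_vector_affinity(struct ib_device *dev)
{
        const struct cpumask *mask;
        int vec;

        for (vec = 0; vec < dev->num_comp_vectors; vec++) {
                mask = ib_get_vector_affinity(dev, vec);
                if (!mask) {
                        pr_info("vector %d: no affinity reported\n", vec);
                        continue;
                }
                /* weight 1 with a distinct first cpu per vector is the
                 * "lined up" case; anything else means irqbalance (or
                 * the driver) spread things out.
                 */
                pr_info("vector %d: cpus %*pbl (weight %u)\n", vec,
                        cpumask_pr_args(mask), cpumask_weight(mask));
        }
}
--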

I'm thinking that when we identify this is not the case, we should
immediately fall back to the default mapping.

1. When we get a mask, if its weight != 1, we fall back.
2. If a queue was left unmapped (its CPU already claimed by another
   queue), we fall back.

Maybe something like the following:
--
diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c
index 996167f1de18..1ada6211c55e 100644
--- a/block/blk-mq-rdma.c
+++ b/block/blk-mq-rdma.c
@@ -35,17 +35,26 @@ int blk_mq_rdma_map_queues(struct blk_mq_tag_set *set,
         const struct cpumask *mask;
         unsigned int queue, cpu;

+       /* reset all CPU mappings */
+       for_each_possible_cpu(cpu)
+               set->mq_map[cpu] = UINT_MAX;
+
         for (queue = 0; queue < set->nr_hw_queues; queue++) {
                 mask = ib_get_vector_affinity(dev, first_vec + queue);
                 if (!mask)
                         goto fallback;

-               for_each_cpu(cpu, mask)
-                       set->mq_map[cpu] = queue;
+               if (cpumask_weight(mask) != 1)
+                       goto fallback;
+
+               cpu = cpumask_first(mask);
+               if (set->mq_map[cpu] != UINT_MAX)
+                       goto fallback;
+
+               set->mq_map[cpu] = queue;
         }

         return 0;
-
  fallback:
         return blk_mq_map_queues(set);
  }
--
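
For reference, the only in-tree user today is nvme-rdma's ->map_queues
callback, which (going from memory, so take the exact field names with a
grain of salt) just forwards to this helper, so whatever fallback policy
we settle on here is what nvmf ends up with:
--
/* sketch of the nvme-rdma caller; it delegates straight to
 * blk_mq_rdma_map_queues() with first_vec 0
 */
static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
{
        struct nvme_rdma_ctrl *ctrl = set->driver_data;

        return blk_mq_rdma_map_queues(set, ctrl->device->dev, 0);
}
--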
