Open Source and information security mailing list archives
Date: Thu, 4 Aug 2016 15:36:10 -0400
From: kan.liang@...el.com
To: davem@...emloft.net, linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Cc: mingo@...hat.com, peterz@...radead.org, kuznet@....inr.ac.ru, jmorris@...ei.org, yoshfuji@...ux-ipv6.org, kaber@...sh.net, akpm@...ux-foundation.org, keescook@...omium.org, viro@...iv.linux.org.uk, gorcunov@...nvz.org, john.stultz@...aro.org, aduyck@...antis.com, ben@...adent.org.uk, decot@...glers.com, fw@...len.de, alexander.duyck@...il.com, daniel@...earbox.net, tom@...bertland.com, rdunlap@...radead.org, xiyou.wangcong@...il.com, hannes@...essinduktion.org, jesse.brandeburg@...el.com, andi@...stfloor.org, Kan Liang <kan.liang@...el.com>
Subject: [RFC V2 PATCH 06/25] net/netpolicy: set and remove IRQ affinity

From: Kan Liang <kan.liang@...el.com>

This patch introduces functions to set and remove IRQ affinity according to
the CPU and queue mapping. The functions do not record the previous affinity
state: after a set/remove cycle, the affinity is restored to all online CPUs
and IRQ balancing is re-enabled.
Signed-off-by: Kan Liang <kan.liang@...el.com>
---
 net/core/netpolicy.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/net/core/netpolicy.c b/net/core/netpolicy.c
index ff7fc04..c44818d 100644
--- a/net/core/netpolicy.c
+++ b/net/core/netpolicy.c
@@ -29,6 +29,7 @@
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/init.h>
+#include <linux/irq.h>
 #include <linux/seq_file.h>
 #include <linux/proc_fs.h>
 #include <linux/uaccess.h>
@@ -128,6 +129,38 @@ err:
 	return -ENOMEM;
 }
 
+static void netpolicy_clear_affinity(struct net_device *dev)
+{
+	struct netpolicy_sys_info *s_info = &dev->netpolicy->sys_info;
+	u32 i;
+
+	for (i = 0; i < s_info->avail_rx_num; i++) {
+		irq_clear_status_flags(s_info->rx[i].irq, IRQ_NO_BALANCING);
+		irq_set_affinity_hint(s_info->rx[i].irq, cpu_online_mask);
+	}
+
+	for (i = 0; i < s_info->avail_tx_num; i++) {
+		irq_clear_status_flags(s_info->tx[i].irq, IRQ_NO_BALANCING);
+		irq_set_affinity_hint(s_info->tx[i].irq, cpu_online_mask);
+	}
+}
+
+static void netpolicy_set_affinity(struct net_device *dev)
+{
+	struct netpolicy_sys_info *s_info = &dev->netpolicy->sys_info;
+	u32 i;
+
+	for (i = 0; i < s_info->avail_rx_num; i++) {
+		irq_set_status_flags(s_info->rx[i].irq, IRQ_NO_BALANCING);
+		irq_set_affinity_hint(s_info->rx[i].irq, cpumask_of(s_info->rx[i].cpu));
+	}
+
+	for (i = 0; i < s_info->avail_tx_num; i++) {
+		irq_set_status_flags(s_info->tx[i].irq, IRQ_NO_BALANCING);
+		irq_set_affinity_hint(s_info->tx[i].irq, cpumask_of(s_info->tx[i].cpu));
+	}
+}
+
 const char *policy_name[NET_POLICY_MAX] = {
 	"NONE"
 };
-- 
2.5.5