Date: Tue, 13 Oct 2020 14:44:19 +0200
From: Eelco Chaudron <echaudro@...hat.com>
To: netdev@...r.kernel.org
Cc: davem@...emloft.net, dev@...nvswitch.org, kuba@...nel.org,
	pabeni@...hat.com, pshelar@....org, jlelli@...hat.com,
	bigeasy@...utronix.de
Subject: [PATCH net-next] net: openvswitch: fix to make sure flow_lookup() is not preempted

The flow_lookup() function uses per-CPU variables, so it must not be
preempted while accessing them. This is fine in the general NAPI use
case, where local BHs are disabled, but the function is also called
in the netlink context, which is preemptible. The patch below makes
sure that preemption is disabled in the netlink path as well.

Fixes: eac87c413bf9 ("net: openvswitch: reorder masks array based on usage")
Reported-by: Juri Lelli <jlelli@...hat.com>
Signed-off-by: Eelco Chaudron <echaudro@...hat.com>
---
 net/openvswitch/flow_table.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 87c286ad660e..16289386632b 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -850,9 +850,17 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl,
 	struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
 	u32 __always_unused n_mask_hit;
 	u32 __always_unused n_cache_hit;
+	struct sw_flow *flow;
 	u32 index = 0;
 
-	return flow_lookup(tbl, ti, ma, key, &n_mask_hit, &n_cache_hit, &index);
+	/* This function gets called through the netlink interface and is
+	 * therefore preemptible. However, flow_lookup() needs to be called
+	 * with preemption disabled because it uses per-CPU variables.
+	 */
+	preempt_disable();
+	flow = flow_lookup(tbl, ti, ma, key, &n_mask_hit, &n_cache_hit, &index);
+	preempt_enable();
+	return flow;
 }
 
 struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl,
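
As background for the fix above, here is a minimal, self-contained
sketch of the per-CPU access pattern the patch protects. It is
illustrative only and not taken from the patch; the counter name and
function are hypothetical. Any multi-step access through
this_cpu_ptr() needs preemption disabled, otherwise the task can
migrate to another CPU between the pointer lookup and the update.

/* Hypothetical example, not part of the patch: a per-CPU hit counter. */
#include <linux/percpu.h>
#include <linux/preempt.h>

static DEFINE_PER_CPU(u64, example_hit_counter);

static void example_count_hit(void)
{
	u64 *cnt;

	/* Pin the task to the current CPU. Without this, preemption
	 * between this_cpu_ptr() and the increment could migrate the
	 * task, so it would update the old CPU's counter while running
	 * on a new one, racing with that CPU's own updates.
	 */
	preempt_disable();
	cnt = this_cpu_ptr(&example_hit_counter);
	*cnt += 1;
	preempt_enable();
}

The get_cpu_var()/put_cpu_var() helpers bundle the same
preempt_disable()/preempt_enable() pair around the access. In the NAPI
path the lookup already runs with bottom halves disabled, which on
mainline (non-PREEMPT_RT) kernels also keeps preemption off; the
netlink path gives no such guarantee, hence the explicit pair in the
patch.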