lists.openwall.net
Open Source and information security mailing list archives
Date: Thu, 23 Jul 2020 11:59:51 +0200
From: "Eelco Chaudron" <echaudro@...hat.com>
To: "Florian Westphal" <fw@...len.de>
Cc: netdev@...r.kernel.org, davem@...emloft.net, dev@...nvswitch.org,
	kuba@...nel.org, pabeni@...hat.com, pshelar@....org
Subject: Re: [PATCH net-next 2/2] net: openvswitch: make masks cache size configurable

On 22 Jul 2020, at 21:22, Florian Westphal wrote:

> Eelco Chaudron <echaudro@...hat.com> wrote:
>> This patch makes the masks cache size configurable, or with
>> a size of 0, disable it.
>>
>> Reviewed-by: Paolo Abeni <pabeni@...hat.com>
>> Signed-off-by: Eelco Chaudron <echaudro@...hat.com>
>> ---
>>  include/uapi/linux/openvswitch.h |    1
>>  net/openvswitch/datapath.c       |   11 +++++
>>  net/openvswitch/flow_table.c     |   86 ++++++++++++++++++++++++++++++++++----
>>  net/openvswitch/flow_table.h     |   10 ++++
>>  4 files changed, 98 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/uapi/linux/openvswitch.h b/include/uapi/linux/openvswitch.h
>> index 7cb76e5ca7cf..8300cc29dec8 100644
>> --- a/include/uapi/linux/openvswitch.h
>> +++ b/include/uapi/linux/openvswitch.h
>> @@ -86,6 +86,7 @@ enum ovs_datapath_attr {
>>  	OVS_DP_ATTR_MEGAFLOW_STATS,	/* struct ovs_dp_megaflow_stats */
>>  	OVS_DP_ATTR_USER_FEATURES,	/* OVS_DP_F_* */
>>  	OVS_DP_ATTR_PAD,
>> +	OVS_DP_ATTR_MASKS_CACHE_SIZE,
>
> This new attr should probably get an entry in
> datapath.c datapath_policy[].

Yes, I should have, will fix in v2.
>> --- a/net/openvswitch/datapath.c
>> +++ b/net/openvswitch/datapath.c
>> @@ -1535,6 +1535,10 @@ static int ovs_dp_cmd_fill_info(struct datapath *dp, struct sk_buff *skb,
>>  	if (nla_put_u32(skb, OVS_DP_ATTR_USER_FEATURES, dp->user_features))
>>  		goto nla_put_failure;
>>
>> +	if (nla_put_u32(skb, OVS_DP_ATTR_MASKS_CACHE_SIZE,
>> +			ovs_flow_tbl_masks_cache_size(&dp->table)))
>> +		goto nla_put_failure;
>> +
>>  	genlmsg_end(skb, ovs_header);
>>  	return 0;
>
> ovs_dp_cmd_msg_size() should add another nla_total_size(sizeof(u32))
> to make sure there is enough space.

Same as above.

>> +	if (a[OVS_DP_ATTR_MASKS_CACHE_SIZE]) {
>> +		u32 cache_size;
>> +
>> +		cache_size = nla_get_u32(a[OVS_DP_ATTR_MASKS_CACHE_SIZE]);
>> +		ovs_flow_tbl_masks_cache_resize(&dp->table, cache_size);
>> +	}
>
> I see a 0 cache size is legal (turns it off) and that the allocation
> path has a few sanity checks as well.
>
> Would it make sense to add min/max policy to datapath_policy[] for this
> as well?

Yes, I could add the following:

@@ -1906,7 +1906,8 @@ static const struct nla_policy datapath_policy[OVS_DP_ATTR_MAX + 1] = {
 	[OVS_DP_ATTR_NAME] = { .type = NLA_NUL_STRING, .len = IFNAMSIZ - 1 },
 	[OVS_DP_ATTR_UPCALL_PID] = { .type = NLA_U32 },
 	[OVS_DP_ATTR_USER_FEATURES] = { .type = NLA_U32 },
+	[OVS_DP_ATTR_MASKS_CACHE_SIZE] = NLA_POLICY_RANGE(NLA_U32, 0,
+				PCPU_MIN_UNIT_SIZE / sizeof(struct mask_cache_entry)),
 };

Let me know your thoughts.