Message-ID: <CADXeF1FNGMxBHV8_mvU99Xjj-40BcdG44MtbLNywwr1X8CqHkw@mail.gmail.com>
Date: Tue, 10 Dec 2024 12:19:11 +0900
From: Yuyang Huang <yuyanghuang@...gle.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>, David Ahern <dsahern@...nel.org>,
roopa@...ulusnetworks.com, jiri@...nulli.us, stephen@...workplumber.org,
jimictw@...gle.com, prohr@...gle.com, liuhangbin@...il.com,
nicolas.dichtel@...nd.com, andrew@...n.ch, netdev@...r.kernel.org,
Maciej Żenczykowski <maze@...gle.com>,
Lorenzo Colitti <lorenzo@...gle.com>, Patrick Ruddy <pruddy@...tta.att-mail.com>
Subject: Re: [PATCH net-next, v5] netlink: add IGMP/MLD join/leave notifications
Thanks for the review feedback.

>Is there a strong reason you reimplement this instead of trying to reuse
>inet6_fill_ifmcaddr() ? Keeping notifications and get responses in sync
>used to be a thing in rtnetlink, this code already diverged but maybe
>we can bring it back.

In order to reuse the `inet6_fill_ifmcaddr()` logic, we would need to move
the function and the `struct inet6_fill_args` definition into
`net/core/rtnetlink.c` and `include/net/rtnetlink.h`. Is this approach
acceptable?
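For reference, the shared pieces would end up looking roughly like this in
`include/net/rtnetlink.h` (just a sketch based on what is currently private
to `net/ipv6/addrconf.c`; `enum addr_type_t` would have to move along with
the struct, and the fields should be double checked):

/* Sketch only: currently private to net/ipv6/addrconf.c */
enum addr_type_t {
	UNICAST_ADDR,
	MULTICAST_ADDR,
	ANYCAST_ADDR,
};

struct inet6_fill_args {
	u32 portid;
	u32 seq;
	int event;
	unsigned int flags;
	int netnsid;
	int ifindex;
	enum addr_type_t type;
};

int inet6_fill_ifmcaddr(struct sk_buff *skb, const struct ifmcaddr6 *ifmca,
			struct inet6_fill_args *args);
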
In this patch, we always set `ifa_scope` to `RT_SCOPE_UNIVERSE` for both
IPv4 and IPv6 multicast addresses, for consistency. Reusing
`inet6_fill_ifmcaddr()` would revert to the following logic:

>	u8 scope = RT_SCOPE_UNIVERSE;
>	struct nlmsghdr *nlh;
>
>	if (ipv6_addr_scope(&ifmca->mca_addr) & IFA_SITE)
>		scope = RT_SCOPE_SITE;

Is that acceptable, or should I update the existing logic to always set
`RT_SCOPE_UNIVERSE`?
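If updating the existing helper is preferred, the change would be roughly
this (untested sketch; `nlh` and `ifindex` are the helper's existing
locals):

	/* always universe scope for multicast, same as the IPv4 path */
	put_ifaddrmsg(nlh, 128, IFA_F_PERMANENT, RT_SCOPE_UNIVERSE, ifindex);

i.e. the `scope` local and the `ipv6_addr_scope()` check would go away.
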
>ENOMEM ? I could be wrong but in atomic context the memory pressure
>can well be transient, it's not like the socket queue filled up.
Will update in the next patch version.
>nit: + goes to the end of previous line
Will update in the next patch version.
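i.e. the allocation would become:

	skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct ifaddrmsg)) +
			nla_total_size(16), GFP_ATOMIC);
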
>nit: nlmsg_free(), since it exists
Will update in the next patch version.
Thanks,
Yuyang
On Tue, Dec 10, 2024 at 11:25 AM Jakub Kicinski <kuba@...nel.org> wrote:
>
> On Fri, 6 Dec 2024 13:10:25 +0900 Yuyang Huang wrote:
> > +static int inet6_fill_ifmcaddr(struct sk_buff *skb, struct net_device *dev,
> > +			       const struct in6_addr *addr, int event)
> > +{
> > +	struct ifaddrmsg *ifm;
> > +	struct nlmsghdr *nlh;
> > +
> > +	nlh = nlmsg_put(skb, 0, 0, event, sizeof(struct ifaddrmsg), 0);
> > +	if (!nlh)
> > +		return -EMSGSIZE;
> > +
> > +	ifm = nlmsg_data(nlh);
> > +	ifm->ifa_family = AF_INET6;
> > +	ifm->ifa_prefixlen = 128;
> > +	ifm->ifa_flags = IFA_F_PERMANENT;
> > +	ifm->ifa_scope = RT_SCOPE_UNIVERSE;
> > +	ifm->ifa_index = dev->ifindex;
> > +
> > +	if (nla_put_in6_addr(skb, IFA_MULTICAST, addr) < 0) {
> > +		nlmsg_cancel(skb, nlh);
> > +		return -EMSGSIZE;
> > +	}
> > +
> > +	nlmsg_end(skb, nlh);
> > +	return 0;
> > +}
>
> Is there a strong reason you reimplement this instead of trying to reuse
> inet6_fill_ifmcaddr() ? Keeping notifications and get responses in sync
> used to be a thing in rtnetlink, this code already diverged but maybe
> we can bring it back.
>
> > +static void inet6_ifmcaddr_notify(struct net_device *dev,
> > +				  const struct in6_addr *addr, int event)
> > +{
> > +	struct net *net = dev_net(dev);
> > +	struct sk_buff *skb;
> > +	int err = -ENOBUFS;
>
> ENOMEM ? I could be wrong but in atomic context the memory pressure
> can well be transient, it's not like the socket queue filled up.
>
> > +	skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct ifaddrmsg))
> > +			+ nla_total_size(16), GFP_ATOMIC);
>
> nit: + goes to the end of previous line
>
> > +	if (!skb)
> > +		goto error;
> > +
> > +	err = inet6_fill_ifmcaddr(skb, dev, addr, event);
> > +	if (err < 0) {
> > +		WARN_ON_ONCE(err == -EMSGSIZE);
> > +		kfree_skb(skb);
>
> nit: nlmsg_free(), since it exists
>
> > +		goto error;
> > +	}
> > +
> > +	rtnl_notify(skb, net, 0, RTNLGRP_IPV6_MCADDR, NULL, GFP_ATOMIC);
> > +	return;
> > +error:
> > +	rtnl_set_sk_err(net, RTNLGRP_IPV6_MCADDR, err);
> > +}
> > +
> --
> pw-bot: cr