Message-ID: <ZEjM0aTEyxHgAcwa@Laptop-X1>
Date: Wed, 26 Apr 2023 15:03:45 +0800
From: Hangbin Liu <liuhangbin@...il.com>
To: Jay Vosburgh <jay.vosburgh@...onical.com>
Cc: Jakub Kicinski <kuba@...nel.org>,
kernel test robot <lkp@...el.com>, netdev@...r.kernel.org,
oe-kbuild-all@...ts.linux.dev,
"David S . Miller" <davem@...emloft.net>,
Paolo Abeni <pabeni@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Liang Li <liali@...hat.com>, Vincent Bernat <vincent@...nat.ch>
Subject: Re: [PATCH net 1/4] bonding: fix send_peer_notif overflow
On Fri, Apr 21, 2023 at 05:55:16PM +0800, Hangbin Liu wrote:
> > I'm fine with limiting the peer_notif_delay range and then using a
> > smaller type.
> >
> > num_peer_notif is already limited to 255; I'm going to suggest a
> > limit to the delay of 300 seconds. That seems like an absurdly long
> > time for this; I didn't do any kind of science to come up with that
> > number.
> >
> > As peer_notif_delay is stored in units of miimon intervals, that
> > gives a worst case peer_notif_delay value of 300000 if miimon is 1, and
> > 255 * 300000 fits easily in a u32 for send_peer_notif.
>
> OK, I just found another overflow, in bond_fill_info() or
> bond_option_miimon_set():
>
> if (nla_put_u32(skb, IFLA_BOND_PEER_NOTIF_DELAY,
> bond->params.peer_notif_delay * bond->params.miimon))
> goto nla_put_failure;
>
> Since both peer_notif_delay and miimon are defined as int, there is a
> possibility that the filled-in value overflows. The same applies to the
> up/down delays.
>
> Even if we limit peer_notif_delay to 300s, which is 30000, there is still a
> possibility of overflow if miimon is set large enough.
>
> This overflow should only affect the value shown to user space, since it is
> a multiplication result. The kernel side works fine. I'm not sure if we
> should also limit the miimon and up/down delay values...
Hi Jay,
Any comments on this issue? Should I send the send_peer_notif fix first and
discuss the miimon and up/down delay userspace overflow issue later?
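
To make the arithmetic above concrete, here is a minimal user-space sketch
(not kernel code; the caps are the ones suggested above) checking that the
worst-case send_peer_notif still fits in a u32:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* num_peer_notif capped at 255, peer_notif_delay capped at
		 * 300s of 1ms miimon intervals (300000) */
		uint64_t worst = 255ULL * 300000ULL;	/* 76,500,000 */

		printf("worst = %llu, U32_MAX = %llu\n",
		       (unsigned long long)worst,
		       (unsigned long long)UINT32_MAX);
		return worst <= UINT32_MAX ? 0 : 1;	/* fits, so exit 0 */
	}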
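
And a sketch of the userspace-visible overflow in the fill path (again plain
user-space code with made-up example values, not the kernel source): since
both operands are 32-bit, the product wraps before nla_put_u32() ever sees it:

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		uint32_t peer_notif_delay = 30000; /* 300s in 10ms intervals */
		uint32_t miimon = 500000;	   /* large but accepted miimon, in ms */

		/* 30000 * 500000 = 15,000,000,000 needs 34 bits, so the
		 * 32-bit product wraps to 2,115,098,112 */
		uint32_t reported = peer_notif_delay * miimon;

		/* one possible guard (hypothetical): widen to 64 bits and clamp */
		uint64_t wide = (uint64_t)peer_notif_delay * miimon;
		uint32_t clamped = wide > UINT32_MAX ? UINT32_MAX : (uint32_t)wide;

		printf("reported = %u, clamped = %u\n", reported, clamped);
		return 0;
	}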
Thanks
Hangbin