Message-ID: <8201.1610503246@famine>
Date: Tue, 12 Jan 2021 18:00:46 -0800
From: Jay Vosburgh <jay.vosburgh@...onical.com>
To: Saeed Mahameed <saeed@...nel.org>
cc: Vladimir Oltean <olteanv@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
Andrew Lunn <andrew@...n.ch>,
Florian Fainelli <f.fainelli@...il.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Stephen Hemminger <stephen@...workplumber.org>,
Eric Dumazet <edumazet@...gle.com>,
George McCollister <george.mccollister@...il.com>,
Oleksij Rempel <o.rempel@...gutronix.de>,
Veaceslav Falico <vfalico@...il.com>,
Andy Gospodarek <andy@...yhouse.net>,
Arnd Bergmann <arnd@...db.de>, Taehee Yoo <ap420073@...il.com>,
Jiri Pirko <jiri@...nulli.us>, Florian Westphal <fw@...len.de>,
Nikolay Aleksandrov <nikolay@...dia.com>,
Pravin B Shelar <pshelar@....org>,
Sridhar Samudrala <sridhar.samudrala@...el.com>
Subject: Re: [PATCH v6 net-next 14/15] net: bonding: ensure .ndo_get_stats64 can sleep

Saeed Mahameed <saeed@...nel.org> wrote:
>On Tue, 2021-01-12 at 16:37 +0200, Vladimir Oltean wrote:
>> On Mon, Jan 11, 2021 at 03:38:49PM -0800, Saeed Mahameed wrote:
>> > GFP_ATOMIC is a little bit aggressive, especially when user daemons
>> > are periodically reading stats. This can be avoided.
>> >
>> > You can pre-allocate with GFP_KERNEL an array with an "approximate"
>> > size, then fill the array up with whatever slaves the bond has at
>> > that moment. num_of_slaves can be less, equal, or more than the
>> > array you just allocated, but we shouldn't care ..
>> >
>> > something like:
>> > rcu_read_lock()
>> > nslaves = bond_get_num_slaves();
>> > rcu_read_unlock()
	Can be nslaves = READ_ONCE(bond->slave_cnt), or, for just active
slaves:
struct bond_up_slave *slaves;
slaves = rcu_dereference(bond->slave_arr);
nslaves = slaves ? READ_ONCE(slaves->count) : 0;
>> > sarray = kcalloc(nslaves, sizeof(struct bonding_slave_dev),
>> >                  GFP_KERNEL);
>> > rcu_read_lock();
>> > bond_fill_slaves_array(bond, sarray); // also do: dev_hold()
>> > rcu_read_unlock();
>> >
>> >
>> > bond_get_slaves_array_stats(sarray);
>> >
>> > bond_put_slaves_array(sarray);
>>
>> I don't know what to say about acquiring RCU read lock twice and
>> traversing the list of interfaces three or four times.
>
>You can optimize this by tracking #num_slaves.
I think that the set of active slaves changing between the two
calls will be a rare exception, and that the number of slaves is
generally small (more than 2 is uncommon in my experience).
>> On the other hand, what's the worst that can happen if the GFP_ATOMIC
>> memory allocation fails? It's not like there is any data loss.
>> User space will retry when there is less memory pressure.
>
>Anyway, up to you. I just don't like using GFP_ATOMIC when it can be
>avoided, especially for periodic jobs like stats polling..
And, for the common case, I suspect that an array allocation
will have lower overhead than a loop that allocates once per slave.
-J
---
-Jay Vosburgh, jay.vosburgh@...onical.com