Message-ID: <20241101-cheerful-pretty-wapiti-d5f69e@leitao>
Date: Fri, 1 Nov 2024 03:51:59 -0700
From: Breno Leitao <leitao@...ian.org>
To: Jakub Kicinski <kuba@...nel.org>
Cc: horms@...nel.org, davem@...emloft.net, edumazet@...gle.com,
pabeni@...hat.com, thepacketgeek@...il.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, davej@...emonkey.org.uk,
vlad.wing@...il.com, max@...sevol.com, kernel-team@...a.com,
jiri@...nulli.us, jv@...sburgh.net, andy@...yhouse.net,
aehkn@...hub.one, Rik van Riel <riel@...riel.com>,
Al Viro <viro@...iv.linux.org.uk>
Subject: Re: [PATCH net-next 1/3] net: netpoll: Defer skb_pool population
until setup success
Hello Jakub,
On Thu, Oct 31, 2024 at 06:26:47PM -0700, Jakub Kicinski wrote:
> On Fri, 25 Oct 2024 07:20:18 -0700 Breno Leitao wrote:
> > The current implementation has a flaw where it populates the skb_pool
> > with 32 SKBs before calling __netpoll_setup(). If the setup fails, the
> > skb_pool buffer will persist indefinitely and never be cleaned up.
> >
> > This change moves the skb_pool population to after the successful
> > completion of __netpoll_setup(), ensuring that the buffers are not
> > unnecessarily retained. Additionally, this modification alleviates rtnl
> > lock pressure by allowing the buffer filling to occur outside of the
> > lock.
>
> arguably if the setup succeeds there would now be a window of time
> where np is active but pool is empty.
I am not convinced this is a problem, given that netpoll_setup() is only
called from netconsole.

In netconsole, a target does not send packets until it is explicitly
enabled (nt->enabled = true), and enabling the target (nt) only happens
after netpoll_setup() returns successfully.
Example:
	static void write_ext_msg(struct console *con, const char *msg,
				  unsigned int len)
	{
		...
		list_for_each_entry(nt, &target_list, list)
			if (nt->extended && nt->enabled && netif_running(nt->np.dev))
				send_ext_msg_udp(nt, msg, len);
So, back to your point, the netpoll interface will be up, but not used
at all.
On top of that, two other considerations:
* If the netpoll target is used while the pool is empty, it is not a big
  deal, since refill_skbs() is called on every send, regardless of whether
  the pool is full or not. (Which is not ideal, and I eventually want to
  improve it.)
Anyway, this is how the code works today:
	void netpoll_send_udp(struct netpoll *np, const char *msg, int len)
	{
		...
		skb = find_skb(np, total_len + np->dev->needed_tailroom,...
		// transmit the skb

	static struct sk_buff *find_skb(struct netpoll *np, int len, int reserve)
	{
		...
		refill_skbs(np);
		skb = alloc_skb(len, GFP_ATOMIC);
		if (!skb)
			skb = skb_dequeue(&np->skb_pool);
		...
		// return the skb
So, even if there is a transmission between enabling the netpoll
target and populating the pool (which is NOT the case in the code
today), it would not be a problem, given that netpoll_send_udp() will
call refill_skbs() anyway.
I have an in-development patch to improve this by deferring the refill
to a worker thread, mainly because this whole allocation dance is done
with a bunch of locks held, including the printk/console lock.
I think a better mechanism might be something like:

* If find_skb() needs to consume from the pool (which is rare, only
  when alloc_skb() fails), wake a worker thread that tries to repopulate
  the pool in the background.

* Eventually, skip the alloc_skb() attempt and take from the pool first;
  if the pool is depleted, fall back to alloc_skb(GFP_ATOMIC). This
  might make the code faster, but I don't have data yet.
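To make the second idea concrete, here is a rough userspace sketch of
the proposed "pool first, allocator as fallback" order. All names here
(struct pool, pool_get, pool_refill, POOL_SIZE) are illustrative stand-ins,
not the kernel API:

```c
#include <stddef.h>
#include <stdlib.h>

#define POOL_SIZE 4	/* stands in for MAX_SKBS */

struct pool {
	void *bufs[POOL_SIZE];
	int count;
};

/* Prime the pool, mirroring refill_skbs() topping it up to the limit. */
static void pool_refill(struct pool *p, size_t len)
{
	while (p->count < POOL_SIZE)
		p->bufs[p->count++] = malloc(len);
}

/* Proposed order: consume from the pool first; only when the pool is
 * empty, fall back to a fresh allocation (alloc_skb(GFP_ATOMIC) in the
 * real code). This is the inverse of today's find_skb(). */
static void *pool_get(struct pool *p, size_t len)
{
	if (p->count > 0)
		return p->bufs[--p->count];
	return malloc(len);	/* fallback path */
}
```

The pool hit avoids touching the allocator in the hot (locked) path;
the worker thread from the first bullet would then call pool_refill()
outside those locks.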
This might also require making the netpoll pool size configurable.
Today it is hardcoded (#define MAX_SKBS 32). This current patchset is
the first step to individualize the pool; then we can add a field to
struct netpoll that specifies the pool size (32 by default) but can be
overridden by user configuration.

On netconsole, we can do it using the configfs fields.
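A minimal sketch of what that per-target knob could look like, assuming
a hypothetical pool_size field (the struct and helper names below are
illustrative, not existing kernel code):

```c
/* Hypothetical: replace the hardcoded MAX_SKBS with a per-netpoll
 * value, defaulting to 32, settable e.g. from a netconsole configfs
 * attribute. 0 means "not configured, use the default". */

#define DEFAULT_POOL_SIZE 32

struct np_pool_cfg {
	unsigned int pool_size;
};

static unsigned int np_effective_pool_size(const struct np_pool_cfg *cfg)
{
	return cfg->pool_size ? cfg->pool_size : DEFAULT_POOL_SIZE;
}
```

The refill path would then loop up to np_effective_pool_size() instead
of MAX_SKBS.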
Anyway, are these ideas too crazy?