Message-ID: <Zyq0mBCJEBQ2s2Jm@mini-arch>
Date: Tue, 5 Nov 2024 16:13:12 -0800
From: Stanislav Fomichev <stfomichev@...il.com>
To: Mina Almasry <almasrymina@...gle.com>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Shuah Khan <shuah@...nel.org>, Yi Lai <yi1.lai@...ux.intel.com>
Subject: Re: [PATCH net-next v1 6/7] net: fix SO_DEVMEM_DONTNEED looping too
long
On 11/05, Mina Almasry wrote:
> On Tue, Nov 5, 2024 at 1:46 PM Stanislav Fomichev <stfomichev@...il.com> wrote:
> > > > > Also, the information is useless to the user. If the user sees 'frag
> > > > > 128 failed to free', there is nothing the user can really do to
> > > > > recover at runtime. The only use that could come of it is for the user
> > > > > to log the error, and we already WARN_ON_ONCE on errors the user could
> > > > > not have triggered.
> > > >
> > > > I'm thinking from the PoV of the user application. It might have bugs
> > > > as well and try to refill something that should not have been refilled.
> > > > Having info about which particular token failed (even just for
> > > > logging purposes) might have been nice.
> > >
> > > Yeah, it may have been nice. On the flip side, it complicates calling
> > > sock_devmem_dontneed(): userspace needs to count the freed frags in
> > > its input, remove them, skip the leaked one, and re-call the syscall.
> > > Userspace does get to know the id of the frag that leaked, but the
> > > usefulness of that information is slightly questionable to me.
> > > :shrug:
> >
> > Right, because I was gonna suggest for this patch, instead of having
> > a separate extra loop that returns -E2BIG (since that loop is mostly
> > wasted cycles assuming most of the calls are well behaved), can we
> > keep a running count of freed frags as we go and stop and return once
> > we reach MAX_DONTNEED_FRAGS?
> >
> > 	for (i = 0; i < num_tokens; i++) {
> > 		for (j ...) {	/* erase and free each frag in tokens[i] */
> > 			netmem_ref netmem ...
> > 			...
> > 		}
> > 		num_frags += tokens[i].token_count;
> > 		if (num_frags > MAX_DONTNEED_FRAGS)
> > 			return ret;	/* stop early, report partial count */
> > 	}
> >
> > Or do you still find it confusing because userspace has to handle that?
>
> Ah, I don't think this will work, because it creates this scenario:
>
> - user calls SO_DEVMEM_DONTNEED passing 1030 tokens.
> - Kernel returns 500 freed.
> - User doesn't know whether:
> (a) the remaining 530 are all covered by the last token's token_count
> and that's why the kernel returned early, or
> (b) the kernel leaked 530 tokens because it could not find any of them
> in sk_user_frags.
>
> In (a) the user is supposed to recall SO_DEVMEM_DONTNEED on the
> remaining 530 tokens, but in (b) the user is not supposed to do that
> (the tokens have leaked and there is nothing the user can do to
> recover).
I kinda feel like people will still write code against internal limits
anyway? At least that's what we did with the internal version of your
code: you know that the kernel can't return more than 128 tokens per
call, so you don't even try. If you get an error, or ret != the
expected length, you kill the connection. It seems like there is no
graceful recovery from that?
Regarding your (a) vs (b) example, you can try calling DONTNEED another
time in both cases and either get a non-zero return and make some
progress, or get 0 and give up?
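Something like the following is what I'd expect userspace to do anyway
(untested sketch; assumes setsockopt() returns the number of freed
frags as in the current series and that the kernel consumes tokens in
order; kill_connection() is a made-up placeholder):

	#include <stddef.h>
	#include <stdint.h>
	#include <sys/socket.h>
	#include <linux/uio.h>		/* struct dmabuf_token */

	#ifndef SO_DEVMEM_DONTNEED
	#define SO_DEVMEM_DONTNEED 80	/* uapi value from the devmem series */
	#endif

	extern void kill_connection(int fd);	/* made-up placeholder */

	static void dontneed_all(int fd, struct dmabuf_token *tok, size_t n)
	{
		while (n > 0) {
			int freed = setsockopt(fd, SOL_SOCKET,
					       SO_DEVMEM_DONTNEED,
					       tok, n * sizeof(*tok));

			if (freed <= 0)
				break;	/* error or zero progress: give up */
			/* drop the leading tokens that were fully freed;
			 * note this can't distinguish leaked frags from an
			 * early return, which is the (a) vs (b) problem
			 * above */
			while (n > 0 && (uint32_t)freed >= tok->token_count) {
				freed -= tok->token_count;
				tok++;
				n--;
			}
			if (freed)
				break;	/* partial token: can't retry safely */
		}
		if (n > 0)
			kill_connection(fd);	/* no graceful recovery */
	}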
> The current interface is simpler. The kernel either returns an
> error (nothing has been freed): recall SO_DEVMEM_DONTNEED on all the
> tokens after resolving the error, or,
>
> the kernel returns a positive value, which means all the tokens have
> been freed (or unrecoverably leaked), and userspace must not call
> SO_DEVMEM_DONTNEED on this batch again.
Totally agree that it's simpler. But my worry is that we now
essentially waste a bunch of CPU looping over and testing for a
condition that's not gonna happen in well-behaved applications.
But maybe I'm overblowing it, idk.
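For reference, this is the extra pass I'm talking about (my reading of
the patch, not necessarily the exact code; same variables as the
snippet above):

	/* Separate validation pass: one extra walk over the whole input
	 * just to reject oversized batches, before any frag is freed. */
	num_frags = 0;
	for (i = 0; i < num_tokens; i++) {
		num_frags += tokens[i].token_count;
		if (num_frags > MAX_DONTNEED_FRAGS)
			return -E2BIG;
	}
	/* ...and then the real loop walks everything again to free. */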
(I'm gonna wait for you to respin before formally sending acks because
it's not clear which series goes where...)