Message-ID: <2ad119bd1f24f408921b16eb0ebdf67935d1d880.camel@redhat.com>
Date: Tue, 07 Mar 2023 15:55:20 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Jason Xing <kerneljasonxing@...il.com>, simon.horman@...igine.com,
willemdebruijn.kernel@...il.com, davem@...emloft.net,
dsahern@...nel.org, edumazet@...gle.com, kuba@...nel.org
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org, Jason Xing <kernelxing@...cent.com>
Subject: Re: [PATCH v3 net-next] udp: introduce __sk_mem_schedule() usage

On Tue, 2023-03-07 at 09:56 +0800, Jason Xing wrote:
> From: Jason Xing <kernelxing@...cent.com>
>
> Keep the accounting schema consistent across different protocols
> by using __sk_mem_schedule(). This also slightly adjusts how the
> forward allocated memory is calculated compared to before. After
> applying this patch, we avoid scheduling an extra amount of
> memory on the receive path.
>
> Link: https://lore.kernel.org/lkml/20230221110344.82818-1-kerneljasonxing@gmail.com/
> Signed-off-by: Jason Xing <kernelxing@...cent.com>
> ---
> v3:
> 1) get rid of inline suggested by Simon Horman
>
> v2:
> 1) change the title and body message
> 2) use __sk_mem_schedule() instead suggested by Paolo Abeni
> ---
> net/ipv4/udp.c | 31 ++++++++++++++++++-------------
> 1 file changed, 18 insertions(+), 13 deletions(-)
>
> diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
> index c605d171eb2d..60473781933c 100644
> --- a/net/ipv4/udp.c
> +++ b/net/ipv4/udp.c
> @@ -1531,10 +1531,23 @@ static void busylock_release(spinlock_t *busy)
> spin_unlock(busy);
> }
>
> +static int udp_rmem_schedule(struct sock *sk, int size)
> +{
> + int delta;
> +
> + delta = size - sk->sk_forward_alloc;
> + if (delta > 0 && !__sk_mem_schedule(sk, delta, SK_MEM_RECV))
> + return -ENOBUFS;
> +
> + sk->sk_forward_alloc -= size;

I think it's better if you keep the above statement outside of this
helper: it's a bit confusing that rmem_schedule() actually consumes fwd
memory.
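Something along these lines (a rough sketch only, not even compile-tested,
with the error-path details in __udp_enqueue_schedule_skb() elided):

static int udp_rmem_schedule(struct sock *sk, int size)
{
	int delta;

	delta = size - sk->sk_forward_alloc;
	if (delta > 0 && !__sk_mem_schedule(sk, delta, SK_MEM_RECV))
		return -ENOBUFS;

	/* only ensure enough fwd memory is available, don't consume it */
	return 0;
}

and then in the caller, under the receive queue lock:

	err = udp_rmem_schedule(sk, size);
	if (err)
		goto out;	/* hypothetical label, reuse the existing error path */

	/* fwd memory is consumed here, where the reader expects it */
	sk->sk_forward_alloc -= size;

That way the helper name keeps matching what the helper actually does.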
Side note
Cheers,
Paolo