Message-ID: <1477052642.7065.63.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Fri, 21 Oct 2016 05:24:02 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
James Morris <jmorris@...ei.org>,
Trond Myklebust <trond.myklebust@...marydata.com>,
Alexander Duyck <aduyck@...antis.com>,
Daniel Borkmann <daniel@...earbox.net>,
Eric Dumazet <edumazet@...gle.com>,
Tom Herbert <tom@...bertland.com>,
Hannes Frederic Sowa <hannes@...essinduktion.org>,
Edward Cree <ecree@...arflare.com>, linux-nfs@...r.kernel.org
Subject: Re: [PATCH net-next v6 2/3] udp: implement memory accounting helpers

On Fri, 2016-10-21 at 13:55 +0200, Paolo Abeni wrote:
> Avoid using the generic helpers.
> Use the receive queue spin lock to protect the memory
> accounting operation, both on enqueue and on dequeue.
>
> On dequeue perform partial memory reclaiming, trying to
> leave a quantum of forward allocated memory.
>
> On enqueue use a custom helper, to allow some optimizations:
> - use a plain spin_lock() variant instead of the slightly
> costly spin_lock_irqsave(),
> - avoid dst_force check, since the calling code has already
> dropped the skb dst
> - avoid orphaning the skb, since skb_steal_sock() already did
> the work for us
>
> +static void udp_rmem_release(struct sock *sk, int size, int partial)
> +{
> +	int amt;
> +
> +	atomic_sub(size, &sk->sk_rmem_alloc);
> +
> +	spin_lock_bh(&sk->sk_receive_queue.lock);
> +	sk->sk_forward_alloc += size;
> +	amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);
> +	sk->sk_forward_alloc -= amt;
> +	spin_unlock_bh(&sk->sk_receive_queue.lock);
> +
> +	if (amt)
> +		__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);
> +}
> +
> +static void udp_rmem_free(struct sk_buff *skb)
> +{
> +	udp_rmem_release(skb->sk, skb->truesize, 1);
> +}
> +
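
The enqueue-side helper is not part of the hunk quoted above, so for
context here is a rough sketch of what the changelog describes: charge
sk_rmem_alloc, grow sk_forward_alloc under the receive queue lock with
a plain spin_lock(), skip dst_force and skb_orphan(), then queue the
skb. The function name and the exact rcvbuf check are assumptions, not
the code from the posted patch:

static int udp_rmem_schedule(struct sock *sk, struct sk_buff *skb)
{
	struct sk_buff_head *list = &sk->sk_receive_queue;
	int size = skb->truesize;
	int rmem, err = -ENOMEM;

	/* charge the skb against the receive buffer first */
	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
	if (rmem > sk->sk_rcvbuf)
		goto uncharge_drop;

	/* plain spin_lock(): this lock is never taken from hard irq
	 * context on the UDP receive path
	 */
	spin_lock(&list->lock);
	if (size >= sk->sk_forward_alloc) {
		int amt = sk_mem_pages(size);
		int delta = amt << SK_MEM_QUANTUM_SHIFT;

		if (!__sk_mem_raise_allocated(sk, delta, amt, SK_MEM_RECV)) {
			spin_unlock(&list->lock);
			err = -ENOBUFS;
			goto uncharge_drop;
		}
		sk->sk_forward_alloc += delta;
	}
	sk->sk_forward_alloc -= size;

	/* no dst_force and no skb_orphan() here: the caller already
	 * dropped the dst and skb_steal_sock() orphaned the skb
	 */
	__skb_queue_tail(list, skb);
	spin_unlock(&list->lock);

	if (!sock_flag(sk, SOCK_DEAD))
		sk->sk_data_ready(sk);
	return 0;

uncharge_drop:
	atomic_sub(size, &sk->sk_rmem_alloc);
	atomic_inc(&sk->sk_drops);
	return err;
}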
It looks like you are acquiring/releasing sk_receive_queue.lock twice
per packet in recvmsg() (the second time in the destructor above).

We could do slightly better if we did not set skb->destructor at all,
and instead managed the sk_rmem_alloc/sk_forward_alloc changes at the
time we dequeue the skb (if !MSG_PEEK), before copying to user space.
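
Something along the lines of the following sketch, so that the
uncharge and the partial reclaim happen under the same lock
acquisition that protects the dequeue. The helper name is made up, and
MSG_PEEK plus timeout/error handling are left out:

static struct sk_buff *udp_dequeue_and_uncharge(struct sock *sk)
{
	struct sk_buff_head *queue = &sk->sk_receive_queue;
	struct sk_buff *skb;
	int amt = 0;

	spin_lock_bh(&queue->lock);
	skb = __skb_dequeue(queue);
	if (skb) {
		int size = skb->truesize;

		/* uncharge the skb while we still hold the queue lock,
		 * instead of doing it later from a destructor
		 */
		atomic_sub(size, &sk->sk_rmem_alloc);
		sk->sk_forward_alloc += size;

		/* reclaim whole quanta, leaving a bit of forward
		 * allocation cached, like udp_rmem_release() does
		 */
		amt = (sk->sk_forward_alloc - 1) & ~(SK_MEM_QUANTUM - 1);
		sk->sk_forward_alloc -= amt;
	}
	spin_unlock_bh(&queue->lock);

	if (amt)
		__sk_mem_reduce_allocated(sk, amt >> SK_MEM_QUANTUM_SHIFT);

	return skb;
}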