Message-ID: <CANn89i+TiiLKsE7k4TyRqr03uNPW=UpkvpXL1LVWvTmhE_AUpA@mail.gmail.com>
Date: Wed, 4 Mar 2020 17:36:14 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Shakeel Butt <shakeelb@...gle.com>
Cc: Roman Gushchin <guro@...com>, Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"David S . Miller" <davem@...emloft.net>,
Alexey Kuznetsov <kuznet@....inr.ac.ru>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
netdev <netdev@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
Cgroups <cgroups@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] net: memcg: late association of sock to memcg
On Wed, Mar 4, 2020 at 3:39 PM Shakeel Butt <shakeelb@...gle.com> wrote:
>
> If a TCP socket is allocated in IRQ context, or cloned from an
> unassociated (i.e. not associated with a memcg) socket in IRQ
> context, then it will remain unassociated for its whole life. Almost
> half of the TCP sockets created on the system are created in IRQ
> context, so the memory used by such sockets will not be accounted
> for by the memcg.
>
> This issue is more widespread in cgroup v1, where network memory
> accounting is opt-in, but it can also happen in cgroup v2 if the
> source socket for the cloning was created in the root memcg.
>
> To fix the issue, just do the late association of the unassociated
> sockets at accept() time in the process context and then force charge
> the memory buffer already reserved by the socket.
>
> Signed-off-by: Shakeel Butt <shakeelb@...gle.com>
> ---
> Changes since v1:
> - added sk->sk_rmem_alloc to initial charging.
> - added synchronization to get memory usage and set sk_memcg race-free.
>
> net/ipv4/inet_connection_sock.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
> index a4db79b1b643..7bcd657cd45e 100644
> --- a/net/ipv4/inet_connection_sock.c
> +++ b/net/ipv4/inet_connection_sock.c
> @@ -482,6 +482,25 @@ struct sock *inet_csk_accept(struct sock *sk, int flags, int *err, bool kern)
>  		}
>  		spin_unlock_bh(&queue->fastopenq.lock);
>  	}
> +
> +	if (mem_cgroup_sockets_enabled && !newsk->sk_memcg) {
> +		int amt;
> +
> +		/* atomically get the memory usage and set newsk->sk_memcg. */
> +		lock_sock(newsk);
> +
> +		/* The newsk has not been accepted yet, no need to look at
> +		 * newsk->sk_wmem_queued.
> +		 */
> +		amt = sk_mem_pages(newsk->sk_forward_alloc +
> +				   atomic_read(&newsk->sk_rmem_alloc));
> +		mem_cgroup_sk_alloc(newsk);
> +
> +		release_sock(newsk);
> +
> +		if (newsk->sk_memcg)
Most sockets in the accept queue should have amt == 0, so maybe avoid
calling this thing when amt == 0 ?
Also, I would release_sock(newsk) after this, otherwise incoming
packets could mess with newsk->sk_forward_alloc:

	if (amt && newsk->sk_memcg)
		mem_cgroup_charge_skmem(newsk->sk_memcg, amt);
	release_sock(newsk);
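
Folding both points together, the whole block could look something
like this (an untested sketch, using the same helpers as the patch,
just to illustrate the ordering):

	if (mem_cgroup_sockets_enabled && !newsk->sk_memcg) {
		int amt;

		/* Keep newsk locked until the charge is done so that
		 * incoming packets cannot change sk_forward_alloc.
		 */
		lock_sock(newsk);

		amt = sk_mem_pages(newsk->sk_forward_alloc +
				   atomic_read(&newsk->sk_rmem_alloc));
		mem_cgroup_sk_alloc(newsk);

		/* Most queued sockets have amt == 0, skip the charge then. */
		if (amt && newsk->sk_memcg)
			mem_cgroup_charge_skmem(newsk->sk_memcg, amt);

		release_sock(newsk);
	}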
Also, I wonder if mem_cgroup_charge_skmem() has been used at all
these last four years on arches with PAGE_SIZE != 4096
(SK_MEM_QUANTUM is no longer PAGE_SIZE, but a fixed 4096).
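
To make the unit mismatch concrete, a toy userspace sketch (my own
numbers, assuming a 64 KB PAGE_SIZE arch): sk_mem_pages() counts
fixed 4096-byte SK_MEM_QUANTUM chunks, while
mem_cgroup_charge_skmem() treats its argument as a number of
PAGE_SIZE pages:

#include <stdio.h>

#define SK_MEM_QUANTUM	4096		/* fixed, no longer tied to PAGE_SIZE */
#define PAGE_SIZE	(64 * 1024)	/* e.g. some arm64/powerpc configs */

/* same rounding as the kernel's sk_mem_pages() helper */
static int sk_mem_pages(int amt)
{
	return (amt + SK_MEM_QUANTUM - 1) / SK_MEM_QUANTUM;
}

int main(void)
{
	int buffered = 128 * 1024;		/* 128 KB held by the socket */
	int amt = sk_mem_pages(buffered);	/* 32 quanta of 4 KB each */

	/* mem_cgroup_charge_skmem(memcg, amt) would then charge
	 * amt * PAGE_SIZE = 2 MB against the memcg for 128 KB of
	 * actual data, i.e. a 16x over-charge.
	 */
	printf("amt=%d -> %d KB charged\n", amt, amt * (PAGE_SIZE / 1024));
	return 0;
}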
> +			mem_cgroup_charge_skmem(newsk->sk_memcg, amt);
> +	}
>  out:
>  	release_sock(sk);
>  	if (req)
> --
> 2.25.0.265.gbab2e86ba0-goog
>