Message-ID: <CH3PR11MB7345035086C1661BF5352E6EFC789@CH3PR11MB7345.namprd11.prod.outlook.com>
Date: Mon, 15 May 2023 03:46:39 +0000
From: "Zhang, Cathy" <cathy.zhang@...el.com>
To: Shakeel Butt <shakeelb@...gle.com>
CC: Eric Dumazet <edumazet@...gle.com>, Linux MM <linux-mm@...ck.org>, Cgroups
<cgroups@...r.kernel.org>, Paolo Abeni <pabeni@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>, "kuba@...nel.org"
<kuba@...nel.org>, "Brandeburg@...gle.com" <Brandeburg@...gle.com>,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>, "Srinivas, Suresh"
<suresh.srinivas@...el.com>, "Chen, Tim C" <tim.c.chen@...el.com>, "You,
Lizhen" <lizhen.you@...el.com>, "eric.dumazet@...il.com"
<eric.dumazet@...il.com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
size
> -----Original Message-----
> From: Shakeel Butt <shakeelb@...gle.com>
> Sent: Saturday, May 13, 2023 1:17 AM
> To: Zhang, Cathy <cathy.zhang@...el.com>
> Cc: Shakeel Butt <shakeelb@...gle.com>; Eric Dumazet
> <edumazet@...gle.com>; Linux MM <linux-mm@...ck.org>; Cgroups
> <cgroups@...r.kernel.org>; Paolo Abeni <pabeni@...hat.com>;
> davem@...emloft.net; kuba@...nel.org; Brandeburg@...gle.com;
> Brandeburg, Jesse <jesse.brandeburg@...el.com>; Srinivas, Suresh
> <suresh.srinivas@...el.com>; Chen, Tim C <tim.c.chen@...el.com>; You,
> Lizhen <lizhen.you@...el.com>; eric.dumazet@...il.com;
> netdev@...r.kernel.org
> Subject: Re: [PATCH net-next 1/2] net: Keep sk->sk_forward_alloc as a proper
> size
>
> On Fri, May 12, 2023 at 05:51:40AM +0000, Zhang, Cathy wrote:
> >
> >
> [...]
> > >
> > > Thanks a lot. This tells us that one or both of following scenarios
> > > are
> > > happening:
> > >
> > > 1. In the softirq recv path, the kernel is processing packets from
> > > multiple memcgs.
> > >
> > > 2. The process running on the CPU belongs to memcg which is
> > > different from the memcgs whose packets are being received on that CPU.
> >
> > Thanks for sharing the points, Shakeel! Are there any trace records you
> > would like us to collect?
> >
>
> Can you please try the following patch and see if there is any improvement?
Hi Shakeel,

I tried the patch below. 'perf top' data collected system-wide shows that the
overhead of page_counter_cancel drops from 15.52% to 4.82%.

Without the patch:
    15.52%  [kernel]  [k] page_counter_cancel
    12.30%  [kernel]  [k] page_counter_try_charge
    11.97%  [kernel]  [k] try_charge_memcg

With the patch:
    10.63%  [kernel]  [k] page_counter_try_charge
     9.49%  [kernel]  [k] try_charge_memcg
     4.82%  [kernel]  [k] page_counter_cancel

The patch was applied on top of the latest net-next/main:
befcc1fce564 ("sfc: fix use-after-free in efx_tc_flower_record_encap_match()")
>
>
> From 48eb23c8cbb5d6c6086299c8a5ae4b3485c79a8c Mon Sep 17 00:00:00
> 2001
> From: Shakeel Butt <shakeelb@...gle.com>
> Date: Fri, 12 May 2023 17:04:35 +0000
> Subject: [PATCH] No batch charge in irq context
>
> ---
> mm/memcontrol.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c index
> d31fb1e2cb33..f1453a140fc8 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2652,7 +2652,8 @@ void mem_cgroup_handle_over_high(void) static
> int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
> unsigned int nr_pages)
> {
> - unsigned int batch = max(MEMCG_CHARGE_BATCH, nr_pages);
> + unsigned int batch = in_task() ?
> + max(MEMCG_CHARGE_BATCH, nr_pages) : nr_pages;
> int nr_retries = MAX_RECLAIM_RETRIES;
> struct mem_cgroup *mem_over_limit;
> struct page_counter *counter;
> --
> 2.40.1.606.ga4b1b128d6-goog