Message-ID: <CALvZod5di3saFdDJ1cwFDgvLPmnEJ7XB9P8YBTJ3uzfBKAFi3Q@mail.gmail.com>
Date:   Fri, 21 Oct 2022 09:34:20 -0700
From:   Shakeel Butt <shakeelb@...gle.com>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
        davem@...emloft.net, pabeni@...hat.com, cgroups@...r.kernel.org,
        roman.gushchin@...ux.dev, weiwan@...gle.com, ncardwell@...gle.com,
        ycheng@...gle.com
Subject: Re: [PATCH net] net-memcg: avoid stalls when under memory pressure

On Fri, Oct 21, 2022 at 9:28 AM Eric Dumazet <edumazet@...gle.com> wrote:
>
> On Fri, Oct 21, 2022 at 9:26 AM Shakeel Butt <shakeelb@...gle.com> wrote:
> >
> > On Fri, Oct 21, 2022 at 9:03 AM Jakub Kicinski <kuba@...nel.org> wrote:
> > >
> > > As Shakeel explains, the commit under Fixes had the unintended
> > > side-effect of no longer pre-loading the cached memory allowance.
> > > Even though we previously dropped the first packet received when
> > > over the memory limit, the subsequent ones would get through by
> > > using the cache. The charging happened in batches of 128kB, so
> > > we'd let in 128kB (truesize) worth of packets per drop.
> > >
> > > After the change we no longer force charge, so there are no
> > > cache-filling side effects. This causes significant drops and
> > > connection stalls for workloads which use a lot of page cache,
> > > since we can't reclaim page cache under GFP_NOWAIT.
> > >
> > > Some of the latency can be recovered by improving SACK reneg
> > > handling but nowhere near enough to get back to the pre-5.15
> > > performance (the application I'm experimenting with still
> > > sees 5-10x worse latency).
> > >
> > > Apply the suggested workaround of using GFP_ATOMIC. We will now
> > > be more permissive than before, as we'll drop _no_ packets
> > > in softirq when under pressure. But I can't think of any good
> > > and simple way to address that within networking.
> > >
> > > Link: https://lore.kernel.org/all/20221012163300.795e7b86@kernel.org/
> > > Suggested-by: Shakeel Butt <shakeelb@...gle.com>
> > > Fixes: 4b1327be9fe5 ("net-memcg: pass in gfp_t mask to mem_cgroup_charge_skmem()")
> > > Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> > > ---
> > > CC: weiwan@...gle.com
> > > CC: shakeelb@...gle.com
> > > CC: ncardwell@...gle.com
> > > CC: ycheng@...gle.com
> > > ---
> > >  include/net/sock.h | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/include/net/sock.h b/include/net/sock.h
> > > index 9e464f6409a7..22f8bab583dd 100644
> > > --- a/include/net/sock.h
> > > +++ b/include/net/sock.h
> > > @@ -2585,7 +2585,7 @@ static inline gfp_t gfp_any(void)
> > >
> > >  static inline gfp_t gfp_memcg_charge(void)
> > >  {
> > > -       return in_softirq() ? GFP_NOWAIT : GFP_KERNEL;
> > > +       return in_softirq() ? GFP_ATOMIC : GFP_KERNEL;
> > >  }
> > >
> >
> > How about just using gfp_any() and we can remove gfp_memcg_charge()?
>
> How about keeping gfp_memcg_charge() and adding a comment on its intent?
>
> gfp_any() is very generic :/

SGTM.
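
For context, a minimal sketch of what Eric's suggestion (keeping gfp_memcg_charge() in include/net/sock.h and documenting its intent) might look like; the comment wording is illustrative only and not taken from an actual commit:

static inline gfp_t gfp_memcg_charge(void)
{
	/*
	 * memcg charging of socket memory: we cannot sleep in softirq,
	 * and with the fix above we prefer dipping into atomic reserves
	 * (GFP_ATOMIC) over failing outright (GFP_NOWAIT), so sockets
	 * under memcg pressure don't drop packets and stall.
	 */
	return in_softirq() ? GFP_ATOMIC : GFP_KERNEL;
}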
