Date:   Tue, 22 Nov 2022 10:23:48 -0800
From:   Eric Dumazet <edumazet@...gle.com>
To:     Ivan Babrou <ivan@...udflare.com>
Cc:     Linux MM <linux-mm@...ck.org>,
        Linux Kernel Network Developers <netdev@...r.kernel.org>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>,
        Shakeel Butt <shakeelb@...gle.com>,
        Muchun Song <songmuchun@...edance.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "David S. Miller" <davem@...emloft.net>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>, cgroups@...r.kernel.org,
        kernel-team <kernel-team@...udflare.com>
Subject: Re: Low TCP throughput due to vmpressure with swap enabled

On Tue, Nov 22, 2022 at 10:11 AM Ivan Babrou <ivan@...udflare.com> wrote:
>
> On Tue, Nov 22, 2022 at 10:01 AM Eric Dumazet <edumazet@...gle.com> wrote:
> >
> > On Mon, Nov 21, 2022 at 4:53 PM Ivan Babrou <ivan@...udflare.com> wrote:
> > >
> > > Hello,
> > >
> > > We have observed a negative TCP throughput behavior from the following commit:
> > >
> > > * 8e8ae645249b mm: memcontrol: hook up vmpressure to socket pressure
> > >
> > > It landed back in 2016 in v4.5, so it's not exactly a new issue.
> > >
> > > The crux of the issue is that, in some cases with swap present, the
> > > workload can be unfairly throttled in terms of TCP throughput.
> >
> > I guess defining 'fairness' in such a scenario is nearly impossible.
> >
> > Have you tried changing /proc/sys/net/ipv4/tcp_rmem (and/or tcp_wmem)?
> > Defaults are quite conservative.
>
> Yes, our max sizes are much higher than the defaults. I don't see how
> it matters though. The issue is that the kernel clamps rcv_ssthresh
> at 4 x advmss.

There are some places (e.g. tcp_clamp_window) where we have this
additional condition:

sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0)

So I was suggesting that we could add a similar condition to tcp_try_rmem_schedule().

Then adjust tcp_rmem for your needs.
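
For reference, the surrounding check in tcp_clamp_window()
(net/ipv4/tcp_input.c) looks roughly like this; paraphrased here, and
the exact sysctl accessors vary a bit between kernel versions:

	/* Paraphrased from tcp_clamp_window(): sk_rcvbuf may be raised
	 * to the current rmem allocation (capped at tcp_rmem[2]) only
	 * while the socket is not under memory pressure and total TCP
	 * memory is still below the low limit, i.e. tcp_mem[0].
	 */
	if (sk->sk_rcvbuf < net->ipv4.sysctl_tcp_rmem[2] &&
	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK) &&
	    !tcp_under_memory_pressure(sk) &&
	    sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0)) {
		WRITE_ONCE(sk->sk_rcvbuf,
			   min(atomic_read(&sk->sk_rmem_alloc),
			       net->ipv4.sysctl_tcp_rmem[2]));
	}

The idea (an untested sketch, not a patch) would be to consult the same
sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0) condition in
tcp_try_rmem_schedule() before falling into its prune/collapse path, so
that a memcg flagged by vmpressure does not force pruning while global
TCP memory is still below tcp_mem[0], and then raise the maximum of
net.ipv4.tcp_rmem to suit the workload.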

> No matter how much TCP memory you end up using, the
> kernel will clamp based on responsiveness to memory reclaim, which in
> turn depends on swap presence. We're seeing it in production with tens
> of thousands of sockets and high max tcp_rmem and I'm able to
> replicate the same issue in my vm with the default sysctl values.
>
> > If for your workload you want to ensure a minimum amount of memory per
> > TCP socket,
> > that might be good enough.
>
> That's not my goal at all. We don't have a problem with TCP memory
> consumption. Our issue is low throughput because vmpressure() thinks
> that the cgroup is memory constrained when it most definitely is not.

OK, then I will stop commenting I guess :)
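
For anyone following the thread, the coupling being discussed is,
roughly, that tcp_under_memory_pressure() also reports pressure when
the socket's memcg has been flagged by vmpressure (the hookup added in
8e8ae645249b), and the receive path then caps rcv_ssthresh at a few
MSS. A paraphrased sketch, not verbatim from any single kernel version:

	/* include/net/tcp.h (paraphrased): a socket is treated as under
	 * memory pressure either when global TCP memory pressure is set
	 * or when its memory cgroup has been flagged via vmpressure.
	 */
	static inline bool tcp_under_memory_pressure(const struct sock *sk)
	{
		if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
		    mem_cgroup_under_socket_pressure(sk->sk_memcg))
			return true;

		return READ_ONCE(tcp_memory_pressure);
	}

	/* net/ipv4/tcp_input.c (paraphrased): under that signal, receive
	 * window growth is capped, e.g.
	 *
	 *	if (tcp_under_memory_pressure(sk))
	 *		tp->rcv_ssthresh = min(tp->rcv_ssthresh,
	 *				       4U * tp->advmss);
	 *
	 * which is the 4 x advmss clamp mentioned above.
	 */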
