Message-ID: <20210623085810.7e281e8d@kicinski-fedora-PC1C0HJN.hsd1.ca.comcast.net>
Date: Wed, 23 Jun 2021 08:58:10 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org, willemb@...gle.com,
dsahern@...il.com, yoshfuji@...ux-ipv6.org, Dave Jones <dsj@...com>
Subject: Re: [PATCH net-next v2 2/2] net: ip: avoid OOM kills with large UDP
sends over loopback
On Wed, 23 Jun 2021 16:25:18 +0200 Eric Dumazet wrote:
> On 6/23/21 12:50 AM, Jakub Kicinski wrote:
> > Dave observed a number of machines hitting OOM on the UDP send
> > path. The workload seems to be sending large UDP packets over
> > loopback. Since loopback has an MTU of 64k, the kernel will try
> > to allocate an skb with up to 64k of head space. This has a good
> > chance of failing under memory pressure. What's worse, if
> > the message length is <32k the allocation may trigger the
> > OOM killer.
> >
> > This is entirely avoidable, we can use an skb with frags.
>
> Are you referring to IP fragments, or page frags ?
page frags, an annoyingly overloaded term. I'll say paged; it's
not the common term, but at least it won't be confusing.
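
To illustrate what "paged" means here, a sketch of mine (not the patch,
and not verbatim af_unix code), assuming the usual sendmsg context
(sk, len, msg, err): bound the linear head and let the rest of the
payload land in page frags, which is what makes the skb "paged", along
the lines the commit message describes for af_unix:

	/* Sketch only: head stays bounded, data_len is the paged part. */
	struct sk_buff *skb;
	int data_len;

	data_len = max_t(int, 0, len - SKB_MAX_ALLOC);
	skb = sock_alloc_send_pskb(sk, len - data_len, data_len,
				   msg->msg_flags & MSG_DONTWAIT, &err,
				   PAGE_ALLOC_COSTLY_ORDER);

sock_alloc_send_pskb() then only kmallocs the head; the data_len part
is backed by separate page allocations (falling back to order-0), so
there is no single huge allocation to fail or OOM on.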
> > af_unix solves a similar problem by limiting the head
> > length to SKB_MAX_ALLOC. This seems like a good and simple
> > approach. It means that UDP messages > 16kB will now
> > use fragments if the underlying device supports SG; if extra
> > allocator pressure causes regressions in real workloads
> > we can switch to trying the large allocation first and
> > falling back.
> >
> > Reported-by: Dave Jones <dsj@...com>
> > Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> > ---
> > net/ipv4/ip_output.c | 2 +-
> > net/ipv6/ip6_output.c | 2 +-
> > 2 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
> > index 90031f5446bd..1ab140c173d0 100644
> > --- a/net/ipv4/ip_output.c
> > +++ b/net/ipv4/ip_output.c
> > @@ -1077,7 +1077,7 @@ static int __ip_append_data(struct sock *sk,
> >
> > if ((flags & MSG_MORE) && !has_sg)
> > alloclen = mtu;
> > - else if (!paged)
> > + else if (!paged && (fraglen < SKB_MAX_ALLOC || !has_sg))
>
> This indeed looks better, but there are some boundary conditions,
> caused by the fact that we add hh_len+15 later when allocating the skb.
>
> (I expect hh_len+15 to be 31)
>
>
> You probably need
> else if (!paged && (fraglen + hh_len + 15 < SKB_MAX_ALLOC || !has_sg))
>
> Otherwise we might still attempt order-3 allocations?
>
> SKB_MAX_ALLOC is 16064 currently (skb_shinfo size being 320 on 64bit arches)
>
> A UDP message with 16034 bytes of payload would translate to
> alloclen==16062 (payload plus 28 bytes of UDP+IP headers). Adding 31 bytes
> for hh_len+15 and 320 bytes for skb_shared_info brings this to 16413,
> thus asking for 32768 bytes (an order-3 page).
>
> (Without the hh_len+15 room, 16062+320 = 16382, which is smaller than 16384.)
Will do, thanks!
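
For completeness, a tiny standalone program spelling out the sizes
above (my own illustration; the 31 assumes hh_len+15 == 31 as in
Eric's example, and 320 is sizeof(struct skb_shared_info) on 64-bit):

#include <stdio.h>

int main(void)
{
	unsigned int payload  = 16034;         /* UDP payload bytes       */
	unsigned int alloclen = payload + 28;  /* + UDP (8) + IPv4 (20)   */
	unsigned int hh_room  = 31;            /* hh_len + 15             */
	unsigned int shinfo   = 320;           /* skb_shared_info, 64-bit */
	unsigned int total    = alloclen + hh_room + shinfo;

	/* 16062 + 31 + 320 = 16413 > 16384, so kmalloc rounds the
	 * request up to 32768, i.e. an order-3 allocation with 4k
	 * pages; without the hh room, 16062 + 320 = 16382 still fits.
	 */
	printf("alloclen=%u total=%u\n", alloclen, total);
	return 0;
}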
> > alloclen = fraglen;
> > else {
> > alloclen = min_t(int, fraglen, MAX_HEADER);
> > diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
> > index c667b7e2856f..46d805097a79 100644
> > --- a/net/ipv6/ip6_output.c
> > +++ b/net/ipv6/ip6_output.c
> > @@ -1585,7 +1585,7 @@ static int __ip6_append_data(struct sock *sk,
> >
> > if ((flags & MSG_MORE) && !has_sg)
> > alloclen = mtu;
> > - else if (!paged)
> > + else if (!paged && (fraglen < SKB_MAX_ALLOC || !has_sg))
> > alloclen = fraglen;
> > else {
> > alloclen = min_t(int, fraglen, MAX_HEADER);
> >
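
For anyone who wants to poke at this path from userspace, a minimal
sender (my own illustration, not from the thread) that pushes a
near-64k datagram over loopback, which is the workload Dave reported:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	static char buf[60000];	/* large payload, close to the 64k loopback MTU */
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port   = htons(9999),	/* arbitrary port, nothing needs to listen */
	};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
	memset(buf, 0xab, sizeof(buf));

	/* A single sendto() of ~60000 bytes makes __ip_append_data()
	 * see a near-MTU message on lo and exercises the alloclen
	 * decision touched by the patch.
	 */
	if (sendto(fd, buf, sizeof(buf), 0,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0)
		perror("sendto");

	close(fd);
	return 0;
}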