Message-ID: <20170830125843.250c91c1@redhat.com>
Date: Wed, 30 Aug 2017 12:58:43 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Florian Westphal <fw@...len.de>
Cc: "liujian (CE)" <liujian56@...wei.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"Wangkefeng (Kevin)" <wangkefeng.wang@...wei.com>,
"weiyongjun (A)" <weiyongjun1@...wei.com>, brouer@...hat.com
Subject: Re: Question about ip_defrag
(trimmed CC list a bit)
On Tue, 29 Aug 2017 09:53:15 +0200 Florian Westphal <fw@...len.de> wrote:
> Jesper Dangaard Brouer <brouer@...hat.com> wrote:
> > On Mon, 28 Aug 2017 16:00:32 +0200
> > Florian Westphal <fw@...len.de> wrote:
> >
> > > liujian (CE) <liujian56@...wei.com> wrote:
> > > > Hi
> > > >
> > > > I checked our 3.10 kernel; we had backported all the percpu_counter
> > > > bug fixes in lib/percpu_counter.c and include/linux/percpu_counter.h.
> > > > And I checked 4.13-rc6; it also has the issue if the NIC's RX CPU
> > > > count is big enough.
> > > >
> > > > > > > > the issue:
> > > > > > > > ip_defrag fails because frag_mem_limit reached 4M (frags.high_thresh).
> > > > > > > > At this moment, sum_frag_mem_limit is about 10K.
> > > >
> > > > So should we change the ipfrag high/low thresh to a reasonable value?
> > > > And if so, is there a standard for choosing the value?
> > >
> > > Each cpu can have frag_percpu_counter_batch bytes that the rest doesn't
> > > know about, so with 64 cpus that is ~8 MByte.
> > >
> > > possible solutions:
> > > 1. reduce frag_percpu_counter_batch to 16k or so
> > > 2. make both low and high thresh depend on NR_CPUS
>
> I take 2) back. It's wrong to do this; for large NR_CPUS values it
> would even overflow.
Alternatively, solution 3:
Why do we want to maintain a (4 MBytes) memory limit across all CPUs?
Couldn't we just allow each CPU to have its own memory limit?
> > To me it looks like we/I have been using the wrong API for comparing
> > against percpu_counters. I guess we should have used __percpu_counter_compare().
>
> Are you sure? For liujian's use case (64 cores) it looks like we would
> always fall through to percpu_counter_sum(), so we eat the
> spinlock_irqsave cost for all compares.
>
> Before we entertain this we should consider reducing frag_percpu_counter_batch
> to a smaller value.
Yes, I agree, we really need to lower/reduce frag_percpu_counter_batch.
As you say, otherwise the __percpu_counter_compare() call will be useless
(on systems with >= 32 CPUs).
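A minimal sketch of what that reduction could look like (16384 is just
the number discussed below, not a benchmarked choice; IIRC today's
~130K constant was sized so a full 64K fragment queue fits in a single
per-CPU batch):

    /* sketch only: bound how many bytes each CPU can accumulate
     * before folding them into the global counter
     */
    static unsigned int frag_percpu_counter_batch = 16384;
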
I think the bug is in frag_mem_limit(). It just reads the global
counter (fbc->count), without considering that other CPUs can each hold
up to 130K that hasn't been subtracted yet (due to the 3M low limit,
this becomes dangerous at >= 24 CPUs). The __percpu_counter_compare()
does the right thing, taking the number of (online) CPUs and the batch
size into account.
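To illustrate the difference, here is a rough sketch (not a tested
patch) of how the limit check could go through __percpu_counter_compare()
instead of the raw read; frag_mem_over_limit() is just a made-up helper
name:

    static inline bool frag_mem_over_limit(struct netns_frags *nf)
    {
    	/* percpu_counter_read() only looks at fbc->count; the compare
    	 * helper falls back to an exact percpu_counter_sum() whenever
    	 * the global count is within batch * num_online_cpus() of the
    	 * threshold, so per-CPU residue cannot trigger false positives.
    	 */
    	return __percpu_counter_compare(&nf->mem, nf->high_thresh,
    					frag_percpu_counter_batch) > 0;
    }
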
If we choose 16K (16384) and use __percpu_counter_compare(), then we can
scale to systems with 256 CPUs (4*1024*1024/16384 = 256) before this
memory accounting becomes more expensive (than not using percpu_counters).
But Liujian reports he has a 384-CPU system, so he would still need to
increase the low+high thresholds (384 * 16384 = ~6.3 MBytes, which already
exceeds the 4 MBytes high threshold).
$ grep -H . /proc/sys/net/ipv*/ip*frag_*_thresh /proc/sys/net/netfilter/nf_conntrack_frag6_*_thresh
/proc/sys/net/ipv4/ipfrag_high_thresh:4194304
/proc/sys/net/ipv4/ipfrag_low_thresh:3145728
/proc/sys/net/ipv6/ip6frag_high_thresh:4194304
/proc/sys/net/ipv6/ip6frag_low_thresh:3145728
/proc/sys/net/netfilter/nf_conntrack_frag6_high_thresh:4194304
/proc/sys/net/netfilter/nf_conntrack_frag6_low_thresh:3145728
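(For a box like Liujian's the thresholds can be raised at runtime via
sysctl; the 8M/6M values below are only an example, not a recommendation:

 # sysctl -w net.ipv4.ipfrag_high_thresh=8388608
 # sysctl -w net.ipv4.ipfrag_low_thresh=6291456

and similarly for the ipv6 and nf_conntrack_frag6 knobs.)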
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer