Date:   Tue, 5 May 2020 19:49:43 +0200
From:   SeongJae Park <sjpark@...zon.com>
To:     "Paul E. McKenney" <paulmck@...nel.org>
CC:     Eric Dumazet <eric.dumazet@...il.com>,
        SeongJae Park <sjpark@...zon.com>,
        Eric Dumazet <edumazet@...gle.com>,
        David Miller <davem@...emloft.net>,
        "Al Viro" <viro@...iv.linux.org.uk>,
        Jakub Kicinski <kuba@...nel.org>,
        "Greg Kroah-Hartman" <gregkh@...uxfoundation.org>,
        <sj38.park@...il.com>, netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        SeongJae Park <sjpark@...zon.de>, <snu@...zon.com>,
        <amit@...nel.org>, <stable@...r.kernel.org>
Subject: Re: [PATCH net v2 0/2] Revert the 'socket_alloc' life cycle change

On Tue, 5 May 2020 10:23:58 -0700 "Paul E. McKenney" <paulmck@...nel.org> wrote:

> On Tue, May 05, 2020 at 09:25:06AM -0700, Eric Dumazet wrote:
> > 
> > 
> > On 5/5/20 9:13 AM, SeongJae Park wrote:
> > > On Tue, 5 May 2020 09:00:44 -0700 Eric Dumazet <edumazet@...gle.com> wrote:
> > > 
> > >> On Tue, May 5, 2020 at 8:47 AM SeongJae Park <sjpark@...zon.com> wrote:
> > >>>
> > >>> On Tue, 5 May 2020 08:20:50 -0700 Eric Dumazet <eric.dumazet@...il.com> wrote:
> > >>>
> > >>>>
> > >>>>
> > >>>> On 5/5/20 8:07 AM, SeongJae Park wrote:
> > >>>>> On Tue, 5 May 2020 07:53:39 -0700 Eric Dumazet <edumazet@...gle.com> wrote:
> > >>>>>
> > >>>>
> > >>>>>> Why do we have 10,000,000 objects around ? Could this be because of
> > >>>>>> some RCU problem ?
> > >>>>>
> > >>>>> Mainly because of a long RCU grace period, as you guessed.  I have no idea
> > >>>>> how the grace period became so long in this case.
> > >>>>>
> > >>>>> As my test machine was a virtual machine instance, I suspect a problem like
> > >>>>> RCU reader preemption[1] might have affected this.
> > >>>>>
> > >>>>> [1] https://www.usenix.org/system/files/conference/atc17/atc17-prasad.pdf
> 
> If this is the root cause of the problem, then it will be necessary to
> provide a hint to the hypervisor.  Or, in the near term, avoid loading
> the hypervisor to the point that vCPU preemption is so lengthy.
> 
> RCU could also provide some sort of pre-stall-warning notification that
> some of the CPUs aren't passing through quiescent states, which might
> allow the guest OS's userspace to take corrective action.
> 
> But first, what are you doing to either confirm or invalidate the
> hypothesis that this might be due to vCPU preemption?

Nothing, I was just guessing.  Sorry for the confusion.
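
That said, a cheap way to confirm or rule it out might be to watch the guest's
steal time while the workload runs; if vCPU preemption is really that lengthy,
the steal column of /proc/stat should grow noticeably.  A minimal userspace
sketch (field layout assumed from proc(5), where steal is the eighth value on
the aggregate "cpu" line):

	#include <stdio.h>
	#include <unistd.h>

	/* Read the aggregate steal-time counter from /proc/stat. */
	static unsigned long long read_steal(void)
	{
		unsigned long long v[8] = { 0 };
		FILE *f = fopen("/proc/stat", "r");

		if (!f)
			return 0;
		if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
			   &v[0], &v[1], &v[2], &v[3], &v[4], &v[5],
			   &v[6], &v[7]) != 8)
			v[7] = 0;
		fclose(f);
		return v[7];	/* eighth field: steal */
	}

	int main(void)
	{
		unsigned long long before = read_steal();

		sleep(10);
		printf("steal ticks over 10s: %llu\n",
		       read_steal() - before);
		return 0;
	}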

> 
> > >>>>>> Once Al patches reverted, do you have 10,000,000 sock_alloc around ?
> > >>>>>
> > >>>>> Yes, neither the old kernel from before Al's patches nor the recent kernel
> > >>>>> reverting Al's patches reproduced the problem.
> > >>>>>
> > >>>>
> > >>>> I repeat my question: do you have 10,000,000 (smaller) objects kept in slab caches?
> > >>>>
> > >>>> TCP sockets use the (very complex, error-prone) SLAB_TYPESAFE_BY_RCU, but not the struct socket_wq
> > >>>> object that was allocated in sock_alloc_inode() before Al's patches.
> > >>>>
> > >>>> These objects should be visible in kmalloc-64 kmem cache.
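
To restate the contrast for context, roughly (an illustrative sketch only;
names are hypothetical and the real net/socket.c code is simplified):

	#include <linux/module.h>
	#include <linux/slab.h>
	#include <linux/rcupdate.h>

	struct wq_like {			/* stand-in for struct socket_wq */
		unsigned long flags;
		struct rcu_head rcu;
	};

	static struct kmem_cache *typesafe_cache;

	static int __init lifetime_demo_init(void)
	{
		struct wq_like *wq;

		/*
		 * TCP-socket style: SLAB_TYPESAFE_BY_RCU keeps the slab's
		 * memory type-stable, so freed objects may be recycled
		 * immediately and RCU readers must revalidate an object
		 * after looking it up.
		 */
		typesafe_cache = kmem_cache_create("wq_like_typesafe",
						   sizeof(struct wq_like), 0,
						   SLAB_TYPESAFE_BY_RCU, NULL);
		if (!typesafe_cache)
			return -ENOMEM;

		/*
		 * socket_wq style before Al's patches: a plain kmalloc()ed
		 * object handed to kfree_rcu() stays allocated until the
		 * grace period ends -- with a stalled grace period, these
		 * are what pile up in kmalloc-64.
		 */
		wq = kmalloc(sizeof(*wq), GFP_KERNEL);
		if (wq)
			kfree_rcu(wq, rcu);
		return 0;
	}
	module_init(lifetime_demo_init);
	MODULE_LICENSE("GPL");
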
> > >>>
> > >>> Not exactly 10,000,000, as that is only the highest possible number, but I
> > >>> was able to observe a clear exponential increase in the number of objects
> > >>> using slabtop.  Before the problematic workload started, the number of
> > >>> 'kmalloc-64' objects was 5,760, and I watched it grow to 1,136,576.
> > >>>
> > >>>           OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> > >>> before:   5760   5088  88%    0.06K     90       64       360K kmalloc-64
> > >>> after:  1136576 1136576 100%    0.06K  17759       64     71036K kmalloc-64
> > >>>
> > >>
> > >> Great, thanks.
> > >>
> > >> How recent is the kernel you are running for your experiment ?
> > > 
> > > It's based on 5.4.35.
> 
> Is it possible to retest on v5.6?  I have been adding various mechanisms
> to make RCU keep up better with heavy callback overload.

I will try soon!

> 
> Also, could you please provide the .config?  If either NO_HZ_FULL or
> RCU_NOCB_CPU, please also provide the kernel boot parameters.

NO_HZ_FULL is not set, but RCU_NOCB_CPU is y.

I think I should first check whether it's OK to share the full config and boot
parameters.  Please give me some time for this.

> 
> > >> Let's make sure the bug is not in RCU.
> > > 
> > > One thing I can say for now is that the grace period does eventually end.  I
> > > modified the benchmark to repeat only 5,000 times instead of 10,000, so the
> > > test runs without OOM but still under easily observable memory pressure.  As
> > > soon as the benchmark finished, the memory was freed.
> > > 
> > > If you need more tests, please let me know.
> > 
> > I would ask Paul's opinion on this issue, because we have many objects
> > being freed after RCU grace periods.
> 
> As always, "It depends."
> 
> o	If the problem is a too-long RCU reader, RCU is prohibited from
> 	ending the grace period.  The reader duration must be shortened,
> 	and until it is shortened, there is nothing RCU can do.
> 
> o	In some special cases of the above, RCU can and does help, for
> 	example, by enlisting the aid of cond_resched().  So perhaps
> 	there is a long in-kernel loop that needs a cond_resched().
> 
> 	And perhaps RCU can help for some types of vCPU preemption.
> 
> o	As Al suggested offline and as has been discussed in the past,
> 	it would not be hard to cause RCU to burn CPU to attain faster
> 	grace periods during OOM events.  This could be helpful, but only
> 	given that RCU readers are completing in reasonable timeframes.

Totally agreed.
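
For the cond_resched() case you mention above, I suppose the pattern would look
something like the below (the loop and helpers are hypothetical):

	#include <linux/sched.h>

	struct item;				/* hypothetical work item */
	void process_one(struct item *it);	/* hypothetical per-item work */

	static void process_many(struct item **items, unsigned long n)
	{
		unsigned long i;

		for (i = 0; i < n; i++) {
			process_one(items[i]);
			/*
			 * Let the scheduler (and RCU) in periodically;
			 * without this, a long in-kernel loop can keep the
			 * CPU from passing through a quiescent state.
			 */
			cond_resched();
		}
	}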

> 
> > If the RCU subsystem cannot keep up, I guess other workloads will also suffer.
> 
> If readers are not excessively long, RCU should be able to keep up.
> (In the absence of misconfigurations, for example, both NO_HZ_FULL and
> then binding all the rcuo kthreads to a single CPU on a 100-CPU system
> or some such.)
> 
> > Sure, we can revert patches here and there to try to work around the issue,
> > but for objects allocated from process context, we should not have these problems.
> 
> Agreed, let's get more info on what is happening to RCU.
> 
> One approach is to shorten the RCU CPU stall warning timeout
> (rcupdate.rcu_cpu_stall_timeout=10 for 10 seconds).

I will also try this and let you know the results.
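
If I remember correctly, the parameter is also runtime-writable via
/sys/module/rcupdate/parameters/rcu_cpu_stall_timeout, so something like the
below should avoid rebooting between runs (a minimal sketch, assuming root and
a writable module parameter):

	#include <stdio.h>

	int main(void)
	{
		/* Shorten the RCU stall warning timeout to 10 seconds. */
		FILE *f = fopen("/sys/module/rcupdate/parameters/"
				"rcu_cpu_stall_timeout", "w");

		if (!f) {
			perror("rcu_cpu_stall_timeout");
			return 1;
		}
		fprintf(f, "10\n");
		fclose(f);
		return 0;
	}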


Thanks,
SeongJae Park

> 
> 							Thanx, Paul
