Date:   Wed, 1 Apr 2020 14:24:19 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     Joel Fernandes <joel@...lfernandes.org>,
        linux-kernel@...r.kernel.org, rcu@...r.kernel.org
Subject: Re: What should we be doing to stress-test kfree_rcu()?

On Wed, Apr 01, 2020 at 11:16:07PM +0200, Uladzislau Rezki wrote:
> On Wed, Apr 01, 2020 at 04:50:12PM -0400, Joel Fernandes wrote:
> > On Wed, Apr 01, 2020 at 11:44:15AM -0700, Paul E. McKenney wrote:
> > > Hello!
> > > 
> > > What should we be doing to stress-test kfree_rcu(), including its ability
> > > to cope with OOM conditions?  Yes, rcuperf runs are nice, but they are not
> > > currently doing much more than testing base functionality, performance,
> > > and scalability.
> > 
> > I already stress kfree_rcu() with rcuperf to the point of near-OOM and make
> > sure it does not actually OOM. The way I do this is to set my VM to low memory
> > (like 512MB) and then flood kfree_rcu()s. After the shrinker changes, I don't
> > see OOMs with my current rcuperf settings.
> > 
> > Not saying that my testing is sufficient, just saying this is what I do. It
> > would be good to get a real workload that triggers a lot of kfree_rcu()
> > activity as well, especially on low-memory systems. Any ideas on that?
> > 
> > One idea could be to trigger memory pressure from unrelated allocations (such
> > as userspace memory hogs) and see how kfree_rcu() performs under that
> > pressure. For one, the shrinker should trigger in such situations, forcing the
> > queue to wait for a GP instead of batching too much.

This would be good!
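
Just to make the shape of such a flood concrete, here is a minimal sketch of a
stand-alone stress module: a kthread that queues bursts of kfree_rcu() calls and
briefly sleeps so the batching/shrinker machinery can react. The module, kthread,
and parameter names are made up for illustration; this is not the rcuperf code,
and a real test would want to tune the burst and object sizes.

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>
#include <linux/delay.h>
#include <linux/err.h>

struct flood_obj {
	struct rcu_head rh;
	unsigned long payload[8];
};

static int nobjs = 100000;	/* objects queued per burst (illustrative) */
module_param(nobjs, int, 0444);

static struct task_struct *flood_task;

static int kfree_rcu_flood_fn(void *arg)
{
	struct flood_obj *p;
	int i;

	while (!kthread_should_stop()) {
		/* Queue a burst of objects for deferred freeing. */
		for (i = 0; i < nobjs; i++) {
			p = kmalloc(sizeof(*p), GFP_KERNEL);
			if (!p)
				break;	/* allocation pressure reached */
			kfree_rcu(p, rh);
		}
		/* Give the batching/shrinker machinery a chance to drain. */
		msleep(10);
	}
	return 0;
}

static int __init kfree_rcu_flood_init(void)
{
	flood_task = kthread_run(kfree_rcu_flood_fn, NULL, "kfree_rcu_flood");
	return PTR_ERR_OR_ZERO(flood_task);
}

static void __exit kfree_rcu_flood_exit(void)
{
	kthread_stop(flood_task);
	/* Best effort: wait for outstanding RCU callbacks before unload. */
	rcu_barrier();
}

module_init(kfree_rcu_flood_init);
module_exit(kfree_rcu_flood_exit);
MODULE_LICENSE("GPL");

Running something like this in a small-memory guest, alongside an unrelated
userspace memory hog that mmap()s and touches a large region, should exercise
both the OOM-avoidance behavior and the shrinker-driven grace-period path
described above.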

> > We are also missing vmalloc() tests. I remember Vlad had some clever vmalloc
> > tests around for his great vmalloc rewrites :). Vlad, any thoughts on how to
> > stress kvfree_rcu()?
> > 
> Actually I updated (locally, for my tests) the lib/test_vmalloc.c module with
> extra test cases to stress the kvfree_rcu() path. I think I should add them :)

As would this!  ;-)
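
For reference, a test case in the spirit of the existing lib/test_vmalloc.c
tests might look roughly like the following. This is only a sketch of the kind
of loop meant, not Vlad's actual patch; it reuses the module's test_loop_count
parameter and assumes a kvfree_rcu(ptr, rhf) interface along the lines being
discussed.

/* Hypothetical test case for lib/test_vmalloc.c; needs <linux/rcupdate.h>. */
static int kvfree_rcu_mixed_test(void)
{
	struct test_obj {
		struct rcu_head rh;
		unsigned long payload[16];
	} *p;
	int i;

	for (i = 0; i < test_loop_count; i++) {
		/* Alternate slab and vmalloc backing so both paths are hit. */
		if (i & 1)
			p = vmalloc(sizeof(*p));
		else
			p = kmalloc(sizeof(*p), GFP_KERNEL);
		if (!p)
			return -1;

		kvfree_rcu(p, rh);
	}

	return 0;
}

Hooking it into the module's test case table would then let it be driven like
the other tests.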

							Thanx, Paul
