Date:   Tue, 20 Aug 2019 17:44:36 -0700
From:   "Paul E. McKenney" <paulmck@...ux.ibm.com>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     linux-kernel@...r.kernel.org, byungchul.park@....com,
        Davidlohr Bueso <dave@...olabs.net>,
        Josh Triplett <josh@...htriplett.org>, kernel-team@...roid.com,
        kernel-team@....com, Lai Jiangshan <jiangshanlai@...il.com>,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        max.byungchul.park@...il.com, Rao Shoaib <rao.shoaib@...cle.com>,
        rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v4 2/2] rcuperf: Add kfree_rcu() performance Tests

On Tue, Aug 20, 2019 at 08:31:32PM -0400, Joel Fernandes wrote:
> On Tue, Aug 20, 2019 at 08:27:05PM -0400, Joel Fernandes wrote:
> [snip]
> > > > > Or is the idea to time the kfree_rcu() loop separately?  (I don't see
> > > > > any such separate timing, though.)
> > > > 
> > > > The kmalloc() times are included within the kfree loop. The timing of
> > > > kfree_rcu() is not separate in my patch.
> > > 
> > > You lost me on this one.  What happens when you just interleave the
> > > kmalloc() and kfree_rcu(), without looping, compared to the looping
> > > above?  Does this get more expensive?  Cheaper?  More vulnerable to OOM?
> > > Something else?
> > 
> > You mean pairing a single kmalloc() with a single kfree_rcu() and doing
> > this several times? The results are very similar to doing kfree_alloc_num
> > kmalloc()s, then doing kfree_alloc_num kfree_rcu()s, and repeating the
> > whole thing kfree_loops times (as done by this rcuperf patch we are
> > reviewing).
> > 
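(For illustration, the two loop shapes under discussion look roughly like
this; kfree_obj, kfree_alloc_num, and the rh field are taken from the patch
below, and error handling is omitted in this sketch:)

	/* "Not fully interleaved": one pass of kmalloc()s, then one pass of frees. */
	for (i = 0; i < kfree_alloc_num; i++)
		alloc_ptrs[i] = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
	for (i = 0; i < kfree_alloc_num; i++)
		kfree_rcu(alloc_ptrs[i], rh);

	/* "Fully interleaved": each kmalloc() immediately paired with its kfree_rcu(). */
	for (i = 0; i < kfree_alloc_num; i++) {
		alloc_ptr = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
		kfree_rcu(alloc_ptr, rh);
	}

	/* Either shape is repeated kfree_loops times by the outer loop. */
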
> > Following are some numbers. One difference: the non-batching case does
> > complete even faster when we fully interleave kmalloc() with kfree_rcu(),
> > while the batching case completes in the same time under full
> > interleaving as it did in the "not fully interleaved" scenario. However,
> > the grace-period reduction improvements and the chances of OOM'ing are
> > pretty much the same in either case.
> [snip]
> > Not fully interleaved: do kfree_alloc_num kmalloc()s, then do
> > kfree_alloc_num kfree_rcu()s, and repeat this kfree_loops times.
> > =======================
> > (1) Batching
> > rcuperf.kfree_loops=20000 rcuperf.kfree_alloc_num=8000 rcuperf.kfree_no_batch=0 rcuperf.kfree_rcu_test=1
> > 
> > root@(none):/# free -m
> >               total        used        free      shared  buff/cache   available
> > Mem:            977         251         686           0          39         684
> > Swap:             0           0           0
> > 
> > [   15.574402] Total time taken by all kfree'ers: 14185970787 ns, loops: 20000, batches: 1548
> > 
> > (2) No Batching
> > rcuperf.kfree_loops=20000 rcuperf.kfree_alloc_num=8000 rcuperf.kfree_no_batch=1 rcuperf.kfree_rcu_test=1
> > 
> > root@(none):/# free -m
> >               total        used        free      shared  buff/cache   available
> > Mem:            977          82         855           0          39         853
> > Swap:             0           0           0
> > 
> > [   13.724554] Total time taken by all kfree'ers: 12246217291 ns, loops: 20000, batches: 7262
> 
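(A back-of-the-envelope reading of the numbers above: batching takes
14185970787 ns / 20000 loops, roughly 709 us per loop, versus
12246217291 ns / 20000, roughly 612 us per loop, without batching, so the
unbatched run finishes about 14% faster here. But batching needs only
1548 grace-period batches against 7262, a ~4.7x reduction, at the cost of
more memory held pending a grace period: 251 MB versus 82 MB "used" in
the free -m output.)
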
> And the diff for changing the test to do this case is as follows (I don't
> plan to fold this diff in, since I feel the existing test suffices and
> results are similar):

But why not?  It does look to be a nice simplification, after all.

							Thanx, Paul

> diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
> index 46f9c4449348..e4e4be4aaf51 100644
> --- a/kernel/rcu/rcuperf.c
> +++ b/kernel/rcu/rcuperf.c
> @@ -618,18 +618,13 @@ kfree_perf_thread(void *arg)
>  {
>  	int i, loop = 0;
>  	long me = (long)arg;
> -	struct kfree_obj **alloc_ptrs;
> +	struct kfree_obj *alloc_ptr;
>  	u64 start_time, end_time;
>  
>  	VERBOSE_PERFOUT_STRING("kfree_perf_thread task started");
>  	set_cpus_allowed_ptr(current, cpumask_of(me % nr_cpu_ids));
>  	set_user_nice(current, MAX_NICE);
>  
> -	alloc_ptrs = (struct kfree_obj **)kmalloc(sizeof(struct kfree_obj *) * kfree_alloc_num,
> -						  GFP_KERNEL);
> -	if (!alloc_ptrs)
> -		return -ENOMEM;
> -
>  	start_time = ktime_get_mono_fast_ns();
>  
>  	if (atomic_inc_return(&n_kfree_perf_thread_started) >= kfree_nrealthreads) {
> @@ -646,19 +641,17 @@ kfree_perf_thread(void *arg)
>  	 */
>  	do {
>  		for (i = 0; i < kfree_alloc_num; i++) {
> -			alloc_ptrs[i] = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
> -			if (!alloc_ptrs[i])
> +			alloc_ptr = kmalloc(sizeof(struct kfree_obj), GFP_KERNEL);
> +			if (!alloc_ptr)
>  				return -ENOMEM;
> -		}
>  
> -		for (i = 0; i < kfree_alloc_num; i++) {
>  			if (!kfree_no_batch) {
> -				kfree_rcu(alloc_ptrs[i], rh);
> +				kfree_rcu(alloc_ptr, rh);
>  			} else {
>  				rcu_callback_t cb;
>  
>  				cb = (rcu_callback_t)(unsigned long)offsetof(struct kfree_obj, rh);
> -				kfree_call_rcu_nobatch(&(alloc_ptrs[i]->rh), cb);
> +				kfree_call_rcu_nobatch(&(alloc_ptr->rh), cb);
>  			}
>  		}
>  
> @@ -682,7 +675,6 @@ kfree_perf_thread(void *arg)
>  		}
>  	}
>  
> -	kfree(alloc_ptrs);
>  	torture_kthread_stopping("kfree_perf_thread");
>  	return 0;
>  }
