Message-ID: <20200115224542.GB94036@google.com>
Date: Wed, 15 Jan 2020 17:45:42 -0500
From: Joel Fernandes <joel@...lfernandes.org>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: linux-kernel@...r.kernel.org, bristot@...hat.com,
frextrite@...il.com, madhuparnabhowmik04@...il.com,
urezki@...il.com, Davidlohr Bueso <dave@...olabs.net>,
Josh Triplett <josh@...htriplett.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
rcu@...r.kernel.org, Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v3 rcu-dev] rcuperf: Measure memory footprint during kfree_rcu() test

On Wed, Jan 15, 2020 at 02:42:51PM -0800, Paul E. McKenney wrote:
> > [snip]
> > > > We can certainly refine it further but at this time I am thinking of spending
> > > > my time reviewing Lai's patches and learning some other RCU things I need to
> > > > catch up on. If you hate this patch too much, we can also defer this patch
> > > > review for a bit and I can carry it in my tree for now as it is only a patch
> > > > to test code. But honestly, in its current form I am sort of happy with it.
> > >
> > > OK, I will keep it as is for now and let's look again later on. It is not
> > > in the bucket for the upcoming merge window in any case, so we do have
> > > quite a bit of time.
> > >
> > > It is not that I hate it, but rather that I want to be able to give
> > > good answers to questions that might come up. And given that I have
> > > occasionally given certain people a hard time about their statistics,
> > > it is only reasonable to expect them to return the favor. I wouldn't
> > > want you to be caught in the crossfire. ;-)
> >
> > Since the weights were concerning, I was thinking of just using a weight of
> > (1 / N) where N is the number of samples. Essentially taking the average.
> > That could be simple enough and does not cause your concerns with weight
> > tuning. I tested it and looks good, I'll post it shortly.
>
> YES!!! ;-)
>
> Snapshot mem_begin before entering the loop. For the mean value to
> be solid, you need at least 20-30 samples, which might mean upping the
> default for kfree_loops. Have an "unsigned long long" to accumulate the
> sum, which should avoid any possibility of overflow for current systems
> and for all systems for kfree_loops less than PAGE_SIZE. At which point,
> forget the "%" stuff and just sum up the si_mem_available() on each pass
> through the loop.
>
> Do the division on exit from the loop, preferably checking for divide
> by zero.
>
> Straightforward, fast, reasonably reliable, and easy to defend.
I mostly did it along these lines. Hopefully the latest posting is reasonable
enough ;-) I sent it twice because I messed up the authorship (sorry).
thanks,
- Joel