Message-ID: <alpine.DEB.2.00.0905070927510.32734@gandalf.stny.rr.com>
Date: Thu, 7 May 2009 09:51:48 -0400 (EDT)
From: Steven Rostedt <rostedt@...dmis.org>
To: Ingo Molnar <mingo@...e.hu>
cc: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Li Zefan <lizf@...fujitsu.com>, Christoph Hellwig <hch@....de>
Subject: Re: [PATCH 4/7] ring-buffer: change test to be more latency friendly
On Thu, 7 May 2009, Ingo Molnar wrote:
>
> * Steven Rostedt <rostedt@...dmis.org> wrote:
>
> > From: Steven Rostedt <srostedt@...hat.com>
> >
> > The ring buffer benchmark/test runs a producer for 10 seconds.
> > This is done with preemption and interrupts enabled. But if the
> > kernel is not compiled with CONFIG_PREEMPT, it basically stops
> > everything but interrupts for 10 seconds.
> >
> > Although this is just a test and is not for production, this attribute
> > can be quite annoying. It can also spawn badness elsewhere.
>
> Yep, this probably explains that lockdep splat I got in a networking
> driver. Some functionality (a workqueue iirc) of the driver got
> starved and a time-out timer triggered - where lockdep caught
> locking badness.
We probably need to notify the network people about that.
>
> > This patch solves the issues by calling "cond_resched" when the
> > system is not compiled with CONFIG_PREEMPT. It also keeps track of
> > the time spent to call cond_resched such that it does not go
> > against the time calculations. That is, if the task schedules
> > away, the time scheduled out is removed from the test data. Note,
> > this only works for non PREEMPT because we do not know when the
> > task is scheduled out if we have PREEMPT enabled.
> >
> > [ Impact: prevent test from stopping the world for 10 seconds ]
> >
> > Signed-off-by: Steven Rostedt <rostedt@...dmis.org>
> > ---
> > kernel/trace/ring_buffer_benchmark.c | 31 +++++++++++++++++++++++++++++++
> > 1 files changed, 31 insertions(+), 0 deletions(-)
> >
> > diff --git a/kernel/trace/ring_buffer_benchmark.c b/kernel/trace/ring_buffer_benchmark.c
> > index dcd75e9..a26fc67 100644
> > --- a/kernel/trace/ring_buffer_benchmark.c
> > +++ b/kernel/trace/ring_buffer_benchmark.c
> > @@ -185,6 +185,35 @@ static void ring_buffer_consumer(void)
> > complete(&read_done);
> > }
> >
> > +/*
> > + * If we are a non preempt kernel, the 10 second run will
> > + * stop everything while it runs. Instead, we will call cond_resched
> > + * and also add any time that was lost by a reschedule.
> > + */
> > +#ifdef CONFIG_PREEMPT
> > +static void sched_if_needed(struct timeval *start_tv, struct timeval *end_tv)
> > +{
> > +}
> > +#else
> > +static void sched_if_needed(struct timeval *start_tv, struct timeval *end_tv)
> > +{
> > + struct timeval tv;
> > +
> > + cond_resched();
> > + do_gettimeofday(&tv);
> > + if (tv.tv_usec < end_tv->tv_usec) {
> > + tv.tv_usec += 1000000;
> > + tv.tv_sec--;
> > + }
> > + start_tv->tv_sec += tv.tv_sec - end_tv->tv_sec;
> > + start_tv->tv_usec += tv.tv_usec - end_tv->tv_usec;
> > + if (start_tv->tv_usec > 1000000) {
> > + start_tv->tv_usec -= 1000000;
> > + start_tv->tv_sec++;
> > + }
> > +}
> > +#endif
>
> This is _way_ too ugly. Why not just add a cond_resched() to the
> inner loop and be done with it? cond_resched() is conditional
> already, so it will only schedule 'if needed'.
>
> If the test's timing gets skewed, what's the big deal? If it's being
> preempted there will be impact _anyway_. (due to cache footprint
> elimination, etc.) People obviously should only rely on the numbers
> if the system is idle.
OK, I'll nuke it. I find that the writer is more affected by the reader.
With this code, I get the same timings as if I run it with reader
disabled.
But it is more a way to see how a change may affect the buffering than to
really time the buffering itself. If that were the case, I would have added
preemption around the timings.
-- Steve