Message-ID: <20100705111213.GM10072@secunet.com>
Date: Mon, 5 Jul 2010 13:12:13 +0200
From: Steffen Klassert <steffen.klassert@...unet.com>
To: Dan Kruchinin <dkruchinin@....org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Herbert Xu <herbert@...dor.apana.org.au>
Subject: Re: [PATCH 2/2] pcrypt: sysfs interface
On Fri, Jul 02, 2010 at 02:20:15PM +0400, Dan Kruchinin wrote:
> On Fri, Jul 2, 2010 at 1:08 PM, Steffen Klassert
> <steffen.klassert@...unet.com> wrote:
> > On Thu, Jul 01, 2010 at 06:28:34PM +0400, Dan Kruchinin wrote:
> >> >
> >> > These statistics counters add a lot of atomic operations to the fast path.
> >> > Wouldn't it be better to keep these statistics in a percpu manner?
> >> > This would avoid the atomic operations and we would get some additional
> >> > information on the distribution of the queued objects.
> >> >
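
For reference, a minimal sketch of what such percpu counters could look
like. The struct and field names (pcrypt_stats, parallel_objects, ...) are
illustrative only, not identifiers from the patch:

struct pcrypt_stats {
        unsigned long parallel_objects;
        unsigned long serial_objects;
        unsigned long reorder_objects;
};

static DEFINE_PER_CPU(struct pcrypt_stats, pcrypt_stats);

/* Fast path: a plain percpu increment, no atomic operation needed. */
static inline void pcrypt_inc_parallel(void)
{
        this_cpu_inc(pcrypt_stats.parallel_objects);
}
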
> >>
> >> If I understood you correctly the resulting sysfs hierarchy would look like
> >> this one:
> >> pcrypt/
> >> |- serial_cpumask
> >> |- parallel_cpumask
> >> |- w0/
> >> +--- parallel_objects
> >> +--- serial_objects
> >> +--- reorder_objects
> >> |- w1/
> >> ...
> >> |- wN/
> >>
> >> right? If so, I think it won't be very convenient to monitor the total
> >> number of parallel, serial, and reorder objects.
> >
> > Yes, I thought about something like this. You can still take the sum
> > over the percpu objects when you output the statistics.
>
> But the summation cannot be exact without some kind of lock, because while
> we're summing, another CPU can increase or decrease its percpu statistics
> counters. So each percpu statistics counter would have to be modified under
> a lock, right?
>
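A common kernel idiom is to sum the percpu counters without any lock and
accept a slightly stale snapshot, which is usually good enough for
statistics output (percpu_counter takes the same approach for its
approximate reads). A sketch, reusing the illustrative names from above:

/*
 * Read-side summation for the sysfs output. No lock is taken; the
 * result is only a snapshot, since other CPUs may update their
 * counters while we sum.
 */
static unsigned long pcrypt_sum_parallel(void)
{
        unsigned long sum = 0;
        int cpu;

        for_each_possible_cpu(cpu)
                sum += per_cpu(pcrypt_stats, cpu).parallel_objects;

        return sum;
}
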
Thinking a bit longer about this, the statistics work should go into an
extra patch. We should focus on the cpumask separation now and think about
the statistics later.
Steffen