Message-ID: <4540D61B.5040802@de.ibm.com>
Date:	Thu, 26 Oct 2006 17:36:59 +0200
From:	Martin Peschke <mp3@...ibm.com>
To:	"Frank Ch. Eigler" <fche@...hat.com>
CC:	psusi@....rr.com, Jens Axboe <jens.axboe@...cle.com>,
	Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org
Subject: Re: [Patch 0/5] I/O statistics through request queues

Frank Ch. Eigler wrote:
> Hi -
> 
> On Thu, Oct 26, 2006 at 03:37:54PM +0200, Martin Peschke wrote:
>> [...]
>> lookup_table[key] = value	, or
>> lookup_table[key]++
>>
>> How does this scale?
> 
> It depends.  If one is interested in only aggregates as an end result,
> then intermediate totals can be tracked individually per-cpu with no
> locking contention, so this scales well.
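
Understood; just to spell out what I take "per-cpu with no locking
contention" to mean, a minimal sketch (identifiers invented, not from
any posted patch):

	#include <linux/percpu.h>

	/* one counter per CPU; the update path touches only local data */
	DEFINE_PER_CPU(u64, io_count);

	static void count_io(void)
	{
		get_cpu_var(io_count)++;	/* preemption off */
		put_cpu_var(io_count);		/* preemption back on */
	}

	/* a reader folds the per-cpu values together on demand */
	static u64 read_io_count(void)
	{
		u64 sum = 0;
		int cpu;

		for_each_possible_cpu(cpu)
			sum += per_cpu(io_count, cpu);
		return sum;
	}

No shared cache line is touched on the fast path; the cost moves to
the (rare) read side.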

Sorry for not being clear.
I meant scaling with regard to a large number of different keys.
That is what you described as scaling "by 'rows'", isn't it?

For example, if I wanted to store a timestamp for each request
issued, and there were lots of devices and the I/O was driving
the system crazy - how would that affect lookup time?

I would use, say, the address of struct request as the key, and
store the start time of a request there. Once a request completes,
I would look up its start time and calculate a latency.

I would then be done with that lookup table entry.
But it won't go away, will it? Is that an issue?
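
Roughly like this, in other words (all names invented; locking and
collision handling omitted for brevity):

	#include <linux/blkdev.h>	/* struct request */
	#include <linux/sched.h>	/* sched_clock() */

	#define TBL_SIZE 2048		/* fixed, preallocated */

	struct start_entry {
		struct request *key;	/* the request's address */
		u64 start_ns;
	};

	static struct start_entry tbl[TBL_SIZE];

	void account_latency(u64 ns);	/* hypothetical consumer */

	static unsigned int hash_req(struct request *rq)
	{
		/* requests are slab objects; drop the low bits */
		return ((unsigned long)rq >> 6) % TBL_SIZE;
	}

	static void on_issue(struct request *rq)
	{
		struct start_entry *e = &tbl[hash_req(rq)];

		e->key = rq;
		e->start_ns = sched_clock();
	}

	static void on_complete(struct request *rq)
	{
		struct start_entry *e = &tbl[hash_req(rq)];

		if (e->key == rq)
			account_latency(sched_clock() - e->start_ns);
		/* the slot is logically dead now, but its memory stays
		 * allocated - a later request simply overwrites it */
	}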

>> It must be something other than an array, because key boundaries
>> aren't known when the lookup table is created, right?
>> And actual keys might be few and far between.
> 
> In systemtap, we use a hash table.
> 
>> What if the heap of intermediate results grows into thousands or
>> more?  [...]
> 
> It depends whether you mean "rows" or "columns".
> 
> By "rows", if you need to track thousands of queues, you will need
> memory to store some data for each of them.  In systemtap's case, the
> maximum number of elements in a hash table is configurable, and is all
> allocated at startup time.  (The default is a couple of thousand.)
> This is of course still larger than enlarging the base structures the
> way your code does.  But it's only larger by a constant amount, and
> makes it unnecessary to patch the code.
> 
> By "columns", if you need to track statistical aggregates of thousands
> of data points for an individual queue, then one can use a handful of
> fixed-size counters, as you already have for histograms.
> 
> 
> Anyway, my point was not that you should use systemtap proper, or that
> you need to use the same techniques for managing data on the side.
> It's that by using instrumentation markers, more things are possible.
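
On the "columns" point: agreed, a handful of fixed-size counters is
all a histogram needs, however many data points flow through - for
instance, with a made-up log2 bucket layout:

	#include <linux/bitops.h>	/* fls64() */

	#define NBUCKETS 32

	static u64 latency_hist[NBUCKETS];

	void account_latency(u64 ns)
	{
		int bucket = fls64(ns);		/* ~log2 of the latency */

		if (bucket >= NBUCKETS)
			bucket = NBUCKETS - 1;
		latency_hist[bucket]++;
	}

The footprint stays constant; only the bucket counts grow.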

Right, I have gone off on a tangent - systemtap is just one of many
likely exploiters of markers.

Anyway, I think this discussion shows that any dynamically added client
of kernel markers which needs to hold extra data for entities like
requests might be difficult to implement efficiently (compared to
static instrumentation), because markers, by nature, only allow for
code additions, not for additions to existing data structures.
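
To illustrate (probe names invented, the marker mechanism itself
elided): static instrumentation could simply grow struct request by a
start-time field, whereas a marker client only ever sees the probe
arguments and has to fall back on a side table like the one sketched
above:

	static void probe_request_issue(struct request *rq)
	{
		on_issue(rq);		/* store sched_clock() in side table */
	}

	static void probe_request_complete(struct request *rq)
	{
		on_complete(rq);	/* look up start time, account latency */
	}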

