Message-ID: <04c02fdea99c71ad5f166152c9331f2d.squirrel@www.firstfloor.org>
Date: Tue, 21 Sep 2010 19:39:04 +0200
From: "Andi Kleen" <andi@...stfloor.org>
To: "Steven Rostedt" <rostedt@...dmis.org>
Cc: "Andi Kleen" <andi@...stfloor.org>,
"Jason Baron" <jbaron@...hat.com>, linux-kernel@...r.kernel.org,
mingo@...e.hu, mathieu.desnoyers@...ymtl.ca, hpa@...or.com,
tglx@...utronix.de, roland@...hat.com, rth@...hat.com,
mhiramat@...hat.com, fweisbec@...il.com, avi@...hat.com,
davem@...emloft.net, vgoyal@...hat.com, sam@...nborg.org,
tony@...eyournoodle.com
Subject: Re: [PATCH 03/10] jump label v11: base patch
>> Even 1000 entries are fine to walk, but if the list were sorted, a
>> binary search would be much faster anyway. You would then still
>> need to search per module, but the number of modules is relatively
>> small (< 100).
>
> xfs has > 100 tracepoints
No problem for binary search.
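Roughly what I have in mind -- a minimal, untested sketch, where the
struct layout and field names are invented for illustration (the real
jump label entries may look different):

#include <stddef.h>

/* Illustrative entry; not the actual jump label layout. */
struct jump_entry {
	unsigned long code;	/* address of the patched jump site */
	unsigned long target;	/* jump destination */
	unsigned long key;	/* search key, e.g. the enable-flag address */
};

/* Binary search a per-object table sorted by ->key. */
static struct jump_entry *
jump_entry_find(struct jump_entry *tab, size_t n, unsigned long key)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (tab[mid].key < key)
			lo = mid + 1;
		else
			hi = mid;
	}
	return (lo < n && tab[lo].key == key) ? &tab[lo] : NULL;
}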
>> > Also, I think the hash table deals nicely with modules.
>>
>> Maybe but it's also a lot of code. And it seems to me
>> that it is optimizing the wrong thing. Simpler is nicer.
>
> I guess simplicity is in the eye of the beholder. I find hashes easier
> to deal with than binary-searching sorted lists. Every time you add a
> tracepoint, you need to re-sort the list.
You only add tracepoints with new modules, right?
In that case you only sort the section of the new module, nothing else,
and only once, when the module is loaded.
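Untested sketch of what the module load path could look like, assuming
hypothetical mod->jump_entries / mod->num_jump_entries fields pointing
at the module's jump table section; it reuses the kernel's existing
lib/sort.c sort():

#include <linux/module.h>
#include <linux/sort.h>

static int jump_entry_cmp(const void *a, const void *b)
{
	const struct jump_entry *ja = a, *jb = b;

	if (ja->key < jb->key)
		return -1;
	if (ja->key > jb->key)
		return 1;
	return 0;
}

/* Sort the module's jump table section once, at load time. */
static void module_sort_jump_table(struct module *mod)
{
	sort(mod->jump_entries, mod->num_jump_entries,
	     sizeof(struct jump_entry), jump_entry_cmp, NULL);
}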
> Hashes are much easier to deal with and scale nicely. I don't think
> there's enough rationale to switch this to a sorted list with binary
> search.
The problem I see is that there's a lot of hashing-related code
and a lot of memory overhead. I suspect that with in-place access
to the sorted sections everything would be much simpler and have
less overhead.
For me the current implementation simply seems overengineered.
-Andi