Message-ID: <D6EDEBF1F91015459DB866AC4EE162CCF76BDA@IRSMSX103.ger.corp.intel.com>
Date: Fri, 17 Jul 2015 17:28:20 +0000
From: "Odzioba, Lukasz" <lukasz.odzioba@...el.com>
To: Guenter Roeck <linux@...ck-us.net>, Jean Delvare <jdelvare@...e.de>
CC: "Yu, Fenghua" <fenghua.yu@...el.com>,
"lm-sensors@...sensors.org" <lm-sensors@...sensors.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] hwmon: coretemp: use list instead of fixed size array
for temp data
On Friday, July 17, 2015 6:55 PM Guenter Roeck wrote:
> You don't really explain why your approach would be better than
> allocating an array of pointers to struct temp_data and increasing
> its size using krealloc if needed.
Let's consider two cases of such an implementation:
a) an array of pointers with an O(n) access algorithm
b) an array of pointers with an O(1) access algorithm
In both cases the array will have a greater memory footprint unless
we implement reallocation ourselves when CPUs are disabled, which would
make the code harder to maintain.
Case b) does not handle huge core IDs and sparse enumeration well
(see the sketch below); it is still open for discussion whether we really
need that, since there is no such hardware yet.
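To make the comparison concrete, here is a minimal sketch of the two
array-based lookups; the struct fields and function names are made up
for illustration, they are not taken from the driver or from any posted
patch:

	#include <linux/kernel.h>

	/* Illustration only -- hypothetical, simplified temp_data. */
	struct temp_data {
		unsigned int cpu_core_id;
		/* ... sensor attributes, last reading, etc. ... */
	};

	/* Case a): compact array of pointers, O(n) lookup by core id. */
	static struct temp_data *find_temp_compact(struct temp_data **arr,
						   int nr_entries,
						   unsigned int core_id)
	{
		int i;

		for (i = 0; i < nr_entries; i++)
			if (arr[i] && arr[i]->cpu_core_id == core_id)
				return arr[i];
		return NULL;
	}

	/*
	 * Case b): array indexed directly by core id, O(1) lookup.
	 * The array must be sized to the highest possible core id, so
	 * sparse or very large core ids leave unused slots behind.
	 */
	static struct temp_data *find_temp_indexed(struct temp_data **arr,
						   unsigned int max_core_id,
						   unsigned int core_id)
	{
		if (core_id > max_core_id)
			return NULL;
		return arr[core_id];	/* NULL if that core never came up */
	}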
I am not saying that my solution is the best one possible.
I am saying that "the best" varies depending on which criteria you
choose (time, memory, clean code...). Some may say that O(n) is fine
as long as we don't have thousands of cores and this code is not on a
hot path; others may be more concerned about memory on small/old devices.
I don't see a holy grail here; if you see one, please let me know.
If you think that we don't have to care so much about memory,
then I can create another patch which uses an array instead of a list,
along the lines of the sketch below.
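For reference, a minimal sketch of what that krealloc-grown pointer
array could look like; the pdata container and helper name are
hypothetical, chosen only to illustrate the growth step Guenter
suggested:

	#include <linux/slab.h>
	#include <linux/string.h>
	#include <linux/errno.h>

	struct temp_data;

	/* Hypothetical container -- names are for illustration only. */
	struct pdata {
		struct temp_data **core_data;	/* grows as cores appear */
		unsigned int nr_alloc;		/* allocated slots */
	};

	/*
	 * Make sure the pointer array can hold index 'idx', growing it
	 * with krealloc if needed.  Newly added slots are zeroed so
	 * unused entries stay NULL.
	 */
	static int ensure_slot(struct pdata *pdata, unsigned int idx)
	{
		struct temp_data **new;
		unsigned int new_size;

		if (idx < pdata->nr_alloc)
			return 0;

		new_size = idx + 1;
		new = krealloc(pdata->core_data, new_size * sizeof(*new),
			       GFP_KERNEL);
		if (!new)
			return -ENOMEM;

		/* krealloc does not zero the added tail, do it by hand */
		memset(new + pdata->nr_alloc, 0,
		       (new_size - pdata->nr_alloc) * sizeof(*new));

		pdata->core_data = new;
		pdata->nr_alloc = new_size;
		return 0;
	}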
Thanks,
Lukas