Message-ID: <87bmq7qntw.fsf@rasmusvillemoes.dk>
Date: Fri, 02 Jun 2017 01:40:11 +0200
From: Rasmus Villemoes <linux@...musvillemoes.dk>
To: Thomas Meyer <thomas@...3r.de>
Cc: linux-kernel@...r.kernel.org
Subject: Re: [PATCH] lib/extable.c: use bsearch() library function in search_extable()
On Wed, May 31 2017, Thomas Meyer <thomas@...3r.de> wrote:
> Signed-off-by: Thomas Meyer <thomas@...3r.de>
> ---
> lib/extable.c | 30 +++++++++++++-----------------
> 1 file changed, 13 insertions(+), 17 deletions(-)
>
> diff --git a/lib/extable.c b/lib/extable.c
> index 62968da..eb16cb3 100644
> --- a/lib/extable.c
> +++ b/lib/extable.c
> @@ -9,6 +9,7 @@
> * 2 of the License, or (at your option) any later version.
> */
>
> +#include <linux/bsearch.h>
> #include <linux/module.h>
> #include <linux/init.h>
> #include <linux/sort.h>
> @@ -51,7 +52,7 @@ static void swap_ex(void *a, void *b, int size)
> * This is used both for the kernel exception table and for
> * the exception tables of modules that get loaded.
> */
> -static int cmp_ex(const void *a, const void *b)
> +static int cmp_ex_sort(const void *a, const void *b)
> {
> const struct exception_table_entry *x = a, *y = b;
>
> @@ -67,7 +68,7 @@ void sort_extable(struct exception_table_entry *start,
> struct exception_table_entry *finish)
> {
> sort(start, finish - start, sizeof(struct exception_table_entry),
> - cmp_ex, swap_ex);
> + cmp_ex_sort, swap_ex);
> }
>
> #ifdef CONFIG_MODULES
> @@ -93,6 +94,13 @@ void trim_init_extable(struct module *m)
> #endif /* !ARCH_HAS_SORT_EXTABLE */
>
> #ifndef ARCH_HAS_SEARCH_EXTABLE
> +
> +static int cmp_ex_search(const void *key, const void *elt)
> +{
> + const struct exception_table_entry * _elt = elt;
> + return *(unsigned long*) key - ex_to_insn(_elt);
> +}
> +
This triggers a pet peeve of mine: don't return the result of a
subtraction as a comparison result. You may have unsigned longs x, y
for which x < y but (int)(x - y) > 0, e.g. 0x20000000 and 0xb0000000.
On a 32-bit machine, all the addresses you're comparing are likely to
be within one half of the address space, so the differences do fit in
a signed 32-bit int, but with BITS_PER_LONG=64 this will cause
problems.

So do the comparison carefully, just as it's done in cmp_ex (now
cmp_ex_sort).
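Something like this should do it (untested sketch, using ex_to_insn()
just like the existing comparison does):

static int cmp_ex_search(const void *key, const void *elt)
{
	unsigned long value = *(const unsigned long *)key;
	const struct exception_table_entry *e = elt;

	/* compare instead of subtracting, so a large difference can't wrap */
	if (value < ex_to_insn(e))
		return -1;
	if (value > ex_to_insn(e))
		return 1;
	return 0;
}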
You should be able to get away with just passing value by, well,
value: have the comparison function do (unsigned long)key and the
bsearch() call pass (void *)value instead of &value, though I don't
know how much that saves.
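I.e., roughly (again untested):

	/* in cmp_ex_search(): the key is the value itself, not a pointer to it */
	unsigned long value = (unsigned long)key;

	/* in search_extable(): pass the value, not its address */
	return bsearch((void *)value, first, last - first + 1,
		       sizeof(struct exception_table_entry), cmp_ex_search);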
> /*
> * Search one exception table for an entry corresponding to the
> * given instruction address, and return the address of the entry,
> @@ -105,21 +113,9 @@ search_extable(const struct exception_table_entry *first,
> const struct exception_table_entry *last,
> unsigned long value)
> {
> - while (first <= last) {
> - const struct exception_table_entry *mid;
> + if(last < first)
> + return NULL;
>
> - mid = ((last - first) >> 1) + first;
> - /*
> - * careful, the distance between value and insn
> - * can be larger than MAX_LONG:
> - */
> - if (ex_to_insn(mid) < value)
> - first = mid + 1;
> - else if (ex_to_insn(mid) > value)
> - last = mid - 1;
> - else
> - return mid;
> - }
> - return NULL;
> + return bsearch(&value, first, last - first + 1, sizeof(struct exception_table_entry), cmp_ex_search);
It does seem to make sense to use the library routine, but maybe the
binary search is open-coded for performance, to avoid the cost of the
callbacks?
If we do this, could you also get rid of the silly plus/minus 1 thing
in a follow-up patch? There aren't that many search_extable() callers.
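E.g. something like this (just a sketch, the parameter names are
whatever, and I haven't looked at how painful the callers are):

const struct exception_table_entry *
search_extable(const struct exception_table_entry *base,
	       size_t num,
	       unsigned long value)
{
	return bsearch((void *)value, base, num,
		       sizeof(struct exception_table_entry), cmp_ex_search);
}

with callers passing the number of entries instead of a pointer to the
last one.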
Rasmus