Message-ID: <20191107081635.GE30739@gmail.com>
Date: Thu, 7 Nov 2019 09:16:35 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
Stephen Hemminger <stephen@...workplumber.org>,
Willy Tarreau <w@....eu>, Juergen Gross <jgross@...e.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [patch 5/9] x86/ioport: Reduce ioperm impact for sane usage
further
* Thomas Gleixner <tglx@...utronix.de> wrote:
> +	/* Update the bitmap */
> +	if (turn_on) {
> +		bitmap_clear(bitmap, from, num);
> +	} else {
> +		bitmap_set(bitmap, from, num);
> +	}
> +
> +	/* Get the new range */
> +	first = find_first_zero_bit(bitmap, IO_BITMAP_BITS);
> +
> +	for (last = next = first; next < IO_BITMAP_BITS; last = next) {
> +		/* Find the next set bit and update last */
> +		next = find_next_bit(bitmap, IO_BITMAP_BITS, last);
> +		last = next - 1;
> +		if (next == IO_BITMAP_BITS)
> +			break;
> +		/* Find the next zero bit and continue searching */
> +		next = find_next_zero_bit(bitmap, IO_BITMAP_BITS, next);
> +	}
> +
> +	/* Calculate the byte boundaries for the updated region */
> +	copy_start = from / 8;
> +	copy_len = (round_up(from + num, 8) / 8) - copy_start;
This might seem like a small detail, but since we do the range tracking
and copying at byte granularity anyway, why not do the zero range search
at byte granularity as well?
I bet it's faster and simpler than the bit-searching as well.
We could also switch the bitmap over to a char or u8 based array and lose
all the sizeof(long) indexing headaches and the resulting type casts for
anything but the actual bitmap_set/clear() calls, etc.?
I.e. now that most of the logic is byte granular, the basic data
structure might as well reflect that?
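Roughly something like the below untested sketch is what I mean - the
helper name is made up, it treats the bitmap as a plain u8 array and
reuses IO_BITMAP_BYTES from the patch:

static void find_zero_byte_range(const u8 *bitmap,
				 unsigned int *zstart, unsigned int *zend)
{
	unsigned int start, end;

	/* First byte that has at least one zero (i.e. permitted) bit: */
	for (start = 0; start < IO_BITMAP_BYTES; start++) {
		if (bitmap[start] != 0xff)
			break;
	}

	/* One past the last byte that has a zero bit: */
	for (end = IO_BITMAP_BYTES; end > start; end--) {
		if (bitmap[end - 1] != 0xff)
			break;
	}

	*zstart = start;	/* == IO_BITMAP_BYTES if no zero bits exist */
	*zend = end;
}

'start == IO_BITMAP_BYTES' then maps to the 'first >= IO_BITMAP_BITS'
case below, and the io_zerobits_start/end values fall out of it directly,
without the bits-to-bytes rounding at the end.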
> +	/*
> +	 * Update the per thread storage and the TSS bitmap. This must be
> +	 * done with preemption disabled to prevent racing against a
> +	 * context switch.
> +	 */
> +	preempt_disable();
> +	tss = this_cpu_ptr(&cpu_tss_rw);
>
> +	if (!t->io_bitmap_ptr) {
> +		unsigned int tss_start = tss->io_zerobits_start;
> +		/*
> +		 * If the task did not use the I/O bitmap yet then the
> +		 * perhaps stale content in the TSS needs to be taken into
> +		 * account. If tss start is out of bounds the TSS storage
> +		 * does not contain a zero bit and it's sufficient just to
> +		 * copy the new range over.
> +		 */
s/tss/TSS
> +		if (tss_start < IO_BITMAP_BYTES) {
> +			unsigned int tss_end = tss->io_zerobits_end;
> +			unsigned int copy_end = copy_start + copy_len;
> +
> +			copy_start = min(tss_start, copy_start);
> +			copy_len = max(tss_end, copy_end) - copy_start;
> +		}
> +	}
> +
> +	/* Copy the changed range over to the TSS bitmap */
> +	dst = (char *)tss->io_bitmap;
> +	src = (char *)bitmap;
> +	memcpy(dst + copy_start, src + copy_start, copy_len);
> +
> +	if (first >= IO_BITMAP_BITS) {
> +		/*
> +		 * If the resulting bitmap has all permissions dropped, clear
> +		 * TIF_IO_BITMAP and set the IO bitmap offset in the TSS to
> +		 * invalid. Deallocate both the new and the thread's bitmap.
> +		 */
> +		clear_thread_flag(TIF_IO_BITMAP);
> +		tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_INVALID;
> +		tofree = bitmap;
> +		bitmap = NULL;
BTW., wouldn't it be simpler to just say that if a thread uses IO ops
even once, it gets a bitmap and that's it? I.e. we could further simplify
this seldom used piece of code.
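I.e. (just a sketch) the branch would shrink to:

	if (first >= IO_BITMAP_BITS) {
		/*
		 * All permissions dropped: stop copying the bitmap on
		 * context switch, but keep the thread's allocation around
		 * for the (likely) next ioperm() call.
		 */
		clear_thread_flag(TIF_IO_BITMAP);
		tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_INVALID;
	}

and 'tofree' plus its kfree() would go away entirely - the exit path
frees the thread's bitmap anyway.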
> +	} else {
> 		/*
> +		 * I/O bitmap contains zero bits. Set TIF_IO_BITMAP, make
> +		 * the bitmap offset valid and make sure that the TSS limit
> +		 * is correct. It might have been wreckaged by a VMEXiT.
> 		 */
> +		set_thread_flag(TIF_IO_BITMAP);
> +		tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_VALID;
> 		refresh_tss_limit();
> 	}
I'm wondering, shouldn't we call refresh_tss_limit() in both branches, or
is a VMEXIT-wreckaged TSS limit harmless if we otherwise have
io_bitmap_base set to IO_BITMAP_OFFSET_INVALID?
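If it's not harmless, the straightforward fix would be to hoist the call
out of the else branch - a sketch:

	} else {
		set_thread_flag(TIF_IO_BITMAP);
		tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET_VALID;
	}

	/* The limit might have been wrecked by a VMEXIT in either case: */
	refresh_tss_limit();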
> 	/*
> +	 * Update the range in the thread and the TSS
> 	 *
> +	 * Get the byte position of the first zero bit and calculate
> +	 * the length of the range in which zero bits exist.
> 	 */
> +	start = first / 8;
> +	end = first < IO_BITMAP_BITS ? round_up(last, 8) / 8 : 0;
> +	t->io_zerobits_start = tss->io_zerobits_start = start;
> +	t->io_zerobits_end = tss->io_zerobits_end = end;
> 
> 	/*
> +	 * Finally exchange the bitmap pointer in the thread.
> 	 */
> +	bitmap = xchg(&t->io_bitmap_ptr, bitmap);
> +	preempt_enable();
> 
> +	kfree(bitmap);
> +	kfree(tofree);
> 
> 	return 0;
Thanks,
Ingo