Message-Id: <b537a890-4b9f-462e-8c17-5c7aa9b60138@www.fastmail.com>
Date:   Thu, 30 Sep 2021 15:01:44 -0700
From:   "Andy Lutomirski" <luto@...nel.org>
To:     "Thomas Gleixner" <tglx@...utronix.de>,
        "Sohil Mehta" <sohil.mehta@...el.com>,
        "the arch/x86 maintainers" <x86@...nel.org>
Cc:     "Tony Luck" <tony.luck@...el.com>,
        "Dave Hansen" <dave.hansen@...el.com>,
        "Ingo Molnar" <mingo@...hat.com>, "Borislav Petkov" <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, "Jens Axboe" <axboe@...nel.dk>,
        "Christian Brauner" <christian@...uner.io>,
        "Peter Zijlstra (Intel)" <peterz@...radead.org>,
        "Shuah Khan" <shuah@...nel.org>, "Arnd Bergmann" <arnd@...db.de>,
        "Jonathan Corbet" <corbet@....net>,
        "Raj Ashok" <ashok.raj@...el.com>,
        "Jacob Pan" <jacob.jun.pan@...ux.intel.com>,
        "Gayatri Kammela" <gayatri.kammela@...el.com>,
        "Zeng Guang" <guang.zeng@...el.com>,
        "Williams, Dan J" <dan.j.williams@...el.com>,
        "Randy E Witt" <randy.e.witt@...el.com>,
        "Shankar, Ravi V" <ravi.v.shankar@...el.com>,
        "Ramesh Thomas" <ramesh.thomas@...el.com>,
        "Linux API" <linux-api@...r.kernel.org>,
        linux-arch@...r.kernel.org,
        "Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
        linux-kselftest@...r.kernel.org
Subject: Re: [RFC PATCH 11/13] x86/uintr: Introduce uintr_wait() syscall



On Thu, Sep 30, 2021, at 12:29 PM, Thomas Gleixner wrote:
> On Thu, Sep 30 2021 at 11:08, Andy Lutomirski wrote:
>> On Tue, Sep 28, 2021, at 9:56 PM, Sohil Mehta wrote:
>> I think we have three choices:
>>
>> Use a fancy wrapper around SENDUIPI.  This is probably a bad idea.
>>
>> Treat the NV-2 as a real interrupt and honor affinity settings.  This
>> will be annoying and slow, I think, if it's even workable at all.
>
> We can make it a real interrupt in the form of a per-CPU interrupt, but
> affinity settings are not really feasible because the affinity is in the
> UPID.ndst field. So, yes, we can target it at some CPU, but that's racy.
>
>> Handle this case with faults instead of interrupts.  We could set a
>> reserved bit in UPID so that SENDUIPI results in #GP, decode it, and
>> process it.  This puts the onus on the actual task causing trouble,
>> which is nice, and it lets us find the UPID and target directly
>> instead of walking all of them.  I don't know how well it would play
>> with hypothetical future hardware-initiated uintrs, though.
>
> I thought about that as well and dismissed it due to the hardware-initiated
> ones, but thinking more about it, those need some translation unit
> (e.g. IRQ remapping) anyway, so it might be doable to catch those as
> well. So we could just ignore them for now, go for the #GP trick, and
> deal with the device-initiated ones later when they come around :)

Sounds good to me. In the long run, if Intel wants device-initiated fancy interrupts to work well, they need a new design.
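
For concreteness, a rough sketch of how that could hang together (UPID and
UITT layouts roughly as in the RFC; everything marked "hypothetical" below
is made up for illustration, not something in the posted series):

#include <linux/types.h>
#include <linux/sched.h>
#include <linux/bitops.h>
#include <asm/insn.h>		/* struct insn, MAX_INSN_SIZE, insn_decode() */
#include <asm/insn-eval.h>	/* insn_fetch_from_user() */

/* UPID layout roughly as in the RFC: ndst selects the notification target. */
struct uintr_upid {
	struct {
		u8  status;	/* ON (bit 0), SN (bit 1), rest reserved */
		u8  reserved1;
		u8  nv;		/* notification vector */
		u8  reserved2;
		u32 ndst;	/* notification destination (APIC ID) */
	} nc;			/* notification control */
	u64 puir;		/* posted user interrupt requests */
} __aligned(64);

/* UITT entry roughly as in the RFC; target_upid_addr is a kernel pointer. */
struct uintr_uitt_entry {
	u8  valid;
	u8  user_vec;
	u8  reserved[6];
	u64 target_upid_addr;
} __packed __aligned(16);

/*
 * Hypothetical #GP fixup, to be called from exc_general_protection()
 * before the SIGSEGV path: if the faulting instruction is SENDUIPI and
 * the target UPID was poisoned by uintr_wait(), post the vector and
 * wake the receiver directly instead of sending any IPI.
 */
static bool fixup_senduipi_gp(struct pt_regs *regs)
{
	unsigned char buf[MAX_INSN_SIZE];
	struct uintr_uitt_entry *uitte;
	struct uintr_upid *upid;
	struct insn insn;
	int nbytes;

	if (!user_mode(regs))
		return false;

	nbytes = insn_fetch_from_user(regs, buf);
	if (nbytes <= 0 || insn_decode(&insn, buf, nbytes, INSN_MODE_64) < 0)
		return false;

	/* SENDUIPI is F3 0F C7 /6 (verify against the SDM). */
	if (!insn_is_senduipi(&insn))		/* hypothetical */
		return false;

	/* The register operand is an index into the sender's UITT. */
	uitte = uintr_uitt_entry(current,	/* hypothetical lookup */
				 senduipi_uitt_index(regs, &insn));
	if (!uitte || !uitte->valid)
		return false;

	upid = (struct uintr_upid *)uitte->target_upid_addr;
	if (!upid_is_poisoned(upid))		/* hypothetical: reserved bit set by uintr_wait() */
		return false;

	/* Post the vector and wake the blocked receiver directly. */
	set_bit(uitte->user_vec, (unsigned long *)&upid->puir);
	uintr_wake_waiter(upid);		/* hypothetical */

	regs->ip += insn.length;		/* skip the handled SENDUIPI */
	return true;
}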

>
> But even with that, we still need to keep track of the armed ones per CPU
> so we can handle CPU hot-unplug correctly. Sigh...

I don’t think any real work is needed. We will only ever have armed UPIDs (with notification interrupts enabled) for running tasks, and hot-unplugged CPUs don’t have running tasks.  We do need a way to drain pending IPIs before we offline a CPU, but that’s a separate problem and may be unsolvable for all I know. Is there a magic APIC operation to wait until all initiated IPIs targeting the local CPU arrive?  I guess we can also just mask the notification vector so that it won’t crash us if we get a stale IPI after going offline.
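
For the "mask it" option, something along these lines might be enough to
make a stale notification harmless (sketch only; the per-CPU counter, the
wake helper, and the handler name are all illustrative):

#include <linux/percpu.h>
#include <asm/apic.h>
#include <asm/idtentry.h>

/* Hypothetical per-CPU count of tasks that armed a UPID via uintr_wait(). */
static DEFINE_PER_CPU(unsigned int, uintr_wait_armed);

/*
 * Keep this handler installed for the life of the system.  A notification
 * IPI that lands after the target CPU tore down its uintr state (or after
 * the waiter was already woken by other means) is just EOI'd and dropped.
 */
DEFINE_IDTENTRY_SYSVEC(sysvec_uintr_kernel_notification)
{
	ack_APIC_irq();

	if (!__this_cpu_read(uintr_wait_armed))
		return;

	uintr_wake_pending_waiters();	/* hypothetical */
}

That only makes the stale-IPI case benign; the drain-before-offline question
stays open.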

>
> Thanks,
>
>         tglx
