Message-ID: <2d404db6-828a-98de-f409-94ddf2c2af67@intel.com>
Date:   Tue, 28 Sep 2021 16:08:40 -0700
From:   Sohil Mehta <sohil.mehta@...el.com>
To:     Thomas Gleixner <tglx@...utronix.de>, <x86@...nel.org>
CC:     Tony Luck <tony.luck@...el.com>,
        Dave Hansen <dave.hansen@...el.com>,
        "Ingo Molnar" <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        "H . Peter Anvin" <hpa@...or.com>,
        Andy Lutomirski <luto@...nel.org>,
        Jens Axboe <axboe@...nel.dk>,
        Christian Brauner <christian@...uner.io>,
        Peter Zijlstra <peterz@...radead.org>,
        Shuah Khan <shuah@...nel.org>, Arnd Bergmann <arnd@...db.de>,
        Jonathan Corbet <corbet@....net>,
        Ashok Raj <ashok.raj@...el.com>,
        Jacob Pan <jacob.jun.pan@...ux.intel.com>,
        "Gayatri Kammela" <gayatri.kammela@...el.com>,
        Zeng Guang <guang.zeng@...el.com>,
        "Dan Williams" <dan.j.williams@...el.com>,
        Randy E Witt <randy.e.witt@...el.com>,
        Ravi V Shankar <ravi.v.shankar@...el.com>,
        Ramesh Thomas <ramesh.thomas@...el.com>,
        <linux-api@...r.kernel.org>, <linux-arch@...r.kernel.org>,
        <linux-kernel@...r.kernel.org>, <linux-kselftest@...r.kernel.org>
Subject: Re: [RFC PATCH 11/13] x86/uintr: Introduce uintr_wait() syscall

On 9/24/2021 4:04 AM, Thomas Gleixner wrote:
> On Mon, Sep 13 2021 at 13:01, Sohil Mehta wrote:
>> Currently, the task wait list is a global one. To make the implementation
>> scalable, there is a need to move to a distributed per-cpu wait list.
> How are per cpu wait lists going to solve the problem?


Currently, the global wait list can be concurrently accessed by multiple 
cpus. If we have per-cpu wait lists then the UPID scanning only needs to 
happen on the local cpu's wait list.

After an application calls uintr_wait(), the notification interrupt will 
be delivered only to the cpu where the task blocked. In this case, we 
can reduce the UPID search list and probably get rid of the global 
spinlock as well.

Though, I am not sure how much impact this would have compared to the 
cost of scanning the entire wait list.
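
Roughly, what I have in mind is something like this (untested sketch; 
the struct and helper names below are made up, and each per-cpu list 
gets its own lock):

/* Needs <linux/percpu.h>, <linux/spinlock.h> and <linux/list.h>.
 * Each cpu's lock and list head would be initialized at boot
 * (not shown). */
struct uintr_wait_queue {
	raw_spinlock_t lock;
	struct list_head head;
};

static DEFINE_PER_CPU(struct uintr_wait_queue, uintr_wait_queue);

/* Blocking path: queue the task on the local cpu's wait list */
static void uintr_wait_enqueue(struct uintr_upid_ctx *upid_ctx)
{
	struct uintr_wait_queue *wq;
	unsigned long flags;

	/* Disable interrupts first so we cannot migrate between
	 * picking the local list and taking its lock. */
	local_irq_save(flags);
	wq = this_cpu_ptr(&uintr_wait_queue);
	raw_spin_lock(&wq->lock);
	list_add(&upid_ctx->node, &wq->head);
	raw_spin_unlock(&wq->lock);
	local_irq_restore(flags);
}

The notification handler would then walk only the local 
this_cpu_ptr(&uintr_wait_queue) list, since UPID.NDST keeps the 
interrupt on the cpu where the task blocked.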

>> +
>> +/*
>> + * Handler for UINTR_KERNEL_VECTOR.
>> + */
>> +DEFINE_IDTENTRY_SYSVEC(sysvec_uintr_kernel_notification)
>> +{
>> +	/* TODO: Add entry-exit tracepoints */
>> +	ack_APIC_irq();
>> +	inc_irq_stat(uintr_kernel_notifications);
>> +
>> +	uintr_wake_up_process();
> So this interrupt happens for any of those notifications. How are they
> differentiated?


Unfortunately, there is no help from the hardware here to identify the 
intended target.

When a task blocks, we:
* switch the UINV to the kernel NV
* leave SN as 0
* leave UPID.NDST pointing to the current cpu
* add the task to the wait list

When the notification interrupt arrives, we:
* scan the entire wait list to check if the ON bit is set for any UPID 
(very inefficient)
* set SN to 1 for that task
* change the UINV back to the user NV
* remove the task from the list and make it runnable

We could end up detecting multiple tasks that have the ON bit set. The 
notification interrupt for any task that has ON set is expected to 
arrive soon anyway. So no harm done here.

The main issue here is that we would end up scanning the entire list for 
every interrupt. I am not sure if there is any way we could optimize this.
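
For reference, the blocking side of the handshake above is roughly this 
(simplified sketch, not the literal patch code; UINTR_KERNEL_VECTOR is 
the kernel NV mentioned earlier):

static int uintr_block(struct uintr_upid_ctx *upid_ctx)
{
	unsigned long flags;

	/* Route notifications to the kernel vector. SN stays 0 and
	 * UPID.NDST still points to the current cpu. */
	upid_ctx->upid->nc.nv = UINTR_KERNEL_VECTOR;

	spin_lock_irqsave(&uintr_wait_lock, flags);
	upid_ctx->waiting = true;
	list_add(&upid_ctx->node, &uintr_wait_list);
	/* Set the task state under the same lock the interrupt handler
	 * takes before wake_up_process(), so the wakeup cannot be
	 * lost. */
	set_current_state(TASK_INTERRUPTIBLE);
	spin_unlock_irqrestore(&uintr_wait_lock, flags);

	schedule();

	/* On the interrupt path the handler restored the user NV, set
	 * SN and removed us from the list before waking us. A signal
	 * wakeup leaves us on the list; that is cleaned up later in
	 * uintr_remove_task_wait(). */
	return signal_pending(current) ? -EINTR : 0;
}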


> Again. We have proper wait primitives.

I'll use proper wait primitives next time.
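
Something along these lines, I assume (sketch only; this adds a 
wait_queue_head_t to struct uintr_upid_ctx, which is not in the current 
patch):

	/* In struct uintr_upid_ctx (assumed new field), initialized
	 * with init_waitqueue_head() at registration:
	 *	wait_queue_head_t waitq;
	 */

	/* Blocking side: sleep until the handler clears 'waiting'.
	 * Returns -ERESTARTSYS if interrupted by a signal. */
	ret = wait_event_interruptible(upid_ctx->waitq, !upid_ctx->waiting);

	/* Interrupt side, instead of the open-coded wake_up_process(): */
	upid_ctx->waiting = false;
	wake_up(&upid_ctx->waitq);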
>> +	return -EINTR;
>> +}
>> +
>> +/*
>> + * Runs in interrupt context.
>> + * Scan through all UPIDs to check if any interrupt is ongoing.
>> + */
>> +void uintr_wake_up_process(void)
>> +{
>> +	struct uintr_upid_ctx *upid_ctx, *tmp;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&uintr_wait_lock, flags);
>> +	list_for_each_entry_safe(upid_ctx, tmp, &uintr_wait_list, node) {
>> +		if (test_bit(UPID_ON, (unsigned long *)&upid_ctx->upid->nc.status)) {
>> +			set_bit(UPID_SN, (unsigned long *)&upid_ctx->upid->nc.status);
>> +			upid_ctx->upid->nc.nv = UINTR_NOTIFICATION_VECTOR;
>> +			upid_ctx->waiting = false;
>> +			wake_up_process(upid_ctx->task);
>> +			list_del(&upid_ctx->node);
> So any of these notification interrupts does a global mass wake up? How
> does that make sense?


The wake up happens only for the tasks that have a pending interrupt. 
They are going to be woken up soon anyway.

>> +/* Called when task is unregistering/exiting */
>> +static void uintr_remove_task_wait(struct task_struct *task)
>> +{
>> +	struct uintr_upid_ctx *upid_ctx, *tmp;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&uintr_wait_lock, flags);
>> +	list_for_each_entry_safe(upid_ctx, tmp, &uintr_wait_list, node) {
>> +		if (upid_ctx->task == task) {
>> +			pr_debug("wait: Removing task %d from wait\n",
>> +				 upid_ctx->task->pid);
>> +			upid_ctx->upid->nc.nv = UINTR_NOTIFICATION_VECTOR;
>> +			upid_ctx->waiting = false;
>> +			list_del(&upid_ctx->node);
>> +		}
> What? You have to do a global list walk to find the entry which you
> added yourself?

Duh! I could have gotten the upid_ctx from the task_struct itself. Will 
fix this.
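
Something like this (sketch; the exact way to reach the ctx from the 
task_struct depends on where the series ends up storing it):

static void uintr_remove_task_wait(struct task_struct *task)
{
	/* Hypothetical field; the ctx is reachable from the task */
	struct uintr_upid_ctx *upid_ctx = task->thread.upid_ctx;
	unsigned long flags;

	spin_lock_irqsave(&uintr_wait_lock, flags);
	if (upid_ctx->waiting) {
		pr_debug("wait: Removing task %d from wait\n", task->pid);
		upid_ctx->upid->nc.nv = UINTR_NOTIFICATION_VECTOR;
		upid_ctx->waiting = false;
		list_del(&upid_ctx->node);
	}
	spin_unlock_irqrestore(&uintr_wait_lock, flags);
}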

Thanks,

Sohil

