Open Source and information security mailing list archives
 
Date:   Mon, 6 Jan 2020 15:28:31 -0500
From:   Jason Baron <jbaron@...mai.com>
To:     Davidlohr Bueso <dave@...olabs.net>, rpenyaev@...e.de
Cc:     akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
        normalperson@...t.net, viro@...iv.linux.org.uk,
        Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH] fs/epoll: rework safewake for CONFIG_DEBUG_LOCK_ALLOC

Hi,

Thanks for looking at this.

On 1/6/20 2:38 PM, Davidlohr Bueso wrote:
> Some comments on:
> 
>       f6520c52084 (epoll: simplify ep_poll_safewake() for CONFIG_DEBUG_LOCK_ALLOC)
> 
> For one this does not play nice with preempt_rt as disabling irqs and
> then taking a spinlock_t is a no-no; the critical region becomes
> preemptible. This is particularly important as -rt is being mainlined.
> 

hmmm, but before the spinlock is taken there is a preempt_disable() call.

> Secondly, it is really ugly compared to what we had before - albeit it
> no longer has to deal with all the ep_call_nested() checks, but I doubt
> that overhead matters at all under CONFIG_DEBUG_LOCK_ALLOC.
> 

Yes, the main point of the patch is to continue to remove dependencies
on ep_call_nested(), and then eventually to remove it completely.

> While the current logic avoids nesting by disabling irqs during the whole
> path, this seems like overkill under debug. This patch proposes using
> this_cpu_inc_return() and then taking the irq-safe lock - albeit an irq
> could come in between these two operations and mess up the subclass. If
> this is unacceptable, we could always revert the patch, as this was
> never a problem in the first place.

I personally don't want to introduce false positives. But I'm not quite
sure on that point - I think the subclass will still always increase on
nested calls; it may just skip some numbers. I'm not sure offhand whether
that confuses lockdep. Perhaps not?


Thanks,

-Jason


> 
> Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
> ---
> 
> This is pretty much untested - I wanted to see what is the best way to
> address the concerns first.
> 
> XXX: I don't think we need get/put_cpu() around the call, this was intended
> only for ep_call_nested() tasks_call_list checking afaict.
> 
>  fs/eventpoll.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/eventpoll.c b/fs/eventpoll.c
> index 67a395039268..97d036deff3e 100644
> --- a/fs/eventpoll.c
> +++ b/fs/eventpoll.c
> @@ -558,16 +558,17 @@ static void ep_poll_safewake(wait_queue_head_t *wq)
>  	unsigned long flags;
>  	int subclass;
>  
> -	local_irq_save(flags);
> -	preempt_disable();
> -	subclass = __this_cpu_read(wakeup_nest);
> -	spin_lock_nested(&wq->lock, subclass + 1);
> -	__this_cpu_inc(wakeup_nest);
> +	subclass = this_cpu_inc_return(wakeup_nest);
> +	/*
> +	 * Careful: while unlikely, an irq can occur here and mess up
> +	 * the subclass. The same goes for right before the dec
> +	 * operation, but that does not matter.
> +	 */
> +	spin_lock_irqsave_nested(&wq->lock, flags, subclass + 1);
>  	wake_up_locked_poll(wq, POLLIN);
> -	__this_cpu_dec(wakeup_nest);
> -	spin_unlock(&wq->lock);
> -	local_irq_restore(flags);
> -	preempt_enable();
> +	spin_unlock_irqrestore(&wq->lock, flags);
> +
> +	this_cpu_dec(wakeup_nest);
>  }
>  
>  #else
> 
