Message-ID: <YLVwdsa97jYjKKU6@yoga>
Date:   Mon, 31 May 2021 18:25:42 -0500
From:   Bjorn Andersson <bjorn.andersson@...aro.org>
To:     Hillf Danton <hdanton@...a.com>
Cc:     Mathieu Poirier <mathieu.poirier@...aro.org>,
        Alex Elder <elder@...aro.org>, ohad@...ery.com,
        linux-remoteproc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/1] remoteproc: use freezable workqueue for crash
 notifications

On Sat 29 May 22:07 CDT 2021, Hillf Danton wrote:

> On Sat, 29 May 2021 12:28:36 -0500 Bjorn Andersson wrote:
> >
> >Can you please explain why the mutex_lock() "requires" the context
> >executing it to be "unbound"? The lock is there to protect against
> >concurrent modifications of the state coming from e.g. sysfs.
> 
> There are simple, lightweight work items pending on the bound workqueue,
> 
> static void foo_event_fn(struct work_struct *w)
> {
> 	struct bar_struct *bar = container_of(w, struct bar_struct, work);
> 
> 	spin_lock_irq(&foo_lock);
> 	list_del(&bar->list);
> 	spin_unlock_irq(&foo_lock);
> 
> 	kfree(bar);
> 	return;
> 	/* ...or, alternatively, wake up a waiter instead of freeing: */
> 	if (waitqueue_active(&bar->waitq))	/* bar->waitq: hypothetical wait_queue_head_t */
> 		wake_up(&bar->waitq);
> }
> 
> and they are not prepared to tolerate a schedule(), which is what the
> unbound wq is allocated for.
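
To be clear on terminology, an "unbound" queue is one created with
WQ_UNBOUND, where workers aren't pinned to the submitting CPU and sleeping
in the work function is expected. A minimal, purely hypothetical sketch:

	/* hypothetical driver-private unbound queue, for work that may sleep */
	static struct workqueue_struct *foo_slow_wq;

	foo_slow_wq = alloc_workqueue("foo_slow", WQ_UNBOUND, 0);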

If you have work that is so latency sensitive that it can't tolerate other
work items sleeping momentarily, is it really a good idea to put it on the
system-wide queues - or even on a workqueue at all?
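
If the latency really is that critical, the usual answer is to give that
work its own queue rather than sharing the system-wide ones. A minimal
sketch, with hypothetical names:

	/* hypothetical: a dedicated high-priority queue, backed by its own
	 * worker pool, so foo's events don't queue behind other subsystems'
	 * (possibly sleeping) work items */
	static struct workqueue_struct *foo_event_wq;

	foo_event_wq = alloc_workqueue("foo_events", WQ_HIGHPRI, 0);

	/* instead of schedule_work(&bar->work): */
	queue_work(foo_event_wq, &bar->work);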

That said, the proposed patch does not move the work from an unbound queue
to a bound one; it simply moves it from one bound system queue to another.
Any further change in that direction should be done in a separate patch,
backed by some measurements/data.
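
For reference, the change being discussed is essentially this substitution
(written from memory, so treat it as a sketch rather than the exact diff):

	/* before: default bound system queue */
	schedule_work(&rproc->crash_handler);

	/* after: still a bound system queue, just the freezable one, so the
	 * crash handler doesn't run while the system is suspending */
	queue_work(system_freezable_wq, &rproc->crash_handler);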

Thanks,
Bjorn
