Date: Tue, 4 Feb 2020 10:28:53 +0800
From: "sunke (E)" <sunke32@...wei.com>
To: <josef@...icpanda.com>, <axboe@...nel.dk>, <mchristi@...hat.com>
CC: <linux-block@...r.kernel.org>, <nbd@...er.debian.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [v2] nbd: add a flush_workqueue in nbd_start_device

ping

On 2020/1/22 11:18, Sun Ke wrote:
> When kzalloc fails, we may end up trying to destroy the
> workqueue from inside the workqueue.
>
> If num_connections is m (m > 2), and kzallocs No.1 ~ No.n
> (1 < n < m) succeed but No.(n + 1) fails, then
> nbd_start_device will return -ENOMEM to nbd_start_device_ioctl,
> and nbd_start_device_ioctl will return immediately without
> running flush_workqueue. However, we still have n recv
> threads. If nbd_release runs first, the recv threads may
> have to drop the last config_refs and try to destroy the
> workqueue from inside the workqueue.
>
> To fix it, add a flush_workqueue in nbd_start_device.
>
> Fixes: e9e006f5fcf2 ("nbd: fix max number of supported devs")
> Signed-off-by: Sun Ke <sunke32@...wei.com>
> ---
>  drivers/block/nbd.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> index b4607dd96185..78181908f0df 100644
> --- a/drivers/block/nbd.c
> +++ b/drivers/block/nbd.c
> @@ -1265,6 +1265,16 @@ static int nbd_start_device(struct nbd_device *nbd)
>  		args = kzalloc(sizeof(*args), GFP_KERNEL);
>  		if (!args) {
>  			sock_shutdown(nbd);
> +			/*
> +			 * If num_connections is m (2 < m),
> +			 * and NO.1 ~ NO.n(1 < n < m) kzallocs are successful.
> +			 * But NO.(n + 1) failed. We still have n recv threads.
> +			 * So, add flush_workqueue here to prevent recv threads
> +			 * dropping the last config_refs and trying to destroy
> +			 * the workqueue from inside the workqueue.
> +			 */
> +			if (i)
> +				flush_workqueue(nbd->recv_workq);
>  			return -ENOMEM;
>  		}
>  		sk_set_memalloc(config->socks[i]->sock->sk);