Message-ID: <20250718221137.2432-1-hdanton@sina.com>
Date: Sat, 19 Jul 2025 06:11:36 +0800
From: Hillf Danton <hdanton@...a.com>
To: Nikolay Kuratov <kniv@...dex-team.ru>
Cc: linux-kernel@...r.kernel.org,
virtualization@...ts.linux.dev,
"Michael S. Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Eugenio Pérez <eperezma@...hat.com>,
Lei Yang <leiyang@...hat.com>
Subject: Re: [PATCH] vhost/net: Replace wait_queue with completion in ubufs reference
On Fri, 18 Jul 2025 16:24:14 +0300 Nikolay Kuratov wrote:
> > reinit after wait, so the chance of a missed wakeup still exists.
>
> Can you please provide more details on this? Yes, it is reinit after wait,
The missed wakeup exists whenever complete_all() is used in combination
with a reinit after the wait; it has nothing to do with vhost.
Your patch was checked simply because of the reinit, which by itself
hints at a possible mess, regardless of context.
Of course, feel free to prove that the missed wakeup disappears in
vhost even though the reinit is deployed.
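
To make the pattern concrete, here is a minimal sketch (not the vhost
code) of the combination in question, with illustrative names:

#include <linux/completion.h>

static DECLARE_COMPLETION(event);

/* signaler: wake every waiter currently sleeping on the completion */
static void signal_all(void)
{
	complete_all(&event);		/* event.done becomes UINT_MAX */
}

/* waiter: wait, then rearm the completion for the next round */
static void wait_and_rearm(void)
{
	wait_for_completion(&event);
	reinit_completion(&event);	/* event.done goes back to 0 */
}

If a second waiter enters wait_for_completion() after the first
waiter's reinit_completion(), it sleeps even though the event was
already signaled; and if reinit_completion() runs concurrently with
complete_all(), the reinit can erase the wakeup before any waiter
sees it. Neither hazard depends on vhost.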
> but the waits should not be concurrent. I checked multiple code paths towards
> vhost_net_flush(); they are all protected by the device mutex, except
> vhost_net_release(). In the case of vhost_net_release(), would it not be a
> problem in itself if it were called in parallel with some ioctl on the device?
>
> Also, the rationale for this is that put_and_wait() waits for the
> zero-refcount condition. A zero refcount means that after put_and_wait() the
> calling thread is the only owner of the ubufs structure. If multiple threads
> got the ubufs structure with a zero refcount, how could either thread be sure
> that the other one is not freeing it?
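
For reference, the put-and-wait pattern that argument relies on looks
roughly like this (illustrative names, not the vhost ones; assuming a
single waiter per object):

#include <linux/refcount.h>
#include <linux/completion.h>

struct obj {
	refcount_t refcount;
	struct completion wait;
};

static void obj_init(struct obj *o)
{
	refcount_set(&o->refcount, 1);
	init_completion(&o->wait);
}

static void obj_put(struct obj *o)
{
	if (refcount_dec_and_test(&o->refcount))
		complete(&o->wait);	/* last reference is gone */
}

static void obj_put_and_wait(struct obj *o)
{
	obj_put(o);
	wait_for_completion(&o->wait);
	/*
	 * The refcount is zero here, so this thread is the sole owner
	 * and may free or reinitialize o, but only if no second thread
	 * can ever wait on the same object, which is exactly the
	 * serialization claim being made above.
	 */
}

Note that complete() wakes a single waiter, so the exclusivity claim
stands or falls with the claim that the waits are never concurrent.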