Message-ID: <ZV8kPHlGoPYKORQn@dragonet>
Date: Thu, 23 Nov 2023 19:06:52 +0900
From: "Dae R. Jeong" <threeearcat@...il.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Yewon Choi <woni9911@...il.com>, Bryan Tan <bryantan@...are.com>,
Vishnu Dasa <vdasa@...are.com>,
VMware PV-Drivers Reviewers <pv-drivers@...are.com>,
Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] vmci_host: use smp_load_acquire/smp_store_release when
accessing vmci_host_dev->ct_type
On Thu, Nov 23, 2023 at 08:44:46AM +0000, Greg Kroah-Hartman wrote:
> On Thu, Nov 23, 2023 at 04:49:22PM +0900, Yewon Choi wrote:
> > On Wed, Nov 22, 2023 at 02:34:55PM +0000, Greg Kroah-Hartman wrote:
> > > On Wed, Nov 22, 2023 at 09:20:08PM +0900, Yewon Choi wrote:
> > > > In vmci_host.c, a missing memory barrier between vmci_host_dev->ct_type
> > > > and vmci_host_dev->context may cause an uninitialized data access.
> > > >
> > > > One possible execution flow is as follows:
> > > >
> > > > CPU 1 (vmci_host_do_init_context)
> > > > =====
> > > > vmci_host_dev->context = vmci_ctx_create(...) // 1
> > > > vmci_host_dev->ct_type = VMCIOBJ_CONTEXT; // 2
> > > >
> > > > CPU 2 (vmci_host_poll)
> > > > =====
> > > > if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) { // 3
> > > > context = vmci_host_dev->context; // 4
> > > > poll_wait(..., &context->host_context.wait_queue, ...);
> > > >
> > > > While ct_type serves as a flag indicating that context is initialized,
> > > > there is no memory barrier preventing reordering between 1, 2 and
> > > > 3, 4. So it is possible that 4 reads an uninitialized
> > > > vmci_host_dev->context, in which case a null dereference occurs
> > > > in poll_wait().
> > > >
> > > > In order to prevent this kind of reordering, we change plain accesses
> > > > to ct_type into smp_load_acquire() and smp_store_release().
> > > >
> > > > Signed-off-by: Yewon Choi <woni9911@...il.com>
> > > > ---
> > > > drivers/misc/vmw_vmci/vmci_host.c | 40 ++++++++++++++++++-------------
> > > > 1 file changed, 23 insertions(+), 17 deletions(-)
> > > >
> > > > diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
> > > > index abe79f6fd2a7..e83b6e0fe55b 100644
> > > > --- a/drivers/misc/vmw_vmci/vmci_host.c
> > > > +++ b/drivers/misc/vmw_vmci/vmci_host.c
> > > > @@ -139,7 +139,7 @@ static int vmci_host_close(struct inode *inode, struct file *filp)
> > > > {
> > > > struct vmci_host_dev *vmci_host_dev = filp->private_data;
> > > >
> > > > - if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) {
> > > > + if (smp_load_acquire(&vmci_host_dev->ct_type) == VMCIOBJ_CONTEXT) {
> > >
> > > This is getting tricky, why not use a normal lock to ensure that all is
> > > safe? close isn't on a "fast path", so this shouldn't be a speed issue,
> > > right?
> > >
> >
> > I think using locks can be considered orthogonal to correcting memory ordering.
>
> But they ensure proper memory ordering.

Yes, using a lock obviously ensures memory ordering.
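
For comparison, a minimal sketch of the lock-based variant you suggest
(assuming a single mutex in struct vmci_host_dev guarding both ct_type
and context; the context teardown is elided):

	static int vmci_host_close(struct inode *inode, struct file *filp)
	{
		struct vmci_host_dev *vmci_host_dev = filp->private_data;

		/* Assumed: one mutex guarding ct_type and context. */
		mutex_lock(&vmci_host_dev->lock);
		if (vmci_host_dev->ct_type == VMCIOBJ_CONTEXT) {
			/*
			 * Under the lock, ct_type == VMCIOBJ_CONTEXT implies
			 * context is fully initialized, and no reader can
			 * observe the two fields mid-update.
			 */
			/* ... tear down vmci_host_dev->context as before ... */
			vmci_host_dev->context = NULL;
			vmci_host_dev->ct_type = VMCIOBJ_NOT_SET;
		}
		mutex_unlock(&vmci_host_dev->lock);

		kfree(vmci_host_dev);
		filp->private_data = NULL;
		return 0;
	}

That said, every reader (including vmci_host_poll()) would then have
to take the same mutex, which is part of why I am not sure which
approach is better.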
> > If the lock is needed, we will need to add locks to all of these
> > accessors. I cannot be sure which is better. Besides that, it seems
> > to be a separate issue.
>
> Nope, I think it's the same issue :)
>
> > On the other hand, the current implementation doesn't guarantee memory
> > ordering, which leads to wrong behavior.
> > This patch fixes this issue by adding primitives.
>
> But it's still wrong, again, what keeps the value from changing right
> after checking it?
>
> thanks,
>
> greg k-h

It seems that VMCI assumes that vmci_host_dev->context is not NULL if
vmci_host_dev->ct_type == VMCIOBJ_CONTEXT (all readers of
vmci_host_dev->context first check that vmci_host_dev->ct_type is
VMCIOBJ_CONTEXT, and then access vmci_host_dev->context without
checking whether it is NULL or not). So I think this patch makes this
assumption explicit.
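
To spell out the pairing the patch establishes (condensed from the
patch; elided arguments are as in the driver):

	/* Writer, vmci_host_do_init_context(): */
	vmci_host_dev->context = vmci_ctx_create(...);
	/* Release: the store to context is visible before ct_type is. */
	smp_store_release(&vmci_host_dev->ct_type, VMCIOBJ_CONTEXT);

	/* Reader, e.g. vmci_host_poll(): */
	if (smp_load_acquire(&vmci_host_dev->ct_type) == VMCIOBJ_CONTEXT) {
		/*
		 * The acquire pairs with the release above, so context is
		 * guaranteed to be initialized and non-NULL here.
		 */
		context = vmci_host_dev->context;
		poll_wait(..., &context->host_context.wait_queue, ...);
	}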

As you said, we need to ensure that vmci_host_dev->context is not
changed after checking vmci_host_dev->ct_type. But
(1) the only place that changes vmci_host_dev->context is
vmci_host_close(), and
(2) (I think) vmci_host_close() does not run concurrently with readers
of vmci_host_dev->context. IIUC, all readers of vmci_host_dev->context
are system calls (e.g., ioctl handlers or the poll handler), so I think
the reference count of the file saves us here. (Otherwise, syzkaller
would tell us the truth, maybe?)

At least, this patch introduces no change to the logic beyond the
memory-ordering guarantees, so I think this patch is safe?

Best regards,
Dae R. Jeong