Message-ID: <41925d66-f4b5-4f96-93f6-b29437399005@oracle.com>
Date: Fri, 17 Jan 2025 11:11:01 -0600
From: Mike Christie <michael.christie@...cle.com>
To: Haoran Zhang <wh1sper@....edu.cn>
Cc: mst@...hat.com, jasowang@...hat.com, pbonzini@...hat.com,
stefanha@...hat.com, eperezma@...hat.com,
virtualization@...ts.linux.dev, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] vhost/scsi: Fix improper cleanup in vhost_scsi_set_endpoint()
On 1/17/25 10:50 AM, Mike Christie wrote:
>> You are welcome. There is another bug I was about to report, but I'm not
>> sure whether I should create a new thread. I feel that the original design
>> of dynamically allocating new vs_tpgs in vhost_scsi_set_endpoint is not
>> intuitive, and copying TPGs before setting the target doesn't seem
>> logical. Since you are already refactoring the code, maybe I should post
>> it here so we can address these issues in one go.
> Yeah, I'm not sure if being able to call vhost_scsi_set_endpoint multiple
> times and pick up new tpgs is actually a feature or not.
Oh yeah, by this I mean: should we just do the following?
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 718fa4e0b31e..372a7bfda14c 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1699,6 +1699,11 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
 		}
 	}
 
+	if (vs->vs_tpg) {
+		ret = -EEXIST;
+		goto out;
+	}
+
 	len = sizeof(vs_tpg[0]) * VHOST_SCSI_MAX_TARGET;
 	vs_tpg = kzalloc(len, GFP_KERNEL);
 	if (!vs_tpg) {
I can't tell whether being able to call VHOST_SCSI_SET_ENDPOINT multiple
times without calling VHOST_SCSI_CLEAR_ENDPOINT between calls is an
actual feature the code was trying to support, or whether that is the
root bug. It's so buggy that I feel it was never meant to be called
like this, so we should just add a check at the beginning of the
function.

The worry would be that if there are userspace tools doing this and
living with the bugs, then the above patch would add a regression.
However, I think that's highly unlikely given how useless/buggy the
current behavior is.
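
For anyone following along, here is a rough userspace sketch of the
sequence in question (the wwpn is made up, and a real client would also
need VHOST_SET_OWNER plus vring setup so vhost_vq_access_ok() passes
before SET_ENDPOINT can succeed). It just illustrates the semantics the
patch above enforces: a second SET_ENDPOINT without an intervening
CLEAR_ENDPOINT would now fail with EEXIST instead of re-scanning for
new tpgs:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int main(void)
{
	struct vhost_scsi_target target;
	int fd = open("/dev/vhost-scsi", O_RDWR);

	if (fd < 0) {
		perror("open /dev/vhost-scsi");
		return 1;
	}

	memset(&target, 0, sizeof(target));
	/* Example wwpn only; must match a configured vhost tpg. */
	strncpy(target.vhost_wwpn, "naa.5001405dc391071c",
		sizeof(target.vhost_wwpn) - 1);

	/* First call: binds the tpgs for this wwpn to the device. */
	if (ioctl(fd, VHOST_SCSI_SET_ENDPOINT, &target))
		perror("first SET_ENDPOINT");

	/*
	 * Second call without CLEAR_ENDPOINT: with the patch above
	 * this fails with EEXIST.
	 */
	if (ioctl(fd, VHOST_SCSI_SET_ENDPOINT, &target))
		perror("second SET_ENDPOINT");
	return 0;
}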