Message-Id: <20190529080836.13031-1-xiubli@redhat.com>
Date: Wed, 29 May 2019 16:08:36 +0800
From: xiubli@...hat.com
To: josef@...icpanda.com, axboe@...nel.dk, nbd@...er.debian.org
Cc: mchristi@...hat.com, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, atumball@...hat.com,
Xiubo Li <xiubli@...hat.com>
Subject: [RFC PATCH] nbd: set the default nbds_max to 0
From: Xiubo Li <xiubli@...hat.com>

When we try to check the nbd devices' NBD_CMD_STATUS while the nbd.ko
module is being loaded, some of the 16 /dev/nbd{0~15} devices can
randomly be reported as connected even though they are not. This is
because the udev service in user space opens the /dev/nbd{0~15}
devices to do some sanity checks when they are added in
"__init nbd_init()", and then closes them asynchronously.

Signed-off-by: Xiubo Li <xiubli@...hat.com>
---
Not sure whether this patch makes sense here, since this issue can
also be avoided by passing "nbds_max=0" when loading the nbd.ko module.
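
For reference, here is a rough sketch (from memory of mainline nbd.c
around this time, not part of this diff) of how __init nbd_init()
consumes nbds_max: it pre-creates nbds_max devices at module load, and
it is these freshly added /dev/nbdX nodes that udev opens and closes.

        /*
         * Sketch only: nbds_max devices are pre-created when the
         * module is loaded.  With nbds_max == 0 this loop creates
         * nothing, so udev has no newly added /dev/nbdX nodes to
         * probe, and devices are instead created on demand through
         * the netlink interface.
         */
        mutex_lock(&nbd_index_mutex);
        for (i = 0; i < nbds_max; i++)
                nbd_dev_add(i);
        mutex_unlock(&nbd_index_mutex);
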
drivers/block/nbd.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 4c1de1c..98be6ca 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -137,7 +137,7 @@ struct nbd_cmd {
 
 #define NBD_DEF_BLKSIZE 1024
 
-static unsigned int nbds_max = 16;
+static unsigned int nbds_max;
 static int max_part = 16;
 static struct workqueue_struct *recv_workqueue;
 static int part_shift;
@@ -2310,6 +2310,6 @@ static void __exit nbd_cleanup(void)
 MODULE_LICENSE("GPL");
 
 module_param(nbds_max, int, 0444);
-MODULE_PARM_DESC(nbds_max, "number of network block devices to initialize (default: 16)");
+MODULE_PARM_DESC(nbds_max, "number of network block devices to initialize (default: 0)");
 module_param(max_part, int, 0444);
 MODULE_PARM_DESC(max_part, "number of partitions per device (default: 16)");
--
1.8.3.1