Message-ID: <63658620-7d1f-f626-b637-8f551ca07f95@redhat.com>
Date: Thu, 30 May 2019 09:22:32 +0800
From: Xiubo Li <xiubli@...hat.com>
To: Josef Bacik <josef@...icpanda.com>
Cc: axboe@...nel.dk, nbd@...er.debian.org, mchristi@...hat.com,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
atumball@...hat.com
Subject: Re: [RFC PATCH] nbd: set the default nbds_max to 0
On 2019/5/29 21:48, Josef Bacik wrote:
> On Wed, May 29, 2019 at 04:08:36PM +0800, xiubli@...hat.com wrote:
>> From: Xiubo Li <xiubli@...hat.com>
>>
>> There is a problem: when checking a device with NBD_CMD_STATUS
>> while the nbd.ko module is being inserted, some of the 16
>> /dev/nbd{0~15} devices can randomly be reported as connected
>> when they are not. This happens because the udev service in
>> user space opens the /dev/nbd{0~15} devices for sanity checks
>> when they are added in "__init nbd_init()", and then closes
>> them asynchronously.
>>
>> Signed-off-by: Xiubo Li <xiubli@...hat.com>
>> ---
>>
>> Not sure whether this patch makes sense here, because this issue can be
>> avoided by setting "nbds_max=0" when inserting the nbd.ko module.
>>
> Yeah I'd rather not make this the default, as of right now most people still
> probably use the old method of configuration and it may surprise them to
> suddenly have to do nbds_max=16 to make their stuff work. Thanks,
Sure, that makes sense to me :-)
So this patch on the mailing list will serve as a note and reminder for
others who may hit the same issue in the future.
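For anyone who runs into this, a minimal sketch of the workaround
discussed above (nbds_max is the module parameter from the patch; the
restored count of 16 matches the historical default mentioned by Josef):

```shell
# Load nbd without pre-creating any /dev/nbd* devices, so udev's
# asynchronous open/close probing cannot leave idle devices looking
# connected.
modprobe nbd nbds_max=0

# Users of the old-style configuration still need the pre-created
# devices, so the previous default can be requested explicitly:
# modprobe nbd nbds_max=16
```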
Thanks.
BRs
Xiubo
> Josef
>