Message-ID: <87k2rryczs.fsf@vitty.brq.redhat.com>
Date: Tue, 15 Sep 2015 16:27:03 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: "James E.J. Bottomley" <JBottomley@...n.com>
Cc: linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org,
"K. Y. Srinivasan" <kys@...rosoft.com>,
Long Li <longli@...rosoft.com>,
Dexuan Cui <decui@...rosoft.com>
Subject: Re: [PATCH v2] scsi: introduce short_inquiry flag for broken host adapters
Vitaly Kuznetsov <vkuznets@...hat.com> writes:
> Some host adapters (e.g. Hyper-V storvsc) are known for not respecting the
> SPC-2/3/4 requirement for 'INQUIRY data (see table ...) shall contain at
> least 36 bytes'. As a result we get tons of 'scsi 0:7:1:1: scsi scan:
> INQUIRY result too short (5), using 36' messages on the console. This can be
> problematic for slow consoles. Introduce short_inquiry host template flag
> to avoid printing error messages for such adapters.
>
> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> ---
> Changes since v1:
> - This is a successor of previously sent "scsi_scan: move 'INQUIRY result
> too short' message to debug level" patch. Instead of moving the message
> to debug level for all adapters introduce a special 'short_inquiry' flag
> for host template [inspired by James Bottomley].
James,
sorry for the ping, but could you please let me know your opinion? This is
not a cosmetic fix: the serial port on Hyper-V is extremely slow, and users
get soft lockups simply because we print too much. Here is a freshly
booted guest with SCSI and FC adapters connected:
# dmesg | grep -c INQUIRY
2076
(my other pending '[PATCH] scsi_scan: don't dump trace when
scsi_prep_async_scan() is called twice' is related to the same issue).
See also: https://lkml.org/lkml/2015/9/6/119
Thanks,
[...]
--
Vitaly