Message-ID: <3a3e9c44-1fd5-dc27-bd1c-1b6e19ffbd95@oracle.com>
Date: Thu, 25 Oct 2018 11:50:06 -0400
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Juergen Gross <jgross@...e.com>,
Joao Martins <joao.m.martins@...cle.com>
Cc: linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen: drop writing error messages to xenstore
On 10/25/18 8:36 AM, Juergen Gross wrote:
> On 11/10/2018 13:03, Joao Martins wrote:
>> On 10/11/2018 06:05 AM, Juergen Gross wrote:
>>> On 10/10/2018 18:57, Boris Ostrovsky wrote:
>>>> On 10/10/18 11:53 AM, Juergen Gross wrote:
>>>>> On 10/10/2018 17:09, Joao Martins wrote:
>>>>>> On 10/09/2018 05:09 PM, Juergen Gross wrote:
>>>>>>> xenbus_va_dev_error() will try to write error messages to Xenstore
>>>>>>> under the error/<dev-name>/error node (with <dev-name> being something
>>>>>>> like "device/vbd/51872"). This write will normally fail, and another
>>>>>>> message about that failure is then added to dmesg.
>>>>>>>
>>>>>>> I believe this is a remnant from very ancient times, as it was added
>>>>>>> in the first pvops rush of commits in 2007.
>>>>>>>
>>>>>>> So, as a minimal step, remove the additional message emitted when the
>>>>>>> write to Xenstore fails.
>>>>>>>
>>>>>>> Signed-off-by: Juergen Gross <jgross@...e.com>
>>>>>>> ---
>>>>>>> I am considering removing the Xenstore write altogether, but I'm not
>>>>>>> sure it isn't needed, e.g. by xend-based installations. So please
>>>>>>> speak up in case you know why this write is there.
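
[For reference: the function under discussion looks roughly like the sketch
below -- a simplified reconstruction of xenbus_va_dev_error() from
drivers/xen/xenbus/xenbus_client.c, not the exact upstream source. The
second dev_err() call is the "additional message" the patch removes.]

#include <linux/slab.h>   /* kmalloc(), kasprintf(), kfree() */
#include <xen/xenbus.h>   /* struct xenbus_device, xenbus_write(), XBT_NIL */

static void xenbus_va_dev_error(struct xenbus_device *dev, int err,
				const char *fmt, va_list ap)
{
	unsigned int len;
	char *printf_buffer, *path_buffer;

	printf_buffer = kmalloc(4096, GFP_KERNEL);
	if (!printf_buffer)
		return;

	/* "<errno> <formatted message>", logged to the guest's dmesg. */
	len = sprintf(printf_buffer, "%i ", -err);
	vsnprintf(printf_buffer + len, 4096 - len, fmt, ap);
	dev_err(&dev->dev, "%s\n", printf_buffer);

	/* Mirror the message into Xenstore under error/<dev-name>/error. */
	path_buffer = kasprintf(GFP_KERNEL, "error/%s", dev->nodename);
	if (!path_buffer ||
	    xenbus_write(XBT_NIL, path_buffer, "error", printf_buffer))
		/* This is the extra dmesg line the patch drops. */
		dev_err(&dev->dev, "failed to write error node for %s (%s)\n",
			dev->nodename, printf_buffer);

	kfree(printf_buffer);
	kfree(path_buffer);
}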
>>>>>> So this:
>>>>>>
>>>>>> "This will fail normally and another message about this failure is added to dmesg."
>>>>>>
>>>>>> Brings me to the question: what about {stub,driver}domains? Ideally you
>>>>>> shouldn't have to look at a domU's dmesg from the control domain, no? I
>>>>>> can't remember any other error node, but if something fails, e.g. netfront
>>>>>> fails to allocate an unbound event channel, how do you know the cause from
>>>>>> the control domain's perspective?
>>>>>>
>>>>>> Irrespective of xend: isn't this 'error' node the only one that
>>>>>> propagates per-device error causes from the domU?
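
[For illustration: a control domain tool could, in principle, read such an
error node via libxenstore along the lines of the sketch below. This is a
hypothetical example, not existing tooling; the domid (5) and the device
path are made up, and the path layout assumes the frontend writes
error/<dev-name>/error relative to its own home path. Build with
-lxenstore.]

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
	struct xs_handle *xs = xs_open(0);
	unsigned int len;
	char *msg;

	if (!xs)
		return 1;

	/* Hypothetical error node of domid 5's vbd frontend. */
	msg = xs_read(xs, XBT_NULL,
		      "/local/domain/5/error/device/vbd/51872/error", &len);
	if (msg) {
		printf("frontend error: %s\n", msg);
		free(msg);   /* xs_read() returns malloc'd data */
	}

	xs_close(xs);
	return 0;
}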
>>>>> How does an error message in Xenstore help you in dom0 when a frontend
>>>>> driver couldn't do its job? Is there anything you can do about it as a
>>>>> Xen admin?
>>>> The admin may want to know, for example, that a hotplug in the guest failed.
>>> Shouldn't he ask the guest for that? There are dozens of other possible
>>> problems that can make hotplug fail without writing anything to Xenstore.
>>>
>> But I think nothing stops people from using their own hotplug script that could
>> use this error node (or even something else).
>>
>>> This might be interesting for development/test purposes, but I really
>>> question whether it should stay in mature code.
>>>
>> You're right on all of it: it doesn't convey the error in an agnostic
>> manner, ATM it doesn't report all errors involved in device setup, and when
>> a xenbus_dev_fatal happens you might end up looking at the guest anyway.
>> But there might be users of this node right now, e.g. setups with a bunch
>> of known/trusted Linux guests working as backends (which also use this
>> error node, it's not just frontends *I think*), where you could recognize
>> the error messages and inform the admin -- maybe QubesOS? It is merely an
>> informative error-message node, but it seems better than a bare
>> XenbusClosed state, which many different causes could lead to. Anyhow, just
>> my 2c.
> If there are any users, they will be on rather old Xen versions only, as
> writing the Xenstore node _fails_ with newer Xen versions (that was the
> original reason for writing this patch).
>
> So back to my patch: any reason not to take it? After all, it will only
> remove the not-very-helpful kernel message that writing the Xenstore
> node failed.
For the patch itself:
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@...cle.com>
(For the possible future removal of the write altogether --- I'd rather
keep it. "rather old versions" --- tragically, we are still on 4.4)