Message-ID: <ad9b1163-fe3b-6793-c799-75a9c4ce87f9@pensando.io>
Date: Tue, 15 Sep 2020 10:20:11 -0700
From: Shannon Nelson <snelson@...sando.io>
To: "Keller, Jacob E" <jacob.e.keller@...el.com>,
Jakub Kicinski <kuba@...nel.org>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH v3 net-next 2/2] ionic: add devlink firmware update
On 9/15/20 9:50 AM, Keller, Jacob E wrote:
>
>> -----Original Message-----
>> From: Jakub Kicinski <kuba@...nel.org>
>> Sent: Tuesday, September 15, 2020 8:51 AM
>> To: Shannon Nelson <snelson@...sando.io>
>> Cc: Keller, Jacob E <jacob.e.keller@...el.com>; netdev@...r.kernel.org;
>> davem@...emloft.net
>> Subject: Re: [PATCH v3 net-next 2/2] ionic: add devlink firmware update
>>
>> On Mon, 14 Sep 2020 18:14:22 -0700 Shannon Nelson wrote:
>>> So now we're beginning to dance around timeout boundaries - how do we
>>> define the beginning and end of a timeout boundary, and how does it
>>> relate to the component and label? Currently, if either the component
>>> or status_msg changes, the devlink user program prints a newline to
>>> start a new status line. The done and total values from each notify
>>> message are used to compute the displayed %, with no dependence on
>>> any previous done or total values, so the total doesn't need to stay
>>> the same from status message to status message; even if the component
>>> and label remain the same, devlink just prints whatever % gets
>>> calculated that time.
>> I think systemd removes the timeout marking when it moves on to the
>> next job, and so should devlink when it moves on to the next
>> component/status_msg.
Works for me. I'll try to note these UI implementation hints somewhere
useful.
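
For reference, the current display behavior I described above boils down
to roughly this (a sketch only, not the actual devlink tool source; the
names here are made up, and component/status_msg are assumed non-NULL):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static char last_component[64];
static char last_msg[64];

/* Called once per flash-status notification received from the kernel. */
static void show_status(const char *component, const char *status_msg,
                        uint64_t done, uint64_t total)
{
        /* A change in either component or status_msg starts a new line. */
        if (strcmp(component, last_component) ||
            strcmp(status_msg, last_msg)) {
                printf("\n");
                snprintf(last_component, sizeof(last_component), "%s",
                         component);
                snprintf(last_msg, sizeof(last_msg), "%s", status_msg);
        }

        /*
         * The percentage comes from this message's done/total alone,
         * with no dependence on earlier messages.
         */
        if (total)
                printf("\r[%s] %s %3u%%", component, status_msg,
                       (unsigned int)(done * 100 / total));
        else
                printf("\r[%s] %s", component, status_msg);
        fflush(stdout);
}
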
>>
>>> I'm thinking that the behavior of the timeout value should remain
>>> separate from the component and status_msg values, such that once given,
>>> then the userland countdown continues on that timeout. Each subsequent
>>> notify, regardless of component or label changes, should continue
>>> reporting that same timeout value for as long as it applies to the
>>> action. If a new timeout value is reported, the countdown starts over.
>> What if no timeout exists for the next action? Driver reports 0 to
>> "clear"?
Yes, that's what I would expect.
>>
>>> This continues until either the countdown finishes or the driver reports
>>> the flash as completed. I think this allows the flexibility for
>>> multiple steps that Jake alludes to above. Does this make sense?
>> I disagree. This doesn't match reality/driver behavior and will lead to
>> timeouts counting to some random value; that is to say, the driver's
>> timeout instant will not match when user space reaches its timeout.
>>
>> The timeout should be per notification, because drivers send a
>> notification per command, and commands have timeouts.
>>
> This is how everything operates today. Just send a new status for every command.
>
> Is that not how your case works?
>
>> The timeout is only needed if there is no progress to report, i.e.
>> the driver is waiting for something to happen.
>>
> Right.
>
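
Putting the driver side of that together, I'd expect something like the
following: one notification per command/step, a timeout only when the
driver is waiting with nothing to count, and 0 reported to clear it for
the next step. The status_notify call is the existing API; the
timeout-carrying call is written the way this series proposes it and is
only illustrative until the API settles.

#include <net/devlink.h>

static void example_flash_steps(struct devlink *dl)
{
        /* Quiet step: no progress to count, so say how long we may wait. */
        devlink_flash_update_timeout_notify(dl, "Installing firmware",
                                            NULL, 300);

        /* ... wait/poll for the device to finish the install ... */

        /* The next step has no timeout, so report 0 to clear it. */
        devlink_flash_update_timeout_notify(dl, "Selecting firmware",
                                            NULL, 0);
        devlink_flash_update_status_notify(dl, "Selecting firmware",
                                           NULL, 0, 0);
}
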
>>> What should the userland program do when the timeout expires? Start
>>> counting backwards? Stop waiting? Do we care to define this at the moment?
>> [component] bla bla X% (timeout reached)
> Yep. I don't think userspace should bail or do anything but display here. Basically: the driver will time out and then end the update process with an error. The timeout value is just a useful display so that users aren't confused about why there is no output while they wait.
>
>
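On the tool side, that display-only handling could be roughly this
(again just a sketch of the idea, not devlink code; the names are made
up):

#include <stdio.h>
#include <time.h>

static time_t deadline;     /* 0 means no timeout is in effect */

/* Called when a notification carries a timeout value (0 clears it). */
static void note_timeout(unsigned long timeout)
{
        deadline = timeout ? time(NULL) + (time_t)timeout : 0;
}

/* Called periodically to refresh the current status line. */
static void refresh_line(const char *component, const char *status_msg)
{
        if (!deadline)
                printf("\r[%s] %s", component, status_msg);
        else if (time(NULL) >= deadline)
                printf("\r[%s] %s (timeout reached)", component, status_msg);
        else
                printf("\r[%s] %s (%lds left)", component, status_msg,
                       (long)(deadline - time(NULL)));
        fflush(stdout);
}
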
If individual notify messages have a timeout, how can we have a
progress percentage reported along with a timeout? This implies to me
that the timeout applies to the component:status_msg pair, yet there are
many notify messages needed to show the percentage progress. This is why
I was suggesting that if the timeout, component, and status message
haven't changed, then we're still working under the same timeout.
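
For concreteness, the case I have in mind looks like this on the driver
side (the timeout call here is the proposed one, shown only to make the
question concrete):

#include <net/devlink.h>

static void example_download(struct devlink *dl, unsigned long fw_size)
{
        unsigned long offset;

        /* Many notify messages for a single component:status_msg pair. */
        for (offset = 0; offset < fw_size; offset += 4096)
                devlink_flash_update_status_notify(dl, "Downloading", NULL,
                                                   offset, fw_size);

        /*
         * If the timeout is per notify message, which of the messages
         * above carries it?  If it is per component:status_msg pair,
         * it reads more like one call covering the whole step:
         */
        devlink_flash_update_timeout_notify(dl, "Downloading", NULL, 60);
}
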
sln