Message-ID: <20200331174016.GA254911@minitux>
Date:   Tue, 31 Mar 2020 10:40:16 -0700
From:   Bjorn Andersson <bjorn.andersson@...aro.org>
To:     Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
Cc:     Chris Lew <clew@...eaurora.org>, gregkh@...uxfoundation.org,
        davem@...emloft.net, smohanad@...eaurora.org, jhugo@...eaurora.org,
        kvalo@...eaurora.org, hemantk@...eaurora.org,
        linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
        netdev@...r.kernel.org
Subject: Re: [PATCH v3 6/7] net: qrtr: Add MHI transport layer

On Tue 31 Mar 04:23 PDT 2020, Manivannan Sadhasivam wrote:

> Hi Bjorn,
> 
> On Mon, Mar 30, 2020 at 03:19:32PM -0700, Bjorn Andersson wrote:
> > On Mon 30 Mar 02:49 PDT 2020, Manivannan Sadhasivam wrote:
> > 
> > > Hi Chris,
> > > 
> > > On Thu, Mar 26, 2020 at 03:54:42PM -0700, Chris Lew wrote:
> > > > 
> > > > 
> > > > On 3/25/2020 3:37 AM, Manivannan Sadhasivam wrote:
> > > > > Hi Bjorn,
> > > > > 
> > > > > + Chris Lew
> > > > > 
> > > > > On Tue, Mar 24, 2020 at 01:39:52PM -0700, Bjorn Andersson wrote:
> > > > > > On Mon 23 Mar 23:10 PDT 2020, Manivannan Sadhasivam wrote:
> > [..]
> > > > > > > +	spin_lock_irqsave(&qdev->ul_lock, flags);
> > > > > > > +	list_for_each_entry(pkt, &qdev->ul_pkts, node)
> > > > > > > +		complete_all(&pkt->done);
> > > > > 
> > > > > Chris, shouldn't we also do list_del(&pkt->node) here?
> > > > > 
> > > > 
> > > > No, this isn't a full cleanup. With the "early notifier" we just unblock
> > > > any threads waiting for the ul_callback; those threads will wake, check
> > > > in_reset, and return an error back to the caller. Any list cleanup will
> > > > be done in the ul_callbacks that the MHI bus invokes for each queued
> > > > packet right before device remove.
> > > > 
> > > > Again, to simplify the code, we can probably remove the in_reset handling
> > > > since it's not required with the current feature set.
> > > > 
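For context, a rough sketch of the wait-and-check pattern described above,
i.e. what the reset-time complete_all() unblocks. The struct and field names
(qrtr_mhi_pkt, qrtr_mhi_dev_stub, in_reset, done) are assumed for
illustration and are not taken from the actual patch:

#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/list.h>

/* Assumed shapes, for illustration only -- not the patch's real structs. */
struct qrtr_mhi_pkt {
	struct list_head node;
	struct completion done;
};

struct qrtr_mhi_dev_stub {
	atomic_t in_reset;
};

/*
 * Sender side: wait for the ul_callback to complete the packet.  On a
 * fatal error the reset path runs complete_all() on every queued packet,
 * so the waiter wakes up, notices in_reset and returns an error to the
 * caller instead of waiting for the full timeout.
 */
static int qcom_mhi_qrtr_wait_pkt(struct qrtr_mhi_dev_stub *qdev,
				  struct qrtr_mhi_pkt *pkt)
{
	if (!wait_for_completion_timeout(&pkt->done, msecs_to_jiffies(5000)))
		return -ETIMEDOUT;

	if (atomic_read(&qdev->in_reset))
		return -ECONNRESET;

	return 0;
}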
> > > 
> > > So, since we are not getting a status_cb for fatal errors, I think we
> > > should just remove the status_cb, in_reset and timeout code.
> > > 
> > 
> > Looks reasonable.
> > 
> > [..]
> > > > I thought having the client get an error on timeout and resend the
> > > > packet would be better than silently dropping it. In practice, we've
> > > > really only seen timeout or ul_callback errors on unrecoverable
> > > > failures, so I think the timeout handling can definitely be redone.
> > > > 
> > > 
> > > You mean we can just remove the timeout handling part and return after
> > > kref_put()?
> > > 
> > 
> > If all messages are "generated" by qcom_mhi_qrtr_send() and "released"
> > in qcom_mhi_qrtr_ul_callback(), I don't think you need the refcounting
> > at all.
> > 
> 
> Hmm, you're right. We can move the packet releasing part to ul_callback now.
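If it helps, here is a minimal sketch of what the send path could look like
with the per-packet refcount and completion dropped entirely: the skb is
handed to the MHI core and released only in the ul callback (or immediately
on a queueing error). The qrtr_mhi_dev wrapper and its fields are assumed
for illustration:

#include <linux/dma-direction.h>
#include <linux/kernel.h>
#include <linux/mhi.h>
#include <linux/skbuff.h>

#include "qrtr.h"		/* struct qrtr_endpoint, from net/qrtr */

/* Assumed wrapper struct, for illustration only. */
struct qrtr_mhi_dev {
	struct qrtr_endpoint ep;
	struct mhi_device *mhi_dev;
};

static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
{
	struct qrtr_mhi_dev *qdev = container_of(ep, struct qrtr_mhi_dev, ep);
	int rc;

	rc = skb_linearize(skb);
	if (rc) {
		kfree_skb(skb);
		return rc;
	}

	/*
	 * On success the skb is owned by the MHI core until the ul
	 * (TX-done) callback hands it back, so it must not be freed here.
	 */
	rc = mhi_queue_skb(qdev->mhi_dev, DMA_TO_DEVICE, skb, skb->len,
			   MHI_EOT);
	if (rc)
		kfree_skb(skb);

	return rc;
}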
> 
> > 
> > Ideally, though, it would be nice not to have to carry a separate list
> > of packets (and hope that it stays in sync with the MHI core) and
> > instead have the ul callback somehow let us derive the skb to be
> > freed.
> > 
> 
> Yep, the MHI stack holds the skb in the buf_addr member of mhi_result, so
> we can just use the following to get the skb in the ul_callback:
> 
> struct sk_buff *skb = (struct sk_buff *)mhi_res->buf_addr;
> 
> This lets us avoid the pkt structure and the ul_pkts list and use the skb
> directly everywhere. At the same time, I think we can also remove the
> ul_lock, which was added to protect the ul_pkts list.
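For illustration, a minimal sketch of the ul (TX-done) callback along those
lines, assuming the standard mhi_result fields (buf_addr, transaction_status);
no private packet list or lock is needed:

#include <linux/mhi.h>
#include <linux/skbuff.h>

/* Sketch only: release the skb recovered straight from the MHI result. */
static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
				      struct mhi_result *mhi_res)
{
	struct sk_buff *skb = mhi_res->buf_addr;

	/* MHI is done with the buffer either way, so it is freed here. */
	if (mhi_res->transaction_status)
		kfree_skb(skb);
	else
		consume_skb(skb);
}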
> 
> Let me know your opinion; I'll then send a series with this modified QRTR
> MHI client driver and the MHI suspend/resume patches.
> 

This looks more robust than having the separate list shadowing the
internal state of the MHI core.

+1

Thanks,
Bjorn
