Date:   Wed, 27 Jul 2022 11:25:29 -0600
From:   Mathieu Poirier <mathieu.poirier@...aro.org>
To:     Arnaud POULIQUEN <arnaud.pouliquen@...s.st.com>
Cc:     Chris Lew <quic_clew@...cinc.com>, bjorn.andersson@...aro.org,
        linux-remoteproc@...r.kernel.org, linux-arm-msm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/4] Introduction of rpmsg_rx_done

On Mon, Jul 18, 2022 at 10:54:30AM -0600, Mathieu Poirier wrote:
> On Mon, 18 Jul 2022 at 02:26, Arnaud POULIQUEN
> <arnaud.pouliquen@...s.st.com> wrote:
> >
> > Hello Chris,
> >
> > On 6/8/22 03:16, Chris Lew wrote:
> > > This series proposes an implementation for the rpmsg framework to do
> > > deferred cleanup of buffers provided in the rx callback. The current
> > > implementation assumes that the client is done with the buffer after
> > > returning from the rx callback.
> > >
> > > In some cases where the data size is large, the client may want to
> > > avoid copying the data in the rx callback for later processing. This
> > > series proposes two new facilities for a client to signal that it
> > > wants to hold on to a buffer after the rx callback.
> > > They are:
> > >  - New API rpmsg_rx_done() to tell the rpmsg framework the client is
> > >    done with the buffer
> > >  - New return codes for the rx callback to signal that the client will
> > >    hold onto a buffer and later call rpmsg_rx_done()
> > >
> > > This series implements the qcom_glink_native backend for these new
> > > facilities.
> >
> > The API you proposed seems quite smart to me and adaptable to the rpmsg
> > virtio backend.
> >
> > My main concern is about the release of the buffer when the endpoint
> > is destroyed.
> >
> > Should the buffer release be handled by each service or by the
> > core?
> >
> > I wonder if the buffer list could be managed by the core by adding
> > the list to the rpmsg_endpoint structure. On destroy, the core could
> > call rx_done for each remaining buffer in the list...

Arnaud has a valid point, though rpmsg_endpoint_ops::destroy_ept() is there for
this kind of cleanup (and this patchset is making use of it).

I think we can leave things as they are now and consider moving to the core if
we see a trend in future submissions.
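For reference, Arnaud's suggestion amounts to something like the following: the core keeps a per-endpoint list of deferred buffers and drains it on the destroy path. This is a rough userspace sketch, not the patchset's actual code; the struct and function names (`ept_hold`, `ept_destroy`) are illustrative only.

```c
/* Illustrative sketch of a core-managed held-buffer list on the
 * endpoint, released when the endpoint is destroyed. Names and
 * layout are hypothetical, not taken from the patchset. */
#include <assert.h>
#include <stdlib.h>

struct held_buf {
	struct held_buf *next;
	void *data;
};

struct endpoint {
	struct held_buf *held;	/* buffers a client deferred instead of
				 * releasing in the rx callback */
};

/* Client defers a buffer: the core records it on the endpoint. */
static void ept_hold(struct endpoint *ept, void *data)
{
	struct held_buf *hb = malloc(sizeof(*hb));

	hb->data = data;
	hb->next = ept->held;
	ept->held = hb;
}

/* Destroy path: the core releases every buffer the client never
 * returned via rx_done, so nothing leaks. Returns the count freed. */
static int ept_destroy(struct endpoint *ept)
{
	int released = 0;

	while (ept->held) {
		struct held_buf *hb = ept->held;

		ept->held = hb->next;
		free(hb);	/* stand-in for handing the buffer back */
		released++;
	}
	return released;
}
```

With this shape the per-service drivers would not each need their own cleanup, at the cost of the core tracking state that today only the glink backend cares about.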

Thanks,
Mathieu

> >
> > I let Bjorn and Mathieu advise on this...
> 
> Thanks for taking a look, Arnaud.  I'll get to this shortly.
> 
> >
> > Thanks,
> > Arnaud
> >
> > >
> > > Chris Lew (4):
> > >   rpmsg: core: Add rx done hooks
> > >   rpmsg: char: Add support to use rpmsg_rx_done
> > >   rpmsg: glink: Try to send rx done in irq
> > >   rpmsg: glink: Add support for rpmsg_rx_done
> > >
> > >  drivers/rpmsg/qcom_glink_native.c | 112 ++++++++++++++++++++++++++++++--------
> > >  drivers/rpmsg/rpmsg_char.c        |  50 ++++++++++++++++-
> > >  drivers/rpmsg/rpmsg_core.c        |  20 +++++++
> > >  drivers/rpmsg/rpmsg_internal.h    |   1 +
> > >  include/linux/rpmsg.h             |  24 ++++++++
> > >  5 files changed, 183 insertions(+), 24 deletions(-)
> > >
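The client-side pattern the cover letter describes can be sketched as below: the rx callback returns a "defer" code to keep the buffer instead of copying it, and the client releases it later through the rx-done hook. This is a compilable userspace model, not kernel code; the return-code names and the `rx_done()` stand-in are assumptions for illustration.

```c
/* Sketch of the proposed deferred-release flow: the rx callback
 * signals it is holding the buffer, then releases it later. The
 * constants and helpers here are illustrative stand-ins. */
#include <assert.h>
#include <stddef.h>

#define RPMSG_HANDLED	0	/* framework may reclaim the buffer now */
#define RPMSG_DEFER	1	/* client holds it; will call rx_done later */

struct buf {
	int in_use;
	char data[64];
};

static struct buf *held;	/* buffer the client is still processing */

/* rx callback: for large payloads, skip the copy and defer release. */
static int rx_cb(struct buf *b, size_t len)
{
	if (len > 32) {
		held = b;
		return RPMSG_DEFER;
	}
	return RPMSG_HANDLED;	/* small payload: done immediately */
}

/* Stand-in for rpmsg_rx_done(): the framework reclaims the buffer. */
static void rx_done(struct buf *b)
{
	b->in_use = 0;
}
```

The benefit shows up for large messages: the callback returns quickly without a memcpy, and the buffer goes back to the framework only once the client has finished processing it.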
