Message-ID: <BY1PR0201MB08852737FE73AAE2FFE90D7CB01C0@BY1PR0201MB0885.namprd02.prod.outlook.com>
Date: Fri, 5 Jan 2018 22:10:37 +0000
From: Jiaying Liang <jliang@...inx.com>
To: Arnaud Pouliquen <arnaud.pouliquen@...com>,
"ohad@...ery.com" <ohad@...ery.com>,
"bjorn.andersson@...aro.org" <bjorn.andersson@...aro.org>
CC: "linux-remoteproc@...r.kernel.org" <linux-remoteproc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [RFC] rpmsg: virtio rpmsg: Add RPMsg char driver support
> -----Original Message-----
> From: Arnaud Pouliquen [mailto:arnaud.pouliquen@...com]
> Sent: Friday, January 05, 2018 6:48 AM
> To: Jiaying Liang <jliang@...inx.com>; ohad@...ery.com;
> bjorn.andersson@...aro.org
> Cc: linux-remoteproc@...r.kernel.org; linux-kernel@...r.kernel.org; Jiaying
> Liang <jliang@...inx.com>
> Subject: Re: [RFC] rpmsg: virtio rpmsg: Add RPMsg char driver support
>
> Hi Wendy,
>
> A few remarks on your patch.
>
> On 01/05/2018 12:18 AM, Wendy Liang wrote:
> > virtio rpmsg was not implemented to use the RPMsg char driver.
> > Each virtio ns announcement will create a new RPMsg device which is
> > supposed to be bound to an RPMsg driver. It doesn't support dynamic
> > endpoints with name service per RPMsg device.
> > With RPMsg char driver, you can have multiple endpoints per RPMsg
> > device.
> >
> > Here is the change from this patch:
> > * Introduce a macro to indicate whether to use the RPMsg char driver
> > for virtio RPMsg. The RPMsg device can either be bound to
> > a simple RPMsg driver or to the RPMsg char driver.
> > * Create Virtio RPMsg char device when the virtio RPMsg driver is
> > probed.
> > * When there is a remote service announced, keep it in the virtio
> > proc remote services list.
> > * When there is an endpoint created, bind it to a remote service
> > from the remote services list. If the service doesn't exist yet,
> > create one and mark the service address as ANY.
> It would be nice, to simplify the review, if the patch were split into several
> patches (for instance, one per feature introduced).
[Wendy] These changes are all made to enable the RPMsg char driver.
Some items, such as creating an endpoint while the remote hasn't announced
the service yet, are arguably a new feature, but at the moment I am not 100%
sure whether this change follows the right direction.
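To make the use case concrete, here is a rough userspace sketch of what our
applications need: create an endpoint through the rpmsg_char control device
before the remote has announced the service. The /dev node names are
assumptions, and RPMSG_ADDR_ANY is the kernel-internal value (it is not in
the uapi header), so this is only an illustration:

/* Illustration only: device node names and numbering are assumptions. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/rpmsg.h>	/* struct rpmsg_endpoint_info, RPMSG_CREATE_EPT_IOCTL */

#define RPMSG_ADDR_ANY 0xFFFFFFFF	/* kernel-internal value, not in uapi */

int main(void)
{
	struct rpmsg_endpoint_info info = {
		.src = RPMSG_ADDR_ANY,	/* let the kernel pick a local address */
		.dst = RPMSG_ADDR_ANY,	/* remote service not announced yet */
	};
	int ctrl;

	strncpy(info.name, "foo", sizeof(info.name) - 1);

	/* control device exposed by rpmsg_char (name assumed) */
	ctrl = open("/dev/rpmsg_ctrl0", O_RDWR);
	if (ctrl < 0) {
		perror("open");
		return 1;
	}

	/* creates /dev/rpmsgN; writes to it would fail with -EPIPE until the
	 * remote announces "foo" and the destination address is filled in */
	if (ioctl(ctrl, RPMSG_CREATE_EPT_IOCTL, &info) < 0) {
		perror("RPMSG_CREATE_EPT_IOCTL");
		close(ctrl);
		return 1;
	}

	close(ctrl);
	return 0;
}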
>
> >
> > Signed-off-by: Wendy Liang <jliang@...inx.com>
> > ---
> > We have different userspace applications that use RPMsg differently;
> > what we need is an RPMsg char driver which can support multiple
> > endpoints per remote device.
> > The virtio rpmsg driver at the moment doesn't support the RPMsg char
> > driver.
> > Please advise whether this patch is the right direction. If not, any
> > suggestions? Thanks
> > ---
> > drivers/rpmsg/Kconfig | 8 +
> > drivers/rpmsg/virtio_rpmsg_bus.c | 364 ++++++++++++++++++++++++++++++++++++++-
> > 2 files changed, 365 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/rpmsg/Kconfig b/drivers/rpmsg/Kconfig
> > index 65a9f6b..746f07e 100644
> > --- a/drivers/rpmsg/Kconfig
> > +++ b/drivers/rpmsg/Kconfig
> > @@ -52,4 +52,12 @@ config RPMSG_VIRTIO
> > select RPMSG
> > select VIRTIO
> >
> > +config RPMSG_VIRTIO_CHAR
> > + bool "Enable Virtio RPMSG char device driver support"
> > + default y
> > + depends on RPMSG_VIRTIO
> > + depends on RPMSG_CHAR
> > + help
> > + Say y here to enable use of the RPMSG char device interface.
> > +
> > endmenu
> > diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c
> > b/drivers/rpmsg/virtio_rpmsg_bus.c
> > index 82b8300..6e30a3cc 100644
> > --- a/drivers/rpmsg/virtio_rpmsg_bus.c
> > +++ b/drivers/rpmsg/virtio_rpmsg_bus.c
> > @@ -56,6 +56,7 @@
> > * @sendq: wait queue of sending contexts waiting for a tx buffers
> > * @sleepers: number of senders that are waiting for a tx buffer
> > * @ns_ept: the bus's name service endpoint
> > + * @rsvcs: remote services
> > *
> > * This structure stores the rpmsg state of a given virtio remote processor
> > * device (there might be several virtio proc devices for each physical
> > @@ -75,6 +76,9 @@ struct virtproc_info {
> > wait_queue_head_t sendq;
> > atomic_t sleepers;
> > struct rpmsg_endpoint *ns_ept;
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct list_head rsvcs;
> > +#endif
> > };
> >
> > /* The feature bitmap for virtio rpmsg */
> > @@ -141,6 +145,36 @@ struct virtio_rpmsg_channel {
> > #define to_virtio_rpmsg_channel(_rpdev) \
> > container_of(_rpdev, struct virtio_rpmsg_channel, rpdev)
> >
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > +/**
> > + * struct virtio_rpmsg_rsvc - virtio RPMsg remote service
> > + * @name: name of the RPMsg remote service
> > + * @addr: RPMsg address of the remote service
> > + * @ept: local endpoint bound to the remote service
> > + * @node: list node
> > + */
> > +struct virtio_rpmsg_rsvc {
> > + char name[RPMSG_NAME_SIZE];
> > + u32 addr;
> > + struct rpmsg_endpoint *ept;
> > + struct list_head node;
> > +};
> > +
> > +/**
> > + * struct virtio_rpmsg_ept - virtio RPMsg endpoint
> > + * @rsvc: RPMsg service
> > + * @ept: RPMsg endpoint
> > + *
> > + */
> > +struct virtio_rpmsg_ept {
> > + struct virtio_rpmsg_rsvc *rsvc;
> > + struct rpmsg_endpoint ept;
> > +};
> > +
> > +#define to_virtio_rpmsg_ept(_ept) \
> > + container_of(_ept, struct virtio_rpmsg_ept, ept)
> > +#endif
> > +
> > /*
> > * We're allocating buffers of 512 bytes each for communications. The
> > * number of buffers will be computed from the number of buffers supported
> > @@ -216,6 +250,199 @@ rpmsg_sg_init(struct scatterlist *sg, void *cpu_addr, unsigned int len)
> > }
> > }
> >
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > +/**
> > + * virtio_rpmsg_find_rsvc_by_name - find the announced remote service
> > + * @vrp: virtio remote proc information
> > + * @name: remote service name
> > + *
> > + * The caller is supposed to have mutex lock before calling this function
> > + *
> > + * return NULL if no remote service has been found, otherwise, return
> > + * the remote service pointer.
> > + */
> > +static struct virtio_rpmsg_rsvc *
> > +virtio_rpmsg_find_rsvc_by_name(struct virtproc_info *vrp, char *name)
> > +{
> > + struct virtio_rpmsg_rsvc *rsvc;
> > +
> > + list_for_each_entry(rsvc, &vrp->rsvcs, node) {
> > + if (!strncmp(rsvc->name, name, RPMSG_NAME_SIZE))
> > + /* remote service has been found */
> > + return rsvc;
> > + }
> > +
> > + return NULL;
> > +}
> > +
> > +/**
> > + * virtio_rpmsg_create_rsvc_by_name - create remote service
> > + * @vrp: virtio remote proc information
> > + * @name: remote service name
> > + *
> > + * The caller is supposed to have mutex lock before calling this function
> > + *
> > + * return NULL if remote service creation failed. otherwise, return
> > + * the remote service pointer.
> > + */
> > +static struct virtio_rpmsg_rsvc *
> > +virtio_rpmsg_create_rsvc_by_name(struct virtproc_info *vrp, char *name)
> > +{
> > + struct virtio_rpmsg_rsvc *rsvc;
> > +
> > + rsvc = virtio_rpmsg_find_rsvc_by_name(vrp, name);
> > + if (rsvc)
> > + return rsvc;
> Here, I think you break the possibility to instantiate the same service
> several times.
> For instance, today if you have 2 communications in parallel:
> Core1_Appli1 <-- "foo" service --> Core2_Appli1
> Core1_Appli2 <-- "foo" service --> Core2_Appli2
[Wendy] I can add the remote service address to the input argument, so that
a remote service is identified by its name plus its service address.
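Something like the sketch below is what I have in mind; it is not part of the
posted patch and only reuses struct virtio_rpmsg_rsvc from the patch, with a
hypothetical two-argument lookup:

/* Sketch: match on service name + remote address so two instances of the
 * same service name (e.g. two "foo" channels) can coexist in vrp->rsvcs.
 * RPMSG_ADDR_ANY matches the first instance with that name. */
static struct virtio_rpmsg_rsvc *
virtio_rpmsg_find_rsvc(struct virtproc_info *vrp, const char *name, u32 addr)
{
	struct virtio_rpmsg_rsvc *rsvc;

	list_for_each_entry(rsvc, &vrp->rsvcs, node) {
		if (strncmp(rsvc->name, name, RPMSG_NAME_SIZE))
			continue;
		if (addr == RPMSG_ADDR_ANY || rsvc->addr == addr)
			return rsvc;
	}

	return NULL;
}

Passing RPMSG_ADDR_ANY would keep today's behaviour for callers that only
know the name.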
>
> 2 ways to do it:
> 1) one service and 4 endpoints
> => the issue is then to know the destination address.
> 2) 2 instances of the same service with default endpoint.
>
> > + rsvc = kzalloc(sizeof(*rsvc), GFP_KERNEL);
> > + if (!rsvc)
> > + return NULL;
> > + strncpy(rsvc->name, name, RPMSG_NAME_SIZE);
> > + list_add_tail(&rsvc->node, &vrp->rsvcs);
> > + return rsvc;
> > +}
> > +
> > +/**
> > + * virtio_rpmsg_remove_rsvc_by_name - remove remote service
> > + * @vrp: virtio remote proc information
> > + * @name: remote service name
> > + *
> > + */
> > +static void
> > +virtio_rpmsg_remove_rsvc_by_name(struct virtproc_info *vrp, char *name)
> > +{
> > + struct virtio_rpmsg_rsvc *rsvc;
> > + struct rpmsg_endpoint *ept;
> > + struct virtio_rpmsg_ept *vept;
> > +
> > + mutex_lock(&vrp->endpoints_lock);
> > + list_for_each_entry(rsvc, &vrp->rsvcs, node) {
> > + if (!strncmp(rsvc->name, name, RPMSG_NAME_SIZE)) {
> > + /* remote service has been found,
> > + * remove it
> > + */
> > + ept = rsvc->ept;
> > + if (ept) {
> > + vept = to_virtio_rpmsg_ept(ept);
> > + vept->rsvc = NULL;
> > + }
> > + list_del(&rsvc->node);
> > + kfree(rsvc);
> > + break;
> > + }
> > + }
> > + mutex_unlock(&vrp->endpoints_lock);
> > +}
> > +
> > +/**
> > + * virtio_rpmsg_create_rsvc - create remote service with channel
> > +information
> > + * @vrp: virtio remote proc information
> > + * @chinfo: channel information
> > + *
> > + * return remote service pointer if it is successfully created;
> > +otherwise,
> > + * return NULL.
> > + */
> > +static struct virtio_rpmsg_rsvc *
> > +virtio_rpmsg_create_rsvc(struct virtproc_info *vrp,
> > + struct rpmsg_channel_info *chinfo)
> > +{
> > + struct virtio_rpmsg_rsvc *rsvc;
> > +
> > + mutex_lock(&vrp->endpoints_lock);
> > + rsvc = virtio_rpmsg_create_rsvc_by_name(vrp, chinfo->name);
> > + if (rsvc) {
> > + struct virtio_device *vdev = vrp->vdev;
> > +
> > + rsvc->addr = chinfo->dst;
> > + dev_info(&vdev->dev, "Remote has announced
> service %s, %d.\n",
> > + chinfo->name, rsvc->addr);
> > + }
> > + mutex_unlock(&vrp->endpoints_lock);
> > + return rsvc;
> > +}
> > +
> > +/**
> > + * virtio_rpmsg_announce_ept_create - announce endpoint has been
> > +created
> > + * @ept: RPMsg endpoint
> > + *
> > + * return 0 if succeeded, otherwise, return error code.
> > + */
> > +static int virtio_rpmsg_announce_ept_create(struct rpmsg_endpoint *ept)
> > +{
> > + struct virtio_rpmsg_ept *vept = to_virtio_rpmsg_ept(ept);
> > + struct virtio_rpmsg_rsvc *rsvc = vept->rsvc;
> > + struct rpmsg_device *rpdev = ept->rpdev;
> > + struct virtio_rpmsg_channel *vch;
> > + struct virtproc_info *vrp;
> > + struct device *dev;
> > + int err = 0;
> > +
> > + if (!rpdev)
> > + /* If the endpoint is not connected to a RPMsg device,
> > + * no need to send the announcement.
> > + */
> > + return 0;
> > + vch = to_virtio_rpmsg_channel(rpdev);
> > + vrp = vch->vrp;
> > + dev = &ept->rpdev->dev;
> > + /* need to tell remote processor's name service about this channel ? */
> > + if (virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
> > + struct rpmsg_ns_msg nsm;
> > +
> > + strncpy(nsm.name, rsvc->name, RPMSG_NAME_SIZE);
> > + nsm.addr = ept->addr;
> > + nsm.flags = RPMSG_NS_CREATE;
> > +
> > + err = rpmsg_sendto(ept, &nsm, sizeof(nsm),
> > + RPMSG_NS_ADDR);
> > + if (err)
> > + dev_err(dev, "failed to announce service %d\n", err);
> > + }
> > +
> > + return err;
> > +}
> > +
> > +/**
> > + * virtio_rpmsg_announce_ept_destroy - announce endpoint has been
> > +destroyed
> > + * @ept: RPMsg endpoint
> > + *
> > + * return 0 if succeeded, otherwise, return error code.
> > + */
> > +static int virtio_rpmsg_announce_ept_destroy(struct rpmsg_endpoint *ept)
> > +{
> > + struct virtio_rpmsg_ept *vept = to_virtio_rpmsg_ept(ept);
> > + struct virtio_rpmsg_rsvc *rsvc = vept->rsvc;
> > + struct rpmsg_device *rpdev = ept->rpdev;
> > + struct virtio_rpmsg_channel *vch;
> > + struct virtproc_info *vrp;
> > + struct device *dev;
> > + int err = 0;
> > +
> > + if (!rpdev)
> > + /* If the endpoint is not connected to a RPMsg device,
> > + * no need to send the announcement.
> > + */
> > + return 0;
> > + vch = to_virtio_rpmsg_channel(rpdev);
> > + vrp = vch->vrp;
> > + dev = &ept->rpdev->dev;
> > + /* tell remote processor's name service we're removing this channel */
> > + if (virtio_has_feature(vrp->vdev, VIRTIO_RPMSG_F_NS)) {
> > + struct rpmsg_ns_msg nsm;
> > +
> > + strncpy(nsm.name, rsvc->name, RPMSG_NAME_SIZE);
> > + nsm.addr = ept->addr;
> > + nsm.flags = RPMSG_NS_DESTROY;
> > +
> > + err = rpmsg_sendto(ept, &nsm, sizeof(nsm),
> > + RPMSG_NS_ADDR);
> > + if (err)
> > + dev_err(dev, "failed to announce service %d\n", err);
> > + }
> > +
> > + return err;
> > +}
> > +#endif
> > +
> > /**
> > * __ept_release() - deallocate an rpmsg endpoint
> > * @kref: the ept's reference count
> > @@ -229,26 +456,53 @@ static void __ept_release(struct kref *kref)
> > {
> > struct rpmsg_endpoint *ept = container_of(kref, struct rpmsg_endpoint,
> > refcount);
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct virtio_rpmsg_ept *vept = to_virtio_rpmsg_ept(ept);
> > + struct virtio_rpmsg_channel *vch;
> > + struct virtproc_info *vrp;
> > +
> > + if (ept->rpdev) {
> > + vch = to_virtio_rpmsg_channel(ept->rpdev);
> > + vrp = vch->vrp;
> > + (void)virtio_rpmsg_announce_ept_destroy(ept);
> > + mutex_lock(&vrp->endpoints_lock);
> > + vept->rsvc->ept = NULL;
> > + mutex_unlock(&vrp->endpoints_lock);
> > + }
> > + kfree(vept);
> > +#else
> > /*
> > * At this point no one holds a reference to ept anymore,
> > * so we can directly free it
> > */
> > kfree(ept);
> > +#endif
> > }
> >
> > /* for more info, see below documentation of rpmsg_create_ept() */
> > static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
> > struct rpmsg_device *rpdev,
> > rpmsg_rx_cb_t cb,
> > - void *priv, u32 addr)
> > + void *priv,
> > + struct rpmsg_channel_info *ch)
> > {
> > int id_min, id_max, id;
> > struct rpmsg_endpoint *ept;
> > + u32 addr = ch->src;
> > struct device *dev = rpdev ? &rpdev->dev : &vrp->vdev->dev;
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct virtio_rpmsg_ept *vept;
> > + struct virtio_rpmsg_rsvc *rsvc;
> >
> > + vept = kzalloc(sizeof(*vept), GFP_KERNEL);
> > + if (!vept)
> > + return NULL;
> > + ept = &vept->ept;
> > +#else
> > ept = kzalloc(sizeof(*ept), GFP_KERNEL);
> > if (!ept)
> > return NULL;
> > +#endif
> >
> > kref_init(&ept->refcount);
> > mutex_init(&ept->cb_lock);
> > @@ -259,7 +513,7 @@ static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
> > ept->ops = &virtio_endpoint_ops;
> >
> > /* do we need to allocate a local address ? */
> > - if (addr == RPMSG_ADDR_ANY) {
> > + if (ch->src == RPMSG_ADDR_ANY) {
> > id_min = RPMSG_RESERVED_ADDRESSES;
> > id_max = 0;
> > } else {
> > @@ -277,6 +531,19 @@ static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
> > }
> > ept->addr = id;
> >
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + if (ept->addr != RPMSG_NS_ADDR) {
> > + rsvc = virtio_rpmsg_create_rsvc_by_name(vrp, ch->name);
> > + if (!rsvc)
> > + goto free_ept;
> > + vept->rsvc = rsvc;
> > + rsvc->ept = ept;
> > + dev_info(&vrp->vdev->dev, "RPMsg ept
> created, %s:%d,%d.\n",
> > + ch->name, ept->addr, rsvc->addr);
> > + (void)virtio_rpmsg_announce_ept_create(ept);
> > + }
> > +#endif
> > +
> > mutex_unlock(&vrp->endpoints_lock);
> >
> > return ept;
> > @@ -294,7 +561,7 @@ static struct rpmsg_endpoint *virtio_rpmsg_create_ept(struct rpmsg_device *rpdev
> > {
> > struct virtio_rpmsg_channel *vch = to_virtio_rpmsg_channel(rpdev);
> >
> > - return __rpmsg_create_ept(vch->vrp, rpdev, cb, priv, chinfo.src);
> > + return __rpmsg_create_ept(vch->vrp, rpdev, cb, priv, &chinfo);
> > }
> >
> > /**
> > @@ -392,6 +659,7 @@ static void virtio_rpmsg_release_device(struct device *dev)
> > kfree(vch);
> > }
> >
> > +#ifndef CONFIG_RPMSG_VIRTIO_CHAR
> > /*
> > * create an rpmsg channel using its name and address info.
> > * this function will be used to create both static and dynamic
> > @@ -444,6 +712,7 @@ static struct rpmsg_device *rpmsg_create_channel(struct virtproc_info *vrp,
> >
> > return rpdev;
> > }
> > +#endif
> >
> > /* super simple buffer "allocator" that is just enough for now */
> > static void *get_a_tx_buf(struct virtproc_info *vrp)
> > @@ -660,8 +929,21 @@ static int rpmsg_send_offchannel_raw(struct rpmsg_device *rpdev,
> > static int virtio_rpmsg_send(struct rpmsg_endpoint *ept, void *data, int len)
> > {
> > struct rpmsg_device *rpdev = ept->rpdev;
> > - u32 src = ept->addr, dst = rpdev->dst;
> > + u32 src = ept->addr;
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct virtio_rpmsg_ept *vept = to_virtio_rpmsg_ept(ept);
> > + u32 dst = vept->rsvc->addr;
> > +
> > + if (dst == RPMSG_ADDR_ANY)
> > + return -EPIPE;
> > +#else
> > + u32 dst = rpdev->dst;
> > +#endif
> >
> > + if (!rpdev) {
> > + pr_err("%s, ept %p rpdev is NULL\n", __func__, ept);
> > + return -EINVAL;
> > + }
> > return rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, true);
> > }
> >
> > @@ -685,7 +967,16 @@ static int virtio_rpmsg_send_offchannel(struct rpmsg_endpoint *ept, u32 src,
> > static int virtio_rpmsg_trysend(struct rpmsg_endpoint *ept, void *data, int len)
> > {
> > struct rpmsg_device *rpdev = ept->rpdev;
> > - u32 src = ept->addr, dst = rpdev->dst;
> > + u32 src = ept->addr;
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct virtio_rpmsg_ept *vept = to_virtio_rpmsg_ept(ept);
> > + u32 dst = vept->rsvc->addr;
> > +
> > + if (dst == RPMSG_ADDR_ANY)
> > + return -EPIPE;
> > +#else
> > + u32 dst = rpdev->dst;
> > +#endif
> >
> > return rpmsg_send_offchannel_raw(rpdev, src, dst, data, len, false);
> > } @@ -824,11 +1115,13 @@ static int rpmsg_ns_cb(struct rpmsg_device
> > *rpdev, void *data, int len,
> > void *priv, u32 src)
> > {
> > struct rpmsg_ns_msg *msg = data;
> > - struct rpmsg_device *newch;
> > struct rpmsg_channel_info chinfo;
> > struct virtproc_info *vrp = priv;
> > struct device *dev = &vrp->vdev->dev;
> > +#ifndef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct rpmsg_device *newch;
> > int ret;
> > +#endif
> >
> > #if defined(CONFIG_DYNAMIC_DEBUG)
> > dynamic_hex_dump("NS announcement: ", DUMP_PREFIX_NONE, 16,
> 1, @@
> > -863,13 +1156,25 @@ static int rpmsg_ns_cb(struct rpmsg_device *rpdev,
> void *data, int len,
> > chinfo.dst = msg->addr;
> >
> > if (msg->flags & RPMSG_NS_DESTROY) {
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + virtio_rpmsg_remove_rsvc_by_name(vrp, chinfo.name);
> > +#else
> > ret = rpmsg_unregister_device(&vrp->vdev->dev, &chinfo);
> > if (ret)
> > dev_err(dev, "rpmsg_destroy_channel failed: %d\n",
> ret);
> > +#endif
> > } else {
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct virtio_rpmsg_rsvc *rsvc;
> > +
> > + rsvc = virtio_rpmsg_create_rsvc(vrp, &chinfo);
> > + if (!rsvc)
> > + dev_err(dev, "virtio_rpmsg_create_rsvc failed\n");
> #else
> > newch = rpmsg_create_channel(vrp, &chinfo);
> > if (!newch)
> > dev_err(dev, "rpmsg_create_channel failed\n");
> > +#endif
> > }
> >
> > return 0;
> > @@ -885,6 +1190,10 @@ static int rpmsg_probe(struct virtio_device *vdev)
> > int err = 0, i;
> > size_t total_buf_space;
> > bool notify;
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct virtio_rpmsg_channel *vch;
> > + struct rpmsg_device *rp_char_dev;
> > +#endif
> >
> > vrp = kzalloc(sizeof(*vrp), GFP_KERNEL);
> > if (!vrp)
> > @@ -956,9 +1265,14 @@ static int rpmsg_probe(struct virtio_device *vdev)
> >
> > /* if supported by the remote processor, enable the name service */
> > if (virtio_has_feature(vdev, VIRTIO_RPMSG_F_NS)) {
> > + struct rpmsg_channel_info ns_chinfo;
> > +
> > + ns_chinfo.src = RPMSG_NS_ADDR;
> > + ns_chinfo.dst = RPMSG_NS_ADDR;
> > + strcpy(ns_chinfo.name, "name_service");
> > /* a dedicated endpoint handles the name service msgs */
> > vrp->ns_ept = __rpmsg_create_ept(vrp, NULL, rpmsg_ns_cb,
> > - vrp, RPMSG_NS_ADDR);
> > + vrp, &ns_chinfo);
> > if (!vrp->ns_ept) {
> > dev_err(&vdev->dev, "failed to create the ns ept\n");
> > err = -ENOMEM;
> > @@ -966,6 +1280,32 @@ static int rpmsg_probe(struct virtio_device *vdev)
> > }
> > }
> >
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + vch = kzalloc(sizeof(*vch), GFP_KERNEL);
> > + if (!vch) {
> > + err = -ENOMEM;
> > + goto free_coherent;
> > + }
> > +
> > + /* Link the channel to our vrp */
> > + vch->vrp = vrp;
> > + /* Initialize remote services list */
> > + INIT_LIST_HEAD(&vrp->rsvcs);
> > +
> > + /* Assign public information to the rpmsg_device */
> > + rp_char_dev = &vch->rpdev;
> > + rp_char_dev->ops = &virtio_rpmsg_ops;
> > +
> > + rp_char_dev->dev.parent = &vrp->vdev->dev;
> > + rp_char_dev->dev.release = virtio_rpmsg_release_device;
> > + err = rpmsg_chrdev_register_device(rp_char_dev);
>
> I also tried to use this char device to expose an rpmsg service to applications.
> The issue with this device (from my point of view) is that it has to rely on a
> kernel rpmsg device.
> I suppose your need is to expose RPMSG to userland without creating a
> platform driver.
> In this case, perhaps an alternative could be to move this part into rpmsg_char
> to allow it to be probed standalone...
[Wendy] With the existing virtio rpmsg implementation, each service is bound
to an rpmsg_device, and each rpmsg_device can have multiple static endpoints.
That is different from how rpmsg_char is meant to be used: as I read it, each
endpoint created by rpmsg_char, either dynamic (without knowing the remote
address beforehand) or static (specifying the remote endpoint), should be bound
to a service.
The rpmsg_char user APIs can meet what we need; however, we use rpmsg over
virtio, and at the moment virtio_rpmsg doesn't support rpmsg_char.
You talked about allowing rpmsg_char to be probed standalone. I think that's a
good idea; however, we still need to bind rpmsg_char to the virtio rpmsg device.
How about exposing "driver_override" of the rpmsg_device to userspace, with the
channel name information available in sysfs, so that we can dynamically bind it
to the rpmsg_char driver from userspace? We could then limit the rpmsg_char
endpoint-creation feature to static endpoints over the same channel in the
virtio_rpmsg case.
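Roughly, the userspace side could look like the sketch below. The rpmsg
"driver_override" attribute does not exist today and the device name is only a
placeholder; only the generic sysfs bind/unbind mechanism is real:

/* Userspace sketch of the proposed rebind flow; error handling trimmed. */
#include <stdio.h>

static int sysfs_write(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	/* hypothetical channel device as it would appear on the rpmsg bus */
	const char *dev = "virtio0.rpmsg-openamp-demo-channel.-1.1024";
	char path[256];

	/* 1. force this device to match the rpmsg_char driver */
	snprintf(path, sizeof(path),
		 "/sys/bus/rpmsg/devices/%s/driver_override", dev);
	if (sysfs_write(path, "rpmsg_chrdev"))
		return 1;

	/* 2. detach whatever driver is currently bound, if any */
	snprintf(path, sizeof(path),
		 "/sys/bus/rpmsg/devices/%s/driver/unbind", dev);
	sysfs_write(path, dev);

	/* 3. bind the device to rpmsg_char */
	return sysfs_write("/sys/bus/rpmsg/drivers/rpmsg_chrdev/bind", dev) ? 1 : 0;
}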
Thanks,
Wendy
>
> Regards
> Arnaud
>
> > + if (err) {
> > + kfree(vch);
> > + goto free_coherent;
> > + }
> > + dev_info(&vdev->dev, "Registered as RPMsg char device.\n"); #endif
> > +
> > /*
> > * Prepare to kick but don't notify yet - we can't do this before
> > * device is ready.
> > @@ -1009,6 +1349,9 @@ static void rpmsg_remove(struct virtio_device *vdev)
> > struct virtproc_info *vrp = vdev->priv;
> > size_t total_buf_space = vrp->num_bufs * vrp->buf_size;
> > int ret;
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + struct virtio_rpmsg_rsvc *rsvc, *tmp;
> > +#endif
> >
> > vdev->config->reset(vdev);
> >
> > @@ -1021,6 +1364,13 @@ static void rpmsg_remove(struct virtio_device *vdev)
> >
> > idr_destroy(&vrp->endpoints);
> >
> > +#ifdef CONFIG_RPMSG_VIRTIO_CHAR
> > + list_for_each_entry_safe(rsvc, tmp, &vrp->rsvcs, node) {
> > + list_del(&rsvc->node);
> > + kfree(rsvc);
> > + }
> > +#endif
> > +
> > vdev->config->del_vqs(vrp->vdev);
> >
> > dma_free_coherent(vdev->dev.parent->parent, total_buf_space,
> >