Message-ID: <alpine.DEB.2.10.1703071658480.8160@sstabellini-ThinkPad-X260>
Date:   Tue, 7 Mar 2017 17:06:57 -0800 (PST)
From:   Stefano Stabellini <sstabellini@...nel.org>
To:     Boris Ostrovsky <boris.ostrovsky@...cle.com>
cc:     Stefano Stabellini <sstabellini@...nel.org>,
        xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
        Stefano Stabellini <stefano@...reto.com>, jgross@...e.com,
        Eric Van Hensbergen <ericvh@...il.com>,
        Ron Minnich <rminnich@...dia.gov>,
        Latchesar Ionkov <lucho@...kov.net>,
        v9fs-developer@...ts.sourceforge.net
Subject: Re: [PATCH 6/7] xen/9pfs: receive responses

On Tue, 7 Mar 2017, Boris Ostrovsky wrote:
> On 03/06/2017 03:01 PM, Stefano Stabellini wrote:
> > Upon receiving a notification from the backend, schedule the
> > p9_xen_response work_struct. p9_xen_response checks if any responses are
> > available, if so, it reads them one by one, calling p9_client_cb to send
> > them up to the 9p layer (p9_client_cb completes the request). Handle the
> > ring following the Xen 9pfs specification.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@...reto.com>
> > CC: boris.ostrovsky@...cle.com
> > CC: jgross@...e.com
> > CC: Eric Van Hensbergen <ericvh@...il.com>
> > CC: Ron Minnich <rminnich@...dia.gov>
> > CC: Latchesar Ionkov <lucho@...kov.net>
> > CC: v9fs-developer@...ts.sourceforge.net
> > ---
> >  net/9p/trans_xen.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 53 insertions(+)
> > 
> > diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> > index 4e26556..1ca9246 100644
> > --- a/net/9p/trans_xen.c
> > +++ b/net/9p/trans_xen.c
> > @@ -149,6 +149,59 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
> >  
> >  static void p9_xen_response(struct work_struct *work)
> >  {
> > +	struct xen_9pfs_front_priv *priv;
> > +	struct xen_9pfs_dataring *ring;
> > +	RING_IDX cons, prod, masked_cons, masked_prod;
> > +	struct xen_9pfs_header h;
> > +	struct p9_req_t *req;
> > +	int status = REQ_STATUS_ERROR;
> 
> 
> Doesn't this need to go inside the loop?

Yes, thank you!


> > +
> > +	ring = container_of(work, struct xen_9pfs_dataring, work);
> > +	priv = ring->priv;
> > +
> > +	while (1) {
> > +		cons = ring->intf->in_cons;
> > +		prod = ring->intf->in_prod;
> > +		rmb();
> 
> 
> Is this rmb() or mb()? (Or, in fact, virt_XXX()?) You used mb() in the
> previous patch.
 
I think they should all be virt_XXX, thanks.


> > +
> > +		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> > +			notify_remote_via_irq(ring->irq);
> > +			return;
> > +		}
> > +
> > +		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> > +		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> > +
> > +		xen_9pfs_read_packet(ring->ring.in,
> > +				masked_prod, &masked_cons,
> > +				XEN_9PFS_RING_SIZE, &h, sizeof(h));
> > +
> > +		req = p9_tag_lookup(priv->client, h.tag);
> > +		if (!req || req->status != REQ_STATUS_SENT) {
> > +			dev_warn(&priv->dev->dev, "Wrong req tag=%x\n", h.tag);
> > +			cons += h.size;
> > +			mb();
> > +			ring->intf->in_cons = cons;
> > +			continue;
> 
> 
> I don't know what xen_9pfs_read_packet() does so perhaps it's done there
> but shouldn't the pointers be updated regardless of the 'if' condition?

This is the error path - the index is increased immediately. In the
non-error case, we do that right after the next read_packet call, a few
lines below.


> > +		}
> > +
> > +		memcpy(req->rc, &h, sizeof(h));
> > +		req->rc->offset = 0;
> > +
> > +		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> > +		xen_9pfs_read_packet(ring->ring.in,
> > +				masked_prod, &masked_cons,
> > +				XEN_9PFS_RING_SIZE, req->rc->sdata, h.size);
> > +
> > +		mb();
> > +		cons += h.size;
> > +		ring->intf->in_cons = cons;

                   Here ^


> > +		if (req->status != REQ_STATUS_ERROR)
> > +			status = REQ_STATUS_RCVD;
> > +
> > +		p9_client_cb(priv->client, req, status);
> > +	}
> >  }
> >  
> >  static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
> > 
> 
