Message-ID: <20150218173746.GF8152@l.oracle.com>
Date: Wed, 18 Feb 2015 12:37:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To: Roger Pau Monné <roger.pau@...rix.com>
Cc: Bob Liu <bob.liu@...cle.com>, xen-devel@...ts.xen.org,
david.vrabel@...rix.com, linux-kernel@...r.kernel.org,
felipe.franciosi@...rix.com, axboe@...com, hch@...radead.org,
avanzini.arianna@...il.com
Subject: Re: [PATCH 04/10] xen/blkfront: separate ring information to an new
struct
On Wed, Feb 18, 2015 at 06:28:49PM +0100, Roger Pau Monné wrote:
> El 15/02/15 a les 9.18, Bob Liu ha escrit:
> > A ring is the representation of a hardware queue. This patch separates the
> > ring information out of blkfront_info into a new struct blkfront_ring_info,
> > in preparation for supporting real multi hardware queues.
> >
> > Signed-off-by: Arianna Avanzini <avanzini.arianna@...il.com>
> > Signed-off-by: Bob Liu <bob.liu@...cle.com>
> > ---
> > drivers/block/xen-blkfront.c | 403 +++++++++++++++++++++++--------------------
> > 1 file changed, 218 insertions(+), 185 deletions(-)
> >
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 5a90a51..aaa4a0e 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -102,23 +102,15 @@ MODULE_PARM_DESC(max, "Maximum amount of segments in indirect requests (default
> > #define BLK_RING_SIZE __CONST_RING_SIZE(blkif, PAGE_SIZE)
> >
> > /*
> > - * We have one of these per vbd, whether ide, scsi or 'other'. They
> > - * hang in private_data off the gendisk structure. We may end up
> > - * putting all kinds of interesting stuff here :-)
> > + * Per-ring info.
> >  + * A blkfront_info structure can be associated with one or more blkfront_ring_info,
> >  + * depending on how many hardware queues are supported.
> > */
> > -struct blkfront_info
> > -{
> > +struct blkfront_ring_info {
> > spinlock_t io_lock;
> > - struct mutex mutex;
> > - struct xenbus_device *xbdev;
> > - struct gendisk *gd;
> > - int vdevice;
> > - blkif_vdev_t handle;
> > - enum blkif_state connected;
> > int ring_ref;
> > struct blkif_front_ring ring;
> > unsigned int evtchn, irq;
> > - struct request_queue *rq;
> > struct work_struct work;
> > struct gnttab_free_callback callback;
> > struct blk_shadow shadow[BLK_RING_SIZE];
> > @@ -126,6 +118,22 @@ struct blkfront_info
> > struct list_head indirect_pages;
> > unsigned int persistent_gnts_c;
> > unsigned long shadow_free;
> > + struct blkfront_info *info;
>
> AFAICT you seem to have a list of persistent grants, indirect pages and
> a grant table callback for each ring, isn't this supposed to be shared
> between all rings?
>
> I don't think we should be going down that route, or else we can hoard a
> large amount of memory and grants.
It does remove the lock that each ring thread would otherwise have to take
to access those. The grant counts can be capped at a smaller per-ring value
so that the overall number stays the same as it was with the previous
version. As in: each ring gets MAX_GRANTS / nr_online_cpus().
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/