Message-ID: <20211110075412-mutt-send-email-mst@kernel.org>
Date: Wed, 10 Nov 2021 07:54:46 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
Cc: virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
Jason Wang <jasowang@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH v4 0/3] virtio support cache indirect desc
On Wed, Nov 10, 2021 at 07:53:49AM -0500, Michael S. Tsirkin wrote:
> On Mon, Nov 08, 2021 at 10:47:40PM +0800, Xuan Zhuo wrote:
> > On Mon, 8 Nov 2021 08:49:27 -0500, Michael S. Tsirkin <mst@...hat.com> wrote:
> > >
> > > Hmm, a bunch of comments got ignored. See e.g.
> > > https://lore.kernel.org/r/20211027043851-mutt-send-email-mst%40kernel.org
> > > If they aren't relevant, please add code comments or commit log text
> > > explaining the design choice.
> >
> > I believe I responded to the related questions; I am guessing that some
> > emails may have been lost.
> >
> > I have sorted out the following 6 questions; if any questions are missing,
> > please let me know.
> >
> > 1. use list_head
> > In the earliest version, I used pointers directly. You suggested that I use
> > llist_head, but llist_head relies on atomic operations; since there is no
> > concurrent access here, I used list_head instead.
> >
> > In fact, I did not increase the allocated space for list_head.
> >
> > use as desc array: | vring_desc | vring_desc | vring_desc | vring_desc |
> > use as queue item: | list_head ........................................|
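
FWIW, a minimal layout sketch of that overlay (desc_cache_entry is just an
illustrative name, not something from the patch; VIRT_QUEUE_CACHE_DESC_NUM
is the constant the patch uses): while in flight the allocation is the
indirect descriptor table, and while cached the same bytes are reinterpreted
as the list linkage, so the list_head costs no extra space.

	union desc_cache_entry {
		/* in flight: used as the indirect descriptor table */
		struct vring_desc desc[VIRT_QUEUE_CACHE_DESC_NUM];
		/* sitting in the cache: linkage on the per-vq free list */
		struct list_head node;
	};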
>
> The concern is that you touch many cache lines when removing an entry.
>
> I suggest something like:
>
> llist: add a non-atomic list_del_first
>
> One has to know what one's doing, but if one has locked the list
> preventing all accesses, then it's ok to just pop off an entry without
> atomics.
>
> Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
>
> ---
>
> diff --git a/include/linux/llist.h b/include/linux/llist.h
> index 24f207b0190b..13a47dddb12b 100644
> --- a/include/linux/llist.h
> +++ b/include/linux/llist.h
> @@ -247,6 +247,17 @@ static inline struct llist_node *__llist_del_all(struct llist_head *head)
>
> extern struct llist_node *llist_del_first(struct llist_head *head);
>
> +static inline struct llist_node *__llist_del_first(struct llist_head *head)
> +{
> + struct llist_node *first = head->first;
> +
> + if (!first)
> + return NULL;
> +
> + head->first = first->next;
> + return first;
> +}
> +
> struct llist_node *llist_reverse_order(struct llist_node *head);
>
> #endif /* LLIST_H */
>
>
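
A hedged usage sketch to go with the helper: apart from __llist_del_first
and the llist/vring types, every name below is made up for illustration, and
the only real requirement is that the caller already holds the lock that
excludes all other access to the list.

	#include <linux/llist.h>
	#include <linux/virtio_ring.h>

	#define DESC_CACHE_NUM 16	/* illustrative size, not the series' value */

	/* The descriptor array doubles as the list linkage while cached. */
	union desc_cache_entry {
		struct vring_desc desc[DESC_CACHE_NUM];
		struct llist_node llnode;
	};

	/* Caller must hold the lock serializing all access to @cache. */
	static union desc_cache_entry *desc_cache_pop(struct llist_head *cache)
	{
		struct llist_node *node = __llist_del_first(cache);

		return node ? llist_entry(node, union desc_cache_entry, llnode) : NULL;
	}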
> -----
>
>
> > 2.
> > > > + if (vq->use_desc_cache && total_sg <= VIRT_QUEUE_CACHE_DESC_NUM) {
> > > > + if (vq->desc_cache_chain) {
> > > > + desc = vq->desc_cache_chain;
> > > > + vq->desc_cache_chain = (void *)desc->addr;
> > > > + goto got;
> > > > + }
> > > > + n = VIRT_QUEUE_CACHE_DESC_NUM;
> > >
> > > Hmm. This will allocate more entries than actually used. Why do it?
> >
> >
> > This is because the size of each cache item is fixed, and the logic has been
> > modified in the latest code. I think this problem no longer exists.
> >
> >
> > 3.
> > > What bothers me here is what happens if cache gets
> > > filled on one numa node, then used on another?
> >
> > I'm thinking about another question: how would cross-NUMA access appear here
> > in the first place? The virtio desc queue has the same cross-NUMA problem, so
> > is it really necessary for us to handle the cross-NUMA case?
>
> It's true that the desc queue might be cross-NUMA, and people are looking
> for ways to improve that. That's not a reason to make things worse ...
>
To add to that, given it's a concern, how about actually
testing performance for this config?
> > The indirect desc is used together with the virtio desc, so it only needs to
> > be on the same NUMA node as the virtio desc. We could therefore allocate the
> > indirect desc cache at the same time as the virtio desc queue.
>
> Using the current node like we do now seems better.
>
> > 4.
> > > So e.g. for rx, we are wasting memory since indirect isn't used.
> >
> > In the current version, the desc cache is set up per-queue.
> >
> > So if a queue does not use the desc cache, we don't need to set it up for
> > that queue.
> >
> > For example, in virtio-net only the tx queues, and the rx queues in big
> > packet mode, enable the desc cache.
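
A hypothetical sketch of that per-queue opt-in from the driver side (the
helper name and signature are guesses for illustration, not necessarily the
series' actual API; the virtio-net fields are from the existing driver):

	int i;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		/* tx always uses indirect descriptors when available */
		virtqueue_set_desc_cache(vi->sq[i].vq, true);
		/* rx only uses indirect descriptors in big-packet mode */
		if (vi->big_packets)
			virtqueue_set_desc_cache(vi->rq[i].vq, true);
	}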
>
>
> I liked how in older versions adding indirect enabled it implicitly,
> without the need to hack drivers.
>
> > 5.
> > > Would a better API be a cache size in bytes? This controls how much
> > > memory is spent after all.
> >
> > My design is to set a threshold. When total_sg is greater than this threshold,
> > it will fall back to kmalloc/kfree. When total_sg is less than or equal to
> > this threshold, use the allocated cache.
> >
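
Roughly, reusing the names from the snippet quoted under point 2
(desc_cache_get is a made-up helper standing in for the cache-hit path, and
gfp is the mask passed into the add path):

	if (vq->use_desc_cache && total_sg <= VIRT_QUEUE_CACHE_DESC_NUM)
		desc = desc_cache_get(vq);	/* pop a cached, fixed-size array */
	else
		desc = kmalloc_array(total_sg, sizeof(*desc), gfp);

	/* and on the free path: return to the cache, or kfree(), accordingly */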
>
> I know. My question is this: do devices know what a good threshold is?
> If yes, how do they know?
>
> > 6. kmem_cache_*
> >
> > I have tested these; the performance is not as good as the method used in
> > this patch.
>
> Do you mean kmem_cache_alloc_bulk/kmem_cache_free_bulk?
> You mentioned just kmem_cache_alloc previously.
>
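
For reference, the bulk slab API in question looks like this (signatures are
from linux/slab.h; desc_cache here stands for a kmem_cache created elsewhere,
named only for the example):

	void *entries[16];
	int n;

	/* returns the number of objects actually allocated (0 on failure) */
	n = kmem_cache_alloc_bulk(desc_cache, GFP_ATOMIC, ARRAY_SIZE(entries), entries);

	/* ... use entries[0..n-1] as indirect descriptor arrays ... */

	kmem_cache_free_bulk(desc_cache, n, entries);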
> >
> > Thanks.