Message-ID: <20180320152725.5ea01d34@redhat.com>
Date: Tue, 20 Mar 2018 15:27:25 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Jason Wang <jasowang@...hat.com>
Cc: netdev@...r.kernel.org,
Björn Töpel <bjorn.topel@...el.com>,
magnus.karlsson@...el.com, eugenia@...lanox.com,
John Fastabend <john.fastabend@...il.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tariq Toukan <tariqt@...lanox.com>, brouer@...hat.com
Subject: Re: [bpf-next V2 PATCH 10/15] xdp: rhashtable with allocator ID to
pointer mapping
On Tue, 20 Mar 2018 10:26:50 +0800
Jason Wang <jasowang@...hat.com> wrote:
> On 19 Mar 2018 17:48, Jesper Dangaard Brouer wrote:
> > On Fri, 16 Mar 2018 16:45:30 +0800
> > Jason Wang <jasowang@...hat.com> wrote:
> >
> >> On 10 Mar 2018 00:07, Jesper Dangaard Brouer wrote:
> >>> On Fri, 9 Mar 2018 21:07:36 +0800
> >>> Jason Wang <jasowang@...hat.com> wrote:
> >>>
> >>>>>>> Use the IDA infrastructure for getting a cyclic increasing ID number,
> >>>>>>> that is used for keeping track of each registered allocator per
> >>>>>>> RX-queue xdp_rxq_info.
> >>>>>>>
> >>>>>>> Signed-off-by: Jesper Dangaard Brouer<brouer@...hat.com>
> >>>>>> A stupid question is, can we manage to unify this ID with NAPI id?
> >>>>> Sorry I don't understand the question?
> >>>> I mean, can we associate the page pool pointer with napi_struct, record
> >>>> the NAPI id in xdp_mem_info, and do the lookup through the NAPI id?
> >>> No. The driver can unreg/reg a new XDP memory model,
> >>
> >> Is there an actual use case for this?
> >
> > I believe this is the common use case. When attaching an XDP/bpf prog,
> > the driver usually wants to change the RX-ring memory model
> > (a different performance trade-off).
>
> Right, but a single driver should only have one XDP memory model.
No! -- a driver can have multiple XDP memory models, based on different
performance trade-offs and hardware capabilities.

The mlx5 (100Gbit/s) driver/hardware is a good example, which needs
different memory models. It already supports multiple RX memory models,
depending on HW support. Further, I predict that we hit a performance
limit around 42Mpps on PCIe (I can measure 36Mpps), due to a
PCI-express transactions/sec limit. The mlx5 HW supports a compressed
descriptor format which delivers packets in several pages (based on
offset and len), thus lowering the number of PCIe transactions needed.
The pitfall is that this comes with tail-room limitations, which can be
okay if e.g. the user's use-case does not involve cpumap.

Plus, when a driver needs to support AF_XDP zero-copy, that also counts
as another XDP memory model...
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer