Message-ID: <20181213184717.GA8436@apalos>
Date:   Thu, 13 Dec 2018 20:47:17 +0200
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Ioana Ciocoi Radulescu <ruxandra.radulescu@....com>
Cc:     Jesper Dangaard Brouer <brouer@...hat.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        Ioana Ciornei <ioana.ciornei@....com>,
        "dsahern@...il.com" <dsahern@...il.com>,
        Camelia Alexandra Groza <camelia.groza@....com>
Subject: Re: [PATCH v2 net-next 0/8] dpaa2-eth: Introduce XDP support

Hi Ioana,
> > >
> > > Well, if you don't have to use 64kb pages you can use the page_pool API
> > > (only used from mlx5 atm) and get the XDP recycling for free. The memory
> > > 'waste' for 4kb pages isn't too much if the platforms the driver sits on
> > > have decent amounts of memory (and the number of descriptors used is not
> > > too high).
> > > We still have work in progress with Jesper (just posted an RFC) with
> > > improvements on the API.
> > > Using it is fairly straightforward. This is a patchset on Marvell's mvneta
> > > driver with the API changes needed:
> > > https://www.spinics.net/lists/netdev/msg538285.html
> > >
> > > If you need 64kb pages you would have to introduce page recycling and
> > > sharing like the intel/mlx drivers do in your driver.
> > 
> > Thanks a lot for the info, will look into this. Do you have any pointers
> > as to why the full page restriction exists in the first place? Sorry if it's
> > a dumb question, but I haven't found details on this and I'd really like
> > to understand it.
> 
> After a quick glance, not sure we can use page_pool API.
> 
> The problem is our driver is not ring-based: we have a single
> buffer pool used by all Rx queues, so using page_pool allocations
> would imply adding a layer of synchronization in our driver.

We had similar concerns a while ago. Have a look at:
https://www.spinics.net/lists/netdev/msg481494.html
https://www.mail-archive.com/netdev@vger.kernel.org/msg236820.html

Jesper and I have briefly discussed this, and this type of hardware is
something we need to consider for the page_pool API.
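
To give an idea of what the per-ring usage looks like, here is a rough
sketch (my_priv/my_rxq, RX_RING_SIZE and the field names are made up; only
the page_pool and xdp_rxq_info calls are the actual API, and it assumes
xdp_rxq_info_reg() was already called for the queue):

#include <linux/err.h>
#include <linux/dma-mapping.h>
#include <net/page_pool.h>
#include <net/xdp.h>

/* Hypothetical per-Rx-queue setup: page_pool assumes a single pool per
 * ring/NAPI context, which is why a buffer pool shared by all Rx queues
 * would need an extra synchronization layer on top of it. */
static int my_rxq_create_page_pool(struct my_priv *priv, struct my_rxq *rxq)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one 4K page per frame */
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
		.pool_size	= RX_RING_SIZE,		/* roughly the ring size */
		.nid		= dev_to_node(priv->dev),
		.dev		= priv->dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	int err;

	rxq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rxq->page_pool))
		return PTR_ERR(rxq->page_pool);

	/* Tell the XDP core that frames on this queue come from a page_pool,
	 * so they get recycled on XDP_DROP/XDP_TX completion without
	 * driver-side refcount tricks. */
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
					 rxq->page_pool);
	if (err)
		return err;	/* error unwinding omitted for brevity */

	return 0;
}

That one-pool-per-queue shape is exactly what keeps page_pool lockless in
the NAPI path, and it's the part that doesn't map cleanly onto a single
shared buffer pool.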

> 
> I'm still trying to figure out how deep is the trouble we're in
> for not using a single page per packet in our driver, considering
> we don't support XDP_REDIRECT yet. Guess I'll wait for Jesper's
> answer on this.
I might be wrong, but I don't think anything apart from performance will
actually break, since no memory leaves the driver (no XDP_REDIRECT implemented).
Jesper will probably be able to think of corner cases I might be missing.
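
To illustrate, without XDP_REDIRECT the Rx hot path only has to handle the
local verdicts, roughly like the sketch below (my_rxq and my_run_xdp are
made-up names, not the actual dpaa2-eth code):

#include <linux/filter.h>	/* bpf_prog_run_xdp(), bpf_warn_invalid_xdp_action() */
#include <linux/bpf_trace.h>	/* trace_xdp_exception() */
#include <net/xdp.h>

/* Rx-path verdict handling without XDP_REDIRECT: every frame is either
 * passed to the stack, sent back out (XDP_TX) or dropped, so pages never
 * leave the driver's control. */
static u32 my_run_xdp(struct my_rxq *rxq, struct bpf_prog *prog,
		      struct xdp_buff *xdp)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:
	case XDP_TX:
		break;
	case XDP_REDIRECT:	/* not implemented, treat as invalid */
	default:
		bpf_warn_invalid_xdp_action(act);
		/* fall through */
	case XDP_ABORTED:
		trace_xdp_exception(rxq->netdev, prog, act);
		/* fall through */
	case XDP_DROP:
		/* caller recycles the buffer back to the pool */
		break;
	}

	return act;
}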

Then again, if you write and test the driver now, you'll end up rewriting
and re-testing it if you ever need that feature later.

/Ilias
