Date:   Fri, 2 Oct 2020 13:19:05 -0300
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Maor Gottlieb <maorg@...dia.com>
CC:     Leon Romanovsky <leon@...nel.org>,
        Doug Ledford <dledford@...hat.com>,
        Maor Gottlieb <maorg@...lanox.com>,
        Christoph Hellwig <hch@....de>,
        "Daniel Vetter" <daniel@...ll.ch>, David Airlie <airlied@...ux.ie>,
        <dri-devel@...ts.freedesktop.org>,
        <intel-gfx@...ts.freedesktop.org>,
        "Jani Nikula" <jani.nikula@...ux.intel.com>,
        Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
        <linux-kernel@...r.kernel.org>, <linux-rdma@...r.kernel.org>,
        Rodrigo Vivi <rodrigo.vivi@...el.com>,
        "Roland Scheidegger" <sroland@...are.com>,
        Tvrtko Ursulin <tvrtko.ursulin@...el.com>,
        VMware Graphics <linux-graphics-maintainer@...are.com>
Subject: Re: [PATCH rdma-next v4 1/4] lib/scatterlist: Add support in dynamic
 allocation of SG table from pages

On Fri, Oct 02, 2020 at 07:11:33PM +0300, Maor Gottlieb wrote:
> 
> On 10/2/2020 6:02 PM, Jason Gunthorpe wrote:
> > On Sun, Sep 27, 2020 at 09:46:44AM +0300, Leon Romanovsky wrote:
> > > +struct scatterlist *__sg_alloc_table_from_pages(struct sg_table *sgt,
> > > +		struct page **pages, unsigned int n_pages, unsigned int offset,
> > > +		unsigned long size, unsigned int max_segment,
> > > +		struct scatterlist *prv, unsigned int left_pages,
> > > +		gfp_t gfp_mask)
> > >   {
> > > -	unsigned int chunks, cur_page, seg_len, i;
> > > +	unsigned int chunks, cur_page, seg_len, i, prv_len = 0;
> > > +	struct scatterlist *s = prv;
> > > +	unsigned int table_size;
> > > +	unsigned int tmp_nents;
> > >   	int ret;
> > > -	struct scatterlist *s;
> > > 
> > >   	if (WARN_ON(!max_segment || offset_in_page(max_segment)))
> > > -		return -EINVAL;
> > > +		return ERR_PTR(-EINVAL);
> > > +	if (IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) && prv)
> > > +		return ERR_PTR(-EOPNOTSUPP);
> > > +
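
As an aside, with the int -> ERR_PTR() conversion above a batching
caller ends up looking roughly like this (illustrative sketch only;
"pages", "n" and "left" are made-up bookkeeping, not names from the
series):

	struct sg_table sgt = {};
	struct scatterlist *last = NULL;

	while (left) {
		/* pin/collect the next batch of n pages into pages[] */
		last = __sg_alloc_table_from_pages(&sgt, pages, n, 0,
						   n * PAGE_SIZE, max_segment,
						   last, left - n, GFP_KERNEL);
		if (IS_ERR(last))
			return PTR_ERR(last);
		left -= n;
	}
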
> > > +	tmp_nents = prv ? sgt->nents : 0;
> > > +
> > > +	if (prv &&
> > > +	    page_to_pfn(sg_page(prv)) + (prv->length >> PAGE_SHIFT) ==
> > This calculation of the end doesn't consider sg->offset
> 
> Right, should be fixed.
> > 
> > > +	    page_to_pfn(pages[0]))
> > > +		prv_len = prv->length;
> > > 
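
For the record, folding the offset into the end-of-prv calculation
would be something like this (untested sketch):

	if (prv) {
		unsigned long next_pfn;

		/* end PFN of prv, accounting for the intra-page offset */
		next_pfn = (page_to_phys(sg_page(prv)) + prv->offset +
			    prv->length) / PAGE_SIZE;
		if (next_pfn == page_to_pfn(pages[0]))
			prv_len = prv->length;
	}
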
> > >   	/* compute number of contiguous chunks */
> > >   	chunks = 1;
> > > @@ -410,13 +461,17 @@ int __sg_alloc_table_from_pages(struct sg_table *sgt, struct page **pages,
> > >   		}
> > >   	}
> > > 
> > > -	ret = sg_alloc_table(sgt, chunks, gfp_mask);
> > > -	if (unlikely(ret))
> > > -		return ret;
> > > +	if (!prv) {
> > > +		/* Only the last allocation could be less than the maximum */
> > > +		table_size = left_pages ? SG_MAX_SINGLE_ALLOC : chunks;
> > > +		ret = sg_alloc_table(sgt, table_size, gfp_mask);
> > > +		if (unlikely(ret))
> > > +			return ERR_PTR(ret);
> > > +	}
> > This is basically redundant, right? Now that get_next_sg() can
> > allocate SGs it can just build them one by one, no need to
> > preallocate.
> > 
> > Actually all the changes to the allocation seem like overkill; just
> > allocate a single new array directly in get_next_sg() whenever it
> > needs one.
> 
> No, only the last allocation can be less than the maximum (as
> written in the comment).

The point is that get_next_sg() is fully redundant with
sg_alloc_table(): it already does the allocation whenever prv is set.
There is zero reason to keep the sg_alloc_table() call here just for
the one case where prv is not set.

Further, this cleans up the spaghetti goto in the middle of the for
loop and avoids allocating an extra chunk when the page list fully
fits in prv.

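Roughly the shape I mean, as an untested sketch (the invariant being
that every chunk holds SG_MAX_SINGLE_ALLOC entries with the last one
reserved as a chain link, which is why only the final chunk may be
smaller):

	static struct scatterlist *get_next_sg(struct sg_table *sgt,
					       struct scatterlist *cur,
					       unsigned long needed_sges,
					       gfp_t gfp_mask)
	{
		struct scatterlist *new_sg, *next_sg;
		unsigned int alloc_size;

		if (cur) {
			next_sg = sg_next(cur);
			/* room left in the current chunk? */
			if (!sg_is_last(next_sg) || needed_sges == 1)
				return next_sg;
		}

		alloc_size = min_t(unsigned long, needed_sges,
				   SG_MAX_SINGLE_ALLOC);
		new_sg = kmalloc_array(alloc_size,
				       sizeof(struct scatterlist), gfp_mask);
		if (!new_sg)
			return ERR_PTR(-ENOMEM);
		sg_init_table(new_sg, alloc_size);
		if (cur) {
			/* last entry of the old chunk becomes the chain link */
			__sg_chain(next_sg, new_sg);
			sgt->orig_nents += alloc_size - 1;
		} else {
			sgt->sgl = new_sg;
			sgt->orig_nents = alloc_size;
		}
		return new_sg;
	}
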
Given how much smaller it is, I think you should look more carefully.

Jason
