Date:	Tue, 2 Mar 2010 13:40:56 +0900
From:	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
To:	konrad.wilk@...cle.com
Cc:	davem@...emloft.net, fujita.tomonori@....ntt.co.jp,
	hancockrwd@...il.com, bzolnier@...il.com,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	linux-usb@...r.kernel.org
Subject: Re: Was:  Re: [RFC PATCH] fix problems with NETIF_F_HIGHDMA in
	networking, Now: SWIOTLB dynamic allocation

On Mon, 1 Mar 2010 11:34:37 -0500
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com> wrote:

> On Sun, Feb 28, 2010 at 12:16:28AM -0800, David Miller wrote:
> > From: FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
> > Date: Sun, 28 Feb 2010 03:38:19 +0900
> > 
> > > When I proposed such approach (always use swiotlb) before, IIRC,
> > > the objections were:
> > > 
> > > - better to make allocation respect dma_mask. (I don't think that this
> > >   approach is possible since we don't know which device handles data
> > >   later when we allocate memory).
> > 
> > And such objects might end up being processed by multiple devices with
> > different DMA restrictions.
> > 
> > > - swiotlb is not good for small systems since it allocates too much
> > >   memory (we can fix this though).
> > 
> > Indeed.
> 
> What would be a good mechanism for this? Enumerating all of the PCI
> devices to find out which ones are 32-bit and then allocating a chunk
> of memory based on how many there are? Say, 1MB per card?
> 
> Or maybe a simpler one: figure out how many pages we have and allocate
> based on some sliding rule (say, 8MB for under 512MB, 16MB between 512MB
> and 2GB, 32MB for 2GB to 4GB, and after that the full 64MB)?

Hmm, have you read the above objection from the embedded-systems people? :)

We can't pre-allocate that much memory (several MB), and we don't
need to.
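
To be concrete, the sliding rule quoted above would preallocate
something like this at boot (a sketch; the thresholds are just the
example numbers from your mail, and totalram_pages is the kernel's
count of RAM pages):

static unsigned long swiotlb_default_size(void)
{
	unsigned long ram_mb = totalram_pages >> (20 - PAGE_SHIFT);

	if (ram_mb < 512)
		return 8 << 20;		/* 8MB */
	if (ram_mb < 2048)
		return 16 << 20;	/* 16MB */
	if (ram_mb < 4096)
		return 32 << 20;	/* 32MB */
	return 64 << 20;		/* the full 64MB */
}

Even the smallest tier is a multi-megabyte static cost, which is
exactly what small systems object to.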

We do have to pre-allocate some for the block layer (we have to
guarantee that we can handle at least one request), but we don't need
to pre-allocate for the rest (including the network stack).
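
The block-layer reservation is bounded, though: it only has to cover
the largest single request a queue can issue, so it could be sized
from the queue limits, e.g. (a sketch; swiotlb has no such helper):

#include <linux/blkdev.h>

/* Bytes needed to bounce one maximal request on this queue. */
static unsigned long swiotlb_block_reserve(struct request_queue *q)
{
	return (unsigned long)queue_max_sectors(q) << 9; /* sectors -> bytes */
}

With a typical 512KB max request size, that is well under a megabyte.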

Providing this guarantee is a bit difficult, since dma_map_* doesn't
know who is at the upper layer; it cannot tell whether it needs to
allocate from the pre-allocated pool or not. I thought about adding a
'dma attribute flag' to struct device (to be exact, to struct
device_dma_parameters).
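
A minimal sketch of that idea (the new field and the helper are made
up for illustration; today the struct only has max_segment_size and
segment_boundary_mask):

struct device_dma_parameters {
	unsigned int max_segment_size;
	unsigned long segment_boundary_mask;
	unsigned int needs_reserved_pool:1;	/* hypothetical attribute flag */
};

/* The swiotlb map path could then pick a pool per device: */
static bool swiotlb_use_reserved_pool(struct device *dev)
{
	return dev->dma_parms && dev->dma_parms->needs_reserved_pool;
}

A block driver would set the bit at probe time; everything else would
fall back to a small, dynamically grown pool.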
