Date:	Thu, 08 May 2014 13:24:18 -0400 (EDT)
From:	David Miller <davem@...emloft.net>
To:	David.Laight@...LAB.COM
Cc:	nhorman@...driver.com, netdev@...r.kernel.org,
	cooldavid@...ldavid.org
Subject: Re: [PATCH] jme: Fix DMA unmap warning

From: David Laight <David.Laight@...LAB.COM>
Date: Thu, 8 May 2014 09:02:04 +0000

> From: Neil Horman
> ...
>> Perhaps a solution is a signalling mechanism tied to completion interrupts?
>> I.e. a mapping failure gets reported to the stack, which causes the
>> corresponding queue to be stopped, until such time as the driver signals a safe
>> restart by the reception of a tx completion interrupt?  I'm actually tinkering
>> right now with a mechanism that provides guidance to the stack as to how many
>> dma descriptors are available in a given net_device that might come in handy
> 
> Is there any mileage in the driver pre-allocating a block of iommu entries
> and then allocating them to the tx and rx buffers itself?
> This might need some 'claw back' mechanism to get 'fair' (or at least
> working) allocations when there aren't enough entries for all the drivers.

The idea of preallocation has been explored before, but those efforts
never went very far.

In the case where we're mapping SKBs into the TX or RX ring, there is
little benefit cost-wise.  As described, much of the cost is in
installing the translation, and that can't be done until we have the
SKB itself.

Would it help with resource exhaustion?  I'm not so sure, because I'd
rather have everything that isn't currently in use available to those
entities that have an immediate need rather than holding onto space
"just in case".

> I remember some old systems where the cost of setting up the iommu
> entries was such that the break-even point for copying data was
> measured at about 1k bytes.  I've no idea what it is for these systems.
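
That break-even is what drivers usually call "copybreak": below some
size it is cheaper to memcpy the frame into a buffer that is already
mapped than to set up a fresh IOMMU entry.  A minimal sketch, assuming
a bounce buffer preallocated once with dma_alloc_coherent() and an
illustrative 1k threshold (the names and the number are assumptions,
not measurements for any particular system):

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>
#include <linux/string.h>

#define FOO_COPYBREAK	1024	/* hypothetical break-even, in bytes */

/*
 * bounce_virt/bounce_dma come from a one-time dma_alloc_coherent()
 * sized for the largest frame, so small packets reuse one permanent
 * mapping instead of paying the per-packet IOMMU setup cost.
 */
static dma_addr_t foo_map_for_tx(struct device *dma_dev, struct sk_buff *skb,
				 void *bounce_virt, dma_addr_t bounce_dma)
{
	if (skb->len <= FOO_COPYBREAK) {
		memcpy(bounce_virt, skb->data, skb->len);
		return bounce_dma;
	}

	/* Above the threshold the copy costs more than the mapping. */
	return dma_map_single(dma_dev, skb->data, skb->len, DMA_TO_DEVICE);
}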

There are usually two costs associated with setting up those mappings:
the first is the spinlock that protects the IOMMU allocation data
structures, the second is programming the IOMMU hardware to flush the
I/O TLB when mappings change.

There isn't much you can do about the spinlock, but for the other
problem I experimented with, and implemented, a scheme where the
allocations are done sequentially, so the I/O TLB flush only happens
once each time we wrap around, mitigating that cost.

See arch/sparc/kernel/iommu.c:iommu_range_alloc()
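
The shape of that allocator, as a minimal sketch loosely modeled on
iommu_range_alloc(): the foo_* names and fields are illustrative, not
the real sparc64 code.  Allocations march forward from a hint, so the
expensive I/O TLB flush is issued only when the search wraps back to
the start of the arena.

#include <linux/bitmap.h>
#include <linux/spinlock.h>

struct foo_iommu {
	spinlock_t	 lock;	/* cost #1: serializes all allocations */
	unsigned long	*map;	/* one bit per IOMMU page */
	unsigned long	 size;	/* arena size in pages */
	unsigned long	 hint;	/* where the next search begins */
};

static void foo_flush_iotlb(struct foo_iommu *iommu)
{
	/* cost #2: program the hardware to flush its I/O TLB (stub) */
}

static unsigned long foo_range_alloc(struct foo_iommu *iommu,
				     unsigned long npages)
{
	unsigned long entry, flags;

	spin_lock_irqsave(&iommu->lock, flags);

	entry = bitmap_find_next_zero_area(iommu->map, iommu->size,
					   iommu->hint, npages, 0);
	if (entry >= iommu->size) {
		/* Wrapped around: flush once, then retry from 0. */
		foo_flush_iotlb(iommu);
		entry = bitmap_find_next_zero_area(iommu->map, iommu->size,
						   0, npages, 0);
	}

	if (entry >= iommu->size) {
		entry = ~0UL;			/* arena exhausted */
	} else {
		bitmap_set(iommu->map, entry, npages);
		iommu->hint = entry + npages;	/* keep marching forward */
	}

	spin_unlock_irqrestore(&iommu->lock, flags);
	return entry;
}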

Unfortunately, on newer sparc64 systems the IOMMU PTE updates are done
via hypervisor calls, over which I have no control, and those calls
unconditionally do an IOMMU TLB flush, so this mitigation trick is no
longer possible.