Message-ID: <553A66EB.3050802@redhat.com>
Date:	Fri, 24 Apr 2015 11:53:15 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Christoph Lameter <cl@...ux.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
CC:	Jerome Glisse <j.glisse@...il.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, jglisse@...hat.com, mgorman@...e.de,
	aarcange@...hat.com, airlied@...hat.com, benh@...nel.crashing.org,
	aneesh.kumar@...ux.vnet.ibm.com,
	Cameron Buschardt <cabuschardt@...dia.com>,
	Mark Hairgrove <mhairgrove@...dia.com>,
	Geoffrey Gerfin <ggerfin@...dia.com>,
	John McKenna <jmckenna@...dia.com>, akpm@...ux-foundation.org
Subject: Re: Interacting with coherent memory on external devices

On 04/24/2015 10:01 AM, Christoph Lameter wrote:
> On Thu, 23 Apr 2015, Paul E. McKenney wrote:
> 
>>> As far as I know Jerome is talking about HPC loads and high performance
>>> GPU processing. This is the same use case.
>>
>> The difference is sensitivity to latency.  You have latency-sensitive
>> HPC workloads, and Jerome is talking about HPC workloads that need
>> high throughput, but are insensitive to latency.
> 
> Those are correlated.
> 
>>> What you are proposing for High Performance Computing is reducing the
>>> performance these guys are trying to get. You cannot sell someone a Volkswagen
>>> if he needs the Ferrari.
>>
>> You do need the low-latency Ferrari.  But others are best served by a
>> high-throughput freight train.
> 
> The problem is that they want to run 2000 trains at the same time
> and they all must arrive at the destination before they can be sent on
> their next trip. 1999 trains will be sitting idle because they need
> to wait for the one train that was delayed. This reduces the throughput.
> People really would like all 2000 trains to arrive on schedule so that
> they get more performance.

So you run 4000 or even 6000 trains, and have some subset of them
run at full steam, while others are waiting on memory accesses.

In reality the overcommit factor is likely much smaller, because
the GPU threads run and block on memory in smaller, more manageable
numbers, say a few dozen at a time.
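
To make the "more trains than tracks" point concrete, here is a minimal,
purely illustrative CUDA sketch (my own example, not code from anyone in
this thread): it launches far more threads than the GPU has execution
lanes, so while some warps stall on the data-dependent load the hardware
scheduler runs other warps that are ready. The gather kernel, the
pseudo-random index pattern, and the use of managed memory are all
assumptions made just for the illustration.

/* Hypothetical sketch: latency hiding through thread oversubscription. */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void gather(const float *in, const int *idx, float *out, int n)
{
	int i = blockIdx.x * blockDim.x + threadIdx.x;

	if (i < n)
		out[i] = in[idx[i]];	/* data-dependent load, likely to miss cache */
}

int main(void)
{
	const int n = 1 << 24;		/* 16M elements */
	float *in, *out;
	int *idx;

	cudaMallocManaged(&in,  n * sizeof(float));
	cudaMallocManaged(&out, n * sizeof(float));
	cudaMallocManaged(&idx, n * sizeof(int));

	for (int i = 0; i < n; i++) {
		in[i]  = (float)i;
		idx[i] = (i * 2654435761u) % n;	/* scrambled gather pattern */
	}

	/*
	 * Tens of thousands of blocks: a heavy oversubscription of the SMs,
	 * so the scheduler always has runnable warps while other warps are
	 * waiting on memory.
	 */
	int threads = 256;
	int blocks  = (n + threads - 1) / threads;

	gather<<<blocks, threads>>>(in, idx, out, n);
	cudaDeviceSynchronize();

	printf("out[0] = %f\n", out[0]);

	cudaFree(in);
	cudaFree(out);
	cudaFree(idx);
	return 0;
}

Managed memory is used here only because it is the closest user-visible
analogue to the coherent device memory being discussed; the latency-hiding
argument does not depend on it.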

-- 
All rights reversed
