Message-ID: <45D1AE13.9000504@citd.de>
Date:	Tue, 13 Feb 2007 13:24:51 +0100
From:	Matthias Schniedermeyer <ms@...d.de>
To:	"Martin A. Fink" <fink@....mpg.de>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: SATA-performance: Linux vs. FreeBSD

Martin A. Fink wrote:

>> Also, you have skipped the information about how the images "arrive" on
>> the system (PCI(e) card?); that may be important for an "end to end" view
>> of the problem.
> 
> Images arrive via Gigabit Ethernet. GigE Vision standard. (PCIe x4)

Then the next question is: chipset / protocol used / jumbo frames / (NAPI) / ...

Have you already determined the load caused by this part?
Depending on the GigE chipset, the protocol, jumbo frames, (NAPI) and so on, the overhead involved can be quite serious.
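
For illustration, jumbo frames are one of the bigger levers there, since they cut the per-packet overhead considerably. A minimal C sketch of raising the MTU (a hedged example, not from this thread: the interface name "eth0" and the MTU of 9000 are placeholder assumptions, the NIC/driver/switch all have to support it, and it needs root):

/* set_mtu.c - enable jumbo frames by raising the interface MTU (sketch) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

int main(void)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0); /* any socket works for the ioctl */

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* placeholder interface name */
	ifr.ifr_mtu = 9000;                          /* common jumbo-frame size */
	if (ioctl(fd, SIOCSIFMTU, &ifr) < 0) {       /* needs CAP_NET_ADMIN/root */
		perror("SIOCSIFMTU");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}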

>> And what's also missing: what is "a long period of time"?
>> Calculating best-case with the SSD:
>> 27GB divided by 30MB/s only gives a bit more than 15 minutes.
>> And worst case with 50MB/s is less than 10 minutes.
> 
> Well. The test drive has 27GB. The final drive will have 225 GB. And there
> will be 3 cameras and thus 3 disks. This means we are talking about
> 140 MB/s for around 90 minutes.
> For space applications with low power but high performance this is a long 
> time... ;-)

The motherboard/CPU/RAM will be the ones specified in the first mail?
My gut feeling says: forget it.

The needed total bandwidth may be too high, and at least the incoming part via GigE may have serious overhead.
150MB/s comes in via (at least 2) GigE links; without zero-copy that is another 150MB/s of memory-to-memory copying.
Then there is the next 150MB/s from memory to the discs; without zero-copy, another 150MB/s memory-to-memory on top of that.
In total that's 300MB/s to 600MB/s of memory traffic before any processing.
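
To make the disc-side copy concrete: a plain write() first copies the data into the page cache, which is the extra memory-to-memory step counted above. A minimal sketch of avoiding that copy with O_DIRECT (not from this thread; the file name "image.raw", the 4096-byte alignment and the 1MB block size are placeholder assumptions, and the real alignment requirements are filesystem- and device-specific):

/* direct_write.c - write to disc without the page-cache copy (sketch) */
#define _GNU_SOURCE /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK (1 << 20) /* 1MB per write; must be a multiple of the alignment */

int main(void)
{
	void *buf;
	int fd;

	/* O_DIRECT needs an aligned buffer; 4096 is a typical requirement */
	if (posix_memalign(&buf, 4096, BLOCK)) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}
	memset(buf, 0, BLOCK); /* stands in for one camera image */

	fd = open("image.raw", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, buf, BLOCK) != BLOCK) /* goes straight to the device */
		perror("write");
	close(fd);
	free(buf);
	return 0;
}

Whether O_DIRECT actually wins depends on the workload, but for large streaming writes like these camera images it can take one of the two copies above out of the equation.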

But on the other hand, hdparm -T says my system (Core2Duo E6700, FSB1066, 2GB DDR2-800 RAM, 32-bit) has a buffer-cache bandwidth of around 4000MB/s.
As you didn't say which FSB and memory type you have, I would guess that your system should reach between 2000MB/s and 3500MB/s of LINEAR(!) memory bandwidth.
(Total usable memory bandwidth unfortunately also depends on the access pattern. Large & linear is not as important as with a rotating HDD, but it factors in.)
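
A rough way to sanity-check such a guess without hdparm is to time large linear copies yourself. A minimal sketch (not hdparm's method; the 64MB buffer size and 16 repetitions are arbitrary assumptions, and older glibc needs -lrt for clock_gettime):

/* membw.c - crude linear memory-bandwidth estimate (sketch) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define SZ   (64 << 20) /* 64MB per buffer, large enough to defeat the caches */
#define REPS 16

int main(void)
{
	char *src = malloc(SZ), *dst = malloc(SZ);
	struct timespec t0, t1;
	double secs, mb;
	int i;

	if (!src || !dst) {
		fprintf(stderr, "out of memory\n");
		return 1;
	}
	memset(src, 1, SZ); /* touch the pages so they are really allocated */
	memset(dst, 0, SZ);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < REPS; i++)
		memcpy(dst, src, SZ);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	mb = (double)REPS * 2 * SZ / (1 << 20); /* each memcpy reads SZ and writes SZ */
	printf("~%.0f MB/s linear copy bandwidth\n", mb / secs);
	return 0;
}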



Btw., on the topic of filesystem and Linux performance:
SGI did a "really big" test some time ago with a big-iron machine having 24 Itanium2 CPUs in 12 nodes, 12*2 GB of RAM, and 256 discs, using XFS (which is from SGI!).
The pdf-file is here:
http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-paper.pdf

According to the paper, the system had a theoretical peak IO performance of 11.5 GB/s and in practice peaked at 10.7GB/s reading and 8.9GB/s writing.
IOW, Linux and XFS CAN perform quite well, but the system has to have enough muscle for the job.
And development of Linux hasn't stopped since the paper (and kernel 2.6.5).



-- 
Real Programmers consider "what you see is what you get" to be just as
bad a concept in Text Editors as it is in women. No, the Real Programmer
wants a "you asked for it, you got it" text editor -- complicated,
cryptic, powerful, unforgiving, dangerous.

