Date:	Thu, 3 Jul 2008 20:28:55 +0100
From:	"Daniel J Blueman" <daniel.blueman@...il.com>
To:	"Justin Piszcz" <jpiszcz@...idpixels.com>
Cc:	"Jeff Garzik" <jeff@...zik.org>,
	"Linux Kernel" <linux-kernel@...r.kernel.org>
Subject:	Re: Velociraptor HDD 3.0Gbps but UDMA/100 on PCI-e controller?

On 3 Jul, 18:10, Justin Piszcz <jpis...@...idpixels.com> wrote:
> On Thu, 3 Jul 2008, Roger Heflin wrote:
> > Justin Piszcz wrote:
>
> >> On Thu, 3 Jul 2008, Jeff Garzik wrote:
>
> >>> Justin Piszcz wrote:
>
> > Well, PCIe x1 is at most 250MB/second, and a number of PCIe cards are
> > not native (they have a PCIe-to-PCI converter in between); "dmidecode -vvv"
> > will give you more details on the actual layout of things. I have also
> > seen several devices run slower simply because they are able to
> > oversubscribe the available bandwidth, so that may have some bearing:
> > i.e. 2 slower disks may be faster than 2 fast disks on the PCIe link just
> > because they don't oversubscribe the interface. If there is a PCI
> > converter in the path, that may lower the overall bandwidth even more and
> > cause the issue. If this were old-style Ethernet I would have thought
> > collisions, but it must just come down to the arbitration setups not
> > being carefully designed for high utilization and high interference
> > between devices.
>
> >                               Roger
>
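
To make the oversubscription point concrete, a trivial back-of-the-envelope
(the ~250MB/s raw gen-1 x1 figure is the real link ceiling; the per-disk
rates below are purely illustrative guesses):

LINK = 250    # MB/s raw ceiling of a gen-1 x1 lane (before protocol overhead)

def aggregate(disk_rates):
    """Throughput when all of these disks share one x1 link."""
    return min(sum(disk_rates), LINK)

print(aggregate([80, 80]))      # 160 -- two slower disks fit under the link
print(aggregate([130, 130]))    # 250 -- two fast disks oversubscribe it
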
> I have ordered a couple of 4-port boards (PCI-e x4); my next plan of
> action to reach > 600MiB/s is as follows:
>
> Current:
> Mobo: 6 drives (full speed)
> Silicon Image (3 cards, 2 drives each)
>
> Future:
> Mobo: 6 drives (full speed)
> Silicon Image (3 cards, 1 drive each)
> Four Port Card in x16 slot (the 3 remaining drives)
>
> This should, in theory, allow ~1000 MiB/s.
>
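
Roughly summing that layout up (assuming ~85MiB/s sustained per drive and
~190MiB/s usable per x1 card -- both numbers are guesses on my part), the
plan does land near the 1000MiB/s mark:

DRIVE   = 85     # MiB/s sustained per VelociRaptor, assumed average
X1_CARD = 190    # MiB/s usable per SiI 3132 on a gen-1 x1 link, assumed

layout = [
    6 * DRIVE,                 # motherboard ports, not link-limited
    3 * min(DRIVE, X1_CARD),   # three SiI 3132 cards, one drive each
    3 * DRIVE,                 # four-port card in the x16 slot, three drives
]
print(sum(layout))             # 1020 -- in the right ballpark for ~1000MiB/s
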
> --
>
> # dmidecode -vvv
> dmidecode: invalid option -- v
>
> I assume you mean lspci:
>
> 05:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
>          Subsystem: Silicon Image, Inc. Device 7132
>          Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
>          Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
>          Latency: 0, Cache Line Size: 64 bytes
>          Interrupt: pin A routed to IRQ 16
>          Region 0: Memory at e0104000 (64-bit, non-prefetchable) [size=128]
>          Region 2: Memory at e0100000 (64-bit, non-prefetchable) [size=16K]
>          Region 4: I/O ports at 2000 [size=128]
>          Expansion ROM at e0900000 [disabled] [size=512K]
>          Capabilities: [54] Power Management version 2
>                  Flags: PMEClk- DSI+ D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
>                  Status: D0 PME-Enable- DSel=0 DScale=1 PME-
>          Capabilities: [5c] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
>                  Address: 0000000000000000  Data: 0000
>          Capabilities: [70] Express (v1) Legacy Endpoint, MSI 00
>                  DevCap: MaxPayload 1024 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
>                          ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset-
>                  DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
>                          RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
>                          MaxPayload 128 bytes, MaxReadReq 512 bytes
>                  DevSta: CorrErr+ UncorrErr+ FatalErr- UnsuppReq+ AuxPwr- TransPend-
>                  LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s, Latency L0 unlimited, L1 unlimited
>                          ClockPM- Suprise- LLActRep- BwNot-
>                  LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
>                          ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>                  LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
>          Capabilities: [100] Advanced Error Reporting <?>
>          Kernel driver in use: sata_sil24

PCIe (gen 1) x1 tops out at around 186MB/s with a 128-byte Max Payload,
once 8b/10b encoding and the DLLP and TLP protocol overheads are taken
off, so this would mostly account for the limit.
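
For what it's worth, a rough reconstruction of that figure (the framing,
header and CRC byte counts are the usual PCIe gen-1 values; the DLLP share
is just an assumed round number):

raw_gtps    = 2.5                      # gen-1 x1 line rate, GT/s
link_bytes  = raw_gtps * 8 / 10 / 8    # after 8b/10b: 0.25 GB/s of link-layer bytes

payload     = 128                      # MaxPayload from the lspci output above
tlp_framing = 1 + 2 + 16 + 4 + 1       # STP + seq + 4DW header + LCRC + END
tlp_eff     = payload / (payload + tlp_framing)

dllp_share  = 0.10                     # ACK / flow-control DLLP traffic, assumed

print(round(link_bytes * tlp_eff * (1 - dllp_share) * 1000))   # ~189 MB/s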

Part of it may depend on the implementation; you'll know what I mean if
you've used HP's older (C)CISS controllers.

Daniel
-- 
Daniel J Blueman
