Date:	Wed, 7 Mar 2007 08:35:30 -0800
From:	"Michael K. Edwards" <medwards.linux@...il.com>
To:	"Ralf Baechle" <ralf@...ux-mips.org>
Cc:	"Don Fry" <pcnet32@...izon.net>, jgarzik@...ox.com,
	netdev@...r.kernel.org
Subject: Re: [PATCH 1/2] pcnet32: only allocate init_block dma consistent

On 3/6/07, Ralf Baechle <ralf@...ux-mips.org> wrote:
> Prize question: why would this patch make a difference under VMware? :-)

Moving the struct pcnet32_private from the GFP_DMA32 init_block to the
GFP_KERNEL netdev allocation may be a win even on systems where
GFP_DMA32 is normally cached, because the private area will get read
ahead into cache when the netdev is touched.  (This could be a bigger
win if the most-often-accessed members were moved to the beginning of
the pcnet32_private struct.)
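
For anyone reading along without the patch handy, the allocation
split looks roughly like the sketch below.  The *_sketch names are
mine, not the driver's, and the init block is left as an opaque
hardware-defined struct:

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/pci.h>

struct pcnet32_init_block;			/* hardware-defined layout */

struct pcnet32_private_sketch {
	struct pcnet32_init_block *init_block;	/* DMA-consistent, chip-visible */
	dma_addr_t init_dma_addr;		/* bus address of init_block */
	/* most-often-accessed members would go first for cache warmth */
};

static int pcnet32_probe_sketch(struct pci_dev *pdev, size_t ib_size)
{
	struct net_device *dev;
	struct pcnet32_private_sketch *lp;

	/* netdev and private area share one ordinary GFP_KERNEL
	 * allocation, so touching the netdev warms the private area */
	dev = alloc_etherdev(sizeof(*lp));
	if (!dev)
		return -ENOMEM;
	lp = netdev_priv(dev);

	/* only the small init block still needs DMA-consistent memory */
	lp->init_block = pci_alloc_consistent(pdev, ib_size,
					      &lp->init_dma_addr);
	if (!lp->init_block) {
		free_netdev(dev);
		return -ENOMEM;
	}
	return 0;
}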

On the other hand, VMware may engage in some sort of sleight of hand
to keep the low 16MB or more of the VM's memory contiguously allocated
and warm in the real MMU (locked hugepage TLB entries?  I'm
speculating).  So having allocated the private area as part of a
DMA-able page may have silently spared you a page fault on access.

On the third hand, the new layout will rarely be a problem if the
whole netdev (including private area) fits in one page, since if you
were going to take a page fault you took it when you looked into the
netdev.  So it's hard to see how it could cause a performance
regression unless VMware loses its timeslice (and the TLB entry for
the page containing the netdev) in the middle of pcnet32_rx, etc.
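
Continuing the made-up sketch above, a quick printk makes that
"fits in one page" assumption checkable on a given config (the sizes
vary with arch and kernel options, so this is illustrative only):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/netdevice.h>

static void pcnet32_layout_note(void)
{
	/* alloc_etherdev() places the private area just past (and
	 * aligned after) struct net_device, so if the sum is under
	 * PAGE_SIZE, the fault that pulls in the netdev pulls in
	 * the private area as well */
	printk(KERN_INFO "netdev+priv = %zu bytes, page = %lu bytes\n",
	       sizeof(struct net_device) +
	       sizeof(struct pcnet32_private_sketch),
	       (unsigned long)PAGE_SIZE);
}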

Lennart is of course right that most VMware VMs are using vmxnet
instead, but they're also using distro kernels.  :-)  I find VMware
useful for certain kinds of kernel experimentation, and don't want to
fool with vmxnet every time I flip a kernel config switch.  Linus
kernels run just fine on VMware Workstation using piix, mptspi, and
pcnet32 (I'm running vanilla 2.6.20.1 right now).  I would think that
changes to those drivers should be regression-tested under VMware, and
I'm happy to help.

Cheers,
- Michael
