Date:	Thu, 27 Jul 2006 11:49:02 +0400
From:	Evgeniy Polyakov <johnpol@....mipt.ru>
To:	David Miller <davem@...emloft.net>
Cc:	drepper@...hat.com, linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org
Subject: Re: async network I/O, event channels, etc

On Wed, Jul 26, 2006 at 11:10:55PM -0700, David Miller (davem@...emloft.net) wrote:
> From: Evgeniy Polyakov <johnpol@....mipt.ru>
> Date: Wed, 26 Jul 2006 10:28:17 +0400
> 
> > I have not created additional DMA memory allocation methods, like
> > Ulrich described in his article, so I handle it inside NAIO, which
> > has some overhead (I posted a get_user_pages() scalability graph some
> > time ago).
> 
> I've been thinking about this aspect, and I think it's very
> interesting.  Let's be clear what the ramifications of this
> are first.
> 
> Using the terminology of Network Algorithmics, this is an
> instance of Principle 2, "Shift computation in time".
> 
> Instead of using get_user_pages() at AIO setup, we instead map the
> thing to userspace later when the user wants it.  Pinning pages is a
> pain because both user and kernel refer to the buffer at the same
> time.  We get more flexibility when the user has to map the thing
> explicitly.

I.e. map the skb's data to userspace? Not a good idea, especially given its
tricky lifetime and the inability of userspace to inform the kernel when it
has finished and the skb can be freed (without an additional syscall).
I did that with the af_tlb zero-copy sniffer (but there I substituted the
mapped pages with the physical skb->data pages), and it was not very good.

> I want us to think about how a user might want to use this.  What
> I anticipate is that users will want to organize a pool of AIO
> buffers for themselves using this DMA interface.  So the events
> they are truly interested in are of a finer granularity than you
> might expect.  They want to know when pieces of a buffer are
> available for reuse.

Ah, I see.
Well, I think preallocating some buffers and using them at AIO setup is a
plus, since in that case the user does not need to track when it is possible
to reuse a given buffer: when the appropriate kevent is completed, that means
the provided buffer is no longer in use by the kernel, and the user can reuse
it.
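
A rough userspace sketch of what I mean (the actual kevent submission and
completion calls are omitted; poll_completion() below is only a hypothetical
placeholder for reading a completion event that carries the buffer index):

/*
 * Minimal userspace sketch of a preallocated buffer pool: a buffer is
 * handed to the kernel at AIO setup time and marked busy, and is only
 * returned to the pool when a completion event naming its index arrives.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NR_BUFS		16
#define BUF_SIZE	4096

struct buf_pool {
	char	*data[NR_BUFS];		/* preallocated AIO buffers */
	int	busy[NR_BUFS];		/* 1 while the kernel may still touch it */
};

static int pool_init(struct buf_pool *p)
{
	int i;

	memset(p, 0, sizeof(*p));
	for (i = 0; i < NR_BUFS; i++) {
		p->data[i] = malloc(BUF_SIZE);
		if (!p->data[i])
			return -1;
	}
	return 0;
}

/* Grab a free buffer to hand to the kernel at AIO setup time. */
static int pool_get(struct buf_pool *p)
{
	int i;

	for (i = 0; i < NR_BUFS; i++) {
		if (!p->busy[i]) {
			p->busy[i] = 1;
			return i;
		}
	}
	return -1;			/* all buffers still owned by the kernel */
}

/* Completion for buffer @i arrived: the kernel is done, safe to reuse. */
static void pool_put(struct buf_pool *p, int i)
{
	p->busy[i] = 0;
}

/* Hypothetical stand-in for reading a completion event. */
static int poll_completion(void)
{
	return 0;			/* pretend buffer 0 just completed */
}

int main(void)
{
	struct buf_pool pool;
	int idx;

	if (pool_init(&pool))
		return 1;

	idx = pool_get(&pool);		/* submit pool.data[idx] with the AIO request */
	printf("submitted buffer %d\n", idx);

	idx = poll_completion();	/* completion names the buffer ... */
	pool_put(&pool, idx);		/* ... so it can be reused immediately */
	printf("buffer %d is reusable\n", idx);
	return 0;
}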

-- 
	Evgeniy Polyakov
