Message-ID: <MDEHLPKNGKAHNMBLJOLKCEGNDEAC.davids@webmaster.com>
Date: Tue, 24 Apr 2007 12:19:08 -0700
From: "David Schwartz" <davids@...master.com>
To: "Alex Vorona" <voron@...ost.net>, <linux-kernel@...r.kernel.org>
Subject: RE: Re[2]: sendfile to nonblocking socket
> DS> Threads plus epoll is another.
> 20k threads and maybe more is too much :). Look at http://nginx.net/
> section "Architecture and scalability" for example.
> DS> It really depends upon how much performance you need
> all, that hardware can take and hold :)
Why would you want 20k threads? You aren't seriously suggesting that you
need to have 20,000 outstanding disk operations, are you? Surely you don't
think that would be efficient. If the disk is the limiting factor, it may
get slightly faster as you queue more concurrent requests, but surely 20,000
is not the best number! (256 is probably closer to the optimal value, and it
may be less.)
Your application has to manage the outstanding disk read requests. I don't
know of any way to foist this task on the kernel. Perhaps a pool of disk
read threads?
I would keep a flag for each connection to track whether the last write got
a 'would block' or was incomplete. So long as this flag is clear, let the
disk read thread attempt the socket 'write'. If the disk read thread gets a
partial write (or a would block indication), set the flag on the socket and
let the socket I/O threads take over the connection (based on 'epoll'
notification). When a write completes and you need more disk data, clear the
flag and let the disk read threads take over the connection until a write
blocks again. (These disk read threads can use 'sendfile' or 'splice' so long
as they don't block on the socket.)
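Very roughly, something like this (untested sketch; the struct layout, the
one-shot epoll registration and the thread split are just to illustrate the
handoff, not a drop-in implementation):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/epoll.h>
    #include <sys/sendfile.h>

    struct conn {
        int   fd;        /* nonblocking socket */
        int   file_fd;   /* file being served */
        off_t file_off;  /* next file offset to send */
        off_t file_len;  /* total bytes to send */
        int   blocked;   /* 1 = socket pushed back, epoll threads own it */
    };

    /* Runs in a disk read thread: push file data at the socket until it
     * pushes back, then set the flag and hand the connection to the
     * epoll socket I/O threads. */
    static void disk_thread_send(int epfd, struct conn *c)
    {
        while (c->file_off < c->file_len) {
            size_t  want = c->file_len - c->file_off;
            ssize_t n = sendfile(c->fd, c->file_fd, &c->file_off, want);

            if (n < 0) {
                if (errno != EAGAIN)
                    return;            /* real error: drop the connection */
                n = 0;                 /* would block: hand off below */
            }
            if ((size_t)n == want)
                continue;              /* socket took everything, keep going */

            /* Partial write or EAGAIN: set the flag and let the socket
             * I/O threads wait for EPOLLOUT on this connection. */
            c->blocked = 1;
            struct epoll_event ev;
            ev.events   = EPOLLOUT | EPOLLONESHOT;
            ev.data.ptr = c;
            epoll_ctl(epfd, EPOLL_CTL_MOD, c->fd, &ev);
            return;
        }
        /* file fully sent */
    }

When a socket I/O thread later gets the EPOLLOUT event for that connection,
it clears the flag and hands the connection back to a disk read thread.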
Perhaps the disk read threads should be using 'mmap' with MAP_POPULATE.
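For instance (again just a sketch, reusing the hypothetical 'struct conn'
fields from above):

    #define _GNU_SOURCE            /* for MAP_POPULATE */
    #include <sys/mman.h>

    /* Fault the whole file into the page cache from the disk read
     * thread, so the later socket writes don't stall on disk I/O. */
    void *buf = mmap(NULL, c->file_len, PROT_READ,
                     MAP_PRIVATE | MAP_POPULATE, c->file_fd, 0);
    if (buf == MAP_FAILED) {
        /* fall back to read()/sendfile() */
    }
    /* ...then write() from buf to the nonblocking socket as it drains... */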
There are certainly many possible approaches.
DS