Message-ID: <Zro1onXfGkKoIRbY@casper.infradead.org>
Date: Mon, 12 Aug 2024 17:17:38 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: Joe Damato <jdamato@...tly.com>, netdev@...r.kernel.org,
mkarsten@...terloo.ca, amritha.nambiar@...el.com,
sridhar.samudrala@...el.com, sdf@...ichev.me,
Alexander Viro <viro@...iv.linux.org.uk>,
Christian Brauner <brauner@...nel.org>, Jan Kara <jack@...e.cz>,
"open list:FILESYSTEMS (VFS and infrastructure)" <linux-fsdevel@...r.kernel.org>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [RFC net-next 4/5] eventpoll: Trigger napi_busy_loop, if
prefer_busy_poll is set
On Mon, Aug 12, 2024 at 06:19:35AM -0700, Christoph Hellwig wrote:
> On Mon, Aug 12, 2024 at 12:57:07PM +0000, Joe Damato wrote:
> > From: Martin Karsten <mkarsten@...terloo.ca>
> >
> > Setting prefer_busy_poll now leads to an effectively nonblocking
> > iteration through napi_busy_loop, even when busy_poll_usecs is 0.
>
> Hardcoding calls to the networking code from VFS code seems like
> a bad idea. Not that I disagree with the concept of disabling
> interrupts during busy polling, but this needs a proper abstraction
> through file_operations.
I don't understand what's going on with this patch set. Is it just
working around badly designed hardware? NVMe is specified in a way that
lets it be completely interruptless if the host is keeping up with the
incoming completions from the device (ie the device will interrupt if a
completion has been posted for N microseconds without being acknowledged).
I assumed this was how network devices worked too, but I didn't check.