Date: Fri, 31 Dec 2010 20:18:10 -0500
From: Trond Myklebust <Trond.Myklebust@...app.com>
To: George Spelvin <linux@...izon.com>
Cc: linux-kernel@...r.kernel.org, linux-nfs@...r.kernel.org
Subject: Re: still nfs problems [Was: Linux 2.6.37-rc8]
On Fri, 2010-12-31 at 20:03 -0500, George Spelvin wrote:
> > ...and your point would be that an exponentially increasing addition to
> > the existing number of tests is an acceptable tradeoff in a situation
> > where the >99.999999999999999% case is that of sane servers with no
> > looping? I don't think so...
>
> 1) Look again; it's O(1) work per entry, or O(n) work for an n-entry
> directory. And O(1) space. With very small constant factors, and
> very little code. The only thing exponentially increasing is the
> interval at which you save the current cookie for future comparison.
> 2) You said it *was* a problem, so it seemed worth presenting a
> practical solution. If you don't think it's worth it, I'm not
> going to disagree. But it's not impossible, or even difficult.
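[Editor's note: the O(1)-space scheme George alludes to, with the save
interval doubling each time, is essentially Brent's cycle-finding idea. A
minimal sketch follows; `next_cookie()` and `detect_cookie_loop()` are
illustrative names standing in for the server's READDIR replies, not
kernel code.]

```c
#include <stdint.h>

/* Hedged sketch: keep one saved cookie, and move that checkpoint at
 * exponentially growing intervals. Any later cookie equal to the
 * checkpoint means the server's cookie stream has looped. This is O(1)
 * work per entry and O(1) space, as described above. */
static int detect_cookie_loop(uint64_t (*next_cookie)(void *), void *ctx,
			      unsigned long max_entries)
{
	uint64_t saved = next_cookie(ctx);	/* checkpoint cookie */
	unsigned long interval = 1;		/* entries until next checkpoint */
	unsigned long steps = 0;
	unsigned long i;

	for (i = 0; i < max_entries; i++) {
		uint64_t c = next_cookie(ctx);

		if (c == saved)
			return 1;		/* loop detected */
		if (++steps == interval) {	/* move the checkpoint... */
			saved = c;
			interval *= 2;		/* ...at doubling intervals */
			steps = 0;
		}
	}
	return 0;	/* no loop within max_entries entries */
}
```

Because the interval doubles, the checkpoint is guaranteed to eventually
land inside any cycle, after which the repeat is caught within one cycle
length; the only growing quantity is the interval itself, not the storage.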
Yes. I was thinking about it this morning (after coffee).
One variant on those algorithms that might make sense here is to save
the current cookie each time we see that the result of a cookie search
is a filp->f_pos offset < the current filp->f_pos offset. That means we
will in general only detect the loop after going through an entire
cycle, but that should be sufficient...
Trond
--
Trond Myklebust
Linux NFS client maintainer
NetApp
Trond.Myklebust@...app.com
www.netapp.com