Message-ID: <CAAsGZS5Kn0nnwE0=0Eoo9a9e4+9w2fQ4nWjbCNsNrL-yfE3BtQ@mail.gmail.com>
Date: Mon, 26 Mar 2012 13:11:32 -0400
From: chetan loke <loke.chetan@...il.com>
To: Richard Cochran <richardcochran@...il.com>
Cc: "Keller, Jacob E" <jacob.e.keller@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"e1000-devel@...ts.sourceforge.net"
<e1000-devel@...ts.sourceforge.net>,
"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
"Ronciak, John" <john.ronciak@...el.com>,
"john.stultz@...aro.org" <john.stultz@...aro.org>,
"tglx@...utronix.de" <tglx@...utronix.de>
Subject: Re: [PATCH net V4 2/2] igb: offer a PTP Hardware Clock instead of the
timecompare method
On Mon, Mar 26, 2012 at 11:27 AM, Richard Cochran
<richardcochran@...il.com> wrote:
> On Mon, Mar 26, 2012 at 11:07:40AM -0400, chetan loke wrote:
>> On Sat, Mar 24, 2012 at 2:51 AM, Richard Cochran
>> <richardcochran@...il.com> wrote:
>> > On Fri, Mar 23, 2012 at 03:39:08PM -0400, chetan loke wrote:
>> >>
>> >> So, how is it working today? Because we could have tx and rx
>> >> completions on different CPUs. Is it not possible to have the
>> >> following race today - between timecompare_update->timecompare_offset
>> >> -> timecounter_readdelta of say Rx and timecounter_cyc2time from Tx?
>> >
>> > It works (in the igb) because of the spinlock. You know, that thing
>> > that you are so against using.
>> >
>>
>> I meant, was there a lock before the PHC functionality in igb?
>
> There was no lock, and yes, it was a bug.
Ok, so this needs to be fixed irrespective of the PHC code.
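Something along the lines of the sketch below, i.e. one lock taken
around every timecounter access so the Rx and Tx completion paths
cannot race. This is only a sketch; the 'tmreg_lock' and 'clock'
field names are made up here, not necessarily what the igb patch uses:

/*
 * Sketch only: serialize timecounter access between the Rx and Tx
 * completion paths.  Field names are hypothetical.
 */
static u64 igb_tstamp_to_ns(struct igb_adapter *adapter, u64 cycles)
{
	unsigned long flags;
	u64 ns;

	spin_lock_irqsave(&adapter->tmreg_lock, flags);
	ns = timecounter_cyc2time(&adapter->clock, cycles);
	spin_unlock_irqrestore(&adapter->tmreg_lock, flags);

	return ns;
}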
>
>> >> How about rate limiting at the PHC class driver level? And then it
>> >> will work across the board for all the adapters at the device level.
>> >
>> > No, don't go there. Enough bikeshedding already. If you have a serious
Did I ever tell you that your patch is costing us 'N' clock-cycles in
the fast path? We all understand that to gain some features we may
have to sacrifice something. Folks who want time-stamping might have
to take a small performance hit (maybe to work around hardware issues
and so on).
You are confusing 'blocking the driver's fast path' with
performance/optimization. We cannot let user-space code jam the
system. Kernel code should be designed so that bugs (intentional or
unintentional) in user-space code cannot cause system-wide adverse
effects. Period.
Does your existing design prevent (ab)users from pounding the ioctls?
As I mentioned earlier, it may be possible to take care of gettime and
the driver's Rx/Tx path with a mixture of locks and a kernel thread,
but settime/adjtime still need to be curbed.
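To be concrete, something like the sketch below at the posix-clock /
PHC class layer is all I am asking about. The names are invented and
this is not the existing ptp_clock code; only DEFINE_RATELIMIT_STATE()
and __ratelimit() are the stock kernel helpers:

#include <linux/ratelimit.h>

/* Sketch: allow at most ~100 settime/adjtime calls per second so a
 * misbehaving app cannot hammer the device.  The numbers are
 * arbitrary and could just as well be a module parameter or a
 * factor of link speed.
 */
static DEFINE_RATELIMIT_STATE(phc_adj_rs, HZ, 100);

static int phc_clock_adjtime(struct posix_clock *pc, struct timex *tx)
{
	if (!__ratelimit(&phc_adj_rs))
		return -EBUSY;

	return do_real_adjtime(pc, tx);	/* hypothetical driver hook */
}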
Why isn't ioctl rate limiting acceptable? Say an app that is trying to
set/adjust the NIC counter is running on the host side: how often
would it need to read and then correct/set/adjust? Once every msec
(1000 times a second)? Once every 10 msec (100 times a second)? What
we would need to see is whether pounding the ioctl 1000 times a
second, while processing ~820K frames (1500-byte payload on a 10G
link), still causes a problem for the driver. The rate could also be
made a factor of link speed(?).
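(For reference, the ~820K figure is just line-rate arithmetic: a
1500-byte payload is about 1518 bytes on the wire with the Ethernet
header and FCS, and 10^10 bits/s / (1518 bytes * 8 bits/byte) ~= 823K
frames/sec; counting preamble and inter-frame gap it drops to ~813K,
so roughly 820K either way.)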
And we don't need 100 such apps. Only one app should be working in
tandem with the NIC. If other apps fail, then at least the sysadmin or
users would know someone else is (ab)using it.
> Thanks,
> Richard
Chetan
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html