Message-ID: <02874ECE860811409154E81DA85FBB580DD9C8@ORSMSX105.amr.corp.intel.com>
Date: Thu, 29 Mar 2012 23:08:59 +0000
From: "Keller, Jacob E" <jacob.e.keller@...el.com>
To: chetan loke <loke.chetan@...il.com>
CC: Richard Cochran <richardcochran@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"e1000-devel@...ts.sourceforge.net"
<e1000-devel@...ts.sourceforge.net>,
"Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
"Ronciak, John" <john.ronciak@...el.com>,
"john.stultz@...aro.org" <john.stultz@...aro.org>,
"tglx@...utronix.de" <tglx@...utronix.de>
Subject: RE: [PATCH net V4 2/2] igb: offer a PTP Hardware Clock instead of
the timecompare method
> -----Original Message-----
> From: chetan loke [mailto:loke.chetan@...il.com]
> Sent: Tuesday, March 27, 2012 2:55 PM
> To: Keller, Jacob E
> Cc: Richard Cochran; netdev@...r.kernel.org; e1000-
> devel@...ts.sourceforge.net; Kirsher, Jeffrey T; Ronciak, John;
> john.stultz@...aro.org; tglx@...utronix.de
> Subject: Re: [PATCH net V4 2/2] igb: offer a PTP Hardware Clock instead of the
> timecompare method
>
> On Tue, Mar 27, 2012 at 4:58 PM, Keller, Jacob E <jacob.e.keller@...el.com>
> wrote:
>
> >
> > I think we could see contention regardless because the spinlock doesn't
> > guarantee the ordering of who gets it next.
> >
> > I am not sure, but I will try to set something like this up. However, I do
> > think that many get/set calls is a pretty high rate even for a 'highly'
> > loaded system, though a buggy app is certainly possible.
> >
> > Here is what I am thinking of as a test case: linuxptp running normally with
> > a sync rate higher than once per second, plus a 'buggy' app that spawns
> > threads calling gettime in an infinite loop. I hope to have something like
> > this working soon.
> >
>
> Agreed, we don't know who would grab the lock next. But with just one
> app/process we may not be able to induce the contention, because process
> scheduling would come into play and a single process only gets so much of a
> time slice. With multiple processes you will be able to schedule them on
> multiple CPUs and hence contend with the driver's completion path, because
> that is what a real exploit would do.
>
> Make sure numactl is installed on your system. Then, within a shell script,
> launch multiple instances of the process as follows:
>
> #!/bin/bash
>
> num_cpus=`cat /proc/cpuinfo |grep -i processor |wc -l`
>
> for ((i=0; i<$num_cpus; i++))
> do
> echo "Launching instance:$(($i+1))"
> numa_cmd="numactl --physcpubind=$i /path/to/buggy-app &"
> echo "executing numa-cmd:$numa_cmd"
> eval $numa_cmd
> done
>
>
> > - Jake
>
> Chetan
I performed this test on my machine. Even with a buggy app that calls clock_gettime
inside a while loop, this produces no contention whatsoever with timestamping. I
also checked the timestamps being returned, and the loop runs at more than 20k
calls per second.
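
For reference, here is roughly the kind of buggy app I mean. This is only a
minimal sketch, not the exact program I ran: the /dev/ptp0 path is an assumption
(substitute whichever ptp device the igb port registered), and the FD_TO_CLOCKID
macro follows the convention used by the kernel's testptp.c example.

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
        /* Open the PHC character device and derive a dynamic clock id.
         * /dev/ptp0 is assumed; pick the device registered by the igb port. */
        int fd = open("/dev/ptp0", O_RDWR);
        if (fd < 0) {
                perror("open /dev/ptp0");
                return 1;
        }
        clockid_t clkid = FD_TO_CLOCKID(fd);
        struct timespec ts;

        /* Hammer the PHC gettime path: each call lands in the driver's
         * gettime callback, which takes the spinlock we are trying to
         * contend on. */
        for (;;) {
                if (clock_gettime(clkid, &ts)) {
                        perror("clock_gettime");
                        break;
                }
        }

        close(fd);
        return 0;
}

Multiple copies of this were then pinned across CPUs using the numactl script
above.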
Based on the following factors, I do not believe this is an issue:
1) A user requires root to use the ioctl (or the root user has to grant the user
access, which isn't necessary for normal PTP functionality).
2) A root user can already trash the system, so we cannot consider this an
exploit: it isn't a method for a normal user to gain root access or halt the
system.
3) A program issuing ioctls at such a high rate is a flawed design and is not
expected. PTP users should already understand that the ioctl has more latency
than a PPS setup.
4) Even when running 48 processes (2 per CPU), each with a while loop repeatedly
calling clock_gettime, zero contention was observed. The times returned are about
50us apart, which translates to somewhere in the ballpark of 20k calls per second.
This has no effect on network traffic (I can still ssh/ping over that interface)
and does not cause packet timestamps to be dropped. Using large amounts of traffic
is not a valid test because that is already a known constraint on PTP: too much
traffic, especially PTP traffic, results in dropped timestamps because of the way
the hardware stores packet timestamps in its registers.
If you are not satisfied with this, please provide evidence of a failed test case.