From: tate at ClearNetSec.com (Tate Hansen)
Subject: Nessus experience

A few thoughts:

1)  Often just a few vulnerability checks consume the majority of the time
it takes to scan a single device.  I wrote a script that parses
nessusd.messages to help me find which vulnerability checks are eating all
the time - below is a snippet of the output:

# ./parseNessusdMessages.pl 10
===========================================================
192.168.2.19: completed checks = 1548: Time to complete host scan = 497.39 (8:17)
Seconds | % of total time consumed | plugin name
184.069 (%37.007) mydoom_virus.nasl
176.551 (%35.495) synscan.nes
81.061 (%16.297) X.nasl
46.808 (%9.411) snmp_default_communities.nasl
30.119 (%6.055) vnc_http.nasl
25.462 (%5.119) relative_field_vulnerability.nasl
20.381 (%4.098) netscape_pop_auth.nasl
20.290 (%4.079) vnc.nasl
20.122 (%4.046) girlfriend.nasl
20.052 (%4.031) mysql_auth_bypass_zeropass.nasl
20.044 (%4.030) proxy_use.nasl
not showing remaining list... (only showing 10)

===========================================================
192.168.2.18: completed checks = 1548: Time to complete host scan = 503.25 (8:23)
Seconds | % of total time consumed | plugin name
179.172 (%35.603) synscan.nes
176.109 (%34.994) mydoom_virus.nasl
46.662 (%9.272) snmp_default_communities.nasl
45.017 (%8.945) benhur_ftp_firewall.nasl
44.176 (%8.778) X.nasl
29.305 (%5.823) vnc_http.nasl
25.628 (%5.092) relative_field_vulnerability.nasl
21.680 (%4.308) dangerous_cgis.nasl
20.214 (%4.017) netscape_pop_auth.nasl
20.120 (%3.998) girlfriend.nasl
20.045 (%3.983) proxy_use.nasl
not showing remaining list... (only showing 10)

Yes, I know the SYN scan is listed above; the point is that I often see one
or a few vulnerability checks consuming 80+% of the total scan time.  If I
know the environment, I can often exclude the offending checks (e.g. ones
that are not important to me), or I can defer those vulnerability checks
until a later scan.
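The timing extraction described above can be sketched as follows.  To be clear, this is not the parseNessusdMessages.pl script itself, and the line format assumed here (an epoch timestamp followed by a "launching"/"finished" event naming the plugin and the target host) is a hypothetical stand-in - real nessusd.messages layouts vary between Nessus versions, so the regexes would need adjusting:

```python
import re
from collections import defaultdict

# ASSUMED log format (simplified; adapt to your nessusd.messages version):
#   "<epoch> launching <plugin> against <host>"
#   "<epoch> finished <plugin> against <host>"
LAUNCH = re.compile(r"^(\d+) launching (\S+) against (\S+)")
FINISH = re.compile(r"^(\d+) finished (\S+) against (\S+)")

def plugin_times(lines):
    """Return {host: [(seconds, plugin), ...]} with slowest checks first."""
    started = {}                      # (host, plugin) -> launch timestamp
    per_host = defaultdict(list)
    for line in lines:
        m = LAUNCH.match(line)
        if m:
            ts, plugin, host = m.groups()
            started[(host, plugin)] = int(ts)
            continue
        m = FINISH.match(line)
        if m:
            ts, plugin, host = m.groups()
            t0 = started.pop((host, plugin), None)
            if t0 is not None:
                per_host[host].append((int(ts) - t0, plugin))
    return {h: sorted(v, reverse=True) for h, v in per_host.items()}

def report(per_host, top=10):
    """Print a top-N listing per host, roughly like the output above.
    Note: percentages here are of summed per-check seconds; the original
    output uses wall-clock host scan time, so checks running in parallel
    can sum past 100%."""
    for host, checks in per_host.items():
        total = sum(s for s, _ in checks)
        print(f"{host}: completed checks = {len(checks)}: total = {total}s")
        for secs, plugin in checks[:top]:
            print(f"{secs:.3f} (%{100 * secs / total:.3f}) {plugin}")
```

Feeding it a pair of launch/finish lines per plugin is enough to rank the offenders; the top entries are the candidates to exclude or defer.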

2) Tuning a mix of the following variables can also change the overall scan
time by more than 90%.  If you know the environment details (available
bandwidth, latency, etc.), you can 'optimize' the scan and strike a balance
between accuracy and speed.

checks_read_timeout:  the maximum number of seconds to wait for a probe
response (i.e. how long to wait in a recv())
plugins_timeout:  the maximum lifetime, in seconds, of a single
vulnerability check

If you set checks_read_timeout to 1 second and plugins_timeout to 5 seconds,
you'll blaze through the scan.  The problem is you may lose accuracy because
you didn't allow the target enough time to respond.  If I know I'm scanning
a high-bandwidth, low-latency network, I'll use small values because I can
assume I will not lose accuracy by going too fast.  On the other hand, if
I'm scanning over a T1 with latency above 300ms, I may inflate those values
to something like 60 seconds for checks_read_timeout and maybe 300 seconds
for plugins_timeout.  This will extend the scan time enormously - but which
do you want, speed or accuracy?  That is the trade-off.  Speed = missed
vulnerabilities.  Accuracy = slower scans.
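The two regimes just described can be written down as .nessusrc settings.  This is only a sketch - the values are the illustrative ones from the paragraph above, not recommendations, so derive your own from measured bandwidth and latency:

```
# Fast profile: high-bandwidth, low-latency LAN
checks_read_timeout = 1
plugins_timeout = 5

# Slow-link profile (e.g. a T1 with >300ms latency) - comment out the
# pair above and use these instead:
# checks_read_timeout = 60
# plugins_timeout = 300
```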

In one example at a client, I tried different values of checks_read_timeout
to illustrate to the client that speed is the wrong thing to focus on:  with
checks_read_timeout set to small values, I could scan multiple devices in
less than 10 minutes.  When I increased the value, Nessus found 3x as many
vulnerabilities.  They chose to go slower, for obvious reasons.

As already mentioned, there are several other tuning parameters, all of
which alter the speed/accuracy trade-off:

-	optimize_test:  set it to yes and you have the potential to miss
vulnerabilities; in exchange, you scan faster.  Turn it off and you
potentially increase accuracy, but scan slower if additional checks are
executed.
-	delay_between_test:  the number of seconds to pause between
successive vulnerability checks
-	unscanned_closed, max_threads, max_hosts, etc.
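For completeness, those remaining parameters might appear in a .nessusrc like this.  The values are illustrative only, and the option names are taken verbatim from the list above - double-check both against your Nessus version before relying on them:

```
optimize_test = yes        # faster, but may skip checks and miss vulns
delay_between_test = 0     # seconds to pause between successive checks
unscanned_closed = yes     # treat ports that were not scanned as closed
max_threads = 10           # example value: concurrent checks per host
max_hosts = 20             # example value: hosts scanned in parallel
```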

3) Lastly, another variable that has affected several of my attempts to
speed up Nessus scanning is the scan targets themselves.  In test labs with
100s of fast 'scan targets', the overall scan times were respectable.  Going
into the real world and scanning 100s of slow desktops (i.e. old hardware)
would often roughly double the scan time even though my .nessusrc tuning
configuration was the same.  Of course, the type of system and the number of
open ports only exacerbate this issue:  a slow Windows box with 40 open
ports takes a lot longer to scan than a UNIX box with 4 open ports.  Knowing
the environment, again, helps tremendously.

HTH
-Tate

Tate Hansen
Principal InfoSec Engineer
ClearNet Security
e-mail:  tate@...arNetSec.com


Greetings, full-disclosure!

From time to time I find myself needing to estimate the time it takes
to run Nessus against various network ranges.  For some reason, it always
seems to take longer than I expect, and I'm wondering if:

  1: I am doing something wrong (this is always a possibility)
  2: Nessus has been getting slower over time 

Specifically, with two laptops (each with 2GHz processor, and upwards of
600MB RAM), I recently tried to scan a range of two class C-size networks,
to which I was directly connected via Ethernet.  I had already done full
nmaps of the hosts (this took about an hour), so I was not running nmap from
within Nessus.  I found that after over three hours, I had only been able to
complete tests on 90-something hosts.

This strikes me as unreasonably slow for bulk automated testing, so first
I'd like to ask whether these performance metrics are in line with others'
experiences.  I'd also welcome any hints people might have on how they
optimize performance, and any rules of thumb anyone might care to share
about estimating times for Nessus runs.

Thanks in advance for any helpful replies.

--Foofus.


