Message-ID: <4F402A0D.2020200@netpeas.com>
Date: Sat, 18 Feb 2012 22:45:33 +0000
From: Jerome Athias <jerome@...peas.com>
To: full-disclosure@...ts.grok.org.uk
Subject: Operation Bring Peace To Machines : New Info


Sorry, I am just crazy
\x90

Subject: 	RE: Vulnerability conceptual map (UNCLASSIFIED)
Date: 	Sat, 18 Feb 2012 16:37:45 -0500
From: 	WOLFKIEL, JOSEPH L CIV DISA PEO-MA <joseph.wolfkiel@...a.mil>
Reply-To: 	joseph.wolfkiel@...a.mil
To: 	Multiple recipients of list <scap-dev@...t.gov>



Classification: UNCLASSIFIED
Caveats: NONE

The NetD schemas were developed with that concept in mind.  We had hoped to contribute the entire body of knowledge to the community and start building automated communications based on the schemas and the relationships they document.

Using SCAP names and metadata tags was a key component and gave us some early quick wins.

I'd love to come to community consensus on ontological models for threat, vulnerability, device, person, incident, event, workflow, etc. that we could start incorporating into SCAP standards (starting with ARF and ASR).

Joseph L. Wolfkiel
Engineering Group Lead
DISA PEO MA/IA52
(301) 225-8820
Joseph.Wolfkiel@...A.mil


-----Original Message-----
From: scap-dev@...t.gov [mailto:scap-dev@...t.gov] On Behalf Of Davidson II, Mark S
Sent: Friday, February 17, 2012 7:55 AM
To: Multiple recipients of list
Subject: RE: Vulnerability conceptual map


I think the core of the topic is turning information into action. You might have an ongoing attack, a vulnerability that needs to be patched, an exploitable configuration, or one of many other security risks. You will have varying degrees of information (as Kurt said) within each risk.

Currently, an organization that can aggregate risk and threat information to a single point and have a human make a decision that is carried out in a timely manner is among the more mature organizations. Many organizations do not have all of their security information in a single place. Many organizations, once they make a security decision, have a difficult time implementing and communicating that decision.

There are probably three areas of action:
1) Collect information and present it in a useful way
2) Make a decision based on that information
3) Carry out the decision

#1 and #3 should be automated, and #2 should be where we spend most of our effort. SCAP and CM are within the domain of Collect/Present, and I think there have always been discussions about automating #3. Certain decisions in #2 can be automated once you have #1 and #3, but that's a ways away (in my opinion).
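
As a rough sketch of that split (the function names and data shapes below are invented purely for illustration and do not correspond to any SCAP interface or real tool), the two automated steps would bracket the human decision point:

# Hypothetical collect -> decide -> act split; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str     # host or subnet the finding applies to
    issue: str     # e.g. a CVE or misconfiguration identifier
    severity: str  # normalized severity assigned by the collector

def collect():
    """Step 1 (automated): aggregate findings from scanners/feeds into one view."""
    return [Finding("10.0.5.0/24", "CVE-2012-NNNN", "high")]

def decide(findings):
    """Step 2 (human): an analyst reviews what was collected and picks actions."""
    # In practice this is a person reading a report, not a function call.
    return [f"patch {f.asset} for {f.issue}" for f in findings if f.severity == "high"]

def act(decisions):
    """Step 3 (automated): push the chosen actions out to management tooling."""
    for d in decisions:
        print("dispatching:", d)

act(decide(collect()))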

Part of the difficulty of #3 is that networks will always be different. Network management technologies will always be different. Let's say, for the sake of argument, you want to block web traffic. How would you communicate that? You'd have to, at a minimum, communicate the following: inbound/outbound, applicable subnets/locations, and timeframe. Specifying a port may not be enough. What about web traffic over non-standard ports? Then you'd have to use an application-aware firewall. Or, what if you are trying to contain a segment of the network that has a router as its only access?
You'd have to have a uniform language that could turn a thought like "Block web traffic for sales - they got ANOTHER virus" into a command usable by a variety of devices with functionality that may or may not overlap, all in a network whose topology cannot be known when that language is written. And you have to be able to remove the block when you want.
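
To make that concrete, here is a hedged sketch of what such a uniform "block" request might carry and how one class of device could render it. Every field name and the iptables translation are assumptions made for illustration, not an existing standard or product behavior:

# Hypothetical "block web traffic" directive; field names are invented.
block_request = {
    "action": "block",
    "direction": "outbound",
    "subnets": ["10.20.30.0/24"],       # the sales segment, as an example
    "service": "web",                    # intent, not just a port number
    "ports": [80, 443],                  # best-effort hint for port-based devices
    "expires": "2012-02-20T00:00:00Z",   # so the block can be removed later
    "ticket": "SALES-VIRUS-042",         # reference for auditing/rollback
}

def to_iptables(req):
    """One possible rendering on a Linux gateway (FORWARD chain); an
    application-aware firewall or a router ACL would render the same
    request very differently."""
    flag = "-s" if req["direction"] == "outbound" else "-d"
    return [
        f"iptables -A FORWARD {flag} {net} -p tcp --dport {port} -j DROP"
        for net in req["subnets"]
        for port in req["ports"]
    ]

for rule in to_iptables(block_request):
    print(rule)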

I guess that was just a long way of saying 'I agree'. There's a lot of work to be done and much of it is unexplored (at least from a shared knowledge perspective).

-Mark

-----Original Message-----
From: scap-dev@...t.gov [mailto:scap-dev@...t.gov] On Behalf Of Kurt Seifried
Sent: Thursday, February 16, 2012 6:55 PM
To: Multiple recipients of list
Subject: Re: Vulnerability conceptual map


On 02/16/2012 06:11 AM, Jerome Athias wrote:
>  For me,
>
>  The problem:
>  we must quickly mitigate (and then remediate) vulnerabilities
>
>  Existing scope:
>  we actually have (too many?) overly complicated (and incomplete) standards
>  we have non-interoperable vulnerability tools
>
>  My proposed solution:
>  we have to act quickly to deal with the problem
>  So the idea is to produce, and use an open, SIMPLIFIED, easy to
>  implement and use, standard
>  What I call IVIL v1.0
>
>  And I would like to explain, demonstrate and validate my solution

I find this discussion interesting. As I see it, for a vulnerability
(e.g. a technical issue that can be exploited to gain access or elevate
privilege) we have several options:

1) fix it with a software update (which generally relies upon a
vendor(s) shipping an update)
2) use a workaround (like change file permissions, disable the specific
component that is affected, etc.)
3) disable the entire thing temporarily or permanently. For example by
turning it off, restricting access to a limited subset of users,
replacing it with something else, etc.
4) accept the risk and continue on (e.g. for denial of service attacks, have
a remediation routine to deal with it, such as restarting the service).

Actually, if anyone else has other main options, I'd love to hear from you.

Now, as I see it, for option 1 you generally need your vendor to ship an
update (or you need to have the source and patch it yourself, then run it
through testing/QE/QA/acceptance/etc.).

For option 2 you need research, either done internally, by the vendor, or by
a third party you use (e.g. iDefense, iSIGHT, etc.).

For option 3 you need internal knowledge/support and/or support from
option 2.

For option 4 you need internal knowledge/support and/or support from
option 2.
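
Summarizing that dependency in one place (the labels below are mine, purely
illustrative, not from any standard):

# Rough summary of the four options above and what each one depends on.
REMEDIATION_OPTIONS = {
    1: ("apply vendor update",   ["vendor update", "internal test/QA cycle"]),
    2: ("apply workaround",      ["internal, vendor, or third-party research"]),
    3: ("disable or replace",    ["internal knowledge", "research as in option 2"]),
    4: ("accept risk / recover", ["internal knowledge", "research as in option 2"]),
}

for num, (action, needs) in REMEDIATION_OPTIONS.items():
    print(f"option {num}: {action} -- needs: {', '.join(needs)}")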

So my concern has always been that you either need a high degree of internal
knowledge and/or external support in some form. All of this takes time if you
want to minimize the impact (if you blocked web access every time a potential
web browser vulnerability came out, you'd probably never have access to the
web).

What struck me as a best case scenario was having a limited/simplified
protocol for immediate reaction and a more complete/slower protocol for
long term reaction. I think it would be ideal if the protocol proposed
could basically say which parts are required and which parts can be left
for later.
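
As a hedged sketch of that "required now, fill in later" idea (all field
names and values below are invented for illustration, not from any proposed
protocol):

# Hypothetical two-tier advisory: "immediate" is the simplified protocol,
# "followup" is the slower, more complete one that can arrive later.
advisory = {
    "immediate": {                       # required on first publication
        "id": "ADV-2012-0001",
        "affected": "example-browser < 11.0",
        "action": "disable plugin X or restrict web access",
    },
    "followup": {                        # may be filled in over time
        "cvss": None,                    # not known yet
        "patch": None,                   # vendor update still pending
        "analysis_url": None,
    },
}

def is_actionable(msg):
    """Only the immediate block must be complete before the message is useful."""
    return all(msg["immediate"].values())

print(is_actionable(advisory))  # True: enough to react, even with follow-up pending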

Food for thought.

-- 
Kurt Seifried Red Hat Security Response Team (SRT)


Classification: UNCLASSIFIED
Caveats: NONE





