
WHAT’S WRONG WITH INFORMATION SECURITY

(and how to make some specific improvements)

arkenoi@gmail.com

UISGCON 13 October 2017

2

Who am I

# whoami
You don’t exist, go away!
#

Glanc, Ltd, Bulgaria: Information Security consulting, a one-man company. Well, almost.

3

4

Is information security hard and complicated?

• No. It is just a bit alien and counterintuitive.

• Metaphors are wrong and misleading.

5

How information security education does (not) work

We need familiar metaphors and good visualisations.

We got flawed analogies and information overload.

6

Flawed analogy #1 (the incorrect one): following rules is sufficient to keep you safe

7

Cyber environment is hostile

• Basic rules do exist, but they cannot guarantee safety

• “Best practices” are often obsolete and impractical

• “Doing everything right” is prohibitively expensive for a typical business

• Nobody ever won a battle just because he had a certified gun

8

Flawed analogy #2 (the incomprehensible one): you are at war, use military tactics

9

We are not soldiers

• Very few of us have served in the military

• Those who did have rarely been in an actual war

• Those who have are rarely able to teach others

• Come on, is it really a war? Why has nobody been shot, then?

• Military discipline is not applicable in a civilian environment

10

WHAT GRC VENDORS WANT YOU TO THINK ABOUT HOW INFOSEC WORKS
(and why it is utter bullshit)

(Yes, this slide is so disgusting I cannot resist including it in all my conference talks)

11

Infosec truth is simple

• All we do is part of business risk management.

• Risk management is not part of compliance requirements. Managing compliance requirements is, indeed, part of risk management.

• The “hostile” and “ever-changing” cyber environment is no different from a competitive business environment. The same survival rules apply.

• Your survival depends on your efficiency.

12

“Best practices” aren’t.

• Not best (full of outdated and crazy stuff);

• Not practical (being fully compliant is a stupid waste of resources, and partial compliance does not provide any benefit);

• Easy to fall into “vendor-driven” security.

13

Taxonomy of attacks (making sense)

Collapse multiple metrics into two and make a heat map!

Y axis = TCap per FAIR (Threat Capability: how good this attacker is compared to the general “threat population”, by percentile). A combined metric for time/money/resources/skill/whatever.

X axis = how many targets are attacked in a similar manner? This metric defines how relevant our “purely statistical” threat intelligence and signature-based technologies can be for this attack.
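The two-axis bucketing above can be sketched in a few lines. This is a minimal illustration, not tooling from the talk: the band thresholds and the sample attacks are hypothetical values chosen only to show the mechanics.

```python
# A minimal sketch (not from the talk) of the two-axis taxonomy:
# bucket attacks by attacker capability (TCap percentile, per FAIR)
# and by how widely the same attack is used, then count per cell.
from collections import Counter

def heat_map_cell(tcap_percentile, targets_attacked):
    """Map one attack onto a 3x3 grid: (capability band, prevalence band)."""
    cap = "low" if tcap_percentile < 33 else "mid" if tcap_percentile < 66 else "high"
    prev = ("targeted" if targets_attacked < 10
            else "campaign" if targets_attacked < 10_000
            else "mass")
    return (cap, prev)

attacks = [  # (TCap percentile, rough number of targets) -- illustrative values
    (95, 3),        # skilled, targeted (APT-like)
    (50, 5_000),    # commodity malware campaign
    (20, 100_000),  # mass phishing
]
heat = Counter(heat_map_cell(t, n) for t, n in attacks)
print(heat[("high", "targeted")])  # -> 1
```

The counts per cell are exactly what the heat map visualizes: dense cells tell you which detection strategy (statistical TI vs. bespoke analysis) is relevant.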

14

Taxonomy of attacks (do we get it right?)

A few questions to ask yourself to make sure we have a mutual understanding:

• Why is the lower right corner so empty? Aren’t there swarms of people who have a lot of time to try a new attack yet lack any specific skills?

• Why is the middle left zone less dense than the lower left corner? Aren’t typical malware authors quite skilled in what they do?

15

Taxonomy of attacks (know your enemy)

16

Mastering our tools in context (*)

(* except dealing with Adobe Flash; it just makes no sense, drop it now)

17

Detection hype and prevention futility

• “Everyone gets hacked, so it is practical to give up already.” (It is true, but no, it is not.)

• Detection cannot replace protection.

• Companies invest big money building SOCs as if they had already solved all their Vulnerability Management problems (and they haven’t).

• If VM is “better” (closer to the basics, provides better ROI* or whatever), why is it “hard”?

*) Yes, I am aware that ROI is not directly applicable to security.

18

Information overload

19

More “best practices” bashing

• “Best practices” imply you have organization-wide (almost) perfectly working patch management;

• “Periodic scans” were invented to reveal gaps in that process;

• Well, you have gaps. A lot of gaps. Now what?

20

Typical poor man’s pitfall

• Short list of critical systems;

• Exposed to the outside world;

• Everything else is “later”.

A moderately sophisticated attacker may maintain presence for months or years. Note that the methodology is more or less correct, but there are gaps that make it fail.

21

Worst case: pentest-driven vulnerability management

• 5,000 critical bugs and we are still in business? Come on!

• Exploits or GTFO!

The security team does not have the time and resources for a permanent spectacular show, so an external pentest team is hired, with all its scope and communication issues.

22

Ok, we need to prioritize. How?

• Scanning is cheap
• Continuous vulnerability management is premium
• Risk management, like in GRC, is luxury
• State of the art is black magic

23

Poverty: The Vendor Trap
(why being low on budget sucks)

• If you live below the Information Security poverty line, it means the whole IS industry works for wealthy people somewhere else, not for you;

• Vendors are not interested in creating affordable solutions while their position in the luxury segment is secure;

• Scaremongering FTW!

24

What is risk (from Capt. Obvious)

Loss magnitude (LM)
Loss event frequency (LEF)

R = LM * LEF, that’s it.
(There is also Threat Event Frequency and Vulnerability: LEF = TEF * Vuln)
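The slide’s arithmetic is small enough to write down directly. The numbers below are purely illustrative, not from the talk:

```python
# R = LM * LEF, with LEF = TEF * Vuln (the FAIR decomposition from the slide).
def loss_event_frequency(tef, vuln):
    """TEF = threat events per year; vuln = fraction that become loss events."""
    return tef * vuln

def annualized_risk(loss_magnitude, lef):
    """Expected annual loss: loss per event times loss events per year."""
    return loss_magnitude * lef

# Illustrative numbers: 12 threat events/year, 25% succeed,
# $40,000 average loss per event.
lef = loss_event_frequency(tef=12, vuln=0.25)            # 3.0 loss events/year
print(annualized_risk(loss_magnitude=40_000, lef=lef))   # -> 120000.0
```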

25

What does your vulnerability scan tell you?

Next to nothing. “Severity” only slightly correlates with LEF. Challenges:

1. You have more than you can eat;
2. An IP address is not an asset;
3. A CVSS score is not a risk;
4. Your quarterly (and weekly, too) scan is already outdated.

26

Some CVSS bashing

CVSS is relevant (only) where theoretical exploitability is more important than the availability of a particular exploit or other threat intel data.

27

What does your threat intelligence tell you?

• Quite a lot. But it may arrive later than you need it.

• Knowing about “current hacker/malware campaigns” does not give you any advance warning.

• It does not help you look at your network the way hackers do.

• The “number of sightings” for a particular CVE is not the TEF you expect.

28

Advance warnings explained

• Patching is a slow process.

• Good for you: weaponizing exploits and preparing a campaign takes time, too.

• In all known cases of large-scale breaches and malware infestations, victims had weeks, months and sometimes years to prepare.

• Yet until all hell breaks loose, statistics tell you very little.

29

(more) CVSS bashing (no different for v3)

• Environmental metric group
• Temporal metric group

Anyone, seriously?

Two things matter: exploit capabilities and exposure. Let’s start with the first one.

30

What is “Severity”? (Winshock example, MS14-066)

• Unauthenticated RCE
• “Exploits are available”
• CVSS == 10.0
• Assigned maximum severity by all vulnerability scanners
• Practical confidentiality/integrity impact is next to negligible.

31

TI-relevant CVSS alternatives?

• “Exploitable by Metasploit” (surprisingly good if you do not have anything better, yet with Winshock it fails you)

• Rapid7’s risk score
• Qualys Real-Time Threat Indicators
• Vulners search metrics
• Leonov’s vulnerability quadrants

Wait, shouldn’t there be some vendor-neutral, logically transparent TI data format? Can we separate “current activity” from “attacker capabilities”?

32

TI standards and frameworks

• Detection-focused (SIEM is the king)
• STIX/CybOX is about events and IoCs
• Everything else is pretty much compliance oriented (SCAP/OVAL)
• The rest is not machine readable (CVRF etc.)

33

How to create a viable standard

• Keep it simple
• Real-life applications
• DHS support
• Interoperability with other standards

34

Requirements

• Simple
• Machine readable
• NOT a vulnerability notification format
• Not a format for IDS signatures
• Close enough to data formats already used

35

Challenges

• “Primary key”: sometimes we do not have a CVE
• CPE inventory is hard
• CCE is dead, yet we are configuration dependent

36

Autopsy: VEDEF

• Golden age of TI standardisation (first decade of the 21st century)
• Five years of meetings
• Overcomplicated requirements
• Not a single draft has ever been published

37

Autopsy: CCE

• Golden age of TI standardisation (first decade of the 21st century)
• DHS promised to bring it back to life, then silently discarded it
• Database is expensive to maintain; poor coverage, no updates since 2013
• Dropped from STIX 2.0

38

On life support: CPE

• Golden age of TI standardisation (first decade of the 21st century)
• Database is expensive to maintain and covers basic components only
• Software vendors ignore it
• SCAP/OVAL tests are used instead of inventory

$ wc -l gb_windows_cpe_detect.nasl
2574

39

Proposal: ECDML

• Exploit Capability Definition Markup Language
• XML-based (like it or not)
• Defines a few distinct key properties of an exploit:
  - CVE id
  - applicable platform (CPE is ok here)
  - impact (see EACVSS metrics on the following slide)
  - availability in major exploit frameworks
  - configuration constraints
  - has it been seen in malware? was it ever incorporated in autonomous worms?

40

Proposal: EACVSS vector and score

• Exploit-Adjusted CVSS
• Uses the CVSS namespace, either v2 or v3
• Otherwise it is just plain old CVSS; the only difference is that it applies to practical exploitability, not the theoretically possible impact.
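Because EACVSS keeps the CVSS machinery and only swaps in practical metric values, the score stays verifiable with the standard equations. A sketch, using the published CVSS v2 base equation (constants from the specification): the Winshock numbers from the earlier slide fall out directly, 10.0 for the theoretical vector and 7.8 once impact is limited to what the public exploit actually does (a crash).

```python
# Standard CVSS v2 base equation; constants are from the CVSS v2 spec.
def cvss2_base(av, ac, au, c, i, a):
    AV = {"N": 1.0, "A": 0.646, "L": 0.395}[av]   # access vector
    AC = {"L": 0.71, "M": 0.61, "H": 0.35}[ac]    # access complexity
    Au = {"N": 0.704, "S": 0.56, "M": 0.45}[au]   # authentication
    cia = {"N": 0.0, "P": 0.275, "C": 0.660}      # C/I/A impact values
    impact = 10.41 * (1 - (1 - cia[c]) * (1 - cia[i]) * (1 - cia[a]))
    exploitability = 20 * AV * AC * Au
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Winshock, theoretical impact (full RCE): AV:N/AC:L/Au:N/C:C/I:C/A:C
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # -> 10.0
# Exploit-adjusted: the public exploit only crashes the target (C:N/I:N/A:C)
print(cvss2_base("N", "L", "N", "N", "N", "C"))  # -> 7.8
```

The 7.8 here matches the `cvssv2:base-score` in the ECDML primer on the following slides.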

41

ECDML: a primer (headers)

<?xml version="1.0" ?>

<exploit_capability_data_v1 xmlns:cvssv2="http://scap.nist.gov/schema/cvss-v2/0.2"

xmlns:cvssv3="http://www.first.org/cvss/cvss-v3.0.xsd"

xmlns:cpe-lang="http://cpe.mitre.org/language/2.0" >

<vulnerability>

<cve id="CVE-2014-6321"/>

<exploit>

42

ECDML: a primer (configuration)

<configuration>

<cpe-lang:logical-test operator="OR" negate="false">

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_7::sp1:x64"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_7::sp1:x86"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_server_2008:r2:sp1"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_8:-::~~~~x64~"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_8:-::~~~~x86~"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_8.1:-:-:~-~-~-~x64~"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_8.1:-:-:~-~-~-~x86~"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_server_2012:-:gold"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_server_2012:r2:-:~-~datacenter~~~"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_server_2012:r2:-:~-~essentials~~~"/>

<cpe-lang:fact-ref name="cpe:/o:microsoft:windows_server_2012:r2:-:~-~standard~~~"/>

</cpe-lang:logical-test>

</configuration>

43

ECDML: a primer (EACVSS)

<eacvss>

<cvssv2:base-score>7.8</cvssv2:base-score>

<cvssv2:access-vector>NETWORK</cvssv2:access-vector>

<cvssv2:access-complexity>LOW</cvssv2:access-complexity>

<cvssv2:authentication>NONE</cvssv2:authentication>

<cvssv2:confidentiality-impact>NONE</cvssv2:confidentiality-impact>

<cvssv2:integrity-impact>NONE</cvssv2:integrity-impact>

<cvssv2:availability-impact>COMPLETE</cvssv2:availability-impact>

<cvssv3:base-score>7.5</cvssv3:base-score>

</eacvss>

44

ECDML: a primer (EACVSS)

<configuration-constraints>DEFAULT</configuration-constraints>

<availability>PUBLIC</availability>

<malware>false</malware>

<worms>false</worms>

<exploit_frameworks>metasploit</exploit_frameworks>

<exploit-quality>NORMAL</exploit-quality>

<metasploit_module_path>auxiliary/dos/http/ms15_034_ulonglongadd.rb</metasploit_module_path>

<publication_date>15-Apr-2015</publication_date>

<last_updated>15-Apr-2015</last_updated>

</exploit>
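A consumer of this format would be a short script. The sketch below parses a trimmed-down ECDML-like document (element names and the cvssv2 namespace follow the primer above; the format itself is only a proposal) and pulls out what a prioritization pipeline would need:

```python
# Sketch: reading the key fields from an ECDML-style document.
import xml.etree.ElementTree as ET

CVSSV2 = "{http://scap.nist.gov/schema/cvss-v2/0.2}"  # namespace from the primer

doc = """<?xml version="1.0"?>
<exploit_capability_data_v1 xmlns:cvssv2="http://scap.nist.gov/schema/cvss-v2/0.2">
  <vulnerability>
    <cve id="CVE-2014-6321"/>
    <exploit>
      <eacvss><cvssv2:base-score>7.8</cvssv2:base-score></eacvss>
      <availability>PUBLIC</availability>
      <malware>false</malware>
    </exploit>
  </vulnerability>
</exploit_capability_data_v1>"""

root = ET.fromstring(doc)
vuln = root.find("vulnerability")
cve = vuln.find("cve").attrib["id"]
exploit = vuln.find("exploit")
score = float(exploit.find(f"eacvss/{CVSSV2}base-score").text)
public = exploit.find("availability").text == "PUBLIC"
# A public exploit plus the *adjusted* score is the actual prioritization input.
print(cve, score, public)  # -> CVE-2014-6321 7.8 True
```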

45

Database maintenance?

• ~170K exploits known by Vulners
• ~40K known by exploit-db
• ~3500 Metasploit modules
• tens to hundreds daily

46

Open questions

• correlating with other TI sources (apparently, we cannot drill down to a particular exploit, nor do we want to; CVE should be enough)

• optional extensions (even putting back things we dropped intentionally)

47

Putting vulnerability in context

• To estimate loss magnitude (asset management task);

• To estimate threat event frequency;

• To make sure your data are not obsolete.

48

Asset management

Does not need to be expensive. Be creative with your data sources!

• Monitoring systems;
• Software inventory;
• Hostname regexps;
• AD entries;
• Existing compliance scopes (duh);
• anything else that comes to mind.
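The "hostname regexps" item is the cheapest of these in practice. A minimal sketch: the naming patterns below are hypothetical; substitute your own conventions.

```python
# Cheap asset classification from hostnames when no CMDB exists.
import re

# Assumed (hypothetical) naming convention -- adjust to your environment.
ASSET_GROUPS = [
    (re.compile(r"^db\d+\."), "database"),
    (re.compile(r"^(www|web)\d*\."), "web-frontend"),
    (re.compile(r"^dc\d+\."), "domain-controller"),
]

def classify(hostname):
    for pattern, group in ASSET_GROUPS:
        if pattern.search(hostname):
            return group
    return "unclassified"  # queue for manual review

print(classify("db01.corp.example.com"))  # -> database
print(classify("laptop-42.example.com"))  # -> unclassified
```

Anything left "unclassified" is itself useful output: it is the part of the estate your loss-magnitude estimates currently know nothing about.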

49

Exposure/TEF estimation

No need for Skybox-like fancy stuff (it is not as reliable as one might think anyway).

• Create reasonable scopes that fit your network segmentation;

• No attack graphs, just the direct neighbourhood: watch for RCEs;

• Use IDS/SIEM data;

• You probably underestimate your LAN exposure (but not necessarily).

50

Closing scanning gaps

When your vulnerability scanner is too slow, try doing it without the scanner.

• Vulners to find new bugs in known software
• ARP tables to find new computers
• nmap | diff
• AWS will do a lot of interesting things for you, too.

Get a smaller scope and rescan!
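The "nmap | diff" idea reduces to a set difference once you have two host inventories. A sketch with made-up addresses; the inputs could come from ARP tables, `nmap -sn` output, or a cloud API:

```python
# Diff two host inventories to find machines that appeared since the
# last full scan -- then rescan just those (the smaller scope).
def new_hosts(previous, current):
    """Hosts present now but absent from the last known inventory."""
    return sorted(set(current) - set(previous))

last_scan = {"10.0.0.1", "10.0.0.5", "10.0.0.9"}
arp_table = {"10.0.0.1", "10.0.0.5", "10.0.0.9", "10.0.0.23"}
print(new_hosts(last_scan, arp_table))  # -> ['10.0.0.23']
```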

51

Handling exceptions

Make it an intentional annoyance

• Define a mandatory review date, no later than the company-wide policy permits

• Require *both* security and the owner to sign off for it to be prolonged for the next period

• Compensate (EMET, WAF, sandboxing, etc.)

• Isolate to minimize the impact on other systems/components

ALL ITEMS ARE MANDATORY
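"All items are mandatory" is easy to enforce mechanically. A sketch of an exception-record check; the field names and the 90-day policy maximum are hypothetical, not from the talk:

```python
# Validate an exception record against the four mandatory items above.
from datetime import date

POLICY_MAX_DAYS = 90  # assumed company-wide maximum between reviews

def exception_is_valid(exc, today):
    return (
        (exc["review_date"] - today).days <= POLICY_MAX_DAYS   # review date set in time
        and exc["signed_by_security"] and exc["signed_by_owner"]  # *both* signatures
        and bool(exc["compensating_controls"])                  # e.g. WAF, sandboxing
        and exc["isolated"]                                     # impact minimized
    )

exc = {
    "review_date": date(2017, 12, 1),
    "signed_by_security": True,
    "signed_by_owner": False,  # owner never signed -> not prolonged
    "compensating_controls": ["WAF"],
    "isolated": True,
}
print(exception_is_valid(exc, date(2017, 10, 13)))  # -> False
```

Making the check fail loudly is the point: the exception stays an intentional annoyance rather than a silent permanent hole.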

52

Measuring vulnerability management

Risk:
• Current residual risk
• Average residual risk
• Maximal residual risk?

If it’s not a risk, what is it? It is a different type of risk (not knowing something, or not being able to respond in time):
• Time-to-detect metrics (coverage estimations)
• Time-to-remediate metrics
• other ITSM metrics (do we have enough resources? do we work efficiently enough?)

And you need a pen test, too. After everything else is handled.

53

Questions?

arkenoi@gmail.com
