Friday, March 4, 2016

RSA Talk - “IOCs are Dead - Long Live IOCs!” - Ryan Kazanciyan

Ryan Kazanciyan, Chief Security Architect, Tanium ( @ryankaz42 )
Co-Author, Incident Response & Computer Forensics

This talk was delivered 04 March 2016 at the RSA Conference in San Francisco. 

I'm providing a brief reaction/summary, and then my notes. These are my sort-of free-form notes, so apologies if they are only semi-comprehensible.

I’ve always been skeptical of the threat intel (really mostly threat data) trend. It’s not a bad idea, but it seems like really just a new analogue to signature-based detection; it can only help detect something that someone else has already detected someplace else.

Ryan shares my concerns with the use of threat data, and gives some other reasons why its use is problematic. Not only is the data by definition incapable of detecting truly NEW threats, but it is inconsistent and often of dubious use even for its intended purpose(s). He gives some good ways to make better use of such data, as well as some methods of scouring your own systems for high-value threat data.

Five years ago, the compiling and sharing of indicators of compromise (IOC) seemed like it would “save the world” from attacks.

Today, this has still not become a reality.

  • Brittle indicators with a short shelf life
  • Poor quality of data in IOC feeds
  • Hard to build effective home-grown IOCs
  • Tools for ingesting and detecting IOCs are inconsistent in quality
  • IOCs are applied to a limited scope of data

Threat Intelligence is not equivalent to threat data
- Intelligence includes context and analysis
- However, good threat data is required for useful intelligence
- For this talk, we’re just talking about threat data, not intelligence

IOCs are Brittle:
  • IP addresses and malware file hashes are most common
  • URLs/hostnames are next most common
  • File names are another common type
  • 4/5 of malware types last less than a week; 95% less than a month
  • C2 IPs and domains have a short lifespan
  • Shared hosting means malicious sites often share an IP with compromised hosts (leads to false positives)
  • Even paid feeds are not necessarily high quality
  • Informal look at IOCs from paid, subscriber-only feeds
    • File IOCs that include both hash and filename (filename easily changed, will lead to false negatives)
    • File hashes included for files that are unique to a specific host
    • Legitimate software libraries included as malware hashes because they were leveraged in some piece of malware
  • Hard to avoid being too specific (leading to false negatives) or too general (leading to false positives)
  • High-effort IOCs work for a specific investigation, but not for generic use across enterprises
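The specificity trade-off above can be shown with a tiny sketch (all names and values here are my own invented examples, not from the talk): an IOC that ANDs a file hash with a filename breaks as soon as the attacker renames the file, while a hash-only IOC still fires.

```python
# Illustrative only: why over-specific IOCs cause false negatives.
import hashlib

def match_ioc(artifact, ioc):
    """Return True only if every field present in the IOC matches the artifact."""
    return all(artifact.get(k) == v for k, v in ioc.items())

payload = b"malicious payload"
sha256 = hashlib.sha256(payload).hexdigest()

brittle_ioc = {"sha256": sha256, "filename": "evil.exe"}  # hash AND name
hash_only_ioc = {"sha256": sha256}

# The attacker drops the same payload under a different name:
artifact = {"sha256": sha256, "filename": "svch0st.exe"}

print(match_ioc(artifact, brittle_ioc))    # False -> false negative
print(match_ioc(artifact, hash_only_ioc))  # True  -> still detected
```

Going the other way (matching on filename alone) would swap the false negatives for false positives, which is the tension the talk describes.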

IOC Detection Tools are Inconsistent:
  • Tools support (and don’t support) different observables from standards (OpenIOC, Cybox, STIX, YARA, etc.).
  • Logic structures in IOCs are not always implemented in the same way.
  • Data normalization is a problem.
  • The standards have some issues. E.g., OpenIOC was not created intentionally as a standard, but is merely the XML format created for Mandiant’s MIR tool. This has led to some serious issues.

Broadening the Scope of Endpoint Indicator Usage:
  • Most common host data in SIEMs:
    • AV/anti-malware
    • HIPS logs
    • Event log data, usually for only a subset (e.g. servers)
  • Things like file hashes will therefore simply never be seen in the SIEM.
  • Matching on forensic telemetry data
  • Matching on live endpoints:
    • Gives access to everything in memory, files on disk, and event logs.
    • Can be high-impact and hard to scale.
  • The Goal:
    • Mixture of the above methods to maximize the value of brittle IOCs.
    • Increase cadence of analysis as tools & resources permit.
    • Taking shortcuts in coverage (“I only need to check my most important systems”) will leave gaps and lead to failure.
    • Malicious code and actions rarely take place on the actual target servers.

Shrinking the Detection Gap:
  • The most relevant threat intelligence is what comes from within your own environment.
  • Over time, the effectiveness of looking for known IOCs has decreased.
  • Looking for attacker methodology and outlier files/behaviors has correspondingly become more effective.
  • The reality is that automation using known IOCs/threat data is good at finding the easiest things to find.
  • Preventative controls need to fill a large part of that gap, as does internal analysis.

Looking inward to hunt:
  • Derive intelligence from what is “normal"
  • Build repeatable analysis tasks — a repeatable process
  • More is not always better — start small with high-value indicators
  • “What easily observable outlier conditions do intruder actions create?"

Example of Duqu 2.0 report:
  •  All the various samples created a scheduled task to run an msiexec.exe command.
  • However, the provided IOCs consisted of a long list of file hashes and C2 IPs.

Example of analysis of scheduled tasks:
  • create list of what accounts are used to run scheduled tasks
  • create list of what actions/programs are being run using scheduled tasks
  • Look through this for outliers
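The scheduled-task analysis above can be sketched in a few lines (the data and field names are invented for illustration): tally which accounts and commands are used across hosts, then surface the rare ones for review.

```python
# Hypothetical sketch of outlier hunting over collected scheduled-task data.
from collections import Counter

tasks = [
    {"host": "ws01", "account": "SYSTEM", "command": "defrag.exe"},
    {"host": "ws02", "account": "SYSTEM", "command": "defrag.exe"},
    {"host": "ws03", "account": "SYSTEM", "command": "defrag.exe"},
    {"host": "ws04", "account": "jdoe",   "command": "msiexec.exe /i payload.msi"},
]

def outliers(tasks, field, threshold=1):
    """Return values of `field` that appear on `threshold` or fewer hosts."""
    counts = Counter(t[field] for t in tasks)
    return [value for value, n in counts.items() if n <= threshold]

print(outliers(tasks, "account"))  # ['jdoe']
print(outliers(tasks, "command"))
```

The same frequency-counting pattern applies to services, run keys, or any other enumerable persistence mechanism.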

"Hunting in the Dark” talk gives more detailed examples.

Questions for your threat feed vendor:
  • Where is the data coming from?
    • actual IR engagements
    • honeypots
    • auto-generated sandbox data
    • firewall and other device/system data
  • What is the breakdown of observable types? (IPs vs URLs vs file hashes, etc)
  • What is the QC process? (if there is one!)

RSA Talk - “The Pivot” - Jonathan Trull

Jonathan Trull, VP for Information Security, Optiv ( @jonathantrull )

This talk was delivered 04 March 2016 at the RSA Conference in San Francisco. 
I'm providing a brief reaction/summary, and then my notes. These are my sort-of free-form notes, so apologies if they are only semi-comprehensible.

The idea that we need to move beyond just perimeter protection and do better on detection of and response to ongoing intrusions is a repeated theme in the industry over the past several years. Many organizations are still not really implementing this, though, or implementing it well.

This was a great talk with lots of good practical ideas for defense that are implementable by mid-sized organizations. See the notes for specific technical details, but the key points are:
  • Don’t just go with default logging settings on devices and security tools.
  • Central logging and analysis is key.
  • Develop a strategy of specific indicators to look for to make that central logging and analysis effective.

Attackers’ immediate goal is to exploit and compromise a host.
However, this is not their true goal. They want to get deeper and gain access to your key systems and information.

To do this requires the attacker to move on from that initial compromised host — to pivot.

On average, attackers’ “dwell time” in a victim network is 205 days (2015 numbers from Verizon DBIR).
Organizations are still most frequently made aware of compromises by law enforcement or other contacts from outside the organization.
"We don’t necessarily have to be that good, but we have to be better than this."

60% of attackers are able to compromise an organization within minutes.
75% spread from Victim 0 to Victim 1 within 24 hours.

Time is NOT on Our Side:
  • 50% of users open emails and click on phishing links/attachments within 1 hour.
  • Median time to first click is 1 minute, 22 seconds.
  • Half of CVEs are being actively exploited within a month of their publication.

Optiv Simulated Attack Lifecycle:
- Set up lab environment with nine common types of security software/tools
- Conducted simulated attacks: exploitation, lateral movement, exfiltration
- Monitored tools to see what they were able to do to aid in detection of these attacks

How attackers typically pivot & move laterally in a network:
  • leveraging native tools: cmd.exe, powershell, at.exe, Net use, WMI
    • Difficult to detect, as no software is written when these tools are being used
  • using tools to compromise creds: Mimikatz, WCE

Telltale signs of a pivot:
  • Signs can appear on the source host where the attacker is already operating, on the destination machine that they are trying to access, and on the network between them
  • Unusual use of commands that end-users rarely use: scheduled tasks (at.exe), WMI, PowerShell, RDP
  • nmap, ncat, and other similar tools occasionally; also Sysinternals tools (PsExec, …)
  • mapping shares
  • Windows Event Logs
  • Events to look for:
    • Account lockout (4740)
    • User added to privilege group (4728, 4732, 4756)
    • Security-enabled group modification (4735)
    • Successful User Account Login (4624)
    • Failed User Account Login (4625)
    • Account Login with Explicit Credentials (4648)
    • Process Created (4688)
    • Service Being Started (7035/7036)
  • Windows Logon Types:
    • Interactive
    • Network
    • batch (scheduled tasks)
    • Service
    • Unlock
  • Using Process Created Event (4688)
    • documents process, user, and parent process
    • Does NOT include command line arguments (by default)
    • This is disabled by default — can be enabled by Group Policy
      • Computer Configuration > Policies > Windows Settings > Security Settings > Advanced Audit Configuration > Detailed Tracking
    • Enable command line args by Group Policy also
      • Enable via GPO – “Include command line in process creation events”
  • Prefetch Files
    • Good place for forensic analysis to see executable, DLLs called, count of times process has run, most recent run time
    • WinPrefetchView
  • Windows Special Groups
    • Introduced in Windows 7/Windows Server 2008
    • Tracks logons of privileged accounts
    • Event ID 4964
  • Pass the Hash
    • Event ID 4624 (for success; 4625 if failed) - Logon Type 3, Auth Package NTLM
    • Filter: Not a domain logon, not an anonymous logon
  • New Scheduled Tasks
    • Event ID 7035 created by at[#].exe
  • Privilege Escalation
    • Login from one non-workstation host to another non-workstation host
    • Login from one workstation to another
    • Login with service account (or attempt to do so)
    • Creation of new domain admin (or elevation of account)
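The pass-the-hash indicator described above (Event ID 4624, Logon Type 3, NTLM auth package, excluding domain and anonymous logons) can be expressed as a simple filter over already-parsed events. The field names below are my own simplification of the Windows event schema, not an exact mapping.

```python
# A minimal sketch of the pass-the-hash filter over parsed 4624 events.
def looks_like_pth(event):
    return (
        event.get("event_id") == 4624
        and event.get("logon_type") == 3               # network logon
        and event.get("auth_package") == "NTLM"
        and not event.get("domain_logon", False)       # not a domain logon
        and event.get("account", "").upper() != "ANONYMOUS LOGON"
    )

events = [
    {"event_id": 4624, "logon_type": 3, "auth_package": "Kerberos",
     "domain_logon": True, "account": "CORP\\alice"},        # normal
    {"event_id": 4624, "logon_type": 3, "auth_package": "NTLM",
     "domain_logon": False, "account": "localadmin"},        # suspicious
]

suspicious = [e for e in events if looks_like_pth(e)]
print(len(suspicious))  # 1
```

In practice this would run over a central log store (SIEM query or log export), not a hand-built list.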

How to identify/detect the signs and defend against the pivot:
  • 100,000 foot view vs. In the weeds

Optiv’s Results on Comparing Seven Common Endpoint Security Solutions:
  • Intentionally unpatched/vulnerable hosts
  • Endpoint security solution was the only defense measure on the host
  • Types:
    • Endpoint Protection Platforms (full suites)
    • Exploitation Mitigation
    • Endpoint Detection and Response (EDR) with App Control (whitelisting)
    • EDR without App Control
  • None of the types of controls were silver bullets
  • None blocked most pivot attempts; EDR partially blocked most types
  • Generally they logged the info necessary to detect the pivot, but not clearly out-of-the-box (required research and testing to find)

  • Enable sufficient logging
  • Develop a threat model for how an attacker would go after your “crown jewels"
  • Central logging and analysis is key
  • Consider using honeypot(s)
  • Implement enhanced authentication for admins and pass-the-hash mitigations

RSA Talk - “Defense in Depth is Dead; Long Live Depth in Defense" - Matt Alderman

Matt Alderman ( @maldermania ) is VP of Strategy at Tenable Network Security.

This talk was delivered 03 March 2016 at the RSA Conference in San Francisco. I'm providing a brief reaction/summary, and then my notes. These are my sort-of free-form notes, so apologies if they are only semi-comprehensible.

I’m not convinced this is a particularly valuable distinction. The title and terminology make it sound more radical than it is. The real message appears to be simply that we need to more closely integrate and monitor our defenses, which is unquestionably a good point and a vital strategy.


Defense in depth isn’t helping us tackle the attacks we are facing.

The traditional defense in depth model includes:
  • Prevention
  • Detection
  • Response

We haven’t connected the different solutions at the different security layers, so there are gaps that intruders hide in.
The different solutions at the different layers are often managed and owned by different groups in the organization.

“It is time to declare that defense in depth is dead; we need a new approach."

The depth in defense model:
  • Visibility
    • Discover - assets (physical & virtual, apps, data, mobile, cloud)
    • Assess - vuln assessment, config audit, malware detection
  • Context
    • Monitor - log collection, activity monitoring, packet inspection, threat intel
    • Analyze - event correlation, anomaly detection, behavioral analysis
  • Action
    • Respond - Notification & alerting, remediation, patch mgmt
    • Protect - patch installation, config changes, port/service modification, device isolation

This model provides:
  • Visibility
  • Context
  • Action

How do I get started?
  • Do you have continuous visibility to identify unknown assets/devices?
  • Do you have continuous visibility into the security state of your assets?
  • Do you have critical context to prioritize threats and weaknesses?
  • Do you have critical context to measure security posture & assurance?
  • Are you able to take decisive action to respond to attacks?
  • Are you able to take decisive action to remediate your assets?

Thursday, March 3, 2016

RSA Talk - "Sophisticated Attacks vs. Advanced Persistent Security” - The Irari Report team

This talk was given on 03 March 2016 at the RSA Conference in San Francisco by:

Ira Winkler, CISSP, President, Secure Mentem @irawinkler 
Araceli Treu Gomes, Subject Matter Expert – Intelligence and Investigations, Dell SecureWorks @sleepdeficit_ 

I'm providing a brief reaction/summary, and then my notes. These are my sort-of free-form notes, so apologies if they are only semi-comprehensible.

Nothing really new or revolutionary here, but a good summary overview of what adversaries are and aren’t doing to perpetrate attacks, and what organizations are and aren’t doing to stop them. Key takeaways:
  • Even most high-profile attacks really aren’t all that sophisticated, just persistent, adaptive, and opportunistic.
  • Security needs to be adaptive.
  • Assume you won’t achieve perfect prevention, so ensure you can backstop prevention with detection and response.
  • The role of the human is vital; it’s not just a technology problem.


Why the Hype Matters to Us (Why it Hurts Our Efforts)
- It destroys our focus
- It changes the story
- It leads to asking the wrong questions
- It deflects blame
- If the attacks are so “sophisticated” and even the top organizations can be hit, nobody will expect us to actually stop attacks.

“Sophisticated” Attack:   (the Hacking Team hack)
- Password was “passw0rd"
- Able to access and download data as engineer
- The network was apparently flat, allowing open access to data
- Sophisticated? HELL NO!

IRS Breach:
- 400k plus records compromised
- ~$50M dollars stolen
- Compromised authentication scheme
     - Required information “only the taxpayer had” (info from credit report/tax returns)
- IRS Commissioner said they couldn’t have stopped this, because:
     - Smart criminals used lots of advanced computers and hired smart people
- Went undetected for the first 400k attempts

Ashley Madison:
- Compromise of clients and client info
- Violated terms of service (didn’t delete accounts and data as promised)
- Probably carried out via SQL injection (one doc stolen and released by the attackers was an internal security audit saying they had a SQL injection problem!)
- Pass1234 was the root password on all servers
- Poor password encryption
- Network was poorly segmented, allowing for easy lateral movement

Anthem & Premera (and 275 other healthcare orgs):
- 80M records lost at Anthem, 11M at Premera
- Watering hole attack suspected at Anthem
- Phishing attack suspected at Premera
- Admin creds stolen
- Both went undetected for ~9 months
- Massive querying of data (i.e., it should have been detectable)

Common Problems:
- Improperly segmented networks
- Poor monitoring/detection
- Not monitoring what matters
- No whitelisting
- No multi-factor authentication
- Phishing messages

So what IS a sophisticated attack?
- Not caused by phishing
- Malware not detectable by signature
- Not an easily-guessable password
- Not exploiting a known vuln for which a patch was available
- Multifactor auth was in use
- Decent detection tech was in use and being paid attention to
- Proper network segmentation in use
- Least privilege in effect

Advanced Persistent Threat? 
No. ADAPTIVE Persistent Threat
- “Advanced” implies they are sophisticated and unstoppable
- “Adaptive” implies that they are finding the weakness in your system
- Successful APT attacks exploit unforced errors on your part

Advanced Persistent Security Program
- be adaptive
- assume failure
- exfiltration prevention > intrusion prevention
- disruption is an acceptable strategy

RSA Talk - "Proactive Measures to Mitigate Insider Threat" - Andrew Case

Andrew Case ( @attrc ), Director of Research at Volexity, an infosec advisory firm headquartered in Washington, DC.

This talk was given on 02 March 2016 at the RSA Conference in San Francisco. The talk was surprisingly well-attended; the most packed session I’ve been to. I guess insider threat is weighing heavy on people's minds these days?

I'm providing a brief reaction/summary, and then my notes. These are my sort-of free-form notes, so apologies if they are only semi-comprehensible.

Andrew's case examples were very interesting, and the strategies he gives are sensible, if not revolutionary. Limiting and monitoring the use of removable media and of cloud file sync/storage services is a strong recommendation which I make to many of my clients. Identifying where your key intellectual property is located and concentrating monitoring on those locations is another excellent recommendation. Separation of duties is a common requirement, but a difficult one for many organizations to implement. Tighter controls on users at termination and inventory of issued equipment down to the level of noting serial numbers of hard drives and other components of laptops is also a stretch for most organizations.


PWC on insider-driven incidents:
- 70% of incidents
- 60% of incidents at manufacturing orgs

Verizon DBIR:  20.6% of incidents characterized as insider incidents

Approaches to insider threat

Typical approach is passive defense against insider threat:
- No special/extra logging or security measures
- No automated alerting or remote logs
- This is easy and provides the data needed for forensics after the fact
- However, anti-forensic techniques can defeat these measures, and they make no progress toward eliminating/preventing the threat

Next level is Detection:
- enhanced logging (e.g. file access, removable media usage)
- Generate alerts on defined events
- This can inhibit malicious insiders and find activity before it causes greatest potential harm
- Sometimes doesn’t allow for response until irreparable harm is done
- Requires significant active effort from security team

Next level is Prevention:
- Prevent use of removable media
- Block personal email and file sync/storage services
- Block end-user software installation
- Stops many activities before they start, and is cheapest approach once implemented
- May be a problem in company culture, and can inhibit productivity, especially for particular departments/users/roles

Andrew suggested that the ideal strategy may be somewhere between the Detection and Prevention models.

Real World Case Examples of Insider-related Incidents

Case #1:
Financial institution employee leaves and takes 1/3 of the firm's employees with him.
- also took many key documents with him

Investigation showed that the victim’s network was very open and access was not very limited.
User had access to file servers and applications/databases for which he had no legitimate need.
Data was removed via USB, personal email, and printing.

Solution recommendations:
  • Secure Network Architecture
  • Monitor file share access
  • Concentrate monitoring around key sensitive file data
  • Limit USB drive/removable media access
  • Limit use of personal email accounts and cloud file storage/sync
  • Address printing and scanning as an exfiltration method (hard problem)

Case #2: Abuse of Power
Plant manager at manufacturing company using “down time” on company’s machines to run a side business.
Some materials were purchased personally; others were ordered using the company’s accounts.
Only detected when a machine malfunctioned.

Potential signs that were missed:
  • Perpetrator logged in to control systems during off hours 
  • Manufacturing jobs were scheduled with no associated customer work order
  • Perpetrator deleted files and logs to cover his tracks

Problem was, the plant manager was the primary operator/administrator of the systems whose logs could have indicated his malfeasance.

Solution Recommendations:
  • Monitor user logins
  • Monitor system usage
  • Alert on anomalous indicators of the above!
  • Don’t allow one person to control all aspects of key business processes. There must be someone else in the loop and someone else auditing the process.

Case #3: Offline Exfiltration
Victim organization had very tight data exfiltration controls
User removed hard drive from his PC, brought it home, and used forensic tools to remove data
Hard drive was unencrypted

Solution Recommendations:
  • Utilize full disk encryption (FDE) for everything
  • Check out offline decryption capabilities of your FDE solution

Case #4: Anti-Forensics
Two key employees leave the victim company simultaneously.
Soon after, important clients began terminating contracts.
Company found their clients were moving to a brand new company founded by… the two departed employees.

Both employees had done factory reset on their company-provided Android phones.
One employee ran CCleaner before turning in his laptop.
Other employee replaced the hard drive on his laptop with a brand-new drive of same make and model.

Solution Recommendations:
- track application downloads and installs (prevent use of anti-forensics software)
- application whitelisting (prevent use of anti-forensics software)
- better termination procedures:
     - assess and preserve employee equipment post-termination
     - don’t immediately re-use systems after someone leaves
     - check components against inventory
     - check historical use of removable media
(Andrew made the point that these types of stricter checks might be done only in certain cases, e.g., key employees, those with access to highly sensitive data, and those leaving under bad circumstances.)

Other Overall Recommendation:
- Consider bringing in an outside party for a “capture the flag” exercise, similar to an insider pen-test, to see if they can gain access to and exfiltrate specific data without detection.

Wednesday, March 2, 2016

RSAConference Talk "Giving the Bubble Boy an Immune System so He Can Play Outside" - Kevin Mahaffey

Kevin Mahaffey ( @dropalltables ) is the founder and CTO of Lookout, one of the first mobile-centric security/anti-malware companies. 

This talk is intended to explore how many large and forward-thinking companies are removing many traditional elements of security architecture (e.g., anti-virus, VPNs, firewalls) in favor of a data-driven security model. The talk was given on 02 March 2016 at the RSA Conference in San Francisco.

I'm providing a brief reaction/summary, and then my notes. These are my sort-of free-form notes, so apologies if they are only semi-comprehensible.

I am a big fan of the concept of internal resilience and immunity as an approach to security, as opposed to building a bigger, better wall at the perimeter. This is a more and more important approach as mobile devices, BYOD, external cloud service providers, and other trends take hold in organizations. I'm not convinced that the data-driven approach is the road forward, though. Data analytics is a powerful tool, but at some point it becomes an exercise in navel-gazing. If you literally log and watch everything, including the system that is storing and analyzing the logs, the data grows virtually without limit. Big data technologies are making this more practical, but detection and remediation still lag. The ideas Kevin is sharing are very interesting, but these methods still seem like an enhancement to me, rather than a replacement for traditional security devices/software.

The best resource mentioned was the Google "Beyond Corp" paper.


The real world does not match the theoretical model of secure system architectures. Mobile and other devices may not be patchable by the organization, vendor-owned/managed systems are present, and users find ways to “work around” policies and safeguards.

The typical approach to security architecture attempts to create a sterile environment inside the network, keeping all the “bad things” out. The evolutionary analogy is that the skin provides a barrier, but it does not, and is not intended to, keep everything bad out. There is an intricate immune system to detect and defeat pathogens that make it past the skin level.

Least (manageable) privilege is a typical "solution" to the permissions problem
- complex to manage
- become calcified and doesn’t respond to changing requirements (“privilege accretion”)

"We need to engineer an immune system” for the organizational network.
- operationalized data + automation
Analogous to the way credit card fraud prevention works. You don’t need to get permission ahead of time to do something; instead transactions are analyzed and likely malicious/fraudulent actions are identified and dealt with.

Facebook and Square push user auth and some alert response to users and managers instead of IT or SecOps.

AEDA loop — Acquire, Enrich, Decide, Act

“I’ve never heard anyone say they have TOO MUCH visibility into their infrastructure."
If a given component were compromised, what specific data element would clearly indicate that?
“Should we put sensors on the device or the network?"
- both have problems
- on-device sensors have a compromise race condition; malware can potentially disable the sensor

The Privilege Accretion problem:
- privileges get added when needed, but not removed when no longer needed
- Square’s system: see Diogo Monica’s talk at Security@Scale
- model privileges to roles
- Emergency “break glass” access; can be used, but generates an alert when used

Many security analytic data systems don’t have enough data to be effective
You’re forced to choose between too many false positives or too many false negatives.
The only way to get better is to add more context.
Other times, there is enough (or too much) data, but it’s not operationalized/useable.

Two techniques:
- analyzing data
     - data -> information -> knowledge -> wisdom
     - static/dynamic analysis of executables
     - parsing of protocols
     - data normalization
     - “You can’t extract information that’s unsupported by the underlying data."
- joining data
     - isolated data is of limited value
     - provides context
     - foreign key problem - can’t join datasets that have > 1 factor to correlate
     - data must be normalized for smooth joining
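The joining/normalization point above can be made concrete with a small sketch (the datasets and the `normalize_host` rule are invented for illustration): until a shared key like the hostname is normalized, the two datasets can't be correlated, and once it is, each alert picks up context.

```python
# Hypothetical sketch: normalize a shared key so isolated datasets can be joined.
def normalize_host(h):
    """Lowercase and drop the domain suffix so 'WS01.corp.local' matches 'ws01'."""
    return h.strip().lower().split(".")[0]

alerts = [{"host": "WS01.corp.local", "alert": "mimikatz"}]
inventory = [{"host": "ws01", "owner": "finance"}]

index = {normalize_host(r["host"]): r for r in inventory}

joined = [
    {**a, "owner": index[normalize_host(a["host"])]["owner"]}
    for a in alerts
    if normalize_host(a["host"]) in index
]
print(joined)  # the alert now carries owner context
```

This is the "foreign key problem" in miniature: without a single normalized key, the join simply produces nothing.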

Must ensure that data sources are accounted for in terms of reliability and trust.

Input -> Model

Anomaly Detection
- Good, in that it can find novel threats
- Bad, in that new things happen all the time that are valid and benign
- anomalies, on their own, are not sufficient as indicators

Supervised Machine Learning
- train the system with inputs that have known outputs
- train the system to arrive at the expected output from those training inputs

Combined Systems are generally going to be the solution.
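A toy illustration of the combined approach (all hashes here are invented placeholders): known-bad matching catches what has been seen before, while an anomaly check flags the never-seen for analyst review; neither alone is sufficient.

```python
# Illustrative only: combining known-bad lookup with a simple anomaly check.
known_bad = {"hash_evil_1"}                     # threat data / signatures
baseline = {"hash_good_1", "hash_good_2"}       # hashes normally seen in the env

def score(sample_hash):
    if sample_hash in known_bad:
        return "known-malicious"   # signature-style hit
    if sample_hash not in baseline:
        return "anomalous"         # novel -> needs analyst review
    return "benign"

print(score("hash_evil_1"))   # known-malicious
print(score("hash_new_9"))    # anomalous
print(score("hash_good_2"))   # benign
```

Note the anomaly branch embodies the caveat above: "anomalous" means "review me," not "malicious."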

Malware Models:
-Known Malware
-Correlated 0-day
     - traverse connections to known malware
-Uncorrelated 0-day
     - expensive and noisy

Machines (currently) cannot make all the decisions.
Over-Automation or automating too quickly can create, essentially, an autoimmune disease.


Start by improving your IR team’s UX (user experience)
- gather all the data in one place (e.g., SIEM)
- ensure it is useable

Build feedback loops
- figure out what works and what doesn’t, and change functionality in response

Pull humans out, a little at a time
- start by having machines recommend actions, with humans approving
- if rejection rates are low (maybe under 1% or even 0.1%), you can remove the human approval step
- retain “circuit breakers” that keep a human in the loop if actions are particularly critical or if decision volume is high
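The gating logic above can be sketched as a single decision function (thresholds follow the talk's rough numbers; the function and its parameters are my own illustration): automate an action class only once its historical human-rejection rate is very low, with a circuit breaker that puts a human back in the loop at high decision volume.

```python
# Sketch of "pull humans out a little at a time" with a volume circuit breaker.
def should_automate(approved, rejected, volume,
                    max_reject_rate=0.01, volume_breaker=1000):
    total = approved + rejected
    if total == 0 or volume > volume_breaker:
        return False                          # keep a human in the loop
    return (rejected / total) <= max_reject_rate

print(should_automate(approved=995, rejected=5, volume=200))   # True
print(should_automate(approved=90,  rejected=10, volume=200))  # False
print(should_automate(approved=995, rejected=5, volume=5000))  # False (breaker)
```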

Square “Sting” system sends some alerts to humans. 
- similar to how credit card companies ask the user if they have taken an action and if it was intentional
- also cuts rate of alert-creating actions

Saturday, January 16, 2016

Notes on "No Easy Breach" talk at Shmoocon by Mandiant Guys

This was a very informative talk on a seriously epic breach investigation by Matthew Dunwoody (@matthewdunwoody) and Nick Carr (@itsreallynick) of Mandiant. They were part of a 4-person team investigating/remediating a big APT/nation-state breach of an unspecified organization. The below are sort of stream-of-consciousness notes, so sorry if it's a bit of a mess.

The initial breach was via an "EFax" spearphishing message -- sweet!

1039 compromised systems
1000+ unique malware samples
1000+ unique C2 domains/IPs
7000+ attacker files including scripts & tools

Pace: Infected ~10 systems/day

Client insisted on pulling systems offline when infection found, despite responders' urging not to do so.

Due to the volume and pace of machines compromised, the team had to abbreviate the typical deep-dive forensic analysis to just quick triage.
Developed indicators to assist with more efficient analysis.

Focused on:

  • lateral mvmt
  • data theft
  • New back doors, etc.
  • deviations from typical known attacker TTPs

Used client personnel to assist with monitoring and analysis.
Leveraged SCCM to look for known files, directories, etc.

Attacker used anti-forensic techniques

  • secure deletion, moved from system to system every 3 days or so
  • used strong crypto in C2, used exclusively compromised 3rd party sites and social media
"Rolling Remediation" showed our hand to the attacker and allowed them to know which evasion techniques were working and which weren't.
Client used varying technology across business units -- made analysis difficult.

Attacker used Sysinternals "sdelete" tool, but that leaves an EulaAccepted key in the registry.

Team emphasized the use of automation to find new examples of known IOCs.

Sparklines used for documenting and visualizing time & volume of activity
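The sparkline idea above is easy to reproduce: render per-day event counts as a one-line unicode bar series for quick visual triage. This is a generic sketch, not the team's actual tooling.

```python
# Render a list of counts as a unicode sparkline (one character per period).
BARS = "▁▂▃▄▅▆▇█"

def sparkline(counts):
    hi = max(counts) or 1  # avoid division by zero when all counts are 0
    return "".join(
        BARS[min(len(BARS) - 1, int(c / hi * (len(BARS) - 1)))]
        for c in counts
    )

daily_events = [0, 2, 1, 9, 3, 0, 0, 14]
print(sparkline(daily_events))
```

A burst of attacker activity shows up as a visible spike without opening a charting tool.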

Lesson: "Add Visibility & Never Stop Looking"

Network time provides a reliable chronology.

"Once an attacker is found, fight to maintain line-of-sight" 

Persistence: run keys, .LNK files, services, WMI, scheduled tasks, overwriting existing scheduled tasks, over-writing legitimate files 

Unique malware (by hash, file name, file size, and C2) per host!

Bro IDS' ssl.log shows a lot of info on SSL sessions even if you can't decrypt them. One element is the cipher in use, and the attacker here was using an unusual cipher. Bro also showed the email used for the key and automatically ID'ed self-signed certs.
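The unusual-cipher hunt can be sketched as a frequency tally over (server, cipher) pairs pulled from ssl.log. The records below are invented, and real ssl.log parsing should honor the file's #fields header rather than hard-coded positions.

```python
# Hypothetical sketch: surface rarely-seen TLS cipher suites from ssl.log data.
from collections import Counter

conns = [
    ("mail.corp", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
    ("web.corp",  "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
    ("web.corp",  "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
    ("10.9.8.7",  "TLS_RSA_WITH_RC4_128_MD5"),   # unusual cipher stands out
]

cipher_counts = Counter(cipher for _, cipher in conns)
rare = [cipher for cipher, n in cipher_counts.items() if n == 1]
print(rare)
```

The same tally works for the other ssl.log fields mentioned (cert subject emails, self-signed flags).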

Prioritize the UNKNOWN

"Methodology IOCs" helped identify systems that had no known malware on them.

PyInstaller or Py2Exe, then packed w/ UPX

Advanced Techniques:
- used WMI to persist backdoors and schedule backdoors to be extracted and executed MONTHS IN THE FUTURE
- used PowerShell for backdoors and ran Invoke-Mimikatz (evaded AV)
- embedded PowerShell code in WMI class properties to execute on remote systems
- attacked Kerberos tickets to make tracking of lateral movement difficult

WMI forensics: parsed the strings on the endpoint (Willi Ballenthin has Python modules for parsing this now on his GitHub)

Team enabled PowerShell 4.0 logging.

Final takeaway:
"You must match or exceed the attackers' intensity"

Friday, January 15, 2016

Shmoocon Firetalks 2016

So the below are my fairly raw and hopefully mostly-accurate notes on the firetalks given tonight (Friday, 15 Jan 2016) at Shmoocon. If some flow better than others... sorry, it's not prose, just stream-of-consciousness.

Amazing panel of judges at Firetalks this year: Jayson Street (), Brian Krebs (@briankrebs), Space Rogue (@spacerog), and... sorry, I did not catch the other fellow's name! (Probably someone very famous whom I should know on sight. Well, geek-famous, anyway.)

Matt Nelson (@enigma0x3) - "Red Team Upgrades - Using SCCM for Malware Deployment"

Abusing SCCM for malicious purposes, and how some admins fail to secure it properly.

"If you administer SCCM as a domain admin, you're doing it wrong!"

Why use SCCM in Red Teaming? (Seems like a question that answers itself!)

  • manages a ton of clients
  • live off the land/blend in
  • helps identify strategic targets
  • provides a built-in persistence mechanism
Abusing SCCM in hunting helps identify 

Abusing SCCM for compromise
  • Create a powershell script to fetch and execute your code
  • Since the org uses SCCM to install code this way all the time, your malcode install shouldn't stand out.
I have to say, getting into an organization's software distribution server as a red teamer is brilliant. The only higher level of 0wn@ge I've seen was when an APT got an organization to include their RAT in the IT team's gold build (Ghost image). It doesn't get any better than that!

Travis Goodspeed (@travisgoodspeed) - "Jailbreaking a Digital Two-Way Radio"
"I love China!"

Tytera MD380  中国排名第一  ("China's best-ranked!")

  • STM32F405 CPU
  • 1MB Flash / 168K RAM
  • HRC5000 Baseband
  • Two-slot TDMA (Poor man's GSM?)
  • Internationally trunked (repeaters)

Programmed via a Windows application (as frustrating as you'd expect;
some of the error messages are in English, some in Chinese)

All frequencies used by emergency services, etc., are registered with the FCC and easy to look up.
HOWEVER, there is a "talk group" number (26 bits) that you need to have in order to listen in/participate in their conversations.
Travis, however, patched the firmware to just match every talk group.

The firmware updates are encoded with 512-bit XOR.
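Since XOR with a repeating key is symmetric, the same operation both encodes and decodes the image. A minimal sketch (the key below is a placeholder, not the MD380's actual key):

```python
from itertools import cycle

# A 512-bit key is 64 bytes; repeating-key XOR is its own inverse, so
# xor_crypt(xor_crypt(data)) == data.
KEY = bytes(range(64))  # stand-in 64-byte (512-bit) key

def xor_crypt(data: bytes, key: bytes = KEY) -> bytes:
    """XOR data against the key, repeating the key as needed."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```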

Eliminating the Chinese font (which took up 1/4 of the RAM on the device) freed up a lot of memory for other things.

Dean Pierce (@deanpierce) - "Low-end Bug Bounties for the Masses"
"Technology is good. The proliferation of technology is what drives humanity."

"Bugs should be rare."
"Does anyone remember Full Disclosure?"

No more free bugs (circa 2009).
Industry-sponsored bug bounties are the new thing.
Of course, the underground 0-day trade is also a thing: shady people selling to shady organizations.

But there are still the small bugs in the small software, too small to be worth a formal bounty program.
So I made a crappy website! It's a private mailing list, $10/mo subscription. (@cheapbugs)
People can post random crappy appsec/webapp bugs and get paid.
The money from the subscription fees goes to the researchers finding the bugs.

One of the key aspects is that it's not just about the bug; the write-ups are important. How they found it, the tools they used, etc. Also fully-functional POC exploits.

The philosophy is that the small bugs matter, too. No bugs left behind!

The judges helpfully pointed out that publicly dropping zero-days might not necessarily be... entirely OK from a legal perspective!

Wendy Knox Everette (@wendyck) - "Failure to Warn You Might Get Pwned"
Looking at software defects from a product liability law perspective.

In general product liability, consumers can recover based on three theories of liability:

  • Manufacturing defects
  • Design defects
  • Failure to warn

In most cases, modern EULAs are used to shield software manufacturers from liability; if you agreed to the EULA, your recovery is limited to the terms of that contract.

Manufacturers can provide "risk reduction warnings" or "informed choice warnings," i.e., "hey, use at your own risk!" Obvious and generally-known risks don't necessarily require a warning.

One problem in the software realm is that different users with different purposes and levels of skill/experience would need very different warnings.

If a researcher finds a bug, reports it to the vendor, vendor doesn't fix... what is the liability situation?

Fear of stifling innovation is one policy reason why holding software makers liable could be undesirable.

Michael Ossmann (@michaelossmann) - "GreatFET, A Preview"

Based on the GoodFET project, for hardware hacking. "The GoodFET is an open-source JTAG adapter, loosely based upon the TI MSP430 FET UIF and EZ430U boards, as described in their documentation."

GreatFET is intended to make the virtues of GoodFET available to people who don't want to build their own boards.

The GreatFET project includes a main board and stackable add-on boards ("neighbors"). It has a beefy microprocessor with a high-speed USB interface (much faster than existing GoodFET boards). It also features a ONE HUNDRED PIN expansion bus. At that, it is still cheaper to mass-produce than the existing GoodFET boards.

  • Azalea - the primary board
  • Begonia
  • Crocus - inspired by The Next Hope Badge
  • Daffodil

When Space Rogue asked about cost, Michael's guess was about $30.

Best question: "Why not 'BobaFET'?"

@Da_667 (Tony) (umm... @Da_667) - "Fuck You, Pixalate!"

Amateur threat intelligence provider and malware analyst.
Threat, Inc. co-founder. Honeypot herder.

So Tony told a story about a claim from Pixalate about clients having been infected with the "Xindi Botnet". Pixalate provided no IOCs, and Tony and friends were unable to find any info on this alleged malware. Yet Pixalate was stating they were going to go to the media about the matter. So Pixalate did so, and several media outlets ran with the story.

Pixalate, by the way, is a data analytics firm involved in RTB ("real time bidding") for web ads. They also claim some "threat intelligence" capabilities related to ad analytics.

When Pixalate finally published their report...
They claimed to have discovered this botnet in 2015. They claimed that 6-8 million machines in over 5k organizations were infected/involved. The botnet allegedly was exploiting a bug in the OpenRTB protocol to manipulate the ad buying/bidding process. No hashes, no IPs, no other actual IOCs. Pixalate said in the report that any named organizations could contact them for infected IPs.

Finally, Pixalate did end up providing some URLs that were involved in the botnet (presumably as C2 servers).

Ron Bowes (@iagox86) - "DNS C&C"

How DNS works in 2 minutes. (Ron was clearly moving blazing fast.)

DNS tunneling with dnscat2

DNS is awesome, because it bypasses (well, is allowed unaltered through) most firewalls and other security controls. The challenge is that DNS is totally stateless and has little insight into sources of requests. Also, the protocol only allows for queries in one direction.

The solution is to use the session_id field for state maintenance. Ron created a custom TCP-like protocol over top of DNS. The latest version encrypts all sessions by default, and also authenticates sessions with a shared secret. Another new development is the ability to tunnel other traffic over dnscat2, similar to "ssh -L".
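The core smuggling trick can be sketched simply. This is a simplified illustration of the dnscat2-style idea, not dnscat2's actual protocol (which adds its own framing, session IDs, and encryption):

```python
# Sketch: exfiltrate bytes by hex-encoding them into DNS query labels,
# respecting the 63-character-per-label limit from the DNS spec.
# "c2.example.com" is a placeholder attacker-controlled domain.
MAX_LABEL = 63

def encode_query(data: bytes, domain: str = "c2.example.com") -> str:
    """Pack data into dot-separated hex labels under the C2 domain."""
    hexed = data.hex()
    labels = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_query(name: str, domain: str = "c2.example.com") -> bytes:
    """Recover the original bytes from a query name (server side)."""
    payload = name[: -len(domain) - 1]  # strip the trailing ".domain"
    return bytes.fromhex(payload.replace(".", ""))
```

The server answers each query with data of its own (e.g., in TXT records), which is what lets the custom TCP-like layer run in both directions.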

Very entertaining demo, showing the fifteen or so commands available. First time the tunneling function was ever demonstrated in public. The amount of DNS traffic involved is astounding!

Link to his slides (actually, way MORE slides than he actually presented!)