Webinar wrap up: “Safeguarding your network from Data Exfiltration attacks”



Read a recent news article about a data breach and it will likely mention data exfiltration. It’s usually the last event in a string of seemingly preventable mishaps that result in a lot of people getting free credit reporting (or, as of late, uncomfortable conversations with your spouse). The exfiltration bit can usually be found right before an “I-told-you-so” quote from a vendor:

“OMG you guys, we all need to start dealing with these attacks! We can help, please Google us.” Capt. Obvious, Chief Quote Officer, Security Vendor, Inc.

But what exactly is data exfiltration, and what solutions exist to prevent it? Exfiltration is a nebulous concept that doesn’t have its own Gartner Magic Quadrant. It’s also not something many organizations are focusing on. We find that the typical response to data exfiltration is to stitch together a bunch of different security solutions and controls – IDS/IPS, firewalls, DLP, endpoint agents, etc. – hoping the combination covers the final stage of an attack.

Then what does the industry do to test the racks of gear blinking away in the data center? A penetration test. Penetration testing serves an important role within your security program and offers critical validation (or negation) of your security posture – something every organization should be doing. However, it typically ignores exfiltration testing, offering the customer only a snapshot-in-time view. Historically, the focus of penetration tests has been the “tip of the iceberg”, from the outside in, without taking the overall environment and processes into account. It’s a very perimeter-intrusion-centric approach.

Organizations spend a lot of money on security. But does it work?  

Ask yourself: “Do our penetration tests cover all layers of our security budget?”

Last week our CTO, Trevor Hawthorn, presented on “Safeguarding your network from Data Exfiltration attacks.” During the talk, Trevor discussed recent data breaches, how attackers typically transmit data out of the victim’s network during a data breach, and why victim organizations are usually unaware of data being exfiltrated.

Testing for data exfiltration is very important, as it is part of the final steps taken by an attacker and the last chance to prevent a breach.

We believe the symphony of exfiltration requires an orchestrated approach, which is exactly why we developed XFIL.

XFIL is a patented data exfiltration testing platform that simulates the final stages of an attack: lateral movement and data exfiltration out of your network. The platform can help an organization test and validate existing security controls, network visibility, and identify weaknesses that could be exploited to circumvent those controls. XFIL simulates 260+ methods that are used to sneak data out of your network.

During our webinars, we try to save a lot of time for Q&A, as it’s often the most informative part of the talk. For those who were unable to attend, below are answers to some great questions about XFIL asked by attendees.

Q&A Recap:

Can we run XFIL from multiple networks? Yes, that is one of the intended use cases. Everyone has networks of differing levels of trust, so XFIL is meant to be run from those different networks and to test the layers of your defense.

What is a tactful way to address “we just had a pen test and they found nothing so are we certified and good to go?” (client has no IPS or SIEM so the findings are not accurate) You don’t want the absence of a really bad finding to be interpreted as a clean bill of health. Having a third party perform an assessment on your network/system/application is typically a time-based test that might not cover all the possible factors and situations. It is good that (at this particular moment) the test did not find any issues with your “fill in the blank”; however, you still need to ensure you have the appropriate levels of controls in place, as well as the people, processes and visibility to protect it.

Do customers use this (XFIL) to test their MSSP? Absolutely, your Managed Security Service Provider is part of your overall “system” and security monitoring processes. It is always recommended to test and validate that all of the different components of your security program are working in an orchestrated manner.

Will organizations be allowed to use XFIL to test their third parties? The short answer is yes. Part of XFIL’s functionality helps you answer some of the difficult questions regarding your security, such as how porous is this network, what do the internal network controls look like, and how mature is the security program. Your third party vendors can be seen as an extension of your organization and if you have the contract language to do so, you could quickly understand the maturity of the security posture of the third party by running XFIL in their environment.

Does this help at all with PCI and/or gaining PCI compliance? PCI DSS (Requirement 1.2) requires firewall and router configurations to implement default deny. XFIL performs outbound port scanning that will identify allowed services for easy comparison with approved business requirements.

How do we go back and correlate alerts and events with what XFIL does? We have two answers for this question: how things work currently, and what we’re planning to implement in the near future. Right now you can review the results of the XFIL assessment in the XFIL console as well as in whatever log management/SIEM solution(s) you have in place. We hope to have an updated answer (or solution), such as a custom Splunk application, to assist with correlation in a more efficient manner.

Can XFIL simulate malware indicators? Yes, there are several ways we do this today, such as beaconing out to known-bad DNS names from a network standpoint. As part of our product roadmap, we plan to build out various malware modules for clients so they can simulate recent malware indicators or command-and-control activities.

We appreciate all the questions and feedback from our webinar. If you have any additional thoughts or questions, please feel free to email the Stratum/XFIL team at info@stratumsecurity.com, or learn more about XFIL here: http://stratumsecurity.com/xfil/.

Slides from the webinar can be downloaded from here.

New Infographic: What the 2013 Verizon Data Breach Report tells us about phishing



I just published a new post on our ThreatSim blog, “What the 2013 Verizon Data Breach Report tells us about phishing”. We put together an infographic that lays out some of the highlights from the report.

13 Practical and Tactical Cloud Security Controls in EC2



Most cloud security blog posts, news articles, and other guidance found online address risks such as governance, contract language, forensics, and other higher-level topics. There isn’t a lot of tactical information that gives actionable advice on what you should be doing today to mitigate cloud-specific risks. Here at Stratum Security, most of our customers have some part of their infrastructure or corporate applications in some sort of cloud or outsourced hosting provider. Additionally, our ThreatSim SaaS is hosted entirely within Amazon’s Web Services (AWS) Elastic Computing Cloud (EC2). In this post I’ll list some of the things we’re doing to protect our own data in EC2.

I won’t spend too much time discussing the legal and contractual issues with the cloud. They are not too different from non-cloud outsourced hosting risks. There is a ton of resources and commentary available online that addresses this. It comes down to this: the stuff you care about, your data and availability of that data, lives and sleeps in someone else’s house. Yes, you own the data but it’s not 100% under your organization’s control. The biggest technical difference (among others) of having your data live in someone else’s data center or cloud is that you lose control over the hardware, and in some cases software, that stores, processes, and transmits your data.

If you are hosted on a cloud platform, you may share certain hardware components with other customers (e.g. the hypervisor). You need to understand what you can protect, where you lose visibility, and where you need/can apply extra security sauce.

Let me preface these recommendations with the following caveats:

  • This will be focused on EC2, an Infrastructure as a Service (IaaS) provider.
  • We’re an information security company with a niche SaaS solution. Our data may be more sensitive than your data.
  • Our recommendations are focused on security first, not necessarily performance. This is not one-size-fits-all.

As such, not all of the recommendations below are suited for every organization – but there are some controls that everyone can and should implement.

Below is a sampling of security controls we’ve implemented in our cloud. While they are specific to Amazon EC2, most will apply to other IaaS services:

1. Use a Virtual Private Cloud (VPC)
EC2’s “public” instances are essentially all on one big 10.x.x.x network. When you launch a new instance, it is stood up on some random 10.x.x.x IP address. By default EC2 security groups prevent other instances from talking to your machine unless you start opening things up. We launch all of our instances inside our own VPC so that all of our instances are on a specific network subnet (It’s a class B) that we defined. We have two different subnets within our VPC:

  • DMZ – Contains boxes that have something open to the Internet (e.g. web servers, mail servers, jump box, etc.)
  • Private – Servers that store sensitive information and don’t have a need to be exposed at all to the Internet such as our MySQL servers. The machines in the DMZ each have an elastic IP associated with them, the machines on the private network do not.

The use of a VPC has several advantages:

  • You don’t need to keep track of a big list of transient EC2 10.x.x.x IP addresses that may change as you start/stop instances.
  • Security groups within VPCs give you the ability to do egress (outbound) filtering; something that non-VPC security groups do not support.
  • Organization. When your DMZ and your private range are fixed, known subnets, it’s easy to tell what is what.

It’s easier to configure IP range security settings. For example, if you need to allow syslog (514/udp) from your DMZ to your central logging machine, you can simply permit the entire DMZ subnet rather than having to allow specific public cloud IPs.
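As a sketch of how that layout can be stood up from the command line (using today’s AWS CLI; the VPC ID and CIDR blocks below are placeholders, not our actual ranges):

```shell
# Create the VPC, then carve out the two subnets described above.
aws ec2 create-vpc --cidr-block 10.10.0.0/16

# DMZ subnet (instances here get elastic IPs):
aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.10.1.0/24

# Private subnet (MySQL and friends; no public exposure):
aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.10.2.0/24
```

With fixed ranges like these, the syslog rule above becomes simply “allow 514/udp from 10.10.1.0/24”.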

Amazon Virtual Private Cloud Documentation

2. Create a DMZ and a Private Network Within Your VPC
The DMZ model has been in use forever for a simple reason: if the servers that are exposed to the dirty Internet are ever compromised, it limits the scope of where the attacker can go to (usually) one or two ports (e.g. SQL). This is a relatively “low cost” control for most applications and has long been an accepted architecture.

3. Create Security Groups for Each Instance
Security groups are essentially simple firewall rules configured within Amazon’s hypervisor. Security groups are assigned to the instance at launch and cannot be changed later. With that in mind, we created specific security groups for every instance in our environment with the exception of our landing web servers. Our landing servers are replicas of each other so it’s not a big deal to have a “one-size-fits-all” security group there. But our jump box, database servers, mail servers, etc. all have security groups named after the specific instances (e.g. mysql01, mailer02, etc.). This gives us the flexibility to apply very granular rules to every instance without having to worry about an unintended consequence of a security group change that has been applied to several instances.
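A sketch of the per-instance pattern using the AWS CLI (the group names match the examples above; the VPC and security group IDs are placeholders):

```shell
# One security group per instance, named after it:
aws ec2 create-security-group --vpc-id vpc-11111111 \
  --group-name mysql01 --description "mysql01 instance only"

# Allow MySQL in from the web tier's security group only:
aws ec2 authorize-security-group-ingress --group-id sg-22222222 \
  --protocol tcp --port 3306 --source-group sg-33333333
```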

Amazon Security Groups Documentation

4. Use A Jump Box to Access Your VPC
There’s likely no legitimate reason that every single instance needs SSH exposed to the Internet. If you have 10 instances, that’s 10 perimeters to worry about. A jump box is essentially a choke point that all administrative SSH access must use to get into the VPC. Once the user is on the jump box, he can then SSH into everything else within the VPC using SSH keys. The jump box is configured so that it is only able to SSH to other specific IP addresses within our VPC. All user actions are logged and only specific users have sudo rights. Furthermore, instances within the VPC are not allowed to connect to each other; all SSH access must go via the jump box. This way, if something in the DMZ is compromised, the attacker won’t be able to hop to other devices within the DMZ subnet.

Since the security of this server is so important to the security of our environment we pile on several layers of security here. First, in order to login to this instance the user must be coming from a specific IP address. Second, the user must have a valid SSH key. Third, all users must use two-factor authentication (more on this next). We also do a lot of system level hardening that I won’t go into detail on here.
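A minimal sketch of the sshd side of that hardening (the usernames are hypothetical examples; the IP restriction and two-factor pieces live in security groups and PAM, not here):

```shell
# Keys only, no root, and only named admins may log in.
cat >> /etc/ssh/sshd_config <<'EOF'
PasswordAuthentication no
PermitRootLogin no
AllowUsers alice bob
EOF
service ssh restart
```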

5. Use Two Factor Authentication on Your Jump Box
Just to be clear: passwords suck, are awful, and should never be the single thing between the Internet and critical data. Passwords of all lengths and complexities are lost, stolen, forgotten, guessed, key logged, reused, and cracked. There’s no reason that critical data should be protected with ONLY a password. Facebook, Gmail, Dropbox, and even our own ThreatSim service all support some form of two-factor authentication. Why would you not protect administrative access to your environment with at least the same effort as your Facebook account?

We use Duo Security for our two-factor solution. It’s free if you have 10 users or less and works great. One of Duo’s founders is Dug Song (of dsniff fame) who has a high degree of security street cred. For our setup, Duo has a Linux SSH module that sends me a push notification to my phone right after I successfully authenticate with my SSH key. If I approve the authentication request I press “Approve” in the Duo app and my terminal passes me to my shell. Duo is available for iOS, Android, Blackberry, etc. If you lose your laptop with your SSH keys, the attacker must have your phone in order to authenticate via SSH.

You can also use Google Authenticator with a PAM module if you want to go that route. You don’t need to buy SecurIDs for everyone. There are a lot of easy ways to do this.
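For example, on Ubuntu the Google Authenticator route looks roughly like this (the package name is the stock Debian/Ubuntu one):

```shell
apt-get install libpam-google-authenticator

# Require a TOTP code in SSH's PAM stack:
echo 'auth required pam_google_authenticator.so' >> /etc/pam.d/sshd

# Let sshd actually ask for the code:
sed -i 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' \
  /etc/ssh/sshd_config
service ssh restart

# Each user enrolls once, scanning the emitted QR code with their phone:
google-authenticator
```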

6. Restrict SSH Access To Your Jump Box With Security Groups
Lots of developer folks love to have SSH open to the entire Internet for emergency or remote support. The problem is that right at this very moment, as you are reading this, there is likely a non-public exploit for SSH being used by bad guys. By allowing the entire Internet access to your SSH port, there is nothing stopping an attacker from exploiting your machine. If you only allow very specific IP addresses in your security groups, you’d be protected. And if an attacker gets ahold of your SSH key (hacked/lost/stolen laptop, etc.), at least your instance (e.g. jump box) would be protected by an IP restriction.

Are you always on the go, or do you have an often-changing dynamic IP? Try to get a static IP from your provider. Or, update your security group every time your IP changes. Yes, it’s a bit of a pain, but not as big of a pain as having your entire EC2 environment compromised.

7. Use Outbound Security Groups
This one isn’t always easy to implement but bear with me. There is no reason why highly critical servers within your VPC should be able to initiate an FTP connection to any server on the Internet. There isn’t. For example, would you consider it acceptable for your database server containing all of your customer data to make an outbound FTP connection to a server in China? What about IP ranges in Russia where the Russian Business Network is hosted? It’s better to use egress rules and only allow specific ports and protocols out to specific hosts. Here’s why: if an attacker ever does make it all the way into your database server within the VPC, why make it easy for him to transfer a dump of your database back to his server? Yes, you will need to allow your devices out to specific IPs on the Internet for patches, updates, NTP, etc. But that’s a short list that is worth making and maintaining.
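A sketch with the AWS CLI (the group ID and NTP server IP are placeholders). Note that VPC security groups ship with an allow-all outbound rule, so lock that down first:

```shell
# Drop the default allow-everything-outbound rule:
aws ec2 revoke-security-group-egress --group-id sg-22222222 \
  --protocol all --cidr 0.0.0.0/0

# Then allow only what the box actually needs, e.g. NTP to one time server:
aws ec2 authorize-security-group-egress --group-id sg-22222222 \
  --protocol udp --port 123 --cidr 192.0.2.10/32
```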

8. Enable Two-Factor Authentication for AWS Console Access
Earlier I talked about using two-factor on SSH and a bunch of other good stuff. None of it matters if someone can just guess your Amazon password (yes, the same one you use to order a ton of tube socks with free shipping) and log in to your AWS account. Amazon supports hardware tokens as well as Google Authenticator so that if your AWS password is compromised, the bad guys would need your hardware token or phone to access your account. There is no excuse not to use two-factor on your AWS login.

9. Use a Host-Based Intrusion Detection System (HIDS)
One thing you lose when hosting your app on EC2 is visibility into the network. Amazon doesn’t give you a span port for your IDS to watch for bad things happening over the wire. That said, we still want to have visibility into anomalies and incidents within our environment. Enter OSSEC. OSSEC is a free HIDS that monitors system logs and events for entries that match known signatures. The events are sent in real-time to a central OSSEC server, which emails them to our operations personnel, who can review the reports in real-time. For example, we know when the md5 checksum of critical system binaries changes, new users are added, or when our syslog shows something strange going on (e.g. segfaults, daemons starting/stopping, hardware problems).

10. Use A Central Log Server
We tested several open source and commercial log collection solutions and ended up pressing the “Easy Button” and going with Splunk. We considered using Log Stash but it was more complex than Splunk and required us to monitor and manage several different processes (redis, elastic search, java, etc.). I won’t waste blog space extolling the virtues of Splunk, but it’s great for collecting or correlating log events across our entire infrastructure. We use it for troubleshooting, security event investigations, etc. From a security standpoint, if a machine gets compromised, you can obviously no longer trust it. At least if it was sending logs in real-time to a centralized server you can go to Splunk and figure out what is going on.
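Getting the logs off each box is the easy part: a one-line rsyslog forward is enough for Splunk (or anything else) to pick them up centrally. The collector IP below is illustrative:

```shell
# "@@" forwards over TCP; a single "@" would be UDP.
echo '*.* @@10.10.2.50:514' > /etc/rsyslog.d/50-forward.conf
service rsyslog restart
```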

11. Encrypt Sensitive Disk Volumes
This is a big one. Most organizations now use full-disk hard drive encryption on laptops. Why? Because laptops (and the hard drives they contain) are not physically in your direct control and may be stolen or seized without your knowledge since they aren’t locked in the data center down the hall. Virtual disk volumes in the cloud are arguably more difficult to protect because they are virtual – they can live in many places at the same time and be cloned within seconds. It is for this reason that every organization that stores data in the cloud should consider using disk encryption. Several solutions exist including native OS disk crypto as well as SaaS disk encryption providers.

The first thing to do is to think about what information needs to be encrypted. Sometimes it is obvious, like MySQL database files, customer documents, etc. Depending on the nature of your application, it may include things like email logs that contain email addresses, Splunk logs, HIDS logs, and application logs. All of these may contain fragments of sensitive information that require protection.

For Ubuntu, this is a great article that contains step-by-step directions on how to set up disk encryption in EC2.

One seemingly obvious mistake that many people make is storing the key in the cloud along with the encrypted data. This is like locking your door and then taping the key to the lock. The whole point is to make it so that if an attacker gets ahold of your EBS volume, they can’t mount or read the drive. Since you can’t store the key in the cloud, this means that you can’t add your encrypted volume to /etc/fstab where it would be automatically mounted at boot. Since Amazon doesn’t allow console access, you will have to mount the drive manually after boot. This obviously impacts the resilience of your application, as an unscheduled (or scheduled) reboot requires human intervention to enter the key and mount the volume.
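For the curious, the manual-mount dance looks roughly like this with stock Linux disk crypto (dm-crypt/LUKS); the device name and mount point are illustrative:

```shell
# One-time setup on the attached EBS volume:
cryptsetup luksFormat /dev/xvdf            # prompts for the passphrase
cryptsetup luksOpen /dev/xvdf securedata   # maps /dev/mapper/securedata
mkfs.ext4 /dev/mapper/securedata

# After every reboot, a human runs luksOpen again, then mounts:
mount /dev/mapper/securedata /var/lib/mysql
```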

Disk encryption and key management is complex and requires a great deal of planning that deserves its own blog post. Another issue not discussed here is more complex distributed file systems and the impact of encryption on performance.

12. Alert On Application Errors
If you are running an application that is exposed to the Internet you should be logging and alerting on application errors. One of the challenges with application security is that applications hide malice, meaning that if someone is attacking your application in a subtle way (flipping URL parameters, for example) it may not match a signature that your IDS/IPS/HIDS would catch. Or if an attacker makes your application do something out of the ordinary, the application may not generate an exception that is useful or obvious. If a user changes accountID=123 to accountID=124, is that malicious? Will the application tell you that someone is poking around? Since we’re a security company, we worked closely with our developers to build in application errors that tell us when someone is up to no good.

We use the open source Errbit that lets us know when something odd happens with our application. Errbit isn’t a security tool by design but provides valuable and actionable security information. All Errbit errors are forwarded in real-time to our operations folks who can evaluate if it was just a benign, anomalous error or something more sinister. For example, if a user attempts to access an application resource that they do not have access to, we receive notification. We have it tuned so that we only see alerts when someone intentionally tampers with our application.

13. Internal and External Vulnerability Scanning
We perform regular internal and external scanning of our infrastructure using Nessus. For the internal scanning, we provide our scanner with an SSH key for authenticated (aka credentialed) scans, which allows us to ensure that all devices have up-to-date patches and are configured properly. For external scanning, not only do we scan exposed services for vulnerabilities, but we also ensure that our security groups are configured as expected and there aren’t any surprises.

Getting More From Nikto – Part 1


Nikto, the well known web vulnerability scanner, derives its name from the movie “The Day The Earth Stood Still”.  I found that piece of trivia here. I wanted to write a series of posts about improving Nikto usage for web application and network assessments, as its use is commonplace.  If you are completely unfamiliar with Nikto, review the home page and these books for tips on basic usage. Throughout these posts, I will use Backtrack (BT5R3) as the OS distribution in the examples.  If you are using a different distribution, the majority of the information presented here should be relevant, but keep in mind there may be slight differences.  Starting with a few simple tips…

Update Nikto

Easy to do, easy to forget.  On BT5R3, Nikto 2.1.5 cannot be updated with the built-in update mechanism (UPDATE: version 2.1.5 has been officially released and now can be updated via nikto.pl -update). Alternatively, subversion can be used.

root@bt:~# cd /pentest/web/nikto/

root@bt:/pentest/web/nikto# svn update


Updated to revision 850.

Keep in mind subversion will prompt to resolve conflicts if you have modified any files (nikto.conf, for example). For Nikto versions 2.1.4 and older, use nikto.pl -update

Place Nikto in $PATH

By default, Nikto is not immediately accessible from a terminal in Backtrack. Certainly not a big deal, but this can be handy, especially when automating Nikto. Typing which nikto or which nikto.pl should validate Nikto is not in $PATH. In Backtrack, most tools are usually located within the /pentest/ subdirectory, and can easily be found with the locate command.

root@bt:~# locate nikto.pl


Create symlinks for core Nikto files.

root@bt:~# ln -s /pentest/web/nikto/nikto.pl /usr/bin/nikto.pl

root@bt:~# ln -s /pentest/web/nikto/nikto.conf /etc/nikto.conf

Modify /etc/nikto.conf to include location of critical files. Using your favorite editor, change the following lines:

# Location of Nikto
EXECDIR=/pentest/web/nikto

# Location of nikto.dtd
NIKTODTD=/pentest/web/nikto/docs/nikto.dtd

If all went well, Nikto is now in $PATH and ready to go. Bonus.

Disable Prompts

Nikto will occasionally prompt before taking certain actions, such as submitting a fingerprint to cirt.net. Forgetting to change this behavior before kicking off an automated scan of a slew of HTTP/HTTPS services could result in a serious ‘DOH!’ moment. Modify /etc/nikto.conf to disable prompting. Change the following line:

PROMPTS=no

Change The Default User Agent

Nikto’s default User-Agent string is “USERAGENT=Mozilla/5.00 (Nikto/@VERSION) (Evasions:@EVASIONS) (Test:@TESTID)”. Web Application Firewalls (WAFs) and other security devices may block Nikto scans with this User-Agent. Change it! You can determine the current browser’s User-Agent string by using a web proxy, sniffing, or using web based utilities. A gargantuan list of strings is available here, as well. Again, change the following line in /etc/nikto.conf:

USERAGENT=Mozilla/5.00 (Nikto/@VERSION) (Evasions:@EVASIONS) (Test:@TESTID)

to something like:

USERAGENT=Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/4.0; InfoPath.2; SV1; .NET CLR 2.0.50727; WOW64)

Keep in mind that running a Nikto scan is usually not a stealthy endeavor (as stated on the home page), but this action can help to bypass simple pattern matching defenses.

Use The Maxtime Switch

Nikto scans that hang are a total bummer, especially in an automated setup. Specifying a maximum timeout can help to avoid a bigger loss in the event a security device uses rate limiting, tarpitting, shunning at the application layer, etc. Using ‘nikto.pl -H’ shows all available options, where the maxtime switch can be observed (this option does not exist in versions 2.1.4 and prior). The value for the maxtime parameter is specified in seconds. The following example will timeout after 4 hours:

nikto.pl -h localhost -maxtime 14400

I suggest a value that reflects hours, but really this is unique to the environment and constraints of the assessment.

Use Alternate Log Formats

Nikto is able to log in a variety of formats (CSV, HTML, NBE [Nessus], TXT, XML, and even to Metasploit). The HTML format is interesting as it allows for quicker verification of findings, though the results can be repetitive. In my opinion, the XML format provides the greatest fidelity for parsing (even with grep) and is actually easy on the eyes as well. Here’s a snippet:

<item id="001407" osvdbid="119" osvdblink="http://osvdb.org/119" method="GET">

<description><![CDATA[/?PageServices: The remote server may allow directory listings through Web Publisher by forcing the server to show all files via 'open directory browsing'. Web Publisher should be disabled. CVE-1999-0269.]]></description>

<item id="750000" osvdbid="3268" osvdblink="http://osvdb.org/3268" method="GET">

<description><![CDATA[/?wp-cs-dump: Directory indexing found.]]></description>

Grepping ‘<description>’ or ‘<namelink>’ yields an easy way to visualize results.
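As a concrete sketch, sed can strip the tags and CDATA wrapper in one pass; the sample file below just mirrors the snippet above, using the same file name as the -o examples that follow:

```shell
# A two-finding sample of Nikto's XML output:
cat > localhost.xml <<'EOF'
<item id="001407" osvdbid="119" method="GET">
<description><![CDATA[/?PageServices: Directory listings via Web Publisher.]]></description>
</item>
<item id="750000" osvdbid="3268" method="GET">
<description><![CDATA[/?wp-cs-dump: Directory indexing found.]]></description>
</item>
EOF

# One finding description per line:
sed -n 's/.*<!\[CDATA\[\(.*\)\]\]>.*/\1/p' localhost.xml
```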

The easiest way to specify a log format is by specifying a file name with the relevant file extension for the output parameter. For example, to generate an XML logfile:

nikto.pl -h localhost -maxtime 14400 -o localhost.xml

or to generate an HTML logfile:

nikto.pl -h localhost -maxtime 14400 -o localhost.html


-Arch Grinds Rips

Yahoo sub-domain compromised – 456k passwords dumped


Rumors are circulating in a few places that a Yahoo! web property was hacked via SQL injection. Looking at the dump file, there are a few clues that it is in fact from Yahoo. This will, no doubt, cause many users headaches. Here are some statistics of interest that were culled from the dump with Pipal:

Top 10 passwords
123456 = 1667 (0.38%)
password = 780 (0.18%)
welcome = 437 (0.1%)
ninja = 333 (0.08%)
abc123 = 250 (0.06%)
123456789 = 222 (0.05%)
12345678 = 208 (0.05%)
sunshine = 205 (0.05%)
princess = 202 (0.05%)
qwerty = 172 (0.04%)

Password length (count ordered)
8 = 119135 (26.88%)
6 = 79629 (17.97%)
9 = 65964 (14.88%)
7 = 65611 (14.81%)
10 = 54762 (12.36%)
12 = 21733 (4.9%)
11 = 21224 (4.79%)
5 = 5325 (1.2%)
4 = 2749 (0.62%)
13 = 2663 (0.6%)
14 = 1502 (0.34%)
15 = 844 (0.19%)
16 = 575 (0.13%)
3 = 303 (0.07%)
17 = 267 (0.06%)
20 = 187 (0.04%)
18 = 133 (0.03%)
1 = 118 (0.03%)
19 = 99 (0.02%)
2 = 72 (0.02%)
21 = 23 (0.01%)
28 = 23 (0.01%)

Single digit on the end = 47445 (10.71%)
Two digits on the end = 73663 (16.62%)
Three digits on the end = 31106 (7.02%)

Last number
0 = 17608 (3.97%)
1 = 46705 (10.54%)
2 = 24635 (5.56%)
3 = 29233 (6.6%)
4 = 17712 (4.0%)
5 = 17413 (3.93%)
6 = 17899 (4.04%)
7 = 20403 (4.6%)
8 = 17863 (4.03%)
9 = 19922 (4.5%)

Other interesting stats:
.gov: 158
.mil: 446
gmail.com: 106,909
yahoo.com: 138,837
hotmail.com: 55,178
aol.com: 24,731
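The tables above came out of Pipal, but if you just want the headline numbers, plain coreutils get you most of the way. A sketch against a one-password-per-line dump (the six-line passwords.txt here is a stand-in for the real file):

```shell
printf '123456\npassword\n123456\nwelcome\n123456\npassword\n' > passwords.txt

# Top passwords, most frequent first (Pipal's "Top 10 passwords" table):
sort passwords.txt | uniq -c | sort -rn | head -10

# Password length distribution, count ordered:
awk '{ print length($0) }' passwords.txt | sort -n | uniq -c | sort -rn
```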

No word yet on if the passwords were hashed or sitting in the DB in plain text.

I feel like 2012 is becoming the year of the high-profile password dump. I’ve had more and more non-security people ask me how I store my passwords. First, just about every web site and service I use has a different password. Second, I am a big fan of KeePassX. It’s easy, open source (and well scrutinized), and available on any platform that I need it to be on. I also use two-factor on those sites that offer it (e.g. Google, Facebook, etc.)


Follow us: @stratumsecurity

Analyzing DNS Logs Using Splunk


Back in December of 2011 I wrote a post on the ThreatSim blog called “Fighting The Advanced Attacker: 9 Security Controls You Should Add To Your Network Right Now“. One of the controls we recommended that folks implement was to log all DNS queries and the client that requested it:

Log DNS queries and the client that requested it: It’s been said that DNS is the linchpin of the Internet. It’s arguably the most basic and under appreciated human-to-technology interface. It’s no different for malware. When you suspect that a device has been compromised on your network, it’s important to be able to see what the suspected device has been up to. The DNS logs of a compromised machine will quickly allow responders to identify other machines that may also be infected.

Telling people to log DNS requests is super easy. Logging DNS queries is pretty easy too. Understanding how to ingest, analyze, and process the data is where you need to bring in some tools. We love Splunk (as do many of our customers). Splunk is the perfect tool for the job of figuring out what your clients are up to and who they are talking to. You can also set up real-time alerts if Splunk sees a DNS lookup for a known bad domain name.
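As a tiny sketch of the plumbing: on a BIND resolver, `rndc querylog on` flips query logging, and the log line below is typical of BIND's query-log shape (the exact format varies a bit between versions; the IPs and domain are made up):

```shell
# A representative BIND query-log line:
line='client 10.10.1.23#51342: query: evil-domain.example IN A + (10.10.2.53)'

# Pull out "which client asked for which name" -- the two fields you
# ultimately search on in Splunk:
echo "$line" | awk '{ split($2, a, "#"); print a[1], $4 }'
# -> 10.10.1.23 evil-domain.example
```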

Let’s implement our recommendation in real life. Here is what we did:

Read More
