Linux Incident Investigation Handbook for Security Analysts
TC / 2024-11-16
Introduction
Incident investigation is the process of analyzing security incidents to understand what happened, how it occurred, and what was impacted. It is a critical component of cybersecurity incident response, enabling organizations to contain damage and prevent future attacks. In a Linux environment, incident investigation involves examining system logs, files, and activities to piece together the sequence of an attack. This is especially important because Linux systems power a significant portion of servers and cloud infrastructure, making them attractive targets for cybercriminals (VMRay, Chapter 4: Linux as a Primary Target for Attackers). Attackers no longer overlook Linux; instead, they have developed sophisticated malware and tactics to exploit Linux servers and devices (VMRay).
Types of Attacks on Linux: Linux systems face a wide range of cyber threats. Common attack types include:
- Malware Infections (Ransomware & Cryptominers): Ransomware has evolved to target Linux servers and cloud environments, encrypting or destroying data for extortion (VMRay). Similarly, cryptojacking malware infects Linux systems to secretly mine cryptocurrency (often Monero) by exploiting CPU resources (VMRay), which can severely degrade performance. In fact, one report found that 89% of illicit crypto-miners on Linux use XMRig libraries to hijack computing power (VMRay).
- Unauthorized Access & Exploits: Attackers frequently attempt to gain unauthorized access to Linux servers by exploiting unpatched vulnerabilities or misconfigurations. These initial-access attacks can involve brute-force SSH login attempts, exploitation of software bugs, or the use of stolen credentials. Once in, attackers may install rootkits or backdoors to maintain persistence.
- IoT Botnets and DDoS: Many Internet-of-Things devices and routers run Linux, and malware like Mirai has leveraged them to form botnets (VMRay). Compromised Linux-based IoT devices can be used to launch massive Distributed Denial of Service (DDoS) attacks or perform other nefarious activities (VMRay).
- Web Server Attacks: Linux often runs web services (Apache/Nginx, databases, etc.), making it a target for web application attacks (SQL injection, remote code execution) and the planting of web shells. Attackers may deface websites, steal data, or use the server as a pivot to other systems.
These examples illustrate why effective incident investigation is vital. By promptly investigating suspicious activity on Linux systems, security analysts can identify indicators of compromise and respond before an incident escalates into a larger breach.
Preparation
Thorough preparation is the foundation of effective incident response. Having an incident response plan in place is paramount. An Incident Response Plan (IRP) defines the process to follow during a security incident, including roles, communication paths, and procedures. This plan should be established in advance and regularly updated and tested. Trying to improvise a response during an attack is dangerous – as one expert notes, attempting to make decisions in the high-pressure environment of an incident is a “recipe for disaster” (Incident Response: How to Create a Communication Plan (w/ Template)). Instead, organizations should prepare during calm periods by defining clear steps and responsibilities.
Key preparation steps include:
- Incident Response Plan & Team: Develop a formal IRP document and designate an incident response team (e.g. a Computer Security Incident Response Team, CSIRT). The plan should outline who to contact in a crisis (including technical staff, management, legal, and communications personnel) and provide checklists for common incident types. Ensure the team knows their roles and conducts periodic tabletop exercises or drills to practice the plan.
- System Hardening: Secure Linux systems before any incident occurs. Follow best practices for hardening: apply security updates promptly, disable unnecessary services, enforce strong authentication (SSH keys or MFA for remote logins), and minimize the number of users with root privileges. Hardening reduces the attack surface, leaving fewer weaknesses for attackers to exploit.
- Time Synchronization and Backups: Ensure all servers have synchronized clocks (e.g. via NTP). Consistent timestamps are crucial for log correlation during an investigation (Linux Incident Response Guide - DFIR - Halkyn Security Blog). Maintain regular automated backups of critical data and system images, and test those backups. Ideally, keep some backups offline or offsite so they remain safe if an attacker tries to delete or encrypt data (Halkyn Security Blog).
- Logging and Monitoring: Enable detailed logging on all Linux systems and centralize those logs. Configure system logs (e.g. syslog, journald) to record authentication attempts, errors, network events, and other significant activity. Forward logs in real time to a secure central log server or Security Information and Event Management (SIEM) system (Halkyn Security Blog). This way, even if an attacker deletes local logs, the records persist elsewhere. Also consider enabling Linux auditing (auditd) to track file access and changes, and use intrusion detection systems where possible. Proper logging provides the raw data that investigators will pore over to reconstruct events.
- Baseline and Asset Inventory: Know what “normal” looks like on your Linux systems. Maintain an inventory of hosts, their OS versions, and installed software. Create baselines of normal CPU, memory, and network usage, as well as cryptographic hashes of critical system files and binaries (Halkyn Security Blog). Baselines help analysts detect anomalies during an investigation (e.g. a foreign process running, or a system file that no longer matches its known-good hash). Storing reference hashes of key files (and updating them after patches) means you can later quickly spot whether a file was altered by an attacker (Halkyn Security Blog).
- Access to Tools and Resources: Ensure in advance that your team has access to the necessary forensic tools (for log analysis, disk imaging, memory capture, etc.) and that these tools are authorized for use in your environment. Preparing a “jump kit” for incident responders – a ready laptop or set of tools that can be deployed at a moment’s notice – can save precious time during an incident.
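The baseline-hashing step above can be sketched in a few lines of shell. This is a minimal illustration: the target files and baseline path are placeholders, and in practice you would cover `/bin`, `/sbin`, and other critical directories, store the baseline off-host, and regenerate it after every patch cycle.

```shell
# Record SHA-256 hashes of critical binaries now, so an investigator can later
# prove whether they were altered. TARGET and BASELINE are illustrative.
TARGET="${TARGET:-/bin/sh /bin/ls /bin/cat}"   # unquoted below: word-split on purpose
BASELINE="${BASELINE:-$(mktemp)}"

sha256sum $TARGET > "$BASELINE"                # create the known-good baseline

# ...later, during an investigation:
if sha256sum -c --quiet "$BASELINE"; then
    echo "all monitored binaries match the baseline"
else
    echo "MISMATCH: investigate the files flagged above"
fi
```

Because `sha256sum -c` reads the same manifest format it writes, the check works unchanged on a copy of the baseline kept off-host.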
Investing in preparation significantly streamlines the investigation process when an incident occurs. As the saying goes, “if you get the preparation right, the response will work better” (Linux Incident Response Guide - DFIR - Halkyn Security Blog).
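As a concrete example of the centralized logging described above, a minimal rsyslog forwarding rule can be staged and reviewed before deployment. The collector hostname is a placeholder, and the snippet deliberately writes to a temporary path; on a real host you would install the file under `/etc/rsyslog.d/` and restart rsyslog.

```shell
# Stage an rsyslog drop-in that forwards every message to a central collector.
# "loghost.example.com" is a placeholder for your log server.
CONF="${CONF:-$(mktemp)}"
cat > "$CONF" <<'EOF'
# @@host:port forwards over TCP; a single @ would use UDP
*.* @@loghost.example.com:514
EOF
cat "$CONF"
# On the real host:
#   sudo install -m 644 "$CONF" /etc/rsyslog.d/90-forward.conf
#   sudo systemctl restart rsyslog
```

TCP (`@@`) is usually preferred over UDP so that log records are not silently dropped in transit.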
(Resources: See the Resources and Tools section for templates like an open-source incident response plan and checklists that can help in the preparation phase.)
Initial Response
When a security incident is suspected on a Linux system, the actions taken in the first moments are critical. The goal of the initial response is to verify the incident, contain immediate damage, and activate the appropriate resources without tipping off the attacker or destroying evidence. Here are the immediate steps and considerations:
- Detect and Validate the Incident: Confirm that an incident is occurring. This might start with an alert (from an IDS, SIEM, or monitoring system) or an observation (unusual system behavior or a suspicious log entry). Document who or what reported the issue and the time it was first detected. Record any initial indicators (e.g., an IP address triggering an alert, or a file identified as malware). This initial documentation is important for later analysis and chain of custody.
- Activate the Incident Response Team: As soon as an incident is confirmed (or strongly suspected), escalate to your incident response team. Follow the team activation process established in your IR plan (Incident Response: How to Create a Communication Plan (w/ Template)). This usually means notifying the security manager or SOC (Security Operations Center), who will page the on-call incident handlers. Fast team mobilization is crucial. Use predefined communication channels – for example, an emergency Slack/Teams channel or a bridge call line dedicated to the incident – to coordinate among responders.
- Establish Communication Guidelines: During an active incident, team communication should be frequent but controlled. Have an internal communications plan so that the technical team can share findings in real time (often via an internal chat or war room). Also decide early how to handle external communication: designate a point person to manage any outreach to customers, media, or regulators if needed (Incident Response: How to Create a Communication Plan (w/ Template)). All information shared externally should be vetted to avoid misinformation. Importantly, consider using out-of-band communication for sensitive discussions if you suspect the attacker might be monitoring the company’s internal network or email. For example, if a critical server is compromised, avoid using it for coordination; use phone calls or an unaffected messaging system instead.
- Secure the Environment (Light Containment): Without yet making major changes to the system, take quick actions to prevent further damage. For instance, if a Linux server is actively under attack, you might isolate it from the network (e.g., unplug the cable or disable its switch port) to stop data exfiltration or malware spread. Be careful to preserve evidence – do not reboot the system or delete anything at this stage. If the host is a virtual machine, take a snapshot if possible. Short-term containment might also include blocking suspicious IP addresses at the firewall, or disabling a compromised user account to cut off an intruder’s access.
- Stakeholder Notification: Inform essential stakeholders about the incident in a timely manner. This includes IT management and potentially other departments such as legal or compliance (especially if customer data might be involved). Executives should know in broad terms that an incident is being handled; technical details can be reserved for the response team. If required by law or policy (for example, after a breach of personal data), you may have to notify outside parties or authorities at this stage. Make these communications in line with your plan, focusing on facts and the steps being taken. Keeping stakeholders in the loop builds trust and ensures they can support the response (e.g., approving emergency resources or public communications).
Throughout the initial response, document every action taken (time, who performed it, and what was done). This action log will be invaluable later for both investigation and post-incident review (8 Steps for Data Breach Response and Investigation | Syteca). The initial response sets the stage for a thorough investigation by stabilizing the situation and assembling the right people to proceed.
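A lightweight way to keep that action log is a small shell helper that timestamps and attributes every entry. The log path and the example entries are illustrative.

```shell
# Append timestamped, attributed entries to a shared incident action log.
ACTION_LOG="${ACTION_LOG:-$(mktemp)}"

note() {
    # UTC timestamp | responder | free-text note
    printf '%s | %s | %s\n' \
        "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$(whoami)" "$*" >> "$ACTION_LOG"
}

note "Alert validated: suspicious outbound traffic from web01"
note "Isolated web01 (switch port disabled); system left powered on"
cat "$ACTION_LOG"
```

Using UTC avoids timezone confusion when the log is later correlated with system logs from multiple hosts.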
Investigation Process
Once the situation is contained enough to allow deeper analysis, the team moves into the investigation phase. This is where a security analyst will collect and examine evidence to determine the root cause, scope, and impact of the incident. A methodical approach is important to ensure nothing is overlooked and evidence integrity is maintained.
1. Evidence Collection: Begin by gathering all relevant data from the affected Linux system(s) and any connected systems. Key sources of evidence include:
- System and Security Logs: Logs are often the primary source of truth in a Linux incident investigation. Examine system logs (e.g. `/var/log/syslog` or `/var/log/messages` on many distros) for unusual entries, errors, or reboots around the incident timeframe. Check authentication logs (`/var/log/auth.log` on Ubuntu/Debian or `/var/log/secure` on Red Hat/CentOS) for failed login attempts, new user creations, or use of the `sudo` command (7 essential Linux forensics artifacts every investigator should know - Magnet Forensics). Authentication logs are essential for detecting unauthorized access attempts and escalation of privileges (Magnet Forensics). Also review application-specific logs (web server logs, database logs) for signs of malicious inputs or abnormal behavior. The command-line trio of `grep`, `awk`, and `sed` is extremely helpful here for filtering and searching through large log files. These venerable tools may seem old-fashioned, but they are “unsung heroes of the log analysis world” (Log Analysis Tools: Sed, Awk, Grep, and RegEx - TCM Security), allowing an investigator to swiftly find relevant entries (for example, using `grep` to find all occurrences of an attacker’s IP or a particular error message).
- Running Processes and System State: Capture the state of the system as-is. On the affected Linux host, list running processes (`ps aux` or `top -b -n1`) and note any unfamiliar or suspicious processes, especially ones running as root or listening on unusual network ports. Dump the process tree to see parent-child relationships (using tools like `pstree`). Check for any persistent processes (malware often tries to restart or stick in memory). Also list loaded kernel modules (`lsmod`) to spot any rogue kernel module (which could indicate a rootkit). Use `ss -pant` (or the older `netstat -tulpan`) to list open network connections and ports, identifying connections to unknown external addresses that could be command-and-control channels. If possible, also dump the logged-in users (`who` and `last`) to see if any strange user sessions are active or have occurred recently.
- Memory (RAM) Dump: Volatile memory can contain a treasure trove of evidence (running malware, decrypted keys, injected code, etc.) that won’t be found on disk. If the incident is severe enough (e.g., a sophisticated malware infection), consider capturing a memory image of the Linux system. Tools like LiME (Linux Memory Extractor) or Microsoft’s `avml` (Acquire Volatile Memory for Linux) can create a full RAM dump without shutting down the machine. Have a procedure for this in advance, as loading a kernel module to dump memory must be done carefully (and may not be feasible on a production system without downtime). Memory analysis frameworks such as Volatility can later be used to analyze the dump for malicious processes, kernel hooks, and other indicators not visible from userland.
- Disk and File System Artifacts: If you suspect files have been tampered with or malware installed, you may need to collect disk evidence. This could range from specific files (binaries, scripts, etc.) to a full disk image for forensic analysis. Key areas to inspect on Linux include user home directories (for suspicious scripts, SSH keys, etc.), system binaries (compare hashes of `/bin` and `/sbin` executables with known-good values), crontab entries (`/etc/crontab` and user crontabs in `/var/spool/cron/`, for unauthorized scheduled tasks (7 essential Linux forensics artifacts every investigator should know - Magnet Forensics)), and any world-writable directories (where an attacker might drop files). If feasible, take a forensic disk image using tools like `dd` or `dcfldd` (preferably writing to an external drive) – this preserves a point-in-time snapshot of the system for later deep analysis or legal evidence. Capturing full images can be time-consuming, however, so in a live incident you may instead copy critical artifacts first (logs, etc.) and image the disk once the situation is stable.
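The volatile items above can be collected in one pass with a short script run as root on the affected host. The evidence path is illustrative, and each command is allowed to fail individually (e.g. on a minimal system) without aborting the whole collection.

```shell
# Snapshot volatile system state into a timestamped evidence directory.
EVDIR="${EVDIR:-$(mktemp -d)/$(hostname)-$(date -u +%Y%m%dT%H%M%SZ)}"
mkdir -p "$EVDIR"

ps auxww  > "$EVDIR/processes.txt" 2>/dev/null || ps -ef > "$EVDIR/processes.txt"
ss -pant  > "$EVDIR/sockets.txt"   2>/dev/null || true   # open connections/ports
lsmod     > "$EVDIR/modules.txt"   2>/dev/null || true   # loaded kernel modules
who       > "$EVDIR/who.txt"       2>/dev/null || true   # active sessions
last -n20 > "$EVDIR/last.txt"      2>/dev/null || true   # recent logins
ls -l "$EVDIR"
```

Collect in rough order of volatility (processes and sockets first), and hash the resulting files immediately so they can enter the chain of custody.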
Throughout evidence collection, maintain the chain of custody for any data collected. This means meticulously documenting each piece of evidence (when it was collected, by whom, and where it is stored) and handling it in a way that preserves its integrity (Understanding Digital Forensics: The Importance of Chain of Custody | Infosec). For digital evidence, this often involves calculating cryptographic hashes (SHA256, for example) of files or images at the time of capture and logging those hashes. Chain of custody ensures that if the evidence needs to be used in a court or internal investigation, its authenticity can be proven – any gaps or tampering could render it inadmissible (Infosec). Treat the affected systems as crime scenes: only authorized incident handlers should collect or access the evidence, and every access should be recorded.
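The hashing step can double as a tamper-evidence check: hash every artifact into a manifest at collection time, and re-verify the manifest whenever the evidence changes hands. The directory and the stand-in artifact below are illustrative.

```shell
# Hash all collected artifacts so later custody transfers can be verified.
EVDIR="${EVDIR:-$(mktemp -d)}"
echo "stand-in for a collected log" > "$EVDIR/auth.log"   # demo artifact

# Collection metadata goes in its own file so it gets hashed too.
printf 'Collected %s by %s on %s\n' \
    "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" "$(whoami)" "$(hostname)" \
    > "$EVDIR/COLLECTION_NOTES.txt"

( cd "$EVDIR" && find . -type f ! -name 'MANIFEST.sha256' \
      -exec sha256sum {} + > MANIFEST.sha256 )

# Anyone receiving the evidence re-runs this; any change breaks the check:
( cd "$EVDIR" && sha256sum -c --quiet MANIFEST.sha256 ) && echo "custody check OK"
```

The manifest itself should be copied to a second location (or printed into the case log) so that an attacker who reaches the evidence store cannot silently regenerate it.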
2. Analysis & Timeline Reconstruction: With data in hand, begin analyzing to piece together what happened. Start building a timeline of events. For example:
- From the logs, determine the earliest signs of compromise. Identify suspicious events like a surge of failed logins followed by a successful one, a new user added to `/etc/passwd`, or an unexpected service starting. Note the timestamps.
- Correlate log events across sources. Perhaps the Apache web access log shows a strange request at 10:05, the syslog shows a kernel error at 10:06, then the auth log shows root access at 10:07. Such correlations can indicate the attacker’s path (e.g., a web exploit to gain a shell, then privilege escalation). Using tools or scripts to combine logs by timestamp can help reveal the sequence clearly.
- Analyze any malware or suspicious files found. Calculate hashes (MD5/SHA1/SHA256) of unknown binaries or scripts and check them against databases (VirusTotal, hash repositories) to see if they are known malware. Inspect scripts or executables to understand their behavior (but execute them only in a safe sandbox/lab, never on a production system!). Sometimes a simple strings search (`strings <file>`) on a binary can reveal clues (like URLs, IPs, or attacker messages).
- Review configuration files for signs of tampering. For instance, check `/etc/ssh/sshd_config` for any unauthorized changes (such as enabling root login or new authorized keys), or `/etc/sudoers` for suspicious entries that grant broad privileges. Also examine scheduled jobs (`cron`) as mentioned, since attackers often use cron jobs to re-establish access or run malware periodically (7 essential Linux forensics artifacts every investigator should know - Magnet Forensics).
- If a memory image was taken, use memory analysis tools to look for hidden processes or network connections that wouldn’t appear in normal system commands. This can uncover stealthy malware (like userland rootkits) that hides itself from `ps` or `ls`.
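When logs share a sortable timestamp format, even a one-line sort produces a usable cross-source timeline. The toy sketch below mirrors the 10:05–10:07 sequence described above; the sample entries and paths are fabricated for illustration, and real syslog timestamps usually need normalizing to ISO-8601 first.

```shell
# Merge several logs into one timeline, each line tagged with its source file.
LOGDIR="${LOGDIR:-$(mktemp -d)}"
printf '%s\n' '2024-11-16T10:05:41 GET /upload.php?cmd=id from 203.0.113.45' \
    > "$LOGDIR/web.log"
printf '%s\n' '2024-11-16T10:07:02 session opened for user root' \
    > "$LOGDIR/auth.log"
printf '%s\n' '2024-11-16T10:06:15 kernel: segfault in httpd worker' \
    > "$LOGDIR/sys.log"

# grep -H prefixes each line with "filename:"; turn that colon into a space,
# then sort on the timestamp field (field 2).
grep -H . "$LOGDIR"/*.log | sed 's/:/ /' | sort -k2 > "$LOGDIR/timeline.txt"
cat "$LOGDIR/timeline.txt"
```

Because ISO-8601 timestamps sort correctly as plain text, no date parsing is needed once the formats are normalized.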
The aim of the analysis is to answer key questions: How did the attacker get in? What actions did they perform? Which systems or data were accessed? And how long have they been in the environment? The concept of Indicators of Compromise (IOCs) becomes important here. IOCs are the forensic clues that indicate an intrusion, such as a malicious file hash, an IP address that the attacker used, a domain name for command-and-control, or specific patterns in logs (Indicators of Compromise (IoCs): An Introductory Guide | Splunk). During analysis, maintain a list of IOCs as you discover them (e.g., the IP `203.0.113.45` appears in logs as the attacker’s source, or the filename `/tmp/.x23` was a malware dropper). These IOCs will be useful for searching across other systems to see if the attack spread, and for post-incident detection improvements.
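A simple discipline that pays off: keep the growing IOC list in a plain text file and sweep logs for it with `grep -F -f`. The sketch below reuses the indicators mentioned above; the file paths and the sample log lines are illustrative.

```shell
# Track IOCs in a file; -F treats them as fixed strings, -f reads them in bulk.
IOC_FILE="${IOC_FILE:-$(mktemp)}"
printf '%s\n' '203.0.113.45' '/tmp/.x23' > "$IOC_FILE"

LOG="${LOG:-$(mktemp)}"   # stand-in for /var/log/auth.log etc.
printf '%s\n' \
  'Nov 16 10:07:02 web01 sshd[912]: Accepted password for root from 203.0.113.45' \
  'Nov 16 10:09:44 web01 cron[99]: (root) CMD (/tmp/.x23)' > "$LOG"

grep -F -f "$IOC_FILE" "$LOG"    # every log line matching any known IOC
```

Fixed-string matching (`-F`) matters here: IOCs like `/tmp/.x23` contain characters that would otherwise be interpreted as regex metacharacters.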
3. Scope and Impact Determination: As the investigation progresses, continually assess the scope of the incident. Determine exactly which hosts were compromised. For example, if one web server was breached, check if the attacker pivoted to other servers (look at connectivity and credentials). Identify what data, if any, was accessed or exfiltrated. If database servers or file shares were involved, was sensitive personal or financial data at risk? This assessment is important for deciding notification obligations and for focusing remediation. Also evaluate the impact – did the incident cause downtime, data loss, or just a minor nuisance? Impact will drive the urgency of recovery and the communication to stakeholders.
Throughout the investigation process, remain systematic and thorough. It’s useful to use a checklist to ensure you cover all areas (accounts, network, malware, logs, etc.). Many organizations utilize forensic analysis frameworks or case management tools to track findings. Document interim results: create an investigation log or timeline document as you go, not only at the end, so that insights are recorded. Remember that investigation is often iterative – you might find a lead in logs that sends you back to collect another piece of evidence, and that’s fine. Just be sure to update your evidence logs and maintain custody practices when doing so.
By the end of the investigation phase, you should have a coherent narrative of the incident: For example: “Attacker exploited vulnerability X on Server1 at time Y, installed malware Z, used stolen credentials to move to Server2, and exfiltrated database records via outbound connection on port 443.” This narrative will inform the next steps of threat analysis and eradication.
Threat Analysis
Threat analysis takes the findings from the investigation and puts them into context with respect to known threats and attacker tactics. In this phase, security analysts work to identify the specific malware, tools, or adversaries involved, and leverage threat intelligence to enrich their understanding of the incident.
Identifying Indicators of Compromise (IOCs): During the investigation you will have gathered IOCs such as file hashes, malicious IPs or domains, process names, filenames, or specific strings from malware. Now, analysts should systematically identify and catalog these IOCs. For each IOC, consider what it indicates – e.g., an MD5 file hash identifies a specific malware sample, an IP address could indicate a command-and-control server, a domain might be used for phishing or C2, and so on. IOCs are essentially the evidence that a security incident occurred (IOC Security: The Role Of Indicators Of Compromise In Threat Detection | Wiz). They can answer questions like “What evidence supports that this system was compromised?” and “Which artifacts show the attacker’s activity?”
Consulting Threat Intelligence Databases: With a list of IOCs, the next step is to consult threat intelligence resources to see if these indicators are already known from other incidents or research. This can greatly speed up understanding the nature of the attack. Some steps to take:
- VirusTotal and Malware Repositories: Upload or look up the cryptographic hash of suspicious files or binaries on VirusTotal (a public malware database). If the file (hash) is known, you can get reports from multiple antivirus engines naming the malware family. For example, a file hash might immediately identify the malware as an “XorDDoS Trojan” or “Mirai variant” if it matches known samples. VirusTotal may also show whether the sample communicated with certain URLs or IPs, which gives more IOCs to examine.
- Threat Intelligence Platforms: Use platforms like AlienVault OTX (Open Threat Exchange), IBM X-Force Exchange, or VirusTotal Intelligence to search for IP addresses or domain names seen in the incident. If the attacker’s C2 server IP is known, threat intel feeds might reveal that the IP is associated with a known botnet or APT (Advanced Persistent Threat) group. For example, you might discover “IP 203.0.113.45 is a known hostile address linked to SSH brute-force campaigns”. Similarly, checking an email address or domain could show it is part of a phishing campaign. These platforms often provide context such as when the indicator was first seen and in what types of attacks.
- Threat Intelligence Sharing Communities: If you have access to industry sharing groups (like an ISAC for your sector) or open-source threat feeds, search your IOCs there. Communities often publish IOC feeds and threat reports. For instance, the open-source MISP (Malware Information Sharing Platform) allows organizations to share IOCs and threat information with peers. By cross-referencing, you might find that the same malware hash was found in another incident last month, and a detailed analysis report is available.
- Leverage the MITRE ATT&CK Framework: Based on the behaviors observed, map the attack to known tactics and techniques in the MITRE ATT&CK framework. For example, if the attacker used a cron job for persistence and executed a base64-encoded payload, these correspond to specific ATT&CK techniques (T1053 for scheduled tasks, T1140 for deobfuscation). MITRE ATT&CK is a knowledge base of adversary tactics and techniques; by mapping your incident to it, you can often anticipate what the attacker might do next or which groups have used those techniques historically. Some threat intel platforms will even suggest likely threat actor groups based on a pattern of techniques.
- Identify the Attack Vector and Vulnerabilities: Threat analysis should confirm how the attacker initially breached the Linux system. Was it through an unpatched software vulnerability? If so, identify the CVE or vulnerability exploited (e.g., an out-of-date Apache Struts with a known CVE). If it was via stolen credentials, figure out how those might have been obtained (phishing? a password reused from another breach?). Understanding the initial attack vector is crucial for eradication and for preventing recurrence.
- Assess Attacker Motives and Skills: By analyzing the tools and methods used, you can infer whether this was a targeted attack by a skilled adversary (e.g., custom malware, complex maneuvers) or an opportunistic attack (e.g., an automated script or commodity ransomware). For example, finding a sophisticated rootkit suggests a higher level of adversary than a simple crypto-miner script. This assessment helps in prioritizing response and also in post-incident reporting to management (they will want to know whether it was a random incident or part of a deliberate campaign).
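Hash lookups are easy to script. The sketch below computes a SHA-256 and shows the general shape of a VirusTotal v3 file lookup; the sample path and the exported `VT_API_KEY` variable are assumptions for illustration, the HTTP request is left commented out, and the suspicious file is only ever hashed, never executed.

```shell
# Compute the hash of a suspicious file and prepare a VirusTotal lookup.
SAMPLE="${SAMPLE:-/tmp/suspicious.bin}"
# Stand-in file so the sketch runs outside an incident:
[ -f "$SAMPLE" ] || { SAMPLE=$(mktemp); echo "demo payload" > "$SAMPLE"; }

HASH=$(sha256sum "$SAMPLE" | awk '{print $1}')
echo "SHA-256: $HASH"

# With an API key exported as VT_API_KEY, the lookup would look like:
# curl -s -H "x-apikey: $VT_API_KEY" \
#      "https://www.virustotal.com/api/v3/files/$HASH"
```

Looking up a hash (rather than uploading the file) avoids disclosing a potentially sensitive sample to a third-party service.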
Threat analysis is where incident response meets intelligence. If your organization has a threat intelligence team or subscription, collaborate with them. They might provide further context such as “This looks like the work of Threat Group X, which typically also installs a keylogger – check for that”. Additionally, they can advise on whether this incident is likely an isolated one or part of a broader targeting of your industry.
Using IOCs for Threat Hunting: The gathered IOCs should also be used to threat hunt across other systems to see if the attack has spread. For instance, search enterprise-wide logs for that malicious IP or the hash of the malware on other hosts. This can reveal if other machines were compromised by the same attacker (maybe earlier or later). Many organizations plug IOCs into their SIEM or EDR (Endpoint Detection & Response) systems to detect any matches across endpoints. This proactive hunting can contain an incident that might have otherwise gone unnoticed on a second system.
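Without a SIEM or EDR, a crude but effective sweep loops over known hosts via SSH and greps their logs for an indicator. The hostnames, log paths, and indicator below are illustrative; `BatchMode` keeps the loop from hanging on password prompts, and unreachable hosts are reported rather than aborting the sweep.

```shell
# Sweep a host list for a known-bad IP in common log files.
HOSTS="${HOSTS:-web01 web02 db01}"
BAD_IP="${BAD_IP:-203.0.113.45}"

hunt() {
    for h in $HOSTS; do
        echo "== $h =="
        ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" \
            "grep -l '$BAD_IP' /var/log/auth.log /var/log/syslog 2>/dev/null" \
            2>/dev/null || echo "   no match (or host unreachable)"
    done
}
hunt
```

`grep -l` prints only the names of matching files, which keeps the sweep output short enough to triage quickly across many hosts.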
In summary, threat analysis adds intelligence to the raw findings of the investigation. It confirms the nature of the threat (e.g., identifying the malware as a known Linux rootkit or ransomware variant) and informs the response strategy (e.g., if it’s known ransomware, decryption might not be possible, so focus on restore; if it’s an APT actor, expect stealth and data theft). By understanding the adversary and tools, security analysts can make better decisions in containment and eradication, and also feed the information back into improving defenses (like updating firewall rules or enriching detection rules with these IOCs).
Containment, Eradication, and Recovery
After identifying what exactly happened and the scope of compromise, the next priority is to contain the threat, eliminate it from affected systems, and restore operations. This phase corresponds to the Containment, Eradication, and Recovery stages of incident response as defined by NIST (Incident Response Lifecycle: Stages and Best Practices | Atlassian). Each part is critical:
Containment: The goal of containment is to stop the incident from causing further harm. In the initial response, you likely performed short-term containment (like isolating a server). Now, implement more permanent containment measures as needed. Some steps:
- Isolate Affected Hosts: For any Linux servers or devices confirmed to be compromised, keep them isolated from the production network until they are cleaned. This might involve keeping them off the network entirely or placing them on a quarantine VLAN. If an intruder still has an active session, isolation will cut off their access.
- Network Containment: Update network access controls to block malicious IP addresses or domains identified during threat analysis. For example, if the attacker’s C2 server IP is known, add firewall rules or intrusion prevention system (IPS) signatures to block any communication between your network and that IP range. Similarly, if malware was using a certain port or protocol (like IRC on a non-standard port), contain it by blocking that channel.
- Account and Credential Containment: Disable or change passwords for any user accounts that were compromised or created by the attacker. On Linux, this could mean locking a rogue user’s account in `/etc/passwd` or expiring its password. If root or a sudo-capable account was compromised, that is critical: disable remote root login (if not already disabled), and consider rebuilding credentials, since you can no longer fully trust that account. Also invalidate any exposed credentials: for instance, if AWS API keys or database passwords stored on the server might have been stolen, roll them.
- Temporary Service Shutdown (if needed): In some cases, you might temporarily shut down certain services to contain damage. For example, if a web server is constantly being exploited via a particular application, you might take that application offline until a patch is applied. Containment is about buying time and preventing escalation.
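The network and account containment steps can be collected into one reviewable script. The sketch below defaults to a dry run that only prints the commands; the C2 address and username are illustrative, and on the real host you would set `RUN=""` and execute as root (using `nft` instead of `iptables` on nftables-only systems).

```shell
# Containment actions behind a dry-run switch so they can be peer-reviewed.
RUN="${RUN:-echo DRY-RUN:}"        # set RUN="" to actually execute (as root)
C2_IP="${C2_IP:-203.0.113.45}"
ROGUE_USER="${ROGUE_USER:-svc_backup2}"   # hypothetical attacker-created account

contain() {
    $RUN iptables -I INPUT  -s "$C2_IP" -j DROP   # block inbound from the C2
    $RUN iptables -I OUTPUT -d "$C2_IP" -j DROP   # block call-backs to it
    $RUN usermod --lock "$ROGUE_USER"             # lock the rogue account
    $RUN pkill -KILL -u "$ROGUE_USER"             # terminate its live sessions
}
contain
```

The dry-run default makes the script safe to circulate in the incident channel for a second pair of eyes before anything is changed on the host.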
Crucially, containment should be done in a way that preserves forensic evidence. For example, rather than immediately powering off a system (which could destroy volatile evidence), it’s better to isolate network-wise and leave the system on for analysis or imaging. If you must shut it down (e.g., ransomware is spreading encryption fast), document that decision and the reason.
Eradication: Once contained, you can work on eradicating the malicious artifacts and vulnerabilities from your Linux systems. Eradication involves removing the attackers' footholds and addressing the root cause:
- Remove Malware and Artifacts: Delete or quarantine the malicious files identified (binaries, scripts, etc.). Remove any backdoors the attacker installed – this could include removing unauthorized users or SSH keys from `~/.ssh/authorized_keys`, deleting crontab entries that were added by the attacker, unloading malicious kernel modules, and killing any malicious processes still running. Be thorough: if there was a rootkit, ensure all its components are removed (or prepare to rebuild the system entirely, as rootkits are notoriously hard to clean fully).
- Patch Vulnerabilities: If the investigation determined that a specific vulnerability was exploited (e.g., an out-of-date Apache or a weak configuration), immediately apply the necessary patches or fixes. For instance, update the Linux kernel or software package to a non-vulnerable version, or adjust the configuration (disable the vulnerable module, change a setting). This prevents the attacker (or others) from using the same hole again. It’s common to discover during incident response that some systems missed patches; closing those gaps is a key part of eradication.
-
Improve Security Controls: Sometimes eradication means adding new security controls. For example, if the attack succeeded due to weak authentication, enforce SSH key-based auth or two-factor authentication going forward. If it was due to lack of monitoring, consider deploying an endpoint agent. Essentially, fix what allowed the incident to occur. One should also scan the system for any additional malicious code or tools the attacker might have left (using AV scanners, rootkit detectors like rkhunter or chkrootkit for Linux, or IoC-specific scanning with YARA rules).
-
Coordinate with Users/Admins: If the incident involved user accounts or data, work with system owners. For example, if an internal user’s workstation (even if Linux) was infected and then used to pivot, ensure that user’s machine is cleaned and their credentials reset. Often eradication in enterprise involves multiple teams (server admins, desktop support, cloud ops) to execute all remediations.
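The artifact-removal steps above can be sketched with ordinary shell tools. The snippet below works on copies in a scratch directory so it is safe to run anywhere; the `evil@attacker` key comment and the `/tmp/.hidden` path are hypothetical indicators standing in for the real IOCs your investigation would produce.

```shell
#!/bin/sh
# Work on copies in a scratch directory so the sketch is safe to run anywhere.
WORK=$(mktemp -d)

# Sample authorized_keys: one legitimate key, one planted by the attacker.
cat > "$WORK/authorized_keys" <<'EOF'
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...legit admin@host
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...rogue evil@attacker
EOF

# Sample crontab with an attacker persistence entry (hypothetical path).
cat > "$WORK/crontab" <<'EOF'
0 3 * * * /usr/local/bin/backup.sh
*/5 * * * * /tmp/.hidden/update.sh
EOF

# Eradication: drop the lines matching the attacker indicators.
grep -v 'evil@attacker'  "$WORK/authorized_keys" > "$WORK/authorized_keys.clean"
grep -v '/tmp/\.hidden/' "$WORK/crontab"         > "$WORK/crontab.clean"

echo "Removed $(( $(wc -l < "$WORK/authorized_keys") - $(wc -l < "$WORK/authorized_keys.clean") )) rogue key(s)"
```

On a real system you would edit the live files (after preserving evidence copies), then reload cron and verify no other persistence mechanism (systemd units, rc scripts, shell profiles) re-adds the entries.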
In some cases, the surest way to eradicate an advanced threat is to rebuild the system from scratch (or from a known clean backup). For instance, if a production Linux server was deeply compromised with root-level access by an attacker, many organizations choose to wipe and reinstall the OS or restore from a clean image backup, rather than trust that they could manually find and remove every implant. This approach ensures a clean start, though it requires good backups and downtime for the system. If you do rebuild, preserve a forensic image of the compromised system before wiping it, in case the evidence is needed later.
Recovery: After eradication, you transition to getting operations back to normal. Recovery means restoring systems to working condition and confirming they are secure:
-
System Restoration: Bring systems back online carefully. If you took systems offline or isolated them, reconnect them to the network after they are cleaned and fully patched. In the case of rebuilding, restore data from backups (after scanning that backup data to ensure it’s not re-infecting the system). Monitor the system closely upon reintroduction to the network for any sign of the attacker reappearing or any abnormal behavior.
-
Validate and Test: Before declaring victory, test the system to ensure all traces of the incident are gone. This might include running security scans or vulnerability scanners on the system to ensure the vulnerability is closed. Check logins and processes to make sure no unauthorized processes start up on reboot. Ensure that any compromised accounts have new credentials and that users can use the system normally again. Essentially, verify that the system is in a known good state.
-
Recovery of Operations: Resume business processes that were halted. For example, if a database was shut down to stop data theft, bring it back and ensure applications can connect. Decrypt or restore any data that was encrypted or corrupted during the incident (if ransomware encrypted files and they were restored from backup, verify integrity). If customers or users were affected by downtime, coordinate with IT and communications to bring services back and notify users that things are operational.
-
Increased Monitoring: Even after systems are restored, remain on high alert for a period of time. Often, incidents can repeat if something was missed. Implement heightened logging or use of an intrusion detection system to watch the recovered systems. Many teams will schedule a follow-up scan or review a week or so after recovery to catch anything they might have missed.
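The "validate and test" step above can be made concrete with a simple integrity check: record cryptographic hashes of critical files while the system is known-good, then verify them during recovery. The snippet simulates this in a scratch directory; the `sshd_config` path is illustrative, and in practice you would baseline the binaries and configuration files that matter in your environment.

```shell
#!/bin/sh
# Simulate a known-good baseline and a post-recovery verification.
WORK=$(mktemp -d)
mkdir -p "$WORK/etc"
printf 'PermitRootLogin no\n' > "$WORK/etc/sshd_config"

# 1. Baseline: hash critical files while the system is trusted.
( cd "$WORK" && sha256sum etc/sshd_config > baseline.sha256 )

# 2. After recovery: verify nothing has silently changed.
( cd "$WORK" && sha256sum -c baseline.sha256 )
```

Storing the baseline file off-host (it is useless as evidence if the attacker can rewrite it) turns this into a cheap, scriptable recovery gate; package-manager verification (`rpm -Va`, `debsums`) serves a similar purpose where available.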
During recovery, it’s also important to address any compliance or notification actions if relevant. For instance, if personal data was breached, by this point the organization’s legal/compliance arm might need to notify affected individuals or regulators. The recovery phase is a good time to gather final evidence needed for those reports (e.g., confirming what data was accessed).
Finally, make sure to communicate to stakeholders when the incident is resolved and systems are secure. Management and potentially customers will want to know that the threat has been eradicated and normal operations restored. Provide a high-level summary if appropriate, focusing on the reassurance that things are under control.
Summary of this phase: Containment stops the bleeding, eradication removes the infection, and recovery heals the patient. In a Linux context, that could mean isolating a hacked server, removing the rootkit it had, rebuilding it from a clean install, and bringing it back to production with stronger defenses. All the while, ensure documentation of actions is maintained – which flows into the final step: the post-incident review.
Post-Incident Review
With the incident contained and systems back to normal, one of the most important tasks is to reflect on and learn from the incident. The post-incident review (often called a post-mortem or lessons learned session) is a chance to analyze the response process and improve for the future. It’s been noted that learning and improving is a part of incident response that is too often omitted (NIST SP 800-61: 4.1. Lessons Learned | Saylor Academy | Saylor Academy), yet it’s critical for strengthening security in the long run.
Key activities in the post-incident phase include:
-
Hold a Lessons Learned Meeting: Gather the incident response team and relevant stakeholders soon after the incident (ideally within a few days of closure) to discuss what happened and how the response went (NIST SP 800-61: 4.1. Lessons Learned | Saylor Academy | Saylor Academy). Include anyone who played a role (analysts, system owners, management, maybe third-party partners if they were involved). Walk through the incident timeline from detection to recovery. Identify what went well and what didn’t. Some questions to answer in this meeting: “Exactly what happened and when?”, “How effective were our detection and response actions?”, “Did our team follow the procedures, and were those procedures adequate?”, “What obstacles did we encounter?” (NIST SP 800-61: 4.1. Lessons Learned | Saylor Academy | Saylor Academy). This honest discussion should aim to find opportunities for improvement. For example, maybe the team found that communication was confused at first – so a clearer communication plan is needed. Or perhaps log data was incomplete, delaying analysis – so logging configuration needs enhancement.
-
Document Findings and Actions: The outcome of the lessons learned discussion should be documented in a post-incident report. This report typically includes a summary of the incident (cause, timeline of events, impact), how it was discovered, how it was contained/eradicated, and the remediation steps taken. Importantly, it should list the lessons learned and concrete recommendations for future prevention (NIST SP 800-61: 4.1. Lessons Learned | Saylor Academy | Saylor Academy). For example, a finding might be “System X was compromised due to missing patch Y; going forward we will implement an automated patch management on that class of servers.” Document any new IOCs or attacker TTPs discovered so they can be watched for in the future.
-
Update Incident Response Plan and Playbooks: Incorporate the lessons learned into your incident response plan and procedures. If the incident revealed a gap in the process (e.g., lack of clarity on who to contact at 2 AM, or no playbook for a Linux malware outbreak), update or create those sections in the IR plan (NIST SP 800-61: 4.1. Lessons Learned | Saylor Academy | Saylor Academy). Ensure that any checklists or runbooks are revised to include steps that might have been missed. The next time a similar incident happens, the responders should have better guidance. Also, if new tools were found necessary or newly acquired during the incident, update the resource inventory in the plan.
-
Enhance Preventative Measures: Use the insights to improve overall security posture. This could mean investing in new security tools, improving monitoring, or providing additional training to system administrators or developers. For example, if the investigation was hampered by lack of visibility, you might decide to deploy an endpoint detection agent on Linux servers for the future, or enable more verbose logging. If social engineering was a factor, initiate a security awareness refresher for staff.
-
Share Insights (Internal and External): Internally, share a summary of the incident and lessons learned with relevant teams. It’s important that others in IT and security learn from the experience. For instance, the desktop support team might learn to recognize the malware signs that were found, or the server team might learn the importance of those backups that saved the day. If appropriate and allowed, share sanitized IOCs or findings with the wider security community (this contributes to collective defense). Many organizations anonymously contribute what they learned to industry groups or threat intel feeds – for example, sharing new malware signatures to an open source repository so that others can detect it.
-
Celebrate Successes and Recovery: It’s also important in a post-incident review to acknowledge what was done well. Incident response can be stressful for the team. Highlight if quick thinking saved the database, or if a particular analyst’s detection prevented a larger breach. This not only boosts morale but also confirms which processes are effective. It reinforces good practices to continue.
The output of the post-incident phase is an actionable improvement plan. Each action item (e.g., “Implement centralized logging for all Linux servers” or “Develop an incident playbook for ransomware on Linux”) should be assigned to an owner and tracked to completion after the incident. This ensures that the organization actually benefits from the hard lessons learned, rather than shelving the report and forgetting about it.
In essence, the incident should leave the organization stronger and more prepared. Over time, by consistently doing post-incident reviews, an organization can significantly improve its security maturity. Incident response is a cycle – the lessons feed back into better Preparation for the next time (NIST Incident Response: 4-Step Life Cycle, Templates and Tips). Each incident, even if painful, is an opportunity to harden systems, refine processes, and educate staff. As NIST emphasizes, “each incident response team should evolve to reflect new threats, improved technology, and lessons learned.” (NIST SP 800-61: 4.1. Lessons Learned | Saylor Academy | Saylor Academy) By embracing this continuous improvement mindset, security analysts can ensure that the next incident (and there will always be a next one) is handled more efficiently and effectively.
Resources and Tools
Incident responders have a wealth of tools and resources at their disposal to aid in Linux investigations. Below is a curated list of recommended tools, frameworks, and references to help security analysts in each phase of incident response. Many of these are open-source and freely available, suitable for a distribution-agnostic approach.
Incident Response Plan Templates and Checklists:
-
Incident Response Plan Template (GitHub): An open-source incident response plan template is available from the Counteractive team (counteractive/incident-response-plan-template - GitHub). It provides a concise structure that you can customize for your organization, ensuring you don’t miss critical components (like team contacts, communication plans, and incident handling procedures). Similarly, organizations like NIST and SANS provide checklists – for example, the SANS Institute and U.S. federal agencies have incident handling checklists (see the Incident Handling Checklist published on HHS.gov for a public example) – that can be adapted to your Linux environments.
-
Security Frameworks: Refer to well-known security frameworks for guidance. The NIST Computer Security Incident Handling Guide (SP 800-61) is an excellent official resource that outlines incident response steps and best practices (Incident Response Lifecycle: Stages and Best Practices | Atlassian). It is not Linux-specific but offers universally applicable guidelines. Additionally, the SANS Institute publishes a lot of free reading material and posters (like the “Linux Incident Response and Threat Hunting Poster” (LINUX Incident Response and Threat Hunting Poster | SANS Institute)) that highlight key Linux forensic artifacts and commands.
Logging and Monitoring Tools:
-
Centralized Logging Systems: Tools like Elastic Stack (ELK) – Elasticsearch, Logstash, Kibana – or Graylog can aggregate logs from multiple Linux systems, making searching and correlation easier during investigations. They often have web interfaces to query logs (e.g., find all occurrences of “Failed password” across servers). SIEM solutions (Security Information and Event Management), such as Splunk or open-source OSSIM, can provide real-time alerting on suspicious log patterns and integrate threat intelligence feeds for IOC matching.
-
Syslog and Audit Tools: Utilize Linux’s built-in logging daemons (`rsyslog`/`syslog-ng`) to forward logs. auditd (the Linux Audit daemon) can be configured to record system calls and file-access events, which are invaluable for deep investigations (e.g., tracking which process edited a critical file). Auditd rules can log events like changes to `/etc/passwd` or usage of the `execve` system call by unusual programs. Ensure these are tuned to avoid excessive noise while still capturing security-relevant events.
-
Real-time Monitoring/EDR: Consider deploying endpoint monitoring tools on Linux servers. Some modern Endpoint Detection and Response (EDR) products (CrowdStrike Falcon, Microsoft Defender for Endpoint, etc.) have Linux agents that record process executions, file modifications, and network connections over time, presenting them in a timeline – extremely useful during analysis. Open-source alternatives include OSSEC/Wazuh (which is a host intrusion detection system that can log and alert on events) and Osquery, a tool that lets you query system state (processes, network connections, etc.) in SQL form – handy for threat hunting and incident triage.
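The “Failed password” hunting mentioned above needs nothing more than grep and awk. The sketch below runs against synthetic log lines (the hostnames and IPs are documentation placeholders); on a real system you would point it at `/var/log/auth.log` or `/var/log/secure`, or at your aggregated logs.

```shell
#!/bin/sh
# Build a small synthetic auth log so the example is self-contained.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Nov 16 02:11:01 web01 sshd[412]: Failed password for root from 198.51.100.7 port 51812 ssh2
Nov 16 02:11:04 web01 sshd[412]: Failed password for root from 198.51.100.7 port 51814 ssh2
Nov 16 02:12:30 web01 sshd[498]: Failed password for invalid user admin from 203.0.113.9 port 40022 ssh2
Nov 16 02:13:00 web01 sshd[512]: Accepted password for deploy from 192.0.2.50 port 53110 ssh2
EOF

# Count failed SSH logins per source IP (the IP follows the word "from").
grep 'Failed password' "$LOG" \
  | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1)}' \
  | sort | uniq -c | sort -rn
```

The awk field-scan is deliberately position-independent, since “Failed password for root” and “Failed password for invalid user admin” put the IP in different columns.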
Forensic Collection and Analysis Tools:
-
Live Response Tools: A number of scripts and tools can automate evidence collection from a live Linux system. For example, Libpcap/tcpdump can be used to capture network traffic if needed during an incident (to see live malicious communication). Tools like LinPEAS (Linux Privilege Escalation Awesome Script) are often used by attackers to enumerate systems, but defenders can also use them to quickly gather system info to spot anomalies. However, be cautious running automated scripts on production systems during an incident; ensure they perform read-only data gathering.
-
Memory Capture & Analysis: As mentioned, LiME is an open-source tool for live memory capture on Linux. For analysis, Volatility Framework (Volatility 3 supports Linux memory analysis) or Rekall can be used to find malicious processes or network sockets in a memory image. These require profiles/memory symbols for the specific kernel version – Volatility has community-contributed Linux profiles.
-
Disk Forensics: For a deep dive into disk images, tools from the Sleuth Kit (like `fls`, `icat`, etc.) and the Autopsy forensic suite can be used to examine file system metadata, recover deleted files, and timeline file events. These are more often used after an incident, on an image, rather than during the live incident (due to time and complexity). If doing forensic disk analysis, mount images read-only (`mount -o ro`) or use dedicated forensic workstations (like the SANS SIFT Workstation, an Ubuntu-based distro pre-loaded with forensics tools, or CAINE Linux). These specialized Linux distros can be very handy, as they contain a collection of open-source DFIR tools for memory, disk, and network analysis.
-
Rootkit Detectors: If you suspect kernel-level compromises, tools like Chkrootkit and RKHunter can scan a Linux system for known rootkits and anomalies (like hidden processes or altered binaries). While not foolproof against advanced threats, they can catch common rootkits or suspicious signs.
-
Timeline Generation: Plaso (log2timeline) is a tool that can create a super-timeline from various log sources and file system timestamps, which can then be analyzed in Timesketch (an open-source timeline analysis tool). This is more of a heavy-weight forensic approach, but can be useful if you need to reconstruct complex sequences from many sources.
Threat Intelligence & Analysis:
-
Threat Intelligence Platforms: Engage with platforms like AlienVault OTX (free to use community threat intel) to search IOCs, or subscribe to feeds relevant to Linux threats (for example, threat feeds for SSH brute-force IPs, or Linux malware hashes). The MITRE ATT&CK knowledge base (especially the ATT&CK for Enterprise matrix focusing on Linux) is a great reference to understand techniques. There are also community projects mapping Linux-specific threats to ATT&CK that can guide what evidence to look for.
-
Threat Intelligence Resources: The GitHub repository “awesome-incident-response” (GitHub - meirwah/awesome-incident-response: A curated list of tools for incident response) is a highly curated list of incident response tools and resources, including sections on threat intelligence and knowledge bases. It lists feeds and services where you can look up known bad IPs/Domains and find reports on malware. Another community resource is VirusShare (for sharing malware samples/hashes among researchers) if you need to compare samples.
-
Malware Analysis Sandboxes: If you have a suspicious Linux binary and want to analyze its behavior, you can use sandbox services or set up your own analysis VM. Cuckoo Sandbox can be configured for Linux malware analysis (though it’s more commonly used for Windows). Online services like Any.Run support some Linux analysis, or you can manually run the sample in an isolated VM with monitoring tools like `strace`, `ltrace`, or custom sandbox scripts to record what it does.
Collaboration and Case Management:
-
Incident Management Platforms: For managing the investigation and evidence, tools such as TheHive (an open-source incident response platform) can track cases, IOCs, and analyst notes in a structured manner. It integrates with analyzers like Cortex for automated threat intel lookups. Using a platform like this or even a ticketing system tagged for incidents ensures information is centralized.
-
Version Control for Notes/Timelines: Keep your analysis notes in a collaborative document or a wiki. Some teams use Markdown in a git repository or an internal wiki to document incident timelines so multiple analysts can contribute. Tools like Cherrytree (note-taking) or even just shared documents can be helpful to avoid siloing information during the analysis.
Scripts and Cheat Sheets:
-
Linux IR Cheat Sheet: There are community-driven cheat sheets listing useful Linux incident response commands. For instance, the repository Linux-Incident-Response on GitHub provides a comprehensive cheatsheet of commands for live forensics on Linux (GitHub - vm32/Linux-Incident-Response: practical toolkit for cybersecurity and IT professionals. It features a detailed Linux cheatsheet for incident response). It’s designed for quick reference to commands for user activity, processes, network, etc., during an incident. Having such a cheat sheet on hand can speed up live response (so you don’t have to recall syntax under pressure).
-
Automation Scripts: Some open-source scripts like `LinEnum` can enumerate system info quickly (be cautious using automation on live incidents). Also, consider writing your own bash or Python scripts to collect the known important information from Linux systems in one go (for example, one that dumps `who; w; last; ps; netstat; iptables` output and relevant logs into a report).
In summary, being familiar with these tools before an incident is key. It’s wise for analysts to maintain a toolkit (both a physical one, like a USB with bootable forensic OS and binaries, and a mental one of knowing which tools to reach for). Open-source resources can cover most needs, from evidence collection to analysis to threat intel. Equally important is staying updated: new Linux malware and attack techniques emerge, and so do new tools and updates. Regularly consult community resources like the DFIR community blogs, attend workshops, or practice in labs to keep skills sharp.
(Resources for further learning: The References section below lists official documentation and articles on Linux security and incident response for deeper reading. Additionally, consider participating in DFIR communities or forums where investigators share insights on recent Linux threats.)
Conclusion
Linux incident investigation is a challenging but essential task for security analysts. As Linux systems continue to be integral to enterprise infrastructure, cloud services, and IoT devices, they will remain in the crosshairs of attackers. A successful incident investigation in Linux environments requires a blend of technical expertise, methodical process, and proactive planning.
In this handbook, we covered a full lifecycle: from preparation (hardening systems and having an IR plan) to initial response (quick, decisive actions and communication), through deep-dive investigation (collecting logs, using powerful command-line tools, preserving evidence), into threat analysis (identifying IOCs and leveraging threat intelligence), and finally containment, eradication, and recovery to eliminate the threat and restore services. We also emphasized the often-overlooked but crucial phase of post-incident review, where lessons learned translate into improved defenses.
A few parting thoughts for security analysts and teams:
-
Preparation and Practice Pay Off: The organizations that weather incidents most effectively are those that invested in preparation – they knew who to call, what procedures to follow, and had monitoring in place to catch issues early. Regular drills and updates to the IR plan ensure that when a real incident hits, the team can respond swiftly and calmly. As the adage goes, “Hope for the best, but plan for the worst.”
-
Stay Curious and Skeptical: During investigations, cultivate a forensic mindset. Question assumptions (for example, don’t assume an odd log entry is benign without verification) and follow the evidence. Small clues can unravel major findings. At the same time, be careful about false positives; corroborate evidence before declaring a breach. This balance of curiosity and healthy skepticism will serve you well.
-
Use the Community and Keep Learning: The cybersecurity field is constantly evolving. New Linux vulnerabilities, attack tools, and defense techniques emerge all the time. Engage with the security community – read up on recent Linux incidents, share knowledge with peers, and contribute back when you can. For instance, if you encounter a new strain of malware, consider anonymizing and sharing the IOCs or analysis so others benefit. Continuously improve your skillset by learning from each incident and staying current with security trends (follow reputable blogs, attend DFIR webinars, etc.).
-
Build Security Into Systems (Proactive Security): Incident response is reactive by nature, but its insights should feed into proactive measures. Over time, aim to reduce the number of incidents by addressing root causes: whether that’s better hardening, user training, or more robust detection mechanisms. Encourage a culture of security where developers, system admins, and employees are aware of best practices and common attack vectors. It’s far cheaper to prevent an incident than to handle one.
In conclusion, handling a Linux security incident is never “easy,” but with a structured approach and the right knowledge, it is absolutely manageable. By following the guidelines in this handbook, security analysts can approach Linux incidents with confidence – knowing how to systematically investigate and respond, while minimizing damage and learning to bolster defenses for the future. Each incident, handled well, makes the organization stronger and the analysts more experienced. In the ever-evolving battle of cybersecurity, such continuous improvement and resilience are the true markers of success.
Stay vigilant, document everything, and never stop honing your craft. The time and effort put into careful incident investigation not only resolves the issue at hand but also fortifies your Linux systems against the next wave of threats.
References
-
NIST SP 800-61 – Computer Security Incident Handling Guide (Rev. 2; superseded by Rev. 3, April 2025). The official NIST guide detailing the incident response life cycle (Preparation; Detection & Analysis; Containment, Eradication & Recovery; Post-Incident Activity) and best practices for handling incidents. (National Institute of Standards and Technology)
-
Infosec Institute – Chain of Custody in Digital Forensics. Article explaining the chain of custody concept for digital evidence and why maintaining proper documentation and integrity of evidence is critical (Understanding Digital Forensics: The Importance of Chain of Custody | Infosec).
-
VMRay – Linux as a Primary Target for Attackers. Blog chapter discussing how Linux threats like ransomware, cryptojacking, IoT botnets, etc. have grown, with statistics on cryptojacking prevalence (Chapter 4: Linux as a Primary Target for Attackers Explore the Linux attack types - VMRay).
-
Halkyn Security Blog – Linux Incident Response Guide. A practical blog outlining Linux IR considerations, including preparation tips (time synchronization, logging to SIEM, baselining) (Linux Incident Response Guide - DFIR - Halkyn Security Blog) and a recommended workflow for responding to a Linux incident.
-
TechTarget (Mike Chapple) – Incident response: How to implement a communication plan. Tips on communication best practices during incidents, emphasizing planning communications in advance and coordinating with stakeholders to avoid chaos (Incident Response: How to Create a Communication Plan (w/ Template)).
-
TCM Security – Invaluable Log Analysis Tools: Sed, Awk, Grep. Explains the use of traditional command-line tools for parsing logs, praising `grep`, `awk`, and `sed` as powerful allies for analysts working through large log files (Log Analysis Tools: Sed, Awk, Grep, and RegEx - TCM Security).
-
Magnet Forensics Blog – 7 Essential Linux Forensics Artifacts. Overview of important Linux artifacts (bash history, syslog, auth logs, sudo logs, cron jobs, SSH configs, etc.) that investigators should examine on compromised Linux machines (7 essential Linux forensics artifacts every investigator should know - Magnet Forensics).
-
Wiz.io – IOC Security: The role of indicators of compromise. Article on understanding IOCs and using them in threat detection, comparing Indicators of Compromise vs. Indicators of Attack, and categorizing types of IOCs (IOC Security: The Role Of Indicators Of Compromise In Threat Detection | Wiz) (IOC Security: The Role Of Indicators Of Compromise In Threat Detection | Wiz).
-
GitHub – vm32/Linux-Incident-Response. An open-source repository containing a Linux IR cheatsheet and a script for live forensics. Designed to help responders quickly reference Linux commands/procedures during incidents (GitHub - vm32/Linux-Incident-Response: practical toolkit for cybersecurity and IT professionals. It features a detailed Linux cheatsheet for incident response).
-
GitHub – meirwah/awesome-incident-response. A curated list of incident response tools and resources (GitHub - meirwah/awesome-incident-response: A curated list of tools for incident response), including sections on disk imaging, evidence collection, log analysis, memory forensics, threat intel, etc. Excellent starting point to discover new IR tools.
-
CrowdStrike – Linux Logging Best Practices. Guide (2023) on configuring and managing Linux logs effectively for security monitoring. Useful for improving logging setup as part of incident preparedness.
-
AlienVault OTX – Open Threat Exchange. An open threat intelligence platform where analysts can find and share IOCs (IP addresses, file hashes, domains) associated with known threats. Useful for enriching investigation findings with global data.
-
TheHive Project – TheHive & Cortex. Open-source incident response platform (TheHive) and analyzer engines (Cortex) that help manage incident cases and automate artifact analysis (e.g., querying VirusTotal for a hash). Can greatly assist in case management and threat intel lookups.
-
SANS Institute Resources – Digital forensics and incident response whitepapers, and the SIFT Workstation (free Linux distro pre-loaded with forensic tools). SANS DFIR community provides many practical resources specific to Linux investigations (for example, SANS Poster for Linux IR, training course FOR577 for Linux Incident Response, etc.).
-
MITRE ATT&CK – Enterprise Matrix for Linux. A matrix of tactics, techniques, and procedures that adversaries use against Linux systems. Mappings can help analysts ensure they are checking for known techniques (persistence methods, privilege escalation, etc.) and can guide what evidence to look for during investigations.