Episode 23 — Interact With Attackers Safely: Communication Boundaries and Decision Triggers
When people imagine incident response, they often picture a team hunting an attacker in a system, but they rarely imagine that the attacker might try to talk back. In real incidents, communication with an attacker can happen directly, like a ransom note or a threatening email, or indirectly, like a message embedded in a hacked webpage or a signal sent through a compromised account. For beginners, the surprising part is that this interaction is not only a technical problem; it is a human problem with serious risks, because every message you send can give the attacker information, shape their behavior, and change the legal and safety situation. Interacting safely starts with one guiding idea: your goal is to protect people, protect evidence, and protect decision options, not to win a debate or satisfy curiosity. That means you need boundaries that prevent impulsive replies and decision triggers that tell you when communication is necessary, who should do it, and what should never be said. Learning these boundaries early is valuable because stress and fear make people reactive, and attacker messages are designed to exploit that.
A key concept is that communication with attackers is part of the incident threat surface, meaning it can create new vulnerabilities even if your technical defenses are strong. Attackers often use communication to gather intelligence, confirm what you know, test whether they still have access, or pressure you into actions that help them. If you reply from the wrong account, you might reveal which accounts are still trusted or monitored. If you ask detailed questions, you might reveal your investigation gaps and give them time to cover tracks. If you share internal status, like saying you are restoring backups, you might push them to sabotage recovery. Even silence can matter, because attackers may escalate if they think they are being ignored, but unsafe conversation is usually worse than no conversation. This is why safe interaction begins by treating attacker communication as a controlled operation, not as an ad hoc exchange between whoever saw the message and whoever feels like responding.
Before you even consider responding, you need to recognize the different forms attacker communication can take, because not all of it is obvious. A classic ransom note might be clear, but many attacker messages are disguised as normal business communication, such as a fake support ticket, a vendor email, or a message that appears to come from a colleague. Sometimes the attacker tries to pull you into a conversation outside your controlled channels, like asking you to move to a different messaging platform, or asking you to use a personal email address. Sometimes the attacker uses urgency and intimidation, such as a countdown timer or threats of public release, to force quick decisions. Sometimes they offer proof, such as a sample of data, which creates both emotional pressure and handling risk. The safe mindset is to assume that any interaction is designed to manipulate you, and that every instruction they provide could be a trap, including links, contact methods, and demands for specific behaviors.
Communication boundaries start with a simple rule that prevents many mistakes: do not engage directly from uncontrolled channels or personal accounts. If a staff member receives a threatening message, the safest first action is to preserve the message and report it through the incident process, not to reply. This boundary matters because the first reply often becomes the beginning of a negotiation pattern, and attackers will exploit whoever seems most emotional, most helpful, or least prepared. Another boundary is to avoid clicking, opening, or executing anything provided by the attacker, including links to chat portals or file-sharing sites, because those can deliver malware, track your activity, or reveal your location and environment details. A third boundary is to avoid discussing internal systems, internal names, internal processes, or evidence status with anyone outside the small set of people authorized to handle attacker interaction. For a beginner, the important point is that the safest default is to not respond until the organization makes a deliberate decision to do so.
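To make those default boundaries concrete, here is a minimal sketch in Python of what an explicit "allowed actions" checklist for first responders might look like. The names and entries are illustrative assumptions, not a standard tool or required implementation; a real organization would encode its own policy.

```python
# Illustrative sketch only: names and entries are assumptions, not a standard tool.
# Encodes the default boundaries described above as an explicit checklist that a
# first responder can consult before acting on an attacker message.

DEFAULT_BOUNDARIES = {
    "preserve_and_report_message": True,    # the one action that is always expected
    "reply_from_personal_account": False,   # never engage from uncontrolled channels
    "reply_before_authorization": False,    # no response until leadership decides
    "open_attacker_links_or_files": False,  # links and files can deliver malware or track you
    "discuss_internal_systems": False,      # no internal names, processes, or evidence status
}

def is_action_allowed(action: str) -> bool:
    """Unknown or unlisted actions default to 'not allowed': escalate instead of improvising."""
    return DEFAULT_BOUNDARIES.get(action, False)

if __name__ == "__main__":
    for action in ("preserve_and_report_message",
                   "open_attacker_links_or_files",
                   "move_to_attacker_chat_portal"):
        verdict = "allowed" if is_action_allowed(action) else "blocked - escalate"
        print(f"{action}: {verdict}")
```

The design choice worth noticing is the default: anything not explicitly allowed is blocked, which mirrors the rule that the safest response is no response until a deliberate decision is made.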
Once boundaries are set, decision triggers tell you when and why communication might happen at all. One trigger is safety, such as threats of violence or threats that suggest immediate harm to people, which can require involving law enforcement or security teams quickly. Another trigger is operational necessity, such as when the attacker’s actions are causing ongoing harm and communication might buy time for containment or recovery, though this must be weighed carefully. A third trigger is legal or regulatory, where guidance may require specific handling and documentation of any attacker contact. A fourth trigger is strategic, such as when communication can help gather intelligence about the attacker’s claims, but only under controlled conditions. Importantly, a ransom demand alone is not automatically a trigger to engage, because engagement can increase attacker leverage. The decision to communicate should be a leadership decision informed by technical, legal, and risk perspectives, not a spontaneous reaction by the first person who sees a message.
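As a thought experiment, those triggers can be written down as an explicit check so that "should we engage at all?" becomes a recorded decision rather than a reflex. The field names and structure below are assumptions made for illustration; the final call still belongs to leadership informed by legal, technical, and risk advisors.

```python
# Illustrative sketch only: field names and structure are assumptions, not policy.
# Turns the decision triggers above into an explicit, reviewable check so that
# "should we engage at all?" is a recorded decision rather than a reflex.

from dataclasses import dataclass

@dataclass
class IncidentContext:
    threat_to_people: bool                 # safety trigger
    ongoing_operational_harm: bool         # operational-necessity trigger
    legal_guidance_requires_contact: bool  # legal or regulatory trigger
    controlled_intel_opportunity: bool     # strategic trigger, only under controlled conditions
    ransom_demand_received: bool           # noted for the record, but NOT a trigger on its own

def engagement_recommended(ctx: IncidentContext) -> tuple[bool, list[str]]:
    """Return (recommend_engagement, reasons); leadership still makes the final call."""
    reasons = []
    if ctx.threat_to_people:
        reasons.append("safety: involve law enforcement or security teams quickly")
    if ctx.ongoing_operational_harm:
        reasons.append("operational: engagement may buy time for containment or recovery")
    if ctx.legal_guidance_requires_contact:
        reasons.append("legal/regulatory: contact must follow required handling and documentation")
    if ctx.controlled_intel_opportunity:
        reasons.append("strategic: controlled intelligence gathering about the attacker's claims")
    # A ransom demand alone never adds a reason: it is pressure, not a trigger.
    return (len(reasons) > 0, reasons)
```

Written this way, the reasons list doubles as documentation of why the organization chose to engage or not, which matters later when the decision is reviewed.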
If communication is authorized, the next safety principle is role control, meaning a designated communicator handles all attacker interaction. This is often someone trained in crisis communication or negotiation, supported by legal and technical advisors, because the communicator must be calm, consistent, and disciplined. The designated communicator should not be the person who is doing the hands-on technical work, because responders are overloaded and may accidentally reveal details or be manipulated by time pressure. The communicator should also avoid improvisation, because attacker conversations tend to evolve, and improvisation leads to inconsistent statements. Role control also includes limiting who can see the conversation, because broad visibility increases leakage risk and encourages backseat negotiation. When only a small group is involved, it is easier to keep language consistent and to maintain a careful record of what was said and why.
Safe interaction also requires a careful approach to identity and verification, because attackers often pretend to be someone else, and sometimes multiple parties are involved. If an attacker claims to represent a known group, that claim may be false, and responding as if it is true can lead you into a trap or a scam layered on top of the incident. Attackers may also try to prove identity by sharing stolen data samples, but even those samples can be misleading or dangerous to handle. A safer approach is to treat attacker identity as untrusted until verified through evidence, while still acknowledging that a threat exists. The organization also needs to manage its own identity exposure, such as making sure the communicator’s contact method is controlled and does not reveal personal information. Even small details like email headers, language patterns, and timing can matter, but the main point is that you do not grant credibility by default. You create a controlled channel and you keep the conversation within your boundaries.
Another major safety issue is the risk of leaking sensitive incident data through your own words, which can happen even if you think you are being careful. For example, saying “we have contained the incident” can reveal that the attacker’s access is being challenged, which may trigger destructive behavior. Saying “we are restoring from backups” can reveal your recovery strategy and invite sabotage. Asking which systems were affected can reveal that you do not know, which may prompt the attacker to exaggerate claims or target additional systems. Even the tone of your message can reveal your urgency level, which attackers use as leverage. Safe communication uses minimal disclosure, focusing on controlling the conversation rather than sharing internal state. If you need to ask questions, you do so in a way that does not reveal what you know or do not know, and you avoid confirming details that the attacker might be testing.
Evidence preservation is also tightly connected to attacker interaction, because every message can become part of the evidence trail. A beginner might think evidence is only logs and files, but communications are evidence too, including timestamps, message content, contact methods, and any attachments. Preserving this evidence means capturing it accurately and keeping it in a controlled location, because the content may contain sensitive data or malicious links. It also means documenting who saw it, who handled it, and what actions were taken, because later questions may ask whether the organization responded appropriately. Safe interaction avoids altering evidence, such as forwarding attacker emails in ways that strip headers or copying text into uncontrolled documents. Instead, you maintain a careful chain of custody mindset even for communication artifacts. The discipline of evidence preservation helps you later when you need to reconstruct what happened and when you need to defend decisions.
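Here is a minimal sketch of what "preserve, don't forward" could look like in practice: store the raw message bytes unaltered with headers intact, hash them so you can later show the copy was not changed, and append a simple custody entry. The function name, paths, and fields below are hypothetical, and real evidence handling should follow your organization's forensic procedures.

```python
# Illustrative sketch only: function name, paths, and fields are hypothetical.
# Preserves an attacker email as raw bytes (headers intact), hashes it so the copy
# can later be shown to be unaltered, and appends a simple custody record.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve_message(raw_message: bytes, evidence_dir: Path, handled_by: str) -> dict:
    """Store the unaltered message and return a custody entry for the incident log."""
    evidence_dir.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(raw_message).hexdigest()
    msg_path = evidence_dir / f"attacker_msg_{digest[:12]}.eml"
    msg_path.write_bytes(raw_message)  # keep headers and body exactly as received

    record = {
        "artifact": str(msg_path),
        "sha256": digest,
        "recorded_utc": datetime.now(timezone.utc).isoformat(),
        "handled_by": handled_by,
        "actions_taken": ["preserved original", "reported via incident process"],
    }
    with (evidence_dir / "custody_log.jsonl").open("a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record
```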
A common misconception is that engaging with attackers is mainly about paying or not paying, but safe interaction is broader than that single decision. Even if an organization never intends to pay, the way it communicates can affect the attacker’s behavior and can affect the organization’s reputation and legal posture. Another misconception is that attackers always tell the truth in their messages, when in fact their claims are often strategic and sometimes completely false. They may exaggerate what they stole, claim they have backups they do not have, or threaten actions they cannot execute, because fear is a weapon. A third misconception is that technical teams can manage attacker communication casually, like responding to a spam email, when in reality this is a high-stakes interaction. Safe interaction treats attacker messages as a pressure tactic and focuses on reducing uncertainty through evidence, not through trusting claims. For beginners, this is the moment to recognize that the attacker’s words are not a report; they are a tool.
Decision triggers should also include an exit strategy, meaning you decide in advance what would cause you to stop communicating or change approach. If the attacker starts demanding unsafe actions, like running certain software or moving to an uncontrolled platform, that is a trigger to refuse and reassess. If communication is causing harm, like encouraging the attacker to accelerate destruction, that is a trigger to pause. If new evidence shows that the attacker is still active in your environment, that may be a trigger to change what you say and what you avoid saying. If the attacker threatens immediate harm to people, that may be a trigger to involve external support and shift the response posture. An exit strategy prevents you from sliding into a reactive pattern where each attacker message forces another reply. It also protects the communicator from emotional manipulation, because they have predetermined boundaries and can follow them without improvising under stress.
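One way to keep an exit strategy from evaporating under pressure is to write the predetermined conditions down before engagement starts. The sketch below is an illustrative assumption about how that mapping might be recorded, not a negotiation playbook; the condition names and responses are placeholders.

```python
# Illustrative sketch only: condition names and responses are assumptions.
# Writing exit conditions down before engagement starts keeps the communicator
# from having to improvise under pressure.

EXIT_CONDITIONS = {
    "attacker_demands_unsafe_action": "refuse and reassess",  # e.g. run software, switch platforms
    "engagement_accelerating_harm": "pause communication",
    "attacker_still_active_in_environment": "revise what is said and what is withheld",
    "threat_of_immediate_harm_to_people": "involve external support and shift response posture",
}

def exit_actions(observed: set[str]) -> list[str]:
    """Map observed conditions to the responses that were agreed in advance."""
    return [EXIT_CONDITIONS[c] for c in observed if c in EXIT_CONDITIONS]
```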
Finally, safe interaction with attackers must be integrated into the broader incident decision-making process, because communication is not separate from containment, investigation, and recovery. If the organization decides to engage, that decision should be coordinated with technical actions so you do not accidentally reveal what you are doing or create conflicting signals. If leaders are making decisions based on attacker claims, those claims should be weighed against evidence and treated as untrusted until verified. If staff are hearing rumors about negotiation, they should receive controlled internal updates so that fear and speculation do not spread. Safe attacker interaction is really about maintaining control of your own behavior when someone else is trying to take control away from you. When you set clear communication boundaries and you use decision triggers to guide when and how engagement happens, you reduce risk, preserve options, and keep the incident response focused on evidence and protection rather than on pressure and panic.