
Dead Infrastructure Hijacking (DIH) is the exploitation of residual trust relationships that survive after digital infrastructure has been decommissioned, migrated, or abandoned. When an organization retires a service, domain, cloud storage bucket, or SaaS integration, the endpoints frequently remain trusted by live systems long after the service has gone. An attacker who reclaims one of those endpoints inherits that trust without exploiting a single vulnerability, bypassing any authentication control, or generating any intrusion alert.
This report documents three confirmed attack variants: subdomain takeover via unclaimed SaaS CNAME records, expired domain registration, and cloud storage bucket squatting. Each operates through a distinct exploitation mode (passive data collection, elicitation of credentials or session cookies, or active supply chain compromise), with different attacker capability requirements, impact ceilings, and defensive responses. Treating them as interchangeable produces a remediation framework that addresses the wrong scenarios.
Two DNS security controls are routinely and incorrectly credited with protecting against this threat. CAA records do not protect against SaaS CNAME subdomain takeover because the SaaS platform provisions TLS certificates automatically, bypassing any certificate issuance controls the organization has configured. DNSSEC does not protect against any DIH variant because DIH does not involve the modification of DNS records in transit; the attacker claims the resource a DNS record correctly points to, and DNSSEC signs that misdirection faithfully.
The most significant structural mitigation introduced to date is AWS’s Account Regional Namespace feature, launched March 12, 2026, which eliminates the bucket squatting attack surface for new S3 infrastructure. It does not retroactively protect existing buckets, Terraform support is pending, and adoption is opt-in unless enforced through Service Control Policies.
The governance gap sustaining DIH is not technical. Every tool needed to find this exposure exists and is freely available. The gap is organizational; the decommissioning process assigns responsibility for retiring a service without assigning responsibility for retiring the trust relationships that the service maintained. Until that accountability is explicitly built into decommission workflows, organizations will continue generating new DIH exposure at the same rate they deprecate infrastructure.
Remediation priorities, in order: cloud storage pipeline validation and namespace adoption; wildcard certificate enumeration preceding DNS audit; CT historical discovery feeding the DNS toolchain; real-time CT alerting; domain lifecycle protocol with explicit release conditions; cross-functional decommissioning protocol with named ownership; and cookie scoping remediation addressing both subdomain-specific scoping and the SameSite=Strict regression safety net.
The premise of Dead Infrastructure Hijacking is deceptively simple: digital infrastructure leaves behind trusted endpoints when it is decommissioned, and those endpoints can be reclaimed. It is not a new attack class, and it does not require sophisticated tooling, nation-state resources, or novel tradecraft. What it requires is patience, observation, and the willingness to look at what organizations have stopped looking at themselves.
What makes DIH analytically difficult is not its complexity but its breadth. It manifests across DNS, cloud storage, SaaS integrations, webhook configurations, mobile application backends, and CI/CD pipelines. It escapes security operations because it generates no anomalous behavior; the traffic to a hijacked endpoint looks identical to the legitimate traffic that preceded the takeover. It escapes penetration testing because decommissioned assets are not in scope. It escapes asset management because removing an asset from inventory simultaneously removes visibility into its remaining trust relationships. It escapes development and operations accountability because each team believes the other handled the cleanup.
This report documents DIH with the precision the threat demands. The TLS analysis distinguishes between SaaS CNAME takeover, where the platform auto-provisions certificates, and expired domain registration, where the attacker must independently satisfy certificate controls. The exploitation mode framework separates passive collection, credential harvesting, cookie scope exploitation, and active payload delivery into distinct categories with distinct defenses. The DNSSEC misconception is addressed directly and completely. The remediation section names specific tools, specifies their differentiated capabilities, sequences them in the correct order, and assigns cross-functional ownership to each checklist item.
Modern digital infrastructure does not retire cleanly. When services are deprecated, migrated, or shut down, the trust relationships those services maintained with live systems frequently outlast the services themselves by months or years. DNS CNAME records continue pointing to decommissioned platforms. Hardcoded API endpoints in mobile application binaries continue firing requests to backend addresses migrated eighteen months ago. Cloud storage references in CI/CD pipelines continue resolving bucket names deleted during a cost-optimization exercise. Webhook configurations continue delivering internal event data to destinations that exist only in the memory of a team member who has since left.
Dead Infrastructure Hijacking is the exploitation of these residual trust relationships. The attacker identifies an endpoint that a live system still trusts, reclaims it (by registering an expired domain, recreating a deleted cloud storage bucket, claiming an unclaimed CDN distribution, or registering a new account on a SaaS platform whose CNAME record was never cleaned up), and positions themselves to receive traffic the victim believes is going somewhere safe.
No intrusion is required. No vulnerability is exploited. No authentication is bypassed. The attacker simply owns an address that someone else’s system is calling.
Every DIH scenario operates in one of three modes. The mode determines attacker capability requirements, impact ceiling, and appropriate defensive priority.
Passive Mode
Requires only that the attacker reclaim an endpoint and receive inbound traffic without serving any response. The value lies entirely in what arrives: authentication tokens embedded in API headers, internal event records in webhook payloads, session identifiers in routine check-in calls, and diagnostic telemetry containing system configuration data.
A critical distinction applies to webhook contexts. Legacy telemetry endpoints and older webhook integrations frequently predate widespread HTTPS enforcement; these are fully accessible in passive mode with no TLS challenge. Modern SaaS-to-SaaS webhook integrations use HTTPS, but the TLS constraint profile depends on which DIH variant applies.
Determining which sub-case applies to a lapsed webhook requires historical DNS data: specifically, whether the domain previously had a CNAME to a SaaS platform or resolved directly. PassiveTotal’s passive DNS capability provides the historical resolution data needed for this determination.
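A minimal sketch of this determination logic, assuming historical records have already been exported from a passive DNS source; the record shape, function name, and SaaS suffix list are illustrative assumptions, not a PassiveTotal API:

```python
# Sketch: classify which DIH sub-case applies to a lapsed webhook domain,
# given historical resolution records exported from a passive DNS source.
# The suffix list is a small illustrative sample, not an exhaustive catalog.

SAAS_CNAME_SUFFIXES = (
    ".github.io",       # GitHub Pages
    ".cloudfront.net",  # Amazon CloudFront
    ".herokudns.com",   # Heroku custom domains
)

def classify_webhook_variant(records):
    """records: iterable of (rrtype, value) tuples from historical DNS.

    Returns "saas-cname" if the domain ever pointed at a known SaaS
    platform via CNAME, "direct" if it only ever resolved directly,
    and "unknown" when there is no usable history. (Simplification:
    CNAMEs to unrecognized targets fall into "direct".)
    """
    saw_any = False
    for rrtype, value in records:
        saw_any = True
        if rrtype.upper() == "CNAME" and value.rstrip(".").endswith(SAAS_CNAME_SUFFIXES):
            return "saas-cname"
    return "direct" if saw_any else "unknown"
```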
Probability of silent exploitation: highest of the three modes in legacy HTTP contexts. Impact ceiling is bounded by what the hijacked endpoint receives.
Elicitation Mode
Requires the attacker to serve content that causes the victim to voluntarily deliver sensitive material. This mode covers two mechanistically distinct scenarios with different defensive requirements.
Credential harvesting: The attacker serves a redirect to a login-page clone, an authentication error that prompts credential re-entry, or a session expiry notice soliciting re-authentication. An affirmative credentialing action by the victim is required. The defensive response is FIDO2 and passkey adoption — phishing-resistant authentication that breaks this model regardless of what a hijacked subdomain serves.
Cookie scope exploitation: If session cookies are scoped to the parent domain (domain=.example.com) rather than to the specific subdomain (domain=docs.example.com), a hijacked subdomain under that domain may receive those cookies in HTTP requests from the victim’s browser. Chrome and Firefox apply SameSite=Lax by default when no explicit SameSite attribute is set. SameSite=Lax primarily functions as a CSRF defense: it blocks cross-site POST requests but does not block cross-site top-level GET navigations. An attacker-crafted link inducing a victim to visit the hijacked subdomain triggers exactly such a navigation, and Lax cookies with parent-domain scoping are sent. This is the primary delivery mechanism for cookie scope exploitation, and Lax does not block it. SameSite=Strict eliminates cookie transmission on cross-site navigations, including cross-site GET, but it does not prevent transmission when the victim navigates directly or follows a link from within the same registrable domain, which is why subdomain-specific cookie scoping remains the primary and more complete control.
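The two scoping choices can be sketched with Python's standard http.cookies module; the cookie name and domains are placeholders:

```python
from http.cookies import SimpleCookie

# Risky: parent-domain scoping sends the cookie to every subdomain,
# including a hijacked one.
risky = SimpleCookie()
risky["session"] = "abc123"
risky["session"]["domain"] = ".example.com"
risky["session"]["secure"] = True
risky["session"]["httponly"] = True

# Safer: scope the cookie to the exact subdomain (or omit the Domain
# attribute entirely for a host-only cookie), with SameSite=Strict as
# the regression safety net.
safer = SimpleCookie()
safer["session"] = "abc123"
safer["session"]["domain"] = "docs.example.com"
safer["session"]["samesite"] = "Strict"
safer["session"]["secure"] = True
safer["session"]["httponly"] = True

print(risky["session"].OutputString())
print(safer["session"].OutputString())
```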
Active Mode
This requires the attacker to serve malicious responses that the victim system will accept and act upon: a malicious update payload, poisoned API responses, or a malicious build artifact delivered through a CI/CD pipeline. The supply chain pipeline variant via reclaimed cloud storage occupies a distinct position: S3 bucket content is not TLS-protected in the same way HTTPS API responses are, making this active variant more technically accessible. Impact ceiling: potentially catastrophic arbitrary code execution on every system downstream of the pipeline, including customer environments.
DIH encompasses three distinct attack variants and one related technique. Each has different mechanics, TLS constraint profiles, and defensive requirements.
Variant 1 — Subdomain Takeover via CNAME to Unclaimed SaaS
An organization creates a CNAME record pointing a subdomain at a third-party SaaS platform, terminates the relationship, but leaves the DNS record. The subdomain becomes claimable by whoever creates a new account on that platform and registers the same custom domain. Exploitability is platform-dependent; some platforms implement ownership verification, others do not. Each CNAME record must be assessed against the specific platform.
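The platform-dependent assessment can be sketched as an HTTP-fingerprint check of the kind takeover tools automate; the fingerprint strings below are illustrative examples drawn from community takeover research and must be validated against each platform's current behavior:

```python
# Sketch of the HTTP-fingerprint check that tools like dnsReaper and the
# Nuclei takeover templates automate: a dangling CNAME is suspicious when
# the platform it points at returns its "unclaimed custom domain" page.

UNCLAIMED_FINGERPRINTS = {
    "github.io": "There isn't a GitHub Pages site here",
    "herokuapp.com": "no-such-app",
    "fastly.net": "Fastly error: unknown domain",
}

def looks_unclaimed(cname_target, response_body):
    """Return True when the response body matches the unclaimed-domain
    fingerprint for the platform the CNAME points at; unknown platforms
    return False and require manual assessment."""
    target = cname_target.rstrip(".").lower()
    for suffix, fingerprint in UNCLAIMED_FINGERPRINTS.items():
        if target.endswith(suffix):
            return fingerprint in response_body
    return False
```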
Primary exploitation mode is Elicitation when applied to browser-facing subdomains, operating across both credential harvesting and cookie scope exploitation sub-modes. When applied to webhook destinations, the exploitation mode shifts to Passive. The attacker receives inbound webhook payloads without serving any response, and the SaaS platform auto-provisions the TLS certificate, making the sending platform’s validation succeed automatically. TLS constraint is minimal in both modes.
Variant 2 — Expired Domain Registration
The attacker registers a lapsed domain that live systems still reference. Impact is entirely a function of residual trust at expiry. A lapsed telemetry endpoint enables passive collection; a lapsed update server enables active supply chain compromise; a lapsed marketing microsite with no live dependencies is worthless. Primary exploitation mode varies by domain function: Passive for telemetry and data endpoints, Active for updates and software delivery endpoints.
Most preventable variant: automated domain renewal eliminates the exposure window at negligible cost. The gap between a high-value domain becoming available and a sophisticated attacker registering it can be measured in minutes.
Variant 3 — Cloud Bucket Squatting
The attacker creates a cloud storage bucket with the same name as a deleted one, intercepting references that persist in codebases, pipelines, and configuration files.
Primary exploitation mode is Active: the attacker delivers malicious content to systems fetching build artifacts, software updates, or configuration files. TLS constraint is largely inapplicable to S3 bucket content, making this active variant more technically accessible than other active DIH forms. Impact ceiling: supply chain compromise reaching every system downstream of the pipeline, including customer environments.
Related Technique — Abandoned C2 Infrastructure Sinkholing
This technique is documented separately because it shares surface-level characteristics with DIH and appears in the same analytical literature but is not a DIH variant in the same operational sense. Practitioners building a DIH risk assessment should scope Variants 1 through 3 as their primary attack surface. This section informs incident investigation methodology rather than attack surface management.
When threat actors abandon their malware infrastructure and C2 domains lapse, those domains can be registered by any party who then inherits callback traffic from systems still running the original implant. The victims were already compromised before the domain lapsed. What changes upon re-registration is who receives their beaconing, not whether a compromise exists. In the hands of defenders, this is a well-established sinkholing technique. In the hands of malicious actors, it represents inherited access to already-compromised systems, requiring new implant deployment to exploit further.
Incident investigation implication: organizations identifying unexpected outbound DNS queries to recently re-registered domains should investigate for prior compromise by the original infrastructure’s operator, not treat the re-registration itself as the breach event.
Where CAA Records Do Not Protect
Certificate Authority Authorization records restrict which CAs may issue certificates for a given domain. Prior guidance has presented CAA records as a control against subdomain takeover. This is incorrect for the most prevalent variant. When an attacker claims an unclaimed SaaS endpoint via CNAME takeover, the SaaS platform provisions the TLS certificate automatically as part of its custom domain onboarding flow. The attacker does not interact with any CA independently. CAA records do not govern the SaaS platform’s certificate provisioning pipeline. An attacker who claims a subdomain via SaaS CNAME takeover receives a valid, browser-trusted TLS certificate automatically. CAA records provide no protection against this scenario.
Where TLS Controls Do Constrain the Attack
For expired domain registration and scenarios where the attacker must independently obtain a certificate, genuine constraints apply: certificate pinning produces hard connection failures; HSTS preloading prevents plain HTTP connections and causes TLS mismatches to generate visible errors; private CA infrastructure in government and defense environments rejects publicly issued certificates entirely.
Wildcard Certificate Inheritance
Organizations deploying wildcard certificates, e.g., *.example.com, face a counterintuitive scenario when a subdomain is compromised. Browsers and applications trusting the wildcard connect without certificate errors, because the wildcard is valid for all subdomains, including the one the attacker controls. The TLS layer provides no warning signal. The takeover becomes more seamless, not less, because the organization’s own certificate infrastructure works against detection.
The response has two components: enumerate all subdomains covered by organizational wildcard certificates via a CT log query (e.g., for *.yourdomain.com), which must precede any DNS audit; then migrate elevated-risk subdomains from wildcard coverage to per-subdomain certificates, removing the inheritance risk entirely for those subdomains.
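The CT enumeration step can be sketched against crt.sh's JSON output; the HTTP fetch itself is omitted so the parsing logic stays self-contained, and the apex domain is a placeholder:

```python
import json

# Sketch: deduplicate subdomains from a crt.sh JSON export. crt.sh accepts
# queries of the form https://crt.sh/?q=%25.example.com&output=json
# (%25 is a URL-encoded '%', crt.sh's wildcard). Each entry's name_value
# field can contain several newline-separated SAN entries.

def subdomains_from_ct(raw_json, apex="example.com"):
    names = set()
    for entry in json.loads(raw_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")  # drop wildcard labels
            if name.endswith(apex):
                names.add(name)
    return sorted(names)
```

The resulting list is the inventory the subsequent DNS audit should run against, not just the currently known zone.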
DNSSEC — A Misconception Requiring Explicit Correction
DNSSEC cryptographically signs DNS records, preventing unauthorized modification of DNS responses in transit. It defends against DNS spoofing and man-in-the-middle attacks. In every DIH scenario, the attacker does not modify DNS records in transit. They claim the resource that a DNS record legitimately and correctly points to. A dangling CNAME that is DNSSEC-signed still points to an unclaimed resource. DNSSEC validates that the DNS response has not been tampered with, but it does not validate that the endpoint the record points to is legitimately owned. DNSSEC faithfully signs and delivers the CNAME pointing to the attacker’s endpoint.
DNSSEC and CAA records both address real threat classes (DNS spoofing and unauthorized certificate issuance, respectively) that are worth addressing on their own merits. Neither addresses DIH. Organizations should implement both and credit neither with DIH protection.
Case 1 — 670 Forgotten Trust Endpoints: Vulnerability Research, 2020
Security researchers at Vulnerability Research identified over 670 Microsoft subdomains with misconfigured DNS entries pointing to unclaimed Azure service resources, including subdomains associated with identity verification services and real-time collaboration platforms. An organization with dedicated security engineering and mature infrastructure governance produced 670 forgotten trust endpoints in a single external DNS audit pass. Organizations with comparable infrastructure complexity and less security investment have no basis for expecting fewer exposures.
Case 2 — A Trusted Consumer Platform Subdomain, Zero Exploit: Frans Rosén, HackerOne
Security researcher Frans Rosén identified that a major consumer platform’s rider-facing subdomain pointed via CNAME to an unclaimed Amazon CloudFront distribution. Claiming the distribution through CloudFront’s standard account creation flow allowed arbitrary content to be served under the trusted subdomain without any access to the organization’s own infrastructure. The CDN auto-provisioned TLS certificate meant the takeover would have been invisible to users from a browser trust perspective.
Case 3 — Cloud Bucket Squatting: Scale Characterized
In research conducted between October 2024 and January 2025, approximately 150 abandoned S3 buckets were identified that organizations had used for software deployment, update distribution, and configuration management, then abandoned without retiring downstream references. These buckets were re-registered at negligible cost.
Over a two-month monitoring period, the re-registered buckets received traffic from a range of environments. Request types included software update checks, unsigned pre-compiled OS binaries for Windows, Linux, and macOS, virtual machine images, JavaScript files, CloudFormation templates, and SSL VPN server configurations. Every one of those requests could have been answered with a malicious payload. None were, because the research was conducted under responsible disclosure principles. AWS subsequently blocked the specifically identified buckets from re-creation. The March 12, 2026, introduction of Account Regional Namespaces is a structural architectural response to the systemic issue this research class quantified.
Sourcing note: specific request counts and dollar figures cited in some discussions of this research are not reproduced here, as the primary publication has not been verified to our satisfaction for that level of specificity. The request type characterization and two-month observation window are confirmed across multiple independent technical analyses. Practitioners seeking precise figures should search for cloud security research covering abandoned S3 bucket re-registration in the October 2024 to January 2025 window. This sourcing gap remains open.
Case 4 — Inherited Access via Abandoned C2 Infrastructure
Researchers registered expired domains previously used as C2 infrastructure by multiple threat groups and observed callback traffic from approximately 4,000 still-infected systems spanning government and university networks across multiple continents. As established above, this is opportunistic sinkholing of already-compromised systems, not DIH as initial access. The researchers directed resulting traffic to a trusted non-profit sinkholing operation. The analytical value is the demonstration that abandoned threat actor infrastructure remains live, globally distributed, and entirely unmonitored — and that claiming it requires only a registrar account and the willingness to act first.
AWS Account Regional Namespaces
On March 12, 2026, AWS introduced Account Regional Namespaces for S3 general-purpose buckets. The naming convention follows the format <prefix>-<account-id>-<region>-an, producing bucket names such as mybucket-123456789012-us-east-1-an. The account ID and region embedded in the suffix are tied to the owning AWS account; only that account can create buckets with that suffix, and even after deletion, the name cannot be claimed by another account.
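A hedged sketch of a naming-convention check built from the format described above; the exact character rules AWS enforces are an assumption and should be verified against current documentation:

```python
import re

# Sketch: validate that a bucket name follows the Account Regional
# Namespace convention <prefix>-<account-id>-<region>-an and that the
# embedded account/region match the expected owner. The allowed prefix
# characters are assumed, not taken from AWS documentation.

NAMESPACE_NAME = re.compile(
    r"^(?P<prefix>[a-z0-9][a-z0-9-]*)"
    r"-(?P<account>\d{12})"
    r"-(?P<region>[a-z]{2}(-[a-z]+)+-\d)"
    r"-an$"
)

def is_namespace_bucket(name, account_id, region):
    m = NAMESPACE_NAME.match(name)
    return bool(m) and m.group("account") == account_id and m.group("region") == region
```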
The --bucket-namespace account-regional parameter is confirmed across the official AWS announcement and multiple independent technical analyses as of this writing. As this feature was launched on March 12, 2026, the same date as this report, practitioners should verify current CLI syntax against the AWS S3 API CreateBucket documentation before implementation. Launch-day documentation is most subject to revision.
Organization-wide enforcement uses the s3:x-amz-bucket-namespace condition key in IAM policies and Service Control Policies. CloudFormation supports the feature today via BucketNamespace and BucketNamePrefix parameters. Terraform support is not yet available.
Four qualifications must accompany implementation: the feature provides no retroactive protection for existing buckets; Terraform support is pending; adoption is opt-in unless enforced through Service Control Policies; and launch-day CLI syntax should be re-verified against current AWS documentation before rollout.
The x-amz-expected-bucket-owner header provides a complementary detection-layer control: API calls that include this header fail if the bucket exists under a different account. Adoption friction is real: retrofitting this header across a large codebase is a multi-month engineering effort. Prioritize pipeline and software delivery contexts first.
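A minimal illustration of the header's shape, using a raw HTTP request object; the bucket URL and account ID are placeholders, and production pipelines would set the equivalent ExpectedBucketOwner parameter through their AWS SDK rather than hand-building requests:

```python
from urllib.request import Request

# Sketch: attach x-amz-expected-bucket-owner to an S3 object fetch. With
# this header set, S3 rejects the call when the bucket is owned by a
# different account, turning a squatted bucket into a hard failure
# instead of a silent fetch of attacker-controlled content.

EXPECTED_OWNER = "123456789012"  # the account that should own the bucket

req = Request(
    "https://build-artifacts.s3.us-east-1.amazonaws.com/release/app.tar.gz",
    headers={"x-amz-expected-bucket-owner": EXPECTED_OWNER},
)
# urllib normalizes header capitalization; the value is what matters.
print(req.get_header("X-amz-expected-bucket-owner"))
```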
Azure and GCS
For Azure Blob storage, squatting risk operates at the storage account name level (account names are globally unique) rather than at the container level. This is a meaningfully different threat model from S3’s historically global bucket namespace and requires separate assessment. For GCS, bucket names remain globally unique with no structural namespace protection equivalent to AWS’s Account Regional Namespaces as of this writing. GCS represents an unmitigated version of this attack surface.
All prior analyses framed DIH exclusively as something that happens to organizations as victims. That framing is incomplete. An organization that migrates its API infrastructure and allows the old domain to lapse is not a victim; it is the condition enabling DIH against its clients. A software vendor whose SDK hardcodes api.old-service.com, migrates that backend without updating the SDK, and lets the domain expire, has created a live attack surface against every organization still running that SDK.
This creates a third-party risk management obligation that current vendor security assessment frameworks do not address. Standard vendor questionnaires cover data handling, encryption standards, incident response capability, and access controls. They do not cover infrastructure lifecycle management. No industry certification or assessment framework has incorporated DIH-specific vendor lifecycle management requirements. The vendor assessment framework below addresses this gap at the organizational level until industry standardization catches up.
Vendor Questionnaire Items
Pass/Fail Criteria
A satisfactory retention period commitment is specific, documented, and derived from an assessment of the vendor’s slowest-moving client segment. For non-regulated API endpoints with rapid-update client populations, six months post-deprecation is a defensible floor. For APIs embedded in distributed SDKs or serving regulated industry clients, twelve months is the appropriate floor. Answers of “reasonable time” or “as needed” are unsatisfactory; they are unenforceable.
A satisfactory answer on client notification includes a defined communication channel, a minimum notice period documented in the vendor’s deprecation policy, and evidence that notification precedes deprecation. A satisfactory answer on dependency auditing describes a specific process for identifying downstream references, not a general commitment. Vendors who cannot describe this process have not implemented it.
The Open-Ecosystem Limitation
The dependency audit criterion cannot be answered for open-ecosystem vendors: maintainers of publicly documented APIs that any developer can integrate without registration, authors of open-source SDKs distributed through package managers, or SaaS platforms with self-serve onboarding where the vendor has no directory of integrating organizations. For these vendors, a client dependency audit is structurally impossible. The obligation shifts from dependency audit to extended retention periods and broad deprecation announcements through all available public channels, package manager deprecation notices, public changelog entries, developer documentation updates, and community forum announcements.
Organizations should categorize their vendor relationships before applying the questionnaire: closed-client vendors are assessed against the dependency audit criteria; open-ecosystem vendors against the public notification and extended retention criteria.
There is no confirmed, publicly documented case of a nation-state actor using DIH as a primary access or persistence mechanism. The operational characteristics of DIH (no exploitation required, no intrusion alert generated, traffic indistinguishable from legitimate operations, persistent access tied to infrastructure rather than credentials) are consistent with the preferences of patient, low-noise threat actors. That consistency makes DIH a plausible addition to a sophisticated actor’s toolkit. It does not make it a confirmed one.
This connection is an adversarial hypothesis, not an intelligence assessment. If a confirmed nation-state case surfaces in the open-source record, this section will be revised immediately.
Impact Profile
Intelligence Gaps
| Tactic | ID | Technique / Sub-technique |
| --- | --- | --- |
| Reconnaissance | T1596.001 | Search Open Technical Databases: DNS/Passive DNS |
| Reconnaissance | T1593.002 | Search Open Websites / Domains |
| Resource Development | T1583.001 | Acquire Infrastructure: Domains |
| Resource Development | T1584.006 | Compromise Infrastructure: Web Services |
| Resource Development | T1608.001 | Stage Capabilities: Upload Malware |
| Initial Access | T1195.002 | Supply Chain Compromise: Compromise Software Supply Chain |
| Initial Access | T1566.003 | Phishing: Spear phishing via Service |
| Credential Access | T1539 | Steal Web Session Cookie |
| Collection | T1213 | Data from Information Repositories |
| Exfiltration | T1567 | Exfiltration Over Web Service |
| Defense Evasion | T1036.005 | Masquerading: Match Legitimate Name or Location |
| Persistence | T1505.003 | Server Software Component: Web Shell (analogy) |
Dead Infrastructure Hijacking will not be solved by a patch, a product, or a policy update. It will be solved to the extent any systemic threat class is solved by organizations building explicit accountability for the trust relationship lifecycle into their operational fabric and maintaining it continuously as their infrastructure evolves.
The cases documented in this report were found by teams that were deliberately looking. In every instance, the organizations that owned the exposed assets were not looking and would not have found these exposures through any standard security process. Asymmetry, attacker patience, and organizational forgetting are the engines of this threat class.
The AWS Account Regional Namespace feature announced on March 12, 2026, is the most consequential structural mitigation this threat class has received. It removes the bucket squatting attack surface for new S3 infrastructure by binding bucket names to the owning account. It does not solve the problem retrospectively, it does not enforce itself, and it does not address any DIH variant outside cloud storage. The other surfaces (DNS, SaaS CNAME, expired domains, legacy mobile endpoints, webhook integrations, and session cookie scoping) are unaffected and require the organizational interventions detailed in the Recommendations section.
The two DNS security misconceptions corrected in this report deserve emphasis at close. CAA records do not protect against SaaS CNAME takeover. The SaaS platform provisions TLS certificates automatically, bypassing all certificate issuance controls the organization has configured. DNSSEC does not protect against any DIH variant because DIH does not involve the modification of DNS records in transit. DNSSEC faithfully signs and delivers a CNAME record pointing to an attacker-controlled endpoint. Both controls are worth implementing for the threat classes they actually address. Neither addresses DIH. Organizations that have deployed these controls and credited them with protecting against subdomain takeover have generated confidence in controls that address different threats entirely. That misplaced confidence is more dangerous than knowing the exposure exists.
The final observation is also the most important one: the organizations currently most exposed to DIH are not the ones that know about it and have deprioritized it. They are the ones that have never audited for it, have no visibility into their dead infrastructure, and whose decommissioning processes create new exposure at the same rate as every infrastructure change they make. The starting point is not a tool or a framework. It is the recognition that the attack surface does not appear on any dashboard, does not trigger any alert, and will not be found by waiting for it to announce itself.
Strategic
Treat dead infrastructure as a first-class attack surface category requiring the same continuous monitoring investment as live infrastructure. Decommissioning a service without retiring its trust relationships is not decommissioning; it is a transfer of that trust to whoever next claims the abandoned endpoint.
Build explicit cross-functional accountability for trust relationship lifecycle into decommission workflows. Development, infrastructure, and security teams must each have named, documented responsibilities in any decommission sign-off. Assumed responsibility produces the same outcome as no responsibility.
Expand third-party risk assessment frameworks to include DIH-specific vendor lifecycle management criteria. Standard vendor questionnaires do not currently cover whether vendors have processes to prevent abandoning endpoints that client systems trust. This gap must close.
Audit wildcard certificate scope before conducting DNS audits. Wildcard certificates actively increase the severity of subdomain takeover findings. A hijacked subdomain under a wildcard inherits the organizational certificate authority, making the takeover seamless to end users. This is a prerequisite for correct severity assignment, not a follow-on task.
Tactical
Migrate new AWS S3 bucket creation to Account Regional Namespace format (naming convention: <prefix>-<account-id>-<region>-an) immediately. Enforce organization-wide via the s3:x-amz-bucket-namespace condition key in Service Control Policies. Use CloudFormation for new bucket provisioning until the Terraform provider support ships. Do not build a remediation timeline around Terraform support closure; that date is indeterminate.
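The enforcement policy can be sketched as follows; the condition key comes from this report, while the "account-regional" value mirrors the CLI parameter and should be verified against current AWS documentation before deployment:

```python
import json

# Sketch of a Service Control Policy denying any s3:CreateBucket call
# that does not opt in to the Account Regional Namespace. The overall
# SCP shape (Version/Statement/Effect/Condition) is standard IAM policy
# structure; the specific condition value is an assumption to verify.

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireAccountRegionalNamespace",
            "Effect": "Deny",
            "Action": "s3:CreateBucket",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-bucket-namespace": "account-regional"
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```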
Run CT log historical discovery (%.yourdomain.com) before any DNS audit tooling. CT logs surface forgotten historical subdomains that still carry live CNAME records and would not appear in a DNS audit conducted against only the currently known zone. Without this step, the DNS audit is working from an incomplete inventory.
Deploy three DNS audit tools in sequence: dnsReaper for confirmed-exploitable SaaS CNAME signals; BadDNS for broader latent exposure, including second-order trust relationships; Nuclei with subdomain takeover templates for HTTP-level fingerprint confirmation. Running only one tool produces an incomplete picture.
Enforce subdomain-specific cookie scoping (domain=docs.example.com rather than domain=.example.com) as the primary control against cookie-scope exploitation, with SameSite=Strict as defense-in-depth. SameSite=Strict provides no independent protection when scoping is correctly configured, but it closes the cross-site GET navigation vector that SameSite=Lax leaves open and guards against future scoping misconfiguration. Both should be enforced.
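Both conditions are checkable in an automated header audit. A minimal sketch using the standard library's cookie parser, assuming the expected host is known per endpoint:

```python
from http.cookies import SimpleCookie

def audit_set_cookie(header: str, expected_host: str) -> list[str]:
    """Flag Set-Cookie header values that violate the scoping policy above:
    Domain, if present, must be the exact subdomain (no apex-wide scope),
    and SameSite must be Strict. An absent Domain attribute is host-only
    scoping, which is stricter still and passes."""
    cookie = SimpleCookie()
    cookie.load(header)
    findings = []
    for morsel in cookie.values():
        domain = morsel["domain"].lstrip(".")
        if domain and domain != expected_host:
            findings.append(f"over-broad Domain={morsel['domain']}")
        if morsel["samesite"].lower() != "strict":
            findings.append("SameSite is not Strict")
    return findings
```

Run it against the Set-Cookie headers of every authenticated subdomain; any non-empty result is a finding.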
Migrate elevated-risk subdomains’ authentication flows, API endpoints, and portals with authenticated sessions from wildcard to per-subdomain certificates. This removes the wildcard inheritance risk for those subdomains regardless of future DNS audit findings.
Operational
Wildcard-covered dangling DNS records identified during audits require DNS record removal within 24 hours, with the clock starting from finding generation, not from triage completion. Triage runs in parallel with the remediation window. For large inventories, run a pre-scan prioritization pass that surfaces wildcard-covered subdomains before full triage begins, so they are not reached late in the triage queue with the 24-hour window nearly spent.
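The prioritization pass is a simple ordering problem. A sketch, assuming each finding is a hostname and that apex domains carrying a wildcard certificate are already known:

```python
def prioritize_findings(findings: list[str], wildcard_bases: set[str]) -> list[str]:
    """Order dangling-DNS findings so wildcard-covered subdomains are
    triaged first, keeping them inside the 24-hour remediation window.

    wildcard_bases holds apex domains that carry a *.apex certificate;
    coverage is approximated here by direct parent-domain membership.
    """
    def is_wildcard_covered(host: str) -> bool:
        parent = host.split(".", 1)[-1]
        return parent in wildcard_bases

    # False sorts before True, so covered hosts come first; ties break by name.
    return sorted(findings, key=lambda h: (not is_wildcard_covered(h), h))
```

The ordering, not the speed, is the point: the highest-severity findings enter triage first instead of wherever the scanner happened to emit them.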
The escalation path for wildcard-covered findings: the security team exercises prioritization authority and authorizes emergency change protocol invocation; the infrastructure team retains execution authority for the DNS change itself. The security team does not execute DNS changes unilaterally; its role is to override standard change windows and prevent findings from queuing behind routine maintenance.
Implement domain expiry monitoring using DomainTools Iris as the primary tool for expiry alerting, SecurityTrails for DNS change monitoring, and PassiveTotal for historical DNS baseline and webhook variant determination. These are complementary capabilities, not equivalent alternatives. PassiveTotal’s expiry alerting capability is weaker than DomainTools Iris and should not be substituted for it.
Include explicit webhook protocol identification in the decommissioning checklist: document whether each webhook destination operated over HTTP or HTTPS and, if HTTPS, whether it used a Variant 1 CNAME-to-SaaS or Variant 2 custom domain configuration. Where the webhook has already lapsed, query PassiveTotal historical DNS data to establish which variant applied. This determination governs the passive-mode risk profile of the lapsed endpoint.
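The variant determination from historical DNS data is classification, not investigation. A sketch, assuming the history has been reduced to (record_type, value) pairs; the SaaS suffix list is illustrative, not exhaustive:

```python
def classify_webhook_variant(historical_records: list[tuple[str, str]]) -> str:
    """Classify a lapsed HTTPS webhook endpoint from historical DNS data
    (e.g. PassiveTotal results flattened to (record_type, value) pairs).

    Variant 1: CNAME pointing at a SaaS platform domain.
    Variant 2: custom domain served from organization-controlled A records.
    """
    # Illustrative suffixes only; extend with the organization's SaaS inventory.
    saas_suffixes = (".github.io", ".herokuapp.com", ".azurewebsites.net")
    for rtype, value in historical_records:
        if rtype == "CNAME" and value.rstrip(".").endswith(saas_suffixes):
            return "Variant 1 (CNAME-to-SaaS)"
    if any(rtype == "A" for rtype, _ in historical_records):
        return "Variant 2 (custom domain)"
    return "undetermined"
```

Endpoints that come back "undetermined" should be escalated rather than defaulted to the lower-risk variant.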
Configure CT real-time alerting for organizational domains. An unexpected certificate issuance event for a subdomain believed to be decommissioned is a high-fidelity indicator of a live takeover attempt. Note that SaaS platform certificate provisioning pipelines vary in when they publish to CT logs: for Variant 1 scenarios, the window between domain claiming and CT log visibility may stretch well beyond a few hours, and exploitation may be underway before the CT alert fires. CT alerting is a detection layer, not a guaranteed early warning system.
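The alerting logic itself is a small filter over the certificate event stream (e.g. a certstream feed). A sketch, assuming the event has been reduced to the DNS names on the newly logged certificate:

```python
def takeover_indicators(cert_domains: list[str], org_apexes: list[str],
                        decommissioned: set[str]) -> list[str]:
    """Return the organizational names on a newly logged certificate that
    appear on the decommissioned list -- the high-fidelity signal described
    above. Non-organizational names are ignored entirely."""
    hits = []
    for name in cert_domains:
        name = name.lstrip("*.").lower()
        in_org = any(name == apex or name.endswith("." + apex)
                     for apex in org_apexes)
        if in_org and name in decommissioned:
            hits.append(name)
    return hits
```

Any non-empty result warrants immediate escalation through the same emergency change path used for wildcard-covered findings.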
Retain any domain whose downstream dependencies have not been fully verified as removed. The retention obligation persists until cross-functional decommission sign-off is complete; only after that verified sweep is documented is a domain eligible for release. Deploy DomainTools Iris expiry alerting across all domains in the decommission pipeline to prevent inadvertent lapse during the retention period.
Add a mobile application endpoint inventory to the decommissioning protocol. The infrastructure team retiring a service notifies the mobile application team with a documented handoff. The mobile application team owns verification that no supported version calls the retiring endpoint, using MDM telemetry for managed fleets and app store version distribution data for consumer applications. The security team owns the final sign-off. Unsupported legacy version gaps are documented explicitly and escalated to a risk acceptance decision, not assumed closed.