Policy conversations about requiring AI-driven collision avoidance systems in drones are moving from hypotheticals to concrete rulemakings. Regulators and the executive branch in 2025 have signaled a clear appetite for scaling beyond-visual-line-of-sight operations, and detect-and-avoid, or DAA, has become the focal point where safety, ethics, and commercial pressure collide. The U.S. Federal Aviation Administration and Transportation Security Administration published a broad Notice of Proposed Rulemaking to create 14 CFR Part 108, which explicitly foregrounds DAA capability and proposes consequential changes to right-of-way rules for Part 108 UAS. That proposal would require drones in certain operations to detect and give way to cooperative traffic and, in some shielded locations, to carry stronger DAA capabilities.

The push for faster integration is not only regulatory. A June 2025 White House executive order directs federal agencies to accelerate routine BVLOS operations and to deploy AI tools to streamline waiver and approval processes. That combination of policy pressure and operational demand is likely to accelerate adoption of AI-based perception and planning systems on commercial UAS platforms.

Any mandate that requires or rewards AI-based collision avoidance must do more than prescribe sensors and algorithms. From an ethics and public policy perspective there are three core obligations: first, to ensure demonstrable, measurable safety performance across the operational design domain; second, to make systems auditable, certifiable, and robust to foreseeable failures and attacks; and third, to allocate responsibility so that victims, operators, manufacturers, and regulators understand who is accountable when things go wrong.

On technical maturity, research in 2024 and 2025 shows progress but also important limits. Vision and learning-based systems have produced compelling demonstrations of airborne detection and tracking at constrained ranges and under controlled conditions, and advanced control methods can produce provable safety margins in some scenarios. Real-world test programs and academic work show promise, but they also reveal sensitivity to lighting, weather, sensor occlusion, and adversarial inputs. These constraints matter because policy makers who write mandates will be asking for reliable performance well beyond laboratory conditions.

Regulatory frameworks elsewhere add another dimension. The European Union brought the AI Act into force in 2024, creating a risk-based regime that places particularly stringent obligations on high-risk AI systems and AI embedded in regulated products. Collision avoidance for aircraft has a clear public-safety implication and will interact with those EU rules, especially where DAA is embedded in certified avionics or employed by providers operating inside the EU market. International aviation authorities are also working on RPAS and detect-and-avoid standards, which points to a near-term need for cross-jurisdictional harmonization.

Mandates that simply demand “AI-based collision avoidance” risk three ethical failures. First, a poorly scoped mandate can create a false sense of safety if it focuses on the presence of machine learning rather than validated performance metrics. Second, mandates that ignore failure modes and adversarial risks could incentivize deployment of brittle systems that perform well on benchmark tests but fail catastrophically in the real world. Third, rules that shift the burden entirely onto other airspace users by reallocating right-of-way without ensuring affordable electronic conspicuity options will raise equity and safety concerns for general aviation, aerial applicators, emergency rotorcraft, and small operators. The proposed U.S. approach of granting Part 108 UAS different right-of-way responsibilities illustrates this tension: it promises operational efficiency while raising legitimate concerns among manned aviation stakeholders.

From these realities, a set of practical ethical design and policy principles follows.

1) Performance-based, not technology-prescriptive mandates. Regulations should set measurable safety targets and evaluation scenarios rather than specifying particular ML models or sensor suites. That lets innovators pursue different architectures while ensuring a common safety baseline. Testable metrics should include detection range and probability, time-to-react, false positive/negative rates under representative environmental conditions, and system latency under full-load operations. Where appropriate, regulators should require demonstration across edge cases and degraded-sensor scenarios.
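To make the idea of testable metrics concrete, here is a minimal sketch of how a certification campaign might score encounter-test logs against performance targets. The record format, field names, and thresholds are illustrative assumptions, not drawn from any rule text.

```python
# Hypothetical sketch: scoring a DAA flight-test campaign against
# performance-based targets. All thresholds are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Encounter:
    intruder_present: bool    # ground truth from the test range
    alert_raised: bool        # did the DAA system alert?
    detection_range_m: float  # range at first detection (0 if none)
    latency_s: float          # sensor-to-alert latency

def score(encounters, min_range_m=2000.0, max_latency_s=1.0):
    tp = sum(1 for e in encounters if e.intruder_present and e.alert_raised)
    fn = sum(1 for e in encounters if e.intruder_present and not e.alert_raised)
    fp = sum(1 for e in encounters if not e.intruder_present and e.alert_raised)
    detections = [e for e in encounters if e.intruder_present and e.alert_raised]
    return {
        "p_detect": tp / max(tp + fn, 1),                 # detection probability
        "false_alert_rate": fp / max(len(encounters), 1), # nuisance alerts
        "range_ok": all(e.detection_range_m >= min_range_m for e in detections),
        "latency_ok": all(e.latency_s <= max_latency_s for e in detections),
    }
```

A regulator would run the same scoring over separate encounter sets for each declared environmental condition (night, rain, clutter), rather than one aggregate pool, so that a system cannot mask poor edge-case performance with good average numbers.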

2) Certifiable assurance and continuous monitoring. Collision avoidance that can alter a vehicle’s trajectory must be subject to aviation-grade assurance processes. That means rigorous verification and validation, traceable data and model provenance, and operational monitoring after fielding. For software that incorporates learning components, manufacturers should provide evidence of training data curation, distributional-shift analyses, adversarial robustness tests, and explainable failure-mode characterizations. Where the AI is safety critical, regulators should require independent third-party evaluation and periodic re-certification.

3) Clear operational design domains and graceful degradation. Mandates should require that manufacturers declare the operational design domain, or ODD, where the DAA system is certified to operate. If conditions fall outside that ODD, the system must degrade gracefully to a safe, predictable fallback mode such as return-to-home, hover-and-wait, or a prompted human takeover, rather than producing unpredictable maneuvers. Regulators should mandate explicit, auditable transition criteria from autonomy to fallback.
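The notion of explicit, auditable transition criteria can be sketched as a simple decision function that returns both the fallback mode and a loggable reason. The ODD limits, condition names, and fallback ordering below are assumptions for illustration, not drawn from any certification standard.

```python
# Illustrative fallback-transition logic for a declared ODD. Limits and
# ordering are assumed; a real system would take these from its type design.
ODD_LIMITS = {                      # condition -> (limit, "max" or "min")
    "wind_mps":     (12.0,   "max"),
    "visibility_m": (1500.0, "min"),
    "gps_error_m":  (5.0,    "max"),
}

def fallback_mode(conditions, link_ok, battery_margin_ok):
    """Return (mode, reason) so every transition is loggable and auditable."""
    violations = []
    for name, (limit, kind) in ODD_LIMITS.items():
        value = conditions[name]
        if (kind == "max" and value > limit) or (kind == "min" and value < limit):
            violations.append(name)
    if not violations:
        return ("NOMINAL", "within declared ODD")
    reason = "ODD exit: " + ", ".join(violations)
    if link_ok:
        return ("HUMAN_TAKEOVER", reason + "; pilot prompted")
    if battery_margin_ok:
        return ("RETURN_TO_HOME", reason + "; C2 link lost")
    return ("HOVER_AND_WAIT", reason + "; link lost, low battery margin")
```

The point of returning a reason string alongside the mode is that every autonomy-to-fallback transition leaves an audit trail a regulator can review after an incident.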

4) Auditability, transparency, and incident reporting. To build public trust and to support post-incident learning, operators should be required to log sensor, perception, and decision outputs in a standardized, tamper-evident format. Mandatory incident reporting for DAA interventions and near-misses will let regulators and researchers assemble the empirical evidence needed to refine both technology and rules. Transparency rules must be balanced with legitimate commercial concerns about IP and security.
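One way to make "tamper-evident" concrete is a hash-chained log, where each entry commits to the hash of its predecessor so that any later alteration breaks the chain. The field names and chaining scheme below are a minimal sketch using only the standard library, not a proposed standard.

```python
# Sketch of a tamper-evident, hash-chained log for DAA decision records.
# Field names and the chaining scheme are illustrative assumptions.
import hashlib
import json

def append_entry(log, record, ts):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": ts, "record": record, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False  # chain broken: entries reordered or removed
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False  # entry contents altered after the fact
        prev = entry["hash"]
    return True
```

A deployed scheme would also need signed checkpoints or off-board anchoring so an operator cannot simply rebuild the whole chain, but the core property, that post-hoc edits are detectable, is already visible in this sketch.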

5) Cybersecurity, adversarial resilience, and anti-spoofing. Collision avoidance systems change the risk profile for malicious interference. Regulators should require threat modeling and adversarial testing for ML perception stacks, and certify protections for communications, firmware updates, and sensor data integrity. This must include consideration of spoofed ADS-B and electronic conspicuity signals, GPS jamming, and optical sensor spoofing.
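As one small example of the kind of defense adversarial testing should exercise, a DAA stack can apply kinematic plausibility checks to cooperative-traffic reports before trusting them: a track whose position jumps imply impossible speeds is a spoofing candidate. The report format and speed threshold below are assumptions for illustration.

```python
# Illustrative plausibility filter for cooperative-traffic position reports.
# The threshold and local-frame report format are assumed, not standardized.
import math

MAX_SPEED_MPS = 350.0  # assumed upper bound for plausible low-altitude traffic

def implied_speed(p1, p2):
    """p = (t_seconds, x_m, y_m) in a local planar frame; returns m/s."""
    dt = p2[0] - p1[0]
    if dt <= 0:
        return float("inf")  # out-of-order or duplicate timestamps: suspicious
    return math.hypot(p2[1] - p1[1], p2[2] - p1[2]) / dt

def plausible(track):
    """True if every consecutive report pair implies a physical speed."""
    return all(implied_speed(a, b) <= MAX_SPEED_MPS
               for a, b in zip(track, track[1:]))
```

A single heuristic like this is obviously defeatable; the policy point is that certification should require a documented suite of such cross-checks and red-team evidence that they were tested against realistic spoofing campaigns.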

6) Proportionate allocation of responsibility and equitable rules for mixed airspace users. If policy grants BVLOS drones conditional right-of-way in low-altitude airspace, the mandate must also address the cost and feasibility of parity measures for legacy manned aircraft and small operators. Portable electronic conspicuity solutions, subsidies for safety equipment, or operational restrictions for high-risk zones are all policy levers that can reduce inequitable burdens. The objective should be to avoid regulatory choices that privatize risk onto less-resourced airspace users.

7) International alignment and standards development. Aviation is inherently cross-border. National mandates will be most effective if they map to ICAO model rules and to regional frameworks like the EU AI Act, and if they reference common test suites and standards. Investing in multilateral standards for scenario libraries, conformance tests, and data-sharing protocols will lower friction for operators while improving safety.

8) Phased implementation with sandboxes and test ranges. Given technical complexity, regulators should use tiered approvals, operational sandboxes, and full-scale test ranges to gather operational data, especially for high-density urban environments and mixed-use corridors. The FAA’s Part 108 NPRM and White House directions to leverage test ranges and to accelerate approvals create an opening to structure those phased deployments responsibly.

Practical policy instruments that follow from these principles include mandatory third-party DAA performance certification, an FAA or interagency registry of certified DAA systems, standardized incident and near-miss reporting tied to administrative review, and conditional subsidies to help lower-cost operators equip with compatible conspicuity technologies. Rule writers should also consider safety margins that account for model drift, and require providers to demonstrate active monitoring programs that watch for degradations in field performance.
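The active-monitoring requirement can be pictured as a simple field-performance watchdog: compare a rolling false-alert rate against the rate demonstrated at certification and flag when it drifts past an agreed margin. The baseline, margin, and window size here are assumptions, chosen only to illustrate the mechanism.

```python
# Sketch of a field-performance drift monitor. Baseline, margin, and window
# are illustrative assumptions a certification basis would actually fix.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_rate=0.01, margin=2.0, window=1000):
        self.limit = baseline_rate * margin  # allowed field rate
        self.window = deque(maxlen=window)   # rolling outcome buffer

    def observe(self, was_false_alert: bool) -> bool:
        """Record one alert outcome; True means degradation is flagged."""
        self.window.append(1 if was_false_alert else 0)
        if len(self.window) < self.window.maxlen:
            return False  # insufficient evidence in the window yet
        return sum(self.window) / len(self.window) > self.limit
```

Tying a flag like this to a mandatory report, rather than to an automatic grounding, keeps the human regulator in the loop while still ensuring that degradation in the field cannot go quietly unnoticed.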

Ethics in this context is not academic. The public will judge mandates not on intentions but on outcomes: whether lives are safer, whether pilots are treated equitably, and whether the systems we authorize can be trusted under stress. Mandates that demand AI for collision avoidance can be ethically defensible, but only if they are written around measurable safety, robust assurance, shared accountability, and international cooperation. The alternative is a patchwork outcome where faster deployment creates new, predictable harms.

Two final operational cautions. First, regulators should avoid substituting presumptions of infallibility for rigorous testing. Safety should be demonstrated, not assumed. Second, because technology and attacks evolve, regulatory frameworks for DAA must be adaptive. Fixed, inflexible prescriptions for specific ML techniques will quickly become obsolete. Instead, build adaptive oversight mechanisms that require continuous evidence of safety while giving innovators room to improve.

If policy makers apply these principles, collision avoidance mandates can accelerate useful applications of drones while respecting the core ethical commitments of aviation: safety, transparency, and equitable risk allocation. That balanced approach will be necessary to make DAA an enabler of public benefit rather than a cause of new public distrust.