What FSD Reveals About Optimus


Florida, 2025.
A driver rear-ends another vehicle while Autopilot, part of Tesla’s Full Self-Driving (FSD) system, is engaged. His hands are on the wheel. The system issues no takeover alert. The court assigns Tesla 33 percent of the liability.

The ruling is not about whether the system technically failed.
It is about whether the product’s design and presentation led a reasonable user to believe they could pay less attention.

The court said yes.

This is the first U.S. jury verdict to assign Tesla partial responsibility in an Autopilot-related fatality. The company is appealing. Whatever the appeal’s outcome, the question the verdict raises will outlast it.

When the product works better, liability gets harder to separate

FSD is built to reduce driver involvement. It decides when to brake, accelerate, and change lanes. That is the point of the product.

When accidents happen, Tesla’s position has been consistent: the driver should have been monitoring at all times.

Placed side by side, the tension is obvious.
The product is designed to let the user relax.
The liability framework requires the user to remain fully engaged.

The Florida court looked at both and did not accept that they can coexist without consequence.

This is not a claim about bad intent.
It is a structural mismatch between how the product creates value and how responsibility is assigned when something goes wrong.

Why small errors matter more than big ones

Systems that fail constantly are easy to abandon.
The harder problem is a system that works well most of the time and fails in ways that are difficult to anticipate.

NHTSA’s 2025 investigation into FSD includes reports of red-light violations, inconsistent behavior at railroad crossings, and unexpected braking in routine conditions.

None of these are catastrophic on their own. What they reveal is something more basic: the user cannot reliably anticipate when the system’s judgment will diverge from their own.

Users are told to be ready to intervene at all times.
They are given no reliable signal for when that moment arrives.

At that point, supervision becomes a formal requirement rather than a practical one.

Optimus faces the same structure, amplified

Road environments are constrained. They have markings, signals, and shared conventions. Even so, it has taken more than a decade for autonomous driving to begin producing meaningful liability precedent.

Optimus is intended for environments without those constraints.
Homes. Kitchens. Shared spaces. Children nearby.

If a robot knocks over boiling water and someone is injured, responsibility must be assigned.
If it discards an object it classifies as waste and that object is medication, responsibility must be assigned.

The value proposition of a humanoid robot is autonomy.
Constant supervision defeats the point.

This is the same tension seen in FSD, carried into environments that are less predictable and less forgiving.

Regulation does not resolve the tension; it formalizes it

The EU AI Act introduces disclosure and oversight requirements for high-risk systems. Proposed liability frameworks shift evidentiary burdens toward producers when harm occurs.

These measures clarify who pays when something goes wrong.
They do not eliminate the underlying tradeoff.

As autonomy increases, it becomes harder to argue that the user is at fault.
As autonomy decreases, the product’s value erodes.

Regulation does not remove this tension. It records it.

What Optimus is actually waiting for

The limiting factor is not whether the technology can reach a certain threshold.

The unresolved question is who carries the consequences when autonomous judgment fails.

Tesla. Insurers. Users.
FSD has shown that none volunteer. Courts decide after the fact.

Optimus is waiting for a consensus that does not yet exist:
when a machine makes its own judgment, acts on it, and causes harm, whose problem is that?

The technology will keep advancing.
Whether the product can land in the real world is a separate question.