In California it's still not clear, when an autonomous car commits a moving violation, who gets the ticket, who gets points on their record, and who pays the fines and fees - so nobody does.
Are those mechanisms relevant or useful for regulating autonomous vehicles? It seems to me that you're applying a system designed to incentivize and manage the behavior of individual human drivers to an entirely different context. That doesn't make sense.
What does? Well, pretty much what California is doing. There's a regulatory agency tasked with defining rules for licensing self-driving systems to operate on state roads. Failure to comply with regulatory requirements, or evidence that a system fails to behave safely and effectively, results in the state rescinding the license to operate. Of course, not every failure is of a magnitude that justifies license revocation, and how the maker of the system responds to problems is a key factor in determining an appropriate response.
In this case, Waymo had a significant problem. Waymo responded by immediately suspending service until, presumably, it figures out how to address the problem. Assuming they fix it, that's reasonable behavior that doesn't warrant much response from the regulator, except perhaps to look into Waymo's design and testing processes to see whether this gap is indicative of others.
This all makes a lot more sense than trying to fit policies designed for humans onto machines.