There is no evidence the roads will be any safer.
Waymo's accident record is strong evidence that they will be.
The point is that in California there currently appears to be NO penalty or state-wide mechanism for addressing traffic violations by a robotaxi. Police apparently have little choice but to just let them go on their way without any action (at least that is what police are doing).
Is that what they're doing? It seems to me that the appropriate action is to report the event to the regulator and have them follow up.
Do you have a link to information about what police are or aren't doing?
Oh, I should have mentioned: It was a semi-automatic rifle, with a high-capacity magazine (though the magazine was an internal tube, not a replaceable box). It even had a polymer stock, though it was dark brown, not black.
Decades ago people were going to school with shotgun racks on their vehicles.
Heh. My dad used to keep a rifle and ammunition in his school locker. He'd carry the rifle into the school and put it in his locker every morning, and reverse the process every afternoon. Why? He hunted jackrabbits every afternoon, on his way home from school. The local farmers paid a bounty for jackrabbit ears because the rabbits ate their crops. He tried using a shotgun for a while, but the shells were a lot more expensive, so assuming sufficient skill to hit fast-moving rabbits with a rifle, it was the better choice.
Simpler times...
Imagine if those millions of dollars were spent on teaching students.
I'm sure the district would love to spend the money that way, but we live in a society that values easy access to guns more than it values safety, so the district's hand is forced.
This is Beverly Hills, and school districts are funded by property tax revenues. This school district has money coming out of its metaphorical ears.
That, of course, is also a problem: some school districts are lavishly funded while others struggle mightily. But if the Beverly Hills school district weren't blowing $5M on questionable safety equipment, they'd be blowing it on something else.
Nobody stops for non-working lights anymore.
This is absurd; of course they stop. Anyone who fails to stop is violating the law.
Dude, I wish I had mod points! This is the best laugh I've had all day.
I think the whole notion of applying a behavior-management program designed for individual drivers to a company operating a fleet of robot drivers makes no sense. It's a different situation, and calls for different regulatory strategies. I'm not saying there shouldn't be regulation of autonomous vehicles, just that it should be tailored to address that problem, rather than applying a solution designed for a different problem.
And, frankly, California's strategy seems like a good one. They're allowing systems to be built and tested on public roads because the systems will, when fully operational, yield enormous benefits to the people: safer roads, lower-cost transport, recovery of vast amounts of space currently devoted to parking lots, etc. They're also overseeing this testing, requiring regular reports, being ready to intervene and impose additional requirements or revoke permission to operate, etc.
In California it's still not clear who gets a ticket for a moving violation, who gets points on their record, or who pays the fines and fees when an autonomous car violates the law - so nobody does.
Are those mechanisms relevant or useful for regulating autonomous vehicles? It seems to me that you're applying a system designed to incentivize and manage the behavior of individual human drivers to an entirely different context. That doesn't make sense.
What does? Well, pretty much what California is doing. There's a regulatory agency tasked with defining rules for licensing self-driving systems to operate on state roads. Failure to comply with regulatory requirements, or evidence of failure to behave safely and effectively, results in the state rescinding the license to operate. Of course, not every failure is of a magnitude that justifies license revocation, and how the maker of the system responds to problems is a key factor in determining an appropriate response.
In this case, Waymo had a significant problem. Waymo responded by immediately suspending service until, presumably, they figure out how to address the problem. Assuming they fix it, that's reasonable behavior that doesn't warrant much response by the regulator, except perhaps to look into Waymo's design and testing processes to see whether this gap is indicative of others.
This all makes a lot more sense than trying to fit policies designed for humans onto machines.
Section 227.32 on page 11 says the autonomous vehicle test driver is mandatory. Earlier it says there should be a communications link between the driver and the vehicle, but it doesn't say that the link must go through a "network."
Thanks. I guess this requirement goes away when the system graduates out of "test" mode?
The requirement that there be a way to 'take over' a 'driverless' vehicle in case of a problem literally requires network access for a remote 'driver' to do so.
How can a remote driver take over a vehicle's controls if there is no network?
I was looking for a reference. Luckily, ObliviousGnat was actually helpful.
They are legally obligated to work on a network, as they should be.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.dmv.ca.gov%2Fportal%2F...
What part of that document says they have to be networked? I skimmed it and didn't see anything like that. I found some stuff about remote operators, but those appear to be optional.
For a thermostat, you would be looking at at least 10 years, arguably more due to the high cost.
Well, the Nest 2nd gen thermostat was sold from 2012 to 2015 (when it was replaced by the 3rd gen), and support ended in 2025 - so, 10 years even for the last one sold.
What I do not like about AI coding: the intellectual and memory challenges fade away. The brainwork I liked about coding is gone. Copy-pasting, and especially auto-coding, becomes boring quite fast, and I end up with no deep knowledge of the code. I have no problems to think about, no solutions to feel accomplished about. Those only come when I catch the AI doing something stupid.
I have exactly the same problem with code I've copied from the web, and now from AI. Typing it in instead of copy-pasting is a huge help, especially if I change variable and function names and reformat on the fly.
I came across some Emacs elisp code I'd written about 25 years ago, and it looked pretty useful. Emacs didn't like it. I researched the functions and variables and they apparently had been rejiggered about 5 years later. I said to myself, Self, sez I, this could be an interesting AI test. I could probably make this do what I want in a few minutes now if I did it from scratch, but that wouldn't help me understand why it was written that way 25 years ago.
So I asked Grok. I was pleasantly surprised to find it understood 25-year-old elisp code just fine, explained when and how the functions had been rejiggered, and rewrote the code to current standards. That was more than I had expected and well worth the time invested.
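To give a flavor of the kind of rejiggering involved, here is a made-up illustration (not the code in question): elisp from the Emacs 20 era often used calls such as string-to-int, which was later declared obsolete and eventually removed in favor of string-to-number.

    ;; Hypothetical example only -- not the original code.
    ;; Old style, circa Emacs 20; string-to-int was later removed:
    ;;   (setq width (string-to-int "80"))
    ;;
    ;; What current Emacs expects instead:
    (setq width (string-to-number "80"))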
Another time Grok surprised me was when I asked how much of FDR's New Deal legislation would have passed if it had required a 2/3 majority instead of a simple majority. Not only did it name the legislation that would not have passed, it also named all the legislation that had passed by voice vote, where there was no way to know whether 2/3 had voted for it. The couple of bills I checked did match and were not hallucinations. The voice-vote business was a nice surprise.
I program now for fun, not professionally. The idea of "offshoring" the fun to AI doesn't interest me. But trying to find 25-year-old documentation and when it changed doesn't sound like fun, and I'm glad to know I can offshore at least some of the dreary parts.
"The value of marriage is not that adults produce children, but that children produce adults." -- Peter De Vries