IMHO the only sensible answer is to separate responsibility in the sense that a tragedy happened and someone has to try to help the survivors as best they can, from responsibility in the sense that someone behaved inappropriately and that behaviour caused an avoidable tragedy in the first place.
It is inevitable that technology like this will result in harm to human beings sooner or later. Maybe one day we'll evolve a system that really is close to 100% safe, but I don't expect to see that in my lifetime. So it's vital to consider intent. Did the people developing the technology try to do things right and prioritise safety?
If they behaved properly and made reasonable decisions, a tragic accident might be just that. There's nothing to be gained from penalising people who were genuinely trying to make things better and had no intent to do anything wrong. There's still the question of how to look after the survivors who are affected. That should probably be a purely civil matter in law, and since nothing can undo the real damage, in practice we're mostly talking about financial compensation.
But if someone chose to cut corners, failed to follow approved procedures, or wilfully ignored new information that should have prompted a safety improvement, particularly in the interests of personal gain or corporate profits, then we're into a whole different area. This is criminal territory, and I suspect it's going to be important for the decision-makers at the technology companies to have some personal skin in the game. There are professional ethics that apply to people like doctors, engineers, and pilots, and they are personally responsible for complying with the rules of their profession. There should probably be something similar for others involved with safety-critical technologies, including self-driving vehicles.