This varies a good deal from institution to institution. I've gone from places where you were expected (read: bullied) to achieve a certain pass rate (around 90%, regardless of how much work the students were willing to put into the course) to places where the pass rate is used only as a possible warning sign of operational difficulties. The institution I'm currently at (which takes the latter approach) has by far the best national reputation for teaching of anywhere I've worked. If your course has a low pass rate, you'll be expected to identify it, justify it, identify the issues that may have caused it, and provide a *reasonable* plan of action for improvement[1]. It works well.
Student attainment levels are a ridiculous metric, because they are so easily gamed. I plan the course, I deliver the course, I set the assessment, I write the exams, I mark the exams. If I want a pass rate of 94.34% on the nose, I can be damn sure to get it, no matter what level of oversight (short of the Orwellian) there is. Attach consequences to low student attainment and you will force people to game the system. That's the way we are built as people.
But it's cheap, it's easy, and to those who no longer remember how teaching really works, it's a convincing way to rate teacher ability. It's not really a big surprise that the worst teachers in many institutions (those who don't have the benefit of a high-repute research output to insulate them) often have the most consistently impressive pass rates. You raise a good point, but I think the problem exists primarily in those institutions that don't really have the confidence to say 'This metric makes no sense'.
[1] A reasonable plan is not 'make the material easier', FWIW.