

how does that stop the checker model from “hallucinating” a “yep, this is fine” when it should have said “nah, this is wrong”?
the first one was confident. But wrong. The second one could be just as confident and just as wrong.
what makes the checker models any more accurate?
you don’t check your brain’s file system regularly?
opposite or not, they are both tasks that the same fixed matrix multiplications can utterly fail at. It’s not a regulation thing, it’s a math thing: this cannot possibly work.
If you could get the checker to be correct all of the time, then you could just do that on the model it’s “checking”, because it is literally the same thing, with the same failure modes and the same lack of any real authority in anything it spits out.
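
to make that concrete, here’s a minimal sketch. everything in it is hypothetical (the `call_model` helper, the error rate, the canned strings); it just simulates a model call as something that confidently produces the wrong output some fraction of the time. the structural point is that the “checker” is another call into the same fixed weights, so whatever failure rate the worker has, the checker has it too, and its failure mode is a confident wrong verdict.

```python
# a hedged sketch, not anyone's real API: call_model() is a stand-in for
# one forward pass through the same fixed weights, simulated as a process
# that confidently produces the wrong output some fraction of the time.
import random

ERROR_RATE = 0.2  # pretend any single call goes wrong this often


def call_model(good_output: str, bad_output: str) -> str:
    """Stand-in for one model call: same weights, same error rate,
    no matter what role the prompt assigns it."""
    return bad_output if random.random() < ERROR_RATE else good_output


def generate(question: str) -> str:
    # the "worker" role: may hallucinate an answer
    return call_model(good_output="right answer", bad_output="made-up answer")


def check(answer: str) -> str:
    # the "checker" role: literally the same call_model, so it can
    # hallucinate a confident "yep, this is fine" on a wrong answer
    # exactly as easily as generate() hallucinated the answer itself.
    if answer == "right answer":
        return call_model("yep, this is fine", "nah, this is wrong")
    return call_model("nah, this is wrong", "yep, this is fine")


if __name__ == "__main__":
    random.seed(0)
    trials = 10_000
    waved_through = 0
    for _ in range(trials):
        ans = generate("what year did X happen?")
        if ans != "right answer" and check(ans) == "yep, this is fine":
            waved_through += 1
    # ~20% of answers are wrong, and ~20% of those still get approved,
    # so roughly 4% of all runs end as a wrong answer stamped fine.
    print(f"{waved_through} of {trials} runs were wrong answers marked fine")
```

run it and a non-trivial slice of the wrong answers still come back stamped “yep, this is fine”. you can dress the checker up with a different prompt or a second pass, but nothing in that loop ever consults anything other than the same fixed matrix multiplications, so the verdict is no more trustworthy than the answer it’s judging.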