@instography, it's clear that you know a lot about statistical sampling, but maybe a bit less than you think about the difficulties of sampling for opinion polls.
The fact is that, whether you go passive or targeted, face-to-face or phone, include mobile numbers or not, use a self-selecting panel or contact people at random, be genuinely random or targeted-random, there is no way you can get a genuinely representative sample. There will always be biases.
You will get more older people than young. You can fix that by going online, but then it's self-selecting, and you still don't get enough 18-24s. You get fewer genuinely poor people, and fewer properly rich people. You get more white people and more UK nationals than you should. It goes on and on.
Getting a representative sample is the incredibly difficult bit. It's relatively easy to adjust for age, sex and location, because these can be known. It's a little harder to adjust for socio-economics, because you're dependent on the person telling you the truth, and on the measure you use being sound. Then comes the really difficult part in Scotland, especially in the referendum polling: how do you know if you've overpolled supporters of one party due to biases in your methodology?
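Before getting to that, here's what the "relatively easy" adjustment looks like in practice - a minimal Python sketch with invented numbers throughout (the age bands, shares and Yes rates are all hypothetical, purely for illustration):

```python
# Minimal sketch of demographic weighting, with made-up numbers.
# Population shares for age bands (knowable from the census) versus the
# shares that actually turned up in a hypothetical raw sample.
population = {"18-24": 0.12, "25-44": 0.34, "45-64": 0.32, "65+": 0.22}
sample     = {"18-24": 0.05, "25-44": 0.28, "45-64": 0.37, "65+": 0.30}

# Weight each respondent by (population share / sample share) for their band.
weights = {band: population[band] / sample[band] for band in population}
print(weights)  # 18-24s get a weight of 2.4: each one counts as 2.4 people

# Hypothetical Yes share within each age band.
yes_share = {"18-24": 0.55, "25-44": 0.48, "45-64": 0.42, "65+": 0.35}

# Unweighted vs. weighted headline figures.
raw      = sum(sample[b] * yes_share[b] for b in sample)
weighted = sum(sample[b] * weights[b] * yes_share[b] for b in sample)
print(f"raw {raw:.1%}, weighted {weighted:.1%}")  # raw 42.2%, weighted 44.1%
```

This only works because the age margins are knowable from the census. For the harder variables there's no such anchor to weight to.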
In England, that question is generally answered by asking people who they voted for at the last election and weighting accordingly. But that doesn't work in Scotland, because of the huge numbers who vote Labour for Westminster and SNP for Holyrood.
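To see why, here's a minimal sketch, again with invented numbers, of what happens inside the "voted Labour at Westminster" group when part of it votes SNP at Holyrood. The sub-group mixes and Yes rates are assumptions for illustration, not estimates:

```python
# Why weighting by Westminster recall misses the problem (invented numbers).
# Within the "voted Labour at Westminster" group there are two very
# different kinds of voter, indistinguishable by the weighting variable:
yes_rate = {"Lab-Lab": 0.20,   # Labour at Westminster AND Holyrood
            "Lab-SNP": 0.60}   # Labour at Westminster, SNP at Holyrood

population_mix = {"Lab-Lab": 0.60, "Lab-SNP": 0.40}  # true mix of the group
sample_mix     = {"Lab-Lab": 0.45, "Lab-SNP": 0.55}  # skewed sample mix

true_yes    = sum(population_mix[k] * yes_rate[k] for k in yes_rate)
sampled_yes = sum(sample_mix[k] * yes_rate[k] for k in yes_rate)
print(f"true Yes among Labour-recall voters:    {true_yes:.0%}")   # 36%
print(f"sampled Yes among Labour-recall voters: {sampled_yes:.0%}")  # 42%
# Both sub-groups answer "Labour" to the Westminster recall question, so
# weighting to the correct Westminster totals leaves the skew fully intact.
```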
Finally, there's the artificiality of the question itself. Telling a pollster something isn't real; marking it on a ballot paper is. I think that accounts for the 1992 problem more than shy Tories - but no-one can ever know.
One specific way there could be a problem with indyref polling is this: how comfortable are Yes supporters saying so down the phone to an English-accented interviewer?
Insto, for you to assume that the pollsters at the edges are outliers and the truth lies in the middle shows that you have a good grasp of statistics in general, but not enough knowledge of the history and methodology of political polling. It's an assumption not based in evidence: in the past, all the pollsters have been wrong in the same direction on some elections.
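A tiny simulation makes the point, assuming (purely for illustration) a true Yes share of 45% and a 3-point bias shared by every pollster's methodology:

```python
import random

random.seed(1)           # reproducible illustration
TRUE_YES = 0.45          # hypothetical true Yes share
SHARED_BIAS = 0.03       # bias common to every pollster's methodology
N_POLLS = 100

# Each poll: truth + shared methodological bias + house/sampling noise.
polls = [TRUE_YES + SHARED_BIAS + random.gauss(0, 0.02) for _ in range(N_POLLS)]
average = sum(polls) / len(polls)
print(f"truth {TRUE_YES:.1%}, poll average {average:.1%}")
# The average converges on truth + shared bias (~48%), not on the truth.
```

Averaging only cancels the noise that is independent between pollsters; any bias they share passes straight through to "the middle".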
The variation in methodologies, and the range of potentially acceptable methodologies, mean that the range of uncertainty is very large on this one. Can you find any other issue on which pollsters differ so greatly when asking literally the same question?
I'd highly recommend the UK Polling Report blog - http://ukpollingreport.co.uk/ - where these things are regularly and well covered, without snide barbs like yours above.