It looks like jQuery uses that markup as a test template to determine browser support for various features. I'm not sure why it would ever be seen by a Googlebot, though. I was not aware that web crawlers typically ran any JavaScript; that would mean they are actually functioning as a web browser (which one, I wonder?). Seems unlikely.
(Edit - see this: how do web crawlers handle javascript - it indicates that Google may try to pull some data out of scripts. I'm surprised it would not be programmed to recognize something that's part of jQuery; do you use a nonstandard name for the include?)
Alternatively, is there any chance that the header for your jQuery include is not correct? Maybe it's being served with an HTML MIME type, which most browsers would probably ignore since the type is also set by the script include, but which a bot might decide to parse.
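If you want to check, something like this in the browser console should show the Content-Type the server actually sends (the path is a placeholder; use your real include URL):

    // Log the Content-Type header the server sends for the jQuery file.
    // "/js/jquery.min.js" is a placeholder path, not your actual one.
    fetch("/js/jquery.min.js").then(function (response) {
        console.log(response.headers.get("Content-Type"));
    });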
In any event, rather than setting up a redirect, why don't you just use robots.txt? Add something like this (a Disallow rule only takes effect under a User-agent line, and note that /a is a prefix match, so it would also block paths like /about):

    User-agent: *
    Disallow: /a
You could also try fixing jQuery. Obfuscating the link a little would probably do the trick, e.g. change the offending line to:

    // Splitting "<a" and "href" across concatenated strings hides the link
    // from a naive scanner without changing the resulting markup.
    div.innerHTML = " <link/><table></table><"+"a hr"+"ef='/a'"
        +" style='color:red;float:left;opacity:.55;'>a</a><input type='checkbox'/>";
If Google is smart enough to actually parse string concatenations, which would shock me, you could go one further: assign something like "href" to a variable and then concatenate with that. I can't believe their JS scanner would go that far; that would basically be like trying to run it.
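A minimal sketch of that variant (the variable name is illustrative; inside jQuery's support test, div already exists):

    // Shown here only so the snippet runs standalone:
    var div = document.createElement("div");
    // Assemble "href" at runtime so the attribute name, and thus a
    // recognizable link, never appears literally in the source.
    var attr = "hr" + "ef";
    div.innerHTML = " <link/><table></table><a " + attr + "='/a'"
        + " style='color:red;float:left;opacity:.55;'>a</a><input type='checkbox'/>";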