On the Frustration Inherent in Debunking
There is an interesting - and frustrating - phenomenon related to debunking: the time it takes to make a false statement is nearly infinitesimal in comparison to the time it takes to debunk it.
Example 1: Science thinks humans have spoken language for about 10,000 years now.
That is a number I made up out of thin air. Now, debunk it. (Wikipedia doesn't count as a source.) This claim obviously belongs to linguistics, although it also somewhat overlaps with anthropology, evolutionary biology (and therefore paleontology), and perhaps some other sciences. At least there's a starting point! (For some of the claims I've come across so far, the starting point is, well, obvious, but in a way that does not help much. I will elaborate on that later.)
Then one has to locate a source, evaluate whether the source is credible, and so on. Making that claim up and writing it down took all of ten seconds. Debunking it took a bit of thinking; a survey of the online catalogs of the local libraries; checking whether any of the authors actually are authorities in their fields; finding books that are sufficiently recent and peer-reviewed; a trip to the particular libraries that happened to have such books available (in the worst case, also waiting a few weeks for a previous borrower to return them); and reading through sufficiently many chapters to find a clear statement about how long science does think humans have had language. A procedure that takes some time. In this case, the reason I would suspect the claim if I came across it is that I heard a quite different date in linguistics 101, and in some other literature on the topic - but I don't keep a log of every bit of knowledge I come across and of the sources I could cite if I wanted to repeat such a claim.
I ended up with Tecumseh Fitch's The Evolution of Language. Of course, Fitch's book does not in its table of contents give a heading that says right out that a particular chapter contains the age of human language (or at least a terminus a quo and a terminus ante quem).
Here I am already cheating a bit: had I not known that my claim was bullshit in the first place, I doubt one source would have sufficed to establish that science thinks differently - science is not the statement of one scholar, but, in a way, a weighted average of many scholars. Of course, there may be sources from earlier centuries, from religiously inclined scholars, from other fringe scholars, and so on, who make claims quite far off from these estimates, and some of these scholars may even have respectable credentials. Oftentimes, such credentials were awarded in times when science was less rigorous or had less evidence; some come from colleges that are ideologically driven and do not even try to keep up with real science.
One big hurdle here is the availability of books: not all libraries have books that answer each question that might appear, and not even a university's access to electronic libraries and journals might provide sufficient material to verify or debunk each and every claim.
I have, in fact, gone through this procedure with some claims that turned out to be accurate as well. An important fact appears here:
the fact-checker is fighting uphill. Sometimes the same effort is invested whether you are verifying a fact or debunking it. If the source under investigation provides several minor supporting facts that are accurate and some major facts that are fabrications, the fact-checker ends up in a frustrating situation, evaluating a lot of claims for naught, even though the main thesis he is debunking is indeed wrong.
Still, it is clear that whoever is willing to mislead others has a great advantage, and if their readers are willing to be duped, the advantage is even greater.
Tecumseh Fitch does provide a terminus ante quem:
At a fundamental cognitive level, we humans are one species, each population possessing equivalent intellectual and linguistic capacities. This key fact enables us to infer with certainty that human linguistic and cognitive capacities were already fixed in our species by the time the first wave of human pioneers exited Africa and made it to Australia - at least 50 KYA (for a review, see Mellars, 2006). This time point (which is still controversial) represents the last plausible moment at which human linguistic abilities like those of modern humans had evolved to fixation in our species. [1, p. 273]
A worse kind of claim is one where, even if a starting point is obvious, this still does not narrow the field down. Such are claims like the alleged similarity between some word in "the Mayan language" and Tamil, or unspecified similarities between "Mexican" and Hebrew [2, chapter 24] - both of which are important claims for Murdock's wilder theorizing. There are several Mayan languages; Katzner's The Languages of the World lists ten [3, p. 8]. Looking up the source provided by Murdock is a dead end - it only states the similarity as a fact, without giving any source in which to verify it. [4, p. 9]
Now, I do not have access to dictionaries of these languages, and if I found a distinct lack of that word in a dictionary, there could be several explanations: it is a word in another Mayan language, it is a dialectal word that would not be included in a dictionary, it is a derived word form that is not included in a list of roots, ...
There are literally a dozen ways to wriggle out of being caught with one's pants down here. It is an unfalsifiable claim: any attempt at falsifying it only leads to increasing amounts of work for the person investigating it with a skeptical mindset. Ultimately, I would have to learn the ten Mayan languages (and the eleven others that appear in some other lists), all the extinct stages, and go to each and every village to check whether the word actually exists there.
Obviously, science cannot work that way - its hands would be tied by those who would wish to mislead, having to spend all its time in a defensive posture against fabrication. This is why proper sources are required. A religious Hindu newspaper is not a proper source.
As for the claimed similarities between Nahuatl and Hebrew, the supposed kind of similarity is not even mentioned. People with an interest in linguistics know there are many dimensions along which languages can be similar: their grammars can behave in similar manners (as with Finnish and Turkish, Swedish and English, or ancient Greek and Russian), their vocabularies can be similar (as with the Romance languages and English, or the Germanic languages and English), their phonologies can be similar (Norwegian and Swedish, Estonian and Finnish, Russian and Polish, Polynesian languages and Japanese, Tamil and Martuthunira). Of course, the person making such a claim is probably unaware of all these ways in which languages can be similar (or different).
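As a toy illustration of why a bare, unspecified word resemblance proves very little, consider how often two completely unrelated vocabularies collide purely by chance. The sound inventory, word shape and vocabulary size below are invented assumptions of mine for the sake of the sketch, not data about any real language:

```python
import random

random.seed(1)

# Invented, typologically unremarkable inventory: 12 consonants, 5 vowels.
CONSONANTS = "ptkbdgmnslrw"
VOWELS = "aeiou"

def random_word():
    """A random two-syllable CVCV word."""
    return "".join(random.choice(CONSONANTS) + random.choice(VOWELS)
                   for _ in range(2))

def chance_matches(vocab_size=500, trials=50):
    """Average number of *identical* word forms shared by two
    independently generated, completely unrelated vocabularies."""
    total = 0
    for _ in range(trials):
        lang_a = {random_word() for _ in range(vocab_size)}
        lang_b = {random_word() for _ in range(vocab_size)}
        total += len(lang_a & lang_b)
    return total / trials

print(chance_matches())  # dozens of exact matches, by pure chance
```

And this demands exact identity; real-world amateur comparisons accept loose matches in both sound and meaning, which inflates the number of chance resemblances much further. That is why historical linguists demand systematic sound correspondences across many words, never isolated look-alikes.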
Of course, this puts the person who is knowledgeable about these matters, and who wants to do a really comprehensive debunking, in a bind: he will want to debunk every possible claim he can imagine here (or at least show that what is claimed to follow from such a similarity does not follow).
Ultimately, a line must be drawn somewhere - vague claims that are given no sources at all (or whose source does not back the claim up) cannot be dealt with. The author has been dishonest and misleading in not making a clearer claim, and the claim must be rejected until a more specific one is provided instead.
And this is a trick Acharya S repeatedly uses.
Acceptable errors?
Even in real, serious, scholarly literature, errors will creep in. No one can know all the fields that modern science and technology (or other scholarly fields) draw on. A favorite example of mine appears in some editions of Peterson and Davie's Computer Networks: A Systems Approach. It is a good book, well deserving of its position at many universities as the textbook for courses on networks. One minor error that I have noticed has sneaked in - and a really inconsequential one, at that.
First, there is the speed-of-light propagation delay. This delay occurs because nothing, including a bit on a wire, can travel faster than the speed of light. If you know the distance between two points, you can calculate the speed-of-light latency, although you have to be careful because light travels across different mediums at different speeds. [5, p. 42]
This text is by and large correct, the minor error being that information can in fact travel faster than light does when light travels slower than c, under exceptional circumstances. The text implies that the speed of light in a medium is the fastest anything can travel in that medium. This is incorrect: "nothing can surpass the speed of light in vacuum, in any medium" would be a more accurate wording. (This does not even get into the issue of phase velocity vs. group velocity.)
We know by now that light can be slowed down dramatically in certain media - in some experiments to mere meters per second. Simply sending information encoded in neutrinos through such a medium would violate the claim made there. However, no one will run into this in any situation where information is transmitted for practical purposes, and the other situations in which information actually travels faster than the speed of light in the medium are so exceptional as to make the claim in the text acceptable - the possible methods are so impracticable that they will probably never see any actual implementation. Counterexamples pretty much only serve as proofs of concept, and this is not even a contested issue in physics.
Ultimately, we conclude that this mistake isn't even a problem: no alternative physics or model of networks is being proposed on the basis of this claim, and in practice it will be correct as far as engineers of electronic communication are concerned for, well, probably the rest of history. However, no physicist will quote this book as evidence that modern physics rejects the notion of communication faster than the "local" speed of light in a medium - the book is not written by a physicist, not meant to teach physics, and not a source for statements about physics. If a student's physics paper referred to this volume in regard to physics, the advisor would do well to suggest another source. If the student referred to it in regard to some algorithm or solution used in electronic communication, and investigated some physical solution to a problem mentioned there, the student could very well cite this book even in a physics paper - but even then, the book is an authority only within its own field. (And even then, the student would do better to refer to a more up-to-date source on the particular problem.)
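The latency calculation the quoted passage describes can be sketched as follows. The 0.67 velocity factor for optical fiber and the 4,000 km link length are illustrative assumptions of mine, not figures from the book:

```python
# Speed-of-light propagation delay, as described in Peterson & Davie's quote.
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_delay(distance_m, velocity_factor=0.67):
    """One-way latency from distance alone, ignoring transmission,
    queuing and processing delays.

    velocity_factor: fraction of c at which the signal propagates in
    the medium (roughly 0.67 for typical optical fiber; near 1.0 for
    radio through air) - this is the "different mediums" caveat.
    """
    return distance_m / (C_VACUUM * velocity_factor)

# A hypothetical transcontinental fiber link of about 4,000 km:
delay = propagation_delay(4_000_000)
print(f"{delay * 1000:.1f} ms one way")  # about 20 ms
```

Note that the function correctly treats the medium's light speed as a lower bound on delay for that signal, without making any claim that nothing could ever outrun it - which is all an engineering text actually needs.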
Is the flipped coin fair?
In maths, there is an interesting problem: how many times do you have to flip a coin (or roll a die) before you know, to some level of certainty, that it has been tampered with to increase the likelihood of one side over the others? It turns out we can never know for sure (without actually measuring the weight distribution of the coin, or something similar), but we can know with some certainty. The more tests we perform, the closer we get to 100% certainty about whether it is fair - but we never get quite there.
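This reasoning can be made concrete with a two-sided binomial test - a standard statistical tool, though the particular numbers below are my own examples:

```python
from math import comb

def p_value_fair_coin(heads, flips):
    """Two-sided p-value: the probability that a *fair* coin produces a
    result at least as lopsided as the one observed."""
    expected = flips / 2
    # Sum the probabilities of every outcome at least as far from
    # flips/2 as the observed count of heads.
    return sum(
        comb(flips, k) * 0.5 ** flips
        for k in range(flips + 1)
        if abs(k - expected) >= abs(heads - expected)
    )

# 60 heads in 100 flips: suspicious, but not damning.
print(p_value_fair_coin(60, 100))    # about 0.057
# 600 heads in 1000 flips: almost certainly a tampered coin.
print(p_value_fair_coin(600, 1000))  # vanishingly small
```

Note that even the 60-out-of-100 result yields only suspicion, not certainty, and no finite number of flips drives the p-value of a genuinely fair coin to zero - exactly the "never quite there" situation described above.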
Knowing the maths, we can do the same for a die, or for other sources of random numbers. Now, it would be joyous if we could do the same for scholars. How many claims do we have to investigate before knowing whether a scholar is crooked or honest, credulous or scholarly? I have already showcased an irrelevant error - one where the error did not substantially alter the quality of the work it occurred in.
The scholarly community does not have a formal line an author has to cross to be rejected as a crackpot. If there were a good way of calculating the likelihood that someone is a crackpot based on the number of mistaken claims, it would be very useful. Refusing to go through peer review while maintaining that one's research (generally built on secondary and tertiary sources) is ground-breaking is an important indicator. Sometimes, brilliant scholars in one field publish crackpot material in another; the psychology behind that is difficult to understand.
However, to let the claims stand or fall on their own, I have decided to read her material and let it speak for itself, rather than cast allegations and aspersions as to her motives. So we get back to the tedious fact-checking. Obviously, some claims have greater gravity than others, and are more important for the conclusion than others - a sufficient sample of fabrications among them should be a good start. Is a sufficient sample of erroneous factoids given in support of the thesis also a good indicator?
Here the analogy to the tampered-with coin comes in: a reasonable sample of fabrications, errors and distortions should be sufficient to establish that a scholar is not credible - whether he or she is intentionally duping us, or is just lost in a maze of delusion or misunderstanding. Alas, I cannot point to one claim in particular as the straw that broke the camel's back - but anyone who sees the sample of downright bullshit I will quote and debunk here must conclude that she is not a credible scholar.
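The coin analogy can be pushed into a toy Bayesian calculation. The fabrication rates and the prior below are numbers I invented purely for illustration - nobody has measured "fabrications per claim" for honest or crooked scholars:

```python
# Toy Bayesian sketch: how quickly demonstrated fabrications shift our
# belief that an author is not credible. All rates here are invented.

def posterior_not_credible(prior, fabrications,
                           p_fab_given_crackpot=0.3,
                           p_fab_given_honest=0.02):
    """Posterior probability of 'not credible' after observing the given
    number of independent fabricated claims, via odds-form Bayes' rule."""
    odds = prior / (1 - prior)
    # Each fabrication multiplies the odds by the likelihood ratio.
    odds *= (p_fab_given_crackpot / p_fab_given_honest) ** fabrications
    return odds / (1 + odds)

# Starting from a charitable 10% prior:
for n in range(4):
    print(n, round(posterior_not_credible(0.1, n), 4))
# prints 0.1, 0.625, 0.9615, 0.9973
```

Even with a charitable prior and made-up rates, a handful of demonstrated fabrications pushes the posterior close to certainty - which is the intuition behind sampling claims rather than exhaustively checking every one.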
Minor further observation
Of course, the charlatan has another advantage: many charlatans of Acharya's kind sell something. There is a market for conspiracy theories; the market for debunking them is much, much smaller - among her readers, few will be interested in a volume whose raison d'être is to take her to task for shoddy scholarship. This, combined with the imbalance between charlatans and defenders of reason that I have already described, creates a situation where charlatans can thrive - debunking them simply does not pay off.
Just think: who would buy a book whose only purpose is to show another book (or some other books) wrong, without presenting any theory more groundbreaking than "the scholarly community, by and large, is right about things, and the things it is wrong about are either minor or will be corrected as academia finds and evaluates new evidence"?
Even further suspicion
I wonder when a cease-and-desist letter will hit me over this, as I do use a fair share of quotes (though these quotes can be justified the same way Acharya justifies her excessive quoting). If it happens, I will maintain that my quotes fall under fair use. If a cease-and-desist note never arrives, I will be quite happy, and I will respect Acharya at least a bit for that. If one does arrive, I hope its sender tells me how I can show that I am not misrepresenting the material I am debunking while also making it clear that I have actually read the books.
Cf. how a fan of Acharya's accuses a critic of not having read the book in the first place.
[1] Tecumseh Fitch, The Evolution of Language, 2010.
[2] Acharya S, The Christ Conspiracy: The Greatest Story Ever Sold, 1999.
[3] Kenneth Katzner, The Languages of the World, 2002.
[4] Hinduism Today, June 1995. Available at
http://www.scribd.com/doc/19007885/Hinduism-Today-June-1995
[5] Larry L. Peterson and Bruce S. Davie, Computer Networks: A Systems Approach. The error is present in both the 3rd and 4th editions.