skeptical about the mainstream 3 (non-historical ‘fringe’ linguistics 3)

August 21, 2012

Hi again, everybody!

I turn here to some specific examples of the development and establishment of ‘paradigms’ in mainstream linguistics and of their core notions, and of skeptical reactions to these.

Some (quasi-)postmodernist feminist sociolinguists (one is Penelope Gardner-Chloros) have appeared to dismiss analyses developed by male sociolinguists as artefacts of the authors’ backgrounds, while themselves advancing alternative (feminist) analyses which are equally assumption-laden and no better supported by the evidence. When I made this point (in moderate language) in a seminar discussion, I had a frosty response from some feminist colleagues – despite identifying as a feminist myself. (I grant, of course, that such reactions are by no means universal. A further issue here involves the fact that the very notion of feminism has various, sometimes opposed interpretations.)

In another vein: the anti-prescriptivist approach to sociolinguistic and dialectological variation, best exemplified by the pioneering work of William Labov, arose as a professional reaction to very widespread prescriptivist folk-linguistic attitudes (‘you shouldn’t use ain’t, it’s bad English’, etc.). More recently, it has in turn been challenged by writers such as John Honey, a historian with some knowledge of linguistics. Honey has argued that the case (at least in social terms) for a considerable degree of prescriptivism (especially regarding accents) remains strong, and that the mainstream academic sociolinguistic program which involves the wholesale modification of attitudes to accent and usage differences (even though it appears to fit in well with current egalitarian notions on a broader front) is in fact unrealistic. Although Honey clearly overstates his case in places, some of his points appear at least arguable, notably where he suggests that Labov exaggerates the coherence of some texts delivered in non-standard usage and the contrasting lack of coherence in some passages couched in more standard language.

In addition, most currently fashionable mainstream theories involving the structural analysis of language data fail at many points, making numerous predictions which are not borne out by the data, or else avoiding this only at the cost of a degree of non-specificity or abstraction which precludes empirical testing (empirical emptiness). For instance, some syntacticians committed to a basic NP+VP (Noun Phrase + Verb Phrase) analysis of sentence structure appear to assume that their theory (often left undefended) is so secure as to ‘trump’ any disconfirming data. They therefore have to deal with languages such as Welsh, where the Subject NP normally separates the Verb from the Object NP (Verb-Subject-Object word order, as in gwelodd y dyn y ddraig = ‘the man saw the dragon’), by adopting contrived and sometimes empirically indemonstrable analyses of such sentences involving covert underlying/abstract NP+VP ordering; or at least they struggle to analyse these structures.

Some ‘nativist’ general linguistic theories, notably those associated with Chomskyan linguistics, involve, very centrally, the theory of linguistic universals and Universal Grammar (UG). These notions refer to alleged deep/abstract cross-linguistic universal features, especially in grammar but also in phonology and other aspects of language, which supposedly arise from the genetically-inherited, species-specific and very largely species-uniform mental faculty which, as Chomskyans hold, humans possess.

In opposition, some linguists, notably Geoffrey Sampson, have argued that the linguistic evidence actually supports the contrary view that we acquire language through our general intelligence, that UG does not exist, and that general psychological considerations are relevant here rather than specifically linguistic ones (as mentioned earlier). On this account, such universal features of human language as do exist are generated either by physiological constraints (these may include ‘double articulation’, the construction of meaningful morphemes out of individually meaningless phonemes) or by general psychological constraints. For example, Sampson interprets the data involving the British ‘KE’ family (many of whom struggled with language all their lives) in a very different way from Steven Pinker and other Chomskyans, regarding the relevant mutation in the FOXP2 gene as generating below-average general intelligence and thus causing difficulties with language but with much else besides; he would deny that the members of the KE family are of normal intelligence in other respects.

Indeed, various linguists, relying especially upon typological data, have argued that the apparent diversity of languages reflects deep dissimilarities, and that UG does not exist. For instance, Nicholas Evans and Stephen Levinson, citing various other researchers and accessibly summarised by Christine Kenneally, present wide-ranging data arguing for this view, including phonological evidence suggesting that even the supposedly basic Consonant-Vowel core syllable structure is not universal. If these linguists are correct, only very general ‘design features’ such as double articulation distinguish human language from pre-human communication systems (if indeed these features are in fact altogether absent in the latter). Among the many other prominent linguists who have argued extensively against Chomskyan views of these issues are Roy Harris, Peter Matthews, Ian Robinson, and the group of linguists who produced the Anti-Chomsky Reader.

Some Chomskyans are apparently offended by these criticisms, as if their views were analogous to religious doctrines rather than representing scientific findings which (like any such findings) might possibly prove to be mistaken. For instance, Sampson draws attention to the fact that the prominent Chomskyan linguist Neil Smith commented on his own views in terms of distaste. Such a response is indicative of a stance which can hardly be deemed scientific or even rational. Indeed, Chomsky’s early work is sometimes treated almost as an incorrigible revelation of truth.

I am not suggesting here that anti-Chomskyan linguists are effectively blocked from furthering their careers by some kind of Chomskyan ‘cabal’. Indeed, in some communities of academic linguists, notably in the UK and Australasia, non-Chomskyan viewpoints actually predominate. The best example is probably ‘systemic’ linguistics, which is especially associated with M.A.K. Halliday. (It should also be noted that Chomsky’s own early ideas, while clearly derived in part from notions which were then current in the mainstream, were initially perceived as highly radical and encountered severe criticism.) The point is rather that linguists espousing views very different from their own are often, as it seems, treated by Chomskyan linguists as less than worthy ‘opponents’. (At one time, the one course in non-Chomskyan linguistics offered in Chomsky’s own department was labelled ‘The Bad Guys’ by students and staff.)

Neither am I arguing here that the Chomskyan approach is altogether mistaken; it might indeed prove in the end to be largely correct. In addition, the views of Chomsky and his followers do display some variety; and, as one would expect in a scientific enterprise, they have also changed considerably over the years. The issue is rather that of the attitudes of some practitioners of Chomskyan linguistics.

More next time!

Mark


skeptical about the mainstream 2 (non-historical ‘fringe’ linguistics 2)

August 14, 2012

Hi again, everybody! Thanks for the interest in skepticism about mainstream linguistics. I’ll continue!

In addition to sheer conservatism and the ascribing of undue status to works produced by the famous, there are also other factors which may make it more or less difficult to publish. At any given time, some viewpoints – currently, for instance, ‘multiculturalism’ and some aspects of postmodernism – are very ‘trendy’ and indeed ‘politically correct’; papers espousing the relevant views are liable to be favourably regarded. Indeed, in public presentations (at conferences and such) where one can be identified it often requires courage to speak in criticism of such an offering. This trend also means that papers endorsing views contrary to those in political or cultural favour may struggle to achieve publication, even if they (and their authors) are otherwise sound; or, if they do achieve it, they may then be subjected to withering and arguably biased criticism. None of this implies that intellectual ‘rebels’ within the mainstream are necessarily correct in opposing majority mainstream viewpoints; the issue is that of critics obtaining a fair hearing, especially when they are well qualified on the matters at hand. Where this becomes difficult, the need for skepticism about the academic mainstream will obviously increase.

In fact, some skeptics have actually become known mainly for critiquing mainstream – if often contested – positions rather than non-mainstream ideas. Negative comments of this kind can sometimes be partisan and overstated, but in other cases they can be legitimate or arguably so.

Skeptical comments on mainstream linguistics, specifically, can be directed at a range of arguably unwarranted mainstream assumptions/ideas. These include: Chomskyan ‘nativism’; bizarre analyses of data adopted under the influence of unproven and often unlikely theories which are apparently regarded by some linguists as virtually immune to criticism; undefended and inadequately/inappropriately grounded analyses of basic grammatical structures; support for dubious but ‘trendy’ or ‘politically correct’ sociolinguistic theories (sometimes under postmodernist influence); exaggerated postmodernist ideas more generally; etc. Some specific examples will follow in later posts.

As noted, some of the skeptical criticism which the linguistic mainstream receives is produced by linguists themselves, as illustrated last time by the case of Göran Hammarström. In addition to Hammarström, various other ‘insiders’, thoughtful linguists who have been more able than most to remain independent of the various ‘paradigms’, have written of these matters in an essentially skeptical way (while not necessarily identifying as skeptics). Perhaps the most prominent of these linguists is Geoffrey Sampson, who has antagonised some other prominent linguists by arguing very persuasively that their pet theories are empirically empty or obviously contradicted by inconvenient data (see later on this issue). Sampson, in fact, goes some way along the road taken more indiscriminately by Amorey Gethin and others (again, see later), suggesting that many of the unexplained facts (cross-linguistic and language-specific) and many of the theoretical issues debated by linguists may find their solutions in other domains such as psychology, and that – while there is a clear role for linguistic description and the necessary generalisations – a truly valid general linguistic theory would thus be minimal in scope.

Some of the linguists who critique the linguistic mainstream are skeptical linguists turning their skepticism on their own mainstream (as they are often urged to do by the non-mainstream thinkers whom they criticise). Obviously, I myself identify as a member of this group. I would argue, in fact, that mainstream linguistics is perhaps more in need of skeptical attention than some other mainstream disciplines. One reason for this is the relative lack of consensus or orthodoxy in linguistics, and how this is handled. Obviously, on many major issues involving language almost all linguists do in fact agree with each other, at least in general terms. However, one does not have to penetrate far into linguistics to find disagreement on basic points. There are many competing ‘schools’, ‘paradigms’ and ‘frameworks’ within many of the branches of linguistics, differing from each other on such fundamental and basic issues as, for instance, the ‘true’ or most insightful grammatical analysis of sentences as straightforward as ‘Mark drank his beer’ in a language as well-described as English (the largest issue is that of whether this sentence divides into two constituents or three). Of course, all fields display some differences of this kind, despite displaying substantial cores of shared ideas. In the case of linguistics, however, the degree of disagreement is so great that the need for skeptical attention would appear greater than in some other disciplines.

Professional linguists have not been conspicuously effective in dealing with this problem. Some, especially those influenced by postmodernism, seem to adopt a quasi-relativist view on which the issue is (perhaps) acknowledged but is not presented as truly problematic, even where the different ‘frameworks’ appear to be offering incompatible analyses of the very same aspects of the matters in question. One can make any set of ‘assumptions’ which is not self-confounding or refuted by obvious facts, and can then extrapolate massively from these ‘assumptions’, with little fear that anyone will actually attempt to disprove them. Limited interest is shown in the question of how far the ‘assumptions’ and ‘paradigms’ upheld by a given group of linguists might actually prove demonstrably preferable to alternative ideas. A further problem here lies in the fact that different ‘schools’ do not by any means always agree even on what is valid and relevant evidence in such cases, or at any rate upon the relative importance of different types of evidence (for instance, some linguists regard typological surveys across many languages as crucially important in resolving issues of analysis and theory, while others prefer to rely mainly upon close, abstract analyses of one language or a few languages).

One reason for this situation lies in the relative intractability of linguistic data. Linguistics is an essentially empirical subject; but, in the more abstract or speculative areas of such a domain, it is not always easy to adduce decisive reasons or evidence for preferring one account or analysis to another. However, it is surely preferable to seek to address this kind of issue with whatever decisive evidence may be found, rather than to forge ahead at great length with any one ‘paradigm’ in circumstances where there can be little confidence that it really is the ‘best’ available.

The training of academic linguists and the nature of many linguistics departments contribute (often inadvertently) to these problems. Some departments have a strong bias towards one ‘paradigm’ or another. Many of these ‘paradigms’ have now developed in such depth and detail that students must spend several years familiarising themselves with one ‘paradigm’ before their grasp of the material is at such a level that they can make fresh contributions at the ‘cutting edge’. Differences within the ‘paradigm’ are discussed, but its basics are often left unchallenged. Furthermore, many of the central concepts and issues within each ‘paradigm’ are intelligible only within that ‘paradigm’.

More next time, including some specific examples!

Mark


skeptical about the mainstream 1 (non-historical ‘fringe’ linguistics 1)

August 7, 2012

Hi again, everybody! Thanks a lot for your ‘votes’! Seven people responded and I’ll try to deal with all proposed topic areas in due course. One topic area – skepticism about mainstream linguistics – obtained two votes, and so I’ll start there.

This is a topic which obviously has considerable potential relevance for skepticism about mainstream scholarship more generally, perhaps especially in the humanities.

Obviously, skepticism in any given intellectual discipline is typically directed at ideas towards the outer edges of that discipline. It generally focuses upon positions within the discipline (or dealing with its subject-matter) which are not merely controversial but so controversial or ‘strange’ that they can reasonably be called ‘non-mainstream’, ‘fringe’ or ‘non-standard’. Even in cases where the qualified thinkers are themselves seriously divided (so that there is no orthodoxy or consensus – although some positions may still be more controversial than others), comments of an overtly and specifically skeptical nature are relatively rare in the mainstream literature itself.

The explanation for the neglect of the mainstream by skeptics may seem obvious enough. The skeptical enterprise involves subjecting the claims of non-mainstream thinkers and practitioners – who are typically not themselves academics or professional researchers – to tests of the kind which are routinely undergone by the claims of mainstream scholars. The latter receive intensive and prolonged training and examination in the basics of their disciplines; their preliminary drafts and initial pilot studies are discussed and criticised by their colleagues and others; their ‘finished’ books and papers are exposed by house and journal editors to anonymous (‘double-blind’) peer-review and often rejected or returned for re-writing, and – if and when published – are assailed in a barrage of further criticism; their experiments are replicated again and again in a determined effort to find sources of error or alternative explanations. In contrast, a non-mainstream publication is typically a book written at a fairly popular level and published by the author or by a press with few academic pretensions, or an article in an ‘anomalist’ journal or on a web-site used largely by those who share the author’s basic non-mainstream position. There is sometimes a review process, but the authors, editors and reviewers – who often form a close-knit group, very much on the edges of the relevant scholarly worlds – agree in upholding the basic ideas which divide them from the mainstream; reviewers will generally attack only points of detail. In this context, skeptical scholars provide (albeit only after publication) the processes of testing and review which non-mainstream publications would otherwise lack. Naturally, their conclusions and assessments are usually negative.

Many scholars confronted by skeptics trained in their own field take the view that skeptical work of this kind is simply unnecessary in the context of mainstream thought. They believe that the safeguards outlined above really do work well enough to obviate the need for specifically skeptical examination. Skeptical linguists, for example, are sometimes asked what difference there is between skeptical linguistics, as applied to the mainstream, and just plain linguistics, conducted within the usual academic constraints. This view is understandable, and obviously it is not entirely inaccurate; but the amount of doubtful material which achieves serious publication might suggest that additional vigilance is indeed needed. Some non-mainstream authors actually suggest that skeptics should direct their attention at the mainstreams of their own disciplines as well as or even instead of at non-mainstream material; and, while one might not wish to take such an extreme view of the matter (almost diametrically opposed to that of some mainstream scholars as reported above), it is more than arguable that some mainstream ideas do warrant more skeptical attention than they tend to receive.

For instance, the degree of conservative bias which inevitably affects publication and acceptance of novel ideas probably does mean that some of the more obviously mainstream works which are published may indeed owe too much of their success to their mainstream status (although non-mainstream writers certainly exaggerate the degree to which such things occur).

One very interesting study along these lines was produced by the adventurous mainstream linguist Göran Hammarström, in an unfortunately little-read 1971 article which illustrates how the published views of very eminent linguists may appear ludicrous when looked at in a different (maybe more realistic or more ‘common-sense’) way and without undue respect for their reputations. Hammarström summarises four works of linguistics which may all appear nonsensical to the uncommitted intelligent reader. Three of these are journal articles; the fourth is a mainstream book (The Sound Pattern of English; New York, 1968). The three short pieces are very obviously non-mainstream, not to say bizarre, in nature, and the thinking involved is lacking in self-criticism. (Details on request!) Hammarström suggests that they were accepted in error by the editors of the journals in question; in the first two cases the editor was a dialectologist rather than a theoretical linguist, while in the third case the journal had a language-specific, not overtly linguistic focus.

On the other hand, the book in question was written by the very prominent linguists Noam Chomsky and Morris Halle; it was taken very seriously by the linguistic world as a whole when it appeared, and, while now dated in many respects, is still regarded as a ‘classic’. Hammarström argues that in fact the thinking set out in this work is little if any less ludicrous than that rehearsed in the three shorter pieces discussed earlier. Much of the theory developed in the book is highly abstract, counter-intuitive and seriously under-demonstrated. For example, abstract ‘underlying representations’ for the spoken forms of English words are established, often appreciably closer to current spellings than to orthodox phonemic representations. (This is associated with the Chomskyan claim that more abstract, non-phonemic spelling is psychologically preferable.) In associated works, theoretical findings based on these ideas are applied to other languages and indeed are treated (as is normal in this tradition) as of universal application if valid. However, Hammarström argues (quite cogently, and later supported by other linguists) that these and other such Chomskyan analyses are inadequately justified (both for English and cross-linguistically), and at times they appear simply bizarre. They have, it seems, been highly respected in large part because of the prestige and perceived authority of authors such as Chomsky and Halle – far greater than those of the authors of the three shorter pieces, who certainly had far inferior training in the discipline.

More on this general theme next time!

Mark


where to go next?

August 1, 2012

Hi all!

I’ll be happy to resume my blog, but I wonder WHICH sets of non-historical topics might be of most interest! Votes, etc? (Queries first if need be, of course!) Thanks! Mark

Language (itself sometimes mysterious) from mysterious sources (alien, channelled, etc)
Reversals and other alleged mysterious aspects of ‘normal’ language
Allegedly mysterious scripts, texts, etc. (non-historical issues)
Alleged animal ‘languages’ and language-learning abilities
Non-mainstream theories of language and the mind, and non-mainstream general theories of language
Language reform and language invention (as proposed by non-linguists)
Skepticism (by linguists and others) about mainstream linguistics


around the world in ‘mysterious’ scripts & texts (7) (‘fringe’ historical linguistics 18)

June 25, 2012

Hi again, everybody! Thanks for comments as ever! I turn now to the final set of cases of this kind. Some of these involve East Asia.

Dubious claims have been made regarding artefacts and written texts from a sunken civilization off the coast of Taiwan associated with the aboriginal Ketagalan group. Also in the Chinese world, Nu Shu (or Nü Shu) is a script and supposedly a language confined to women in one specific area within China. Unlike the standard logographic Chinese script, Nu Shu is syllabic (and hence phonological); each of its characters represents a syllable in the local ‘dialect’. Its inventory, however, is considerably too small to represent all the syllables (including tonal distinctions); digraphs are used for the remainder. Zhou Shuoyi, reportedly the only male to have mastered the language, compiled a dictionary listing 1,800 characters, many of which are variant forms of Chinese characters. The origin of Nu Shu is unknown, but it has been suggested that it may date back as far as the third century CE.

Bruria Bergman claims in connection with her theory that a Japanese temple chant is in distorted Hebrew (mentioned earlier) that in 1935 one Kiyomaro Takeuchi discovered an actual document in the area in question (Herai) which dates from around 100 CE and is written in the kana syllabary (several hundred years before kana are known to have been used); this text allegedly shows that Jesus is buried in Herai, and contains his will. However, the document is probably a nineteenth- or twentieth-century forgery.

Some cases of this general type are not closely associated with a particular region, because they involve portable manuscripts rather than inscriptions and are not linked with any identifiable language. The best known of these is the Voynich Manuscript, a genuinely mysterious medieval book-length work in an unfamiliar script, including illustrations; the topic may be botanical. Many decipherments have been advanced (some of them themselves book-length). The case arguably involves cryptography rather than linguistics, but either way the issue is by no means settled. Another such case involves the Rohonc Codex, which is of unknown date and may well be a hoax; there have been various attempts at translations (into Hungarian, an unidentified form of early Romance, Hindi, etc.), mostly transparently non-mainstream in character.

A few non-mainstream theories involve written numerals. One such proposal, by Jason King, deals with the origins of the shapes of the ‘Arabic’ (apparently ultimately Indian) characters used to represent numbers (integers). Some such number-symbols, notably ‘Arabic’ 1, appear motivated: the symbol 1 is a single stroke. Most of the ‘Arabic’ symbols, however, appear arbitrary: for example, the character 9 does not obviously express the meaning ‘nine’. However, King holds that the ‘Arabic’ numerals 1-9 and also the zero sign (0) are not in fact arbitrary. The basic claim is that each symbol was invented so as to have angles corresponding in number with the meaning of the symbol. Thus, 0 has no angles, 1 (written as now usually printed) has one, 2 (written here as Z) has two, etc. King has to make various dubious assumptions in arriving at this view. For instance, he assumes that 1 was originally written as now printed; but in older versions it is typically a single vertical stroke with no angles. King does not offer any actual evidence that his forms are original ones; and he claims that they were invented by the Phoenicians rather than in India (although the usual Phoenician number-symbols were not in fact similar to the ‘Arabic’ symbols). In sum, it does not appear that King is correct here. The best that can be said is that he has drawn attention to a somewhat neglected matter.

I have now completed this summary survey of non-mainstream historical-linguistic and epigraphic claims. On request I will comment on claims regarding any particular language not so far discussed (especially linguistic rather than epigraphic claims). Apart from this, I now propose to look at non-historical aspects of ‘fringe’ linguistics. I may take a short break from blogging before embarking upon this. But thanks again for your support, and see you soon!

Mark


around the world in ‘mysterious’ scripts & texts (6) (‘fringe’ historical linguistics 17)

June 18, 2012

Hi again, everybody! Thanks for comments as ever! I turn now to issues of this kind involving Pacific territories.

The mainstream view of Pacific linguistic history is that the Polynesian languages (as they spread eastwards from East Asia across the ocean) and the other Pacific languages were unwritten until the beginning of European colonization. The only exception is the now small corpus written in the Rongorongo script of outlying Easter Island (Rapa Nui). Rongorongo lacks an accepted decipherment but is generally presumed (in the absence of other candidate languages) to encode an earlier stage of Rapa Nui, the contemporary Polynesian language of the island (settled around 400 CE); it is possible that it represents an independent invention of writing.

Hundreds of tablets written in Rongorongo existed as late as 1864, but most were lost or destroyed in that period and only twenty-six remain today; almost all of these are inscribed on wood. Each text contains from two to over two thousand simple glyphs (some texts feature what appear to be compound glyphs). The longest surviving text is that on the ‘Santiago Staff’: around 2,500 glyphs, depending upon how the characters are divided. The glyph-types are a mixture of geometric figures and standardized representations of living organisms; each glyph is around one centimetre in height. Thomas Barthel provides a standard list.

Only Tablet Q has been carbon-dated, but the results limit the date only to after 1680 (in any event, some carbon-dates for Rongorongo are demonstrably inaccurate). Texts A, P, and V can be dated to the eighteenth or nineteenth centuries by virtue of being inscribed on European oars.

Some ‘decipherers’ themselves regard Rongorongo as local in origin. Sergei Rjabchikov (unusually ‘mainstream’ in this case) interprets the texts as in an early form of Rapa Nui. Barry Fell (see earlier) ‘deciphers’ the script with the aid of cave ‘inscriptions’ and other texts from New Zealand (see below); he treats the language as an artificial (priestly) Polynesian language closely related to Maori.

On the other hand, various non-mainstream writers have linked Rongorongo with scripts and languages from remote areas. A common choice is Indus Valley Script, itself currently undeciphered (see earlier); some Rongorongo characters superficially resemble those of IVS.
Stephen Fischer (one of the ‘decipherers’ of the Phaistos Disk) has argued that Rongorongo is in fact a modern invention and is logographic and ‘semasiographic’ in character (and thus, in part, not strictly linguistic). He reads the text on the Santiago Staff as a series of creation chants. Konstantin Pozdniakov notes that the Staff shares short phrases with a very few other texts but nothing with the rest of the Rongorongo corpus; and Jacques Guy argues that Fischer’s reading is untenable (and that if it were correct the text on the Staff would consist almost entirely of personal names). Paul Bahn and John Flenley support the Fischer ‘decipherment’, but without displaying linguistic expertise.

The prevailing mainstream opinion is that Rongorongo is not true writing but ‘proto-writing’, or even a limited system of mnemonics. This view was foreshadowed by some earlier writers, notably Katherine Routledge, who interpreted Rongorongo as an idiosyncratic mnemonic system in which the meanings of the glyphs varied from scribe to scribe.

Another regional focus of non-mainstream theorizing involving scripts in the Pacific proper is New Zealand, which was settled from Eastern Polynesia around 1000 CE. The mainstream position is that here too the languages (Moriori and Maori) were unwritten until the colonial period. However, some non-mainstream authors offer hyper-diffusionist theories (similar to those applied to the Americas) involving unrecognized early visits to New Zealand on the part of voyagers from Asia, Europe, Africa etc. – some involving unrecognized early contact with the New Zealand Polynesians, who are themselves sometimes held to have settled the islands earlier than the given date (see for instance the works of Barry Brailsford).

Barry Fell claimed to have identified Libyan and Numidian script in New Zealand, and also found Polynesian elements on the Phaistos Disk. Ross Wiseman and others believe that they have found Egyptian and Phoenician inscriptions around New Zealand, confirming their hyper-diffusionist views of history. However, some of these are natural markings on rocks, which they are over-interpreting; others are indeed written language but contain errors and are surely fakes. With some other amateurs, Martin Doutré argues for an alternative hyper-diffusionist view of early New Zealand history involving early voyages by ‘Celts’ and members of other Eurasian groups. Doutré’s linguistics is of the usual non-mainstream type. Like Wiseman, he identifies ancient inscriptions in Eurasian languages in New Zealand and endorses the ideas of the ‘Viewzone’ group (who also link the Panaramitee Aboriginal rock-art tradition of Australia with their claims regarding a common world script in very early times).

I turn now to Australia, on the fringe of the Pacific. Many non-mainstream authors have offered and continue to offer hyper-diffusionist theories involving unrecognized early visits to Australia by long-distance voyagers. Some of these theories involve the supposed presence in Australia of inscriptions in Egyptian or Phoenician script, found on rock faces or associated with ruins (typically, in fact, of nineteenth-century origin) and ruin-like rock formations. (For cultural reasons, there are far fewer genuine pre-colonial ‘indigenous’ buildings in Australia than in New Zealand.) Some of these alleged inscriptions again contain errors and are surely fakes; others are over-interpreted natural markings.

One author who has proclaimed the presence in Australia of Egyptian hieroglyphic texts is Paul White, who endorses as genuine a set of rock carvings found in the National Park forest in the Hunter Valley, New South Wales. White (claiming support from an Egyptologist) argues that the inscriptions feature early forms of hieroglyphs which ‘correlate’ with archaic Phoenician and Sumerian sources, but this view of early Egyptian script is simply mistaken, and the text in question is now acknowledged as a fake.

Val Osborn claims to have found a Phoenician port in Sarina, Queensland, and other authors report Phoenician or Egyptian inscriptions from that state and from New South Wales, notably the prominent ‘anomalist’ Rex Gilroy. Gilroy and Brett Green have identified ‘texts’ linked with the ‘Gympie Pyramid’ in Queensland (which probably represents ruined nineteenth-century vineyard terracing) as Egyptian or Indian in origin.

More next time on a few additional cases (some of them involving East Asia).

Mark


around the world in ‘mysterious’ scripts & texts (5) (‘fringe’ historical linguistics 16)

June 12, 2012

Hi again, everybody! I turn here to claims regarding Indian scripts.

Many of these claims involve the interpretation of the Indus Valley Script (IVS). IVS has been found on tablets in the ruins of Mohenjodaro and Harappa and dated around 4,500-4,000 years BP. The Indus Valley Civilization, if IVS is genuinely a script (see below), is one of the oldest literate civilizations known, and the issues extend well beyond linguistics.

IVS is the subject of a vast scholarly literature but has no accepted decipherment or interpretation. The two most plausible candidates for the unidentified language represented are Indo-European (probably early Sanskrit/pre-Sanskrit) and Dravidian, the main language ‘family’ of Southern India; the best known language in this ‘family’ is Tamil. On the ‘Dravidian IVS’ theory, the later arrival in India of the IE-speakers might have contributed to the fall of the Indus Valley Civilization, or might alternatively have post-dated it altogether. The old mainstream notion of an ‘Aryan Invasion’ of India by users of IE around 3,500 years BP has long been modified; but if IE arose much further west, as is still accepted in the mainstream, the language ‘family’ must have entered India at some date.

Many of those who believe that IVS represents Dravidian invoke Brahui, the isolated Dravidian language of the Indus region, which they interpret as a survivor of early Dravidian domination in the region (but there are other, mainstream accounts of the situation of Brahui suggesting that the language was transplanted to the region at a much later date).

If IVS instead represents very early Sanskrit or the like, IE was in India much earlier than orthodox scholarship maintains, too early to permit any Aryan incursion in the second millennium BCE. The arrival of IE in India might, indeed, have been the event which triggered the development of the Indus Valley Civilization. Edwin Bryant has proposed a moderate version of the view that IE entered India at an early date, but (as I noted earlier) there are also stronger, clearly non-mainstream views, proposed by K. D. Sethna and others, according to which IE actually had its origins in India. An authoritative and generally accepted decipherment of IVS would be a very important factor in the solution to this historical problem.

There have been over 100 ‘decipherments’ of IVS, many by non-mainstream writers and those with political, cultural and linguistic biases. Predictably, most ‘decipherers’ favour either IE or Dravidian (or languages which may be related to Dravidian, such as Elamite), depending upon their own linguistic background or interests. IE interpretations of IVS include those of Barry Fell (see earlier), who believed that he had deciphered the script as representing early Sanskrit/pre-Sanskrit, George Feuerstein and his associates, David Frawley, Daniel Salas, Rama Sarker, etc. Dravidian interpretations include those of Tariq Rahman and Anand Sharan, who believes that IVS is still in use in Bihar State, India (not close to the IVS sites). Sharan therefore accepts a version of the ‘Aryan Invasion’, but (as a ‘Dravidian supporter’) he also denies that the Dravidian-speakers were culturally and technologically inferior to these invaders. His account of how in that case Dravidian came to be ‘pushed’ south is not entirely convincing. Of course, it is not agreed by mainstream Indologists that IVS is indeed still in use, in Bihar or anywhere else.

Clyde Winters and other Afrocentrists ‘decipher’ the script as Dravidian; they go on to link Dravidian generally, Sumerian and even Chinese with African languages held to have been widely diffused by an early African diaspora. Ivan van Sertima and his associates present a range of other Afrocentrist views. Most of the material in this work is non-linguistic in character, involving artefacts, ‘racial’ characteristics and the like; but Walter A. Fairservis claimed that the language represented by the Indus Valley Script was Dravidian – a position which sits oddly with his editor’s claim that the IVS-users were black Africans rather than Dravidians akin to dark-skinned contemporary Southern Indians (a claim endorsed by Wayne B. Chandler, who believes that the Dravidians later ‘inherited’ what was originally an African civilization).

Indologists Steve Farmer, Richard Sproat and Michael Witzel have proposed that IVS is in fact a non-linguistic symbolic system (see above) which was used by an elite in a multilingual situation and does not encode any particular language. They support this view with many arguments, including the total absence of long texts in IVS (the longest known text has only seventeen characters, and very few have more than ten); this would make IVS unique as a true script, if it were a script. Richard Sproat has also commented critically on some academic approaches to these issues which, in the view of these three authors, have not led towards reliable solutions. Michael Witzel offers extended critiques of non-mainstream proposals in this area. William Bright likewise concludes that none of the ‘decipherments’ offered to date can be substantiated and that the methods adopted are often dubious.
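The text-length argument can be made concrete with a small sketch: given a corpus of inscriptions represented as sequences of signs, one can summarize the length profile and compare it with what corpora in known true scripts display. The corpus below is invented for illustration; the only real figures here are those cited above (a maximum of seventeen signs, very few texts over ten).

```python
def length_profile(texts, threshold=10):
    """Summarize inscription lengths: maximum, mean, and the share of
    texts longer than a given threshold (measured in sign-tokens)."""
    lengths = [len(t) for t in texts]
    longer = sum(1 for n in lengths if n > threshold)
    return {
        "n_texts": len(lengths),
        "max_len": max(lengths),
        "mean_len": sum(lengths) / len(lengths),
        "share_over_threshold": longer / len(lengths),
    }

# Invented corpus: most 'texts' are a handful of signs, none over 17.
corpus = [[0] * n for n in (3, 4, 4, 5, 5, 5, 6, 6, 8, 12, 17)]
print(length_profile(corpus))
```

A corpus of a genuine script would normally show at least some texts running to dozens or hundreds of signs, which is the anomaly these authors stress.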

There are also claims regarding mysterious artefacts, some of them bearing markings interpreted by some as short inscriptions in an otherwise unknown script, found submerged in Indian waters off Cambay.

There is a body of markedly non-mainstream work regarding an ancient civilization and language known as Naacal, allegedly carried to Mesopotamia, Egypt, India etc. in very remote ages by Mayan adepts. The first recorded use of the term is by the maverick archaeologist Augustus le Plongeon. Le Plongeon believed in a late prehistoric world civilization centred on a Pacific continent known as ‘Mu’ or ‘Lemuria’ (later submerged, giving rise to pre-Polynesian cultures in places such as New Zealand) and massive early diffusion more generally. His ideas were linked with those of H.P. Blavatsky and were developed further by James Churchward, Wishar Cervé and others. Churchward claimed to have learned from a priest in India to read the Naacal language, written on ancient tablets which are said to represent fragments of a larger text. He also claimed to have verified the material from the records of other ancient peoples, although his references to ancient sources are typically ludicrously vague. (Le Plongeon also asserted that Jesus spoke Mayan on the Cross, and Churchward further claimed that the Greek alphabet, as normally recited, is really a poem in Mayan.)

More next time, heading still further east!

Mark


around the world in ‘mysterious’ scripts & texts (4) (‘fringe’ historical linguistics 15)

June 5, 2012

Hi again, everybody!  More on European scripts and ‘scripts’!

The feminist archaeologist Marija Gimbutas (who made major contributions to the study of the cultures regarded as the early speakers of Indo-European), and her followers such as Richard Rudgley, identify an ‘Old European Script’ in the Vinča symbols (Balkans), which they associate with a ‘lost’ Stone Age civilization, possibly a matriarchy.  In fact, it is not even clear that these markings really represent a script as such; and the discussions of ‘meta-language’, ‘alphabets of the metaphysical’, ‘feminine’ versus ‘masculine’ scripts, etc. appear obscure and tendentious.  Much of Rudgley’s specific ‘evidence’ is linguistic (or at least involves what are claimed to be early manifestations of written language), but this is discussed only within the framework of these highly controversial ideas.  Rudgley devotes much space to his interpretation of the rather scanty and equivocal evidence surrounding a) the nature of ‘pre-writing’ (often apparently overinterpreted; he refers to controversial writers such as Alexander Marshack) and the origins of written language and b) linguistic pre-history and the ‘deep-time’ relationships between language families.  He cites Gimbutas, Harald Haarmann and others on the supposed parallelisms between the various syllabic scripts of the Mediterranean and ‘Old European Script’.  Rudgley also engages in loose philology of the usual type.

More markedly non-mainstream analyses of the Vinča symbols include Toby Griffen’s claim to have deciphered three of the symbols as logographs, and the theory of a historical link with Etruscan script (see above) proposed by Radivoje Pešić.  Vasil Ilyov argues (tendentiously and implausibly) that carved symbols found in the territory which now constitutes (Slavic) Macedonia represent a pre-historic Macedonian ‘phonetic alphabet’ which is to be regarded as the ancestor of early Indian scripts and as one of the earliest written languages.  Those with other loyalties cite other pre-historic texts such as the Tartaria Tablets, found in Romania, or the Dispilio Tablet, found in Greece.

The runic alphabets are a set of related alphabets using letters known as runes to write various Germanic languages prior to the adoption of the Roman alphabet and for specialized purposes thereafter.  The variants of the system displayed different numbers of runes: Teutonic (24 letters), Anglo-Saxon (32), and Scandinavian (16).  These rune-rows are known as futharks (a term derived from the first six letters of the system: F, U, Þ, A, R and K).  The earliest runic inscriptions date from around 150 CE.  Most adherents of ‘rune lore’ identify the runes as of Germanic origin, while differing as to the precise area of origin.  However, many runes resemble characters from the Roman alphabet, often featuring straight lines in place of curves; other possible direct sources include the related Northern Italic alphabets.  As Germanic developed and diversified, the words assigned to the runes and the sounds represented by the runes themselves diverged somewhat; new runes were created and existing runes and groups of runes were renamed or rearranged, or even abandoned, to accommodate these changes.  The characters were generally replaced by the Roman alphabet as the cultures which had used runes underwent Christianization.  There has been and still is a great deal of non-mainstream thought associated with runes, involving theories to the effect that they are very ancient indeed and/or possess magical powers.

Various writers argue that runic writing in Hungarian pre-dates Germanic use of the system, in some cases dating from as long ago as 6,500 years BP (although the earliest clear attestations actually date from the seventh century CE).  They accordingly suggest that Hungarian is the oldest written language and was spoken in the territory which now constitutes Hungary much earlier than mainstream historians would hold. Some link the Hungarian runes with cuneiform as used to write Sumerian (and later Akkadian).  Turgay Kurum instead finds a Turkish source for runes. There are many other non-mainstream theories regarding Hungarian and its written forms.  (See earlier on runic or allegedly runic inscriptions in the Americas.  I will turn later to the ideas of Guido von List and other occultists regarding runes.)

Nigel Pennick and others develop mystical notions around scripts formerly used to write Celtic languages, notably Irish Ogam (which I discussed last time) and the quasi-runic Welsh system known as Coelbren or Coelbren y Beirdd (‘the Bardic Alphabet’), which they regard as one of a set of genuinely ancient alphabets and which they believe was employed by bards to communicate secret messages (using a wooden frame with sticks representing letter-strokes) in medieval times when writing in Welsh was suppressed.  Other authors such as Alan Wilson and Baram Blackett also regard Coelbren as authentic and as linked with widely dispersed scripts around the world.  Jim Michael finds links between Coelbren and American ‘inscriptions’ as discussed above, suggesting for example that the inscription on one stone tablet found in the USA is in Coelbren.  In fact, Coelbren was devised – as were many ‘traditional’ features of contemporary Welsh culture – by the late eighteenth- and early nineteenth-century Welsh antiquarian and mystic Edward Williams (‘Iolo Morganwg’) as the supposed alphabetic system of the ancient Druids (parallel with the genuinely ancient Ogam in Ireland) and promoted after 1840 by his son Taliesin Williams.  It consists of twenty main letters and twenty others used to represent long vowels and the mutated consonants characteristic of Welsh (and of Celtic generally).

Moving further east … the early Mesopotamian culture of Sumer (Sumeria) arises repeatedly in this kind of context, because it is the earliest known genuine ‘civilization’.  In addition, Mesopotamia is a centre of what may well be an immediate pre-script phase of written semiotics; and the full-blown written Sumerian language – which can now be read – is the oldest known written language (and, moreover, is, as far as is known, ‘genetically’ isolated).  The Sumerian ‘cuneiform’ script was later adapted to write other, unrelated Mesopotamian languages such as the Semitic language Akkadian.

Zecharia Sitchin (an advocate of early extraterrestrial contact), John Allegro, David Rohl and others advance novel interpretations of the Sumerian language to suit their theses, but these generally involve only piecemeal reinterpretations of the script per se.  More relevantly here, the early twentieth-century non-mainstream historian L.A. Waddell argues (tendentiously and unconvincingly) that the common ancestor of the Middle-Eastern and European abjads and alphabets – and indeed of Egyptian script – was in fact Sumerian cuneiform.

A very different non-standard interpretation of Sumerian script has been proposed by Peter Linaker.  Linaker proclaims the exaggerated view that twentieth-century synchronic structuralist linguistics requires that all linguistic structures be interpreted as systematic.  In fact, because of prior linguistic changes, any language at any given time is liable to display a varying proportion of unsystematic features.  These may be exemplified by synchronically irregular verb morphology, as manifested for instance in English past tense forms such as rose, for what would be the regular form *rised.  Forms such as rose exemplify older, now superseded morphological systems, often quite systematic in their day, which are no longer productive; no such new forms now develop in English.

Because of Linaker’s general stance on this point, he seeks covert systems which would explain apparently unsystematic features of language in synchronic ways.  He unreasonably regards the (in fact not uncommon) mixture of logographs and phonological spelling which characterizes the Sumerian cuneiform script as unsystematic and therefore mysterious, and goes on to argue that some features of the Sumerian script which are generally interpreted as phonological can be interpreted only by ignoring Sumerian phonology and focusing instead upon hitherto unrecognized semantic properties of the characters.  Linaker thus develops a theory involving the existence of covert, highly coherent systems of cuneiform characters.  Many of these involve alleged ‘double-entendres’, often with references to sexual matters, which Linaker (bizarrely) appears to believe would naturally not be overtly expressed in any culture.  In most cases, no persuasive empirical evidence is adduced in support of these novel readings.

More next time, starting with the Indus Valley Script!

Mark


around the world in ‘mysterious’ scripts & texts (3) (‘fringe’ historical linguistics 14)

May 28, 2012

Hi again, everybody!  Thanks again for your comments!

More about Greek scripts: Linear B is one of a number of syllabic scripts found in Crete during the twentieth century by archaeologists such as Arthur Evans.  In 1952 it was persuasively (and, to some, surprisingly) deciphered as very early Greek by the talented and well-informed amateur Michael Ventris and the linguist John Chadwick – although not only non-mainstream writers but also some mainstream scholars (notably Sinclair Hood, W.B. Lockwood and George Thompson) have rejected or at least questioned this decipherment.

Linear A, though visually similar to Linear B, cannot be read as Greek and has resisted authoritative decipherment; Cyrus Gordon’s Semitic interpretation has not been generally accepted.  (Note also other material by Gordon in which he argues that examination of Cretan texts corroborates his theory that Greek and Hebrew cultures stemmed from a common Semitic heritage.  See earlier for more on Gordon.)  The classicist Simon Davis reads Linear A – along with the ‘Minoan Pictographic’, Eteocretan, Cypro-Minoan and Eteocypriot scripts – as Hittite.  Other ‘decipherments’ of Linear A are offered by outright amateurs.  (I discuss the special case of the Phaistos Disk below.)

In a very different vein, Ross Hamilton argues that the specific letter-forms of the Greek alphabet were based on the patterns in the Great Serpent Mound (Ohio) and display spiritually significant links with this artefact.  Hamilton is aware that the alphabet had a Semitic source (very probably Phoenician) but garbles the details.  His philology is of the usual amateurish kind; for instance, he equates Greek ophion (‘serpent’) with the word Ohio. He also ignores well-established etymologies, and his own ‘evidence’ mostly involves impressionistic reactions.

The famous Phaistos Disk is a flat disk of baked clay, sixteen centimetres in diameter, which was presented to the learned world in 1908 by Italian archaeologists excavating the Minoan palace complex at Phaistos in South-Central Crete (built about 1700 BCE).  It is inscribed on each side with a text apparently running from right to left and spiralling in from the rim to the centre (though some read it with the opposite ductus).  There are some 240 character-tokens in all, representing 45 distinct types, some pictorial and some apparently abstract; they are divided into 61 groups by broken radial lines.  Very remarkably given the early date, the signs were impressed into the clay when it was soft by means of a set of cut punches.  Neither the Disk itself nor the characters resemble any other items yet discovered in the Aegean (including Linear A), and both the intended use of the artefact and the interpretation of the text remain mysterious. The body of material dealing with the Disk is too large to cover in detail here; but I’ll summarize.

Most professional scholars who have recently analyzed the text(s) on the Disk, especially those most relevantly qualified, consider that it is written in a syllabary (because of the actual and predicted total numbers of sign-types; see earlier on such tests).  However, there is also a mainstream consensus that the Disk probably cannot be deciphered because the text(s) is/are too brief.  (Extended bodies of text in the same script, or better still a bi- or multi-lingual text of some length such as the Rosetta Stone which was crucial in the decipherment of Egyptian, might resolve this problem.)
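The sign-inventory test alluded to here can be sketched in rough quantitative form. One common approach (a simplifying assumption on my part, not a description of any particular scholar's method) is to estimate the total sign inventory from the observed sign frequencies using a Chao1-style 'unseen species' estimator, and then compare the estimate with the conventional ranges: roughly 20-40 signs for alphabets and abjads, roughly 40-90 for syllabaries, hundreds or more for logosyllabic systems. The token stream below is invented for illustration and is not the real Disk data.

```python
from collections import Counter

def chao1(tokens):
    """Chao1 lower-bound estimate of the total number of sign types,
    based on how many observed types occur once (f1) or twice (f2)."""
    freqs = Counter(tokens)
    s_obs = len(freqs)                       # distinct sign types seen
    by_count = Counter(freqs.values())
    f1, f2 = by_count.get(1, 0), by_count.get(2, 0)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected variant
    return s_obs + f1 * f1 / (2.0 * f2)

def classify_inventory(estimate):
    """Very rough script-type ranges; the boundaries are conventional."""
    if estimate < 40:
        return "alphabet/abjad range"
    if estimate < 100:
        return "syllabary range"
    return "logosyllabic range"

# Hypothetical token stream (NOT the real Disk frequencies):
sample = list("AABBCCDDEEFF" "GHIJKL")
est = chao1(sample)
print(est, classify_inventory(est))  # estimate 15.0, alphabet/abjad range
```

For the Disk itself, the observed 45 types in only some 240 tokens, with many rare signs, push the projected inventory into the syllabary range; the shortness of the text is precisely why such projection is needed.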

In contrast, some scholars have argued that the Disk is in fact a modern forgery.  Jerome Eisenberg supports this view with analysis of the possible motives of those involved in forging it and with close comparison of the forms and sequences of the symbols and those found in other ancient scripts.  Eisenberg clearly has a case, but his views have received trenchant criticism.  The Greek authorities have so far refused to allow thermo-luminescence analysis of the Disk, which would probably settle the matter (though this method is itself not unproblematic, as is illustrated by the case of Glozel).

Many (often less qualified) authors have advanced and continue to advance ‘decipherments’ of the Disk, sometimes in non-linguistic terms (calendars etc.) but more usually finding novel syllabic or non-syllabic writing systems – and often languages or locales favoured by themselves for extraneous reasons.  None of these proposals presents a justified overall reading; and naturally they all contradict each other.  The languages identified in these proposals include Greek of various types (some invented, some typical of the wrong period), various Semitic languages, Basque, Luwian or other Anatolian languages, Hittite, early Slavic and even Polynesian.

The Canadian Jean-Louis Pagé’s bilingual book links his ‘decipherments’ of the Disk and other mysterious texts with his own version of the ‘Orion’ theory of the Giza Pyramids, etc.  He upholds the historical reality of Plato’s Atlantis, locating it in the Arctic and attributing its destruction to a sudden polar shift in 9792 BCE; he also posits extraterrestrial intervention in the origins of human civilization; and he regards most of the Disk symbols as logographic/ideographic and pictographic (but it is not even clear which known or reconstructed language he thinks is represented, and he does not propose any phonological forms).

There has never been serious doubt about the pronunciation of the Etruscan language, used by a powerful civilization in central Italy in pre-Classical and early Classical times and written in a modified Greek alphabet (presumably originally learned from the Greek colonists of Italy).  However, the texts (mostly very short) resisted interpretation until recent times, and major issues remain.  But these issues mainly involve mainstream work and are thus largely outside my remit here (unlike some non-mainstream claims regarding the Etruscan language itself, which appears to be non-Indo-European; I may deal with these later).

The Picts were an Iron Age society which existed in Northern Scotland from around 300 to around 850 CE.  Stylized rock engravings on the ‘Pictish Stones’ had previously been interpreted as rock art, possibly heraldic in nature.  However, Rob Lee & colleagues conclude that the engravings in fact represent aspects of the Pictish language.  Arnaud Fournet argues that Lee’s group has misinterpreted the engravings in ascribing a linear order to the ‘texts’ and that the material is in fact artistic rather than linguistic (compare the Australian Panaramitee rock-art).  Other writers regard the Pictish rock-carvings as semiotic rather than linguistic, but with a range of interpretations.

The Picts also had a fully-fledged written language, employing the Ogam script used to write known (mainly Gaelic/Q-Celtic) languages.  The texts can thus be pronounced (as in the case of Etruscan), but they are not extensively understood and the language is unidentified.  The two main views are a) that it is P-Celtic (similar to early Welsh; P-Celtic was used further south in Scotland), and b) that it is a non-Celtic (and quite possibly non-Indo-European) language probably representing a very early settlement population; a minority view c) is that it is an unusual variety of Q-Celtic or intermediate between P- and Q-Celtic.  Further work both on this general issue and on the relationship between the new and the old findings is awaited.

More next time!

Mark


around the world in ‘mysterious’ scripts & texts (2) (‘fringe’ historical linguistics 13)

May 22, 2012

Hi again, everybody!

As Pacal has noted, a few qualified linguists have (surprisingly) endorsed some of the North American ‘epigraphist’ claims.  One of these linguists was Cyrus Gordon, a very erudite but increasingly non-mainstream Semiticist and the self-proclaimed decipherer of the allegedly Phoenician Paraíba Stone inscription found in Brazil.  Gordon’s decipherment of the Paraíba Stone has not been accepted by other linguists, and indeed the most common mainstream view is that it is a nineteenth-century forgery.  Gordon also upholds a Hebrew reading of the Bat Creek Stone (see earlier) and interprets (with Fell and the maverick Frank Hibben) the Los Lunas Decalogue Stone (also mentioned above) as an abridged version of the Decalogue (the Ten Commandments) in a form of early Hebrew.  As has been noted (thanks again, Pacal!), another such scholar is David Kelley, who urges scholarly caution but endorses some of the finds (notably the Grave Creek Mound Stone, which he regards as obviously alphabetic) as genuinely ancient.  Kelley obviously knew his linguistics, but his decisions as to the strength of the evidence for specific claims sometimes appear strange.

The most ‘sober’ and judicious epigraphists outside the linguistic mainstream, who reject the more dubious cases as non-linguistic or as fakes and display some knowledge of the relevant disciplines, include James Whittall, William McGlone et al. and David Eccott.  However, even these writers accept some of the epigraphist claims, without (as it seems) adequate justification.

I’ll now continue commenting on specific cases of (unpersuasive) non-standard ‘epigraphics’ around the world, recommencing with more cases from Central and South America.

Michael Xu proposes links between the Olmec script of Central America (now attested from around 3,000 years BP) and the Shang Chinese script; but he does not appear to be very familiar with epigraphic or historical linguistic methodology.  Olmec has not been persuasively deciphered; thus one cannot be sure that any pairs of Olmec and non-Olmec symbols have the same meanings.  In addition, many of the symbols used by Xu are pictographic and as such are liable to be independently invented.  David H. Childress (who presents himself as something of an ‘Indiana Jones’ figure) relates the Olmec script to various Old World scripts including Egyptian hieroglyphs.  The Afrocentrist writer Clyde Winters ‘deciphers’ Olmec in terms of the (in fact relatively recent) African Vai writing system, used to write Mande/Manding languages.  R.A. Jairazbhoy links Olmec and other Central/South American cultures and languages with Egyptian and Chinese.

Marcel Homet claims to have discovered inscriptions in Cretan, Phoenician, Sumerian and other Old World characters in South America, some engraved more than 10,000 years BP among the Brazilian megaliths of Pedra Pinta.  Harold Wilkins relates South American material of this kind to Egyptian, Phoenician, Indian and other Asian scripts.  Erich von Däniken presents examples of ‘undeciphered inscriptions’ allegedly discovered in South America.  The Fuente Bowl (found in Bolivia) has been interpreted as bearing text in early Sumerian or other Mesopotamian languages in cuneiform script, or else as being in a script related to that of the Phaistos Disk, in Rongorongo, or in the Indus Valley Script.

Turning to other continents: I’ve commented earlier on the inscriptional Chinese, Mongolian, Malayalam etc. allegedly found in various unexpected locations as reported by Gavin Menzies – and on the ideas of David Leonardi and others regarding the Hebrew and the Egyptian scripts.  Tarek Abdel is another writer who rejects the standard decipherment of Ancient Egyptian.  Abdel’s own decipherment is confusingly presented in poor English.  He does not seem to understand established methods: he believes that the original decipherer Jean-François Champollion and his successors were merely ‘guessing’ and often guessed wrongly.  As with Leonardi’s re-decipherment, it is strange, if this is so, that newly-found texts are regularly deciphered on the basis of the established decipherment with few anomalies persistently resisting analysis.

Another non-mainstream writer on Egyptian is Okasha El Daly, who believes that the Egyptian script had already been deciphered in the ninth century CE by Arab scholars, notably Abu Bakr Ahmad Ibn Wahshiyah.  However, it seems that – while these earlier scholars had indeed come to the insightful view that the script was by dynastic times predominantly phonological (contrary to appearances) – they did not take the further step (later enabled chiefly by the discovery of the Rosetta Stone with its parallel texts) of deciphering the texts in specific terms.

Some Latter-Day Saints sources continue to promote the veracity of the ‘Reformed Egyptian’ in their Book of Abraham and other texts associated with The Pearl of Great Price.  When the early LDS leaders claimed that this was the language of the plates which an angel lent to them to be mystically translated, Egyptian had not yet been deciphered by Champollion and others, but nothing learned since that time has confirmed LDS ideas on this front.  The small pieces of genuine Egyptian text presented in LDS sources were already known at the time and have subsequently been interpreted quite differently.

Because of the high status of Ancient (Classical) Greek culture and language (and the current reduced world importance of Greece and Greek), Greek and its scripts attract many non-mainstream theories.  Notably, the non-mainstream philologist Joseph Yahuda – supported by Panagiotes Kouvalakis, Konstantinos Georganas, Kostas Katis and others – believes (without adequate evidence) that examples of early pre-linguistic symbolization from the Aegean area represent early versions of the Greek alphabet.  The generally accepted derivation of the alphabet from the Phoenician abjad (consonantal alphabet) is thus denied.  These writers also claim, mistakenly, that the alphabet is in fact derived from the syllabic Linear B script used to write early Greek; obviously, this latter claim appears to contradict the former.  George Chryssis holds that the Greek alphabet not only was invented and used by the Greeks before Phoenician times, but that it eventually made its way to the Levant, to be used first by the allegedly Greek-speaking Philistines and subsequently by the Phoenicians and the other Semitic-speaking peoples of that region (the reverse of the mainstream position).

Even among those non-mainstream authors who accept – along with mainstream Hellenists – the Phoenician origin of the Greek alphabet, there are novel claims regarding the date at which this took place.  The mainstream view is that the event should be dated to the ninth and eighth centuries BCE, after a long period of illiteracy in Greece following the collapse of the Minoan and Mycenaean civilizations and the loss of their linear scripts.  Greek legend attributed the introduction of writing to the hero Cadmus; and Martin Bernal – who is best known for his theory that many key aspects of Greek thought, culture and language derived from Egyptian origins (see earlier) – argues that the transfer of literacy to Greece did indeed occur at a much earlier date than is generally supposed, around 1500 BCE.  He holds that the patterns of uniformity and diversity displayed by the various early regional forms of the alphabet (including derived scripts such as the Etruscan alphabet used in Italy), together with the distribution of letter-forms in the associated abjads, strongly suggest a much longer history of the system in the Greek-speaking world.  However, these arguments appear indecisive.  In addition, there is no actual trace of the Greek alphabet at these early dates.

Several non-mainstream theories about early Greek involve the poems attributed to the probably legendary poet Homer: the Iliad and the Odyssey, which were originally oral epics and very probably pre-date, in their earliest (lost) forms, the revival of Greek literacy arising from the introduction of the alphabet.  Barry Powell argues that a single ancient scholar invented the Greek alphabet precisely for the purpose of recording the Homeric poems.  Other classicists, while admiring Powell’s erudition, generally find his often technical arguments obscure, speculative and unconvincing.

More next time!

Mark