2013-06-25 20:50, George Corley skrev:
> On Tue, Jun 25, 2013 at 1:24 PM, BPJ <[log in to unmask]> wrote:
>
>>
>> I'm not. I'm just strongly opinionated, as you are WRT
>> capitalization. In both cases the people who design
>> orthographies/transliterations/transcriptions do what pleases
>> *them* best. It ain't their concern to please you or me, nor
>> should it be. Most people are able to muster good arguments for
>> doing things the way they do, even when that's the opposite way
>> from one's own preference. Orthographic nitpicking serves one
>> purpose only: to put down those who don't adhere to the 'rules',
>> which seldom are about preserving the expression of grammatical,
>> phonological or semantic distinctions, and often go against that
>> single valid concern[^1]. And I make money out of that idiocy!
>>
>
> We all have different preferences, but I do very much think it's valuable
> to use capitalization at the beginning of sentences and for proper nouns.
> That seems to be close to the minimum use for it, and they are both
> important signals for readers used to reading languages written in Latin
> script. To me, using capitalization is a "consider the audience" type of
> question -- your audience will benefit from using some capitalization
> standards. Of course, anyone is perfectly free to make an aesthetic choice
> contrary to that. I guess part of my issue is that, to me, the romanization
> is such a utilitarian thing that I really don't think too hard about the
> aesthetics of it, just the utilitarian concerns of elegance and
> accessibility. A long time ago I put down my general thoughts on how
> romanizations should be evaluated and thoughtfully constructed here:
> http://www.gacorley.com/blog/2011/11/14/design-parameters-for-romanization.html

I'm not myself opposed to capitalization. I use it except when
texting and I apply it to my conlangs even where their supposed
native scripts have nothing corresponding, but I *am* opposed to
normativism and prescriptivism, and thus to chastising others for
their orthographical and punctuational choices on such grounds.
I'll never buy that the problem is with the people who 'can't
spell', rather than with the prescriptivists upholding an overly
complicated norm. Yet I work as an editor...

>
>
>> [^1]:   A thousand years ago people in this part of the world made
>>          just dandy with a phonologically underspecified, caseless
>>          script precisely because it preserved all relevant
>>          grammatical distinctions!
>>
>
> Not sure where you are precisely, but many writing systems are still that
> way. A few also dispense with white space, which Roman script itself didn't
> have for a good long time. But modern readers of Roman script are used to
> white space and punctuation and capitalization
> ANDTHEYFINDTEXTLIKETHISQUITEDIFFICULTTOREAD evenmoresotextlikethis. I've
> heard arguments that even for contemporaries, Roman script was harder to
> read without whitespace than it would have been with it -- there are claims
> that very few people were able to read silently when there wasn't some sort
> of word separation.
>
I live on the west coast of Sweden and I mean the Scandinavian
Younger Fuþark/viking age and early medieval runic alphabet.

Its inventor(s) reduced the number of graphemes from 24 to 16 at
a time when the number of phonemes in the language had actually
increased, which deeply bothered and confused scholars around a
century ago and earlier. Clearly, they thought, texts must have
become so ambiguous as to be hardly decipherable! However, this
was in practice not the case, and Einar Haugen, in a 1969
contribution to a _Festschrift_, explained why: all phonemes which
occurred in unstressed syllables -- and thus all phonemes which
occurred in inflexional endings and most derivational suffixes --
could still be unambiguously represented. Moreover, i-umlaut and
u-umlaut were for the most part still transparent processes, so
that using the same graphemes for the members of the pairs _u/y,
a/æ, a/ǫ_[^1] wasn't that much of a stretch for native speakers.
Thus the umlaut/non-umlaut vowel distinction not only didn't
exist in unstressed syllables, it also seemed partly conditioned
or 'fluid'. Not distinguishing voiced and voiceless stops was
also pretty manageable for native speakers, since they only
contrasted word-initially, in geminates and after nasals or
_l_;[^2] as a rule, context would help native speakers
disambiguate. There remains only the problem that the distinction
between mid and high vowels was abandoned -- <u> *really* was
heavily overloaded, standing for all of _u, o, y, ø_ as well as
/œ/ to the extent it ever was a distinct phoneme -- but even
that is explainable: not only were there still visible traces of
a-umlaut, where _i_ > _e_ and _u_ > _o_, but above all _i/e_ and
_u/o_ actually had merged in unstressed syllables, and we know
from the spelling practices of early Latin-script MSS that their
allophones actually were in free variation, so it is not at all
surprising that the distinction between high and mid vowels also
seemed fluid to the script reformer(s). Bottom line: since all
grammatical distinctions could be unambiguously represented, the
partial ambiguity in the spelling of the always word-initial[^3]
stressed syllables was manageable for native speakers.

Please tell me offlist if you want to read the original article, BTW.

/bpj

[^1]:   _ǫ_ == /ɒ/.
[^2]:   Gemination, nasals before stops and vowel length weren't
         indicated at all, but these practices were not innovations.
[^3]:   Almost all texts use·dots·between·words.  No doubt failing
         to do so increased ambiguity to unacceptable levels even
         for native speakers.  BTW that was the norm also among the
         Romans, except for careless, mass copied texts and the
         like.