On Thu, Oct 14, 2010 at 9:30 PM, And Rosta <[log in to unmask]> wrote:
> It's never been clear to me why an engelang should have any kind of regular
> derivational morphology. (That is, what sort of engelang design goals would
> be satisfied thereby?)

Ease of learning, for one, as maikxlx points out.  A language with
1000 root morphemes and three regular derivational operations is,
other things equal, easier to learn than a language with 4000 root
words and no derivational morphology.   And as Alex says, compactness
may argue for derivational morphology over idiomatic phrases.
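
(Just to make that arithmetic explicit, here's a back-of-the-envelope
sketch in Python; the 1000/3/4000 figures are only the illustrative
numbers from the paragraph above, not measurements of any real
language:)

    # Illustrative lexicon arithmetic: 1000 roots plus 3 regular
    # derivational operations express as many lexemes as 4000 opaque
    # roots, but with far fewer items to memorize.
    roots = 1000
    operations = 3

    expressible_lexemes = roots * (1 + operations)   # roots, plus each operation applied to each root
    memorized_with_derivation = roots + operations   # 1003 items to learn
    memorized_without = expressible_lexemes          # 4000 unrelated roots to learn

    print(expressible_lexemes, memorized_with_derivation, memorized_without)
    # -> 4000 1003 4000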

On the other hand, irregular derivational morphology can be an aid to
ease of learning vocabulary, too, as compared to additional unrelated
root words.  I'd argue from experience (with Esperanto and
gjâ-zym-byn) that regular derivational morphology contributes more to
ease of use, though.  Syntactic "derivation" can be just as irregular
as derivational morphology; with regularity or irregularity held
constant, though, I'm not sure that derivational morphology has any
advantage over syntax in ease of learning/use, as opposed to
conciseness.

> ........ indeed, given
> that derivational morphology is essentially semantically irregular (else
> it's just a kind of syntax), there's a case for saying it should be
> irregular at the level of form too.

I'm not sure I follow.  As we discussed here a few years ago, you can
have idiomaticity or semantic transparency at any of several levels --
derived word, compound word, phrase, or clause.  I don't think I've
ever heard of a definition of "word", or a definition of the
difference between morphology and syntax, where semantic regularity is
a decisive criterion.  Usually it seems to be a matter of whether a
derived word contains bound morphemes that can't occur on their own,
or whether a compound word is pronounced differently than a
corresponding phrase.  And indeed, the definition of "productive" with
respect to derivational operations seems to imply a fair degree of
semantic regularity.   Derived words' meanings may drift over time
from their original semantically-regular derived meanings (usually, I
think, becoming more specific), but new words derived with a
productive operation must have fairly predictable meanings when first
used, or we would not use such operations to coin new words for fear
of being misunderstood.


On Sat, Oct 9, 2010 at 3:23 PM, John Vertical <[log in to unmask]> wrote:
> I think the problem is that abstractions represent generalizations, not
> specializations. From any one natural concept, you can make several of
> these: is "elephant" the archetype of "big", or "gray", or "thick-skinned"?

> The way I see around this is to not use derivational operations but
> comparisons: define "big" not simply as "elephantine property" but e.g.
> "property shared by elephants and mountains". With suitable choice of
> original nouns, you may not need anything but a simple juxtaposition with
> implied generalization. Maybe a part-of-speech marker.

That seems to work pretty well for deriving modifiers for shared
properties of a diverse pair of nouns.  I'm not sure it works as well
for deriving words for generalizations or supersets of more basic
nouns, though.

> Hence, "parent" = "mother+father+NOUN";

Or possibly "ancestor"?   But we can avoid that by saying that the
derived superset must be the smallest semantically coherent set that
includes both of the paired nouns.
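
(A toy sketch in Python of that "smallest coherent superset" rule, in
case it helps; the category inventory here is entirely made up for
illustration, not a proposal for the actual lexicon:)

    # Toy model: derive the superset word for a pair of nouns by picking
    # the smallest listed category that contains both of them.
    CATEGORIES = {
        "parent":   {"mother", "father"},
        "ancestor": {"mother", "father", "grandmother", "grandfather"},
        "relative": {"mother", "father", "sister", "brother",
                     "grandmother", "grandfather"},
    }

    def derive_superset(a, b):
        candidates = [(len(members), name)
                      for name, members in CATEGORIES.items()
                      if a in members and b in members]
        return min(candidates)[1] if candidates else None

    print(derive_superset("mother", "father"))       # -> parent (not ancestor)
    print(derive_superset("mother", "grandfather"))  # -> ancestor

The mechanical part is easy; the hard part, as the next example shows,
is deciding which sets count as semantically coherent in the first
place.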

> "woman" = "mother+sister+NOUN";

Here, though, that principle would argue for the derivation meaning "kinswoman".

> "solar
> system" = "sun+moon+NOUN"

Couldn't that mean "astronomical body"?  Or "planet/star/etc., massive
enough to collapse into a spheroid but not into neutronium"?  Or if
you allow this kind of systematic superset as one interpretation of
this kind of construction, then it seems that those constructions
you propose for "parent" and "woman" could also mean something like
"family" or "ancestry" or "the set of one's close female relatives" or
something.

This system seems to make derived words more transparent on average
than my original proposed sketch, but it also makes them about twice
as long on average.  I think it would have to double the semantic
transparency to make it worth the cost of the doubled length; I don't
know how to measure semantic transparency objectively, but at a
wild-assed guess the improvement would be more like 25-50% on average,
not enough to justify the cost, at least as a language-wide method.
Maybe it could be used selectively in cases where the increase in
transparency over a simple generalization-derivation seems
particularly large.
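
(The same trade-off as one line of arithmetic, using only the guessed
figures above:)

    # Break-even check with the guessed figures above (nothing measured).
    length_cost_factor = 2.0                 # derived words roughly twice as long
    needed_gain = length_cost_factor - 1.0   # transparency would have to double (+100%)
    guessed_gain = (0.25 + 0.50) / 2         # midpoint of the wild 25-50% guess

    print(guessed_gain >= needed_gain)       # -> False: not worth it language-wide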

Also, some uses of this system seem especially vulnerable to And's
criticism: a language that derives a word "parent" as something like
"mother+father+NOUN" seems to have no advantage in conciseness over a
language that uses a three-word phrase "mother or father" when the
generalization represented by English "parent" is needed.  This is
less of a problem with generalizations to larger and more diverse
supersets, but with them, the potential for ambiguity of
interpretation may be correspondingly greater.

-- 
Jim Henry
http://www.pobox.com/~jimhenry/