On Fri, Nov 13, 2009 at 8:30 PM, Alex Fink <[log in to unmask]> wrote:

>>Part of the idea for evolving a conlang by this process is to leave
>>some things underspecified by the specimen sentences so that at each
>>round of new translations there is room for the next translator to
>>innovate, thus introducing mutations and/or novel features into the
> I didn't expect you to frame the exercise with as much human intervention in
> the translation as you did, actually, and I rather feel you've missed a
> trick.  By explicitly cooking half the data, you've made it so that there is
> already a correct answer to the analysis -- at least at the level of your
> initial analysis.  This way you actually have to go and be careful to set
> aside some space for more innovations!

True. I definitely need a better approach to the experiment.

> By contrast in Kirby et al's paper the initial languages were completely
> structureless.  My intuition is that, in the second experiment of that
> paper, if the language ever reached an entirely orthogonal morphemic state,
> it would also come fairly close to _stasis_ at that point.  And most of the
> "innovations" would be errors, and perhaps an occasional complete loss of
> attestations of a morpheme by chance.

Their universe of discourse was very restricted as well.

> What I would have done in the setup is something like this.  When generating
> the translation of each gloss, for each element E of any sort (e.g. "cat",
> past tense, an agent argument, ...) it shares with a previous gloss, choose
> a random previous gloss containing E, and select a random substring of its
> translation of a suitable length; for the elements that haven't appeared
> previously, just make something up.  Then combine all these elements (in
> some order?) into your new translation.  This way there would be definite
> commonalities among the translations, but they'd be all sorts of
> contradictory and the person doing the exercise would definitely have
> something to figure out.

It sounds interesting, but I'm not sure how I would actually implement
something like that. I'm certainly open to any ideas and suggestions.
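
As a first stab at it, here's a rough sketch of one way to read the
proposal above -- for each meaning element shared with an earlier gloss,
grab a random substring of a random earlier translation containing that
element, invent forms for unseen elements, and glue the pieces together
in random order. All the names, the substring lengths, and the phoneme
inventory are my own placeholders, not anything from the original
proposal:

```python
import random

def translate(gloss, history):
    """Generate a translation for `gloss` (a set of meaning elements,
    e.g. {'cat', 'PAST'}) by recombining earlier translations.

    `history` is a list of (gloss, translation) pairs seen so far.
    """
    pieces = []
    for element in gloss:
        # Earlier translations whose gloss shares this element.
        sources = [t for g, t in history if element in g]
        if sources:
            # Random substring of "suitable length" (here: 2-4 chars)
            # from a random earlier translation containing the element.
            src = random.choice(sources)
            length = min(len(src), random.randint(2, 4))
            start = random.randrange(len(src) - length + 1)
            pieces.append(src[start:start + length])
        else:
            # Element never seen before: make something up.
            pieces.append(''.join(random.choice('ptkaiu')
                                  for _ in range(3)))
    random.shuffle(pieces)  # the "in some order?" question, left random
    return ''.join(pieces)
```

Each round you'd append the new (gloss, translation) pair to `history`
and hand the accumulated corpus to the next participant, so the
"contradictory commonalities" build up over time.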

> Then you could watch, for instance, how quickly the lexemes stabilise,
> whether and how the word order settles down, etc.  And of course you could
> throw in more variety in the glosses when you want to give the whole thing a
> kick.

It seems to me that if this is done at the level of complete sentences,
then word order is going to have to be settled almost immediately.

Maybe the project needs to start with something simpler than complete
declarative sentences. What if it started with tables of verb
conjugations generated randomly, and then some new verb roots to be
conjugated? Or maybe with tables of noun cases?
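
Generating such a seed table is easy enough. Here's a toy sketch of what
I have in mind -- a random root and random person/tense endings, with
the syllable inventory and the paradigm cells chosen arbitrarily for
illustration:

```python
import random

# Arbitrary CV syllable inventory for the toy language.
SYLLABLES = [c + v for c in 'ptkmns' for v in 'aeiou']

def random_root():
    """A random two-syllable verb root."""
    return ''.join(random.sample(SYLLABLES, 2))

def conjugation_table(persons=('1sg', '2sg', '3sg'),
                      tenses=('PRES', 'PAST')):
    """Randomly generate one verb's paradigm: a form per cell.

    The participant's task would then be to infer the endings and
    conjugate fresh roots against this table.
    """
    endings = {(p, t): random.choice(SYLLABLES)
               for p in persons for t in tenses}
    root = random_root()
    return {cell: root + suffix for cell, suffix in endings.items()}
```

The interesting part would be what happens when the randomly drawn
endings collide -- two cells sharing a suffix is exactly the kind of
accidental syncretism the next participant might regularize.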

In any case, I'm beginning to think that complete sentences are too
big a bite to start with.