On Thu, 17 Jan 2013 18:49:46 -0500, Matthew Martin <[log in to unmask]> wrote:

>>Is there a limit on how many words can be in a conlang? 
>I'll approach this from the angle of fandom

Whoa, now there's an angle I don't as a matter of course think about.

>Do you have in mind making the set of content words closed? (A fixed number of nouns, adjectives, verbs, etc.) This, imho, makes for an interesting language because it puts the conlang creator and the fans on equal footing. Take Dothraki for an example: as far as I can tell, the content words form an open class-- so the creator can create a new word for "telephone" and it would be a nice, tidy, short word. Fans can only use fan-coined words, circumlocutions, possibly long derivations, and maybe loanwords from English, which fan communities generally don't like.  When the set of words is closed, then the path is clear: derivational morphology or phrases, and everyone has the materials ready to do that, and the result would be on par with what the language inventor might come up with.  It takes the endless waiting out of the game of being a conlang fan; it takes away semantic areas that are almost taboo-like because of the lack of a transparent compound word or of a related root word.

I think we should be careful about distinguishing the model from the object modelled, here.  Having a closed class of content words is a downright anomalous property for a _language_ to have.  In particular, any CCoCW language unconditionally rejects borrowings and inventions, and I can't think of a reason a natlang would get that way save violently extreme purism, of the sort which certain Earthly languages of states in the throes of nationalism have undergone for a time, but which isn't really a stable feature of any natlang I'm familiar with, not even Icelandic.  Toki Pona is CCoCW, sure, but there it's part and parcel of the language's philosophical raison d'être, and that's the way to do it that seems appropriate to me: it's such an anomalous property that you should only adopt it if you really _mean_ it, as it were.

For the _model_ to be so closed, on the other hand, is a much more reasonable thing, being merely a solution to the epistemological problem that we know a limited amount about the language being modelled, and we can't be sure what would be correct usage outside the areas that have been laid down for us.  
I can see how this might be an annoying position for a secondary author to be in, but well, myself, I'm not conlanging for an audience, so it doesn't especially rise to the level of a concern (if I had e.g. just won the Dothraki job, thèn I might say otherwise).

I would on the same lines recommend to Nichole _not_ to postulate that your language has a closed class of content words; its implausibility overwhelms these strange points in favour of it that live only in the realm of concerns of creators of hypothetical derivative works.  Best, in fact, to just keep mum on the point wrt your writing.  -- Of course, that doesn't answer the (presumed?) intended question of how many words you should yourself aim to create, but we've had a few good answers to that already.

>Another point to keep in mind is that the larger the number of possible derivations you have, the more likely it is that someone will be able to create a semantically transparent compound or derived word.

Well, I don't think it's as clear as all that.  Lemme put it this way: there are different dimensions in which a language might be closed or open.  If the derivational operations are fully productive and transparent, then all is as you say: this is the situation I might call (possibly lexically closed but) _derivationally_ open.  
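To make the "lexically closed but derivationally open" situation concrete, here's a toy sketch.  All the stems, affixes, and glosses are invented for illustration (they belong to no real conlang): the stem inventory is a fixed, closed set, yet fully productive, Lego-brick derivation still lets anyone coin new words with fully predictable meanings.

```python
# Toy model of a language that is lexically closed (a fixed stem set)
# but derivationally open (any stem + any affix is a licit word).
# Every form and gloss here is invented purely for illustration.

STEMS = {"kala": "water", "miro": "see", "tepu": "house"}  # closed class

# Fully productive, fully transparent ("Lego-brick") derivations:
# the meaning of the output is computable from the meaning of the input.
DERIVATIONS = {
    "-nto": lambda gloss: f"one who {gloss}s",   # agent noun
    "-ska": lambda gloss: f"place of {gloss}",   # locative noun
}

def derive(stem, affix):
    """Coin a new word.  New stems and new affixes are rejected
    (closed classes), but every stem+affix combination is allowed,
    and its gloss is fully predictable from the parts."""
    if stem not in STEMS or affix not in DERIVATIONS:
        raise ValueError("closed classes: no new stems or affixes")
    return stem + affix.lstrip("-"), DERIVATIONS[affix](STEMS[stem])

print(derive("miro", "-nto"))  # ('mironto', 'one who sees')
print(derive("kala", "-ska"))  # ('kalaska', 'place of water')
```

The point of the sketch is exactly the trade-off discussed above: a fan can coin `mironto` and be certain it's "correct", but only because the derivational machinery is completely regular, with none of the quirks of unproductive natlang morphology.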

From my model-vs-descriptum point of view, your reason to dislike lexical openness but appreciate derivational openness seems to be that with derivation, at least you have a half-decent chance of guessing a word which is actually correct in the descriptum, whereas with lexis you're stabbing among all possible stems in the dark.  But to say that a language allows every single thing that it lìkely allows is to say that it's completely regular.  The sort of derivational operations you end up with have to be Lego-brick predictable (okay, maybe in practical terms they can have a short, enumerated list of oddities); they won't be quirky and interesting in the way that unproductive and nontransparent natlang operations often are.

As will've become clear, I'm a naturalist, and natlangs do not operate on principles of "needfulness".  They have things like synonyms, and synonyms modulo register, and separate stems for pairs of senses which could've been gotten by productive syntax or morphology, which are such that nobody knowing only one of the pair would ever be driven to invent the other -- yet there it is.  A language whose development worked on the basis of need would just end up thin: instead of reflecting the speakers' concerns by the breadth of its gamut of lexical options at any given point in semantic space, and therefore looking lived-in, it'd everywhere have just enough to get by.  

Anyway, if one of my own conlangs somehow ever achieved a following, I think what I'd want to do for it after I could no longer tend to it at its level of demand is establish some sort of language academy for it.  Pick some of the users who'd achieved the subtlest familiarity with how things worked, and broadly shared my philosophy-of-conlanging values.  Then when a lacuna seems to arise, either an outright one or a paucity of options where there should be more, have them gather and generate and consider suggestions and decide on zero or more of them, which would then become canon.  Newly invented stems would be encouraged among the options where this makes sense; of course they should be concordant with established canonical stem shapes and sound symbolism and different compositions of different lexical strata and so on.  (Oh, and ixnay on the references like ‹ghotI'› for 'fish'.  Those are just stupid.)