On Mon, 13 Jul 2009 02:11:48 -0700, Nuno Raposo <[log in to unmask]> wrote:

>Take for example the problem you will have with indexing a typical story
>in ASL with multiple characters. You, me and him would be easy enough to
>develop words for. But what about all the slight variations that 3D
>space allows you. (Him over there, him to the right of the other guy I
>was just talking about, him on the left, him in the upper left hand
>corner) These may all be the same "sign" in ASL but they are indexed in

This isn't so unaddressable a problem, I think.  Note that I don't abstract
out the indexer as a single word with a special variability of location; its
variation of place is just the same as any other phonemic variation of place
on my analysis.  So really all we need for this indexing is a good set of
points in space that can be invoked as pronoun loci.  I've provided several
indexical places (they have the form nasal + obstruent), though I haven't
nailed down exactly where they sit.  I can produce even more variants in
slightly shifted locations with my "place modifications".  Search my
document for "pronominal locations".  
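To make the indexing scheme concrete, here's a toy model of it (purely my own sketch; the locus names and the size of the inventory are illustrative, not anything from my document):

```python
# Toy model of pronoun loci as a small fixed inventory of discrete points:
# each discourse referent gets bound to a free locus when introduced, and a
# later pronoun is resolved just by naming its locus.  The locus labels and
# inventory size here are arbitrary stand-ins.

LOCI = ["left", "right", "upper-left", "upper-right"]

class Discourse:
    def __init__(self):
        self.bindings = {}       # locus -> referent
        self.free = list(LOCI)   # loci not yet assigned this exchange

    def introduce(self, referent):
        """Bind a new referent to the next free locus and return that locus."""
        if not self.free:
            raise RuntimeError("out of pronoun loci")
        locus = self.free.pop(0)
        self.bindings[locus] = referent
        return locus

    def resolve(self, locus):
        """A pronoun signed at a given locus picks out the referent bound there."""
        return self.bindings[locus]

d = Discourse()
d.introduce("the first guy")     # bound to "left"
d.introduce("the other guy")     # bound to "right"
print(d.resolve("left"))         # prints "the first guy"
```

The "place modifications" would just enlarge LOCI with shifted variants; nothing about the resolution logic changes.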

>I recall reading an article (don't quote me it was long ago and
>old research) that a native signer can hold something like 20+ of these

I asked Sai about this once and ISTR what he told me agrees with Lee more:
aside from referents which are visible or otherwise bound to their location
by more abiding conventions than just the pronoun assignments of this
exchange, speakers only use a handful of index positions; four would be a lot. 

>In "speech" these would all come out as "he, he, he, he"

This does raise a question I've wondered about, though: are there actually
any spoken natlangs that have such a 3p pronoun scheme?  i.e. that have
multiple third person pronouns (ASL: locations in space), such that I can
just use each of them for whatever I want (assuming I'm not using them for
actually located objects), with no restrictions such as noun classes or
proximate / obviative marking or any of that.  The loglangs like to do this sort
of thing (Lojban's ko'V and fo'V series), and if it weren't for signed
languages I'd think it quite unnatural; as it is I'm curious why spoken
natlangs don't do it more.

>Then there is the wonderful world of classifiers. All of them would
>easily find a phoneme to latch onto in your schema, but the meaning they
>transmit is based very much on their complex and exact movement in

Yes, this seems more difficult.  At least, it seems to have more
nondiscreteness about its fundamental nature, and nondiscreteness of
meaningful units wigs me out.  But again I don't think the problem is an
inherent one: I can of course discretise all that and encode all the various
positions and motion directions and speeds and other wiggles and whatever
else is necessary.  If realised exactly as encoded that would yield some
very robotic signing, but it's possible.  
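As a crude illustration of what that discretisation could look like (again my own sketch; the eight-direction inventory and the jitter threshold are arbitrary assumptions, not a proposal):

```python
# Quantise a continuous hand path into a sequence of discrete tokens, one
# way to make classifier motion encodable: snap each movement step to the
# nearest of eight compass directions and drop sub-threshold jitter.
import math

DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def quantise_direction(dx, dy):
    """Snap a movement vector to the nearest of eight 45-degree sectors."""
    angle = math.atan2(dy, dx)                # radians, -pi..pi
    idx = round(angle / (math.pi / 4)) % 8
    return DIRECTIONS[idx]

def encode_path(points):
    """Turn a list of (x, y) samples into discrete direction tokens,
    ignoring steps too small to count as movement."""
    tokens = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if math.hypot(dx, dy) < 0.1:          # jitter threshold
            continue
        tokens.append(quantise_direction(dx, dy))
    return tokens

path = [(0, 0), (1, 0.1), (2, 1.1), (2.1, 2.0)]
print(encode_path(path))  # prints ['E', 'NE', 'N']
```

Speed and handshape would get their own discrete inventories in the same way; reproducing only these tokens is exactly what would make the result robotic.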

Incidentally: I suppose it's standard in the community to call this family
of signs simply "classifiers"?  Sai does this too.  I think it's an
unfortunate use of terminology -- as I think of it the presence of
classifiers itself isn't really much to note at all, they're just noun
classes, and I'd rather restrict "classifier" to the handshape; it's the
fact that you can represent all these kinds of motion iconically and
nondiscretely and do whatever you want that's the exceptional thing.  Oh well.  

>But again, the idea of mapping signed phonemes to ASL phonemes, great
>idea. If you are looking for fluent signers to answer questions I would
>love to help out. I work as an ASL interpreter and any knowledge of
>linguistics comes only from dabbling, but I'm more than willing to share
>what I do know.

Thanks!  I do have a few signers around here I can ask, but will keep it in