First of all, I was very impressed by this paper; I'm glad someone found a
conlang use for it:
David Perlmutter, "Sonority and syllable structure in American Sign
Language", *Linguistic Inquiry* 23, no. 3 (1992), 407–442.

Also, some of the consonant sequences seem a bit clunky to me. My only
suggestion there is that perhaps there might be some way to map the least
marked *sequences of phonemes* in the oral language onto the least marked
*sequences of phonemes* in ASL, instead of just the least marked phonemes
onto the least marked phonemes. However, I have almost no experience with
ASL, so I'm not sure what this would entail; perhaps it would be better not
to change anything and to keep the complex clusters.

I really enjoy the idea, and think that its execution is marvelous; however,
I wonder how the Deaf community would react to a spoken dialect of their
language.
On Sat, Jun 20, 2009 at 1:45 AM, Alex Fink <[log in to unmask]> wrote:

> Over the last month or so I've been sketching out a scheme by which words of
> American Sign Language can be transcribed at the phonological level into a
> spoken phonology.  I've written it up at
> Interested to hear your comments.  (Things unclear?  Anything I've analysed
> unjustifiably poorly?  Better ideas for phoneme mappings? ...)
> Alex