Garth Wallace, On 26/09/2011 20:42:
> On Mon, Sep 26, 2011 at 10:45 AM, And Rosta<[log in to unmask]>  wrote:
>> I am too ignorant to be able to understand "for an element on a stack, you
>> just need the content of the element and a pointer to the memory address
>> where the next element is found", tho I encourage you to explain it further
>> if you think I'm missing something important. My understanding of stacks in
>> human parsing (but also in Fith) is it proceeds left to right using a
>> last-in-first-out 'stack' (i.e. a kind of list) where only the item on the
>> top of the stack, i.e. the last item added to the list, is accessible to
>> processing.
>
> That's how the abstraction works (though I think a vertical metaphor
> makes more sense for a "stack"; left to right makes more sense for a
> queue, but that's just me), and that's how you'd usually draw a
> diagram of it, but there's no left or right on a memory chip. Instead,
> your program keeps the memory address of the topmost element of the
> stack. If it needs the data in that topmost element, it can access it
> directly by reading from that memory address. The topmost element in
> turn contains the address of the element below it, that element
> contains the address of the element below it, and so on. It's pretty
> simple for computers because memory is compartmentalized and
> enumerable.
>
> But, from what I understand (and I'm not a cognitive scientist),
> current models of short-term memory don't look anything like that.
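
For concreteness, here is that linked representation sketched in Python, with object references standing in for memory addresses (the names are mine and purely illustrative, not anyone's actual implementation):

    class Node:
        """One stack element: its content plus a reference to the element below."""
        def __init__(self, content, below):
            self.content = content
            self.below = below      # stands in for "the address of the next element"

    class Stack:
        def __init__(self):
            self.top = None         # the one address the program keeps

        def push(self, content):
            self.top = Node(content, self.top)

        def pop(self):
            node = self.top
            self.top = node.below   # follow the stored address downward
            return node.content

    s = Stack()
    s.push("a"); s.push("b")
    print(s.pop())  # "b": only the topmost element is directly accessible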

So you think there is no plausible way of implementing the stack/shift-reduce algorithm in wetware? The claim that wetware does somehow manage to implement the algorithm does have empirical support, because processing difficulty correlates with the number of items on the putative stack and with the number of operations during which an item remains on it. I have no idea what's going on at the neurological level, tho the phenomenon of gradual fading from short-term memory looks, to a layperson like me, as if it has to do with neural activation levels. In sum, we have a hypothesis that gives good predictions about processing difficulty but, you tentatively suggest, lacks any obvious way of being implemented in wetware.
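
To make the correlation claim concrete, here is a toy sketch (a bare shift-reduce loop of my own invention, not a model of the actual parser) that tracks exactly those two quantities:

    def parse_metrics(actions):
        """actions: a sequence of ("shift", word) or ("reduce", arity) steps.
        Returns the maximum stack depth reached and, for each item, the
        number of steps it spent on the stack before being reduced away."""
        stack = []          # entries are (item, step_when_pushed)
        max_depth = 0
        residence = []
        for step, (op, arg) in enumerate(actions):
            if op == "shift":
                stack.append((arg, step))
            else:           # reduce: combine the top `arg` items into one
                parts = [stack.pop() for _ in range(arg)][::-1]
                residence += [step - born for _, born in parts]
                stack.append((tuple(item for item, _ in parts), step))
            max_depth = max(max_depth, len(stack))
        return max_depth, residence

    # Early reductions keep the stack shallow and residences short;
    # deferring them (as centre-embedding forces) does the opposite.
    eager = [("shift", "a"), ("shift", "b"), ("reduce", 2),
             ("shift", "c"), ("reduce", 2)]
    deferred = [("shift", "a"), ("shift", "b"), ("shift", "c"),
                ("reduce", 2), ("reduce", 2)]
    print(parse_metrics(eager))     # max depth 2; longest residence 2
    print(parse_metrics(deferred))  # max depth 3; longest residence 4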

Logan Kearsley, On 27/09/2011 01:03:
> On Mon, Sep 26, 2011 at 5:43 PM, And Rosta<[log in to unmask]>  wrote:
> [...]
>>> If we assume that the parsing of human natlangs is meaningfully
>>> stack-based, though, we can still observe that whatever the stack
>>> manipulation rules are, they must be far more complicated than those
>>> of Fith, seeing as how we know how to program a computer to parse Fith
>>> perfectly, but not English.
>>
>> I don't see the logic behind this reasoning. The reason why we don't know
>> how to program a computer to parse English perfectly is in large part due to
>> us not knowing what the rules of English are, what the structure of English
>> sentences is (and then partly that stuff like general knowledge appears to
>> be able to guide disambiguation, so the full parsing process requires the
>> full array of powers of the mind).
>
> Precisely. We cannot enumerate all of the relevant rules of English,
> particularly not in just a few pages, and more particularly not in a
> form that a computer can execute. We can do so with Fith. This is
> evidence that English is much more complex than Fith.

I'm not sure that Fith, qua thought-experiment, comes with any indication of how complex its grammar is in totality. But assuming for argument's sake that it is very simple, it will be true that English grammar is more complex, but not therefore true that English has more complicated stack manipulation rules. Since processing difficulty seems to correlate only with the (very simple) structure of surface syntax, under what seems to me the best hypothesis the stack deals only with surface syntax; all the complexity is elsewhere and independent of the stack.
  
>> But why does this entail that there are
>> complex stack manipulation rules. Rather, it looks as tho the stack is dead
>> simple, with no fancy manipulation rules, but that the bit of processing
>> that uses it is only a small part of the overall mechanism.
>
> Very well; the complexity is in the interpretation of the abstract
> syntax tree, having left the stack behind, then. That just makes the
> point stronger- Fith gives no evidence of requiring any further
> interpretation once a structure is removed from the stack. If English
> does, then we've potentially isolated the regime in which human mental
> powers exceed those of Fithians.

I'm happy to agree with that. When I said "Fithians could speak English better than humans" I meant only "Fithians, if speaking English, could cope with much more complicated surface syntactic structures than humans".
  
>>> This is evidence that in some way human
>>> processing capacities are stronger than Fithians',
>>
>> I'm afraid I know too little of Fithians' capacities to be able to comment.
>> All I know about Fithians is that they have powerful working memories. All I
>> know about Fith is that it looks pretty much like a natural language except
>> for these utterly un-natlanglike stack-manipulation conjunctions.
>
> Their grammar is simpler than ours. It's missing a lot of the
> complexity- a lot of the *types* of complexity -that are demonstrated
> in human language. So, why are they missing features that we have? A
> simple explanation in the absence of any other data on Fithians is
> that they don't use languages like ours because it's difficult for
> them, just as stack conjunctions are difficult for us. Thus, we have
> some capacities that are stronger than Fithians'. It's not proof, but
> evidence.

OK. I hadn't really meant to enter into a discussion about the overall mental powers of Fithians; I'd meant them to represent only a processor agent with greater working memory than humans.
  
>> My claim is that it is the structure of the audibilia tree, including the
>> number of terminal nodes, that influences the processing cost. So the
>> problem with Palno is not that there is only one kind of syntactic object or
>> operation, but rather that there are too many object and operation instances
>> in surface syntax. I'll say a bit more in the other thread another time.
>
> I don't see how those can be uncorrelated. Fewer types of operations =
> less information encoded per operation = more operation instances. Too
> many instances of operations in surface syntax is not a quality that
> makes Palno sentences difficult; it's a quality that makes sentences
> in any language difficult. But it's the lack of fundamental operations
> that causes that quality to be manifested.
>
> As far as I can tell, the same argument applies to Fith.
>
> Perhaps you can explain why you see a disjunction between the number
> of operation types available and the number of instances in surface
> syntax?
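
As a back-of-envelope gloss of that equation (my gloss, not Logan's words): with k operation types available, each instance selects among k choices and so carries at most log2(k) bits, so a structure carrying B bits of information needs at least B/log2(k) instances:

    import math

    def min_instances(bits, n_op_types):
        """Lower bound on operation instances needed to convey `bits` of
        structure when each instance selects one of n_op_types (>= 2)."""
        return math.ceil(bits / math.log2(n_op_types))

    print(min_instances(60, 16))  # 15 instances suffice with 16 op types
    print(min_instances(60, 2))   # 60 instances needed with only 2 op types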

Processing difficulty correlates with the shape of the bare unlabelled tree of surface forms. (To keep things simple, I set adjuncts aside in this discussion.) So I think that at the level of the stack, the only operation is one of combination, of syntactic linkage of the most generic and unspecified kind. These audible forms, which the stack processes, are mere symptoms or indices of true syntactic structure (i.e. the structure that gets semantically interpreted). I don't have any ideas about how the processing that goes from surface form to underlying syntax works, other than that it's not part of the stack processing (because it doesn't correlate with processing difficulty).
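
In toy Python terms (a sketch of the shape of the claim, nothing more), "combination only" at stack level would look like this:

    def combine(stack):
        """The one stack-level operation: pop two items and push their
        generic, unlabelled linkage. No categories, no interpretation."""
        right, left = stack.pop(), stack.pop()
        stack.append((left, right))

    stack = []
    for word in "the cat slept".split():
        stack.append(word)          # shift each audible form
    combine(stack)                  # ("cat", "slept")
    combine(stack)                  # ("the", ("cat", "slept"))
    print(stack[0])  # a bare unlabelled tree; the mapping to true
                     # syntactic structure happens elsewhere, off the stack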

For overall simplicity of the grammar, the underlying syntax will be as close to the structure of surface form as possible, even completely homomorphous; but for optimal processability (and also concision), surface form will contain as few items as possible, and hence will diverge further from underlying syntax. I think this is the great challenge of the loglang: it's very simple to create a loglang in which surface form is homomorphous with underlying syntax, but hard to create one in which surface form is as simple and concise as a natlang's (while still encoding logical form unambiguously).
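
A toy illustration of the tradeoff, with an invented two-word mini-grammar: both surfaces below encode the same underlying tree unambiguously, but the second drops the audible bracketing by relying on lexically stored arities:

    tree = ("love", ("the", "cat"), ("the", "dog"))

    def homomorphous(t):
        """Surface mirrors the tree node for node: unambiguous but verbose."""
        if isinstance(t, tuple):
            return "( " + " ".join(homomorphous(c) for c in t) + " )"
        return t

    ARITY = {"love": 2, "the": 1}   # lexical knowledge replaces brackets

    def concise(t):
        """Polish-notation surface: still unambiguous given the arities,
        but fewer items, hence further from the shape of the tree."""
        if isinstance(t, tuple):
            return " ".join([t[0]] + [concise(c) for c in t[1:]])
        return t

    print(homomorphous(tree))  # ( love ( the cat ) ( the dog ) ) -- 11 items
    print(concise(tree))       # love the cat the dog -- 5 items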

I'm not sure if I'll have answered your questions to your satisfaction, but if I haven't, do say so.

--And.