On 4 September 2015 at 04:49, Patrik Austin <[log in to unmask]> wrote:

> I think we're finding sufficient agreement here :)

Possibly. It does seem that we are slowly converging, but we have yet
to see how long that state of affairs will persist....

>> That's exactly what And & I have both been saying. You call it a
>> mindmap, I call it a semantic graph, And calls it predicate-argument
>> structure. Predicate logic is (one of) the formal system(s) used to
>> describe those kinds of objects. What you seem to be saying here is
>> just different words for the same thing that And meant when he said
>> that a language must encode predicate-argument structure.
>
> As for me, I said it all depends on the definition and that the
> perspective was too narrow.

Definition of and perspective on what, exactly?

> You could say that FL and PAS are basically the same;

I am not sure what you mean by "FL" here. If you mean "the set of
languages that includes FL1 and FL2 as concrete examples": No, you
can't. They are not the same ontological class, and it goes beyond
just "comparing apples and oranges"; it's more like comparing
biochemistry and dolphins. Dolphins have biochemistry, and you can
learn about biochemistry by studying certain aspects of dolphins, and
learn about certain aspects of dolphins by studying biochemistry, but
they are *clearly* not the same thing. PAS = shape of mindmap = *not*
a language.

What you can say is that *if* a sentence in one of the FLs (or any
language) is unambiguous, *then* it must unambiguously encode
predicate-argument structure (PAS), which can also be represented by
some expression in your preferred notation for predicate logic. Or in
other words, it must correspond to exactly one mindmap.

If, on the other hand, you meant it as a simple abbreviation for
"formal logic"... well, OK, then. I would quibble that "formal logic"
is actually the rules for manipulating and deriving theorems about
PAS, rather than *being* PAS, but at least it's close.
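An aside to make the "exactly one mindmap" point concrete (my own
illustration with a stock English example, not anything from the
thread): the classic attachment ambiguity in "I saw the man with the
telescope" is exactly a case of one surface string corresponding to
two distinct predicate-argument structures.

```python
# Two predicate-argument structures ("mindmaps") for the ambiguous
# sentence "I saw the man with the telescope". A PAS is sketched
# here as a (predicate, *arguments) tuple; the encoding is ad hoc,
# just enough to show that the two readings are distinct objects.

# Reading 1: the telescope is the instrument of the seeing.
pas_instrument = ("with", ("see", "I", "man"), "telescope")

# Reading 2: the telescope goes with the man being seen.
pas_modifier = ("see", "I", ("with", "man", "telescope"))

# One surface string, two mindmaps: the sentence is ambiguous.
print(pas_instrument == pas_modifier)  # False
```

An unambiguous sentence, by the criterion above, would be one whose
surface form admits only one such structure.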
If you meant some third thing... well, then I'm lost.

>>> In formal terms, whatever you do, it will always build on aS
>>
>> Sure. But that's not interesting. It gives you no useful insight.
>> The *only* thing that tells you is that "human languages put words
>> in linear order". Well, duh.
>
> Hold on, there was a lot more than that. But really what I need is
> just a little solid falsifiable foothold I can base my theory on,
> and here's where I find it. Don't forget that Chomsky's UG, which
> also has the purpose of explaining why human languages are so
> similar, has failed to find any solid ground.
>
> With my method you can easily construct the syntax of his universal
> grammar. You can for instance start from FL2, but as an obligatory
> conjunction is not a property shared by all human languages, you fix
> that, and use a different method for embedding, if any. Of course
> people will disagree as to what properties are necessary, but the
> formal grammar approach will also give you the possibility to
> directly observe differences and similarities between formal
> grammars made for natural languages.
>
> You might ask if this isn't exactly what Chomsky is doing, but the
> essential difference is that, because he bases his work purely on
> natural language research (whose concepts may have led us astray, as
> Gil suggests), it will take forever to come to the correct
> conclusion. As we see from current discourse, Chomskyan research is
> stuck because the number of linguistic universals proposed is far
> greater than the number actually found in reality. My method is the
> exact opposite of Chomsky's: I start from bottom-level simplicity
> and add only what we positively know is necessary. This starting
> point is miles closer to the finish, and just a few steps away.
>
> Whatever results I get, I can present them as proving Chomsky right:
> here's UG, done and done, now we know it's real; everyone's happy.
> But to be quite honest, to me it proves Chomsky's idea a little
> pointless, as the actual grammar seems too simple to necessitate any
> hard wiring. For instance, many linguists will argue that some
> transitivity rules must be obligatory, but as a quick reference, you
> can "die someone" in Japanese, while the Finnish word for "be" can
> take an object; ditransitive verbs are not universal. As any
> engelanger knows, dumping transitivity variation means getting rid
> of 99% of the trouble.
>
> It's all about connecting dots that everyone can see. As boring as
> it may seem, this is what I do. Who says linguistics has to be
> interesting? :)

Well, I suppose if MIT guys can get away with publishing a paper that
does little more than provide evidence that "syntax exists, for
realsies", then you might well have something to go on with "languages
have to put words one after another". :) You could probably spin it
into something interesting. Perhaps by presenting it as a challenge:
"the only thing I can reasonably prove is that languages have to put
words one after another; can anyone else prove the existence of at
least one additional logically necessary constraint?"

>>> One nice thing about the hyper-minimalistic FL1 is that the actual
>>> tree/mindmap seems pretty much like that of any human language, so
>>> it's easy to consider it as potentially relevant for natural
>>> languages.
>>
>> Which is exactly why And and I (and the vast majority of engelangers
>> and semanticists) consider PL relevant for natural languages. PL
>> encodes what you are claiming FL1 encodes. And & I are arguing that
>> FL1, as you have described it, *can't* actually do that in an
>> unambiguous manner. (I thought of a couple of logical possibilities
>> for how it could that would have been consistent with the evidence
>> to that point, but then you didn't like any of them, so....)
>
> Yeah, I'm happy with the way it is. If you have an idea as to how to
> turn FL1 into a real engelang, be my guest :)

I'm not trying to turn it into an engelang.
All I was trying to do was satisfy your criterion that it must be
*possible* to produce unambiguous sentences in FL1 (regardless of
whether or not it is *obligatory* to do so). If you are, however, OK
with the idea that FL1 is not, in fact, capable of unambiguous (as
distinguished from *specific*, or non-vague) expression in sentences
of more than one word, then it's perfectly fine that you not adopt any
of my proposed solutions to that particular problem.

> As we've seen, there exist many possible minimalistic grammars that
> can be claimed as engelangs, and I think it would be clever to do
> so, because each one can give us just a little more insight into how
> language works. You could find out which languages (semantics
> included) are fully functional, and put them into a table to look
> for a pattern. That could tell you what choices are possible or
> necessary, and maybe there's a chance an algorithm could be made
> once you've got the full picture. Although I think this kind of
> research should really have funding.

Indeed.

>>> To answer the inevitable question why people did not choose this
>>> "optimal" structure in the first place: you can't properly use it
>>> in real time; in any effort to make a language that is good for
>>> expressing yourself with, you'd have to solve the problem of how
>>> to put the complex structures into linear form. In other words, aS
>>> is the grammar, but it entails a problem that needs to be solved
>>> one way or another; my suggestion is that, looking at the mindmap,
>>> there are endless ways to do so.
>>
>> Resolving exactly this problem is the challenge of the ergonomic
>> engelang, of which no one has yet managed to produce a convincing
>> example.
>
> Could that be something I would call an optimal language? Optimality
> can be reached based on given criteria. In minimalistic engelanging
> I think these are simplicity and expressive power.
> If it needs to be user friendly, I don't know yet what the formal
> criterion would have to be, but maybe it's not too hard to find if
> you look into paaS, FL2 and arithmetic syntax (not functions), the
> last two being kind of a posteriori languages.

I can't speak for the engelanging community in general, but sure; I
personally would be fine with "optimal language" as an alternative
name for "ergonomic engelang".

>> This should work:
>>
>>   S  -> S'S | 0
>>   S' -> aS' | 0
>>
>> Note, however, that this is an ambiguous grammar; the surface forms
>> that it produces are just lists of 'a's. Disambiguating it requires
>> at the very least adding an additional syntactic category to mark
>> either the beginning or the ending of an S' (sub-sentence)
>> constituent (which could be distinguished from the syntactic
>> category 'a' by morphology, without necessarily introducing an
>> additional lexical category), or actually distinguishing a second
>> separate lexical category of function words (like "topic" and
>> "event") for that purpose.

A brief addendum: this ends up very, very similar to the core
structure of WSL. Not identical, but quite similar. And I have found
that this style of organization runs up against severe ergonomic
difficulties when you need to specify non-intersective relations,
and/or introduce referents that are not themselves participants in the
current event. This crops up in things like genitives, identifying
people by family relations, and so forth, and is the source of about
half of the complexity of WSL that I haven't gotten around to
formalizing yet.

Interpretation rules for the S-prime grammar should also be relatively
simple, but there is some trickiness in binding the implicit event
argument properly.

-l.
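P.S. To put a number on the ambiguity of the S-prime grammar above,
here is a sketch in Python (mine, purely illustrative). Allowing empty
S' constituents makes the number of derivations of any string
unbounded, so the sketch counts only parses in which every
sub-sentence is non-empty; even so, a string of n 'a's already has
2^(n-1) distinct parses.

```python
# The grammar quoted above:
#   S  -> S'S | 0    (a sentence is a list of sub-sentences)
#   S' -> aS' | 0    (a sub-sentence is a run of 'a's)
# Every surface form is just a string of 'a's, so parses of a given
# string differ only in where the S' boundaries fall. Empty S'
# constituents are excluded here; with them, the number of
# derivations of any string is unbounded.

def parses(s):
    """Yield every parse of s as a tuple of non-empty S' runs."""
    if s == "":
        yield ()
        return
    for i in range(1, len(s) + 1):
        head = s[:i]                # one S' constituent: 'a' * i
        for rest in parses(s[i:]):  # all parses of the remaining S
            yield (head,) + rest

trees = list(parses("aaa"))
for t in trees:
    print(t)
print(len(trees))  # 4 parses for 3 'a's, i.e. 2**(3-1)
```

The disambiguation strategies quoted above (boundary markers, or a
second lexical category of function words) amount to making those S'
boundaries visible in the surface string.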