Alright, that would seem to work :)
I think I get it now.

BTW I was rethinking the mathematics of the FL2 tree. It's probably a completely normal structure, but anyway:

The first layer (top-down) has only one node: [].
The second layer adds n nodes within it: [ [][][][] ].
The third layer adds n nodes within each of those: [ [ [][][][] ] [ [][][][] ] [ [][][][] ] [ [][][][] ] ]
etc.

So the number of nodes per layer would be 1, n, n x n...
Or in other words: n to the power of 0, n to the power of 1, n to the power of 2...
And as a geometrical shape, it starts with a zero-dimensional dot, followed by a 1D line, a 2D square, a 3D cube, a 4D hypercube (tesseract), a 5D hypercube (penteract), etc.
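
Just to sanity-check that arithmetic, here's a quick Python sketch (my own toy code, assuming every node gets exactly n children; the [][][][] example above corresponds to n = 4):

def nodes_per_layer(n, layers):
    # New nodes added at each layer: 1, n, n^2, ...
    return [n ** k for k in range(layers)]

def total_nodes(n, layers):
    # Closed form of the geometric series 1 + n + ... + n^(layers - 1); assumes n > 1
    return (n ** layers - 1) // (n - 1)

print(nodes_per_layer(4, 4))  # [1, 4, 16, 64]
print(total_nodes(4, 4))      # 85

So with n = 4 and four layers, the tree holds 85 nodes in total.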

I was looking at shapes since I came across the term "the space of possible languages".

It seems there's been a huge misunderstanding in generative linguistics. Some sources say that when formal language theory was established, linguists saw that the space of possible (formal) languages is infinite, and yet human languages turned out to be surprisingly close to one another. Chomsky actually wrote that if "Martian scientists" visited Earth, they would conclude that, phonological variation aside, all earthlings speak basically one language.

It seems to me that the space of possible languages was pictured as an endless 2D canvas, with all human languages confined to an area in some arbitrary location.

But now that we've been working on minimalist grammars, it's quite obvious that the shape of the space is actually an upside-down pyramid, starting from a narrow point (S --> a) and expanding step by step towards more complex grammars. As complexity increases, we quickly reach arithmetic and programming-language syntax, followed by a formal grammar of Toki Pona (http://www2.hawaii.edu/~chin/661F12/Projects/ztomaszewski.pdf), a formal grammar of Esperanto (https://www.duo.uio.no/handle/10852/9411), the simplest natural languages (contact languages), then further natural languages, and finally the most complex natural languages. I don't know of anything more complex than those.
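
To make that narrow starting point concrete, here's a little Python toy of my own (not from either of the papers above): the minimal grammar S --> a generates exactly one string, and adding just one recursive rule already yields an infinite language. That single step is the kind of widening I mean.

minimal   = {"S": [["a"]]}              # S --> a
recursive = {"S": [["a"], ["a", "S"]]}  # S --> a | a S

def generate(grammar, symbol="S", depth=3):
    # Enumerate strings derivable from `symbol`, cutting recursion off at `depth`.
    if symbol not in grammar:
        return [symbol]   # terminal symbol
    if depth == 0:
        return []         # recursion budget exhausted
    results = []
    for rule in grammar[symbol]:
        combos = [""]
        for sym in rule:
            combos = [c + s for c in combos
                            for s in generate(grammar, sym, depth - 1)]
        results.extend(combos)
    return results

print(generate(minimal))    # ['a']
print(generate(recursive))  # ['a', 'aa', 'aaa'] ... and so on without bound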

So, obviously, human languages are about as far from sitting in an arbitrary location as that shape is from being a boundless canvas.

The original idea was that languages were assumed to be logically arbitrary, and yet all human languages are apparently based on certain patterns; THIS was the reason why children would have to be born with an innate knowledge of syntax. The space of possible languages is (or was supposed to be) so huge that there would be no way to master it, so if children were born tabula rasa, they could never learn all the logical rules of language during their lifetime (Chomsky called this the poverty of the stimulus).

I've added a chapter to my article about the pyramid shape of the space of possible languages, where minimal grammars form the base, human languages lie on top of them, and the infinite space continues only beyond the complexity of human languages.