Dear reader, Stephen,
The recent submission by Stephen Davis raises an issue I have to go into
every time I talk about the structural model of SGML, i.e. that of
document grammars, DTDs. What do we need a document grammar for anyway?
And OK, if we need it, why does it have to be so complex? And so
restrictive? When I have a bad day and lack the eloquence to explain this,
people immediately ask, 'so what is it good for, then?'
The issue raised by Stephen concerns SGML, not the TEI. The TEI works
within the bounds, requirements, and nature of SGML. Content models, and
therefore rules for placing element types at specific positions, are
inherent to SGML. If you don't want to be constrained by a model (oops!,
I nearly added 'of information'), you abandon half of SGML.
Ok. For some source types or types of document production this may be valid.
For instance, the OED uses SGML but has no (real) DTD. As another example,
for similar reasons, a publishing company for legal documents in the
Netherlands uses a 'weak' DTD, largely defined in terms of <div>s for
several separate kinds of divisions in the legal texts, simply because no
clear-cut 'strong' DTD can be devised. Too many exceptions.
If that is so, be it so. For other sources (and production work) a strong
DTD may be helpful or even required. I talked to a guy in document
production work, with several authors responsible for producing
course material. He just _loved_ this aspect of SGML. And I can imagine so.
If a writer is bound to a very strict set of constructs applicable to a
specific document type, SGML-aware editors by nature punish all attempts
to deviate from it.
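To make the contrast concrete, a strict document type for such course
material might, as a sketch (all element names invented for illustration,
not taken from any real course-material DTD), look like this:

```sgml
<!-- Hypothetical strict DTD for course modules: every module must
     contain exactly one title, one goal statement, and one or more
     lessons, in that order.  A validating, SGML-aware editor will
     refuse any other arrangement. -->
<!ELEMENT module  - - (title, goal, lesson+)>
<!ELEMENT title   - - (#PCDATA)>
<!ELEMENT goal    - - (#PCDATA)>
<!ELEMENT lesson  - - (title, para+)>
<!ELEMENT para    - - (#PCDATA)>
```

An author who tries to open a lesson without a title, or to put a goal
after the lessons, is stopped at once by the parser.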
Somewhere in between is the TEI. The choice has been made to use the
structural model of SGML, that of defining rules for placing elements in the
text to signal some informational aspect. The _extent_ to which this has
been done may have been too strict, or too relaxed, that's for the
community to decide.
I just wrote: 'abandon half of SGML'. The other half is at least as
important as the first. And that's the _exchange model_ SGML offers. Maybe
this is not an issue Stephen wants to raise, but I'll stress it anyway: we
need a clear model of document exchange in order to leap over software,
hardware, distance and time. If we are entering all that text into the
computer, we want to be able to process the information in 20 years in the
same way (and, most likely, not with the same software) as we do today. We
do not want to lose the data because the software changed. We want to be
able to process the data on any machine, pass it on to any site, and
archive it for later access. And, in the process, we do not want to lose
a single bit of information. In my opinion, SGML offers us the best
available framework for this.
I had to get this off my chest, as I feel a tendency to question SGML as
a data-encoding approach. (If that's not what you intended, Stephen,
I'll be more to the point where the TEI is concerned.)
Stephen Davis writes:
Realistically, how often do we want to wrestle with why a
particular element isn't defined within another element? This
seems to let the container drive the content, where it should
really be the opposite. WHEREVER I need a <persname> I should be
able to use it. At this rate it looks as though it would be best
to define every element as possibly appearing within any other
element, in any order! And, actually, why not?
There may be several reasons for this. The designers may not have thought
of it. The designers may with good reasons have decided not to allow it.
There may be an alternative that serves the purpose in that context. If
you are talking about TEI here (which by the way is _not_ a DTD), and not
about just some SGML application, the designers, in my opinion, would want
you to do one of two things:
1) tell them you want the element, or elements of that kind, at that
position, i.e. change the guidelines, in order to give it an official
'ring'. This will tell the community that the element, by nature, is
valid in that context.
2) keep the guidelines for what they _are_, i.e. 'guides' in creating a
rich information-bound representation of a source. And add the change
to your local copy.
   In this case you may even decide to record the change formally, for
   which the guidelines give you a formalism. This way you will be able to
   pass on the data in its variant form and allow the receiving party to
   understand (or adapt) the DTD you extracted from the tag sets and
   altered to suit your (common) needs.
   Note that you'll have to send along the document type definition you
   have applied together with the document instance. The public availability
   of P3 doesn't change this (except, perhaps, for the most common use).
The second approach is valid in all cases: what you get is a framework
from which to start encoding the sources. You may alter the rules in any
way you like. For instance, add <persname> to the model class active at
the levels where you want it to be acceptable. Model classes can be changed
in the document type definition subset, so you do not even have to alter
the tag sets themselves.
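As a sketch of how such a subset change looks in TEI P3: element classes
are controlled by parameter entities, and the extension entities (on the
pattern x.<class>) default to nothing, so a declaration in the DTD subset
is read first and wins. The entity and file names below follow the general
TEI pattern; check your own copy of the tag sets for the exact names.

```sgml
<!DOCTYPE TEI.2 SYSTEM "tei2.dtd" [
  <!-- This declaration is read before the public tag sets, so it
       overrides their default (empty) declaration of x.phrase and
       splices persName into the phrase-level model class, making it
       valid wherever phrase-level elements are. -->
  <!ENTITY % x.phrase "persName |">
]>
```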
Stephen also writes:
Perhaps we will need to rethink a good part of the structure of
SGML documents, e.g., to use broad hierarchies reflecting
significant structural components of the text, and then simply
defining an extended data dictionary that can be applied wherever
needed under any of the hierarchical levels.
It is the nature of the standard to allow such constructs. The standard has
been defined the way it is to make this possible. How do I explain? This
_is_ SGML! The core of this lies in production 101, and the note to 104,
stating that an entity declaration in the document type definition subset
overrules that of the public document type definition. In fact, the DTD
subset is read before the 'main set', and duplicate entity declarations
are ignored. This note is the key to what Stephen suggests: you simply set
up a generalized document structure and provide default models (element,
attribute) for these structures. In the subset, you can then opt for a
more liberal, or a more restrictive model for the (set of) element types
in question. You plug in the constraints you need. Or you plug in sheer
freedom. Whatever is needed.
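A minimal sketch of this plug-in idea (all names invented for
illustration):

```sgml
<!-- In the public DTD (say, a file "general.dtd"): the content model
     of <div> is kept in a parameter entity with a liberal default. -->
<!ENTITY % div.model "(#PCDATA | div | p | list)*">
<!ELEMENT div - - %div.model;>

<!-- A document instance can tighten that model in its DTD subset.
     The subset is read before the public DTD, and duplicate entity
     declarations are ignored, so this stricter declaration wins: -->
<!DOCTYPE doc SYSTEM "general.dtd" [
  <!ENTITY % div.model "(head, p+, div*)">
]>
```

Documents that declare nothing get the liberal default; documents that
need constraints plug them in locally, exactly as described above.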
Perhaps, what Stephen is getting at here, is a _working strategy_, rather
than a 'change in the way we look at SGML documents'. Yes, a common
working strategy would be valid. If we look at the TEI's P3 as currently
defined, we see that different strategies in applying it are valid, i.e.
each results in valid SGML. Documents may, however, be encoded in
significantly different ways. This has been discussed on TEI-LIST before.
However, I feel Stephen intends to 'transcend' TEI, and challenges SGML as
an abstract language. SGML is 'abstract' on both levels mentioned above:
- abstract in how to signal and relate information in specific sources (DTD)
- abstract in how to encode the material (SGML declaration).
The first abstraction is the one being challenged. In this framework Liam
Quin wrote:
     And in the proceedings of the House of Lords, perhaps we might see
<TITLE>The Right Worshipful Sir</TITLE> <Name>John Owen</Name>
Yes, we might. What we also might see is a table 1 in a relational system
that uses 'title' for a book title, and a table 2 that uses 'title' for
the title of a person. Nobody will challenge relational systems for this.
Everybody will say, as I hope, that it doesn't matter what you call it, as
long as you can process it in a sensible way. I truly see no difference
here. I therefore find Stephen's system designer's cry for uniformity in
DTDs misplaced:
A name is a name is a name, right? Unfortunately, it quite shortly
won't be, and we systems people will find ourselves trying to
write retrieval and indexing tools that have to accommodate a
hundred different ideas of just what and where a personal name is.
I would (with all respect, Stephen) say that you are designing the system
in the wrong way if this poses a problem. The document management and
retrieval system should be able to process hundreds of different types of
documents just as a relational system must be able to process hundreds of
different kinds of tables and relations. So, there must be a way to tell
the system that specific (marked) parts of some document should be
processed this-and-that way. The system must be intelligent enough to deal
with name differences, differences in content, or missing information.
To be overly explicit, nobody needs a "Netscape" for the TEI, for bicycle
maintenance manuals, or for whatever document type is currently in the
picture (and tomorrow may not be). We need a system that can be told what
processing is inherent to what document types, building upon common
knowledge, binding all documents (a document model). This model is offered
by SGML. It is also offered by HyTime for hypermedia aspects, and by
DSSSL for publishing issues. Whether these standards will _succeed_ in
this, time will tell; but the generalized approach is not only attractive,
it is a requirement for robust system design.
Sorry for the long reply,
Arjan Loeffen, Humanities Computing, Faculty of Arts,
Utrecht University, The Netherlands.
++31+302536417 (voice work), ++31+206656463 (voice home)