
Subject: Re: <c> tag
From: Peter Flynn <[log in to unmask]>
Reply-To: Peter Flynn <[log in to unmask]>
Date: Sun, 4 Mar 2018 16:39:20 +0000

On 04/03/18 14:59, Ciarán Ó Duibhín wrote:
> Grateful thanks to Peter, Syd and Martin for taking the trouble to 
> answer, but I seem to have given everyone the impression that I want
> to transform a TEI text containing <c> tags into another text, or
> even two other texts.  That wasn't what I had in mind at all.

I suspected that might be the case, hence my cagey wording.

WARNING: you need to use a fixed-width font like Courier to read the
examples I give below.

> What I envisage is inputting a text containing <c> tags

Can we be clear: do you mean a valid (or at least well-formed) TEI XML
document which allows character-level linguistic markup? Or do you mean
just a chunk of text with pointy brackets around the letter 'c'?

> to a TEI-aware indexing or concordancing program.  Xaira is a program
> of this type, but, when it is extracting indexing terms (tokenising),

OK, another point of clarity needed. "Tokenising" in that sense may or
may not be the same thing as the operation performed by the XSLT2
function tokenize(). The XSLT2 function returns a sequence of atomic
values (strings) which were split apart at the specified delimiter
(actually a regular expression). So tokenize($string, ' ') when $string
is the sentence "All is discovered. Flee at once!" will return six
words, keeping the case and the punctuation. This may not be what Xaira
means by tokenising.
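
For example, this literal XPath 2.0 call

  tokenize('All is discovered. Flee at once!', ' ')

returns the six-item sequence

  ('All', 'is', 'discovered.', 'Flee', 'at', 'once!')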

> I haven't been able to make it handle the <c> tags in the way which I
> might expect "non-lexical characters" to be handled, even when it is
> informed that the text is TEI-conformant, not just XML-conformant.

To be frank, I'd give up on what appears by now to be an unsupported
utility if it can't do what you want. You just need to define what you
want in sufficient detail for (eg) XSLT2 to do it.
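
As a minimal sketch of the kind of definition I mean (my own mode names,
and assuming "non-lexical" simply means "drop the content of c when
building tokens"):

  <xsl:stylesheet version="2.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:tei="http://www.tei-c.org/ns/1.0">

    <!-- in "tokens" mode, suppress the content of c elements -->
    <xsl:template match="tei:c" mode="tokens"/>

    <!-- all other text passes through untouched -->
    <xsl:template match="text()" mode="tokens">
      <xsl:value-of select="."/>
    </xsl:template>

  </xsl:stylesheet>

You would start it off with <xsl:apply-templates mode="tokens"/> from
wherever tokenising begins; for displaying context you would use a
second mode in which tei:c is left to the default rules, so its content
is kept.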

> Briefly, a concordancing program (for example), written in a programming
> language, will read a text, extracting each token 

"Token" being what in this case? A word?

> (dropping non-lexical characters within it) 

OK, those identified by the c element type, or a list of characters to skip?

> and noting the token's offset within the text, 

Ah. That's an entirely different <insert your own cultural meme: mine is
a kettle of fish or a pair of sleeves>. Is the text normalised (all
multiple spaces and newlines converted to single spaces) first? Is the
presence of preceding non-lexical characters to be included in the
offset or not (presumably yes, otherwise it will never align)? And is
the additional space occupied by the TEI markup itself also to be taken
into account? Does the offset re-zero itself at points in the document
(eg start of new sections)?
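
One of these, at least, is cheap to answer: if normalisation is wanted,
XPath's normalize-space() does exactly that collapsing, so

  normalize-space('All   is discovered.')

returns 'All is discovered.'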

> and putting a record into a file, 

What kind of record is this? A single line of unmarked characters? What
determines the start and end of a record?

> which is then sorted alphabetically on the tokens. 

You mean the *content* of the record (presumably tokens with their
associated offsets) is sorted? Or the records themselves (on what)?

> This sorted file is then read back, and for each record, we display
> the token (still without non-lexical characters)

The implication here is that 1 record = 1 token = 1 word. Is that
correct? In other words, for my earlier example, sorted:

1,All
25,at
8,discovered.
20,Flee
5,is
28,once!

> and, going back to the text, display a segment from around the offset
> (this time retaining the non-lexical characters). The output is a
> concordance, not another XML version of the text.

OK, now we are getting somewhere. This is called KWIC format (KeyWord In
Context), and was (is?) the standard output of text searches in the days
of unmarked text, and into SGML days (in the CELT project we used PAT
for searching SGML TEI P2; it was [a] blindingly fast, and [b] returned
KWIC). In the above example, with a span of 20 characters either side,
we would get

1. All:        ...ng is the sentence "All is discovered. Fl...
2. at :        ...is discovered. Flee at once!" will return...
3. discovered: ...he sentence "All is discovered. Flee at o...
etc.
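
In XPath 2.0 terms, given a token $tok at offset $off (hypothetical
names), each context line is roughly

  substring($text, max((1, $off - 20)), string-length($tok) + 40)

with the dots and padding supplied by whatever serialises the line.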

> Concordance programs, which have been around for many decades, routinely
> handle non-lexical characters, which they call "padding". 

Normally you would define a list of these: comma, period, semicolon,
etc. I think what confused the issue was that you were giving an
alphabetic letter in the c element.
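
For a fixed list, one translate() call does the stripping:

  translate($token, ',.;:!?', '')

deletes from $token every character that appears in the second argument.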

> With TEI markup, we can declare each instance of the character
> individually to be non-lexical or not, which is something I need to be
> able to do.  But few concordance programs can handle TEI markup, other
> than by stripping out the tags altogether.

Right. But it doesn't sound terribly difficult, and XSLT2 is IMNSHO
ideal for the purpose.
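
To give a flavour (a sketch only, assuming the text has already been
reduced to plain single-space-separated form and is held in $text), the
offset,token records shown above could be produced like this:

  <xsl:variable name="words" select="tokenize($text, ' ')"/>
  <xsl:for-each select="1 to count($words)">
    <xsl:variable name="i" select="."/>
    <!-- offset = 1 + lengths of all preceding words plus their spaces -->
    <xsl:variable name="off"
        select="1 + sum(for $j in 1 to $i - 1
                        return string-length($words[$j]) + 1)"/>
    <xsl:value-of select="concat($off, ',', $words[$i], '&#10;')"/>
  </xsl:for-each>

Sorting the records alphabetically is then a matter of xsl:perform-sort,
or an external sort on the output file.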

> A TEI-aware concordance program would do "the right thing" with every
> tag, including <c>. 

I suspect the definition of "the right thing" is different for every TEI
project of any significant magnitude. The CELT project has gazillions of
instances of the character-level element types used in linguistic markup,
combined with the standard TEI features for editorial intervention,
semantic correction, lemmatisation, and parallel readings, and physical
aspects like "the rest of the name has been gnawed by rats". And every
project has its own list of "weird stuff", like "we need <lg> within
<head> because some titles include fragments of poetry".

> If "non-lexical character" means anything, the right thing with <c>
> must be to omit or include the content depending on the operation.
> Tokenisation demands omission, display of context demands inclusion,
> at different points in the concordancing or indexing process.

Yep. All doable once "the right thing" has been defined.

> The background to all this is that I have texts in non-TEI markup,
> and programs which index and retrieve them, an essential feature of
> which is to take account of non-lexical characters.

I haven't had to do this at corpus level for many years; I would be
surprised if someone hasn't already done this in XSLT2 for TEI.

> I was considering writing a conversion from my own markup to TEI,
> with the object of making the texts more widely usable. 

That would be a very generous and public-spirited action.

> But unless there is a TEI construct for non-lexical characters, and 
> off-the-shelf TEI-aware programs for indexing, concording, etc. that 
> implement it, not only outside <w> but also within <w>, there would 
> be little point in such a markup conversion.

Apart from Xaira I don't know of anything off-the-shelf. But as Syd
implied, handling the text is not the problem; the problem is defining
what needs to be done for every element type in the TEI schema/DTD that
you are using.

///Peter
