great for coordinating an audio corpus with a derived textual corpus.
It's less helpful when generating an audio edition of a text-native
hypertext work to listen to on the bus on the way to work.
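For readers following along: this is roughly the kind of alignment <timeline> is designed for. A minimal sketch (not from this thread; the element and attribute names follow the TEI Guidelines, but the file name, timings, and IDs are invented for illustration):

```xml
<!-- Hypothetical example: anchoring transcript utterances to points
     in an audio recording via <timeline> and <when>. -->
<timeline unit="s" origin="#t0" corresp="recording.mp3">
  <when xml:id="t0" absolute="00:00:00"/>
  <when xml:id="t1" interval="2.5" since="#t0"/>
  <when xml:id="t2" interval="5.0" since="#t0"/>
</timeline>
<u start="#t0" end="#t1">First utterance of the transcript.</u>
<u start="#t1" end="#t2">Second utterance.</u>
```

This works well when the audio is primary and the text is derived from it; it offers no obvious handle on the reverse problem of rendering a text-native pointer as audio.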
On 02/04/12 21:29, Laurent Romary wrote:
> Isn't <timeline> intended to help you do this? http://hal.inria.fr/inria-00630289/en
> On 2 Apr 2012 at 11:24, stuart yeates wrote:
>> On 02/04/12 20:24, Lou Burnard wrote:
>>> Well, the Guidelines are in a more or less constant state of revision,
>>> so we will certainly consider these revisions. However, as others have
>>> pointed out, they seem to be labouring under some misapprehension about
>>> the semantics of the <ptr> element. A <ptr> marks a point in the source
>>> where there is a reference to something else: that's all it does. The
>>> value of the @target attribute identifies the something else: that's all
>>> that it does. Your processing application has to decide for itself how
>>> to render the pointer since pointers (unlike words) don't have any
>>> obvious output format: in the case of the Guidelines we render the
>>> pointer elements with links and do some gymnastics to pick up an
>>> appropriate form of words to represent them, along with some forms of
>>> magic appropriate to the output medium (if you consider HTML active
>>> links to be magic), but we could do it in other ways.
>> On a related topic, I'm very interested in talking to anyone who's done anything cunning (or had any cunning thoughts) with regard to pointers / references and conversion to audio.
>> When we were doing our experiments in audio we were completely stumped by how to represent pointers, except for the plain table-of-contents pointers, which are natively representable in DAISY (and ePub, which has inherited the DAISY metadata format).
> Laurent Romary
> INRIA & HUB-IDSL
> [log in to unmask]