TEI-L Archives
TEI-L@LISTSERV.BROWN.EDU


TEI-L, July 1996

Subject: SGML for access [long]
From: Joe Clark <[log in to unmask]>
Reply-To: Joe Clark <[log in to unmask]>
Date: Mon, 15 Jul 1996 10:35:25 CDT
Content-Type: text/plain
Parts/Attachments: text/plain (179 lines)

Hi. Lou Burnard told me about this list.
 
I am a freelance writer (for the moment) who has been interested in
television captioning and other captioning issues for 18 years. I've
written a dozen articles on the topic, I've given a presentation here
and there, and I run a mailing list on media-access topics
([log in to unmask]).
 
I know a *very* little bit about SGML and believe that the world needs
DTDs and other standards for four access technologies: captioning, audio
description, subtitling, and dubbing. Some background information
follows.
 
o First, definitions:
 
Captioning: Rendering dialogue and other sounds in written words. Sign
language has NOTHING TO DO with captioning.
 
Closed-captioning: Captions transmitted in the form of a code. You need
a decoder (or, more likely, just a decoder chip) to turn the captions
into visible words. Nearly all North American TVs carry decoder chips as
standard equipment now.
 
Open-captioning: Captions that are an indelible part of the picture and
are always visible. (Open-captioning effectively does not exist.)
 
[Note: Captioning and subtitling have as little in common as bicycles
and motorcycles. Three big differences are: Captions are in the same
language as the audio (with relatively rare exceptions), denote
meaningful sound effects, and move to indicate the position of the
speaker. Subtitles are a translation, ignore sound effects, and are
always located in the same spot on-screen.]
 
Audio description: Rendering visual details in a spoken narrative. In
audio description, a special narrator succinctly describes action,
settings, facial expressions, onscreen graphics, clothing, and other
visual details.  The narrator speaks out loud; A.D. is an auditory
medium, not a visual one.  Narrators typically speak during pauses in
dialogue or at other appropriate moments, but sometimes they narrate
over dialogue, over music, and so on.
 
--
 
How does this relate to information technology and SGML? Some factoids
to consider:
 
* TV closed-captioning of prerecorded programs in North America is done
using any of several rather primitive DOS programs. Real-time captioning
of live programs uses the same software and hardware with the addition
of a very skilled court reporter who enters dialogue into a stenotype
machine (along with other annotations necessary to captioning). Those
entries are in shorthand and are then translated into actual words via
lookup tables.  (This means that homonyms like "four," "for," "fore,"
"IV," and "4" require distinct keystrokes. It's not exactly easy keeping
track of all those keystrokes.) The words are then spit out for display
on a decoder-equipped TV.
 
* Closed-captioning in North America is encoded on Line 21 of the
vertical blanking interval. The VBI is a narrow band of
normally-invisible picture lines between the bottom and the top of the
TV picture. (That's not a totally accurate description, but if you have
a TV with a vertical-hold control, you can set the picture rolling
slowly and see the VBI as a mostly-black bar between the top and bottom
of the picture.) North American TV signals are made up of 525 lines
(again, not totally accurate); the top 21.5 lines are in the VBI and are
ordinarily invisible. (They're not magic.  They're perfectly visible if
you look for them. It's just that TV sets are adjusted to keep the VBI
out of sight.) Captions are encoded on line number 21 of those 21.5
lines. The caption codes are relatively wide rectangles of light that
flit back and forth. VCRs have no trouble recording and playing those
signals.
 
* CC in PAL-standard countries like most of Europe and Australia comes
about as an offshoot of the World System Teletext technology. You just
tune to a certain page of teletext (888, usually) and you suddenly see
captions on any captioned show. This technology uses several lines of
the VBI; all the encoding takes the form of tiny dots in the VBI which
are too small for anything but Super-VHS VCRs to record. This is a
severe limitation, but there are some provisos to it.
 
* Typography in both the Line 21 and WST systems is crap. Megacrap,
even.
 
* Captioning is a huge industry. Effectively all prime-time shows on all
networks, everything remotely resembling a newscast, many daytime shows,
thousands of home videos, most national commercials, lots of music
videos, training tapes, and more are captioned. This is a source of
money *and* a source of intellectual property. Think about it. But the
tools being used for captioning are very primitive.
 
* Audio description on TV is relatively rare. PBS is the biggest source
of A.D.; described programs carry a mix of descriptions + main audio in
the Second Audio Program subchannel of stereo TV. (If you have a stereo
TV-- most midrange to high-end models are stereo-- you can set your TV
to SAP.  Won't do you much good, though, for everyday TV-- only a few
stations broadcast in stereo and virtually none use SAP.) The
descriptions, then, are "closed": You needn't be bothered with them
unless you want to be.  Unfortunately, while all TV signals have a VBI,
not all have SAP, so A.D.  is not a ubiquitous medium the way CC is.
 
* WGBH, the Boston PBS Überstation, is a dynamo in access
technology. It is home to the Caption Center (oldest captioner on earth,
and the best, though their standards are slipping), the Descriptive
Video Service (does A.D. for PBS and other clients, and also sells a
small home-video line of movies with always-audible descriptions), and
the National Center for Accessible Media (researches new technologies,
like Web captioning and captioning in movie houses). I know many people
there and actually get along with some of them. www.wgbh.org. Even these
people aren't really thinking all that broadly about the potential of
access technologies, though again that has many provisos.
 
* To caption a prerecorded program, you transcribe it. Usually the
captions are an edited version of that transcript-- reading is slower
than speaking, and there are speed limits to caption transmission-- but
if you retained a verbatim transcript with all proper annotations of
sound effects (phone ringing, thunder, etc.) and speaker identification,
suddenly you have a viable text-only analogue of an audiovisual program.
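 
        To make that concrete -- the tag names here are pure inventions on
my part, just to illustrate the idea -- a fragment of such a transcript
might be marked up along these lines:
 
    <!-- hypothetical markup; element and attribute names are invented -->
    <caption start="00:01:12:05" end="00:01:15:20" speaker="ANNA">
      We should call the police.</caption>
    <caption start="00:01:15:21" end="00:01:17:10">
      <sound>phone ringing</sound></caption>
    <caption start="00:01:17:11" end="00:01:20:00" speaker="BEN">
      <emph>Don't</emph> answer it.</caption>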
 
* It gets better: Audio description typically happens during pauses in
dialogue. A.D. scripts, then, are quite short-- up to 100 or 200 bursts
of narration. However, it's possible to describe *a whole program*
nonstop, and in fact one project I'm working on will do just that. If
you unite either or both of these A.D. scripts with the CC script,
suddenly you have a rich and complete text-only approximation of an
audiovisual program.
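 
        Again with invented tag names, one of those description bursts
could sit right in the same document, in the pause between two captions:
 
    <!-- hypothetical: an A.D. burst interleaved with the caption stream -->
    <caption start="00:04:10:00" end="00:04:12:15" speaker="ANNA">
      I'll be back by midnight.</caption>
    <desc start="00:04:12:16" end="00:04:16:00">
      She pulls on her coat and steps out into the rain.</desc>
    <caption start="00:04:16:01" end="00:04:18:00">
      <sound>door slams</sound></caption>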
 
* What can you do with that information? Archive it, either on the Web
or your own computer or elsewhere. Monitor it continuously for keywords.
(It is believed that the NSA has done exactly that for years.) Use it
for people who don't want to wait 20 minutes to download a choppy
videoclip from a Web site. And, of course, use it for its intended
purpose, access.
 
--
 
Where research is needed:
 
* SGML. Markups for everything from italics (which have reserved
functions in captioning along with all the regular uses of italics in
print) to speaker IDs to caption-on and -off times to various
annotations for A.D.  tracks are needed. How is this useful? Really
sophisticated captioning/A.D.  software could be developed. More
relevantly, existing nonlinear video-editing systems à la Avid and
programs like Premiere and Acrobat could be extended to understand
SGMLified access codes. This same development process would have to
encompass subtitling and dubbing, too, which I am not talking a whole
lot about here.
 
        Also, if captions were stored as part of an SGML structure, they
could be automatically reformatted in real time for different display
devices, like an LED screen (with a character set different from TV), TV
pop-up captions, TV scroll-up captions, a continuous text-only stream
without paragraph and sentence breaks for computers, or an offscreen
large-print display for visually-impaired viewers.
 
        Or captions created with one software package could be read and
understood by another-- or another country's system. Right now it is
quite tedious to reformat Line 21 CC for PAL CC, and there are various
typographic issues that come up here.
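 
        A very rough sketch of what a fragment of such a DTD might look
like (every name and content model here is invented, not an existing
standard; a real design would need much more, including positioning,
language, and device information):
 
    <!-- hypothetical DTD fragment; names and content models are invented -->
    <!ELEMENT caption  - - (#PCDATA | emph | sound)* >
    <!ATTLIST caption  start    CDATA  #REQUIRED  -- caption-on time  --
                       end      CDATA  #REQUIRED  -- caption-off time --
                       speaker  CDATA  #IMPLIED   -- speaker ID -- >
    <!ELEMENT emph     - - (#PCDATA)  -- italics, incl. reserved caption uses -- >
    <!ELEMENT sound    - - (#PCDATA)  -- non-speech sound effect -- >
    <!ELEMENT desc     - - (#PCDATA)  -- one burst of audio description -- >
    <!ATTLIST desc     start    CDATA  #REQUIRED
                       end      CDATA  #REQUIRED >
 
        Because nothing Line 21- or WST-specific (positioning, character
set, scroll-up vs. pop-up) is hard-coded in a structure like that, the
same instance could in principle be poured into any of the displays above,
or exchanged between captioning systems in different countries.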
 
* Web access. Trying to educate Webmasters that the WWW is not an excuse
to post pretty pictures is a battle we've already lost. But making those
graphics accessible *is* possible. Same with audioclips and videoclips.
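 
        HTML is itself an SGML application, and the existing ALT attribute
is the minimal version of this idea -- a sketch, with the filename and
description invented:
 
    <!-- a text alternative travels with the graphic -->
    <IMG SRC="decoder-diagram.gif" ALT="Diagram of the Line 21 caption data path">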
 
* Subtitling and dubbing are the norm outside English-speaking
countries.  Both are possible in the same movie; it is then possible to
caption subtitled and/or dubbed movies.
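 
        One way a shared DTD might keep all of these straight -- again
purely a hypothetical sketch -- is to carry the language on each element,
so a French film could have same-language captions and translated
subtitles side by side:
 
    <!-- hypothetical: parallel caption and subtitle streams for one scene -->
    <caption  lang="fr" start="00:10:02:00" end="00:10:04:10">On sonne à la porte.</caption>
    <subtitle lang="en" start="00:10:02:00" end="00:10:04:10">Someone's at the door.</subtitle>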
 
--
 
So: I am interested in setting up a working group to create DTDs for
ONLY the four access technologies I mentioned. SoftQuad isn't
interested. Is anyone else?
 
 
                                        Joe Clark
                                    [log in to unmask]
                             <http://www.hookup.net/~joeclark>
