I have no definitive answer for you, but my assumption has always been that those are x,y coordinates to use for an overlay. So the surface is a 200x300 space, filled by the image file, and the zone indicates the boundaries of an initial or miniature within that space.
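To make that concrete, here's a minimal Python sketch of the overlay interpretation — parse a zone's @points value and scale it from the surface's user-defined 200x300 space onto the pixel grid of the actual image. The function names and the 2000x3000 pixel size are my assumptions for illustration, not anything from the Guidelines:

```python
def parse_points(points_attr):
    """Parse a TEI @points value like "4.8,31.0 5.4,30.7" into (x, y) tuples."""
    return [tuple(float(v) for v in pair.split(","))
            for pair in points_attr.split()]

def to_pixels(points, surface_w, surface_h, image_w, image_h):
    """Scale coordinates from the surface's user-defined space to image pixels."""
    sx, sy = image_w / surface_w, image_h / surface_h
    return [(x * sx, y * sy) for x, y in points]

pts = parse_points("4.8,31.0 5.4,30.7 5.5,32.2")
# Surface space is 200x300; suppose the image file is 2000x3000 pixels
# (an assumed size). Each surface unit then maps to 10 pixels, and the
# fractional zone coordinates land at sub-unit positions within that grid.
pixels = to_pixels(pts, 200, 300, 2000, 3000)
```

On this reading the fractional values are harmless: the surface space is just a ratio, and the renderer scales it to whatever resolution the image actually has.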
> On Dec 15, 2017, at 8:41 AM, Martin Holmes <[log in to unmask]> wrote:
> Hi all,
> There's an example in the Guidelines that has always bothered me, and I'm wondering whether anyone can explain it or remember why it was constructed this way. It's this one:
> <surface ulx="0" uly="0" lrx="200" lry="300">
>   <graphic url="Bovelles-49r.png"/>
>   <zone points="4.8,31.0 5.4,30.7 5.5,32.2 5.8,32.8 6.1,33.4 5.5,33.7 5.1,33.3 4.6,32.2"/>
> </surface>
> It shows a fairly straightforward use of <surface> and <zone> to define an area on an image. What puzzles me is that the coordinate space defined on the surface, which is 200 x 300, is then subdivided by the zone/@points attribute, which uses floating-point numbers like 4.8 and 31.0.
> My question is: why do this? Why define a coordinate space that you then have to subdivide in this way? Since it's a user-defined coordinate space, there's no need to do this at all. If the resolution of the image is 200x300, then there's no real meaning in a value less than 1; if the resolution of the image is higher, then why not use it for the coordinate space?