Posted by Thomas A. Russ on 04/20/07 16:49
dpapathanasiou <denis.papathanasiou@gmail.com> writes:
> > This is unfortunate why? Because of the high correlation between
> > people who have something to say worth reading and those who can write
> > XML without screwing it up? Face it, HTML is a markup language
> > historically created directly by humans, which means you *will* get
> > good content with syntax errors by authors who will not fix it.
>
> But this problem was entirely preventable: if Netscape and early
> versions of IE had rejected incorrectly-formatted html, both people
> hacking raw markup and web authoring tools would have learned to
> comply with the spec, and parsing html would not be the nightmare it
> is today.
On the other hand, it could also be argued that, especially early on,
before web authoring tools existed, such laxity contributed to the
widespread adoption of html. Because the renderer was not particularly
picky about its input, authors could hand-create html pages without the
frustration of having them rejected and not appear at all.
That provided a nicer development environment (somewhat reminiscent of
Lisp environments), where things would mostly work even if not every
part of the document were well-formed and correct. The author could
then go back and fix the places that didn't work. That iterative
workflow would still have been possible under strict rendering, but I
do think that lax enforcement of the standards helped the spread of
html in the early days.
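
For what it's worth, the contrast is easy to demonstrate today with
nothing but Python's standard library. This is just a minimal sketch,
and the malformed snippet is a made-up example:

    # Lenient vs. strict parsing of the same broken markup.
    from html.parser import HTMLParser
    import xml.etree.ElementTree as ET

    soup = "<p>Hello <b>world<br></p>"  # <b> never closed, <br> not self-closed

    # The tag-soup-tolerant parser just reports what it sees.
    class TagLogger(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("start:", tag)

    TagLogger().feed(soup)      # prints p, b, br -- no complaints

    # A strict XML parser rejects the same input outright.
    try:
        ET.fromstring(soup)
    except ET.ParseError as err:
        print("rejected:", err)  # mismatched tag

The first parser is the early-browser attitude: render what you can
and keep going. The second is what a strict html would have looked
like: nothing appears until the document is well-formed.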
--
Thomas A. Russ, USC/Information Sciences Institute