Posted by Alan J. Flavell on 02/04/06 17:45
On Sat, 4 Feb 2006, Andy Dingley wrote:
[...]
> For the final output, I can transform to HTML output and in some
> ways this is even easier than XHTML (it's hard to generate good
> Appendix C XHTML from most XSLT tools)
Good point.
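(For readers outside the XSLT world: the knob at issue is the serializer's output method. This sketch is my own illustration, not from the post. With method="html", and assuming a no-namespace HTML vocabulary in the source tree, the serializer emits <br>, <hr> and friends in HTML form; with method="xml" you get <br/>, which is Appendix-C-safe only if the tool also inserts the compatibility space, as in <br />, and avoids other XML-isms - which many XSLT 1.0 processors don't.)

```xml
<!-- Hypothetical identity transform, serialized with the HTML
     output method. Assumes a no-namespace HTML vocabulary as input;
     for XHTML-namespaced elements, method="html" would not apply
     the HTML serialization conventions. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes"/>
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```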
> Appendix C XHTML is a kludge, but it's a kludge that works on the
> web and allows XHTML to be served in a way that's at no disadvantage
> compared to HTML.
In practical terms, yes, although it can reasonably be argued that it
*relies* on at least one browser bug.
> Now if I'm already going to be using XHTML internally, then who
> benefits from pushing it into yet another format just to serve out?
Didn't you just give at least one answer to that point, a moment ago?
> I need a good argument for going back to HTML over Appendix C.
>
> Hixie's position is pure sophistry.
Whether you agree with his supporting arguments or not, there's
certainly one point where he's got it spot-on. Vast swathes of
so-called Appendix-C XHTML are in fact unfit to be called XHTML -
they're nothing more than XHTML-ish-flavoured tag-soup - the very
thing that XML claimed it was going to save us from.
The clue is that those who promote the use of XHTML - amongst authors
who have no idea why they are making that choice - have taken us from
a situation where there was one horrible legacy of HTML-flavoured tag
soup, to a situation where there are two horrible legacies of tag
soup, with none of the benefits that were claimed for XHTML. Most of
that stuff is useless as real XHTML anyway - it only gets rendered
tolerably because it's being parsed as "HTML with a deliberate bug".
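(To make that "deliberate bug" concrete, here is a minimal sketch of my own, using Python's standard html.parser, which behaves like the tag-soup parsers in question. A strictly SGML-conformant parser would, under the SHORTTAG null-end-tag feature, treat "<br/" as a complete tag and hand the trailing ">" through as character data; tag-soup parsers instead quietly swallow "<br />" as an empty <br> element.)

```python
# Illustration only: how tag-soup parsers handle Appendix-C markup.
# A strictly SGML-conformant HTML parser should emit a stray ">"
# after each "<br/" (SHORTTAG/NET interpretation); this one doesn't.
from html.parser import HTMLParser

class EventRecorder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))
    def handle_startendtag(self, tag, attrs):
        # Tag-soup behaviour: "<br />" is swallowed whole as an
        # empty element, not parsed by SGML's SHORTTAG rules.
        self.events.append(("empty", tag))
    def handle_data(self, data):
        self.events.append(("data", data))

p = EventRecorder()
p.feed("<p>line one<br />line two</p>")
# No stray ">" ever shows up in the character data: the parser is
# not honouring SGML here, which is exactly the point being made.
```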
As you have said yourself, it's easier to emit good HTML than it is to
emit good Appendix-C-compatible XHTML/1.0, *even* when your internal
process is XML-based. And, since the latter offers *no benefits
whatever to the existing web* as compared to the former (and even
relies on a widespread browser bug, and brings with it some quite
unnecessary additional complications), why not just keep on emitting
HTML, *until* the web is ready to deploy real XHTML with some real
additional benefits relative to either flavour of "text/html"?
Otherwise, I'd venture a hunch that XHTML (at least most of what
currently purports to be XHTML) is due to fester in its own dreck,
alongside the festering HTML-flavoured tag soup legacy, and we'll need
some alternative clean solution (don't ask me what it might be), in
place of the one which XML claimed to offer but which seems to be
failing - except for a few commendable exceptions ("present company",
and all that).
> I'm still waiting for a screenshot of a browser demonstrating the
> "infamous SHORTTAG bug"
A pity, then, that I didn't keep a screenshot of emacs-w3 before it
got deliberately broken to avoid the problem. You don't have to
believe me, but it's nevertheless true. A web search reminds me that
we were discussing it in 2001, but I'm not sure just when emacs-w3 got
nobbled in that way. At that time, Toby Speight (for one) evidently
considered that the popular browsers were broken because of their
failure to implement this non-optional feature of SGML.
> If we take Hixie's own position of "Ivory tower SGML purist who
> hasn't even noticed the M$oft barbarians at the gate", then doctypes
> have always been flexible and extensible by SGML's rules.
Then we get into *real* sophistry, for example that HTML purports to
be an application of SGML while at the same time ruling out constructs
which SGML forbids to be ruled out. But this line of argument gets us
nowhere if you only care about "what works in practice", never mind
the theory.
> Hixie is asking me to throw away the processing capabilities of XML
> in favour of pleasing the tiny handful of SGML-anoraks who even
> understand what the problem is. This is no bargain.
I don't think so. Here's his key advice:
|| If you use XHTML, you should deliver it with the
|| application/xhtml+xml MIME type. If you do not do so, you should
|| use HTML4 instead of XHTML.
^^^
If you interpret that word "use" to refer to what you deliver to the
web, *irrespective* of your internal process, then it seems to me to
be good advice, and consistent with what you said already.
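(As a hedged sketch of what "deliver it with the application/xhtml+xml MIME type" can mean in practice - my illustration, not anything from the post, and deliberately ignoring Accept q-values and wildcards - a server that controls the Content-Type header can send real XHTML only to clients that claim to accept it, and honest HTML to everything else.)

```python
# Sketch only: a deliberately naive reading of the HTTP Accept
# header. Real content negotiation must honour q-values, wildcards
# and ordering; this just shows the decision the advice implies.
def content_type_for(accept_header: str) -> str:
    """Choose a media type along the lines of the advice quoted above."""
    if "application/xhtml+xml" in accept_header:
        # Client claims genuine XHTML support: serve it as XML.
        return "application/xhtml+xml"
    # Otherwise fall back to serving plain HTML as text/html.
    return "text/html"

print(content_type_for("text/html,application/xhtml+xml,*/*;q=0.8"))
print(content_type_for("text/html,*/*;q=0.5"))
```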
He's asking you, for the time being, to do what you already described
above - have your process emit good HTML. You evidently have no
sympathy for the various pillars of argument he uses to support that
advice, but the advice itself is consistent with your own stated
practice.
Your internal processes may be interesting to discuss, but in the
final analysis they're no concern of the web user: *their* only
justified concern is the quality of your final product as emitted from
your web server. As far as I'm concerned, you'd be welcome to code in
well-structured LaTeX, whatever, and convert that to HTML for the web
- the criterion being the quality of the final result, no matter what
your internal process.
> Meanwhile the rest of the world sees MS Office and Dreamweaver as
> appropriate HTML authoring tools, despite their absolutely glaring
> holes. The enemy here is bad and bogus markup with no structure
> whatsoever, not XHTML.
Indeed. But we now have a widespread practical demonstration (as if
it wasn't obvious that this was going to happen) that encouraging
tag-soup cooks to cook a different flavour of tag-soup goes nowhere
towards improving the quality of the web.
I'd have to blame the W3C for failing to foresee the consequences of
offering a transition path from HTML to so-called XHTML, instead of
making it plain that XHTML was meant to be a clean break from an
unwelcome legacy. That they went on to offer specifications for
"Transitional" and "Frameset" XHTML just made things worse.
regards