Re: HTML preprocessor

Posted by Andy Dingley on 01/18/07 15:55

aa wrote:

> Client-side caching works when there are files external to a given HTML

The question is not "Does caching work for the other documents" but
rather "Does caching still work for the merged HTML, if I use a
particular technique to merge it?"

For SSI, caching is fine.
For client-side assembly, caching is not possible.
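
Concretely, the SSI case looks something like this (Apache-style
syntax; the file names here are invented for illustration):

  <!-- page.shtml : the server expands the directive below
       before anything is sent to the browser -->
  <body>
    <p>Main content here.</p>
    <!--#include virtual="/includes/footer.html" -->
  </body>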

> If you include these files at publishing, client side cache does not help
> downloading pages using the same, say footer.

The footer will already have been merged into the resulting HTML
document. Its content is cached as part of the main document; the
file itself is never requested and never visible, so the fact that it
isn't cached separately doesn't matter.
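
To make that concrete: whether the merge happened at publish time or
via SSI, what the browser receives for the page sketched above is one
plain HTML document with the footer already inlined, something like:

  <body>
    <p>Main content here.</p>
    <p>Copyright and navigation links that were in footer.html</p>
  </body>

There is no second request for footer.html, so its cacheability never
even arises; the merged page is cached like any other document.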


> Yet I have no concerns for this for I am not a graphic addict and my pages
> usually are very small hand coded things

The notion that "I can do something badly because I don't do much of
it" isn't conducive to developing good skills.

I do this. We all do this. But I don't _like_ doing it, and sometimes
I work on a big site where I can't do it any more. Then it's useful
to know beforehand how to do things right.

Learning to do things right is often hard and lengthy. Once you've
learned though, it's usually quicker and easier to do them right
anyway, and everywhere.


> If there is
> no access to server-side scripts, then JS works fine: to change a menu I
> need to change and upload just one JS file, no matter how many HTML
> pages use this menu.

This is a complete red herring. Your argument is "inclusion is good,
therefore client-side inclusion is also good".
Our argument is instead "server-side inclusion is better than
client-side inclusion". There is no contradiction here because there
is no overlap.

However no-one is advocating an absence of inclusion (the only case
worse than your advice). Server-side inclusion is easily available in
most cases and can still be obtained in the others, by less direct
routes.
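
For contrast, the client-side route under discussion looks something
like this (menu.js is a made-up name):

  <!-- repeated in every HTML page -->
  <script type="text/javascript" src="menu.js"></script>

  // menu.js, shared by all pages, writes the menu into the page
  document.write('<ul><li><a href="/">Home</a></li></ul>');

The browser can cache menu.js itself, but the assembled page never
exists as a single document, and a user agent without JS (a minimal
search spider, say) never sees the menu at all.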

> Anyone who does not fit into your model is, if you let me use your
> own lexicon, "a clue-proof idiot"

No, not anyone. Just someone, like yourself, who begins as merely
ignorant but then just becomes entrenched in their ignorance rather
than bothering to learn something. It's your choice. No-one else cares.

> > That is simply ridiculous. Of course navigation needs to be accessible
> > to minimal non-JS spiders.
>
> So if you do not accept the concept of different purposes of websites and
> different target audiences, then I respect your opinion and memorise it.

There are two fallacies in your argument here.

Firstly there are indeed "different websites". There are even two
"groups of websites": one group cares about search engine performance
and one doesn't. However the first group is far, far bigger than the
second (which is mostly kids' homepages and photos shared with the
family). Pretty much all of us here, whatever sort of page we write,
care very much about search engines.

Your fallacy though is to equate this categorisation with a
categorisation by purpose or implementation technology. It doesn't
matter whether you're large or small, graphical or text: chances are
that you're in that huge group of search-engine-hungry sites. You
simply cannot say "My page is small and is made of badly-sized
unreadable text, therefore I don't care about search engines".
