Posted by Rik on 01/12/07 18:26
J.O. Aho wrote:
 > Rik wrote:
 >> Geoff Berrow wrote:
 >>> Message-ID: <378bb$45a7bf44$8259c69c$19192@news1.tudelft.nl> from
 >>> Rik contained the following:
 >>>> If there are reasonably few html snippets/pages it could be OK.
 >>>> Wouldn't want to try it with 1000+ files though, the filesystem
 >>>> becomes a bottleneck.
 >>> I couldn't say.  I always thought that's what the filesystem was
 >>> good at.
 >>
 >> Well, it's not really designed to hold 1000+ files in one directory.
 >> Split them up into subdirs (for instance, on the first character) and
 >> it'll be much faster again.
 >
 > 1000 files are nothing; finding 10000 files takes no more than 0.02
 > - 0.04 seconds on a good file system. But of course, if you're using
 > something like a FAT file system, things will be painfully slow.
 
I have to admit I'm not that into filesystems; I can only say I've
witnessed it first-hand on a FreeBSD server, where splitting the
directory into subdirectories containing up to about 500-800 files each
increased performance considerably.
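
For what it's worth, a minimal sketch of that bucketing scheme (Python
here just for illustration; cache_root and the filenames are made-up
examples, not from the actual setup):

import os

def cache_path(cache_root, filename):
    # Bucket on the first character of the filename so no single
    # directory ends up holding thousands of entries.
    subdir = os.path.join(cache_root, filename[0].lower())
    os.makedirs(subdir, exist_ok=True)
    return os.path.join(subdir, filename)

# cache_path('/var/cache/snippets', 'about.html')
# -> /var/cache/snippets/a/about.html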
 
Using FAT on a server is just asking for trouble, of course, not to
mention that it makes maintaining security highly difficult.
 --
 Rik Wasmus