Posted by Steve on 07/27/06 10:15
On Thu, 27 Jul 2006 02:49:10 -0700, b007uk wrote:
> Hi all,
> I really need your advice.
> I have to store over a million files, 10-15 KB each, in one folder.
> The files are created by my PHP script; sometimes old files are
> deleted and new ones are written.
> So, basically, on every connection my script reads/deletes/writes files
> from/to that folder.
> Right now I have only around 300,000 files in that folder, and it feels
> like the script is getting slower. It does work at the
> moment, but I am not sure what will happen when there are over a million
> files there...
> Is there a limit on the number of files that can be stored in a folder?
> Would it be better for me to use MySQL? I am not sure how MySQL will
> cope with millions of writes/reads.
> What would you recommend?
> Thank you very much!
> P.S. I am running Linux, Fedora Core 3.
In a word... *you're crazy*!!! Look at the way files are stored under
Linux, with the different file systems. With ext2/3, god knows how many
levels of indirection you'll be going through to even manage to index
the directory.
You need to do a lot of reading, a lot of customization, and a load of
benchmarking to get this to work. And, tbh, I'd find another solution.
There must be a way to subdivide this data to get an acceptable number of
files (thousands or less!!!) in each directory; see the sketch below.
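For example, one common trick is to hash each filename and use the first few hex characters as nested subdirectory names. Here's a minimal sketch in PHP; the `path_for()` helper and the `/var/data` base path are just illustrative names, not anything from the OP's setup, and it assumes PHP 5 for the recursive mkdir and file_put_contents:

<?php
// Map a filename into a hashed two-level directory layout.
// md5() gives a 32-char hex string; two levels of 2 hex chars
// each = 256 * 256 = 65536 buckets, so a million files averages
// roughly 15 per directory instead of one giant flat folder.
function path_for($filename, $base = '/var/data') {
    $hash = md5($filename);
    $dir  = $base . '/' . substr($hash, 0, 2)
                  . '/' . substr($hash, 2, 2);
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true);  // recursive mkdir needs PHP >= 5
    }
    return $dir . '/' . $filename;
}

// Usage: read/write/delete through the helper everywhere,
// instead of touching the flat folder directly.
file_put_contents(path_for('session_abc123.dat'), $payload);
$data = file_get_contents(path_for('session_abc123.dat'));
unlink(path_for('session_abc123.dat'));
?>

Because the path is derived from the filename itself, lookups need no index or database: the script recomputes the hash and goes straight to the right bucket, and each directory stays small enough that ext2/3 handles it without breaking a sweat.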
Steve