Posted by gosha bine on 06/30/07 10:57
Tyrone Slothrop wrote:
> I am writing a script to upload data to a delimited text file. The
> data is in the format:
> recordID \t the rest of the data ...
>
> It is essential that the recordIDs are unique and that the newly
> uploaded data replaces the old data in the event of duplicate
> recordIDs.
>
> The solution I came up with (trying to avoid having to loop through
> all of the records of both files looking for dupes) was to:
> * treat both files as arrays using file()
> * key each line of each array on an alpha prefix + recordID
> (prevents renumbering of the keys when ...)
> * run array_merge($old_data, $new_data)
> * run ksort() on the merged data to sort by recordID
> * implode the data on \n and write it to the data file
>
> This seems to be working as planned. However, I am really concerned
> that the old data will not be overwritten by the new data when
> running array_merge.
>
> Comments, suggestions, snark, flames? TIA!
>
> The (simplified) code:
> [snip]
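First, a reassurance: array_merge() behaves the way you hope. When two
elements share the same *string* key, the value from the later array
overwrites the earlier one; only purely numeric keys get renumbered and
appended, which is exactly what your alpha prefix guards against. As I
read it, your steps would look roughly like this (a sketch of the
approach you describe, not your snipped code; file names are invented):
//
function load_keyed($file) {
    $keyed = array();
    foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        list($id) = explode("\t", $line, 2);
        $keyed['r' . $id] = $line;   // alpha prefix keeps the key a string
    }
    return $keyed;
}
// on duplicate string keys, array_merge keeps the value from the second array
$merged = array_merge(load_keyed('old.txt'), load_keyed('new.txt'));
ksort($merged);   // note: string order, so 'r10' sorts before 'r2'
file_put_contents('data.txt', implode("\n", $merged) . "\n");
//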
I've looked into your problem and I think I can suggest simpler code:
//
function merge($old, $new) {
    // Tag every new record by inserting '~' between the recordID and the tab.
    $new = str_replace("\t", "~\t", $new);
    $all = array_merge($old, $new);
    // A string sort puts duplicate IDs on adjacent lines, old record first,
    // because "\t" sorts before '~'.
    sort($all);
    $all = implode("\n", $all);
    // Collapse each old/new pair into the new record: keep the ID ($1) and
    // the tail of the tagged line ($2), dropping the old line entirely.
    $all = preg_replace('/(\d+) .* \n \1 ~ (.*)/x', "$1$2", $all);
    // Strip the tags from new records that had no old counterpart.
    // (Caveat: this also removes any literal '~' inside the data.)
    $all = str_replace('~', '', $all);
    return $all;
}
# test -- the data must be tab-delimited, since merge() keys on "\t"
$a = array(
    "1\tbbb",
    "2\tccc",
    "7\tvvv",
    "5\tzzz",
);
$b = array(
    "3\tXXX",
    "2\tCCCC",
    "5\tZZZ",
    "9\tTTTT",
);
# records 2 and 5 come out with the new values CCCC and ZZZ
echo merge($a, $b);
//
Although I agree with Andrew that you should use a database for this.
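If you do go that way, here is a minimal sketch assuming SQLite through
PDO (table and file names invented). The primary key makes the
overwrite-on-duplicate behaviour automatic:
//
$db = new PDO('sqlite:records.db');
$db->exec('CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, data TEXT)');
// INSERT OR REPLACE overwrites the old row whenever the id already exists
$stmt = $db->prepare('INSERT OR REPLACE INTO records (id, data) VALUES (?, ?)');
foreach (file('new.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    list($id, $data) = explode("\t", $line, 2);
    $stmt->execute(array($id, $data));
}
//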
--
gosha bine
extended php parser ~ http://code.google.com/p/pihipi
blok ~ http://www.tagarga.com/blok