Re: Tricky Error

Posted by J Wynia on 05/10/05 19:56

dracolytch wrote:
> To be honest, I've already done this analysis. The SQL in the URLs are
> only the WHERE clauses. So, someone could go ahead and inject their own
> SQL to create their own custom WHERE clause. All the more power to 'em.

Most of the "classic" examples of SQL injection *are* in the WHERE
clause. Many target a specific column comparison within the WHERE
clause. One of those classic examples is:

WHERE email = 'someone@example.com';

which, when injected, can become

WHERE email = 'someone@example.com'; DROP TABLE members; --';
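For comparison, binding the value as a parameter sidesteps this entirely. A minimal sketch, assuming $db is an existing PDO connection (the table and column names are just for illustration):

```php
<?php
// Minimal sketch assuming $db is an existing PDO connection; the table
// and column names are just for illustration. The bound value is treated
// as data, so the quote and the DROP TABLE payload never become SQL.
$stmt = $db->prepare('SELECT id, name FROM members WHERE email = :email');
$stmt->execute(array(':email' => $_GET['email']));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```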

>
> This is not the kind of system where I'm doing my "Trust no input from
> the user" diligence, I'm just organizing available data in an
> accessable format.

Given that the solution I mentioned takes less than 15 minutes, it's
hard to justify *not* taking that level of diligence. Not passing
anything important (SQL, usernames, passwords, account information,
etc.) on the URL is my default level of diligence for any project I'm
putting on the web at large.
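Roughly, the idea looks like this (a fresh sketch with made-up names, not the exact code from my earlier post): keep the WHERE clause in a server-side file keyed by its md5(), and put only the 32-character hash on the URL.

```php
<?php
// Hypothetical sketch: store the WHERE clause server-side, keyed by its
// md5(), and pass only the hash token on the URL.
function store_where($where, $dir = '/tmp/wherecache') {
    $key = md5($where);
    file_put_contents("$dir/$key", $where);
    return $key; // this token goes on the URL, e.g. ?q=$key
}

function load_where($key, $dir = '/tmp/wherecache') {
    // Reject anything that isn't a plain 32-character hex hash.
    if (!preg_match('/^[a-f0-9]{32}$/', $key)) {
        return false;
    }
    $file = "$dir/$key";
    return file_exists($file) ? file_get_contents($file) : false;
}
```

The user can only hand back a token you generated, so there is no raw SQL to tamper with.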

>
> True, your form of caching would solve the given problem, but I think
> it may be over-solving the problem. It could also incur additional
> maintenance issues once the cache reaches a couple thousand entries.
>
> ~D
>
I actually use variations of the code I posted to do caching of a LOT of
things. In most instances, I also wrap a date check in there or a file
count that cleans up "all files created before $timestamp" or
"everything but the latest 100". If the cache is going to be big, I make
this a separate script and schedule it to run daily. It's such a simple
solution that I don't see how it's "over-solving" the problem.
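A cleanup pass of that sort is only a few lines. A rough sketch (the path, age, and count here are made up) that could be scheduled from cron:

```php
<?php
// Rough cleanup sketch: delete cache files older than a day, then keep
// only the newest 100 of whatever remains. Path, age, and count are
// made-up examples.
$dir = '/tmp/querycache';

foreach (glob("$dir/*") as $f) {
    if (filemtime($f) < time() - 86400) {
        unlink($f);
    }
}

// Keep only the latest 100 files.
function newest_first($a, $b) { return filemtime($b) - filemtime($a); }
$files = glob("$dir/*");
usort($files, 'newest_first');
foreach (array_slice($files, 100) as $f) {
    unlink($f);
}
```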

I also frequently do this sort of caching on result sets, RSS and other
remote XML, and so on. Many of my caches run well into the
10,000-50,000 file range and still perform quite well. In most cases it
has sped up the application in the process, since reading a specific
file is quicker than connecting to a database or making a remote
connection to fetch content. In most of my implementations, using
something similar to what I posted, I wrap the database fetch itself in
a cache check. It's still keyed on an md5() of the query, but instead
of storing the query, I store a serialized version of the array
returned from the database. Next to the file_exists() check I add a
quick test that the file is new enough, and delete it if it isn't.
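Put together, that wrapper looks roughly like this (run_query() is a hypothetical stand-in for the real database call):

```php
<?php
// Sketch of wrapping the fetch itself in a cache check. The key is an
// md5() of the query; the payload is a serialize()d copy of the result
// array; files older than $ttl seconds count as stale. run_query() is a
// hypothetical stand-in for the real database call.
function cached_query($sql, $ttl = 3600, $dir = '/tmp/querycache') {
    $file = $dir . '/' . md5($sql);
    if (file_exists($file) && filemtime($file) > time() - $ttl) {
        return unserialize(file_get_contents($file));
    }
    $rows = run_query($sql); // hypothetical database helper
    file_put_contents($file, serialize($rows));
    return $rows;
}
```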

As long as you either add cleanup code or know the churn rate of the
data is so low as to not be an issue, there is no maintenance. I see
literally hundreds of apps on Sourceforge, Hotscripts, etc. that are
making dozens of calls to databases on every page load for data that
changes twice a year. That's a complete waste of database time.
