Posted by Rasmus Lerdorf on 03/25/05 09:34
Joshua Beall wrote:
> "Rasmus Lerdorf" <rasmus@lerdorf.com> wrote in message
> news:42435332.70807@lerdorf.com...
>
>>Joshua Beall wrote:
>>
>>>I am doing some work where I want to do locking, and prevent scripts from
>>>running in parallel. I see that I could use the semaphore mechanism, but
>>>I'd like for my code to be portable, and that extension is not enabled in
>>>many places.
>>
>>Sort of defeats the whole concept of a web server, but to answer just your
>>process id question, use getmypid()
>
>
> http://php.net/manual/en/function.getmypid.php
>
> It says "Process IDs are not unique"
>
> I really only need it to be unique at any given instant. I can do
> sha1(microtime().getmypid()) to generate a unique ID. But of course it is
> only guaranteed to be unique if indeed the process ID is not shared.
Pids are not unique over time; they get re-used. But at any instant on
a single server, a pid is unique.
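For what it's worth, the scheme described above fits in one line. This is
just a sketch of the idea from the thread: the pid is unique at any
instant on one server, and microtime() separates later reuses of the same
pid, so the combined hash is effectively unique per request there. It does
not guard against collisions across multiple servers.

```php
<?php
// Combine the current microsecond timestamp with the process id and
// hash the result.  sha1() returns a 40-character hex digest.
$id = sha1(microtime() . getmypid());
echo $id . "\n";
```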
> The problem I am having is that people are double-submitting certain
> transactions. My first attempt to prevent this was to store a flag in the
> session record indicating whether or not certain transactions had been
> completed, but this turned out to be insufficient at times because users
> could try and initiate a second transaction before the first transaction had
> finished (and thus the system had not yet flagged the transaction completed
> in the session record). They then both completed in parallel, and voila,
> duplicate transactions again.
But a double-submit is likely to come from separate Apache processes, so
I don't see where the pid comes into the picture. If I reload a page
and resend the post data, that POST request is going to be processed a
second time, most likely by a different httpd process. What you need to
do is put a unique token (you can use the uniqid() function) in the actual
transaction data and not allow the transaction if that token is already
present in your datastore.
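A minimal sketch of that token approach, assuming a datastore table
(here called used_tokens, with a UNIQUE constraint on the token column;
both names are illustrative, not from the thread). The form embeds a
uniqid() token in a hidden field; on POST, the handler inserts the token
before running the transaction, so a second submit of the same form fails
the insert no matter which httpd process handles it:

```php
<?php
// Render step: embed a one-time token in the form, e.g.
//   <input type="hidden" name="token"
//          value="<?php echo htmlspecialchars(uniqid('', true)); ?>">

// Process step: consume the token, then run the transaction at most once.
function run_once(PDO $db, string $token, callable $transaction): bool {
    try {
        // used_tokens.token is assumed to carry a UNIQUE constraint, so
        // the second insert of the same token throws.
        $stmt = $db->prepare('INSERT INTO used_tokens (token) VALUES (?)');
        $stmt->execute([$token]);
    } catch (PDOException $e) {
        // Duplicate token: this is a double submit -- refuse to re-run.
        return false;
    }
    $transaction();   // first (and only) time this token is seen
    return true;
}
```

The same pattern works with any backend that can enforce uniqueness
atomically; the database's constraint, not PHP, is what makes the check
race-free across parallel httpd processes.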
-Rasmus