Posted by yoko on 03/02/07 23:33
Different URLs. What I mean is that it only works with blogs that have
comments. That is how the thing works.
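Presumably they look for a comment feed advertised in the page's head;
that is standard "feed autodiscovery". A minimal PHP sketch of the idea
(the URL is just a placeholder, and the real service surely does more):

<?php
// A guess at how a tracker could find the comment feed for a post:
// blogs advertise their feeds with tags like
//   <link rel="alternate" type="application/rss+xml" href="...">
// in the page head. If a page has no such tag there is nothing to
// track, which would explain why it only works with blogs that
// have comments.
$url  = 'http://example.com/some-blog-post/'; // placeholder URL
$html = file_get_contents($url);

$doc = new DOMDocument();
@$doc->loadHTML($html); // @ because real-world HTML is rarely valid

foreach ($doc->getElementsByTagName('link') as $link) {
    if ($link->getAttribute('rel') === 'alternate'
            && strpos($link->getAttribute('type'), 'xml') !== false) {
        echo $link->getAttribute('href') . "\n"; // candidate feed URL
    }
}
?>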
Thanks...
Hello shimmyshack,
> On 2 Mar, 17:05, yoko <n...@na.ca> wrote:
>
>> But how are they determining the feed for that URL?
>>
>> http://torrentfreak.com/interview-with-bram-cohen-the-inventor-of-bit...
>>
>> Here is another link as well. How did they find the RSS for this?
>> How did they do it?
>> http://www.digg.com/security/Department_of_Homeland_Security_requires...
>>
>> I installed both of those components you told me about for Firefox;
>> very cool.
>>
>> If I can get the RSS data it's pretty easy to strip it down. Also,
>> how do they determine different URLs?
>>
>> Thanks..
>> Hello shimmyshack,
>>> On 1 Mar, 21:26, yoko <n...@na.ca> wrote:
>>>
>>>> If you go here
>>>>
>>>> http://co.mments.com/
>>>>
>>>> Enter the following URL to track:
>>>> http://torrentfreak.com/interview-with-bram-cohen-the-inventor-of-bit...
>>>>
>>>> Click track
>>>>
>>>> Now you will see it added to the page. How do they strip the page
>>>> to display it like that?
>>>>
>>>> Please let me know if there are any examples out there, or
>>>> ready-made PHP code to do this.
>>>>
>>>> Thanks..
>>>>
>>> Get Firefox and install the "Firebug" and "Web Developer" add-ons.
>>> Then you can see the requests being made behind the scenes.
>>> The browser posts the URL to http://co.mments.com/track/track.
>>> A script on the co.mments.com server then retrieves the XML feed
>>> corresponding to that URL, and uses something like the SimpleXML
>>> PHP library to grab the title, body and so on from that feed, using
>>> methods similar to DOM methods within JavaScript.
>>> The results are then escaped, mixed with HTML, and finally inserted
>>> into the DOM within the browser. As Benjamin says, it's a process
>>> referred to as AJAX: the real work is done server side, with a
>>> JavaScript class inside the browser controlling the result in the
>>> browser. CSS finally controls the look and feel.
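>>> As a rough sketch, and assuming the feed URL is already known
>>> (their real script is not public, so this only illustrates the
>>> fetch / parse / escape steps described above):
>>>
>>> <?php
>>> // Sketch of the server-side half: fetch the feed, pull out the
>>> // interesting bits with SimpleXML, escape them, mix with HTML.
>>> if (!isset($_POST['feed'])) {
>>>     exit('no feed given');
>>> }
>>> // In real code you would validate this URL before fetching it!
>>> $feed = simplexml_load_file($_POST['feed']);
>>> if ($feed === false) {
>>>     exit('could not fetch or parse feed');
>>> }
>>>
>>> $out = '';
>>> foreach ($feed->channel->item as $item) {
>>>     // escape everything before mixing it with our own markup
>>>     $out .= '<h3>' . htmlspecialchars((string) $item->title) . '</h3>';
>>>     $out .= '<p>' . htmlspecialchars((string) $item->description) . '</p>';
>>> }
>>> echo $out; // the JavaScript side inserts this into the page's DOM
>>> ?>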
>>> I guess your next step depends on what you meant by the question!!
>>> If you meant HUWHAT HOW??? do they do that, then you're in for a
>>> real learning curve. If you meant how do they strip the HTML and
>>> make it look pretty, the answer is RSS feeds: as well as the
>>> browser version of a page, there is often a more tightly
>>> controlled, cut-down version intended to be shared between
>>> computers, and that one is easy to parse for re-display. RSS is a
>>> much more accessible technology; just right-click on an RSS symbol,
>>> download the feed, open it in an editor, change stuff in there and
>>> make it your own.
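>>> To make that concrete, here is roughly what a downloaded feed looks
>>> like when you open it in an editor, cut down to the essentials (a
>>> toy feed, not a real one), plus the little PHP it takes to read it:
>>>
>>> <?php
>>> // A cut-down RSS document, embedded as a string for the example:
>>> $rss = <<<XML
>>> <?xml version="1.0"?>
>>> <rss version="2.0">
>>>   <channel>
>>>     <title>Some Blog</title>
>>>     <item>
>>>       <title>First post</title>
>>>       <link>http://example.com/first-post</link>
>>>       <description>Easy to parse for re-display.</description>
>>>     </item>
>>>   </channel>
>>> </rss>
>>> XML;
>>>
>>> // ...which is exactly why it is so easy to parse for re-display:
>>> $feed = simplexml_load_string($rss);
>>> echo $feed->channel->item[0]->title; // prints "First post"
>>> ?>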
> I would imagine that Digg - not a site I use - would have an API to
> its stories once you get an account. The "blog it" link is
> interesting; you have to have an account, as I say, but once you've
> logged on I guess the RSS is available. (I haven't checked that,
> though!) It is actually possible to parse entire pages, because
> remember the markup of an XHTML page should be XML compliant, and
> with a large site like Digg things don't change too much, so you can
> spare the time to tweak your code to match any small changes they
> might make. However, I should imagine it is done using an XML
> link-up, with some kind of API or agreement with Digg.
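> If you did go the parse-the-whole-page route, it might look something
> like this sketch; the class name 'news-summary' is an assumption -
> you would tweak it to match the site's real markup, and re-tweak
> whenever the site changes:
>
> <?php
> // Sketch of parsing a whole page rather than a feed.
> $page = file_get_contents('http://www.digg.com/');
>
> $doc = new DOMDocument();
> // If the page really is XML-compliant XHTML a strict XML parser
> // would do; libxml's HTML mode is more forgiving, so use that.
> @$doc->loadHTML($page);
>
> // The XPath query is made up; match it to the actual markup.
> $xpath = new DOMXPath($doc);
> foreach ($xpath->query("//div[@class='news-summary']//a") as $a) {
>     echo $a->textContent . ' => ' . $a->getAttribute('href') . "\n";
> }
> ?>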
>
> Determining different URLs? Can you explain that a bit? cyl.
>