.NET Remoting on port 80 -- reasonable?

    Date: 12/29/07 (C Sharp)    Keywords: software, xml, database, asp, web, linux, hosting, microsoft

    All,

    As anyone who has tried to make a logically partitioned/layered OO schema work between .NET client applications and WebServices knows, it's simply not possible without going far out of your way to do exactly the kind of extra work that elegant OO inheritance/extension was supposed to alleviate in the first place. When I am especially irked about this issue, I construct job interviews with the following two questions, spaced one after the other:

    n) Do you consider yourself a fully object-oriented developer, familiar with the concepts of inheritance, extension, abstract classes, and interfaces, and able both to understand and to work with the boundaries/"layers"/"tiers" between classes for a given software project (e.g. Database/Business Objects/User Interface)?

    m) If so, what do you consider the most effective way to transmit an instance or instances of a business object from one .NET application to another (either client -> server or server -> client)?

    The answers are invariably n) Absolutely, yes, I am God's gift to rational software development; and m) [whirr-clunk as Microsoft gears engage within brain] Using disconnected System.Data.DataSets since they're already XmlSerializable!

    Always gives me a laugh. Never mind that employing such an approach basically necessitates the creation of a FOURTH boundary/"layer"/"tier" -- call it Middleware, or, specifically in this hideous world, Serialization/Deserialization. A sketch of what that extra tier looks like follows.
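    To make the point concrete, here is a minimal sketch of the boilerplate the DataSet answer imposes; the Customer class and its columns are hypothetical, purely for illustration. Every typed business object must be flattened into rows on the way out and hand-reconstructed on the way in, shedding all of its behavior in transit.

        using System.Data;

        // Hypothetical business object -- names are illustrative only.
        public class Customer
        {
            public int Id;
            public string Name;
        }

        public static class CustomerSerialization
        {
            // The "fourth tier", outbound: flatten the typed object into rows.
            public static DataSet ToDataSet(Customer c)
            {
                DataSet ds = new DataSet("CustomerData");
                DataTable t = ds.Tables.Add("Customer");
                t.Columns.Add("Id", typeof(int));
                t.Columns.Add("Name", typeof(string));
                t.Rows.Add(c.Id, c.Name);
                return ds;
            }

            // The "fourth tier", inbound: hand-reconstruct the object. Methods,
            // validation, and inheritance were all lost on the wire.
            public static Customer FromDataSet(DataSet ds)
            {
                DataRow row = ds.Tables["Customer"].Rows[0];
                Customer c = new Customer();
                c.Id = (int)row["Id"];
                c.Name = (string)row["Name"];
                return c;
            }
        }

    Multiply that by every class in the business layer and the appeal wears off quickly.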

    Anyway, on to my question!

    Due to the headaches raised above, and the deeper object replication that .NET Remoting provides (i.e. regardless of what is actually transmitted down the wire, you get back a complete, functional, fully type-specific object with its public/private members/properties/methods intact, with NO Reference.cs or other whiz-bang class-redefinition machinery thrown in your face), I come upon a quandary.
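    For reference, a minimal sketch of that replication; the Invoice and InvoiceService types, URL, and port are hypothetical. Because both ends reference the same shared assembly, a [Serializable] class travels by value over a TcpChannel and arrives as the genuine type, private state and methods included.

        using System;
        using System.Runtime.Remoting;
        using System.Runtime.Remoting.Channels;
        using System.Runtime.Remoting.Channels.Tcp;

        // Shared assembly, referenced by both server and client.
        [Serializable]
        public class Invoice
        {
            private decimal total;               // private state survives the trip
            public Invoice(decimal total) { this.total = total; }
            public decimal TotalWithTax() { return total * 1.08m; }  // behavior, too
        }

        // Server-activated object that hands out Invoices by value.
        public class InvoiceService : MarshalByRefObject
        {
            public Invoice GetInvoice() { return new Invoice(100m); }
        }

        public class Server
        {
            public static void Run()   // call this in the server process
            {
                // Port 8080 is arbitrary for this example.
                ChannelServices.RegisterChannel(new TcpChannel(8080), false);
                RemotingConfiguration.RegisterWellKnownServiceType(
                    typeof(InvoiceService), "InvoiceService",
                    WellKnownObjectMode.Singleton);
            }
        }

        public class Client
        {
            public static void Run()   // call this in the client process
            {
                InvoiceService svc = (InvoiceService)Activator.GetObject(
                    typeof(InvoiceService), "tcp://server:8080/InvoiceService");
                Invoice inv = svc.GetInvoice();    // a real Invoice, no Reference.cs
                Console.WriteLine(inv.TotalWithTax());
            }
        }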

    In my mind, at least up to the present moment, .NET Remoting is great for applications over which you, your company, or a trusted partner will maintain general operational control for the lifecycle of said application. In that kind of organizational scenario, routers and firewalls are generally surmountable, or at least negotiable, obstacles, since you know exactly which machine(s) or subnet(s) certain TCP packets will be sent from, exactly which machine(s) or subnet(s) they will be sent to, and on which TCP port(s).

    .NET WebServices, on the other hand, are a better choice for software with an eventual goal of wider distribution to CUSTOMERS rather than colleagues or partners -- entities whose firewalls you do not control, and who will regard even something as small as a question about their firewall's configuration with respect to your application as reflecting an incredibly unprofessional design. In other words, it's not much of a limit to place on your customers that they must be able to transmit HTTP over port 80 and/or HTTPS over port 443.
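    For contrast with the Remoting sketch above, here is what the WebService side of the same idea looks like (an ASMX sketch; the Order type and namespace are hypothetical). The service can declare that it returns a business object, but the proxy generator ("Add Web Reference" / wsdl.exe) re-emits that type on the client as a bare Reference.cs data shell: public data only, no methods.

        using System.Web.Services;

        public class Order
        {
            public int Id;
            public decimal Total;

            // This method never reaches the client: XML serialization carries
            // only the public data, and the generated proxy defines its own
            // 'Order' class containing the fields alone.
            public decimal TotalWithTax() { return Total * 1.08m; }
        }

        [WebService(Namespace = "http://example.com/orders")]
        public class OrderService : WebService
        {
            [WebMethod]
            public Order GetOrder(int id)
            {
                Order o = new Order();
                o.Id = id;
                o.Total = 100m;
                return o;
            }
        }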

    But, as I finish up the previous four hours of banging my head against various WebService-related walls while attempting to get class instances (or even just their public data -- I know how WebServices work, and I understand it's that way for good reason) to replicate, I have to wonder: what kinds of problems might be lurking in, say, a widespread commercial application, some of whose clients employ .NET Remoting to communicate with their remote servers over TCP port 80? I don't see any particular *TECHNICAL* problems that are insurmountable -- obviously, as usage scales, load balancing will be more of a hassle compared to, say, turning on IIS clustering and walking away. But that problem is entirely out of scope for me -- if I get that far with the subject of this question and it's my biggest problem, I will be very happy.
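    For what it's worth, moving the Remoting channel to port 80 is a one-line change to the earlier (hypothetical) server sketch -- with the caveat, noted in the comments, that the result is still the binary Remoting protocol, merely parked on an HTTP port:

        using System.Runtime.Remoting;
        using System.Runtime.Remoting.Channels;
        using System.Runtime.Remoting.Channels.Tcp;

        public class Port80Server
        {
            public static void Run()
            {
                // Same binary wire protocol as before, now listening on 80.
                // This is raw TCP that happens to use an HTTP port; any proxy
                // or filter that genuinely inspects for HTTP semantics would
                // reject it. (An HttpChannel with a binary formatter sink is
                // the usual alternative when real HTTP framing is required.)
                ChannelServices.RegisterChannel(new TcpChannel(80), false);
                RemotingConfiguration.RegisterWellKnownServiceType(
                    typeof(InvoiceService), "InvoiceService",
                    WellKnownObjectMode.Singleton);
            }
        }

        // The client URL then becomes: tcp://server:80/InvoiceService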

    Anyone consider or try this in a relatively large/Enterprise-level "ASP" environment (in the term's original sense, that is, an Application Service Provider hosting the requests of potentially hundreds or thousands or more clients)? I pulled the better part of a decade as a systems/network admin and general TCP/IP geek (UNIX/Linux), so I know that, at least as of ~2003, you couldn't tell your router (unless it had more processors than most mainstream servers in 2007) to allow out on port 80 ONLY traffic fitting the profile of HTTP requests. So I don't really see any technical problems.

    However, any feedback is welcome. Thanks for reading this -- my fingers tend to run away from me. :)

    Source: http://community.livejournal.com/csharp/90393.html
