1. Question regarding Collations

    Date: 06/08/12 (SQL Server)    Keywords: database, sql

    Does anyone in this group have experience working with Unicode, double-byte, case-sensitive data in SQL Server 2008 R2?

    I would like to select a collation for my database that allows case-sensitive sorting/comparisons with Unicode data that could contain Japanese characters.  Whew...that's hard to say.
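    For anyone who wants to experiment, a minimal sketch: apply a case- and accent-sensitive Japanese collation (Japanese_CS_AS, or one of the _100 collations added in SQL Server 2008 such as Japanese_XJIS_100_CS_AS) at the database level. The database and table names below are placeholders.

    CREATE DATABASE CollationTest COLLATE Japanese_XJIS_100_CS_AS;
    GO

    USE CollationTest;
    GO

    -- nvarchar columns created without an explicit COLLATE clause inherit
    -- the database collation, so comparisons and sorts are case-sensitive.
    CREATE TABLE dbo.Names (Name nvarchar(100) NOT NULL);
    INSERT INTO dbo.Names VALUES (N'smith'), (N'Smith'), (N'スミス');
    SELECT Name FROM dbo.Names WHERE Name = N'Smith';  -- matches 'Smith' only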

    Source: https://sqlserver.livejournal.com/76773.html

  2. Tracking Database Growth

    Date: 10/10/11 (SQL Server)    Keywords: database, sql

    I came across this article when doing some more research on documenting database growth over time.  It worked really well for me.

    Thank you vyaskn@hotmail.com!

    In this article I am going to explain how to track file growth, especially for database files. First of all, why is it important to track database file growth? Tracking file growth helps you understand the rate at which your database is growing, so that you can plan ahead for your future storage needs. It is better to plan ahead than to be caught running around when you run out of disk space, isn't it? So, how can we track file growth? There are a couple of ways.

    The first approach:
    SQL Server BACKUP and RESTORE commands store the backup and restore history in the msdb database. In this approach, I am going to use the tables backupset and backupfile from msdb to calculate file growth percentages. Whenever you back up a database, the BACKUP command inserts a row into the backupset table and one row for every file in the backed-up database into the backupfile table, along with the size of each file. I am going to take the file sizes recorded by the BACKUP command, compare them with the previous sizes, and come up with the percentage of file growth. This approach assumes that you do full database backups periodically, at regular intervals.
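    As a rough illustration of the idea, a query like this (a sketch against the documented msdb tables; the growth-percentage comparison between consecutive rows is left out for brevity) lists the file sizes recorded by each full backup:

    -- File sizes recorded by full database backups (type = 'D').
    SELECT bs.database_name,
           bf.logical_name,
           bs.backup_start_date,
           bf.file_size / 1048576.0 AS size_mb
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupfile AS bf ON bf.backup_set_id = bs.backup_set_id
    WHERE bs.type = 'D'
    ORDER BY bs.database_name, bf.logical_name, bs.backup_start_date;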


    The procedure sp_track_db_growth is available for download in the original article. ...

    **Please use free code responsibly.  Test and verify before deploying to production!


    Source: https://sqlserver.livejournal.com/75793.html

  3. DBA Position Open in North Texas

    Date: 07/20/11 (SQL Server)    Keywords: technology, database, sql


    Requirements:

    • 1-2 years of total database experience required (preferably Oracle, Sybase, or SQL Server)
    • Experience with Windows Server operating systems
    • Experience with creating SQL scripts and setting up typical database maintenance jobs
    • Experience working with development teams

    The Database Administrator I will participate in departmental projects by assisting in the development of project plans and documentation and by performing project tasks. In support of the other DBAs, he/she will perform daily maintenance tasks and participate in DBA on-call duty. The DBA I position will be responsible for installing and maintaining database technology in a multi-platform, mission-critical environment.

    CONTACT:
    Jennifer Toal
    Research Analyst
    COMTEK-Group
    972-792-1045 Office
    972-467-2901 Mobile
    972-644-6602 Fax

    Source: https://sqlserver.livejournal.com/75585.html

  4. Production SQL DBA Opening in North Texas

    Date: 06/02/11 (SQL Server)    Keywords: database, asp, sql, security, microsoft

    Passing this along for a friend... If you know anyone who's looking, please let me know. Pay terms seem to be a little higher than normal for that many years of experience.

    Responsibilities:

    • Installation, configuration, customization, maintenance and performance tuning of SQL Server 2005 & 2008 including SSIS, SSAS and SSRS.
    • SQL version migration, patching and security management.
    • Monitor database server capacity/performance and make infrastructure and architecture recommendations to management for necessary changes/updates.
    • Perform database optimization, administration and maintenance (partitioning tables, partitioning indexes, indexing, normalization, synchronization, job monitoring, etc).
    • Manage all aspects of database operations including implementation of database monitoring tools, event monitoring, diagnostic analysis, performance optimization routines and top-tier support for resolving support issues.
    • Work with internal IT operations teams to troubleshoot network and server issues and optimize the database environment.
    • Establish and enforce database change management standards, including pushes from development to QA and on to production.
    • Proactively stay current with the latest technologies and industry best practices associated with the position and responsibilities.
    • Provide development and production support to troubleshoot day-to-day database or related application issues.
    • Develop, implement and verify processes for system monitoring, storage management, backup and recovery.
    • Develop, implement and verify database backup and disaster recovery strategies.
    • Design and implement all database security to ensure integrity and consistency among the various database regions.
    • Develop and maintain documentation of the production environment.
    • Manage SLAs and strict adherence to production controls - Sarbanes-Oxley (SOX) monitored via external audits
    Necessary Qualifications:
    • Must have experience with SQL Server 2005.
    • Good exposure to installation and configuration of database clusters, replication, log shipping, and mirroring.
    • Expertise in troubleshooting and performance monitoring of SQL Server (query tuning, server tuning, disk performance monitoring, memory pressure, CPU bottlenecks, etc.).
    • Expertise in T-SQL and writing efficient, highly performing SQL statements.
    • Expertise in SQL Server internals, wait events, Profiler, Windows events, etc.
    • Must have an understanding of key infrastructure technologies such as clustering, SAN storage, virtualization, cloud services, etc.

    Other nice to have experience:
    • System administration fundamentals including Installation, Configuration & Security setups.
    • Experience with SQL 2008 a plus.
    • Experienced in architecting high availability, business resumption and disaster recovery solutions
    • Microsoft SQL Server DBA Certification
    • Experience with SCOM/SCCM/SCSM is a plus
    • Extremely self-motivated, with the ability to work within a globally dispersed team.
    Desired Skills:
    • Data Warehouse experience
    • VLDB experience highly desired
    • Experience with databases > 5 TB, processing 2 million+ rows of data daily
    • MS SQL Server 2005 Transact-SQL (T-SQL)
    • Stored procedure development
    • Communication skills; works well with the team and within team processes
    • Database and file size and space forecasting ability
    • Ability to manage a complex database system and assist the client with Database Integration for Future Business Intelligence efforts
    • Confio Ignite Performance
    Education & Work Experience:
    • Bachelor's degree in Computer Science, Business Administration or other
    • 10+ years of experience as a Database Administrator

    Source: https://sqlserver.livejournal.com/75423.html

  5. Starting with the Specs: Building Solid Code Review Procedure

    Date: 05/06/11 (SQL Server)    Keywords: database, security


    In our last entry, we introduced the concept of code review procedures.  The first topic to consider in this life cycle is for the developer to take some time to understand the business requirements and functional context.  In a perfect world these two critical tasks would be understood by all DBAs in the SDLC of database code, but the developer has a unique opportunity to let his/her code communicate these requirements and context through coding best practices and adequate documentation.  Some items a developer or a peer reviewer can look for in performing these two steps are the following:

    Satisfying Business Requirements & Functional Context

    • Has a knowledgeable user been consulted during the planning/architecture phase of code creation?

    • Did the architect make specifications for future growth and change needs of the application?

    • Has the developer reviewed the business requirements?

    • Do the developer and the business have the same understanding for required performance of the application?

    • Does the reviewer understand the code being reviewed?

    • Does your code adhere to corporate coding specifications? (Yes, this is a business requirement, too.)

    • At what layer in your business environment does the code execute?

    • Does the piece of code functionally achieve the stakeholder's need as documented in the project charter?

    • What data size and volume does this code work with?

    • What are the data archival requirements?

    • Have company security policies been complied with?

    • How will the application or change be installed and configured?

    • By what method will the development team preserve and version the code and objects affected?

    ( Thanks to [info]adina_atl for assisting with the checklist )

    Source: https://sqlserver.livejournal.com/74884.html

  6. Datafile Growth in SQL Server - Getting the Statistics Part I

    Date: 03/10/11 (SQL Server)    Keywords: database

    We create a database called ADMIN which stores our administrative information, such as file space statistics. We use a combination of extended stored procedures and publicly available code to log these statistics. Here is a sample:

    /* Get current space statistics. You can run this and store the results in a holding table. */

    CREATE PROCEDURE [dbo].[sp_file_space] @server_name sysname, @id int
    as

    declare @cmd varchar(700)

    set nocount on

    -- Build a per-database query; sp_MSforeachdb substitutes each database
    -- name for [?]. In sysfiles.status, bit 0x100000 means growth is a
    -- percentage rather than a count of 8 KB pages, and bit 0x40 marks a
    -- log file.
    select @cmd = 'use [?]
    select ' + convert(varchar, @id) + ', ''' + rtrim(@server_name) + ''', db_name(),
        logical_name = name,
        fileid,
        drive = upper(substring(filename, 1, 1)),
        filename,
        filegroup = filegroup_name(groupid),
        size_in_KB = size * 8,
        maxsize_in_KB = (case maxsize when -1 then 0 else maxsize * 8 end),
        growth = (case status & 0x100000 when 0x100000 then growth else growth * 8 end),
        KB_growth_flag = (case status & 0x100000 when 0x100000 then 0 else 1 end),
        usage = (case status & 0x40 when 0x40 then ''Log Only'' else ''Data Only'' end)
    from sysfiles
    order by fileid'

    exec sp_MSforeachdb @command1 = @cmd

    return 0
    GO
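    A hypothetical holding table and call might look like this (the table name and column types are illustrative, not from the original article); INSERT ... EXEC captures every per-database result set the procedure returns:

    CREATE TABLE dbo.file_space_history (
        id             int,
        server_name    sysname,
        db_name        sysname,
        logical_name   sysname,
        fileid         smallint,
        drive          char(1),
        filename       nvarchar(260),
        filegroup      sysname NULL,
        size_in_KB     int,
        maxsize_in_KB  int,
        growth         int,
        KB_growth_flag int,
        usage          varchar(9)
    )
    GO

    INSERT INTO dbo.file_space_history
    EXEC dbo.sp_file_space @server_name = 'MYSERVER', @id = 1
    GO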

    ** Please be responsible with free code. Test and check before implementing in a production environment

    Source: https://sqlserver.livejournal.com/73512.html

  7. How do you track datafile growth?

    Date: 03/09/11 (SQL Server)    Keywords: database, sql

    Here's a good question for data environments today.  What methods do you employ to track datafile growth in your SQL Server databases?  Do you use a third-party tool, or do you have a home-brewed method?  I'll share my method once we've read about others' ideas.

    Source: https://sqlserver.livejournal.com/73430.html

  8. Need a hack for changing default db

    Date: 11/08/10 (SQL Server)    Keywords: database, security

    I have a user who locked himself out of a database server because his default database went into suspect mode.  His security policy was nice enough to bar anyone in the Windows Administrators group from logging in to the server.  He says he can't remember the two passwords for the administrative logins currently assigned to the System Administrator role on the server.  Any hope here?  I think he's screwed, personally.

    Source: https://sqlserver.livejournal.com/72656.html

  9. Category theory and query optimization

    Date: 05/23/08 (Algorithms)    Keywords: database, google

    Almost from the moment I got acquainted with category theory, it has seemed to me like a perfect tool for formalizing database queries and their properties and, consequently, for building optimization frameworks. I am considering this for my thesis, but I can't find any work on applying category theory to query optimization, and this frightens me a bit: it looks as though someone long ago proved that this is impossible (why would it be?), and now no one even tries.

    Could you give me pointers to some work on the topic? I've googled the hell out of the internet; I found some work on formalizing data models with CT, but nothing on optimization specifically.

    Source: https://algorithms.livejournal.com/99763.html

  10. Efficient full-text searches on large sets of data

    Date: 10/15/10 (MySQL Community)    Keywords: mysql, database, sql

    A database application I've written uses a table with around 600,000 rows. Each row has a text field 500-5,000 characters long. I periodically need to find all the rows containing a particular phone number, name, or address, e.g. '123-4567', 'john smith', '1950 Main St N'.

    I'm doing this using
    SELECT * FROM `tb_archive` WHERE `text` LIKE '%john smith%' ORDER BY `date` DESC

    The problem is that it is too slow. Most searches take 30-60 seconds. If multiple searches are done, the server response begins to slow to a crawl for other users.

    I've looked at MySQL's built-in full-text indexing, but I'm not sure it can work, since I only need exact matches, don't care about relevance, and often search for numbers and short words.

    Any suggestions as to how I can do this more efficiently?
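    One direction worth testing (a sketch only; MySQL's full-text indexing required MyISAM tables in the 5.0/5.1 era): boolean-mode searches support exact-phrase matching with double quotes, which sidesteps relevance ranking.

    -- Caveats: ft_min_word_len defaults to 4, so short tokens like "123"
    -- are not indexed unless it is lowered, and the tokenizer splits
    -- "123-4567" at the hyphen.
    ALTER TABLE tb_archive ADD FULLTEXT INDEX ft_text (`text`);

    SELECT *
    FROM tb_archive
    WHERE MATCH(`text`) AGAINST ('"john smith"' IN BOOLEAN MODE)
    ORDER BY `date` DESC;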

    Source: https://mysql.livejournal.com/138390.html

  11. Database in SVN

    Date: 10/09/09 (MySQL Community)    Keywords: mysql, database, sql

    Does anyone know how to store a MySQL database schema (and maybe data from some tables, ideally chosen based on table structure) in SVN?
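    One common home-grown approach (a sketch; database and table names are placeholders) is to let mysqldump produce text files and keep those under version control:

    # Dump the schema only (tables plus stored routines).
    mysqldump --no-data --routines mydb > schema.sql

    # Optionally dump data for selected reference tables.
    mysqldump mydb lookup_states lookup_roles > reference_data.sql

    svn add schema.sql reference_data.sql
    svn commit -m "Snapshot of database schema and reference data"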

    Source: https://mysql.livejournal.com/137943.html

  12. Effective coding

    Date: 09/22/09 (MySQL Community)    Keywords: mysql, database, sql

    I found an instruction set that said to list the first and last names of all employees that had neither 'SON' nor 'DAUGHTER' listed in their dependency files. I came up with a query that gave me the results I wanted by assuming the database only takes SPOUSE, SON, and DAUGHTER, but assuming has always caused me trouble, so I would like to know: is there a better way I could have approached this so that it doesn't rest on a general assumption like a dep_relationship count of 1, and so forth?



    
    mysql> SELECT * FROM dependent;
    +-------------+------------+------------+-------------------+------------------+
    | dep_emp_ssn | dep_name   | dep_gender | dep_date_of_birth | dep_relationship |
    +-------------+------------+------------+-------------------+------------------+
    | 999444444   | Jo Ellen   | F          | 1996-04-05        | DAUGHTER         | 
    | 999444444   | Andrew     | M          | 1998-10-25        | SON              | 
    | 999444444   | Susan      | F          | 1975-05-03        | SPOUSE           | 
    | 999555555   | Allen      | M          | 1968-02-29        | SPOUSE           | 
    | 999111111   | Jeffery    | M          | 1978-01-01        | SON              | 
    | 999111111   | Deanna     | F          | 1978-12-31        | DAUGHTER         | 
    | 999111111   | Mary Ellen | F          | 1957-05-05        | SPOUSE           | 
    +-------------+------------+------------+-------------------+------------------+
    
    
    mysql> SELECT * FROM employee;
    +-----------+---------------+----------------+
    | emp_ssn   | emp_last_name | emp_first_name |
    +-----------+---------------+----------------+
    | 999666666 | Bordoloi      | Bijoy          | 
    | 999555555 | Joyner        | Suzanne        | 
    | 999444444 | Zhu           | Waiman         | 
    | 999887777 | Markis        | Marcia         | 
    | 999222222 | Amin          | Hyder          | 
    | 999111111 | Bock          | Douglas        | 
    | 999333333 | Joshi         | Dinesh         | 
    | 999888888 | Prescott      | Sherri         | 
    +-----------+---------------+----------------+
    
    
    
    What I generated was:
    
    SELECT emp_first_name, emp_last_name 
    FROM 
    	(
    	SELECT emp_first_name, emp_last_name,
    	dep_relationship, COUNT(dep_relationship) AS count 
    	FROM dependent 
    	JOIN employee ON dep_emp_ssn = emp_ssn 
    	GROUP BY emp_ssn
    	) tempTable 
    WHERE tempTable.count = 1 
    AND tempTable.dep_relationship != 'SON' 
    AND tempTable.dep_relationship != 'DAUGHTER';
    
    To get:
    
    +----------------+---------------+
    | emp_first_name | emp_last_name |
    +----------------+---------------+
    | Suzanne        | Joyner        | 
    +----------------+---------------+
    



    So really, I covered the "SON nor DAUGHTER" case, but it's the count = 1 that I feel could be "bettered," so to speak.
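    One assumption-free alternative (a sketch using the tables above) is to state the requirement directly with EXISTS/NOT EXISTS instead of counting rows:

    SELECT e.emp_first_name, e.emp_last_name
    FROM employee e
    WHERE EXISTS (SELECT 1 FROM dependent d
                  WHERE d.dep_emp_ssn = e.emp_ssn)
      AND NOT EXISTS (SELECT 1 FROM dependent d
                      WHERE d.dep_emp_ssn = e.emp_ssn
                        AND d.dep_relationship IN ('SON', 'DAUGHTER'));

    On the sample data this still returns only Suzanne Joyner, and it keeps working no matter which relationship values show up later.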


    I appreciate the feedback. ;]

    Source: https://mysql.livejournal.com/137636.html

  13. MySQL

    Date: 08/07/09 (MySQL Community)    Keywords: mysql, database, sql

    I am not entirely sure if this is the right place to ask, but I thought I'd give it a shot anyway.
    I want to create a new MySQL database, so I downloaded MySQL 4.1.22 for Windows and installed it. After installing it I have the programs MySQL Command Line Client and MySQL Server Instance Config Wizard. Both of them ask for a root user name and a root password, but how do I know those? Where can I find them?
    I hope you can help me.

    Source: https://mysql.livejournal.com/137190.html

  14. oracle blob to mysql blob *update*

    Date: 07/20/09 (MySQL Community)    Keywords: php, mysql, xml, database, sql

    Hey again.
    I'm trying to convert an Oracle DB to MySQL. So far everything is going OK except for the files in the DB. The old admin was storing jpg, pdf, doc, xml files and whatever else you can think of in a blob field in a table on Oracle.

    Is this the best way to do things? I mean, sure, all you need to do is back up the database and presto, you have all the files. But I'm having a hell of a time getting a backup: 440 entries in the table come to a 180-megabyte text file. phpMyAdmin won't process the file because it times out, and SQLyog and Navicat are having trouble with the file size, saying they're running out of memory.

    The closest I've come to completing this transfer is using Toad for Oracle and outputting the table to an mdb file. Then in Navicat I use the wizard and import the mdb; it sees the fields of the table perfectly, but the blobs come out as 0 KB. Aside from that small issue, I keep getting error 2006 when importing, and I lose between 3 and 219 entries depending on when I'm importing.

    So, at a loss: am I doing things right? Is it possible to convert Oracle blobs to MySQL blobs? Personally, if I had written the original site, I would just save a link to the actual file in the DB rather than store the file itself. Does that make sense?


    *UPDATE*
    So I tried using the migration tool; no luck. I'm missing libraries, and my tech guy wasn't here yesterday. The site for the Oracle libraries didn't want to work either.
    So this is what I ended up doing:
    In Toad I saved a csv/txt file of the table without the blobs in the csv, which cut the size down to 88 KB.
    I uploaded that via SQLyog into the DB with no problems.
    Then in Toad I saved the blobs themselves as individual .dat files, so at least I have the files that way.
    The DB actually has the filename in it, so it was just a matter of extracting the filenames, copying the .dat files from one folder to another on the server with a simple PHP script, and changing the names to reflect their actual names. All the files can be opened no problem once the names have been changed.

    Now I just need to link to said files from the DB to the directory on the server.

    Thanks again for the input :)

    Source: https://mysql.livejournal.com/136836.html

  15. Best schema for these requirements?

    Date: 06/19/09 (MySQL Community)    Keywords: mysql, database, sql

    I've been handed a legacy app with a MySQL database to extend/upgrade. The system is a mess. Lab tests are stored in a single table with over 200 columns containing marker values. The labtest table includes DateTestCollected. A separate table holds patient demographics, including DateOfBirth and Sex.

    In the past, we scanned lab results and identified possible abnormalities based on one or more markers. The same cutoff values were used regardless of age or sex, so: SELECT COUNT(*) FROM labtest WHERE marker_1>$m_1_cutoff_high OR marker_1<$m_1_cutoff_low; Even with the less-than-ideal schema, indexes on the marker columns made this kind of query work relatively well.

    New algorithms are being put into place. Instead of just looking at one or two markers, we now must look at the age & sex to determine which cutoff values to use, then apply our algorithm. I now have 12 possible variations of the same algorithm.

    For example:

    WHERE
    date_of_test>'2008-01-01' AND
    date_of_test<'2008-04-01' AND
    (

    ( age_rage='0-9' AND sex='m' AND ( marker_1 < 876 AND marker_1 > 345) ) OR
    ( age_rage='10-19' AND sex='m' AND ( marker_1 < 824 AND marker_1 > 312) ) OR
    ( age_rage='20-29' AND sex='m' AND ( marker_1 < 798 AND marker_1 > 311) )

    ) OR (

    ( age_rage='0-9' AND sex='f' AND ( marker_1 < 987 AND marker_1 > 465) ) OR
    ( age_rage='10-19' AND sex='f' AND ( marker_1 < 813 AND marker_1 > 404) ) OR
    ( age_rage='20-29' AND sex='f' AND ( marker_1 < 701 AND marker_1 > 209) )

    ) OR (
    /* etc. */
    )

    There are actually 12 variations to check against. This query is very slow, especially when iterating through 200+ markers (200 x 12 variations). EXPLAIN showed that MySQL was scanning all rows (50,000); as soon as I applied a multi-column index (date_of_test, marker_1), MySQL stopped scanning the entire table.

    Under the current schema, each marker, age, sex, and date_of_test column is indexed separately (no multi-column indexes). There are limits to the number of indexes per table.

    So that's where I am. I have a legacy system in need of work, but I'm not sure of the best, most efficient way of approaching the problem.

    I thought of converting the labtest table to something like (labtest_id, marker_id, marker_value), but that alone doesn't solve the issue of having to find the age and sex of each patient to determine the correct marker cutoff value. (A sketch of that idea is below.)
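    A sketch of that long format paired with a cutoff lookup table; all names here are illustrative, not the actual schema:

    -- One row per (test, marker) instead of 200+ marker columns.
    CREATE TABLE marker_result (
        labtest_id INT NOT NULL,
        marker_id  SMALLINT NOT NULL,
        value      DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (labtest_id, marker_id)
    );

    -- Cutoffs keyed by marker, sex, and age band: 12 rows per marker
    -- replace 12 WHERE-clause branches.
    CREATE TABLE marker_cutoff (
        marker_id   SMALLINT NOT NULL,
        sex         ENUM('m','f') NOT NULL,
        age_min     TINYINT UNSIGNED NOT NULL,
        age_max     TINYINT UNSIGNED NOT NULL,
        cutoff_low  DECIMAL(10,2) NOT NULL,
        cutoff_high DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (marker_id, sex, age_min)
    );

    -- Out-of-range results for a quarter: one join against the lookup
    -- table instead of enumerating 200 x 12 branches. Assumes labtest and
    -- patient tables joinable on a patient id.
    SELECT r.labtest_id, r.marker_id, r.value
    FROM marker_result r
    JOIN labtest t ON t.labtest_id = r.labtest_id
    JOIN patient p ON p.patient_id = t.patient_id
    JOIN marker_cutoff c
      ON c.marker_id = r.marker_id
     AND c.sex = p.sex
     AND TIMESTAMPDIFF(YEAR, p.date_of_birth, t.date_test_collected)
           BETWEEN c.age_min AND c.age_max
    WHERE t.date_test_collected >= '2008-01-01'
      AND t.date_test_collected <  '2008-04-01'
      AND (r.value < c.cutoff_low OR r.value > c.cutoff_high);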

    What would be the least painful way of working through this?

    Source: https://mysql.livejournal.com/135617.html

  16. Privs Not Showing Up?

    Date: 06/05/09 (MySQL Community)    Keywords: mysql, database, sql

    Hi folks. I'm hoping this is a simple question, but it's baffling me.

    I have a MySQL 5.0.27 DB. I am trying to give people SHOW VIEW and CREATE VIEW privileges for a specific database, with statements along the lines of the sketch below (names changed). When I execute the commands, they seem to be accepted with no problem: no syntax error, no warning. However, when I then do a SHOW GRANTS for the user, the view privileges do not show up, and the users confirm that they do not have those privileges.
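    A minimal sketch of that grant sequence (database and user names are placeholders):

    GRANT CREATE VIEW, SHOW VIEW ON mydb.* TO 'someuser'@'%';
    FLUSH PRIVILEGES;

    -- The view privileges are expected to appear here, but don't:
    SHOW GRANTS FOR 'someuser'@'%';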

    I have also flushed privileges, which was my first thought, but that did not fix it.

    I do seem to be able to grant or revoke other privs.

    Views are new to us, but they were introduced in 5.0.1, so I should have them.

    Anyone have any thoughts?

    Source: https://mysql.livejournal.com/135180.html

  17. Message Board

    Date: 04/04/09 (MySQL Community)    Keywords: database, web

    I want to write my own little message board: post something… and allow others to respond, keeping message trees intact.

    I've no idea how to manage the database for something like this.
    Can anyone give me a recommendation (book or website) that might give me a clue?

    :)
    -246
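    One common starting point is an adjacency list: each post stores the id of the post it replies to. A sketch with illustrative names:

    CREATE TABLE post (
        post_id   INT AUTO_INCREMENT PRIMARY KEY,
        parent_id INT NULL,              -- NULL marks a thread root
        author    VARCHAR(64) NOT NULL,
        body      TEXT NOT NULL,
        posted_at DATETIME NOT NULL
    );

    -- Direct replies to post 42, oldest first; walking the whole tree
    -- repeats this query level by level (or in application code).
    SELECT post_id, author, body
    FROM post
    WHERE parent_id = 42
    ORDER BY posted_at;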


    Source: https://mysql.livejournal.com/134519.html

  18. Indexes and DB Writes...

    Date: 03/30/09 (MySQL Community)    Keywords: mysql, database, sql

    We are all familiar with how valuable indexes are to MySQL (and to database reads in general). However, and specifically with respect to MySQL, what effect do indexes have on database writes?

    A colleague of mine, as we are changing our DB indexing structure, has argued that indexes harm database writes, since the data must be written both to the table and to the index. This is a compelling argument. However, at my last job, working with Sybase SQL databases, we had instances where indexes apparently increased write efficiency greatly. I had direct experience that this was the case, but no one was able to explain how, given that as far as I could see the data would have to be written in two separate places.

    So, how do indexes affect the efficiency of database writes in MySQL?

    Source: https://mysql.livejournal.com/133925.html

  19. Selective Replication in mySQL 5.X

    Date: 02/03/09 (MySQL Community)    Keywords: mysql, database, sql

    Folks may remember last week when I was asking questions about mysqldumps. I was doing that in order to get a DB backup for a specific database we wanted to replicate on another server, and I've been working on that.

    After some hit and miss I think I nearly got it working, at least to the point where replication started between the master and the slave. It seemed to work for a few seconds, then bombed when the slave tried to execute a statement for a DB that it didn't have.

    The point is that, of the several databases on the master machine, at this point we only want to replicate one on the slave machine. I thought I had set that up in the slave's my.cnf with the line

    replicate-do-db = aid   # aid is the database we want to replicate

    however, High Performance MySQL says that this is not the way to do it, since this kind of filtering is done against the current default database. "This is not usually what you want." :-)

    The book does indicate that "On the slave, the replicate_* options filter events as the slave SQL thread reads them from the relay log." This makes sense to me, and at this point it also makes sense that the log coming from the master has every statement coming into the DB. It goes on to say that "You can replicate or ignore one or more databases (emphasis mine)... based on LIKE pattern matching syntax." That is where I lose it. I understand how to use LIKE syntax in MySQL statements, but not in this environment.

    Is it possible to set up a small battery of statements in the slave my.cnf along the lines of:

    replicate_ignore_table = .%

    Would that do it?
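    For what it's worth, the pattern-based options the book describes are the wild-table rules, which are matched against the database of the table actually being changed rather than the default database. A sketch for replicating only the aid database (an assumption based on the post, not a tested config):

    # my.cnf on the slave: replicate every table in aid, skip everything else.
    replicate-wild-do-table = aid.%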

    Is anyone basically doing replication in a simple Master-Slave relationship where you are only replicating one DB? If so, how are you doing it?

    THANKS, folks!

    Source: https://mysql.livejournal.com/133717.html

  20. Restoring from a mySQLDump...

    Date: 01/28/09 (MySQL Community)    Keywords: mysql, database, sql

    Folks, I'm having what I think is a weird problem restoring a DB from a mysqldump. What I'm trying to do is basically copy a DB from one MySQL database server to another.

    The mysqldump command I used originally was:

    mysqldump --quick --add-locks --extended-insert -u root -p dbname > dbname.sql

    When this didn't work upon restoration I also tried:

    mysqldump --opt -u root -p dbname > dbname.sql

    Both these commands created the dbname.sql file with no problems or complaints.

    I copied the SQL file over to the target computer. I went into MySQL as root and did:

    create database dbname;
    use dbname;
    source dbname.sql;

    The import would start, but it wouldn't get very far. After not many seconds it would just... stop. Here is an example:


    Query OK, 8708 rows affected (0.11 sec)
    Records: 8708 Duplicates: 0 Warnings: 0

    Query OK, 8750 rows affected (0.12 sec)
    Records: 8750 Duplicates: 0 Warnings: 0

    Query OK, 8740 rows affected (0.11 sec)
    Records: 8740 Duplicates: 0 Warnings: 0

    Query OK, 8758 rows affected (0.11 sec)
    Records: 8758 Duplicates: 0 Warnings: 0

    Query OK, 8745 rows affected (0.12 sec)
    Records: 8745 Duplicates: 0 Warnings: 0


    And it would just stop there. I waited several minutes, and the load average of the machine went down to normal levels. mysqld stopped showing up in top.

    I'd then Ctrl-C the process and get:


    ^CQuery aborted by Ctrl+C
    ^CAborted
    mysql: 0 files and 1 streams is left open

    $


    I've done this a few times and the results have been consistent(ly bad). As I mentioned, I also did two separate mysqldumps, and both files failed to import in the same way.

    While the import was happening I watched top. There didn't seem to be a memory issue, since mysqld never used more than 0.3% of the memory, although it did use 96% of the CPU (which was fine).

    Both mysqld servers are running on the same operating system, Fedora 10 x86_64. The only difference between them is that the dump was done with MySQL 5.0.27, and I am trying to source it into mysqld v5.0.67.
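    One thing worth ruling out (an assumption, not a diagnosis): with --extended-insert the dump contains very large multi-row INSERT statements, and stalls like this are often associated with max_allowed_packet being smaller on the target server than on the source. A sketch of a shell-side restore that raises the client limit (the server variable may need raising as well; 64M is an illustrative value):

    mysql --max_allowed_packet=64M -u root -p dbname < dbname.sql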

    Any thoughts?

    Source: https://mysql.livejournal.com/133572.html
