-
Data warehouse community
Date: 12/05/15
Keywords: no keywords
Dear community members, can anybody help me find the data warehouse community?
Source: http://sqlserver.livejournal.com/77637.html
-
Creating a "SQL Job Launch Shell" for lower-priveleged users
Date: 02/11/13
Keywords: sql
This is in response to my question from 2/4/2013, for SQL Server 2000 (it should work in subsequent versions if you follow my comments).
Design: User Table Created w/ Trigger
CREATE TABLE [dbo].[prod_support_job_queue] (
    [job_name] sysname NOT NULL,
    [step_id] int NOT NULL CONSTRAINT [DF__prod_supp__step___4959E263] DEFAULT (1),
    [action] nvarchar(6) NOT NULL,   -- must be either START, CANCEL, or STOP
    [ntlogin] nvarchar(32) NULL,     -- used to log who made the request
    [log_date] datetime NULL,
    [processed] char(1) NOT NULL CONSTRAINT [DF_prod_support_job_queue_processed] DEFAULT ('N')
) ON [PRIMARY]
CREATE TRIGGER [dbo].[ti_job_queue] ON [dbo].[prod_support_job_queue] FOR INSERT
AS
SET NOCOUNT ON
IF ( UPDATE(job_name) )
BEGIN
    DECLARE @username varchar(30)
    DECLARE @log_date datetime
    DECLARE @job_name sysname
    -- Get the user's attributes.
    SELECT @username = loginame FROM master..sysprocesses WHERE spid = @@spid
    SELECT @log_date = getdate()
    SELECT @job_name = job_name FROM inserted
    UPDATE prod_support_job_queue
    SET log_date = @log_date, ntlogin = @username
    WHERE processed = 'N' AND job_name = @job_name
END
Procedures: - check_job_queue - fires off via scheduled SQL job. It reads from the prod_support_job_queue table
- make_job_request - procedure exposed to the production support team. This helps them insert records into the prod_support_job_queue table
- sp_isJobRunning - modified from publicly available code (linked in the original post) so that it runs on SQL 2000
Logic: - The user makes his request via the make_job_request stored procedure. He is required to enter a valid job name and an action (START, STOP, or CANCEL).
- check_job_queue runs every 10 minutes to check for new actions in the prod_support_job_queue table. It uses system stored procedures in msdb to start and stop jobs. For the CANCEL command, a simple update is issued to the processed field to exclude the request from further processing checks.
- sp_IsJobRunning is called by check_job_queue to see whether the requested job is already running before any commands are issued.
I am still fine-tuning the check_job_queue procedure. Once that is done, I'll post the code for the two custom procedures, check_job_queue and make_job_request; a rough sketch of the core loop is below.
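In the meantime, here is a minimal, untested sketch of what the heart of check_job_queue might look like. The cursor name, the processed-flag update, and the omission of the sp_IsJobRunning check are my assumptions, not the author's posted code; only the table and the documented msdb procedures come from the post.
-- Sketch only: walk pending requests and act on them via msdb's job procedures.
DECLARE @job_name sysname, @action nvarchar(6)
DECLARE job_cursor CURSOR FOR
    SELECT job_name, [action]
    FROM dbo.prod_support_job_queue
    WHERE processed = 'N'
OPEN job_cursor
FETCH NEXT FROM job_cursor INTO @job_name, @action
WHILE @@FETCH_STATUS = 0
BEGIN
    -- The real procedure would call sp_IsJobRunning here before acting.
    IF @action = 'START'
        EXEC msdb.dbo.sp_start_job @job_name = @job_name
    ELSE IF @action = 'STOP'
        EXEC msdb.dbo.sp_stop_job @job_name = @job_name
    -- A CANCEL request is simply marked processed without touching the job.
    UPDATE dbo.prod_support_job_queue
    SET processed = 'Y'
    WHERE job_name = @job_name AND processed = 'N'
    FETCH NEXT FROM job_cursor INTO @job_name, @action
END
CLOSE job_cursor
DEALLOCATE job_cursor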
Source: http://sqlserver.livejournal.com/77452.html
-
SQL Job Administrators - SQL 2008 R2
Date: 02/04/13
Keywords: asp, sql, microsoft
I'm thinking about doing this because our number of ad-hoc requests to run jobs has increased to an annoying level. Does anyone out there have experience putting this into practice?
How to: Configure a User to Create and Manage SQL Server Agent Jobs (SQL Server Management Studio): http://msdn.microsoft.com/en-us/library/ms187901%28v=sql.105%29.aspx
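For anyone weighing the same approach, the linked article essentially comes down to granting one of the fixed SQL Agent roles in msdb. A minimal sketch (the login name is a placeholder of mine, not from the post):
USE msdb;
-- Map the login into msdb, then grant a fixed SQL Agent role.
-- SQLAgentOperatorRole can start/stop any local job; SQLAgentUserRole is limited to jobs the user owns.
CREATE USER [DOMAIN\JobRunner] FOR LOGIN [DOMAIN\JobRunner];
EXEC sp_addrolemember N'SQLAgentOperatorRole', N'DOMAIN\JobRunner';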
Source: http://sqlserver.livejournal.com/77287.html
-
Question regarding Collations
Date: 06/08/12
Keywords: database, sql
Does anyone in this group have experience working with Unicode, double-byte, case-sensitive data in SQL 2008 R2?
I would like to select a collation for my database that allows case-sensitive sorting/comparisons with Unicode data that could contain Japanese characters. Whew...that's hard to say.
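Not an authoritative answer, but one option worth testing is one of the Japanese case-sensitive (and kana/width-sensitive) collations. A quick sketch against a throwaway database; the database, table, and sample values are just placeholders:
-- Japanese_90_CS_AS_KS_WS is case-, accent-, kana-, and width-sensitive.
CREATE DATABASE CollationTest COLLATE Japanese_90_CS_AS_KS_WS;
GO
USE CollationTest;
CREATE TABLE dbo.Names (Name nvarchar(50) NOT NULL);  -- nvarchar keeps the data in Unicode
INSERT INTO dbo.Names VALUES (N'Tokyo'), (N'tokyo'), (N'トウキョウ'), (N'とうきょう');
-- With a CS/KS collation these comparisons distinguish case and kana; a CI collation would not.
SELECT Name FROM dbo.Names WHERE Name = N'Tokyo';
SELECT Name FROM dbo.Names WHERE Name = N'トウキョウ';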
Source: http://sqlserver.livejournal.com/76773.html
-
SQL Server SP2
Date: 02/28/12
Keywords: sql
Hi, I have multiple instances of SQL Server 2008. We are planning to install SP2 on only one instance. What impact will this have on the rest of the instances, and especially on the shared components? Thank you!
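Not an answer to the shared-components question, but a quick way to confirm afterwards which instances actually got patched is to run a check like this against each instance (a sketch using standard SERVERPROPERTY values):
-- Reports the instance name, build number, and service-pack level of the connected instance.
SELECT
    SERVERPROPERTY('InstanceName')   AS InstanceName,
    SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 10.0.4000.0 for SQL Server 2008 SP2
    SERVERPROPERTY('ProductLevel')   AS ProductLevel;    -- e.g. RTM, SP1, SP2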
Source: http://sqlserver.livejournal.com/76475.html
-
Query fun and games
Date: 02/23/12
Keywords: xml, sql
I've found in general for SQL that there is more than one way to solve (almost) any problem. I've been playing around with query building today and decided to see how many ways I could solve a problem that recurs fairly frequently in my work, flattening subrecords into a single row.
This is my current standard solution, using the PIVOT function. It's quite fast, but limits you to a specific number of subrecords--it can be a high number, but you still have to decide on a maximum.
WITH cte AS (
    SELECT Person.contactid AS 'ID'
         , Person.FullName AS 'Name'
         , 'Activity' = Activity.a422_rel_activityvalueidname
         , 'Row' = ROW_NUMBER() OVER (PARTITION BY Person.contactid, Person.FullName
                                      ORDER BY Activity.a422_rel_activityvalueidname)
    FROM Contact AS Person
    INNER JOIN Task AS Activity ON Person.contactid = Activity.regardingobjectid
)
SELECT ID, Name
     , 'Activity1' = [1], 'Activity2' = [2], 'Activity3' = [3], 'Activity4' = [4], 'Activity5' = [5]
FROM cte
PIVOT (MAX(cte.Activity) FOR cte.[Row] IN ([1], [2], [3], [4], [5])) AS pvt
This is a new solution I found while surfing some SQL Server blogs, using FOR XML PATH to create a CSV list of values. It will include an indefinite number of subrecords, but only one field from each subrecord. It's significantly slower than the first example, by at least an order of magnitude.
SELECT DISTINCT p.contactid AS 'ID'
     , p.FullName AS 'Name'
     , SUBSTRING((SELECT ', ' + Activity.a422_rel_activityvalueidname
                  FROM Task AS Activity
                  WHERE Activity.regardingobjectid = p.contactid
                  FOR XML PATH('')), 2, 4000) AS 'Activities'
FROM Contact AS p
INNER JOIN Task AS t ON p.contactid = t.regardingobjectid
ORDER BY p.contactid
This ugly-looking creature is what I used to use before PIVOT came along: many, many self-joins. I'm pretty sure I had a slightly more elegant (and faster!) version of this, but it's been a long time since I've had to create one of these things (fortunately). The performance is...not as bad as you might expect.
SELECT 'ID' = p.contactid, 'Name' = p.fullname
     , 'Activity1' = a1.a422_rel_activityvalueidname, 'ActivityDate1' = a1.actualend
     , 'Activity2' = a2.a422_rel_activityvalueidname, 'ActivityDate2' = a2.actualend
     , 'Activity3' = a3.a422_rel_activityvalueidname, 'ActivityDate3' = a3.actualend
     , 'Activity4' = a4.a422_rel_activityvalueidname, 'ActivityDate4' = a4.actualend
     , 'Activity5' = a5.a422_rel_activityvalueidname, 'ActivityDate5' = a5.actualend
FROM Contact AS p
INNER JOIN Task AS a1 ON p.contactid = a1.regardingobjectid
LEFT JOIN Task AS not1 ON p.contactid = not1.regardingobjectid AND not1.activityid < a1.activityid
LEFT JOIN Task AS a2 ON p.contactid = a2.regardingobjectid AND a2.activityid > a1.activityid
LEFT JOIN Task AS not2 ON p.contactid = not2.regardingobjectid AND not2.activityid > a1.activityid AND not2.activityid < a2.activityid
LEFT JOIN Task AS a3 ON p.contactid = a3.regardingobjectid AND a3.activityid > a2.activityid
LEFT JOIN Task AS not3 ON p.contactid = not3.regardingobjectid AND not3.activityid > a2.activityid AND not3.activityid < a3.activityid
LEFT JOIN Task AS a4 ON p.contactid = a4.regardingobjectid AND a4.activityid > a3.activityid
LEFT JOIN Task AS not4 ON p.contactid = not4.regardingobjectid AND not4.activityid > a3.activityid AND not4.activityid < a4.activityid
LEFT JOIN Task AS a5 ON p.contactid = a5.regardingobjectid AND a5.activityid > a4.activityid
LEFT JOIN Task AS not5 ON p.contactid = not5.regardingobjectid AND not5.activityid > a4.activityid AND not5.activityid < a5.activityid
WHERE not1.regardingobjectid IS NULL
  AND not2.regardingobjectid IS NULL
  AND not3.regardingobjectid IS NULL
  AND not4.regardingobjectid IS NULL
  AND not5.regardingobjectid IS NULL
Using a recursive CTE almost works, except that for each main record it returns a row with one subrecord, another row with two subrecords, a row with three subrecords, and so on for as many subrecords as that main record has. It seems like there has to be a way around that, so if you have any ideas, let me know (one possibility is sketched after the query). Performance is not good, but not horrible.
WITH cte AS (
    SELECT a1.regardingobjectid, a1.activityid
         , 'Activities' = CONVERT(nvarchar(1000), a1.createdon, 113)
    FROM Task AS a1
    INNER JOIN Contact AS p ON a1.regardingobjectid = p.contactid
    LEFT JOIN Task AS not1 ON a1.regardingobjectid = not1.regardingobjectid AND a1.activityid > not1.activityid
    WHERE not1.activityid IS NULL
    UNION ALL
    SELECT cte.regardingobjectid, a1.activityid
         , 'Activities' = CONVERT(nvarchar(1000), (cte.Activities + N', ' + CONVERT(nvarchar, a1.createdon, 113)))
    FROM cte
    INNER JOIN Task AS a1 ON cte.regardingobjectid = a1.regardingobjectid AND cte.activityid < a1.activityid
    WHERE NOT EXISTS (SELECT * FROM Task AS not1
                      WHERE cte.regardingobjectid = not1.regardingobjectid
                        AND not1.activityid > cte.activityid
                        AND not1.activityid < a1.activityid)
)
SELECT 'ID' = p.contactid, 'Name' = p.fullname
     , cte.Activities
FROM cte
INNER JOIN Contact AS p ON cte.regardingobjectid = p.contactid
ORDER BY p.fullname
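One idea worth trying (an untested sketch of mine, keeping the CTE above unchanged): keep only the deepest accumulated row per contact, since the row carrying the highest activityid in each chain is the one that has collected every subrecord.
-- Replace the final SELECT with something like this; rn = 1 picks the fully accumulated row per contact.
SELECT ID, Name, Activities
FROM (
    SELECT 'ID' = p.contactid, 'Name' = p.fullname, cte.Activities
         , 'rn' = ROW_NUMBER() OVER (PARTITION BY cte.regardingobjectid ORDER BY cte.activityid DESC)
    FROM cte
    INNER JOIN Contact AS p ON cte.regardingobjectid = p.contactid
) AS x
WHERE rn = 1
ORDER BY Name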
Creating a custom aggregate function in CLR is another solution, but playing with that will have to be another day.
Source: http://sqlserver.livejournal.com/76055.html
-
Tracking Database Growth
Date: 10/10/11
Keywords: database, sql
I came across this article when doing some more research on documenting database growth over time. It worked really well for me.
Thank you vyaskn@hotmail.com!
In this article I am going to explain how to track file growth, especially for database files. First of all, why is it important to track database file growth? Tracking file growth helps you understand the rate at which your database is growing, so that you can plan ahead for your future storage needs. It is better to plan ahead instead of running around when you run out of disk space, isn't it? So, how can we track file growth? There are a couple of ways.
The first approach: SQL Server's BACKUP and RESTORE commands store backup and restore history in the msdb database. In this approach, I am going to use the backupset and backupfile tables from msdb to calculate file growth percentages. Whenever you back up a database, the BACKUP command inserts one row into the backupset table and one row for every file in the backed-up database into the backupfile table, along with the size of each file. I am going to use these file sizes recorded by the BACKUP command, compare them with the previous sizes, and come up with the percentage of file growth. This approach assumes that you do full database backups periodically, at regular intervals.
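To illustrate the idea (this is not the downloadable sp_track_db_growth procedure itself, just a rough sketch of my own against the same msdb tables; the database name is a placeholder):
-- Data-file sizes recorded by each full backup of a given database, oldest first.
-- Comparing consecutive rows per logical file gives the growth between backups.
SELECT bs.database_name
     , bs.backup_start_date
     , bf.logical_name
     , bf.file_size / 1048576.0 AS file_size_mb   -- backupfile.file_size is in bytes
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupfile AS bf ON bs.backup_set_id = bf.backup_set_id
WHERE bs.database_name = 'YourDatabase'   -- placeholder name
  AND bs.type = 'D'                       -- full database backups only
  AND bf.file_type = 'D'                  -- data files
ORDER BY bf.logical_name, bs.backup_start_date;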
Click here to download the procedure sp_track_db_growth. ... **Please use free code responsibly. Test and verify before deploying to production!
Source: http://sqlserver.livejournal.com/75793.html
-
DBA Position Open in North Texas
Date: 07/20/11
Keywords: technology, database, sql
Requirements:
- 1-2 years of total database experience required (preferably Oracle, Sybase, or SQL Server)
- Experience with Windows Server operating systems
- Experience creating SQL scripts and setting up typical database maintenance jobs
- Experience working with development teams
The Database Administrator 1 will participate in departmental projects by assisting in the development of project plans and documentation and by performing project tasks. In support of the other DBAs, he/she will perform daily maintenance tasks and participate in DBA on-call duty. The DBA I position will be responsible for installing and maintaining database technology in a multi-platform, mission-critical environment.
CONTACT: Jennifer Toal, Research Analyst, COMTEK-Group
972-792-1045 Office
972-467-2901 Mobile
972-644-6602 Fax
Source: http://sqlserver.livejournal.com/75585.html
-
Production SQL DBA Opening in North Texas
Date: 06/02/11
Keywords: database, asp, sql, security, microsoft
Passing this along for a friend...If you know anyone looking, please let me know. Pay terms seem to be a little higher than normal for that many years of experience. Responsibilities: - Installation, configuration, customization, maintenance and performance tuning of SQL Server 2005 & 2008 including SSIS, SSAS and SSRS.
- SQL version migration, patching and security management.
- Monitor database server capacity/performance and make infrastructure and architecture recommendations to management for necessary changes/updates.
- Perform database optimization, administration and maintenance (partitioning tables, partitioning indexes, indexing, normalization, synchronization, job monitoring, etc).
- Manage all aspects of database operations including implementation of database monitoring tools, event monitoring, diagnostic analysis, performance optimization routines and top-tier support for resolving support issues.
- Work with internal IT operations teams to troubleshoot network and server issues and optimize the database environment.
- Establish and enforce database change management standards including pushes from development to QA, on to production, etc;
- Proactively stay current with the latest technologies and industry best practices associated with the position and responsibilities.
- Provide development and production support to troubleshoot day-to-day database or related application issues.
- Develop, implement and verify processes for system monitoring, storage management, backup and recovery.
- Develop, implement and verify database backup and disaster recovery strategies.
- Design and implement all database security to ensure integrity and consistency among the various database regions
- Develop and maintain documentation of the production environment.
- Manage SLAs and strict adherence to production controls - Sarbanes-Oxley (SOX) monitored via external audits
Necessary Qualifications:
- Must have experience with SQL Server 2005.
- Good exposure to installation and configuration of database clusters, replication, log shipping, and mirroring
- Expertise in Troubleshooting and performance monitoring SQL Server Database server (Query Tuning, Server Tuning, Disk Performance Monitoring, Memory Pressure, CPU bottleneck etc.)
- Expertise in T-SQL and writing efficient and highly performing SQL Statements.
- Expertise in SQL Server Internals, wait events, profiler, windows events etc
- Must have understanding of key infrastructure technologies such as Clustering, SAN Storage, Virtualization, Cloud services etc.
Other nice-to-have experience:
- System administration fundamentals, including installation, configuration, and security setup.
- Experience with SQL 2008 a plus.
- Experienced in architecting high availability, business resumption and disaster recovery solutions
- Microsoft SQL Server DBA Certification
- Experience with SCOM/SCCM/SCSM is a plus
- Extremely self-motivated, with the ability to work within a globally dispersed team.
Desired Skills:
- Data Warehouse experience
- VLDB experience highly desired
- Experience with databases > 5 TB, processing 2 million + rows of data daily
- MS SQL Server 2005 Transact-SQL (T-SQL)
- Stored Procedure Development Communication Skills, work well with the team, and within team processes
- Database and file size and space forecasting ability
- Ability to manage a complex database system and assist the client with Database Integration for Future Business Intelligence efforts
- Confio Ignite Performance
Education & Work Experience:- Bachelor's degree in Computer Science, Business Administration or other
- 10+ years experience as a Database Administrator
Source: http://sqlserver.livejournal.com/75423.html
-
Microsoft Tech-Ed
Date: 05/12/11
Keywords: no keywords
Anyone going to this next week? If you're interested in going but can't, is there anything I can report back to the group about?
Source: http://sqlserver.livejournal.com/75106.html