Thursday, September 8, 2011

Auditing SQL Stored Procedures

Some of our more active OLTP databases contain hundreds of stored procedures. Over time they get replaced or become obsolete. With multiple apps and reports accessing the database it can be difficult to determine if a procedure is still in use.

To help identify the obsolete procedures I started by creating a simple table to hold my usage statistics.
I then created this little SQL snippet. I identified procs that I suspected were no longer in use and added the snippet. Putting a date in the comment is the easiest way to keep track of how long it has been active.
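The table and snippet were along these lines (a reconstructed sketch — the table name dbo.ProcUsageAudit is my own stand-in, with columns matching the query output below):

```sql
-- Sketch of the usage-statistics table; the table name is a stand-in.
CREATE TABLE dbo.ProcUsageAudit
(
    FunctionName varchar(128) NOT NULL PRIMARY KEY,
    LastRun      datetime     NOT NULL,
    NumberOfRuns int          NOT NULL
);
GO

-- The snippet pasted at the top of each suspect proc.
-- Audit snippet added 2011-05-24
UPDATE dbo.ProcUsageAudit
SET    LastRun = GETDATE(),
       NumberOfRuns = NumberOfRuns + 1
WHERE  FunctionName = OBJECT_NAME(@@PROCID);

IF @@ROWCOUNT = 0
    INSERT INTO dbo.ProcUsageAudit (FunctionName, LastRun, NumberOfRuns)
    VALUES (OBJECT_NAME(@@PROCID), GETDATE(), 1);
```

OBJECT_NAME(@@PROCID) saves hard-coding the proc name into each copy of the snippet.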


At a later date I would run the following query to see what's active. I can comment out the snippet from more active procs such as spOMInsertFill_1_0. You might want to keep a note of the high-usage procs so you can profile them for performance at a later date.
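The query is just a dump of the usage table, newest first (assuming an audit table named dbo.ProcUsageAudit with the three columns shown in the output below):

```sql
SELECT FunctionName, LastRun, NumberOfRuns
FROM   dbo.ProcUsageAudit
ORDER  BY LastRun DESC;
```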
FunctionName                    LastRun                  NumberOfRuns
spOMUpdateOrder_1_1             2011-05-24 17:03:56.060  60
spOMBreakLinksOnGTCOrder_1_0    2011-05-24 16:50:02.270  12
spOMGetCommonTraders            2011-05-24 16:50:01.317  2
spOMGetAllAccountTypes          2011-05-24 16:43:55.843  1
spOMGetStatus                   2011-05-24 16:14:46.773  1973
spOMInsertFill_1_0              2011-05-24 05:25:00.580  13687
spOMRptSG                       2011-05-23 17:10:02.260  3
spOMDeleteOrderFill             2011-05-23 15:52:10.513  2618

The above list provides all the procs that have been called, but what about the ones that haven't? Running the following query will list all the procs where you have added the code snippet but which don't appear in the table.
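A sketch of that query, assuming the audit table is named dbo.ProcUsageAudit and that the snippet can be spotted by searching the proc definitions for the table name:

```sql
SELECT s.name + '.' + p.name AS Name
FROM   sys.procedures p
JOIN   sys.schemas s ON s.schema_id = p.schema_id
WHERE  OBJECT_DEFINITION(p.object_id) LIKE '%ProcUsageAudit%'
  AND  p.name NOT IN (SELECT FunctionName FROM dbo.ProcUsageAudit)
ORDER  BY Name;
```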

Name
dbo.spOMAdjustCamelliaAttention
dbo.spOMBreakLinksOnGTCOrder_1_0
dbo.spOMConvertTTToOMS_1_0
dbo.spOMDeleteOrderFill
dbo.spOMGetAlertColours
dbo.spOMGetAllAccountTypes
dbo.spOMGetArchiveOrderFillGroup
dbo.spOMGetArchiveSingleOrder
dbo.spOMGetArchiveStrategy
dbo.spOMGetArchiveStrategyOrders
dbo.spOMGetArchiveStrategyOrders_1_0
dbo.spOMGetAvailableDesks

This process is manual and intrusive but I only clean up old procs once or twice a year. It also helps that I have access to the procs to inject my code. One possible addition that may add value is to include APP_NAME() in the table so you know what application is calling the proc.
For those in larger (and more restrictive) corporate environments SQL Profiler may be the only option. It would be an interesting challenge to identify obsolete procs passively.

Tuesday, May 24, 2011

MQ on a shoestring - Basic setup

I needed to write an app that connected to an MQSeries queue manager in Chicago and read FIXML messages. I found plenty of posts about connecting and reading messages from MQ but what I needed was the basic setup required to actually connect to a queue manager. Everything I found assumed that all the prerequisite software was installed and I was ready to code.

The first task was to identify and download the software. Luckily I had done some prototyping with MQ a couple of years ago. I had version 6.0, so that's what I went with. Navigating the IBM website can be painful at the best of times, so I've included the names of the files you need to make finding them easier.

WebSphere MQ Client
The zip file is named mqc6_win.zip (132MB). Installation was straightforward. Just click through to the end.

Message Service Client for C/C++ and .NET
The zip file is named ia9h.zip (65MB). Again installation was straightforward.

In the IDE I added references to the following components: amqmdnet, amqmdxcs and IBM.XMS.
If the references are not available from Add References, the files are located here (based on a default installation):
  • C:\Program Files\IBM\WebSphere MQ\bin\amqmdnet.dll
  • C:\Program Files\IBM\WebSphere MQ\bin\amqmdxcs.dll
  • C:\Program Files\IBM\WebSphere MQ\bin\IBM.XMS.dll

This is the point where all those MQ examples are now possible, or at least within reach. Next we'll determine what information is needed to setup a connection.
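As a starting point, a minimal connection sketch using the amqmdnet classes looks something like this (the host, port, channel and queue manager names are placeholders for your own details):

```csharp
using System;
using IBM.WMQ; // from amqmdnet.dll

class MqConnectionTest
{
    static void Main()
    {
        // Placeholder connection details -- substitute your own.
        MQEnvironment.Hostname = "mqhost.example.com";
        MQEnvironment.Port = 1414;
        MQEnvironment.Channel = "SYSTEM.DEF.SVRCONN";

        MQQueueManager qmgr = new MQQueueManager("QM1");
        try
        {
            Console.WriteLine("Connected to " + qmgr.Name);
        }
        finally
        {
            qmgr.Disconnect();
        }
    }
}
```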

Keeping track of SQL Connections

Just about all the applications I write have a database somewhere in the mix. Not only that, a number of the applications are packaged as Windows services and run from a dedicated app server. There may be multiple instances of the same service running on multiple servers. With dozens of connections open, which connection belongs to which instance?

When generating the SQL connection I populate the Application Name property of the connection string. The format I use is App [Thread][App Version]. Keeping track of the thread can be useful when checking a log file.
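A sketch of how this might be built — the app, server and database names are hypothetical:

```csharp
using System.Data.SqlClient;
using System.Threading;

// Placeholder server/database names.
var builder = new SqlConnectionStringBuilder
{
    DataSource = "dbserver01",
    InitialCatalog = "OrderManagement",
    IntegratedSecurity = true,
    // Format: App [Thread][App Version]
    ApplicationName = string.Format("OrderService [{0}][{1}]",
        Thread.CurrentThread.ManagedThreadId,
        System.Reflection.Assembly.GetExecutingAssembly().GetName().Version)
};

using (var conn = new SqlConnection(builder.ConnectionString))
{
    conn.Open();
    // The name now shows in the Program Name column of sp_who2 / Activity Monitor.
}
```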



When releasing updated software I find this feature invaluable. It provides a simple method of determining who is using what version of your software. If you are feeling malicious, a right click and a Kill Process can force the user to upgrade.

Monday, March 28, 2011

SQL Report Server - The report server has encountered a configuration error

It's been a week of niggling problems since our SQL Server 2005 machine was migrated to a new domain. In this instance we have a report server hosted on a separate machine which hasn't been migrated yet. A number of reports on the machine have subscriptions set up. On the Monday after the migration, they all started to fail.




Firstly, I had a look in the log file. For me the log files were located here:
C:\Program Files\Microsoft SQL Server\MSSQL.1\Reporting Services\LogFiles
Digging through the overly verbose log information I found the following error. The job was failing due to an authorisation issue; the relevant part is the AuthzInitializeContextFromSid line.

ReportingServicesService!library!4!03/21/2011-20:56:09:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: The report server has encountered a configuration error. See the report server log files for more information.,
AuthzInitializeContextFromSid: Win32 error: 110;
Info: Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: The report server has encountered a configuration error. See the report server log files for more information.
ReportingServicesService!library!4!03/21/2011-20:56:09:: i INFO: Initializing EnableExecutionLogging to 'True' as specified in Server system properties.
ReportingServicesService!emailextension!4!03/21/2011-20:56:09:: Error sending email. Microsoft.ReportingServices.Diagnostics.Utilities.RSException: The report server has encountered a configuration error. See the report server log files for more information. ---> Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: The report server has encountered a configuration error. See the report server log files for more information.
Running the reports manually worked; it was just the subscriptions that had the problem. In my case the report server back end was hosted on the database server that had been migrated, so I focused my attention on that server.

When a new subscription is created, a SQL Agent job is created with a GUID for its name.
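Those GUID job names can be mapped back to their subscriptions with a query along these lines (a sketch against the default SQL 2005 ReportServer catalog; table and column names may vary with your install):

```sql
-- Map GUID-named SQL Agent jobs back to their report subscriptions.
SELECT j.name AS JobName,
       c.Name AS ReportName,
       s.Description
FROM   msdb.dbo.sysjobs j
JOIN   ReportServer.dbo.ReportSchedule rs
       ON j.name = CAST(rs.ScheduleID AS varchar(36))
JOIN   ReportServer.dbo.Subscriptions s
       ON rs.SubscriptionID = s.SubscriptionID
JOIN   ReportServer.dbo.[Catalog] c
       ON s.Report_OID = c.ItemID;
```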



Not exactly user friendly but it was enough for me to narrow down that the root of the problem lay with SQL Agent.


The SQL Agent service account was in the local admins group, so I assumed that should give it the carte blanche needed to run unrestricted. Using Computer Management and looking at Local Users and Groups, I found a cluster of SQL Server-related groups.




One group in particular was of interest to me, the SQLServer2005SQLAgentUser$(machinename)$MSSQL. It contained the old SQL Agent service account but not the new account. Adding the new service account fixed the problem.




After some additional research, I discovered that if I had modified the service accounts using the SQL Server Configuration Manager rather than through the SCM, the new service account would have been added to this group automatically.


Yet another lesson learned.

Friday, March 25, 2011

SSIS: The module containing "%1" cannot be located, even though it is registered.

Last weekend our SQL Server 2005 server was migrated to a new domain. On the Monday morning, all our scheduled SSIS packages started failing.

The errors indicated that all the transform tasks within the package were failing with a "The module containing "%1" cannot be located, even though it is registered." error. The only thing that had changed between Friday and Monday was the AD migration. A quick test of some of the packages confirmed that they ran on my local machine but not on the server.

Initially I suspected a permissions problem. I logged in using the new SQLAgent domain account and used DTSRunUI to test one of the failing packages. The package failed with the same error as the scheduled job. I then logged in using the old SQLAgent account and re-ran the package. It failed with the same error. OK then, maybe it wasn't permissions after all. I opened the dtsx file in a text editor and looked for one of the transform components:
<component id="97" name="Data Conversion 1" componentClassID="{C3BF62C8-7C5C-4F85-83C3-E0B6F6BE267C}" description="Data Conversion" localeId="-1" usesDispositions="true" validateExternalMetadata="True" version="0" pipelineVersion="0" contactInfo="Data Conversion;Microsoft Corporation;Microsoft SqlServer v9; (C) 2005 Microsoft Corporation; All Rights Reserved; http://www.microsoft.com/sql/support;0">
I copied the class ID and did a search in the registry.



I checked the path and discovered that the PipelineComponents directory was missing. A quick search found it in a nearby directory. With all the messing around on the server it looks like a slip of the mouse had moved the directory. Once restored, everything started working again.

In hindsight the problem was simple but it took some time for me to look beyond permissions and pick the physical package apart. Once I did that the problem was solved in minutes.

Lesson learned!

Tuesday, March 1, 2011

MSMQ Private versus Public queues

It's time again to change AD domains and migrate users and servers. From previous experience this has always been a painful process. At the moment I'm looking at MSMQ. In the past I've created queues as public. My reasoning being that public queues are more accessible than private queues and having them visible on AD is a good thing, right?

Well, no. Given the appropriate permissions you can access private queues from remote machines as readily as public queues. The main differentiators are that public queues are published on the AD domain, are browsable, and can use MQ routing. All my applications use point-to-point connections and I don't need MQ routing at this point. Since I'm not using these features, public queues have become more of a negative than a positive, especially with this domain move imminent.
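For example, a remote private queue can be opened with a direct format name, no AD lookup required (the machine and queue names here are hypothetical):

```csharp
using System.Messaging; // System.Messaging.dll

class PrivateQueueDemo
{
    static void Main()
    {
        // Direct format name: no Active Directory involved.
        // "appserver01" and "orders" are placeholder names.
        using (var queue = new MessageQueue(
            @"FormatName:DIRECT=OS:appserver01\private$\orders"))
        {
            queue.Send("payload", "test message");
        }
    }
}
```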

The moral of the story: unless you have a specific need for routing or for managing queues through AD, private queues are the way to go.

This TechNet article provides a more in-depth description of the differences.

Wednesday, February 23, 2011

MTU pain with PATS TOF API

For the past few months I've been testing the TOF API against the PATS GT system. By and large the testing has been uneventful except for one persistent problem...

Symptoms

There are two connections to the ASP, one through a leased line and the other over the internet. Since day one there have been issues with the leased line. The TOF would connect and authenticate but then sit idle. What should happen is that all contract data is downloaded, followed by any orders and fills.

Cause

Eventually we determined that the MTU size was being exceeded. The handshaking process worked because of the small packet size but when the contract download started, the process stopped as the packets were being dropped by the router.

The provider commissioned a Layer 3 MPLS solution. The underlying carrier allowed a maximum MTU size of 1500 bytes but this was reduced by the VPN overhead to 1478 bytes. ICMP was blocked by the firewall so fragmentation was not possible. The end result was that any packets over 1478 bytes were lost.

Solution

Three solutions presented themselves:
  • Use a fixed MTU size.
  • Enable ICMP between the hosts so a smaller MTU size can be negotiated.
  • Change to a Layer 2 Private Leased Line.
In the end, option 1 was implemented for the short term, with option 3 as the target long-term solution. It will be interesting to see if this eventuates.
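For reference, a fixed MTU (option 1) can be applied on a Windows host (Vista/2008 or later) with netsh; the interface name below is a placeholder for your own:

```shell
# Set a fixed MTU of 1478 on the interface facing the leased line
# (requires an elevated prompt; store=persistent survives reboots).
netsh interface ipv4 set subinterface "Local Area Connection" mtu=1478 store=persistent
```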