Wednesday, November 14, 2012

MQ on a shoestring - Connection Details

Now that the necessary software has been installed, we can continue with the configuration. The next item on the list is the connection details. The MQ admin needs to provide at least five pieces of information. If your admin keeps to IBM naming conventions, the settings should look similar to the following:
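As an illustration (every value below is made up), the details cover the queue manager, channel, host, port and queue, plus the credentials discussed below:

```
Queue Manager : QM.CHICAGO
Channel       : SYSTEM.DEF.SVRCONN
Hostname      : chicago-mq.example.com
Port          : 1414
Queue         : FIXML.IN
UserId        : mquser
Password      : ********
```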


The username and password are required when connecting to an AS400. There may be ways to drop this requirement, but within my organisation it was mandatory for all AS400 connections.

Moving back to the code, add the following references.


You should be able to cut and paste this and call it with relative ease.



As you can see from the above code, there is a dedicated MQException class. When raised, it includes an additional piece of information: the ReasonCode. The following link provides an explanation of the codes and should give you enough information to do further research.
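A minimal sketch of that connect-and-read code, assuming the classic amqmdnet API (IBM.WMQ namespace); the host, port, channel, names and credentials are all placeholders:

```csharp
// Sketch only: connection values below are made up.
using System;
using IBM.WMQ;

class MQConnectSketch
{
    static void Main()
    {
        // The connection details supplied by the MQ admin
        MQEnvironment.Hostname = "chicago-mq.example.com";
        MQEnvironment.Port     = 1414;
        MQEnvironment.Channel  = "SYSTEM.DEF.SVRCONN";
        MQEnvironment.UserId   = "mquser";   // mandatory for our AS400 connections
        MQEnvironment.Password = "secret";

        MQQueueManager qmgr = null;
        try
        {
            qmgr = new MQQueueManager("QM.CHICAGO");
            MQQueue queue = qmgr.AccessQueue("FIXML.IN",
                MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_FAIL_IF_QUIESCING);

            MQMessage msg = new MQMessage();
            queue.Get(msg);
            Console.WriteLine(msg.ReadString(msg.MessageLength));
            queue.Close();
        }
        catch (MQException ex)
        {
            // The ReasonCode narrows the failure down,
            // e.g. 2059 = MQRC_Q_MGR_NOT_AVAILABLE
            Console.WriteLine("MQ error, ReasonCode " + ex.ReasonCode);
        }
        finally
        {
            if (qmgr != null) qmgr.Disconnect();
        }
    }
}
```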

Wednesday, November 7, 2012

Marvelution APIv2 plugin fails with java.lang.NullPointerException

One of the Jenkins plugins I installed was for the Marvelution JIRA Hudson Integration. All the other plugins loaded with no issues except this one; specifically, it failed to load hudson-apiv2-plugin-5.0.4.

31/10/2012 3:06:03 PM org.apache.wink.server.internal.servlet.RestServlet init
SEVERE: null
java.lang.NullPointerException
    at com.marvelution.hudson.plugins.apiv2.wink.HudsonWinkApplication.getClassesFromPackage(HudsonWinkApplication.java:102)
    at com.marvelution.hudson.plugins.apiv2.wink.HudsonWinkApplication.getClasses(HudsonWinkApplication.java:78)
    at org.apache.wink.server.internal.application.ApplicationProcessor.process(ApplicationProcessor.java:84)
    at org.apache.wink.server.internal.DeploymentConfiguration.addApplication(DeploymentConfiguration.java:339)
...

Searching on the error pointed me to a ticket on Marvelution's JIRA, but it had been closed without a resolution being posted.

Looking through the log file I found this entry.

INFO: Loading classes from Classpath Package: file:/C:/Program%20Files%20(x86)/Jenkins/plugins/hudson-apiv2-plugin-5.0.4/WEB-INF/classes/com/marvelution/hudson/plugins/apiv2/resources

When I did the initial install of Jenkins, I clicked through the default options, which put Jenkins in C:\Program Files (x86)\Jenkins. This got me thinking that the RestServlet didn't like the embedded spaces. I moved my Jenkins installation to C:\Jenkins and the problem was resolved.

INFO: Loading classes from Classpath Package: file:/C:/Jenkins/plugins/hudson-apiv2-plugin-5.0.4/WEB-INF/classes/com/marvelution/hudson/plugins/apiv2/resources
31/10/2012 4:33:12 PM org.apache.wink.common.internal.registry.metadata.ProviderMetadataCollector isProvider

In one of the posts on Marvelution, the poster mentions that the plugin works on production but not on their new test installation. I suspect they clicked through the install process on their new 64-bit Windows server the same way I did.

Finally, a quick note on moving Jenkins.
  1. Stop Jenkins
  2. Move Jenkins directory to new location
  3. If you're using a Windows service, search the registry for Jenkins.exe and amend the paths where necessary.
  4. Restart Jenkins
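On a Windows service install, those steps might translate to something like this (the service name and paths are assumptions; check yours with sc query):

```bat
net stop Jenkins
move "C:\Program Files (x86)\Jenkins" "C:\Jenkins"
rem list registry values that still reference jenkins.exe
reg query HKLM /f jenkins.exe /s
net start Jenkins
```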

Wednesday, October 31, 2012

Putting TeamCity behind mod_proxy

As part of a new project I wanted to evaluate Jenkins and TeamCity on the same server. I decided that changing the root directory and placing them behind Apache using mod_proxy would be the way to go. The goal was to have two URLs: http://myserver/tc and http://myserver/jenkins.

Setting up Jenkins was a simple affair, but TeamCity was less straightforward. My search for putting TeamCity behind mod_proxy turned up this post: http://devnet.jetbrains.net/thread/275501. The solution posted at the end of the thread is the way to go, but it didn't work on my Windows setup.
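For reference, the Apache side is just a pair of ProxyPass entries per tool. The ports below assume the default Jenkins (8080) and TeamCity (8111) listeners, and that mod_proxy and mod_proxy_http are loaded:

```apache
# Illustrative config: adjust host names and ports to your installs
ProxyPass        /jenkins http://localhost:8080/jenkins
ProxyPassReverse /jenkins http://localhost:8080/jenkins
ProxyPass        /tc      http://localhost:8111/tc
ProxyPassReverse /tc      http://localhost:8111/tc
```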

When you do the initial install, your directory structure will look similar to this:


Default TeamCity Directory Structure on Windows
Following the instructions in the post, step 1 moves ROOT to tc (or whatever name you selected). Taken literally, this would give you .\webapps\tc\ROOT\. In fact, you should rename ROOT to tc and end up with the structure below.


Modified TeamCity Directory Structure
My mistake was to take the move literally: I created the tc directory first and then called MOVE, which moved ROOT inside tc. The commands mv and MOVE only rename the source object when the destination doesn't exist; if the destination is an existing directory, the source is moved into it.
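You can see both behaviours in a scratch directory with a POSIX shell (the Windows MOVE command treats directories the same way):

```shell
# Reproduce the two behaviours of mv in a scratch directory
cd "$(mktemp -d)"

# Pitfall: the destination already exists, so ROOT is moved *inside* tc
mkdir ROOT tc
mv ROOT tc
ls tc                # prints ROOT - i.e. webapps\tc\ROOT, not what we want
rm -rf tc

# Correct: the destination does not exist, so ROOT is simply renamed
mkdir ROOT
mv ROOT tc
ls                   # prints tc
```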

Hopefully this will save you some time.

Thursday, September 8, 2011

Auditing SQL Stored Procedures

Some of our more active OLTP databases contain hundreds of stored procedures. Over time they get replaced or become obsolete. With multiple apps and reports accessing the database it can be difficult to determine if a procedure is still in use.

To help identify the obsolete procedures I started by creating a simple table to hold my usage statistics.
I then created a little SQL snippet, identified the procs that I suspected were no longer in use, and added the snippet to each of them. Putting a date in the comment is the easiest way to keep track of how long it has been active.
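A sketch of what the table and snippet might look like; the table and column names here are guesses inferred from the query output, and OBJECT_NAME(@@PROCID) saves hard-coding each proc's name:

```sql
-- Hypothetical usage-stats table; names inferred from the query results
CREATE TABLE dbo.ProcUsageStats (
    FunctionName sysname PRIMARY KEY,
    LastRun      datetime NOT NULL,
    NumberOfRuns int      NOT NULL
);

-- Audit snippet pasted into each suspect proc.
-- Added 2011-05-24: remove once stats have been collected
UPDATE dbo.ProcUsageStats
   SET LastRun = GETDATE(), NumberOfRuns = NumberOfRuns + 1
 WHERE FunctionName = OBJECT_NAME(@@PROCID);
IF @@ROWCOUNT = 0
    INSERT dbo.ProcUsageStats (FunctionName, LastRun, NumberOfRuns)
    VALUES (OBJECT_NAME(@@PROCID), GETDATE(), 1);

-- Later, to see what's active:
-- SELECT * FROM dbo.ProcUsageStats ORDER BY LastRun DESC;
```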


At a later date I would run the following query to see what's active. I can then comment out the snippet in the more active procs, such as spOMInsertFill_1_0. You might want to keep a note of the high-usage procs so you can profile them for performance at a later date.
FunctionName                  LastRun                  NumberOfRuns
spOMUpdateOrder_1_1           2011-05-24 17:03:56.060            60
spOMBreakLinksOnGTCOrder_1_0  2011-05-24 16:50:02.270            12
spOMGetCommonTraders          2011-05-24 16:50:01.317             2
spOMGetAllAccountTypes        2011-05-24 16:43:55.843             1
spOMGetStatus                 2011-05-24 16:14:46.773          1973
spOMInsertFill_1_0            2011-05-24 05:25:00.580         13687
spOMRptSG                     2011-05-23 17:10:02.260             3
spOMDeleteOrderFill           2011-05-23 15:52:10.513          2618

The above list shows all the procs that have been called, but what about the ones that haven't? Running the following query will list all the procs that contain the code snippet but don't appear in the table.

Name
dbo.spOMAdjustCamelliaAttention
dbo.spOMBreakLinksOnGTCOrder_1_0
dbo.spOMConvertTTToOMS_1_0
dbo.spOMDeleteOrderFill
dbo.spOMGetAlertColours
dbo.spOMGetAllAccountTypes
dbo.spOMGetArchiveOrderFillGroup
dbo.spOMGetArchiveSingleOrder
dbo.spOMGetArchiveStrategy
dbo.spOMGetArchiveStrategyOrders
dbo.spOMGetArchiveStrategyOrders_1_0
dbo.spOMGetAvailableDesks
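A query along these lines produces that list. It assumes a stats table named dbo.ProcUsageStats (the name is illustrative) and that the snippet can be spotted in each proc's definition:

```sql
-- Procs that contain the audit snippet but have never fired.
-- dbo.ProcUsageStats is an assumed name for the stats table.
SELECT SCHEMA_NAME(p.schema_id) + '.' + p.name AS Name
FROM sys.procedures AS p
WHERE OBJECT_DEFINITION(p.object_id) LIKE '%ProcUsageStats%'
  AND p.name NOT IN (SELECT FunctionName FROM dbo.ProcUsageStats)
ORDER BY Name;
```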

This process is manual and intrusive, but I only clean up old procs once or twice a year. It also helps that I have access to the procs to inject my code. One possible addition that may add value is to include APP_NAME() in the table so you know which application is calling the proc.
For those in larger (and more restrictive) corporate environments, SQL Profiler may be the only option. It would be an interesting challenge to identify obsolete procs passively.

Tuesday, May 24, 2011

MQ on a shoestring - Basic setup

I needed to write an app that connected to an MQSeries queue manager in Chicago and read FIXML messages. I found plenty of posts about connecting and reading messages from MQ but what I needed was the basic setup required to actually connect to a queue manager. Everything I found assumed that all the prerequisite software was installed and I was ready to code.

The first task was to identify and download the software. Luckily, I had done some prototyping with MQ a couple of years ago. I had version 6.0, so that's what I went with. Navigating the IBM website can be painful at the best of times, so I've included the names of the files you need to make finding them easier.

WebSphere MQ Client
The zip file is named mqc6_win.zip (132MB). Installation was straightforward. Just click through to the end.

Message Service Client for C/C++ and .NET
The zip file is named ia9h.zip (65MB). Again installation was straightforward.

In the IDE I added the following components:


If the references are not available from Add References, the files are located here (based on a default installation):
  • C:\Program Files\IBM\WebSphere MQ\bin\amqmdnet.dll
  • C:\Program Files\IBM\WebSphere MQ\bin\amqmdxcs.dll
  • C:\Program Files\IBM\WebSphere MQ\bin\IBM.XMS.dll

This is the point where all those MQ examples become possible, or at least within reach. Next we'll determine what information is needed to set up a connection.

Keeping track of SQL Connections

Just about all the applications I write have a database somewhere in the mix. Not only that, a number of them are packaged as Windows services and run from a dedicated app server, and there may be multiple instances of the same service running on multiple servers. With dozens of connections, which connection belongs to which instance?

When generating the SQL connection I populate the Application Name field of the connection string. The format I use is App [Thread][App Version]. Keeping track of the thread can be useful when checking a log file.
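A sketch of how that might be wired up with SqlConnectionStringBuilder; the service name OrderSvc and the version lookup are illustrative:

```csharp
// Sketch: stamps app name, thread id and assembly version into the
// connection string so each running instance is identifiable.
using System;
using System.Data.SqlClient;
using System.Reflection;
using System.Threading;

static class ConnectionFactory
{
    public static string Build(string server, string database)
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = server,
            InitialCatalog = database,
            IntegratedSecurity = true,
            // Format: App [Thread][App Version]
            ApplicationName = string.Format("OrderSvc [{0}][{1}]",
                Thread.CurrentThread.ManagedThreadId,
                Assembly.GetExecutingAssembly().GetName().Version)
        };
        return builder.ConnectionString;
    }
}
```

The value then shows up in the ProgramName column of sp_who2 and in Activity Monitor, which is where the right-click Kill Process lives.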



When releasing updated software I find this feature invaluable. It provides a simple method of determining who is using what version of your software. If you are feeling malicious, a right click and a Kill Process can force the user to upgrade.

Monday, March 28, 2011

SQL Report Server - The report server has encountered a configuration error

It's been a week of niggling problems since our SQL Server 2005 machine was migrated to a new domain. In this instance we have a report server hosted on a separate machine which hasn't been migrated yet. A number of reports on the machine have subscriptions set up. On the Monday after the migration, they all started to fail.




Firstly, I had a look in the log file. For me the log files were located here:
C:\Program Files\Microsoft SQL Server\MSSQL.1\Reporting Services\LogFiles
Digging through the overly verbose log information I found the following error. The job was failing due to an authorisation issue. I've highlighted the relevant part of the error.

ReportingServicesService!library!4!03/21/2011-20:56:09:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: The report server has encountered a configuration error. See the report server log files for more information.,
AuthzInitializeContextFromSid: Win32 error: 110;
Info: Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: The report server has encountered a configuration error. See the report server log files for more information.
ReportingServicesService!library!4!03/21/2011-20:56:09:: i INFO: Initializing EnableExecutionLogging to 'True' as specified in Server system properties.
ReportingServicesService!emailextension!4!03/21/2011-20:56:09:: Error sending email. Microsoft.ReportingServices.Diagnostics.Utilities.RSException: The report server has encountered a configuration error. See the report server log files for more information. ---> Microsoft.ReportingServices.Diagnostics.Utilities.ServerConfigurationErrorException: The report server has encountered a configuration error. See the report server log files for more information.
Running the reports manually worked; it was just the subscriptions that had the problem. In my case the report server back end was hosted on the database server that had been migrated, so I focused my attention on that server.

When a new subscription is created, a SQL Agent job is created with a GUID for a name.



Not exactly user friendly, but it was enough for me to narrow the root of the problem down to SQL Agent.


The SQL Agent service account was in the local admins group, so I assumed that would give it the carte blanche needed to run unrestricted. Using Computer Management and looking at Local Users and Groups, I found a cluster of SQL Server related groups.




One group in particular was of interest to me: SQLServer2005SQLAgentUser$(machinename)$MSSQL. It contained the old SQL Agent service account but not the new one. Adding the new service account fixed the problem.
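The fix can also be scripted; the machine, domain and account names here are placeholders:

```bat
rem add the new SQL Agent service account to the local group
net localgroup "SQLServer2005SQLAgentUser$MYSERVER$MSSQL" NEWDOMAIN\svcSQLAgent /add
```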




After some additional research, I discovered that if I had modified the service accounts using SQL Server Configuration Manager rather than through the Windows Service Control Manager, the new service account would have been added to this group automatically.


Yet another lesson learned.