Friday, December 6, 2013

RowCount is now a No-Account - Alternative for Setting Dynamic Top Rows

Had an issue come up in a couple of old stored procedures we needed to update: replacing the deprecated 'SET ROWCOUNT' with 'TOP' for passing in the number of rows we wanted selected.

ROWCOUNT was great for this for one reason: it provided an easy way to ask for either some rows or ALL rows, without knowing beforehand how many rows were in the table(s).
All you needed to do was pass in 0 to get all rows. Any other value got up to that many rows.
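For anyone who never used it, the old pattern looked roughly like this (dbo.SomeTable is a made-up stand-in):

```sql
-- The deprecated pattern: SET ROWCOUNT caps every subsequent
-- statement, and 0 turns the cap back off.
declare @p int = 4  -- pretend this is the incoming parameter

set rowcount @p     -- 0 here means "no limit", i.e. all rows
select * from dbo.SomeTable
set rowcount 0      -- always reset, or later statements stay capped
```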

Well, I know ROWCOUNT has been deprecated for this use, and I don't want to wait until we move the databases to a newer server where it won't work, so I figured I'd just convert all such instances to use 'TOP' instead.
In my ignorance I thought I'd be able to set TOP to 100% whenever the parameter passed in 0. NO GO: TOP with parentheses requires an integer expression, and TOP followed by the @Setting variable without parentheses won't parse.
Seemed my only solution was one of the following:
1: Use the variable setting and convert the query to dynamic sql:


declare @p int = 4 -- pretend this is the incoming parameter
declare @v nvarchar(20) -- wide enough for ' top (' plus up to 10 digits
declare @sql nvarchar(max)

if @p > 0
    set @v = N' top (' + cast(@p as nvarchar(10)) + N')' -- to limit rows
else
    set @v = N'' -- to draw all rows

set @sql = N'select' + @v + N' * from dbo.SomeTable' -- 'table' is a reserved word, so a stand-in name here
exec sp_executesql @sql

The variable holds either the string ' Top [a number] ' or a blank string to allow all rows to be selected, depending on what the parameter held.

That wasn't a problem to me for a simple query, but we have complex queries with loads of joins, specific select fields, subqueries, function calls, where conditions involving both local variables and table columns, etc. To have to place all of that in an nvarchar @sql string stinks.

2: Second thought was just having 2 versions of the query: one with TOP included to take the integer value of the parameter when it was above 0, and a second version without 'TOP'.
Again, easy enough if the query is simple, but unworkable when it's complex, with a ridiculous amount of repeated code and 2 sets of everything to maintain whenever changes are made.

3: Perhaps the best solution in the end for simplicity, but it felt dirty to me.
If the parameter is 0, reset it arbitrarily to the maximum value its datatype can hold, eg: 9223372036854775807 for a bigint, 2147483647 for an int, etc.
Then regardless of parameter value, you'll use 'Top (parameter value)' in the query.
You'll know whether your table could ever have more rows than this arbitrary ceiling, and not many tables needing this solution would, but it just smelled to me. I don't like having to pull numbers out of a hat like that and employ them in a jillion procs. Just doesn't feel clean to me.
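A minimal sketch of this option, with dbo.SomeTable as a made-up stand-in for the real query:

```sql
-- Option 3: treat 0 as "no limit" by swapping in the datatype's max.
declare @p bigint = 0  -- pretend this is the incoming parameter

if @p = 0
    set @p = 9223372036854775807  -- max bigint: effectively "all rows"

select top (@p) * from dbo.SomeTable
```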

4: This method uses a CTE:

declare @rc int -- rowcount value
set @rc = 3 -- if set to 0 all rows get returned, otherwise up to @rc rows

;with cte as (select * from dbo.SomeTable where whatever = 'whatever') -- plain 'cte': the # prefix is reserved for temp tables
select * from cte where @rc = 0 -- means all rows
union all -- must be UNION ALL; a plain union would silently de-dupe the rows
select top (@rc) * from cte where @rc > 0 -- means limited rows

This way, the real query is only typed once, as the SELECT feeding the CTE.
Using the union gets around the CTE's limitation of feeding only 1 statement (use UNION ALL, though; a plain union will silently de-dupe your rows).
Only one side of the union can return rows, so it's like being able to set 'Top' with an if/else. No arbitrary row limit needed.
It resulted, in my case anyway, in 2 index scans on the table in my proc, accounting for 78% of the work, with the union merge taking 21%. I don't know yet what performance implication this has on a humongous query.
Worst comes to worst, I resort to choice #3.
Anybody out there have a better way to do this?
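For what it's worth, TOP's parentheses accept any integer-valued expression, including a CASE, so options 2 and 3 can be collapsed into a single static query. A sketch, untested on my end, with dbo.SomeTable as a stand-in:

```sql
declare @rc bigint = 0 -- 0 means all rows

select top (case when @rc = 0 then 9223372036854775807 else @rc end) *
from dbo.SomeTable
```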

Thursday, November 14, 2013

Always Make Sure it's Plugged into the Wall! SQL Remote Connection into a Named Instance

Getting started using SAP BODS (BusinessObjects Data Services). My testing playground is my 2 local SQL Servers: the main SQL Server 2008 R2, and its stepbrother, SQLEXPRESS.
The jobs run on SAP don't run locally from my repository, but from a central network server.
Didn't realize that at first; had to learn the hard way.
Couldn't figure out for the life of me why executing jobs worked fine connecting with my main SQL Server 2008 R2, which doesn't depend on a named instance, while the Express server could never be found.
Even after realizing, upon combing through the job's error output, that these jobs were not being locally executed, it didn't dawn on me right away that the Named Instance nature of the express server was the real issue.
I'm kinda dense that way sometimes.
Anyhoo, after combing back and forth like an ocean tide over all the configuration and security settings of the express server and my PC, looking for the magic bullet or best suspect to arrest, I finally received the epiphany.
The remote server executing the jobs only knew my local machine's name, which by default pointed to the main SQL server without needing a named instance. Adding the express's name to that was the equivalent of speaking in tongues to the job server, with it being an atheist.
No named instance, fine, let's get to it and work.
Named instance, "like, DUH??? Which way do I go George, which way do I go...?"
Supposedly there's always got to be an answer out there, but when Googling tech issues, you're lucky to find even one real solution, or a mere clue to one.
Turned out, all I needed to do was turn the SQL Server Browser service on, along with enabling SQLEXPRESS's TCP/IP protocol (in SQL Server Configuration Manager, under the server's protocols), which I'd already done further back in the process of futility.
Bang, boom! Named instance found! Job complete.
One last little thing: in Express's TCP/IP protocol settings, I had to make sure the port listed under IPAll for 'TCP Dynamic Ports' was set as THE port for it, because the normal default SQL port, 1433, was already taken by 2008 R2.
I just cut the port number from 'TCP Dynamic Ports' and pasted it into 'TCP Port'.
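Once the Browser service is running, a client can reach the named instance either by instance name (the Browser resolves the port) or by hitting the fixed port directly, which skips the Browser entirely. Illustrative connection strings, with MYPC and 50123 as made-up examples:

```
Server=MYPC\SQLEXPRESS;Database=Test;Integrated Security=true;
Server=MYPC,50123;Database=Test;Integrated Security=true;
```

The first form needs SQL Server Browser listening; the second works even with the Browser stopped, as long as the port stays static.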

Tuesday, November 12, 2013

Proxy Fight and How to Be Credential

We had a devil of a time trying to figure out how to get a SQL Server Agent job to run 3 DTSX packages that needed to collect their source data from 2 outside database servers.
The Server Agent by default executes all its jobs with its own service account as the user. Well, that didn't sit well with the other 2 databases, who didn't give a rat's butt about that user; they weren't letting him/her/it in.
Finally figured out we could swap that default user with one that was a recognized user on the other 2 servers.
We had to set that user up as a Credential. In Object Explorer, go to Security, then right-click to set up a new Credential. I just put the acceptable user id in as both the Credential name and Identity, along with that user's network password. I left 'Encryption provider' unchecked.
Then I set up the same as a proxy under SQL Server Agent.
Then I changed the default 'Run As' on each job step to the new proxy, which now came up as a drop menu choice.
Badda bing, bang, boom, the job now worked.
Once we were able to get our server's service account set up as a user on the other 2 servers, we switched the 'run-as' back.
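The same GUI steps can be scripted out in T-SQL. The names, account, and password below are placeholders; swap in the account the remote servers actually recognize:

```sql
-- 1: the credential (Object Explorer > Security > Credentials)
use master;
create credential DtsxRunner
    with identity = 'DOMAIN\DtsxUser', secret = 'NetworkPwHere';

-- 2: the proxy under SQL Server Agent, tied to that credential
use msdb;
exec dbo.sp_add_proxy
    @proxy_name = 'DtsxRunnerProxy',
    @credential_name = 'DtsxRunner',
    @enabled = 1;

-- 3: let the proxy run SSIS (DTSX) package steps; 11 = SSIS subsystem
exec dbo.sp_grant_proxy_to_subsystem
    @proxy_name = 'DtsxRunnerProxy',
    @subsystem_id = 11;
```

After that, the proxy shows up in each job step's 'Run as' drop menu, just like in the GUI route.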

Monday, September 23, 2013

Confounded Connection Settings

Didn't realize how drastically setting up SQL connections changed from .NET v2 to .NET v3.5 and above.
Any old connections you have kept under 'appSettings' in web.config need to be moved to 'connectionStrings' and reformatted.
The way you then fetch those connections in your code-behind must also be changed.
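The move looks like this in web.config (the names and connection values here are hypothetical):

```xml
<!-- Before: the old appSettings style -->
<appSettings>
  <add key="MyDb" value="Server=MYPC;Database=Test;Integrated Security=true;" />
</appSettings>

<!-- After: moved into connectionStrings with its own attributes -->
<connectionStrings>
  <add name="MyDb"
       connectionString="Server=MYPC;Database=Test;Integrated Security=true;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

In the code-behind, the lookup correspondingly moves from ConfigurationManager.AppSettings["MyDb"] to ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString.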

This page has a great explanation of this:

See Session Variable Values in Visual Studio Breakpoints

Some developers aren't aware, or forget, how to see the values of Session variables at breakpoints while debugging. Sometimes Visual Studio can test your own breaking point before it lets you use its breakpoints.
Mouse over the variable and the icon for a drop menu will appear.
Under 'Session', go to 'Keys'.
Under that, go to '_coll'.
Then open either '_entriesArray', if you know the variable's subscript, or '_entriesTable' to scroll through the variable names.
Open the subscript/name to find the key/value settings and click on the value.

If anyone knows how to bypass all this hoop jumping to get this info, I'd be much obliged.

Confounded Configurations

If you maintain different build configurations in Visual Studio .Net projects/websites, make sure you know which one is set as the active configuration whenever you go to do a debug run or publishing.
Always check Configuration Manager under the 'Build' menu before proceeding and change the Active solution configuration as needed.
Mine usually differ only in the SQL connections, each config having its own lil' ole mini web.config file (eg: Web.Release.config) for swapping the standard web.config connections with those needed by the specific configuration.
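For example, a Web.Release.config can swap just the connection while leaving the rest of web.config alone (names and server here are made up):

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Replace the attributes of the entry whose name matches "MyDb" -->
    <add name="MyDb"
         connectionString="Server=PRODSERVER;Database=Test;Integrated Security=true;"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

The transform only fires when publishing under the Release configuration, which is exactly why the active configuration matters so much.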

Great Page Explaining Stored Procedure Speed Issues

Had an issue with a SQL Server stored procedure running far slower than the same code as a standalone query.
It was running a totally different execution plan due to the parameters.
This page explains how all that happens. The key to my issue was in the section 'The Default Settings'.

Poring through the execution plans for procs can be an eye-opener. I've found myself vastly improving performance with seemingly minor changes, such as reordering the join elements, adding indexes if missing on join columns, and in one case, using 'Forceseek' to stop the server from using a particular index and instead use a different one.

Just be aware that a new index that makes your current proc zoom, could adversely affect another proc using that table.
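When the slowdown is a parameter-sniffed plan, two common counters look like the sketch below. The proc, table, and parameter names are made up, and the page referenced above covers the trade-offs far better:

```sql
create procedure dbo.SomeProc @CustId int
as
begin
    select * from dbo.Orders
    where CustomerId = @CustId
    option (recompile); -- build a fresh plan per call, using the actual value

    -- Alternative: keep one cached plan, but base it on average density
    -- statistics instead of whatever value happened to arrive first:
    -- option (optimize for unknown);
end
```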

Ah Ha Moment of the Day - Where's This Master Page Coming From?

I'm obviously not a certifiable Visual Studio guru yet. Ran into a situation many other .NET developers may have hit without even realizing it.
I had to revise a couple of WebForm pages, moving something from one to the other. Seemingly a simple enough task.
Everything was fine until I did a local publish before submitting for deployment to Stage.
Unlike my local debug build, which produced no errors, this publish generated JavaScript errors on one page. Since I hadn't changed one single character of any JS or jQuery code, I was at a loss.
The offending code was jQuery, and I feared a conflict maybe between its dollar symbol and the one used in the Telerik library, but ruled that out since no other page, all of which include both libraries, was generating this error.
At that point I started combing through my source code and the browser's rendered source, and pored through the master file referenced in the opening content page tag. I found some archaic 'script src' tags in the offending page that I thought perhaps should be in the master file instead. For all I knew, maybe one of my changes to the form structure was causing something to be read or executed in a different order than before.
So I start shifting tags from content page to master, rebuild, and WTF????????
Nothing I added to the master page was in the new content page source code!!!
I finally discovered, upon going through the various master page files, that despite the master page referenced in the content page, a totally different master page was being used. I confirmed this after making tiny revisions in both and seeing the second master was definitely the one included in the build.
In the end, I fixed the jQuery error by including a library script that I saw was in the other content page, which used the same jQuery method and wasn't throwing the error. Whatever the reason my local debug build didn't throw the error, I wasn't about to fight with the Stage build to find out. I gave it what it apparently wanted, returned the master pages to their original state, error gone.
BUT, I still wanted to know how the hell the hard-coded master page was being swapped out, and by what setting or process? I'd already gone back and forth through all the Visual Studio menus I figured might be doing this via some project-level setting.
Finally I discovered that Global.asax.cs was overriding the hard-coded master with the second one. Learn something new every day; I'd never considered that before. Makes perfect sense if you don't want to risk globally changing the opening tag of every page on your site.

Friday, March 8, 2013

VB.NET Fixing

The dreaded error is probably the plague of many converting old VB6 applications to VB.NET, especially if, like me, you're new to working with either. Finally, finally, I stumbled upon a snippet on a forum page, and the guy's problem wasn't even mine, and wasn't even fixed, but one lil' ole line of code he had fixed my issue, at least a few instances of it. Add controlname.CreateControl() in your form's designer page, before the rest of its properties are set. In my forms it was Me.controlname.CreateControl(). For whatever reason, Visual Studio 2008 saw fit not to include this in its code conversion. Almost got sucked into other forum opinions that you need special ActiveX licenses to get such legacy controls to work in .NET.
UPDATE: My suggestion above isn't for every such situation. On some forms I found it was not a factor at all; rather, it was the way .NET forms are loaded by the form calling on them.
I had to move the first code accessing the values of any ActiveX elements to after the 'Show' and/or 'Activate' calls for the form in which those elements reside.
Apparently, VB6 wasn't so concerned about certain things that VB.NET obsesses over.

Saturday, January 19, 2013

Robert Saylor dies after being handcuffed at Frederick movie theater


There had better be effective follow-up by the news media and law enforcement on this. These employees' and deputies' actions had better be scrutinized to the nth degree. Where was Saylor's aide companion when they confronted and agitated him? Couldn't they see he likely had cognitive issues? Sure sounds like negligent homicide to me.