Friday, December 6, 2013

RowCount is now a No-Account - Alternative for Setting Dynamic Top Rows

Had an issue come up in a couple of old stored procedures we needed to update: replacing the deprecated 'SET ROWCOUNT' with 'TOP' for passing in the number of rows we wanted selected.

ROWCOUNT was great for this for one reason: it provided an easy way to ask for either some rows or ALL rows, without knowing beforehand how many rows were in the table(s).
All you needed to do was pass in 0 to get all rows. Any other value got you up to that many rows.
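For anyone who never saw the old pattern, a minimal sketch (dbo.MyTable is a stand-in name):

```sql
declare @p int = 4 --pretend this is the incoming parameter

set rowcount @p           --0 here means 'no limit'
select * from dbo.MyTable --returns at most @p rows (all rows if @p = 0)
set rowcount 0            --always reset, or the limit bleeds into later statements
```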

Well, I know ROWCOUNT is deprecated and I don't want to wait until we move the databases to a newer server where it won't work, so I figured I'd just convert all such instances to use 'TOP' instead.
In my ignorance I thought I'd be able to set TOP to 100 PERCENT whenever the parameter passed in 0. NO GO, because TOP followed by parentheses requires an integer expression, and TOP followed by the @Setting variable without parentheses won't work.
Seemed my only solution was one of the following:
1: Use the variable setting and convert the query to dynamic SQL:

eg:

declare @p int = 4 --pretend this is the incoming parameter
declare @v nvarchar(20)
declare @sql nvarchar(max)

if @p > 0
    set @v = ' top ' + cast(@p as nvarchar(10)) --to limit rows
else
    set @v = '' --to draw all rows

set @sql = N'select' + @v + ' * from dbo.MyTable' --MyTable stands in for the real table
exec sp_executesql @sql

The variable holds either the string ' top [a number]' or a blank string to allow all rows to be selected, depending on what the parameter held.

That wasn't a problem for a simple query, but we have complex queries with loads of joins, specific select fields, subqueries, function calls, where conditions involving both local variables and table columns, etc. Having to cram all of that into an nvarchar @sql string stinks.

2: Second thought was just having 2 versions of the query: one with TOP included to take the integer value of the parameter when it was above 0, and a second version without 'TOP'.
Again, easy enough if the query is simple, but unworkable when it's complex, with a ridiculous amount of repeated code and 2 sets of everything to maintain whenever changes are made.

3: Perhaps the best solution in the end for simplicity, but it felt dirty to me.
If the parameter is 0, reset it arbitrarily to the maximum value its datatype will hold, eg: 9223372036854775807 for a bigint, 2147483647 for an int, etc.
Then regardless of the parameter value, you use 'TOP (parameter value)' in the query.
You'll know if your table might ever have more rows than this arbitrary ceiling, and not too many needing this solution would, but it just smelled to me. I don't like having to pull numbers out of a hat like that and employ them in a jillion procs. Just doesn't feel clean to me.
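For what it's worth, a minimal sketch of this approach (dbo.MyTable is a stand-in name):

```sql
declare @p int = 0 --pretend this is the incoming parameter; 0 means all rows

if @p = 0
    set @p = 2147483647 --swap 0 for the biggest value an int can hold

select top (@p) * from dbo.MyTable
```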

4: This method uses a CTE:

declare @rc int --rowcount value
set @rc = 3 --if set to 0 all rows get returned, otherwise up to @rc rows

;with cte as (select * from dbo.MyTable where whatever = 'whatever')
select * from cte where @rc = 0 --means all rows
union all --union all, so the union doesn't also de-dupe your rows
select top (@rc) * from cte where @rc > 0 --means limited rows

This way, the real query is only typed once, as the select for the CTE.
The union keeps both selects inside the single statement the CTE is scoped to.
Only one side of the union will return rows, so it's like being able to set 'TOP' with an if/else, and there's no arbitrary row ceiling to invent.
It resulted, in my case anyway, in 2 index scans on the table in my proc, accounting for 78% of the work, with the union merge taking 21%. I don't know yet what performance implications this has on a humongous query.
Worst comes to worst, I resort to choice #3.
Anybody out there have a better way to do this?

Thursday, November 14, 2013

Always Make Sure it's Plugged into the Wall! SQL Remote Connection into a Named Instance

Getting started using SAP BODS (BusinessObjects Data Services). My testing playground is my 2 local SQL Servers: the main SQL Server 2008 R2, and its stepbrother, SQLEXPRESS.
The jobs run on SAP don't run locally from my repository, but from a central network server.
Didn't realize that at first; had to learn the hard way.
Couldn't figure out for the life of me why executing jobs worked fine connecting with my main SQL Server 2008 R2, which doesn't depend on a named instance, while the Express server could never be found.
Even after realizing, upon combing through the job's error output, that these jobs were not being locally executed, it didn't dawn on me right away that the Named Instance nature of the express server was the real issue.
I'm kinda dense that way sometimes.
Anyhoo, after combing back and forth like an ocean tide over all the configuration and security settings of the express server and my PC, looking for the magic bullet or best suspect to arrest, I finally received the epiphany.
The remote server executing the jobs only knew my local machine's name, which by default pointed to the main SQL server without needing a named instance. Adding the express's name to that was the equivalent of speaking in tongues to the job server, with it being an atheist.
No named instance, fine, let's get to it and work.
Named instance, "like, DUH??? Which way do I go George, which way do I go...?"
Supposedly there's always supposed to be a magic bullet, but when Googling tech issues, you're lucky to find even one real solution, or even a mere clue to one.
Turned out, all I needed to do was turn the SQL Server Browser service on, along with enabling TCP/IP for SQLEXPRESS (in SQL Server Configuration Manager, under the server's protocols), which I'd already done earlier in the process of futility.
Bang, boom! Named instance found! Job complete.
One last little thing: in the express server's TCP/IP protocol settings, I had to make sure the port listed under IPAll for dynamic ports was set as THEE port for it, because the normal default SQL port, 1433, was already taken by 2008 R2.
I just cut the port number from 'TCP Dynamic Ports' and pasted it into 'TCP Port'.
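For the record, the two ways a client can then reach the instance (MYPC, TestDb, and port 50123 are made-up examples):

```
-- via the Browser service, which resolves the instance name to its port:
Server=MYPC\SQLEXPRESS;Database=TestDb;Trusted_Connection=True;

-- or bypassing the Browser by giving the port itself:
Server=MYPC,50123;Database=TestDb;Trusted_Connection=True;
```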

Tuesday, November 12, 2013

Proxy Fight and How to Be Credential

We had a devil of a time trying to figure out how to get an SQL Server Agent Job to run 3 DTSX packages that needed to collect their source data from 2 outside database servers.
The Server Agent by default executes all its jobs with its own service account as the user. Well, that didn't sit well with the other 2 databases, which didn't give a rat's butt about that user - they weren't letting him/her/it in.
Finally figured out we could swap that default user with one that was a recognized user on the other 2 servers.
We had to set that user up as a Credential. In Object Explorer, go to Security, then right-click Credentials to set up a new Credential. I just put the acceptable user id in as both the Credential name and the Identity, along with that user's network password. I left 'Use Encryption Provider' unchecked.
Then I set up the same as a proxy under SQL Server Agent.
Then I changed the default 'Run As' on each job step to the new proxy, which now came up as a drop menu choice.
Badda bing, bang, boom, the job now worked.
Once we were able to get our server's service account set up as a user on the other 2 servers, we switched the 'run-as' back.
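The same setup can be scripted instead of clicked through; a sketch, assuming a made-up account DOMAIN\etl_user (subsystem 11 is the SSIS package subsystem in msdb):

```sql
--the Credential holding the account the other servers recognize
create credential ETLCredential
    with identity = N'DOMAIN\etl_user', secret = N'network_password_here'

--wrap it in a Server Agent proxy and let it run SSIS package steps
exec msdb.dbo.sp_add_proxy
    @proxy_name = N'ETLProxy',
    @credential_name = N'ETLCredential'

exec msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'ETLProxy',
    @subsystem_id = 11 --11 = SSIS package execution
```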

Monday, September 23, 2013

Confounded Connection Settings

Didn't realize how drastically setting up SQL connections changed from .Net v2 to .Net v3.5 and above.
Any old connections you have that were kept under 'appSettings' in web.config need to be moved to 'connectionStrings' and reformatted.
The way you then set those connections in your code-behind also must be changed.

This page has a great explanation of this:
http://www.connectionstrings.com/store-connection-string-in-webconfig/
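The general shape of the move, as I understand it (MyDb and MYSERVER are made-up examples):

```
<!-- old style, under appSettings -->
<appSettings>
  <add key="MyDb" value="Server=MYSERVER;Database=MyDb;Integrated Security=True;" />
</appSettings>

<!-- new style, under connectionStrings, with the provider named -->
<connectionStrings>
  <add name="MyDb"
       connectionString="Server=MYSERVER;Database=MyDb;Integrated Security=True;"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

The code-behind then reads it with ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString instead of ConfigurationManager.AppSettings["MyDb"].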

See Session Variable Values in Visual Studio Breakpoints

Some developers aren't aware, or forget, how to see the values of Session variables at a breakpoint while debugging. Sometimes Visual Studio can test your breaking point before it lets you use breakpoints.
Mouse over the variable and the icon for a drop menu will appear.
Under 'Session', go to 'Keys'.
Under that, go to '_coll'.
Then open either '_entriesArray', if you know the variable's subscript, or '_entriesTable' to scroll through the variable names.
Open the subscript/name to find the key/value settings and click on the value.

If anyone knows how to bypass all this hoop jumping to get this info, I'd be much obliged.

Confounded Configurations

If you maintain different build configurations in Visual Studio .Net projects/websites, make sure you know which one is set as the active configuration whenever you go to do a debug run or publishing.
Always check Configuration Manager under the 'Build' menu before proceeding and change the Active solution configuration as needed.
Mine usually differ only in the SQL connections, each config having its own lil' ole mini web.config file (eg: Web.Release.config) for swapping the standard web.config connections with those needed by the specific configuration.

Great Page Explaining Stored Procedure Speed Issues


Had an issue with a SQL Server stored procedure running far slower than the same code run as a standalone query.
It was running a totally different execution plan due to the parameters.
This page explains how all that happens. The key to my issue was in the section 'The Default Settings'.


Poring through the execution plans for procs can be an eye-opener. I've found myself vastly improving performance with seemingly minor changes, such as reordering the join elements, adding indexes where missing on join columns, and in one case, using 'FORCESEEK' to stop the server from using a particular index and make it use a different one instead.

Just be aware that a new index that makes your current proc zoom, could adversely affect another proc using that table.