Showing posts with label slow. Show all posts

Sunday, February 26, 2012

Extremely Slow Trigger Problem

I have an after update trigger that is killing me and I can't figure out why.
When I run an update query that fires the trigger and look at the execution
plan, 85% of the cost is in the OPEN __0005 statement. I'm only updating 1
row, so this makes no sense to me. The I/O cost of opening the cursor is in
the 1100 range. Anyone got any ideas? I'm fresh out.
CREATE TRIGGER trgUpdate ON ITEMTASKS
FOR UPDATE
AS
DECLARE
    @ITE_ID INT,
    @TAS_ID INT,
    @STATUS TINYINT,
    @OLDSTATUS TINYINT
IF (@@ROWCOUNT = 0) RETURN
IF (UPDATE(ITS_STATUS)) BEGIN
    SET NOCOUNT ON
    DECLARE __0005 CURSOR LOCAL FAST_FORWARD FOR
        SELECT I.ITE_ID, I.ITS_STATUS, I.TAS_ID, D.ITS_STATUS
        FROM INSERTED I, DELETED D
        WHERE I.ITE_ID = D.ITE_ID AND I.TAS_ID = D.TAS_ID
    OPEN __0005
    FETCH NEXT FROM __0005
        INTO @ITE_ID, @STATUS, @TAS_ID, @OLDSTATUS
    WHILE (@@FETCH_STATUS = 0) BEGIN
        IF (@STATUS = 6) BEGIN
            UPDATE ITEMS
            SET ITE_STATUS = 6
            WHERE ITE_ID = @ITE_ID
        END
        ELSE IF (@STATUS = 2) AND ((SELECT TAS_ISLAST FROM TASKS WHERE TAS_ID = @TAS_ID) = 1) BEGIN
            UPDATE ITEMS
            SET ITE_STATUS = 2
            WHERE ITE_ID = @ITE_ID
        END
        ELSE BEGIN
            UPDATE ITEMS
            SET ITE_STATUS = 1
            WHERE ITE_ID = @ITE_ID
        END
        EXEC ADD2SUMMARY_ @ITE_ID, @TAS_ID, @STATUS, @OLDSTATUS
        FETCH NEXT FROM __0005
            INTO @ITE_ID, @STATUS, @TAS_ID, @OLDSTATUS
    END
    CLOSE __0005
    DEALLOCATE __0005
    SET NOCOUNT OFF
END

I strongly recommend you don't use cursors at all in triggers. Why
can't you re-write this and the contents of Add2Summary as set-based
SQL?
If you need more help then please come back with DDL, sample data
INSERT statements and show your required end result.
David Portas
SQL Server MVP
--|||Supposing that the cursor is necessary, I have two suggestions:
First: declare the cursor STATIC and READ_ONLY.
Second: use EXISTS in the following statement:
-- ELSE IF (@STATUS = 2) AND ((SELECT TAS_ISLAST FROM TASKS WHERE TAS_ID = @TAS_ID) = 1) BEGIN
ELSE IF (@STATUS = 2) AND EXISTS (SELECT * FROM TASKS WHERE TAS_ID = @TAS_ID AND TAS_ISLAST = 1) BEGIN
I do not know anything about the stored procedure being executed, but the
three UPDATE statements can be implemented without the use of the cursor.
AMB
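To make that last point concrete, here is a rough set-based version of the three UPDATEs (a sketch only - column and table names are taken from the trigger above, and the ADD2SUMMARY_ call would still need its own set-based rewrite):

```sql
UPDATE ITEMS
SET ITE_STATUS = CASE
        WHEN I.ITS_STATUS = 6 THEN 6
        WHEN I.ITS_STATUS = 2
             AND EXISTS (SELECT * FROM TASKS T
                         WHERE T.TAS_ID = I.TAS_ID AND T.TAS_ISLAST = 1) THEN 2
        ELSE 1
    END
FROM ITEMS
     JOIN INSERTED I ON I.ITE_ID = ITEMS.ITE_ID
     JOIN DELETED D ON D.ITE_ID = I.ITE_ID AND D.TAS_ID = I.TAS_ID
```

This handles any number of updated rows in one statement instead of one round trip per cursor fetch.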
"Arghknork" wrote:

> I have an after update trigger that is killing me and I can't figure out why.
> When I run an update query that fires the trigger and look at the execution
> plan, 85% of the cost is in the OPEN __0005 statement.
> [snip]
>|||Agreed, but I inherited the code and it's in production this way. It has
been working okay up until about a week ago, and I think there is an
underlying problem here I'm not seeing. Even with a cursor, it shouldn't
cost so much to open that cursor with 1 row in the inserted/deleted tables.
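One generic thing worth trying here (my suggestion, not something from the thread): if statistics on the tables the cursor's SELECT touches have gone stale, the plan compiled at OPEN time can be badly wrong even for one row, and forcing the trigger to recompile rules out a cached bad plan:

```sql
UPDATE STATISTICS ITEMTASKS
UPDATE STATISTICS ITEMS
UPDATE STATISTICS TASKS

-- marks all triggers and procedures that reference ITEMTASKS
-- for recompilation on their next execution
EXEC sp_recompile 'ITEMTASKS'
```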
"David Portas" wrote:

> I strongly recommend you don't use cursors at all in triggers. Why
> can't you re-write this and the contents of Add2Summary as set-based
> SQL?
> If you need more help then please come back with DDL, sample data
> INSERT statements and show your required end result.
> --
> David Portas
> SQL Server MVP
> --
>

Extremely Slow Table

Hi,

I have a table defined as
CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL,
    [Date] [datetime] NULL,
    [Time] [datetime] NULL,
    [TroubleshootId] [int] NOT NULL,
    [ReasonID] [int] NULL,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [maj_reason_id] [int] NULL,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ActionID] [int] NULL,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Duration] [int] NULL,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Acct_Num] [int] NULL,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED
    (
        [TroubleshootId]
    ) ON [PRIMARY]
) ON [PRIMARY]
GO

Which contains 5.6 Million rows and has non clustered indexes on Date,
ReasonID, maj_Reason, Connection. Compared to other tables on the same
server this one is extremely slow. A simple query such as :

SELECT
    SD.reason_desc,
    SD.Duration,
    SD.maj_reason_desc,
    SD.[Connection],
    SD.aolEnteredBy
FROM dbo.[Sherlock Data] SD
WHERE SD.[Date] > Dateadd(Month, -2, Getdate())

takes over 2 minutes to run! I realise the table contains several
large columns which make the table quite large, but unfortunately this
cannot be changed for the moment.

How can I assess what is causing the long query time? And what
could I possibly do to speed this table up? The database itself is
running on a dedicated server which hosts some other databases, none of
which have this performance issue.

Anyone have any ideas?|||Do other queries which benefit from indexes also have bad performance?

Where SD.[Date] > Dateadd(Month,-2,Getdate())

does a row-based comparison - is it not using the index, or what does the
query plan tell you about it?

(Mark the query in QA and press Ctrl+L to see it.)

Jens Suessmeyer.|||One possibility is to change the PK to nonclustered and the index on
[Date] to clustered. If you often do range/grouping queries based on
[Date] then that should be useful, but it might also impact queries
using TroubleshootId, so you need to test any change with a number of
representative queries.
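Sketched out, swapping the clustered index over to [Date] would look something like this (an illustration only - test it off-production first, since rebuilding the clustered index on 5.6 million rows is an expensive operation):

```sql
-- drop the existing nonclustered index on [Date] first
-- (its name isn't given in the thread; IX_SH_Data_Date is a placeholder)
DROP INDEX SH_Data.IX_SH_Data_Date

ALTER TABLE SH_Data DROP CONSTRAINT PK_SH_Data
ALTER TABLE SH_Data ADD CONSTRAINT PK_SH_Data
    PRIMARY KEY NONCLUSTERED (TroubleshootId)

CREATE CLUSTERED INDEX IX_SH_Data_Date ON SH_Data ([Date])
```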

Other general advice would be to review the query plan in QA (Ctrl+K),
run UPDATE STATISTICS on the table, and also try tracing a typical
workload and running it through the Index Tuning Wizard to see what it
can recommend.

If you need more specific comments, you should post the query plan
(using SET SHOWPLAN_TEXT), and it might also be useful to know how many
rows are returned by the query.
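For reference, capturing the plan as text looks like this (the column names here follow the CREATE TABLE above, since the posted query doesn't quite match the DDL):

```sql
SET SHOWPLAN_TEXT ON
GO
SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc,
       SD.[Connection], SD.EnteredBy
FROM dbo.SH_Data SD
WHERE SD.[Date] > DATEADD(MONTH, -2, GETDATE())
GO
SET SHOWPLAN_TEXT OFF
GO
```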

Simon|||Yeah, I've tried a test on both, and a query using the indexes takes about
20 seconds less to run.

I tried Ctrl+L but it's not making much sense to me.|||garydevstore (GaryDataStore@gmail.com) writes:
> Which contains 5.6 Million rows and has non clustered indexes on Date,
> ReasonID, maj_Reason, Connection. Compared to other tables on the same
> server this one is extremely slow. A simple query such as :

Maybe some terminology is in order here. A road can be fast, but that
does not help you if your car has a steering wheel out of order, causing
you to zig-zag over the road. A car can be fast, but that does not help
if the road is in poor condition, so you cannot drive faster than 30 km/h
anyway.

In this case, the table is the road, and the query plan is the car. A
table itself does not move, but it can be badly fragmented in which case
it can be slow to drive through.

More likely, the query plan is not the best for the query. This is your
query:

> SELECT
> SD.reason_desc,
> SD.Duration,
> SD.maj_reason_desc,
> SD.[Connection],
> SD.aolEnteredBy
> FROM dbo.[Sherlock Data] SD
> Where SD.[Date] > Dateadd(Month,-2,Getdate())

There is a non-clustered index on Date. Assuming that rows are added
to this table regularly, there are presumably quite a few rows that
fit this condition. There are two ways for the optimizer to evaluate
this query: using the index, or scanning the table. The index is good
if only a few rows are hit, but if many rows are hit the table scan is
faster. This is because, with the index, you may need to read the same
page more than once.

The optimizer makes its choice of plan from the statistics SQL Server
has sampled about the table. The statistics may be out of date (even
if by default SQL Server auto-updates statistics). Try an UPDATE
STATISTICS WITH FULLSCAN, to see if this makes any difference.

But the road can also be in poor condition - that is, the table can be
badly fragmented. This can be analysed with DBCC SHOWCONTIG and
remedied with DBCC DBREINDEX.
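In concrete terms, against the table from the original post:

```sql
UPDATE STATISTICS SH_Data WITH FULLSCAN

-- check scan density / logical fragmentation
DBCC SHOWCONTIG ('SH_Data')

-- rebuilds all indexes on the table, which defragments it
DBCC DBREINDEX ('SH_Data')
```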

As suggested in other posts, you should look at the query plan, and see
if it says Clustered Index Scan or Index Seek + Bookmark Lookup.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Simon Hayes wrote:
> One possibility is to change the PK to nonclustered and the index on
> [Date] to clustered. If you often do range/grouping queries based on
> [Date] then that should be useful, but it might also impact queries
> using TroubleshootId, so you need to test any change with a number of
> representative queries.

A covering index might be an option, too, especially if there are several
queries with multiple criteria.
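For the query in the original post, a covering index might look like the sketch below. Note that the INCLUDE syntax requires SQL Server 2005 or later; on SQL Server 2000 the extra columns would have to go into the index key, and several nvarchar(255) columns together would likely exceed the 900-byte key limit:

```sql
CREATE NONCLUSTERED INDEX IX_SH_Data_Date_Covering
ON SH_Data ([Date])
INCLUDE (reason_desc, Duration, maj_reason_desc, [Connection], EnteredBy)
```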

One question to the OP: why do you have Date and Time both as datetime
columns? Other remarkable things: all character columns seem to be Unicode
(nvarchar) and have length 255. You might save space by changing to
varchar (if possible) and/or reducing the length. Also, this doesn't
really look like a normalized schema. I would at least expect ids
for EnteredBy and probably some others.

> Other general advice would be to review the query plan in QA (Ctrl+K),
> run UPDATE STATISTICS on the table, and also try tracing a typical
> workload and running it through the Index Tuning Wizard to see what it
> can recommend.
> If you need more specific comments, you should post the query plan
> (using SET SHOWPLAN_TEXT), and it might also be useful to know how
> many rows are returned by the query.

Plus complete index DDL.

robert

Extremely slow runtime running SSRS on a cube

Hi everyone:

I am developing an SSRS report over a cube. When I drag and drop fields, it works fine; it runs in a few minutes. I am selecting only from a single day - about 10,000 records. However, when I add some calculated fields it takes much longer. It's been running for 7 hours. The calculated fields are pretty simple. Some select one field or another depending upon the value of a third field. One is two fields multiplied together. One is a constant times a field. Something's obviously wrong here. Anybody seen this or have a solution?

Barry

If you use the MDX query designer (the GUI) in SSRS, I believe it puts the NON EMPTY keyword on both axes. So as soon as you put calculated measures into the query, it now has to do a non-empty against the calculation. This is MUCH more expensive unless you've properly defined a NON_EMPTY_BEHAVIOR.

Option 1: Do a search for NON_EMPTY_BEHAVIOR and add that to all your calculated measures.

Option 2: Flip over to the advanced tab where you can manually control the MDX query and start using the NonEmpty function instead of the NON EMPTY keyword. If you use the NonEmpty function, then you can specify which (physical) measure it should check.

That make sense?

|||

Thanks so much! That worked like a charm.

I am new to MDX, been using Oracle and am pretty helpless without the query builder.

|||

Well, it ran pretty fast, but my calculated field disappears when I switch to design mode. Here's what the query looks like:

WITH MEMBER [Measures].[Fill Price]

AS 'IIF( [Dim Trade Info].[Hold Type]="CASH", [Dim TradeId].[Spot 3 Month], [Dim TradeId].[Price All In] )'

SELECT NONEMPTY ( { [Measures].[Slippage Tick Open Price], [Measures].[Slippage Open Price], [Measures].[Slippage G1 Bank], [Measures].[Quantity], [Measures].[Slippage Contract Open Price], [Measures].[Fill Price], [Measures].[Slippage Contract TWAP], [Measures].[Slippage Tick TWAP], [Measures].[Slippage TWAP], [Measures].[Slippage G1 Model] } )

ON COLUMNS, NONEMPTY ( { ([Fill Date].[Full Date].[Full Date].ALLMEMBERS * [Dim Team].[Team].[Team].ALLMEMBERS * [Dim Trader].[Trader Name].[Trader Name].ALLMEMBERS * [Dim Trade Info].[Hold Type].[Hold Type].ALLMEMBERS * [Dim Model].[Model Name].[Model Name].ALLMEMBERS * [Dim Market].[Report Name].[Report Name].ALLMEMBERS * [Dim TradeId].[Direction].[Direction].ALLMEMBERS * [Dim TradeId].[Spot 3 Month].[Spot 3 Month].ALLMEMBERS * [Dim TradeId].[Price All In].[Price All In].ALLMEMBERS * [Dim TradeId].[Open Price].[Open Price].ALLMEMBERS * [Dim TradeId].[TWAP].[TWAP].ALLMEMBERS ) } )

DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS FROM ( SELECT ( [Fill Date].[Full Date].&[2007-08-08T00:00:00] : [Fill Date].[Full Date].&[2007-08-09T00:00:00] ) ON COLUMNS FROM [Trade]) CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS

The part referenced by 'IIF( [Dim Trade Info].[Hold Type]="CASH", [Dim TradeId].[Spot 3 Month], [Dim TradeId].[Price All In] )' is what I want to select. It disappeared, though.

Barry


Extremely slow report rendering - PLEASE HELP!

Hi,
We are running Reporting Services 2005 32-bit edition on a Windows 2003 R2
x64 platform, and SQL Server 2005 64-bit database server on Windows 2003 R2
x64. The Reporting Services catalog and the reporting datasource database
are both hosted remotely on the database server.
We are using the Reporting Services ReportViewer web-forms control version
8.0.50727.817 in our ASP.NET 2.0 32-bit web application. We are using custom
forms-authentication.
We have tested this same setup all hosted on a single machine, and it
flies - even with complex reports we can render a report in a couple of
seconds, so we are confident that our report queries and the database itself
are not an issue.
HOWEVER, as soon as we move the RS catalog database to the remote database
server (where it needs to be in our production environment), performance
becomes unacceptably slow - reports that rendered in a couple of seconds on
a single machine typically take 10 to 20 times longer to render.
Please, any advice on what could be causing this problem will be gratefully
received. We have explored things including network addressing, and ensured
that our database server name (which Reporting Services is configured to use
to talk to the database server) has an entry in the internal DNS.
Chris Lewis|||Hello Chris,
I understand that when you run the report on an x64 server which also hosts
the database, the report renders slowly.
You may run the following SQL Statement for the report server database.
select CAST(C.Name AS VARCHAR(20)) [ReportName],
E.TimeDataRetrieval,
E.TimeProcessing,
E.TimeRendering,
E.ByteCount,
E.[RowCount]
from executionlog E inner join catalog C
on E.ReportID = C.ItemID
Please let me know the result so that I can provide further assistance.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================This posting is provided "AS IS" with no warranties, and confers no rights.|||Hi,
We have been doing further testing, and we are 90% sure that custom (forms)
authentication and report access checking is at the root of the problem - I
think the local/remote catalog issue that I originally raised was a
red-herring, so please ignore this. We ran the query you supplied, and the
results showed that our database and the queries were performing well - no
problems with getting data for reports.
So, here is where we believe the problems lie...We have an Internet-facing
ASP.NET application that uses Reporting Services, and we have over 5000
individual users on our SQL Server database - each is registered with a
unique user ID on Reporting Services. When our custom authentication
components are called by SSRS to check access to reports, resources, etc.
for the logged-in user the access check is taking a long time to execute,
and we think we know why - here is a snippet of the code (taken from the
SSRS Forms authentication sample code, which we are using to a large extent)
in Authorization.cs which is a class within our custom authentication
extension component that implements IAuthorizationExtension. The key issue
is the foreach(AceStruct ace in acl) loop, which iterates once for each SSRS
user - and with 5000+ users (and growing rapidly) this is impacting
performance in a major way:
public bool CheckAccess(
    string userName,
    IntPtr userToken,
    byte[] secDesc,
    CatalogOperation requiredOperation)
{
    AceCollection acl = DeserializeAcl(secDesc);
    foreach (AceStruct ace in acl)
    {
        // First check to see if the user or group has an access control
        // entry for the item
        if (0 == String.Compare(userName, ace.PrincipalName, true,
            CultureInfo.CurrentCulture))
        {
            // If an entry is found, return true if the given required
            // operation is contained in the ACE structure
            foreach (CatalogOperation aclOperation in ace.CatalogOperations)
            {
                if (aclOperation == requiredOperation)
                    return true;
            }
        }
    }
    return false;
}
My question is, how can we do this better and more efficiently? These access
checks are performed multiple times for each report request, so one question
I have is is there a way we can cache the results of a CheckAccess call for
a user, so we don't have to iterate through all the users again for that
user's session?
We need a solution that will scale for unlimited users.
Thanks for any advice.
Chris
"Wei Lu [MSFT]" <weilu@.online.microsoft.com> wrote in message
news:MvBzX9OIIHA.4268@.TK2MSFTNGHUB02.phx.gbl...
> Hello Chris,
> I understand that when you run the report on a x64 server which also host
> the database, you get the report render slow.
> You may run the following SQL Statement for the report server database.
> select CAST(C.Name AS VARCHAR(20)) [ReportName],
> E.TimeDataRetrieval,
> E.TimeProcessing,
> E.TimeRendering,
> E.ByteCount,
> E.[RowCount]
> from executionlog E inner join catalog C
> on E.ReportID = C.ItemID
>
> Please let me know the result so that I can provide further assistance.
>
> Sincerely,
> Wei Lu
> Microsoft Online Community Support
> ==================================================> When responding to posts, please "Reply to Group" via your newsreader so
> that others may learn and benefit from your issue.
> ==================================================> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>|||Hello Chris,
Have you considered putting the credentials into the Session?
In ASP.NET, you can put anything you want into the Session.
Also, for more assistance, you may want to check with the
microsoft.public.dotnet.framework.aspnet newsgroup.
Hope this helps.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================This posting is provided "AS IS" with no warranties, and confers no rights.|||Hi,
Thanks for your response. I am not sure that we could get the information we
need from the aspnet newsgroup - I think this is more a Reporting Services
specific issue.
I agree that it would make sense to cache the credentials in the Session, if
that was possible to do. My next question is can we get access to the
ASP.NET session for our website from within the SSRS forms-authentication
custom security extension that is running in the context of the SSRS
ReportServer webservice? If so, can you provide a detailed code example of
how to do this.
Thanks, in anticipation.
Chris
"Wei Lu [MSFT]" <weilu@.online.microsoft.com> wrote in message
news:AoHoHnoIIHA.4200@.TK2MSFTNGHUB02.phx.gbl...
> Hello Chris,
> Have you consider to put the credential into the Session?
> In ASP.NET, you could put anything you want to the Session.
> Also, for more assistance, you may consider to check with
> microsoft.public.dotnet.framework.aspnet newsgroup.
> Hope this helps.
> Sincerely,
> Wei Lu
> Microsoft Online Community Support
> ==================================================> When responding to posts, please "Reply to Group" via your newsreader so
> that others may learn and benefit from your issue.
> ==================================================> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>|||Hello Chris,
I do not have such a code example yet.
Since this issue may be related to Forms Authentication and the ASP.NET
session, the ASP.NET newsgroup will have more experts on this issue.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================This posting is provided "AS IS" with no warranties, and confers no rights.

Extremely Slow Query Times

Is there something that I can do to improve the query times when using
Excel to query my AS2005 cube? It's EXTREMELY slow, even if I'm
the only one querying the cube.

The AS server should not be a bottleneck (Windows 2003 x64-bit, dual
core AMD Opteron, 7+ gigs of memory, etc.). I feel like our cube
is very small with little data as well.

In addition to the slow query times when using Excel, using the Browser
in Visual Studio locally on the server results in queries taking longer
to execute than I would expect.

What are some things that I can do to improve performance?

This is just a shot in the dark, but have you specified all the member property relationships between the attributes on your dimensions correctly? If you've created your dimensions using the wizard then the vast majority (possibly all) of your attributes will be directly linked to the key attribute only, and this can lead to less than optimal performance.

Here's an example of what I mean: say you have a Geography dimension with Continent, Country, State and City attributes and Address as the key attribute. You know there's a many-to-one relationship between Address and City, City and State, State and Country and Country and Continent, but by default the wizard will only create relationships on Address and City, Address and State, Address and Country and Address and Continent. What you need to do is go to the dimension editor in VS, then in the Attributes pane on the left hand side drag and drop attributes onto other attributes to create these relationships (AS knows about transitive relationships too, so you can delete ones like Address and Continent). Once you've done this then redesign your aggregations (usage-based optimisation might be a good idea too in the medium term) and reprocess, and you should see an improvement in performance.

Chris

|||

Chris,

I know what you're referring to, but I don't really understand how to implement it correctly.

Take the following as my example. Let's say I have Product SKU and SKU Description as available attributes in my Products dimension. While editing the Products dimension, I notice that I can drag SKU Description to create a relationship under Product SKU. I can also do the opposite and drag Product SKU to create a relationship under SKU Description. However, I can't do both. Therefore, what is the difference between the 2 relationships? Ultimately, a Product SKU can have only one SKU Description and vice-versa.

Thanks!|||

In this case, yes, you have a 1:1 relationship between Product SKU and SKU Description, but I believe it's still beneficial to put the relationship in (probably by making Description a property of Product SKU, if the latter is the key attribute of your dimension). Relationships are useful for AS when it tries to design aggregations, use aggregations during querying, and for working out which attributes 'exist' with each other, all of which will improve query performance. Defining 1:1 relationships isn't going to have such a big impact though; do you have any 1:M relationships such as Product Category to Product SKU, Year to Month etc.? Defining them in the dimension is likely to have a much more obvious effect.

Extremely Slow Query Times

Is there something that I can do to improve the query times when using Excel to query my AS2005 cube? It's EXTREMELY slow, even if I'm the only one querying the cube.
The AS server should not be a bottleneck (Windows 2003 x64, dual-core AMD Opteron, 7+ GB of memory, etc.). I feel like our cube is very small with little data as well.
In addition to the slow query times when using Excel, using the Browser in Visual Studio locally on the server results in queries taking longer to execute than I would expect.
What are some things that I can do to improve performance?

This is just a shot in the dark, but have you specified all the member property relationships between the attributes on your dimensions correctly? If you've created your dimensions using the wizard then the vast majority (possibly all) of your attributes will be directly linked to the key attribute only, and this can lead to less than optimal performance.

Here's an example of what I mean: say you have a Geography dimension with Continent, Country, State and City attributes and Address as the key attribute. You know there's a many-to-one relationship between Address and City, City and State, State and Country and Country and Continent, but by default the wizard will only create relationships on Address and City, Address and State, Address and Country and Address and Continent. What you need to do is go to the dimension editor in VS, then in the Attributes pane on the left hand side drag and drop attributes onto other attributes to create these relationships (AS knows about transitive relationships too, so you can delete ones like Address and Continent). Once you've done this then redesign your aggregations (usage-based optimisation might be a good idea too in the medium term) and reprocess, and you should see an improvement in performance.

Chris

|||Chris,

I know what you're referring to, but I don't really understand how to implement it correctly.

Take the following as my example. Let's say I have Product SKU and SKU Description as available attributes in my Products dimension. While editing the Products dimension, I notice that I can drag SKU Description to create a relationship under Product SKU. I can also do the opposite and drag Product SKU to create a relationship under SKU Description. However, I can't do both. Therefore, what is the difference between the two relationships? Ultimately, a Product SKU can have only one SKU Description and vice versa.

Thanks!|||

In this case, yes, you have a 1:1 relationship between Product SKU and SKU Description, but I believe it's still beneficial to put the relationship in (probably by making Description a property of Product SKU, if the latter is the key attribute of your dimension). Relationships are useful for AS when it tries to design aggregations, use aggregations during querying, and for working out which attributes 'exist' with each other, all of which will improve query performance. Defining 1:1 relationships isn't going to have such a big impact, though; do you have any 1:M relationships such as Product Category to Product SKU, Year to Month, etc.? Defining them in the dimension is likely to have a much more obvious effect.

extremely slow query

I have these queries, and when I run them it takes 15 minutes to return the result. For the moment, events is a table with 101 tuples and leftjets has nearly 700 tuples. I'm going to check whether I can see where the bottleneck is.

Code Snippet

/**********************************************************************/
/* Jet Veto Cut 2
* leftJets jets should have Pt not bigger than maxAllowedPtForOtherJets
* see Hadronic Top Cut 2
* m_maxAllowedPtForOtherJets: ptOJets
*/
/*
* TTreeCut::SelectTopCombination, m_theTopComb
* min of m_okTopComb
*/

create view mTopComb
As
select j.*
from topComb as j
where (abs(sqrt(abs((j.j1Ee+j.j2Ee + j.Ee)*(j.j1Ee+j.j2Ee +j.Ee) -
((j.j1px +j.j2px + j.px)*(j.j1px +j.j2px + j.px) +
(j.j1py +j.j2py + j.py)*(j.j1py +j.j2py + j.py) +
(j.j1pz +j.j2pz + j.pz)*(j.j1pz +j.j2pz + j.pz))))
- 174.3))
=
(select min(abs(sqrt(abs((t.j1Ee+t.j2Ee + t.Ee)*(t.j1Ee+t.j2Ee +t.Ee) -
((t.j1px +t.j2px + t.px)*(t.j1px +t.j2px + t.px) +
(t.j1py +t.j2py + t.py)*(t.j1py +t.j2py + t.py) +
(t.j1pz +t.j2pz + t.pz)*(t.j1pz +t.j2pz + t.pz))))
- 174.3))
from topComb as t
where t.eventid=j.eventid)

GO

/*
* TTreeCut::SelectTopCombination, m_theLeftOverJets
* select m_okJets which are not contained in m_theTopComb
*/

create view leftjets
As
select distinct o.*
from okJets as o
where not exists (select o.idap from mtopcomb as j where j.idap=o.idap);

GO

create view jetVetoCut
AS
select distinct e.*
from events e
where not exists (select * from leftjets j where e.idevent=j.eventid and dbo.pt(j.idap)>70);

GO

The WHERE clause contains expressions that are extremely complex, and that makes it very hard for the Query Optimizer to do accurate estimation. Typically, column expressions in the WHERE clause cause optimization problems with plan choice or plan execution, and they make the use of indexes impossible. I am guessing that for these reasons the query optimizer produces a suboptimal plan, and hence your query is slow.

The following article describes a way to deal with complex expressions and enable the query optimizer to do proper estimation. I hope that you find it useful: http://blogs.msdn.com/queryoptteam/archive/2006/03/24/560089.aspx

Regards,

Leo Giakoumakis
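
To make the idea concrete, the approach from the linked article could be sketched against the topComb table from this thread: materialise the expression as a computed column so the optimizer gets statistics on it and an index can serve it. This is a sketch only; the column and index names are hypothetical, and the column names are taken from the view above.

```sql
-- Sketch: precompute the per-row mass expression once.
-- PERSISTED stores the (float) result, which also makes it indexable.
ALTER TABLE topComb ADD massDelta AS
    abs(sqrt(abs((j1Ee+j2Ee+Ee)*(j1Ee+j2Ee+Ee) -
                 ((j1px+j2px+px)*(j1px+j2px+px) +
                  (j1py+j2py+py)*(j1py+j2py+py) +
                  (j1pz+j2pz+pz)*(j1pz+j2pz+pz))))
        - 174.3) PERSISTED;

-- The MIN-per-event subquery in mTopComb can then read a narrow index
-- instead of recomputing the expression for every row on every probe.
CREATE INDEX IX_topComb_eventid_massDelta ON topComb (eventid, massDelta);
```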

|||

hi,

here's another alternative: the idea of mTopCombComputed is to return the same result as your mTopComb view, but with the query minimized so that it requires fewer reads.

CREATE FUNCTION mTopCombComputed()
RETURNS @mTopComb TABLE (
eventid int
, idap numeric -- just changed datatype according to your DDL
, Computed numeric -- just changed datatype according to your DDL
)
AS
BEGIN
INSERT
INTO @mTopComb
SELECT j.eventid
, j.idap
, Computed = (abs(sqrt(abs((j.j1Ee+j.j2Ee + j.Ee)*(j.j1Ee+j.j2Ee +j.Ee) -
((j.j1px +j.j2px + j.px)*(j.j1px +j.j2px + j.px) +
(j.j1py +j.j2py + j.py)*(j.j1py +j.j2py + j.py) +
(j.j1pz +j.j2pz + j.pz)*(j.j1pz +j.j2pz + j.pz))))
- 174.3))
FROM topComb j

DELETE j
FROM @mTopComb j INNER JOIN
(
SELECT eventid
, MIN(Computed) AS MinComputed
FROM @mTopComb
GROUP BY eventid
) t ON j.eventid = t.eventid
WHERE j.Computed <> t.MinComputed

RETURN
END
GO

ALTER VIEW leftjets
AS


SELECT DISTINCT
o.*
FROM okJets o LEFT OUTER JOIN
dbo.mTopCombComputed() c ON o.idap = c.idap
WHERE c.idap IS NULL

GO

ALTER VIEW jetVetoCut
AS


SELECT DISTINCT
e.*
FROM events e LEFT OUTER JOIN
leftjets j ON e.idevent = j.eventid
and dbo.pt(j.idap) > 70
WHERE j.eventid IS NULL

GO

Extremely slow median measures

Hi

I'm having a problem with extremely slow median measures.

I've created a named set of all record IDs and wrote the measure as Median( [All Records], [Measures].[Age] ). When I drop a dimension into one of the axes, it takes a very long time to calculate the median even at the top level of the hierarchy, and I suspect it's computing the median for all the members of that dimension, even before I've drilled down into them.

Anyone know a better method for this?

Your formula computes the Median over all records always, regardless of the selection in the Records dimension.|||

Sorry Mosha, I don't quite follow. I need this to be a generic measure that will return the median of any cell in the client browser. Since median cannot be preaggregated, I thought the only way to do this was to take the median of the set of all records in the current cell. The other option I had explored went something like this:

Median (

{ ( Axis(0)(0)( Axis(0)(0).Count - 1 ).Dimension.CurrentMember.All, [All Records].[ ID ].[ ID ] ) },

[Measures].[Age]

)

But this didn't work at all.

|||Sorry, this time I don't quite follow. What exactly do you mean by the following: "I need this to be a generic measure that will return the median of any cell in the client browser". Median of what? Perhaps you could illustrate with a couple of examples.|||

My mistake... Should have said median of a measure (e.g. age) within any cell in the spreadsheet. For example, if I have a 2 x 2 table with Male and Female as columns and marital status Single and Married as rows, I'd be showing the median age in each of the 4 cells. Other times, users would be interested in the median age for other combinations of factors, say Gender and Cancer, or Cancer and Socioeconomic Status, but they shouldn't have to select a different median measure for each combination. Sort of like a percentage/proportion against any dimension selected on the row/column axis, as discussed in the following post:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=720160&SiteID=1

|||

If you are guaranteed to always have two axes, then something like that may work

Median(CrossJoin(Axis(0), Axis(1)), Measures.[Age])

|||

Thanks Mosha, but no luck with that one either. There's probably no other way around this; that is, other than using the Fact Table primary key (Record ID) to select the set of all individual age values.

|||

Then I again don't understand your requirements :( Based on what you wrote before:

> For example, if I have a 2 x 2 table with Male and Female as columns and marital status Single and Married as rows, I'd be showing the median age in each of the 4 cells.

The formula that I wrote computes the median of these 4 cells and places it into each one of them. I have verified it with AdventureWorks, which has Gender and Marital Status attributes...
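
For what it's worth, a technique that isn't mentioned in the thread is the EXISTING operator, which restricts a set to the current cell's context before the median is taken. A sketch, assuming the dimension and attribute names from the original post:

```mdx
CREATE MEMBER CURRENTCUBE.[Measures].[Median Age] AS
    Median(
        // EXISTING re-evaluates the set per cell, keeping only the
        // records that exist in the current slice
        EXISTING [All Records].[ID].[ID].Members,
        [Measures].[Age]
    );
```

A named set is evaluated once, outside cell context, which is why the original formula always covers all records; EXISTING forces the per-cell re-evaluation, at the cost of still scanning the record-level attribute for each cell.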

Extremely slow Excel MDX

Using Excel as a client is most of the time exceedingly slow. For example writing a simple query of the type:

SELECT [Measures].[Some Measure] ON 0,
[Product].[Product-Version].[Product] ON 1
FROM [Cubename]

in Management studio is in Excel transformed to:

SELECT NON EMPTY HIERARCHIZE(AddCalculatedMembers({DrillDownLevel({[Product].[Product-Version].[All]})})) DIMENSION PROPERTIES PARENT_UNIQUE_NAME ON COLUMNS FROM [Cubename] WHERE ([Measures].[Some Measure])

which takes several times longer to execute. As one starts drilling down it becomes increasingly worse, with Excel producing MDX that takes hundreds of times longer to execute than if I hand-write the MDX. This is with a very simple cube where Some Measure is not a calculated member. I can't even begin to imagine how slow it would be with a more complex cube. Is there anything to be done about this, any guidelines to follow to make it easier for Excel to generate "normal" MDX?

I had a similar problem with Excel and OWC when accessing OLAP cubes. Unfortunately, unless you optimize your cube, there is nothing that can be done about how Excel generates MDX to retrieve data. Excel and OWC are closed code, and recently Microsoft announced that they will be stopping new releases of OWC. However, Excel PivotTable has a new version; take a look at Office 2007, currently in beta. It generates more efficient MDX compared to Office 2003/2000.

Downside: it will take another 2-3 years for Office 2007 to be as popular as Office 2003, so distribution of your solution in Office 2007 might be an issue if you decide to switch to Office 2007 in a larger-scale enterprise.

|||Thanks. Is there anything special you have in mind when you say "unless you optimize your cube"? Or do you mean the "ordinary" optimizations one does to make the server work decently fast? I will try to have a look at the 2007 beta though.|||

As mentioned earlier, you can take a look at Office 2007, which sends somewhat different MDX queries. You can also take a look at ProClarity, recently acquired by Microsoft, to see if you get better performance using it.

Also make sure you install the latest service pack - SP1. There have been some performance improvements in it. You will see even more performance improvements in the upcoming Service Pack 2. Watch for announcements of the Community Technology Preview (CTP) to get your hands on the upcoming SP2.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Extremely Slow connection time

Hi,
If absolutely nothing changed on the server and everyone else is working fine... then it's your desktop.
Have you looked at the CPU and memory usage on your desktop? Your desktop could be dropping its connection to SQL Server, so it could be a network issue. Try pinging the SQL Server with the -t option.
If the server and everyone else are OK, concentrate on your desktop. Maybe there was a change in the network itself.
One thing you can try is to create an alias for the SQL Server. You can do that via Start\Programs\Microsoft SQL Server\Client Network Utility. Once the utility opens up, go to the second tab (Alias) and click on 'Add'. Enter the server name and alias name and use either 'Named Pipes' or TCP/IP. DO NOT change any of the default settings unless you know for a fact they have been changed.
Try using 'Named Pipes'; if performance does not increase, try TCP/IP.
hth
DeeJay
>--Original Message--
>Hi
>I have recently reinstalled XP Professional as well as the MSSQL client tools.
>Our SQL 2000 server is running Win2K with 512MB RAM, and has been doing so for quite a while; no changes were done to the server while I reinstalled XP on my PC.
>Now my connection to the SQL server is extremely sluggish, e.g. connecting via the Query Analyzer takes up to 2 minutes, and so do any other subsequent actions, for example opening the list of databases, tables or stored procedures in the object browser.
>All my queries take ages to get going, even the simplest ones.
>Previously the response was very reasonable, as it is for the other developers' PCs in the company with a similar set-up.
>I have tried using both forms of security but have no joy either way. I have used the IP address – no go.
>If I ping the NetBIOS name of the server the response is instantaneous.
>
>Please help!!
>.
Hi
Thanks for the response.
I had already loaded the latest service packs and already set up the Client Network Utility, and tried both TCP/IP and named pipes; none of this had any effect.
Regards
Herbert|||Are other people able to connect to the server without any problem?
If yes, then it's your PC. Perhaps the XP re-install did not work properly.
DeeJay

>--Original Message--
>Hi
>Thanks for the response
>I had already loaded the latest service packs and already set up the network client utility, and tried both TCP/IP as well as named pipes; all of these had no effect.
>Regards
>Herbert
>.
>

Extremely Slow connection time

Hi
I have recently reinstalled XP Professional as well as the MSSQL client tools.
Our SQL 2000 server is running Win2K with 512MB RAM, and has been doing so for quite a while; no changes were done to the server while I reinstalled XP on my PC.
Now my connection to the SQL server is extremely sluggish, e.g. connecting via the Query Analyzer takes up to 2 minutes, and so do any other subsequent actions, for example opening the list of databases, tables or stored procedures on the object browser.
All my queries take ages to get going even for the simplest queries.
Previously the response was very reasonable, as are the other developers PCs in the company, with similar set up.
I have tried using both forms of security but have no joy either way. I have used the IP address – no go.
If I ping the NetBIOS name of the server the response is instantaneous.
Please help!!
Did you remember to install SQL Server 2000 SP3a after the reinstall?
Without it, you are vulnerable to the Slammer.
Hope this helps.
Dan Guzman
SQL Server MVP
"Herbert Schroder" <herberts@.mweb.co.za> wrote in message
news:9FF54BCC-2F28-4F5A-9D6A-06A4D4B1CA41@.microsoft.com...
> Hi
> I have recently reinstalled XP Professional as well as the MSSQL client
tools.
> Our SQL 2000 server is running Win2K with 512MB RAM, and has been doing so
for quite a while, no changes where done to the server while I reinstalled
XP on my PC.
> Now my connection to the SQL server is extremely sluggish, e.g. connecting
via the Query Analyzer takes up to 2 minutes, and so do any other subsequent
actions, for example opening the list of databases, tables or stored
procedures on the object browser.
> All my queries take ages to get going even for the simplest queries.
> Previously the response was very reasonable, as are the other developers
PCs in the company, with similar set up.
> I have tried using both forms of security but have no joy either way. I
have used the IP address - no go.
> If I ping the NetBIOS name of the server the response is instantaneous.
>
> Please help!!
>

Extremely slow client connection to MSSQL

Hello

I'm experiencing an extremely slow connection from a Windows XP Pro SP2 client to SQL Server 2000 running on a Windows 2000 server. The client runs a VB6 application that connects with Windows authentication: every form requesting data opens with a long delay the first time; subsequent attempts run normally fast.

In the same LAN there are some others identical clients, all running fine.

Every other network activity from that client is ok.

Where should i start to investigate from?

Can you check how much RAM is on this machine compared to the other machines?

The reason I say this is that if it's slow the first time and quick thereafter, then it's as if the machine is busy loading the first time; since everything is in memory thereafter, it is quick on subsequent runs.

Regards,

Barry Andrew

|||

From what you described, it looks like your issue only happens between one machine and the server on your local network. There are several things that are normally helpful.

(1) "ping" to see the round-trip latency between these two machines and compare it with the others. Try multiple times. You can also try to copy files between the machines and time the elapsed time. If there are big discrepancies, there is a network issue, and you need to contact your admin to resolve it.

(2) Use a provider-prefixed connection string. For example: osql -E -S tcp:servername,portnumber. An explicit connection string avoids the overhead of trying multiple different providers during connection, some of which may not be supported by your server/network.

(3) If the delay happens after the connection, you can use SQL Profiler. But according to your description, I think your problem is at the connection stage.

HTH.

Friday, February 24, 2012

Extreme slow SQL Server 2005 and high CPU usage - Casting of values

Hello sql and .net gurus :-)
I have a problem with my website www.eventguide.it. It's completely developed under .NET 2 and SQL Server 2005 Express. My problem is the following:
The server is an Intel 3GHz HT processor with 1GB RAM. No other page on the running system is a CPU-consuming site. We optimized the SQL statements, the code, the caching and many other parts of the website (pooling on SQL access), but SQL Server uses about 50% to 100% of the CPU and about 400MB RAM all the time. The whole site seems to be very, very slow. In fact there are many SQL operations on every page request, but we cache a lot of them in different ways (page output caching, application caching), so I don't understand why we have so many performance problems. Any suggestions for optimised code in general? I read nearly all of the MS .NET performance papers - but real-world experience is the missing part :-)

Is it better to cast the values of a SQL reader like this:
Dim String1 As String = CType(DataReader.Item(0), String)
Dim Integer1 As Integer = CType(DataReader.Item(1), Integer)
or like this?
Dim String1 As String = DataReader.Item(0)
Dim Integer1 As Integer = DataReader.Item(1)

Thanks a lot for your help!
FOX
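
A third option the question doesn't list, and usually the cheapest of the three, is the reader's typed ordinal accessors, which skip the boxing and runtime conversion that both CType and an implicit assignment incur. A sketch; the ordinals are assumed to match the query's column order:

```vb
' Sketch: typed accessors return the value in its native type directly,
' with no intermediate Object and no runtime conversion.
Dim String1 As String = DataReader.GetString(0)
Dim Integer1 As Integer = DataReader.GetInt32(1)
```

Note that GetString/GetInt32 throw if the column is NULL or of a different underlying type, so guard with IsDBNull(ordinal) wherever the schema allows NULLs.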

I visited your site, and honestly I can't answer your question. Your SQL CPU usage seems way off, but the problem appears to be entirely bandwidth.

I pinged your site, and traced it with good solid pathways and quick responses. But going into the Foto's (photos) section resulted in pictures taking an extremely long time to load even after the page was rendered. Why type of up-pipe is that server pushing?

|||What do you mean with: "Why type of up-pipe is that server pushing?"|||

Sorry, horribly formed question:

How much bandwidth is available to the server over your connection, dedicated to "upload"?

|||The upload of the server is about 10Mbit - enough to deliver requests in a fast way.
Well, look at the speed of the entire site now. Today I changed an index on a 2,000,000-row table (a counter for images and other things), and from one moment to the next CPU usage dropped dramatically! Now I'm trying to optimize the other tables' indexes as well to increase the speed further.
Any suggestions to handle indexes in a good way?
Thanks for your time!!!|||I had close to the same issue. I added the port to the IP address in the connection string. Don't ask me why this fixed the issue, but it did. I also had the RAM issue: the server would go up to almost 500-600MB of RAM... and it barely had anything in it!!

So I made a batch script to restart the server, and setup a windows scheduled task to run it every other day.

Here is my batch:
rem /y answers the prompt to also stop dependent services
net stop MSSQLSERVER /y
net start MSSQLSERVER

net stop hmailserver
net start hmailserver

Basically, this will restart your SQL server. I do this at about 3AM. It keeps the server running top notch as far as I know.

I had to add hmailserver in there because if you don't restart it also, it for some reason will fail to verify accounts. If you don't have hmail, remove it.|||Because if you don't specify the port, it first tries to contact the SQL Server on one of the standard ports, and if it's firewalled, or blocked, then it takes a bit for it to realize it and then switch over. Also, if you a fairly large latency between the web and sql server, it can reduce the time significantly because there is one less connection that needs to be established, and with a normal TCP/IP connection that does a 3-way handshake to start up, it's many times the latency between the two machines.

Extremely slow performance on a view

Can anyone see anything that may be causing this view to return data extremely slowly, or have any tips on increasing the performance of this particular view?
Alter View dbo.PreRegistration_V as
SELECT C.Customer,
C.Mail_Name,
C.First_Name,
C.Last_Name,
NULL Level1,
Null Level2,
Null Level3,
NULL MBRFULFILLST_Code,
A.Formatted_Line_1,
A.Formatted_Line_2,
A.Formatted_Line_3,
A.Formatted_Line_4,
A.Formatted_Line_5,
A.Formatted_Line_6,
A.Formatted_Line_7,
A.Country,
MR.Order_No,
MR.Order_Date,
MTG.Meeting_Name_1,
MTG.Meeting,
MR.Order_Total,
MR.User_N1,
MR.User_N2,
MR.User_n3,
MR.User_N4,
MR.Badge_Print_Status,
Null COM_Position,
Null Committee,
null End_Date
from Customer C WITH(NOLOCK),
meeting_registrant MR WITH(NOLOCK),
Address A WITH(NOLOCK),
Meeting MTG WITH(NOLOCK)
WHERE C.Customer = MR.Ship_Customer
and MR.Ship_Address_ID = A.Address_ID
and MR.Meeting = MTG.Meeting
and C.Record_Type = 'I'
and MR.Order_Status = 'A'
and MTG.Meeting = 'F05'
UNION
SELECT
C.Customer,
C.Mail_Name,
C.First_Name,
C.Last_Name,
NULL Level1,
NULL Level2,
NULL Level3,
NULL MBRFULFILLST_Code,
A.Formatted_Line_1,
A.Formatted_Line_2,
A.Formatted_Line_3,
A.Formatted_Line_4,
A.Formatted_Line_5,
A.Formatted_Line_6,
A.Formatted_Line_7,
A.Country,
MR.Order_No,
MR.Order_Date,
MTG.Meeting_Name_1,
MTG.Meeting,
MR.Order_Total,
MR.User_N1,
MR.User_N2,
MR.User_n3,
MR.User_N4,
MR.Badge_Print_Status,
CM.COM_Position,
CM.Committee,
CM.End_Date
from Customer C WITH(NOLOCK),
meeting_registrant MR WITH(NOLOCK),
Committee_member CM WITH(NOLOCK),
Address A WITH(NOLOCK),
Meeting MTG WITH(NOLOCK)
WHERE C.Customer = MR.Ship_Customer
and C.Record_Type = 'I'
and MR.Order_Status = 'A'
and MR.Ship_Address_ID = A.Address_ID
and MR.Meeting = MTG.Meeting
and CM.Customer = MR.Ship_Customer
and MTG.Meeting = 'F05'
and (CM.Committee in ('C0003500', 'C0000100', 'C0009000', 'C0003000',
'C0001000', 'C0001300') or CM.Committee like 'C0%')
UNION
SELECT C.Customer,
C.Mail_Name,
C.First_Name,
C.Last_Name,
NULL Level1,
NULL Level2,
NULL Level3,
NULL MBRFULFILLST_Code,
A.Formatted_Line_1,
A.Formatted_Line_2,
A.Formatted_Line_3,
A.Formatted_Line_4,
A.Formatted_Line_5,
A.Formatted_Line_6,
A.Formatted_Line_7,
A.Country,
MR.Order_No,
MR.Order_Date,
MTG.Meeting_Name_1,
MTG.Meeting,
MR.Order_Total,
MR.User_N1,
MR.User_N2,
MR.User_n3,
MR.User_N4,
MR.Badge_Print_Status,
CM.COM_Position,
CM.Committee,
CM.End_Date
from Customer C WITH(NOLOCK),
meeting_registrant MR WITH(NOLOCK),
Committee_member CM WITH(NOLOCK),
Address A WITH(NOLOCK),
Meeting MTG WITH(NOLOCK)
WHERE C.Customer = MR.Ship_Customer
and C.Record_Type = 'I'
and MR.Order_Status = 'A'
and MR.Ship_Address_ID = A.Address_ID
and MR.Meeting = MTG.Meeting
and CM.Customer = MR.Ship_Customer
and MTG.Meeting = 'F05'
and CM.COM_Position IN ('001CHAIR','062PP','060PRES','061VP','063EXECVP')
UNION
SELECT
C.Customer,
C.Mail_Name,
C.First_Name,
C.Last_Name,
MD.Level1,
MD.Level2,
MD.Level3,
MBR.MBRFULFILLST_Code,
A.Formatted_Line_1,
A.Formatted_Line_2,
A.Formatted_Line_3,
A.Formatted_Line_4,
A.Formatted_Line_5,
A.Formatted_Line_6,
A.Formatted_Line_7,
A.Country,
MR.Order_No,
MR.Order_Date,
MTG.Meeting_Name_1,
MTG.Meeting,
MR.Order_Total,
MR.User_N1,
MR.User_N2,
MR.User_n3,
MR.User_N4,
MR.Badge_Print_Status,
Null COM_Position,
Null Committee,
Null End_Date
from Customer C WITH(NOLOCK),
meeting_registrant MR WITH(NOLOCK),
Membership MBR WITH(NOLOCK),
MemberShip_Detail MD WITH(NOLOCK),
Address A WITH(NOLOCK),
Meeting MTG WITH(NOLOCK),
Committee_Member CM WITH(NOLOCK)
WHERE C.Customer = MR.Ship_Customer
and C.Record_Type = 'I'
and MR.Order_Status = 'A'
and MR.Ship_Customer = MBR.Ship_Customer
and MR.Ship_Address_ID = A.Address_ID
and MR.Meeting = MTG.Meeting
and MBR.Order_No = MD.order_No
and MD.Level2 in ('STAFF','IND','JR','STU')
and MBR.Order_Status = 'A'
and MTG.Meeting = 'F05'
UNION
SELECT
C.Customer,
C.Mail_Name,
C.First_Name,
C.Last_Name,
null Level1,
null Level2,
null Level3,
null MBRFULFILLST_Code,
A.Formatted_Line_1,
A.Formatted_Line_2,
A.Formatted_Line_3,
A.Formatted_Line_4,
A.Formatted_Line_5,
A.Formatted_Line_6,
A.Formatted_Line_7,
A.Country,
MR.Order_No,
MR.Order_Date,
MTG.Meeting_Name_1,
MTG.Meeting,
MR.Order_Total,
MR.User_N1,
MR.User_N2,
MR.User_n3,
MR.User_N4,
MR.Badge_Print_Status,
Null COM_Position,
Null Committee,
Null End_Date
from Customer C WITH(NOLOCK),
meeting_registrant MR WITH(NOLOCK),
Membership MBR WITH(NOLOCK),
MemberShip_Detail MD WITH(NOLOCK),
ACI_Current_Fellows CD WITH(NOLOCK),
Address A WITH(NOLOCK),
Meeting MTG WITH(NOLOCK)
WHERE C.Customer = MR.Ship_Customer
and C.Record_Type = 'I'
and MR.Order_Status = 'A'
and MR.Ship_Customer = MBR.Ship_Customer
and MR.Ship_Address_ID = A.Address_ID
and MR.Meeting = MTG.Meeting
and MBR.Order_No = MD.order_No
and MTG.Meeting = 'F05'|||Mike,
That's a pretty hefty view. You might consider using a stored procedure
instead. Have the appropriate indexes been created for each SELECT
statement? Are the indexes defragmented, and are the statistics up to date?
If the data doesn't change very often, would an indexed view be an option?
Could you populate a table with the results during off-peak hours and then
query that table directly (with appropriate indexes)?
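For example, supporting indexes along these lines might help the repeated
Customer/meeting_registrant joins. This is only a sketch: the index names and
column choices are guesses based on the predicates visible in the view, not
something tested against your schema, and INCLUDE requires SQL Server 2005
(on SQL 2000 you'd add those columns to the key instead):

```sql
-- Hypothetical supporting indexes; names and column orders are assumptions
-- derived from the join and filter predicates in the view.
CREATE INDEX IX_MeetingRegistrant_Meeting_Status
    ON meeting_registrant (Meeting, Order_Status)
    INCLUDE (Ship_Customer, Ship_Address_ID, Order_No, Order_Date,
             Order_Total, Badge_Print_Status);

CREATE INDEX IX_Customer_Customer_RecordType
    ON Customer (Customer, Record_Type);
```

Compare the execution plan before and after; if the optimizer still scans,
the column order or included columns probably need adjusting.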
Just a couple of thoughts.
HTH
Jerry
"Mike" <Mike@.discussions.microsoft.com> wrote in message
news:CC4032BC-603F-4846-B6F0-E06E676B6609@.microsoft.com...
|||Hi, Mike,
It is hard to say what causes the query's slow performance without knowing the
DDL for your tables (what indexes are defined on them?). But one thing is for
sure: this part of the query
(CM.Committee in ('C0003500', 'C0000100', 'C0009000', 'C0003000',
'C0001000', 'C0001300') or CM.Committee like 'C0%')
can be simplified to:
CM.Committee like 'C0%'
since every value in the IN list already matches the LIKE pattern.
Also, if you can use "UNION ALL" instead of "UNION", it will perform better,
because UNION has to sort the combined result to remove duplicates. See BOL
on UNION for details.
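To illustrate the difference (a minimal, self-contained example, not your
view): UNION deduplicates the combined result, which costs a sort or hash,
while UNION ALL simply concatenates the branches. Switching your view to
UNION ALL is safe only if the five branches cannot return identical rows,
which is an assumption you would need to verify:

```sql
-- UNION removes duplicates (and must sort/hash to do so):
SELECT 1 AS n
UNION
SELECT 1;      -- returns one row

-- UNION ALL just appends the second result set to the first:
SELECT 1 AS n
UNION ALL
SELECT 1;      -- returns two rows
```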
Perayu
"Mike" <Mike@.discussions.microsoft.com> wrote in message
news:CC4032BC-603F-4846-B6F0-E06E676B6609@.microsoft.com...