Sunday, February 26, 2012

Facing problem in running MSDE

hi all,
I downloaded MSDE 2000 Release A on Windows 2000 Professional and installed it
successfully, but I don't know how to run it. One folder was created, and
inside it are "Binn", "Data", and some other files, but nothing else; no new
tool appears under Program Files.
Do I have to download anything more to run MSDE?
please help me.
thanks,
vinoth
hi vinoth,
vinoth wrote:
> <snip>
please see my answer to your follow-up in [Re: Install MSDE on XP Home Ed.
Unable connect DB from other PC] thread
thank you
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtm - http://italy.mvps.org
DbaMgr2k ver 0.15.0 - DbaMgr ver 0.60.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
|||hi,
thanks Montanari, but I still have a problem.
I downloaded a GUI tool, but when I try to connect to the server it says no
server exists.
I gave MSSQLSERVER as the instance name, with SAPWD="sa".
In Control Panel -> Services, MSSQLSERVER is running.
please guide me.
thanks,
vinoth
"Andrea Montanari" wrote:

> hi vinoth,
> please see my answer to your follow-up in [Re: Install MSDE on XP Home Ed.
> Unable connect DB from other PC] thread
> <snip>
|||hi,
vinoth wrote:
> I gave MSSQLSERVER as the instance name, with SAPWD="sa".
> In Control Panel -> Services, MSSQLSERVER is running.
>
at connection time you have to provide the full name of the MSDE instance...
if it is a default instance you only need to specify the ServerName (or
(Local) for local connections), whereas for named instances the full name is
composed of ServerName\InstanceName (or (Local)\InstanceName for a local
named instance)
please check in Control Panel -> Administrative Tools -> Services what your
instance name is...
if you only find an entry like
MSSQLSERVER
this is a default instance...
if you see something like
MSSQL$some_name
this is a named instance... so you'll have to connect providing
ComputerName\some_name
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtm - http://italy.mvps.org
DbaMgr2k ver 0.15.0 - DbaMgr ver 0.60.0
(my vb6+sql-dmo little try to provide MS MSDE 1.0 and MSDE 2000 a visual
interface)
-- remove DMO to reply
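
For reference, connecting to MSDE with the osql tool that ships with it looks
like this (the machine and instance names below are made up for illustration):

osql -E -S (local)
osql -E -S MYPC\MYINSTANCE

The first command connects to a default instance on the local machine using
Windows authentication; the second connects to a named instance called
MYINSTANCE on the machine MYPC. Replace -E with -U sa to log in as sa instead.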
|||hi, thank you very much. you are so helpful.
thanks,
vinoth
"Andrea Montanari" wrote:

> hi,
> at connection time you have to provide the full name of the MSDE
> instance... <snip>

Facing problem increasing connection pool size in SQL Server

Hi all,

I am using a connection string like:

data source=RemoteHostName;initial catalog=myDb;password=sa;user id=sa;
Max pool size = 200;

And now a strange thing is happening: I am receiving this error:

Timeout expired. The timeout period elapsed prior to obtaining a connection
from the pool. This may have occurred because all pooled connections were in
use and max pool size was reached.

SQL Server Activity Manager is telling me that only 100 connections are
pooled, so I guess the max pool size is still 100 and is not being changed by
my connection string, even though I am trying to change the default pool size
of 100 to 200.

So I am stuck: how do I increase the max pool size? Is there any way?

I am getting worried.

Any help?

Thx and Regards

Bill Vaughn is a well-regarded SQL Server expert. This article of his should
help you out: "The .NET Connection Pool Lifeguard -- Prevent pool overflows
that can drown your applications".

Terri
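
For reference: connection pooling in ADO.NET is per distinct connection
string, so if any part of the string differs between calls in your code, a
second pool (with the default size of 100) is created. A timeout like the one
above also very commonly means connections are opened but never closed, so
the pool runs dry no matter how large it is. A minimal sketch of both points
(connString is a hypothetical variable holding the corrected string):

// requires: using System.Data.SqlClient;
string connString = "data source=RemoteHostName;initial catalog=myDb;" +
    "user id=sa;password=sa;Max Pool Size=200";
using (SqlConnection cn = new SqlConnection(connString))
{
    cn.Open();
    // ... run commands ...
}   // Dispose/Close returns the connection to the pool instead of leaking it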

Facing Problem

Well, I am an MS-SQL user and I keep facing a problem in my application.
After some time my application stops working and goes into a "not
responding" state. Then I have to KILL some sessions with SPID > 51.

After that the problem is resolved and I am able to
create new sessions. Please help me out with this issue.

regards,

Shailesh

You've really told us nothing here. Have you determined if there is a
blocked process? What happens if you run sp_who?

--
Tom

----------------
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com

"VU2LOC" <vu2loc@.gmail.com> wrote in message
news:1137493777.837548.11200@.g47g2000cwa.googlegro ups.com...
Well I am the MS-SQL user I always face the problem in my application.
After some time my application stop working goes in to the not
responding stage. Then I have to KILL some session which are SPID > 51.

After this my problem is getting resolved after this I can able to
create new sessions. Please help me ot with this issues.

regards,

Shailesh|||Hi Shailesh,

Check whether there are any transactions not being committed that may be
causing blocks.

Check for open transactions with DBCC OPENTRAN - look it up in Books Online
to understand its use.

Run sp_who2 and check for any spids that are blocked: a non-zero number in
the BlkBy column.

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials

"VU2LOC" <vu2loc@.gmail.com> wrote in message
news:1137493777.837548.11200@.g47g2000cwa.googlegro ups.com...
> Well I am the MS-SQL user I always face the problem in my application.
> After some time my application stop working goes in to the not
> responding stage. Then I have to KILL some session which are SPID > 51.
> After this my problem is getting resolved after this I can able to
> create new sessions. Please help me ot with this issues.
> regards,
> Shailesh
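
For reference, Tom's and Tony's checks in runnable form (SQL Server 2000
syntax; a diagnostic sketch, not a fix in itself):

-- oldest open transaction in the current database
DBCC OPENTRAN

-- list sessions; a non-zero value in the BlkBy column shows who is blocking
EXEC sp_who2

-- narrow it down to sessions that are currently blocked
SELECT spid, blocked, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0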

Facing Error in Select statement

I have written one procedure; in it I have put the following logic, but I got
this error.
Declare
@purgecount int,
@totcovchrg money,
@totpaymntamt money,
@totreimamt money
Select
@purgecount = count(D.EventID),
@totcovchrg = sum(CoverChrgAmt),
@totpaymntamt = sum(PayDetail.PaymntAmt),
@totreimamt = sum(CalcTotalReimAmt)
from DelReimb D, Reimburse R, ReimburseOut RO,
(select isnull(sum(PaymntAmt),0) PaymntAmt,ClaimID,EventID
from PaymentsDetail
where isnull(InvalidPayInd,'') <> 'Y' and
PaymntInd not in ('A','B','X')
group by ClaimID,EventID) PayDetail
where
D.ClaimID = R.ClaimID and
D.EventID = R.EventID and
D.ClaimID = RO.ClaimID and
D.EventID = RO.EventID and
D.ClaimID = PayDetail.ClaimID and
D.EventID = PayDetail.EventID
Error :
A SELECT statement that assigns a value to a variable must not be combined
with data-retrieval operations.
Please tell me how to overcome this.

Hi
You are missing a comma after the ISNULL statement:
(select isnull(sum(PaymntAmt),0),PaymntAmt,ClaimID,EventID
It is always worth posting DDL and example data
(see http://www.aspfaq.com/etiquette.asp?id=5006) so people can paste your
query into Query Analyser.
John
"spc" wrote:

> I have written one procedure .in that i have put following login.but I got
> the error.
>
> Declare
> @.purgecount int,
> @.totcovchrg money,
> @.totpaymntamt money,
> @.totreimamt money
>
> Select
> @.purgecount = count(D.EventID),
> @.totcovchrg = sum(CoverChrgAmt),
> @.totpaymntamt = sum(PayDetail.PaymntAmt),
> @.totreimamt = sum(CalcTotalReimAmt)
> from DelReimb D, Reimburse R, ReimburseOut RO,
> (select isnull(sum(PaymntAmt),0) PaymntAmt,ClaimID,EventID
> from PaymentsDetail
> where isnull(InvalidPayInd,'') <> 'Y' and
> PaymntInd not in ('A','B','X')
> group by ClaimID,EventID) PayDetail
> where
> D.ClaimID = R.ClaimID and
> D.EventID = R.EventID and
> D.ClaimID = RO.ClaimID and
> D.EventID = RO.EventID and
> D.ClaimID = PayDetail.ClaimID and
> D.EventID = PayDetail.EventID
>
> Error :
> A SELECT statement that assigns a value to a variable must not be combined
> with data-retrieval operations.
>
> Please tell me how to over come this
>
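
As an aside, the error text itself states the rule being broken: a single
SELECT may either assign values to variables or return a result set, but not
both at once. A minimal sketch that reproduces it (table and column names
borrowed from the post):

declare @n int
-- assigns to @n AND returns EventID as a column: raises the error above
select @n = count(*), EventID
from PaymentsDetail
group by EventID

-- assignment only: valid
select @n = count(*) from PaymentsDetail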

Faced unknown problems on SQL Server (Urgent)

Hi all,

I'm facing some unknown errors in SQL Server. SQL Server prompts the errors, and I'm not sure when or why they occur.

The server displayed the messages below, which I have no idea about.


1) Using xpstar.dll to execute extended stored procedure 'xp_MSADEnabled'
2) Using xpsqlbot.dll to execute extended stored procedure 'xp_qv'

Does anyone have any idea how to solve these problems, or what is causing them?

I would very much appreciate any reply. Thank you for helping!!

The second one is normally found in SQL Server's errorlog. I am not sure about xp_MSADEnabled. Have you registered the SQL Server with AD in Enterprise Manager? These messages are normally recorded in the errorlog, and do not normally indicate a problem.

|||Hi,

Thank you for the reply, MCrowley.
Any idea or suggestion on how to get the SQL Server running again? The system users cannot log in because of the database.

|||Depends on what is wrong. What error are the users getting?

|||Hi,

Users have not been able to log in to the system since both of these issues appeared. :S

Facing problem in Report "ReportViewControl"

We have created a report in SQL Server 2005 and are viewing it from the
'ReportViewControl' of Visual Studio 2005. The problem is that sometimes the
report's first page displays only the report header, with all the data on the
second page; other times it displays the report header and the report details
on the same page.
Does anyone know how to solve this problem?
Please help me out.
I have done a lot of R&D on it and have finally come to the conclusion that
it is a bug in the control, for the following reason:
If a report page has the capacity to display 25 records, it first tries to
display them on a single page. If we then add one record, the records display
on another page.
In the first case, when there are few records, it displays all records and
the report header on the same page.
In the second case, when there are more records than can be displayed
together with the report header on one page, it displays the report header
alone on the first page and the detail rows on the second page (as it tries
to fit all records on a single page).
In the third case, when there are more records than fit on a single page
anyway, it starts displaying records from the first page (since it is no
longer possible to fit all records on the same page).

Sounds a lot like you've got the KeepTogether option checked?
Richard
"ecnindia@.newsgroups.nospam"
<ecnindianewsgroupsnospam@.discussions.microsoft.com> wrote in message
news:ED96230E-D761-41DD-9B46-C27C1C2BC883@.microsoft.com...
> We have created report into Sql server 2005 and viewing this from
> "Reportviewcontrol" of Visual studio 2005. Now problem is some time report
> first page displays only Report header and all data on Second page. Some
> time
> its display report header and report details on same page.
> Any one know how to solve this problem
> Please help me out from this problem
> I have made lots of R&D for it and finally come on conclusion that it is a
> bug of control because of following reason
> If report page has capacity of 25 records to display then first it tries
> to
> display on single page. Now if we add one record then its display into
> another page
> In first case when record is too less then its display all record and
> report
> header on same page
> In second case if we have more record which can not display together with
> Report header on same page then it will display report header only on
> first
> page and report detail row on second page(as its try to fit all record on
> single page)
> In third case when we have more records which can not fit on single page
> then it start to display record from first page (this time it is not
> possible
> to it to fit all record on same page)
>|||Thanks Richard
But I have not checked any option like "KeepTogether" or "fit in
one page".
"Richard Coltrane" wrote:
> Sounds a lot like you've got the KeepTogether option checked?
> Richard
> <snip>

fabricated hierarchical recordset

I want to use a fabricated hierarchical recordset in VB6 using ADO. I wrote
code like:
dim rs as adodb.recordset
set rs = new adodb.recordset
rs.fields.append "a1", adChar, 30
Then in a loop I put:
rs.addnew
rs("a1") = ...
rs.update
When I associated this with a hierarchical flexgrid I saw what I expected. As
the next step I added the line:
rs.fields.append "a2", adChapter
and this call gave me an error saying that I used wrong parameters. Then I
realized that I should use a specific connection. But with this connection,
adChar stopped working too. Is it possible to resolve this problem?
--
Aleks Kleyn
http://www.geocities.com/aleks_kleyn

Hi Alex,

check this KB article about how to create a hierarchical recordset programmatically:

http://support.microsoft.com/defaul...kb;en-us;196029

--
Val Mazur
Microsoft MVP
Check Virus Alert, stay updated
http://www.microsoft.com/security/incident/blast.asp
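
For reference, the KB article's technique uses the MSDataShape provider with
no real backend; a fabricated parent/child recordset then looks roughly like
this (a sketch following the KB's pattern, with made-up field names):

Dim cn As New ADODB.Connection
Dim rs As New ADODB.Recordset
' fabricated hierarchies require the data shaping provider
cn.Open "Provider=MSDataShape;Data Provider=none"
rs.Open "SHAPE APPEND NEW adChar(30) AS a1, " & _
        "((SHAPE APPEND NEW adChar(30) AS b1, NEW adChar(30) AS a1key) " & _
        "RELATE a1 TO a1key) AS a2", cn
' the chapter column a2 now holds a child recordset for each parent row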

"ALEX KLEIN" <alekskleyn@.optonline.net> wrote in message
news:Vip2b.193279$_R5.71553770@.news4.srv.hcvlny.cv .net...
> I want to use fabricated hierarchical recordset in VB6 using ADO. I wrote
> code like
> dim rs as adodb.recordest
> set rs=new adodb.recordest
> rs.fields.append "a1",adChar,30
> Then in loop I put
> rs.addnew
> rs("a1")=...
> re.update
> when I associated this with hierarchical flexgrid I saw what I expected.
On
> the next step I added line
> rs.fields.append "a2",adChapter
> and this operator gave me error that I use wrong parms. Then I realized
that
> I should use specific connection. But with this connection adChar stopped
to
> work also. Is it possible to resolve this problem?
> --
> Aleks Kleyn
> http://www.geocities.com/aleks_kleyn

F8 to open object explorer. How to close?

You can use the F8 key to open Object Explorer. How do you close it with a keyboard shortcut?

Unfortunately, it doesn't toggle closed (like in QA).

However, you can set the 'Auto-Hide' property to True and it will 'mostly' disappear when not in use.

|||

Arnie is correct. Please file this as an issue on Microsoft Connect. http://connect.microsoft.com/sqlserver/feedback/.

Paul A. Mestemaker II

Program Manager

Microsoft SQL Server Manageability

http://blogs.msdn.com/sqlrem/

F7 Key : Summary -> Object Explorer Details

Hi,

Where is the F7 key that in the past showed the Summary?

The Summary has been changed to 'Object Explorer Details' but it is no longer accessible by pressing the F7 key :-(

Why did you do that MS ?

You can set your keyboard scheme as that of Sql 2000. Go to Tools -> Options. Select Environment -> Keyboard and then choose the 'SQL Server 2000' Keyboard scheme.

Hope that helps.

Thanks,
Kuntal

|||

Sure it helps !

Thanks a lot.

|||With the SQL Server 2000 keyboard scheme, F2 for renaming doesn't work. I am not altogether happy with that scheme... looking forward to a real fix or a better workaround.

F12 Not working

I just reformatted my HD, reloaded the OS and all applications on my dev
machine. Now that VS2003 is reloaded on the machine and I'm starting to
work in it again, I noticed that the F12 doesn't work anymore. Normally
when I click on a variable and hit F12 the focus will move to where that
variable is defined. Now it does nothing.
Any ideas on why this is so? I also loaded VS2005 Pro on this machine as
well, but assume that it's not causing the problem.
Thanks.
moondaddy@nospam.nospam

Sorry, I posted this to the wrong group. It should have gone to
vsnet.general.
moondaddy@nospam.nospam
"moondaddy" <moondaddy@nospam.nospam> wrote in message news:...
>I just reformatted my HD, reloaded the OS and all applications on my dev
>machine. <snip>

f1 -- need help

declare @execution_string varchar(1000),
@email_id varchar(500),
@filename varchar(200),
@reportname varchar(100),
@osql_query varchar(1000),
@fieldname varchar(8000),
@fieldnametemp varchar(100),
@query varchar(8000),
@seperator1 varchar(2),
@seperator2 varchar(2)

select @fieldname =''
select @fieldnametemp = ''
exec sp_serveroption [myserver] , 'data access', 'true'
set nocount on
declare reports_scheduled cursor for
select 'exec mydb.dbo.'+a.parameter_string , b.report_name from report_schedule a , report_master b
where a.Active='Y'
AND a.frequency like '%-'+SUBSTRING(datename(weekday,getDate()),0,3)+'-%' and a.report_no = b.report_no

open reports_scheduled
fetch next from reports_scheduled into @execution_string,@reportname
while @@fetch_status = 0
begin

declare @l_str nvarchar(555)
select @l_str ="CREATE procedure temp_proc as select top 100 * into temp_table from openquery ([myserver], """ + @execution_string + """)"
exec (@l_str)
exec temp_proc
select @filename = 'c:\' + @reportname+ convert(varchar(10),getdate(),112)+'.csv'
select @filename= rtrim(ltrim(replace(@filename,' ','')))
declare fnames cursor for
select name from syscolumns where id in (select id from sysobjects where name = 'temp_table')
open fnames
fetch next from fnames into @fieldnametemp
while @@fetch_status = 0
select @seperator1 = '"'
select @seperator2 = '" '
begin
select @fieldname = '''"'''+'+convert(varchar,replace(['+@fieldnametemp + '],'+@seperator1+','+@seperator2+'))+'+ '''"'''+ ','+ @fieldname
fetch next from fnames into @fieldnametemp
end
close fnames
deallocate fnames
select @fieldname = left(@fieldname,len(@fieldname)-1)
select @fieldname
select @query = 'echo select '+@fieldname+ ' from temp_table > c:\temp_query.txt'
select @query
exec master..xp_cmdshell @query
select @osql_query = 'osql -E -d mydb -i c:\temp_query.txt -s , -n -w 10000 -o '+ @filename + ' -c'
select @osql_query
exec master..xp_cmdshell @osql_query

exec master..xp_sendmail @recipients = '123@123.com'
,@message ='This is a system generated mail. Please do not reply to this mail'
,@attachments = @filename
,@subject = @reportname

drop table temp_table
drop procedure temp_proc
fetch next from reports_scheduled into @execution_string ,@reportname
end
close reports_scheduled
deallocate reports_scheduled

The results from any SP are taken and sent to a given mail address in Excel
format.

This procedure gives me an error when I try to run it:

Server: Msg 7391, Level 16, State 1, Procedure temp_proc, Line 1
[Microsoft][ODBC SQL Server Driver][SQL Server]The operation could not be performed because the OLE DB provider 'SQLOLEDB' was unable to begin a distributed transaction.

[OLE/DB provider returned message: New transaction cannot enlist in the specified transaction coordinator. ]

OLE DB error trace [OLE/DB Provider 'SQLOLEDB' ITransactionJoin::JoinTransaction returned 0x8004d00a].

DTC is up and running and no problems in that.

Any idea what might be causing this error .

It was running earlier, and the change I made on the server was enabling
replication.

select @l_str ="CREATE procedure temp_proc as select top 100 * into temp_table from openquery ([myserver], """ + @execution_string + """)"
exec (@l_str)
exec temp_proc
exec temp_proc

What do you expect this to do?
Besides being bad practice to create SPs like this - why have you made it an SP when it looks like it could be dynamic SQL?
Exec'ing it twice will give an error, as will running your procedure again if it fails after the creation and before the drop.

Tables and SPs are meant to be permanent objects and not created/destroyed dynamically.

If you need to do something like this I would recommend a global temp table - and as your structure does not change, create it once outside the loop (using "and 1=0" appended to the where clause) and truncate it before populating with an insert inside the loop.

I'm guessing your error is due to the provider for myserver, whatever that is. If it's a self-linked server, check how it's set up.

|||Tables and SPs are meant to be permanent objects and not created/destroyed dynamically.

If you need to do something like this I would recommend a global temp table - and as your structure does not change, create it once outside the loop (using "and 1=0" appended to the where clause) and truncate it before populating with an insert inside the loop.

That's the whole point ... I do not know the structure of the temp table ... it's the resultset returned by the stored procedure executed in the openquery part.

|||while @@fetch_status = 0
select @seperator1 = '"'

select @seperator2 = '" '
begin
select @fieldname = '''"'''+'+convert(varchar,replace(['+@fieldnametemp + '],'+@seperator1+','+@seperator2+'))+'+ '''"'''+ ','+ @fieldname
fetch next from fnames into @fieldnametemp
end

found the error
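
For reference, the bug the poster found: the two separator assignments sit
between WHILE and BEGIN, so only the first SELECT is parsed as the loop body,
the FETCH is never reached, and the loop spins forever. Moving the
assignments above the loop gives the intended shape:

select @seperator1 = '"'
select @seperator2 = '" '
while @@fetch_status = 0
begin
select @fieldname = '''"'''+'+convert(varchar,replace(['+@fieldnametemp + '],'+@seperator1+','+@seperator2+'))+'+ '''"'''+ ','+ @fieldname
fetch next from fnames into @fieldnametemp
end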

F I L T E R HELP!!! PLEASE!!!! THIS CAN'T BE THAT HARD

In SSRS 2005, what do I have to put in the Expression, Operator, and Value fields of the Filters tab of my table's properties to ensure that only customers matching one of the ORs below are shown in my report?

Here is a screenshot of the filter tab where I want this all to happen (properties of my table):
http://www.photopizzaz.biz/filtertab_ssrs2005table.jpg

Essentially, I want this check to filter the records in my table, matching this criteria from my dataset:

(Fields!Branch.Value = '00002' and
Fields!CustomerNumber.Value = '0000002' or
Fields!CustomerNumber.Value = '0000003' or
Fields!CustomerNumber.Value = '0000004' or
Fields!CustomerNumber.Value = '0000155' or
Fields!CustomerNumber.Value = '0000156' or
Fields!CustomerNumber.Value = '0000159' or
Fields!CustomerNumber.Value = '0000160' or
Fields!CustomerNumber.Value = '0000161' or
Fields!CustomerNumber.Value = '0000118' or
Fields!CustomerNumber.Value = '0000153' or
Fields!CustomerNumber.Value = '0000152' or
Fields!CustomerNumber.Value = '0000108' or
Fields!CustomerNumber.Value = '0000158' or
Fields!CustomerNumber.Value = '0000133')

OR

(Fields!Branch.Value <> '00002' and
Fields!CustomerNumber.Value = '0000053' or
Fields!CustomerNumber.Value = '0000058' or
Fields!CustomerNumber.Value = '0000072' or
Fields!CustomerNumber.Value = '0000073' or
Fields!CustomerNumber.Value = '0000079' or
Fields!CustomerNumber.Value = '0000080' or
Fields!CustomerNumber.Value = '0000143' or
Fields!CustomerNumber.Value = '0000146' or
Fields!CustomerNumber.Value = '0000157' or
Fields!CustomerNumber.Value = '0000135')

Fav, why don't you put these values into a SQL DB table and add it as an additional dataset for your report? Then just reference the new column/table in the Expression definition.

Just a thought .....

|||

because while that would work, it's more overhead. I want to be able to use the reporting interface; that's what it's for. I'm not one to code all of this in the backend for a report as complicated as the one I have. I'd like to be able to use the tool, SSRS, and manage the filters, at least the major ones, through the UI. I don't like managing everything in my stored procs and tables; that is way too much work, and that's why a reporting interface along with .NET and VB can help. I already have around 7 stored procs and 5 tables for this report; I don't need any more overhead. I don't want reporting to be 100% of my life at work; it is easier to maintain this with SSRS report filters. The problem is, SSRS 2005 is so new that nobody seems to be able to tell me how to form expressions in table filters at this point.

Why use Reporting Services if you can't harness filters like I want? Think of it in reverse: if I did all this in SQL, first it would take me a year, and second, I would then have no need for a UI. I don't have time to code everything in SQL; that's what the UI is there to help with. I come from Crystal - if I had to code all that in SQL, adios to all my other projects. The stored procs behind the 5 datasets in this report have enough code in them already; it's a nightmare.

|||

You can try this one:

(Fields!Branch.Value = '00002' and
Fields!CustomerNumber.Value in ( '0000002' ,'0000003' , '0000004', '0000155' , '0000133') )

Something like this; you can put all these constant values in the expression after selecting the IN operator.

|||

thanks, so just so I have this right....in the filter screen, I'd put something like:

Expression Operator Value

Fields!CustomerNumber IN (Fields!Branch.Value = '00002' and
Fields!CustomerNumber.Value in ( '0000002' ,'0000003' , '0000004', '0000155' , '0000133') )

I need to be sure I'm putting your expression in the right place and that my operator is correct. So would your expression go on the Expression side or the Value side, looking at my URL above?

|||

So, I tried this, but it doesn't like the syntax and how I set it up in this whole tab. Am I not doing this right?

http://www.photopizzaz.biz/filtertab_ssrs2005table2.jpg

|||

I think you have to use it like this:

Expression Operator Value

Fields!Branch.Value = '00002'

Fields!CustomerNumber.Value IN ( '0000002' ,'0000003' , '0000004', '0000155' , '0000133') )

Use double quotes instead of single quotes if this does not work.

Or else you can try this:

Expression Operator Value

Fields!CustomerNumber.Value IN ( iif(Fields!Branch.Value = '00002' ,('0000002' ,'0000003' , '0000004', '0000155' , '0000133'), ('0000001'))

Use the IIF function; check it out in MSDN and try it.

|||thanks, but what about the other OR piece?|||

I think you should use IIF for this - so for that OR piece you have to include all the IDs where I have included ('0000001').

That is, the IIF function needs three parameters. The first parameter is an expression, in your case checking whether the branch value is '00002'.

The second parameter is the value if the expression is true: include all the IDs for when the branch value is '00002'.

The third parameter is the value if the expression is false: include all the IDs for when the branch value is not equal to '00002'.
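
Putting that advice together, the Filters tab entry might look like the
sketch below. It assumes the In operator accepts the array returned by Split
(a common workaround for multi-value comparisons); the customer lists are the
ones from the original post:

Expression:  =Fields!CustomerNumber.Value
Operator:    In
Value:       =IIF(Fields!Branch.Value = "00002",
               Split("0000002,0000003,0000004,0000155,0000156,0000159,0000160,0000161,0000118,0000153,0000152,0000108,0000158,0000133", ","),
               Split("0000053,0000058,0000072,0000073,0000079,0000080,0000143,0000146,0000157,0000135", ","))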

Extremely bad performance with stored procedures

For the last few days my SQL Server has been executing stored procedures badly.
When I run Query Analyzer to execute a certain stored procedure, it takes more than 2 minutes to execute. When I copy the contents of the SP into Query Analyzer and run it as a plain SQL statement, it finds the results within a second.
This behavior disappears after a while and comes back randomly.
I had the problem Friday afternoon, then Tuesday, and now again.
Between those days my SQL Server works fine.
Can anybody please help me with this problem?

Have you tried using the "with recompile" option? Your query plan is probably based on an outdated data distribution or schema. Running with "with recompile" will regenerate the query plan. Also, do the parameters to the stored procedure vary enough that the execution plans should change? Do a comparison in Query Analyzer using "show execution plan".

|||I was able to eliminate the problem by altering the SP.
In the SP there were more than 4 joins to the same table.
When I made a user-defined function and replaced those joins with this function, it all worked fine.

But one question remains: how is it possible that Query Analyzer didn't have problems with the joins but the SP did?

|||Did you try the recompile? Sometimes, if the table(s) involved in the query change enough, the query plan needs to change as well. When you run it in Query Analyzer, the query plan is generated dynamically. For the SP, it could still be using the original query plan from when you created it. That is why I suggested running the two in Query Analyzer with "show execution plan".
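
For reference, the "with recompile" suggestion in T-SQL (the procedure name
here is hypothetical):

EXEC dbo.MyProc WITH RECOMPILE    -- build a fresh plan for this execution only
EXEC sp_recompile 'dbo.MyProc'    -- invalidate the cached plan so the next run recompiles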

Extremely weird SQL problem

Figure this...
INSERT INTO Recipient (RC_D_ID, RC_C_ID, RC_DATE)
SELECT DISTINCT 20000, subscriber.id_email, GETDATE()
FROM subscriber, Listmember
WHERE Listmember.LM_C_ID = subscriber.id_email
AND ((subscriber.banned = 0
AND subscriber.id_email < 5000000)
AND (Listmember.LM_L_ID = 253))
says ...
648842 rows affected
however...
SELECT DISTINCT 20000, subscriber.id_email, GETDATE()
FROM subscriber, Listmember
WHERE Listmember.LM_C_ID = subscriber.id_email
AND ((subscriber.banned = 0
AND subscriber.id_email < 5000000)
AND (Listmember.LM_L_ID = 253))
only returns 267564
in the Query Analyzer window.
And to top it all off... about 30 minutes after I
noticed it and investigated... it stopped happening.
Now both queries above affect the same number of rows.
This is the 4th time this has happened.
I haven't found any patterns in log files or memory
settings or data types of fields, etc.
It's a brand new SQL Server 2000 Enterprise Edition box:
6 GB RAM, dual 3.06 GHz CPUs with 1 MB cache, PERC4/DC RAID 10,
on 5 x 145 GB SCSI drives.
This problem wasn't happening on the Standard Edition
that we migrated from about a month ago.
Any help would be greatly appreciated.

"Nick" <anonymous@discussions.microsoft.com> wrote:
> Figure this...
> INSERT INTO Recipient (RC_D_ID, RC_C_ID, RC_DATE)
> SELECT DISTINCT 20000, subscriber.id_email, GETDATE()
> <snip>
--
Hi Nick,
I would update statistics, free the procedure cache, and rebuild indexes if I
saw inconsistent results like this. I would also review execution plans for
any discrepancies.
Hope this helps,
--
Eric Cárdenas
SQL Server support
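
In SQL Server 2000 syntax, Eric's three suggestions look like this (table
names taken from the post; a diagnostic sketch, not a guaranteed fix):

UPDATE STATISTICS subscriber WITH FULLSCAN
UPDATE STATISTICS Listmember WITH FULLSCAN
DBCC FREEPROCCACHE              -- flush all cached query plans
DBCC DBREINDEX ('subscriber')   -- rebuild all indexes on the table
DBCC DBREINDEX ('Listmember')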

Extremely Slow Trigger Problem

I have an after update trigger that is killing me and I can't figure out why.
When I run an update query that fires the trigger and look at the execution
plan, 85% of the cost is in the OPEN __0005 statement. I'm only updating 1
row, so this makes no sense to me. The I/O cost of opening the cursor is in
the 1100 range. Anyone got any ideas - I'm fresh out.
CREATE TRIGGER trgUpdate ON ITEMTASKS
FOR UPDATE
AS
DECLARE
@ITE_ID INT,
@TAS_ID INT,
@STATUS TINYINT,
@OLDSTATUS TINYINT
IF (@@ROWCOUNT = 0) RETURN
IF ( UPDATE( ITS_STATUS) ) BEGIN
SET NOCOUNT ON
DECLARE __0005 CURSOR LOCAL FAST_FORWARD FOR
SELECT I.ITE_ID, I.ITS_STATUS, I.TAS_ID, D.ITS_STATUS
FROM INSERTED I, DELETED D
WHERE I.ITE_ID=D.ITE_ID AND I.TAS_ID = D.TAS_ID
OPEN __0005
FETCH NEXT FROM __0005
INTO @ITE_ID, @STATUS, @TAS_ID, @OLDSTATUS
WHILE (@@FETCH_STATUS = 0 ) BEGIN
IF (@STATUS = 6 ) BEGIN
UPDATE ITEMS
SET ITE_STATUS = 6
WHERE ITE_ID = @ITE_ID
END
ELSE IF (@STATUS = 2) AND ((SELECT TAS_ISLAST FROM TASKS WHERE TAS_ID =
@TAS_ID)=1) BEGIN
UPDATE ITEMS
SET ITE_STATUS = 2
WHERE ITE_ID = @ITE_ID
END
ELSE BEGIN
UPDATE ITEMS
SET ITE_STATUS = 1
WHERE ITE_ID = @ITE_ID
END
EXEC ADD2SUMMARY_ @ITE_ID, @TAS_ID, @STATUS, @OLDSTATUS
FETCH NEXT FROM __0005
INTO @ITE_ID, @STATUS, @TAS_ID, @OLDSTATUS
END
CLOSE __0005
DEALLOCATE __0005
SET NOCOUNT OFF
END

I strongly recommend you don't use cursors at all in triggers. Why
can't you re-write this and the contents of Add2Summary as set-based
SQL?
If you need more help then please come back with DDL, sample data
INSERT statements and show your required end result.
David Portas
SQL Server MVP
--
|||Supposing that the cursor is necessary, I have two suggestions:
First: declare the cursor STATIC and READ_ONLY.
Second: use EXISTS in the following statement:
-- ELSE IF (@STATUS = 2) AND ((SELECT TAS_ISLAST FROM TASKS WHERE TAS_ID =
@TAS_ID)=1) BEGIN
ELSE IF (@STATUS = 2) AND exists(SELECT * FROM TASKS WHERE TAS_ID =
@TAS_ID and TAS_ISLAST = 1) BEGIN
I do not know anything about the sp being executed, but the three update
statements can be implemented without the use of the cursor.
AMB
"Arghknork" wrote:

> I have an after update trigger that is killing me and I can't figure out
> why. When I run an update query that fires the trigger and look at the
> execution plan, 85% of the cost is in the OPEN __0005 statement.
> <snip>
>
|||Agreed, but I inherited the code and it's in production this way. It has
been working okay up until about a week ago, and I think there is an
underlying problem here I'm not seeing. Even with a cursor, it shouldn't
cost so much to open that cursor with 1 row in the inserted/deleted tables.
"David Portas" wrote:

> I strongly recommend you don't use cursors at all in triggers. Why
> can't you re-write this and the contents of Add2Summary as set-based
> SQL?
> If you need more help then please come back with DDL, sample data
> INSERT statements and show your required end result.
> --
> David Portas
> SQL Server MVP
> --
>
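
For what it's worth, here is a set-based sketch of the status logic David is
referring to, written to run inside the trigger in place of the cursor loop.
It leaves the ADD2SUMMARY_ call aside (that procedure would need its own
set-based treatment), so it is an illustration rather than a drop-in
replacement:

UPDATE it
SET ITE_STATUS = CASE
        WHEN i.ITS_STATUS = 6 THEN 6
        WHEN i.ITS_STATUS = 2 AND t.TAS_ISLAST = 1 THEN 2
        ELSE 1
    END
FROM ITEMS it
JOIN inserted i ON i.ITE_ID = it.ITE_ID
LEFT JOIN TASKS t ON t.TAS_ID = i.TAS_ID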

Extremely Slow Table

Hi,

I have a table defined as
CREATE TABLE [SH_Data] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[Date] [datetime] NULL ,
[Time] [datetime] NULL ,
[TroubleshootId] [int] NOT NULL ,
[ReasonID] [int] NULL ,
[reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS
NULL ,
[maj_reason_id] [int] NULL ,
[maj_reason_desc] [nvarchar] (255) COLLATE
SQL_Latin1_General_CP1_CS_AS NULL ,
[ActionID] [int] NULL ,
[action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS
NULL ,
[WinningCaseTitle] [nvarchar] (255) COLLATE
SQL_Latin1_General_CP1_CS_AS NULL ,
[Duration] [int] NULL ,
[dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS
NULL ,
[ConnectMethod] [nvarchar] (255) COLLATE
SQL_Latin1_General_CP1_CS_AS NULL ,
[dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS
NULL ,
[HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS
NULL ,
[RouterUsedToConnect] [nvarchar] (255) COLLATE
SQL_Latin1_General_CP1_CS_AS NULL ,
[OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
[WinXpSp2Installed] [nvarchar] (255) COLLATE
SQL_Latin1_General_CP1_CS_AS NULL ,
[Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS
NULL ,
[Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL
,
[EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS
NULL ,
[Acct_Num] [int] NULL ,
[Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL ,
CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED
(
[TroubleshootId]
) ON [PRIMARY]
) ON [PRIMARY]
GO

Which contains 5.6 Million rows and has non clustered indexes on Date,
ReasonID, maj_Reason, Connection. Compared to other tables on the same
server this one is extremely slow. A simple query such as :

SELECT
SD.reason_desc,
SD.Duration,
SD.maj_reason_desc,
SD.[Connection],
SD.aolEnteredBy

FROM dbo.[Sherlock Data] SD

Where SD.[Date] > Dateadd(Month,-2,Getdate())

takes over 2 minutes to run ! I realise the table contains several
large columns which make the table quite large but unfortunately this
cannot be changed for the moment.

How can I assess what is causing the long query time? And what
could I possibly do to speed this table up? The database itself is
running on a dedicated server which hosts some other databases, none of
which have this performance issue.

Anyone have any ideas?

Do other queries, which benefit from the indexes, also show bad performance?

Where SD.[Date] > Dateadd(Month,-2,Getdate())

does a row-based comparison, possibly not using the indexes - what does the
query plan tell you about it?

(Mark the query in QA and press CTRL+L to see it.)

Jens Suessmeyer.

|||One possibility is to change the PK to nonclustered and the index on
[Date] to clustered. If you often do range/grouping queries based on
[Date] then that should be useful, but it might also impact queries
using TroubleshootId, so you need to test any change with a number of
representative queries.

Other general advice would be to review the query plan in QA (Ctrl+K),
run UPDATE STATISTICS on the table, and also try tracing a typical
workload and running it through the Index Tuning Wizard to see what it
can recommend.

If you need more specific comments, you should post the query plan
(using SET SHOWPLAN_TEXT), and it might also be useful to know how many
rows are returned by the query.

Simon

|||Yeah, I've tried a test on both, and a query using the indexes takes about
20 seconds less to run.

I tried CTRL+L but it's not making much sense to me.

|||garydevstore (GaryDataStore@gmail.com) writes:
> Which contains 5.6 Million rows and has non clustered indexes on Date,
> ReasonID, maj_Reason, Connection. Compared to other tables on the same
> server this one is extremely slow. A simple query such as :

Maybe some terminology is in order here. A road can be fast, but that
does not help you if your car has a steering wheel out of order, causing
you to zig-zag over the road. A car can be fast, but that does not help
if the road is in poor condition, so you cannot drive faster than 30 km/h
anyway.

In this case, the table is the road, and the query plan is the car. A
table itself does not move, but it can be badly fragmented in which case
it can be slow to drive through.

More likely, the query plan is not the best for the query. This is your
query:

> SELECT
> SD.reason_desc,
> SD.Duration,
> SD.maj_reason_desc,
> SD.[Connection],
> SD.aolEnteredBy
> FROM dbo.[Sherlock Data] SD
> Where SD.[Date] > Dateadd(Month,-2,Getdate())

There is a non-clustered index on Date. Assuming that rows are added
to this table regularly, there are presumably quite a few rows that
fit this condition. There are two ways for the optimizer to evaluate
this query: using the index, or scanning the table. The index is good
if only few rows are hit, but if many rows are hit the table scan is
faster. This is because, with the index you will need to read the same
page more than once.

The optimizer makes its choice of plan from the statistics SQL Server
has sampled about the table. The statistics may be out of date (even
if by default SQL Server auto-updates statistics). Try an UPDATE
STATISTICS WITH FULLSCAN, to see if this makes any difference.

But the road can also be in poor condition, that is the table can be
badly fragmented. This can be analysed with DBCC SHOWCONTIG and
remedied with DBCC DBREINDEX.

As suggested in other posts, you should look at the query plan, and see
if it says Clustered Index Scan or Index Seek + Bookmark Lookup.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

|||Simon Hayes wrote:
> One possiblility is to change the PK to nonclustered and the index on
> [Date] to clustered. If you often do range/grouping queries based on
> [Date] then that should be useful, but it might also impact queries
> using TroubleshootId, so you need to test any change with a number of
> representative queries.

A covering index might be an option, too, especially if there are several
queries with multiple criteria.

One question to the OP: why do you have Date and Time both as timestamp
columns? Other remarkable things: all char columns seem to be unicode
(nvarchar) and have length (255). You might save space by changing to
varchar (if possible) and / or reducing the length. Also, this doesn't
really look like a normalized schema. I would at least expect having ids
for EnteredBy and probably some others.

> Other general advice would be to review the query plan in QA (Ctrl+K),
> run UPDATE STATISTICS on the table, and also try tracing a typical
> workload and running it through the Index Tuning Wizard to see what it
> can recommend.
> If you need more specific comments, you should post the query plan
> (using SET SHOWPLAN_TEXT), and it might also be useful to know how
> many rows are returned by the query.

Plus complete index DDL.

robert
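
Pulling Simon's and Erland's suggestions together in SQL Server 2000 syntax
(a sketch against the posted DDL; the new index name is made up, and moving
the clustered key affects every query on the table, so test first):

-- maintenance checks first
UPDATE STATISTICS SH_Data WITH FULLSCAN
DBCC SHOWCONTIG ('SH_Data')      -- look for heavy fragmentation
DBCC DBREINDEX ('SH_Data')       -- rebuild if it is fragmented

-- possible experiment: cluster on [Date] so the two-month range scan
-- reads contiguous pages (assumes no foreign keys reference the PK)
ALTER TABLE SH_Data DROP CONSTRAINT PK_SH_Data
ALTER TABLE SH_Data ADD CONSTRAINT PK_SH_Data PRIMARY KEY NONCLUSTERED (TroubleshootId)
CREATE CLUSTERED INDEX IX_SH_Data_Date ON SH_Data ([Date])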

Extremely slow runtime running SSRS on a cube

Hi everyone:

I am developing an SSRS report over a cube. When I drag and drop fields, it works fine; it runs in a few minutes. I am selecting data from only a single day - about 10,000 records. However, when I add some calculated fields it takes much longer: it has now been running for 7 hours. The calculated fields are pretty simple. Some select one field over another depending upon the value of a third field; one is two fields multiplied together; one is a constant times a field. Something's obviously wrong here. Anybody seen this or have a solution?

Barry

If you use the MDX query designer (the GUI) in SSRS, I believe it puts the NON EMPTY keyword on both axes. So as soon as you put calculated measures into the query, it now has to do a non-empty against the calculation. This is MUCH more expensive unless you've properly defined a NON_EMPTY_BEHAVIOR.

Option 1: Do a search for NON_EMPTY_BEHAVIOR and add that to all your calculated measures.

Option 2: Flip over to the advanced tab where you can manually control the MDX query and start using the NonEmpty function instead of the NON EMPTY keyword. If you use the NonEmpty function, then you can specify which (physical) measure it should check.

That make sense?
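
For Option 1, the property is attached to the calculated member definition
itself; a sketch using the measure from later in this thread (choosing
[Measures].[Quantity] as the non-empty anchor is an assumption - pick a
physical measure that is empty exactly when the calculation is):

WITH MEMBER [Measures].[Fill Price]
AS 'IIF( [Dim Trade Info].[Hold Type]="CASH", [Dim TradeId].[Spot 3 Month], [Dim TradeId].[Price All In] )',
   NON_EMPTY_BEHAVIOR = '[Measures].[Quantity]'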

|||

Thanks so much! That worked like a charm.

I am new to MDX, been using Oracle and am pretty helpless without the query builder.

|||

Well, it ran pretty fast, but my calculated field disappears when I switch to design mode. Here's what the query looks like:

WITH MEMBER [Measures].[Fill Price]

AS 'IIF( [Dim Trade Info].[Hold Type]="CASH", [Dim TradeId].[Spot 3 Month], [Dim TradeId].[Price All In] )'

SELECT NONEMPTY ( { [Measures].[Slippage Tick Open Price], [Measures].[Slippage Open Price], [Measures].[Slippage G1 Bank], [Measures].[Quantity], [Measures].[Slippage Contract Open Price], [Measures].[Fill Price], [Measures].[Slippage Contract TWAP], [Measures].[Slippage Tick TWAP], [Measures].[Slippage TWAP], [Measures].[Slippage G1 Model] } )

ON COLUMNS, NONEMPTY ( { ([Fill Date].[Full Date].[Full Date].ALLMEMBERS * [Dim Team].[Team].[Team].ALLMEMBERS * [Dim Trader].[Trader Name].[Trader Name].ALLMEMBERS * [Dim Trade Info].[Hold Type].[Hold Type].ALLMEMBERS * [Dim Model].[Model Name].[Model Name].ALLMEMBERS * [Dim Market].[Report Name].[Report Name].ALLMEMBERS * [Dim TradeId].[Direction].[Direction].ALLMEMBERS * [Dim TradeId].[Spot 3 Month].[Spot 3 Month].ALLMEMBERS * [Dim TradeId].[Price All In].[Price All In].ALLMEMBERS * [Dim TradeId].[Open Price].[Open Price].ALLMEMBERS * [Dim TradeId].[TWAP].[TWAP].ALLMEMBERS ) } )

DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS FROM ( SELECT ( [Fill Date].[Full Date].&[2007-08-08T00:00:00] : [Fill Date].[Full Date].&[2007-08-09T00:00:00] ) ON COLUMNS FROM [Trade]) CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS

The part referenced by 'IIF( [Dim Trade Info].[Hold Type]="CASH", [Dim TradeId].[Spot 3 Month], [Dim TradeId].[Price All In] )' is what I want to select. It disappeared, though.

Barry

Extremely slow report rendering - PLEASE HELP!

Hi,
We are running Reporting Services 2005 32-bit edition on a Windows 2003 R2
x64 platform, and SQL Server 2005 64-bit database server on Windows 2003 R2
x64. The Reporting Services catalog and the reporting datasource database
are both hosted remotely on the database server.
We are using the Reporting Services ReportViewer web-forms control version
8.0.50727.817 in our ASP.NET 2.0 32-bit web application. We are using custom
forms-authentication.
We have tested this same setup all hosted on a single machine, and it
flies - even with complex reports we can render a report in a couple of
seconds, so we are confident that our report queries and the database itself
are not an issue.
HOWEVER, as soon as we move the RS catalog database to the remote database
server (where it needs to be in our production environment), performance
becomes unacceptably slow - reports that rendered in a couple of seconds on
a single machine typically take 10 to 20 times longer to render.
Please, any advice on what could be causing this problem will be gratefully
received. We have explored things including network addressing, and ensured
that our database server name (which Reporting Services is configured to use
to talk to the database server) has an entry in the internal DNS.
Chris Lewis

Hello Chris,
I understand that when you run the report against an x64 server which also
hosts the database, the report renders slowly.
You may run the following SQL Statement for the report server database.
select CAST(C.Name AS VARCHAR(20)) [ReportName],
E.TimeDataRetrieval,
E.TimeProcessing,
E.TimeRendering,
E.ByteCount,
E.[RowCount]
from executionlog E inner join catalog C
on E.ReportID = C.ItemID
Please let me know the result so that I can provide further assistance.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.|||Hi,
We have been doing further testing, and we are 90% sure that custom (forms)
authentication and report access checking is at the root of the problem - I
think the local/remote catalog issue that I originally raised was a
red-herring, so please ignore this. We ran the query you supplied, and the
results showed that our database and the queries were performing well - no
problems with getting data for reports.
So, here is where we believe the problems lie... We have an Internet-facing
ASP.NET application that uses Reporting Services, and we have over 5000
individual users on our SQL Server database - each is registered with a
unique user ID on Reporting Services. When our custom authentication
components are called by SSRS to check access to reports, resources, etc.
for the logged-in user the access check is taking a long time to execute,
and we think we know why - here is a snippet of the code (taken from the
SSRS Forms authentication sample code, which we are using to a large extent)
in Authorization.cs which is a class within our custom authentication
extension component that implements IAuthorizationExtension. The key issue
is the foreach(AceStruct ace in acl) loop, which iterates once for each SSRS
user - and with 5000+ users (and growing rapidly) this is impacting
performance in a major way:
public bool CheckAccess(
    string userName,
    IntPtr userToken,
    byte[] secDesc,
    CatalogOperation requiredOperation)
{
    AceCollection acl = DeserializeAcl(secDesc);
    foreach (AceStruct ace in acl)
    {
        // First check to see if the user or group has an access control
        // entry for the item
        if (0 == String.Compare(userName, ace.PrincipalName, true,
            CultureInfo.CurrentCulture))
        {
            // If an entry is found,
            // return true if the given required operation
            // is contained in the ACE structure
            foreach (CatalogOperation aclOperation in ace.CatalogOperations)
            {
                if (aclOperation == requiredOperation)
                    return true;
            }
        }
    }
    return false;
}
My question is, how can we do this better and more efficiently? These access
checks are performed multiple times for each report request, so one question
I have is: is there a way we can cache the results of a CheckAccess call for
a user, so we don't have to iterate through all the users again for that
user's session?
We need a solution that will scale for unlimited users.
Thanks for any advice.
Chris
"Wei Lu [MSFT]" <weilu@.online.microsoft.com> wrote in message
news:MvBzX9OIIHA.4268@.TK2MSFTNGHUB02.phx.gbl...
> Hello Chris,
> I understand that when you run the report on an x64 server which also hosts
> the database, the report renders slowly.
> You may run the following SQL statement against the report server database.
> select CAST(C.Name AS VARCHAR(20)) [ReportName],
> E.TimeDataRetrieval,
> E.TimeProcessing,
> E.TimeRendering,
> E.ByteCount,
> E.[RowCount]
> from executionlog E inner join catalog C
> on E.ReportID = C.ItemID
>
> Please let me know the result so that I can provide further assistance.
>
> Sincerely,
> Wei Lu
> Microsoft Online Community Support
> ==================================================
> When responding to posts, please "Reply to Group" via your newsreader so
> that others may learn and benefit from your issue.
> ==================================================
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>|||Hello Chris,
Have you considered putting the credentials into the Session?
In ASP.NET, you could put anything you want into the Session.
Also, for more assistance, you may consider checking with the
microsoft.public.dotnet.framework.aspnet newsgroup.
Hope this helps.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.|||Hi,
Thanks for your response. I am not sure that we could get the information we
need from the aspnet newsgroup - I think this is more of a Reporting Services
specific issue.
I agree that it would make sense to cache the credentials in the Session, if
that were possible to do. My next question is: can we get access to the
ASP.NET session for our website from within the SSRS forms-authentication
custom security extension that is running in the context of the SSRS
ReportServer webservice? If so, can you provide a detailed code example of
how to do this?
Thanks, in anticipation.
Chris
"Wei Lu [MSFT]" <weilu@.online.microsoft.com> wrote in message
news:AoHoHnoIIHA.4200@.TK2MSFTNGHUB02.phx.gbl...
> Hello Chris,
> Have you considered putting the credentials into the Session?
> In ASP.NET, you could put anything you want into the Session.
> Also, for more assistance, you may consider checking with the
> microsoft.public.dotnet.framework.aspnet newsgroup.
> Hope this helps.
> Sincerely,
> Wei Lu
> Microsoft Online Community Support
> ==================================================
> When responding to posts, please "Reply to Group" via your newsreader so
> that others may learn and benefit from your issue.
> ==================================================
> This posting is provided "AS IS" with no warranties, and confers no
> rights.
>|||Hello Chris,
I do not have such a code example yet.
Since this issue may be related to Forms Authentication and the ASP.NET
session, the ASP.NET newsgroup will have more experts on this issue.
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
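
No code example ever surfaced in the thread, so here is one possible way to attack the per-request ACL scan described above: memoize CheckAccess results keyed on user, ACL hash, and operation. This is a sketch only, not part of the SSRS sample - CheckAccessCached, the Hashtable field, and the MD5 key scheme are invented names to be added alongside the sample's existing CheckAccess and CatalogOperation; the cache never expires here, so a real version would need invalidation when role assignments change.

using System;
using System.Collections;
using System.Security.Cryptography;

// Members one could add to the sample's Authorization class; everything
// here except CheckAccess and CatalogOperation is illustrative.

// Thread-safe map of "user|aclHash|operation" -> allowed/denied.
private static readonly Hashtable checkAccessCache =
    Hashtable.Synchronized(new Hashtable());

public bool CheckAccessCached(string userName, IntPtr userToken,
    byte[] secDesc, CatalogOperation requiredOperation)
{
    // Hash the serialized ACL so a changed security policy produces a
    // new cache key rather than a stale hit.
    string aclHash;
    using (MD5 md5 = MD5.Create())
    {
        aclHash = Convert.ToBase64String(md5.ComputeHash(secDesc));
    }
    string key = userName + "|" + aclHash + "|" + requiredOperation.ToString();

    object cached = checkAccessCache[key];
    if (cached != null)
        return (bool)cached; // skip the O(number-of-users) ACL scan

    bool allowed = CheckAccess(userName, userToken, secDesc, requiredOperation);
    checkAccessCache[key] = allowed;
    return allowed;
}

Hashing the descriptor keeps the key small while still distinguishing items with different ACLs, and the MD5 cost is trivial next to scanning a 5000-entry collection on every check.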

Extremely Slow Query Times

Is there something that I can do to improve the query times when using Excel to query my AS2005 cube? It's EXTREMELY slow, even if I'm the only one querying the cube.
The AS server should not be a bottleneck (Windows 2003 x64-bit, dual core AMD Opteron, 7+ gigs of memory, etc.). I feel like our cube is very small with little data as well.
In addition to the slow query times when using Excel, using the Browser in Visual Studio locally on the server results in queries taking longer to execute than I would expect.
What are some things that I can do to improve performance?

This is just a shot in the dark, but have you specified all the member property relationships between the attributes on your dimensions correctly? If you've created your dimensions using the wizard then the vast majority (possibly all) of your attributes will be directly linked to the key attribute only, and this can lead to less than optimal performance.

Here's an example of what I mean: say you have a Geography dimension with Continent, Country, State and City attributes and Address as the key attribute. You know there's a many-to-one relationship between Address and City, City and State, State and Country and Country and Continent, but by default the wizard will only create relationships on Address and City, Address and State, Address and Country and Address and Continent. What you need to do is go to the dimension editor in VS, then in the Attributes pane on the left hand side drag and drop attributes onto other attributes to create these relationships (AS knows about transitive relationships too, so you can delete ones like Address and Continent). Once you've done this then redesign your aggregations (usage-based optimisation might be a good idea too in the medium term) and reprocess, and you should see an improvement in performance.

Chris

|||Chris,

I know what you're referring to, but I don't really understand how to implement it correctly.

Take the following as my example. Let's say I have Product SKU and SKU Description as available attributes in my Products dimension. While editing the Products dimension, I notice that I can drag SKU Description to create a relationship under Product SKU. I can also do the opposite and drag Product SKU to create a relationship under SKU Description. However, I can't do both. Therefore, what is the difference between the two relationships? Ultimately, a Product SKU can have only one SKU Description and vice versa.

Thanks!|||

In this case, yes, you have a 1:1 relationship between Product SKU and SKU Description, but I believe it's still beneficial to put the relationship in (probably by making Description a property of Product SKU, if the latter is the key attribute of your dimension). Relationships are useful for AS when it tries to design aggregations, use aggregations during querying, and for working out which attributes 'exist' with each other, all of which will improve query performance. Defining 1:1 relationships isn't going to have such a big impact, though; do you have any 1:M relationships such as Product Category to Product SKU, Year to Month etc.? Defining them in the dimension is likely to have a much more obvious effect.
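
For what it's worth, the same relationship change can also be scripted with AMO (Microsoft.AnalysisServices) rather than done by drag-and-drop in the designer. The following is a rough sketch from the AMO object model, not a tested script; the server, database, dimension, and attribute names are all placeholders.

using Microsoft.AnalysisServices;

class AddAttributeRelationship
{
    static void Main()
    {
        Server server = new Server();
        server.Connect("localhost"); // placeholder server name

        Database db = server.Databases.FindByName("MyOlapDb");
        Dimension dim = db.Dimensions.FindByName("Geography");

        // Hang State off City instead of off the key attribute, mirroring
        // the drag-and-drop described above. The now-redundant key -> State
        // relationship would then be deleted, since AS understands
        // transitive relationships.
        DimensionAttribute city = dim.Attributes.FindByName("City");
        DimensionAttribute state = dim.Attributes.FindByName("State");
        city.AttributeRelationships.Add(state.ID);

        dim.Update(); // save the change, then reprocess the dimension
        server.Disconnect();
    }
}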

extremely slow query

I have these queries, and when I run them, it takes 15 minutes to return the result. For the moment, events is a table with 101 tuples and leftjets has nearly 700 tuples. I'm going to check if I can see where the bottleneck is.

Code Snippet

/**********************************************************************/
/* Jet Veto Cut 2
* leftJets jets should have Pt not bigger than maxAllowedPtForOtherJets
* see Hadronic Top Cut 2
* m_maxAllowedPtForOtherJets: ptOJets
*/
/*
* TTreeCut::SelectTopCombination, m_theTopComb
* min of m_okTopComb
*/

create view mTopComb
As
select j.*
from topComb as j
where (abs(sqrt(abs((j.j1Ee+j.j2Ee + j.Ee)*(j.j1Ee+j.j2Ee +j.Ee) -
((j.j1px +j.j2px + j.px)*(j.j1px +j.j2px + j.px) +
(j.j1py +j.j2py + j.py)*(j.j1py +j.j2py + j.py) +
(j.j1pz +j.j2pz + j.pz)*(j.j1pz +j.j2pz + j.pz))))
- 174.3))
=
(select min(abs(sqrt(abs((t.j1Ee+t.j2Ee + t.Ee)*(t.j1Ee+t.j2Ee +t.Ee) -
((t.j1px +t.j2px + t.px)*(t.j1px +t.j2px + t.px) +
(t.j1py +t.j2py + t.py)*(t.j1py +t.j2py + t.py) +
(t.j1pz +t.j2pz + t.pz)*(t.j1pz +t.j2pz + t.pz))))
- 174.3))
from topComb as t
where t.eventid=j.eventid)

GO

/*
* TTreeCut::SelectTopCombination, m_theLeftOverJets
* select m_okJets which are not contained in m_theTopComb
*/

create view leftjets
As
select distinct o.*
from okJets as o
where not exists (select o.idap from mtopcomb as j where j.idap=o.idap);

GO

create view jetVetoCut
AS
select distinct e.*
from events e
where not exists (select * from leftjets j where e.idevent=j.eventid and dbo.pt(j.idap)>70);

GO

The WHERE clause contains expressions that are extremely complex, and that makes it very hard for the Query Optimizer to do accurate estimation. Typically, column expressions in the WHERE clause cause optimization problems with plan choice or plan execution, and they make the use of indexes impossible. I am guessing that for these reasons the query optimizer produces a suboptimal plan and hence your query is slow.

The following article describes a way to deal with complex expressions and enable the query optimizer to do proper estimation. I hope that you find this useful: http://blogs.msdn.com/queryoptteam/archive/2006/03/24/560089.aspx

Regards,

Leo Giakoumakis

|||

hi,

here's another alternative... the idea of mTopCombComputed is to return the same result as your mTopComb view, but the query is minimized so that it requires fewer reads.

CREATE FUNCTION mTopCombComputed()
RETURNS @mTopComb TABLE (
    eventid int
    , idap numeric -- just changed datatype according to your DDL
    , Computed numeric -- just changed datatype according to your DDL
)
AS
BEGIN
    INSERT
    INTO @mTopComb
    SELECT j.eventid
        , j.idap
        , Computed = (abs(sqrt(abs((j.j1Ee+j.j2Ee + j.Ee)*(j.j1Ee+j.j2Ee +j.Ee) -
            ((j.j1px +j.j2px + j.px)*(j.j1px +j.j2px + j.px) +
            (j.j1py +j.j2py + j.py)*(j.j1py +j.j2py + j.py) +
            (j.j1pz +j.j2pz + j.pz)*(j.j1pz +j.j2pz + j.pz))))
            - 174.3))
    FROM topComb j

    DELETE j
    FROM @mTopComb j INNER JOIN
    (
        SELECT eventid
            , MIN(Computed) AS MinComputed
        FROM @mTopComb
        GROUP BY
            eventid
    ) t ON j.eventid = t.eventid
    WHERE j.Computed <> t.MinComputed

    RETURN
END
GO

ALTER VIEW leftjets
AS


SELECT DISTINCT
o.*
FROM okJets o LEFT OUTER JOIN
dbo.mTopCombComputed() c ON o.idap = c.idap
WHERE c.idap IS NULL

GO

ALTER VIEW jetVetoCut
AS


SELECT DISTINCT
e.*
FROM events e LEFT OUTER JOIN
leftjets j ON e.idevent = j.eventid
and dbo.pt(j.idap) > 70
WHERE j.eventid IS NULL

GO

Extremely slow median measures

Hi

I'm having a problem with extremely slow median measures.

I've created a named set of all record IDs and wrote the measure as Median( [All Records], [Measures].[Age] ). When I drop a dimension into one of the axes, it takes a very long time to calculate the median even at the top level of the hierarchy, and I suspect it's computing the median for all the members of that dimension, even before I've drilled down into them.

Anyone know a better method for this?

Your formula always computes the Median for all records, regardless of the selection in the Records dimension.|||

Sorry Mosha, I don't quite follow. I need this to be a generic measure that will return the median of any cell in the client browser. Since median cannot be preaggregated, I thought the only way to do this was to take the median of the set of all records in the current cell. The other option I had explored went something like this:

Median (

{ ( Axis(0)(0)( Axis(0)(0).Count - 1 ).Dimension.CurrentMember.All, [All Records].[ ID ].[ ID ] ) },

[Measures].[Age]

)

But this didn't work at all.

|||Sorry, this time I don't quite follow. What exactly do you mean by the following: "I need this to be a generic measure that will return the median of any cell in the client browser". Median of what? Perhaps you could illustrate with a couple of examples.|||

My mistake... Should have said median of a measure (e.g. age) within any cell in the spreadsheet. For example, if I have a 2 x 2 table with Male and Female as columns and marital status Single and Married as rows, I'd be showing the median age in each of the 4 cells. Other times, users would be interested in the median age for other combinations of factors, say Gender and Cancer, or Cancer and Socioeconomic Status, but they shouldn't have to select a different median measure for each combination. Sort of like a percentage/proportion against any dimension selected on the row/column axis, as discussed in the following post:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=720160&SiteID=1

|||

If you are guaranteed to always have two axes, then something like this may work:

Median(CrossJoin(Axis(0), Axis(1)), Measures.[Age])

|||

Thanks Mosha, but no luck with that one either. There's probably no other way around this; that is, other than using the Fact Table primary key (Record ID) to select the set of all individual age values.

|||

Then I again don't understand your requirements :( Based on what you wrote before:

> For example, if I have a 2 x 2 table with Male and Female as columns and marital status Single and Married as rows, I'd be showing the median age in each of the 4 cells.

The formula that I wrote computes the median of these 4 cells and places it into each one of these 4 cells. I have verified it with AdventureWorks, which has Gender and Marital Status attributes...
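
One way to see exactly what such a formula puts in each cell, without going through a client tool, is a small ADOMD.NET console test. This is a sketch only: the connection string, cube, and hierarchy names are placeholders, and [Measures].[Age] stands in for whatever base measure the median is taken over.

using System;
using Microsoft.AnalysisServices.AdomdClient;

class MedianTest
{
    static void Main()
    {
        using (AdomdConnection conn = new AdomdConnection(
            "Data Source=localhost;Catalog=MyOlapDb")) // placeholders
        {
            conn.Open();

            // Wraps the Median-over-both-axes formula from the post above
            // so we can inspect what lands in each of the four cells.
            string mdx = @"
                WITH MEMBER [Measures].[Median Age] AS
                    'Median(CrossJoin(Axis(0), Axis(1)), [Measures].[Age])'
                SELECT [Customer].[Gender].[Gender].MEMBERS ON 0,
                       [Customer].[Marital Status].[Marital Status].MEMBERS ON 1
                FROM [MyCube]
                WHERE [Measures].[Median Age]";

            CellSet cs = new AdomdCommand(mdx, conn).ExecuteCellSet();
            for (int r = 0; r < cs.Axes[1].Positions.Count; r++)
                for (int c = 0; c < cs.Axes[0].Positions.Count; c++)
                    Console.WriteLine("{0} / {1}: {2}",
                        cs.Axes[1].Positions[r].Members[0].Caption,
                        cs.Axes[0].Positions[c].Members[0].Caption,
                        cs.Cells[c, r].FormattedValue);
        }
    }
}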

Extremely slow Excel MDX

Using Excel as a client is most of the time exceedingly slow. For example writing a simple query of the type:

SELECT [Measures].[Some Measure] ON 0,
[Product].[Product-Version].[Product] ON 1
FROM [Cubename]

in Management studio is in Excel transformed to:

SELECT NON EMPTY HIERARCHIZE(AddCalculatedMembers({DrillDownLevel({[Product].[Product-Version].[All]})})) DIMENSION PROPERTIES PARENT_UNIQUE_NAME ON COLUMNS FROM [Cubename] WHERE ([Measures].[Some Measure])

which takes several times longer to execute. As one starts drilling down it becomes increasingly worse, with Excel producing MDX that takes hundreds of times longer to execute than if I handwrite the MDX. This is with a very simple cube where Some Measure is not a calculated member. I can't even begin to imagine how slow it would be with a more complex cube. Is there anything to be done about this, any guidelines to follow to make it easier for Excel to generate "normal" MDX?

I had a similar problem with Excel and OWC when accessing OLAP cubes. Unfortunately, unless you optimize your cube, there is nothing that can be done about how Excel generates MDX to retrieve data. Excel and OWC are closed code, and recently Microsoft announced that they will be stopping new releases of OWC. However, the Excel PivotTable has a new version; take a look at Office 2007, currently in beta. It generates more efficient MDX compared to Office 2003/2000.

The downside is that it will take another 2-3 years for Office 2007 to be as popular as Office 2003, so distribution of your solution on Office 2007 might be an issue if you decide to switch to Office 2007 in a larger-scale enterprise.

|||Thanks. Is there anything special you have in mind when you say "unless you optimize your cube"? Or do you mean the "ordinary" optimizations one does to make the server work decently fast? I will try to have a look at the 2007 beta though.|||

As mentioned earlier, you can take a look at Office 2007, which sends somewhat different MDX queries. You can also take a look at ProClarity, recently acquired by Microsoft, to see if you get better performance using it.

Also make sure you install the latest service pack - SP1. There have been some performance improvements in it. You will see even more performance improvements in the upcoming Service Pack 2. Watch for announcements of the Community Technology Preview (CTP) to get your hands on the upcoming SP2.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Extremely Slow connection time

Hi,
If absolutely nothing changed on the server and everyone
else is working fine... then it's your desktop.
Have you looked at the CPU and memory usage on your desktop?
Your desktop could be dropping its connection to
SQL Server, so it could be a network issue. Try pinging
the SQL Server with the -t option.
If the server and everyone else are OK, concentrate on your
desktop. Maybe there was a change in the network itself.
One thing you can try is to create an alias for the SQL
server.
You can do that by: start\programs\microsoft sql
server\client network utility. Once the utility opens up,
go to the second tab (Alias) and click on 'add'. Enter the
server name and alias name and either use 'Named Pipes' or
TCP/IP. DO NOT change anything for either of the
default settings unless you know for a fact they have been
changed.
Try using 'named pipes'; if performance does not increase,
try to use tcp/ip.
hth
DeeJay
>--Original Message--
>Hi
>I have recently reinstalled XP Professional as well as
the MSSQL client tools.
>Our SQL 2000 server is running Win2K with 512MB RAM, and
has been doing so for quite a while; no changes were done
to the server while I reinstalled XP on my PC.
>Now my connection to the SQL server is extremely
sluggish, e.g. connecting via the Query Analyzer takes up
to 2 minutes, and so do any other subsequent actions, for
example opening the list of databases, tables or stored
procedures in the object browser.
>All my queries take ages to get going, even the
simplest queries.
>Previously the response was very reasonable, as is that of
the other developers' PCs in the company, with a similar set up.
>I have tried using both forms of security but have no joy
either way. I have used the IP address - no go.
>If I ping the NetBIOS name of the server the response is
instantaneous.
>
>Please help!!
>.
>
Hi
Thanks for the response
I had already loaded the latest service packs and already set up the network
client utility, and tried both TCP/IP as well as named pipes; all of these had
no effect.
Regards
Herbert|||Are other people able to connect to the server without any
problem?
If yes, then it's your PC. Perhaps the XP re-install did
not work properly.
DeeJay

>--Original Message--
>Hi
>Thanks for the response
>I had already loaded the latest service packs and already
set up the network client utility, and tried both TCP/IP as
well as named pipes, all of these had no effect.
>Regards
>Herbert
>.
>
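
To put numbers behind the named pipes vs. TCP/IP suggestion above, the connect step can be timed from a small C# console test; the Network Library connection-string keyword picks the client protocol explicitly (dbmssocn is TCP/IP, dbnmpntw is named pipes). This is a sketch only; MYSERVER is a placeholder.

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class ConnectTimer
{
    static void Main()
    {
        // Same server, two network libraries, so each path is timed separately.
        string[] connStrings = {
            "Server=MYSERVER;Integrated Security=SSPI;Network Library=dbmssocn",
            "Server=MYSERVER;Integrated Security=SSPI;Network Library=dbnmpntw"
        };

        foreach (string cs in connStrings)
        {
            Stopwatch sw = Stopwatch.StartNew();
            using (SqlConnection conn = new SqlConnection(cs))
            {
                conn.Open(); // the step that takes up to 2 minutes above
            }
            sw.Stop();
            Console.WriteLine("{0} ms  ({1})", sw.ElapsedMilliseconds, cs);
        }
    }
}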

Extremely Slow connection time

Hi
I have recently reinstalled XP Professional as well as the MSSQL client tools.
Our SQL 2000 server is running Win2K with 512MB RAM, and has been doing so for
quite a while; no changes were done to the server while I reinstalled XP on my PC.
Now my connection to the SQL server is extremely sluggish, e.g. connecting via
the Query Analyzer takes up to 2 minutes, and so do any other subsequent
actions, for example opening the list of databases, tables or stored procedures
in the object browser.
All my queries take ages to get going, even the simplest queries.
Previously the response was very reasonable, as is that of the other developers'
PCs in the company, with a similar set up.
I have tried using both forms of security but have no joy either way. I have
used the IP address - no go.
If I ping the NetBIOS name of the server the response is instantaneous.
Please help!!
Did you remember to install SQL Server 2000 SP3a after the reinstall?
Without it, you are vulnerable to the Slammer.
Hope this helps.
Dan Guzman
SQL Server MVP
"Herbert Schroder" <herberts@.mweb.co.za> wrote in message
news:9FF54BCC-2F28-4F5A-9D6A-06A4D4B1CA41@.microsoft.com...
> Hi
> I have recently reinstalled XP Professional as well as the MSSQL client
tools.
> Our SQL 2000 server is running Win2K with 512MB RAM, and has been doing so
for quite a while; no changes were done to the server while I reinstalled
XP on my PC.
> Now my connection to the SQL server is extremely sluggish, e.g. connecting
via the Query Analyzer takes up to 2 minutes, and so do any other subsequent
actions, for example opening the list of databases, tables or stored
procedures on the object browser.
> All my queries take ages to get going even for the simplest queries.
> Previously the response was very reasonable, as are the other developers
PCs in the company, with similar set up.
> I have tried using both forms of security but have no joy either way. I
have used the IP address - no go.
> If I ping the NetBIOS name of the server the response is instantaneous.
>
> Please help!!
>
