Showing posts with label calls.

Friday, March 9, 2012

Fail to call child package

I have a parent package that calls many child packages (over 30) for a daily data warehouse update. On any given day, it randomly fails to call one of the child packages with the following error:

Error 0x800706BE while preparing to load the package. The remote procedure call failed.

It is not failing on the same package each day. I need a better explanation of this error message.

Environment: SQL Server 2005 Enterprise (Dec 2005 RTM) on Itanium 64 with Windows Server Datacenter.

Are all the Execute Package tasks in the parent package configured identically? Where are the child packages stored?

Yes, all Execute Package tasks are set up the same. All packages are stored in msdb on our data warehouse server. Because we have such a large volume of packages (200), we use folders to separate them. The parent package calls children in the Dimensions folder and various other folders. The call failures span many of the child folders, so there doesn't seem to be a pattern.
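
If the packages really are all in msdb, one quick sanity check is to confirm that every child the parent references actually exists under the expected folder. Here is a small diagnostic sketch against the standard SQL Server 2005 SSIS storage tables in msdb (nothing in it is specific to this environment):

-- List the SSIS packages stored in msdb, grouped by folder (SQL Server 2005)
SELECT f.foldername,
       p.name       AS package_name,
       p.createdate
FROM msdb.dbo.sysdtspackages90 AS p
JOIN msdb.dbo.sysdtspackagefolders90 AS f
    ON p.folderid = f.folderid
ORDER BY f.foldername, p.name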

Fail part of a stored procedure but continue processing

Here is a fun one, and I am not sure it's possible.
I have two SQL stored procedures. The first one calls the second one. All the first one does is grab the next database name and pass that variable to the second stored procedure. Once the second one finishes, control goes back to the first one, which moves on to the next database and repeats the process until all the databases are finished.
The problem is that if the second stored procedure fails, it fails the first.
My question: is there a way that, if the second stored procedure fails, the first stored procedure will still continue on to the next database? Just because one database locks up for some reason does not mean they all will.
USE [DBAdmin]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[UD_Rebuild_All_Database_Indexes]
    @maxFrag DECIMAL = 30.0,            -- The maximum fragmentation to allow
    @indexSizeForRebuild INT = 100      -- Maximum index size to do a rebuild instead of a defrag
AS
DECLARE @ExecSQLcmd varchar(1024)
DECLARE @DBNum_to_Name int

-- Start with the first user database (dbid > 4 skips the system databases)
SELECT @DBNum_to_Name = MIN(dbid)
FROM master.dbo.sysdatabases
WHERE dbid > 4

WHILE @DBNum_to_Name IS NOT NULL
BEGIN
    -- Build and run the per-database call to the index maintenance procedure
    SELECT @ExecSQLcmd = 'use ' + QUOTENAME(name) + ' exec dbo.sp_UD_RebuildIndexes ' +
           CONVERT(VARCHAR(20), @maxFrag) + ', ' +
           CONVERT(VARCHAR(20), @indexSizeForRebuild)
    FROM master.dbo.sysdatabases
    WHERE dbid = @DBNum_to_Name

    -- PRINT (@ExecSQLcmd)  -- For debugging
    EXEC (@ExecSQLcmd)

    -- Move on to the next read-write database
    SELECT @DBNum_to_Name = MIN(dbid)
    FROM master.dbo.sysdatabases
    WHERE dbid > @DBNum_to_Name AND DATABASEPROPERTY(name, 'IsReadOnly') = 0
END

How are you running this procedure? Have you tried it in QA? Without testing it, I believe it should continue on, as long as the caller lets it. It does depend on the error you are getting (an error can stop the batch).
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)

Matthew (MKruer@gmail.com) writes:
> The problem is that if the second stored procedure fails, it fails the first.
> My question: is there a way that, if the second stored procedure fails, the first
> stored procedure will still continue on to the next database? Just because one
> database locks up for some reason does not mean they all will.
Error handling in SQL Server is a messy topic. Some errors cancel the entire batch, and in SQL 2000 there is no way to catch that from SQL.
If you are on SQL 2005, you can use the new TRY-CATCH construct.
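
As an illustration only (not code from the thread), here is a minimal sketch of how the poster's loop could wrap its dynamic EXEC in TRY-CATCH on SQL 2005, with the parameter values inlined for brevity; errors severe enough to break the connection will still escape it:

DECLARE @ExecSQLcmd varchar(1024)
DECLARE @DBNum_to_Name int

SELECT @DBNum_to_Name = MIN(dbid) FROM master.dbo.sysdatabases WHERE dbid > 4

WHILE @DBNum_to_Name IS NOT NULL
BEGIN
    SELECT @ExecSQLcmd = 'use ' + QUOTENAME(name) + ' exec dbo.sp_UD_RebuildIndexes 30, 100'
    FROM master.dbo.sysdatabases
    WHERE dbid = @DBNum_to_Name

    BEGIN TRY
        EXEC (@ExecSQLcmd)
    END TRY
    BEGIN CATCH
        -- Note the failure and keep going with the next database
        PRINT 'Failed on dbid ' + CONVERT(VARCHAR(10), @DBNum_to_Name) + ': ' + ERROR_MESSAGE()
    END CATCH

    SELECT @DBNum_to_Name = MIN(dbid)
    FROM master.dbo.sysdatabases
    WHERE dbid > @DBNum_to_Name AND DATABASEPROPERTY(name, 'IsReadOnly') = 0
END
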
If you are on SQL 2000, there is actually a workaround. It's not really useful in the general case, but for a script like yours it would be OK. Since I have this in an article on my web site, I just post the link: http://www.sommarskog.se/error-hand...#linked-servers
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx