
Friday, March 23, 2012

Failed to enumerate changes in filtered tables

I met the following error
Failed to enumerate changes in the filtered articles.
Category: NULL Source:
Merge Replication Provider
Number: -2147200925
Message: Failed to enumerate changes in the filtered
articles.
Category: COMMAND
Source: Failed Command
Number: 0
Message: create table #belong_agent_-2147483646
(tablenick int NOT NULL, rowguid uniqueidentifier NOT
NULL,generation int NULL, lineage varbinary(255) NULL,
col v varbinary(2048) NULL)
Category: SQLSERVER
Source: ServerName
Number: 170
Message: Line 1: Incorrect syntax near '-'.
I have visited the link
http://support.microsoft.com/default.aspx?scid=kb;en-us;814916#appliesto
but found no help there.
Can anybody help me out of this?
Thanks a lot
Please post a script for the creation of the DB schema and the publication creation
script here. That would help us to help you.
Regards,
Kestutis Adomavicius
Consultant
UAB "Baltic Software Solutions"
|||Mai,
there is a reported issue for your situation:
http://support.microsoft.com/default...b;en-us;814916
hth,
Paul Ibison
|||Thanks Paul, but I'm wondering if the hotfix will help me, since that
hotfix will cost money :-(. Are there any other possible reasons? The error
I met does match the description at the link you gave.
Thanks
Mai Thoa
|||Mai,
you could enable logging to get more info
(http://support.microsoft.com/?id=312292) or simply restart the merge
agent - this often removes the problem.
HTH,
Paul Ibison
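For reference, a minimal sketch of the logging approach in KB 312292: append output parameters to the Merge Agent's job step command, for example (the log file path here is only an assumption)
-Output C:\ReplLogs\merge_agent.log -OutputVerboseLevel 2
then restart the agent so the parameters take effect; the verbose log shows which article and command the agent was working on when it failed.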

Monday, March 19, 2012

Failed Job Message

I am in desperate need of some help with SQL Server 2000. I recently created several jobs to run at night
to update several tables in my SQL Server. I then realized that I could schedule one job with various steps,
so I tried to delete the original jobs with the stored proc, but could not seem to get it to work. While
examining the sysjobs table in the msdb database I deleted one of the old jobs. This is where the problem
started. I have since deleted the job steps out of sysjobsteps and sysjobschedules, as well as systaskids,
for that same job. I am still getting the error notification each morning that the job failed: "Unable to
retrieve steps for the certain job". Where is this notification coming from? There are no jobs or job steps
in any of the tables. Any help would be greatly appreciated.
There is a bit more that goes on in deleting a job than just
deleting rows from these tables. Jobs can be stored in the
job cache so there are also procedures that check the job
cache and update it if necessary - I think it's through
sp_agent_notify if I remember correctly. There could also be
some other references to the jobs in some of the other job
tables. That's why modifying these tables directly isn't
recommended. You could try restarting SQL Agent if you
didn't do so after modifying the tables but I'd guess you
could still have some other dangling references to the job
which may or may not affect things.
You may want to consider restoring your msdb database from
prior to your modifying the job tables and then delete the
jobs using sp_delete_job which is the supported method of
dropping a job. Using sp_delete_job will handle all the
details, update the job cache if necessary, etc.
-Sue
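For reference, the supported route Sue describes is a one-liner (the job name below is just a placeholder):

-- Drops the job together with its steps, schedules and history, and notifies SQL Server Agent
EXEC msdb.dbo.sp_delete_job @job_name = N'Nightly table updates'

Going through sp_delete_job is what keeps sysjobs, sysjobsteps and sysjobschedules consistent and the Agent's job cache up to date, which hand-editing the tables misses.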


Wednesday, March 7, 2012

Fact Tables and Clustered Indexes

A design question:
Say I have a medium-sized fact table w/ a handful of dimension key
columns (including a date) and a handful of measures.
I necessarily need a primary key on the composite dimension key
columns, and I don't know ahead of time which of my dimension key
column(s) will be the best candidate for a clustered index. I do plan
on putting non-clustered indexes on all my dimension key columns, and
the related dimension tables' key columns.
For the sake of argument, let's say we're not partitioning the fact
table.
Assume that new facts occur in time, the fact table grows with time,
and (nearly) all changes to the fact table occur as INSERTs.
Now, all things being equal, is there a benefit of adding a clustered
index to the fact table? Two options:
- Add an IDENTITY column, make it the primary key, and add the
clustered index to it.
- Add the clustered index on the date column, since it has a natural
order.
Basically, I'm after two answers in this scenario:
- Is there a benefit to having a clustered index on a table when the
application doesn't 'really' call for one?
- If so, is it better to add an IDENTITY column (adding size to the
table) or to pick a naturally ordered dimension key? A random key?
The fact's composite key?
Thanks much.
Steven D. Clark
stevec@.clarkdev.com
"Steven Clark" <stevec@.clarkdev.com> wrote in message
news:d7740507.0407100710.2b1644b5@.posting.google.c om...
> Basically, I'm after two answers in this scenario:
> - Is there a benefit to having a clustered index on a table when the
> application doesn't 'really' call for one?
In my opinion, there is never a reason NOT to have one; it's a freebie,
basically, unlike non-clustered indexes. It doesn't take up any extra disc
space, and you might as well order the data on the disc somehow, rather than
letting the server take care of it... So I always make sure that every table
has one.

> - If so, is it better to add an IDENTITY column (adding size to the
> table) or to pick an naturally ordered dimension key? A random key?
> The fact's composite key?
Some tips for clustered indexes:
- They assist with range queries and grouping. So try to use them for
columns that will be used for those kinds of operations (>, <, BETWEEN, etc,
or make it composite in the same order that you'll be grouping. If you do
composite, order the columns by selectivity, least selective first. This
will create a wider tree, which will result in somewhat quicker grouping.)
- Clustering on a random key is a very bad idea, because it will cause a
lot of page splits, leading to fragmentation. This will slow your data
loads. It will also give you no query benefits at all. So you'll actually
lose on this option.
- Clustering on an IDENTITY key or a DATETIME column that's
automatically set to the date the row is inserted will actually speed up
inserts as it will create a hotspot at the end of the table. So you'll
never have page splits when inserting new data. This can definitely help
speed your data load! Clustering on an IDENTITY will usually not help too
much with queries as, in my experience, most grouping and range operations
don't consider surrogates. Depending on your app, clustering on a DATETIME
as I described can help a lot, as a lot of queries will request data between
two dates, greater than a date, etc.
- Finally, clustering on the composite of your dimensions may be helpful
if you're grouping on them or requesting ranges. However, in the latter
case, remember that a composite index will only be used for searching if the
first column is part of the search criteria, so try to choose one of your
dimensions that will always be searched on (if that exists in your
warehouse).
I hope that answered your questions? Post back if you need some
clarification or further assistance.
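To make the DATETIME option concrete, here is a minimal sketch (the table and column names are placeholders, not from the original post):

-- Clustering on the load/ship date keeps new fact rows appending at the end of the table
CREATE TABLE FactSales (
    SaleDate    datetime NOT NULL,
    CustomerKey int      NOT NULL,
    ProductKey  int      NOT NULL,
    SalesAmount money    NOT NULL
)
CREATE CLUSTERED INDEX CIX_FactSales_SaleDate ON FactSales (SaleDate)
CREATE NONCLUSTERED INDEX IX_FactSales_CustomerKey ON FactSales (CustomerKey)
CREATE NONCLUSTERED INDEX IX_FactSales_ProductKey ON FactSales (ProductKey)

Range queries such as WHERE SaleDate BETWEEN @from AND @to then read a contiguous slice of the clustered index, which is the benefit described above.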
|||> It doesn't take up any extra disc space,
That's not true. The clustered index uses the datapages as the leaves, so
that is disk space you use already anyway, but the nodes of the index still
take up extra space on disk. That doesn't mean btw that it is not a good
idea to have a clustered index on every table. In almost all cases the
performance improvement that a clustered index provides more than offsets
the extra disk space used.
Jacco Schalkwijk
SQL Server MVP
|||"Jacco Schalkwijk" <jacco.please.reply@.to.newsgroups.mvps.org.invalid > wrote
in message news:%23cIz3GDaEHA.2544@.TK2MSFTNGP10.phx.gbl...
Thanks for the clarification on that... I also didn't think about fill
factor, which could also create the impression of more disc space being
used.


Fact Tables

What are fact tables? Why are they necessary for cubes?
On 10.08.2006 12:05, somuthomas@.gmail.com wrote:
> What are fact tables? Why are they necessary for cubes?
http://en.wikipedia.org/wiki/Fact_table
|||Remember the Rubik's Cube puzzle toy? Picture that as your OLAP cube. Each
colored square represents a cell. One row in your fact table becomes a cell
in your cube for a given measure.
It is the lowest level of detail and thus gives the ability to roll-up
values (aggregate) at various dimensional levels.
RDA Corp
Business Intelligence Evangelist Leader
www.rdacorp.com

Fact Table/Dimension and Multiple Level Dimension

Hi Guys,

I have two questions on Analysis Service for SQL SERVER 2005:-

a) Is it possible for me to use two tables from my database without a primary key? This used to work in Analysis Services on SQL Server 2000.

b) I had a multiple-level dimension in SQL Server 2000 with the key column as one column in my table and the name column as another. I am not able to do the same in SQL Server 2005.

Urgent help is required.

Regards,

Kaushal

a) You do not need primary keys in the source database, but I think you need to set logical primary keys in the data source view.

b)Key and name columns are still separated in Analysis Services 2005.

Regards

Thomas Ivarsson

|||

Hi Thomas,

Thanks for Reply.

a) Yes, I do understand that, but then what happens if I want to use the columns from the primary key as levels in a dimension?

b) I am still not understanding how to take care of the key and name columns.

Regards,

Kaushal

Fact Table Lookups Data

Hi,

Please help me out in loading the fact tables

I had used a lookup on each DIM table to get my SUK, and when I use a union transformation to combine the output from each lookup and then load the data with some condition, the data in my fact table is not loading in the proper format.

The union transformation is splitting the output into different records.

Please do inform me about which transformation should be used to get the data from the lookup tables.

Or please do inform me of the approach to load the fact table in SSIS.

I'm basically an Informatica resource and I'm thinking in Informatica terms.

First of all, in SSIS you don't link ports (columns) separately; you link component inputs and outputs. The Dataflow should look like:

Source Component -->LKP1(get Dim1 SUK) -->LKP2(get Dim2 SUK) -->LKn(getDimn SUK) ...-->Destination Component

Notice you may want to configure the Lookup error output to either redirect or ignore errors, as this component treats non-matches as errors.

In the first page of this forum there is a webcast that shows a similar approach:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=534505&SiteID=1

Please post back if something is not clear.

|||

Thank you for that information.

please do clarify me on the approach i had taken

the dataflow in my mapping is :

Source Component >LKP1 > LKP2 > Union Component > SCD Component > Destination Component.

i know the union component is a problem in my mapping but instead of union what component do i use. i'm trying to find out this solution.

or if possible give me your approach for fact tale load with lookup components

Thank you

|||Instead of a UNION, try connecting the output from Lookup 1 to Lookup 2 directly. If you do that, you don't need a union.|||


As John says, you don't need the UNION ALL component at all because you have a single pipeline; that is the same concept as in Informatica, I believe.
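For comparison only, the same surrogate-key lookup can be sketched as a set-based T-SQL load outside SSIS (every table and column name below is invented for illustration):

-- Resolve each dimension surrogate key with a join, then insert the fact rows
INSERT INTO dbo.FactSales (Dim1SUK, Dim2SUK, SalesAmount)
SELECT d1.Dim1SUK, d2.Dim2SUK, s.SalesAmount
FROM dbo.StagingSales AS s
LEFT JOIN dbo.Dim1 AS d1 ON d1.BusinessKey = s.Dim1BusinessKey
LEFT JOIN dbo.Dim2 AS d2 ON d2.BusinessKey = s.Dim2BusinessKey
-- The LEFT JOINs mirror the "ignore errors" lookup setting: unmatched rows get NULL keys instead of failing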

fact table design question

SSAS 2005 - I have the following 3 tables:

T1: dimA_id, dimB_id, dimC_id, prod_id, dollar_amt_1

T2: dimA_id, dimB_id, dimC_id, cat_id, dollar_amt_2

T3: cat_id, prod_id.

cat_id and prod_id has a parent to child relationship.

In this case, I have to build two fact tables, right? There is no way to combine T1 and T2 into one fact table because the dollar_amt 1 and 2 are at a different level of prod_id and cat_id. I just wanted to make sure that I am doing the right thing. It seems that there are repeated data (dimA_id, dimB_id, dimC_id) in both tables.

Thanks.

But if cat_id/prod_id are modelled as a Parent-Child dimension (i.e. using a single key for members at all levels, with a parent key), then data could be loaded at both levels from a single fact table.
|||Good point. I will give it a try. Would this be a better approach than the two fact-table one?
|||Should be more straightforward, at least - but large parent-child dimensions can cause performance problems. If this dimension is small, it may not be a concern.
|||The cat table has more than 1/2 million records and the prod table is 2-3 times bigger than the cat table. Is this considered large?
|||

According to the AS 2005 Performance Guide, that is large, but you could try it and check:

Parent-child hierarchies

Parent-child hierarchies are hierarchies with a variable number of levels, as determined by a recursive relationship between a child attribute and a parent attribute. Parent-child hierarchies are typically used to represent a financial chart of accounts or an organizational chart. In parent-child hierarchies, aggregations are created only for the key attribute and the top attribute, i.e., the All attribute unless it is disabled. As such, refrain from using parent-child hierarchies that contain large numbers of members at intermediate levels of the hierarchy. Additionally, you should limit the number of parent-child hierarchies in your cube.

If you are in a design scenario with a large parent-child hierarchy (greater than 250,000 members), you may want to consider altering the source schema to re-organize part or all of the hierarchy into a user hierarchy with a fixed number of levels.
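A rough sketch of the single-key/parent-key shape being discussed (the table name is invented, and it assumes cat_id and prod_id values can be mapped into one member key space):

-- One dimension table holds both categories and products; ParentId points a product at its category
CREATE TABLE DimProductCategory (
    MemberId   int           NOT NULL PRIMARY KEY,  -- a cat_id or a prod_id
    ParentId   int           NULL REFERENCES DimProductCategory (MemberId),  -- NULL for top-level categories
    MemberName nvarchar(100) NOT NULL
)

A single fact table can then reference MemberId and carry dollar_amt_1 rows at the prod level and dollar_amt_2 rows at the cat level.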

fact table design question

I have created a factSales table with dimDate, dimCustomer, dimProduct, dimSalesPerson tables. The dimensions are all joined with surrogate integer identity PK fields which serve as the composite key in the fact table.

It is possible for the same customer to place multiple orders for the same product from the same sales person on the same date. When this happens it seems to me that only the last order will be stored in the fact table. I want to have a row for each order. How would one design a fact table to accomplish this. The only truly unique piece of data from the OLTP is the sales order number.

factSales:

PK DateKey, int

PK CustomerKey, int

PK ProductKey, int

PK SalesPersonKey, int

Amount

UnitCost

Weight

ShippingCost

dimDate

PK DateKey, int, identity

SalesDate

dimCustomer

PK CustomerKey, int, identity

CustomerName

Address

dimProduct

PK ProductKey, int, identity

Product#

ProductName

DimSalesPerson

PK SalesPersonKey, int, identity

SalesPersonName

Dept

StartDate

EndDate

Hi,

You'll need to stick another key on the fact to make each row unique. Even if it is something simple like a count

e.g.

FK DateKey, int

FK CustomerKey, int

FK ProductKey, int

FK SalesPersonKey, int

FK OrderNumberOfTheDay, int

measures ...

If possible a time stamp might be another way, I am assuming though that the different orders happen at different times of the day. But basically another key in your ETL process will fix that problem.

Hope that helps,

Matt

|||

Thanks Matt. I was going to just add the sales order # as it is unique but I didn't want to violate any conventions that might cause problems down the line.

Thanks again!
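A minimal sketch of the fact table with the sales order number included (the data types are assumptions; only the column names come from the post):

-- SalesOrderNumber acts as a degenerate dimension and makes each row of the grain unique
CREATE TABLE factSales (
    DateKey          int   NOT NULL,
    CustomerKey      int   NOT NULL,
    ProductKey       int   NOT NULL,
    SalesPersonKey   int   NOT NULL,
    SalesOrderNumber int   NOT NULL,
    Amount           money NOT NULL,
    UnitCost         money NOT NULL,
    Weight           decimal(10, 2) NULL,
    ShippingCost     money NULL,
    PRIMARY KEY (DateKey, CustomerKey, ProductKey, SalesPersonKey, SalesOrderNumber)
)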

|||

Hi John

Matt is right in that you should simply bring through the Sales Order Number then your DSV design will take care of the aggregation

Alternatively, unless you need to analyse the data by the Sales Order Number, (which some might argue you should use your OLTP for) you could aggregate the facts in your fact table ETL load process. This can increase performance both in processing the cube and queries if you have a lot of data

If you want to use any drillthrough functionality then keep the Sales Order Number in.

HTH

Tim

|||

Thanks Tim.

I had also considered that possibility. My actual OLTP/OLAP is more complex than I showed (I tried to keep it simple for this example). I would have to join several dozen records in order to aggregate the data and I may have a sales order record which does not yet have a ship date which will be the key date slicer. I can load my fact table with a pointer to a 0 date key or just skip SO records without a ship date (my next task to figure out); it would be very complex to aggregate them. Plus I was already storing the SO# in my fact table on the likely probability the users will want to drill back to the original table.

Fact Table Design Question

We're putting together our data warehouse, and I had a question regarding
design of fact tables for our situation. We have invoices and payments to
those invoices...would I include all information in one fact table, or would
I separate them into two tables? If I do two tables, can I include two fact
tables in an OLAP cube?
Thanks in advance.
You can use a view...
it's even better to use a partitioned view...
Message posted via http://www.sqlmonster.com
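A rough sketch of the view idea above (all table and column names here are invented for illustration):

-- Present invoices and payments as one fact source; a partitioned view follows the same shape
CREATE VIEW dbo.FactInvoicePayment
AS
SELECT InvoiceDate AS TransactionDate, CustomerKey, InvoiceAmount AS Amount, 'Invoice' AS TransactionType
FROM dbo.FactInvoice
UNION ALL
SELECT PaymentDate, CustomerKey, PaymentAmount, 'Payment'
FROM dbo.FactPayment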
|||Hi,
It really depends on how often you will be analyzing invoices and payments
together.
Sometimes, you can build two cubes and join them into a virtual cube.
Very often, you should include all the information in your fact table design.
Tomasz B.


Facing problem with SSRS using Temporary tables in stored procedures

Hi,
I am facing a problem using temporary tables in SQL Server stored
procedures when using them in SQL Server Reporting Services.
It gives an error while fetching fields when creating a dataset in
report designer of MS Visual Studio.Net 2005.
Has anybody else faced the same problem?
Regards,
Sunny
What's the specific error it's producing? You can see this if the procedure
raises errors or if you do a lot of SELECT...INTOs rather than declaring your
temporary tables.
-T
|||Temporarily modify your procedure to create a table. Use the temporary procedure to populate your dataset. After SSRS has the metadata from the table, you can drop the table and restore your original procedure.
From http://www.developmentnow.com/g/115_2006_5_0_0_763042/Facing-problem-with-SSRS-using-Temporary-tables-in-stored-procedures.ht
Posted via DevelopmentNow.com Group
http://www.developmentnow.com
|||I used temporary tables in my stored procedures just fine without going
through this monkey business. One of the biggest problems is that users do
not change their command type to stored procedure!
If they did so, then it works just fine. I.e. when you create a dataset,
specify that the command type is StoredProcedure instead of the default
which is text. Then just type in the name of the stored procedure.
The other thing is to make sure they ARE running the latest Service Pack for
RS.
=-Chris
|||I'm having the same problem and the drop-down list for Command Type is greyed out, so I'm unable to change the default command type of Text. How can I activate the Command Type drop-down list?
From http://www.developmentnow.com/g/115_2006_5_0_0_763042/Facing-problem-with-SSRS-using-Temporary-tables-in-stored-procedures.ht
Posted via DevelopmentNow.com Group
http://www.developmentnow.com
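For what it's worth, a minimal sketch of the pattern under discussion (procedure, table and column names are invented): the temp table lives inside the procedure, the procedure returns a single result set, and the report dataset must call it with the command type set to StoredProcedure.

CREATE PROCEDURE dbo.rpt_SalesSummary
AS
BEGIN
    SET NOCOUNT ON
    -- Build the working data in a temp table, then return one result set for the report
    CREATE TABLE #Summary (Region varchar(50), TotalSales money)
    INSERT INTO #Summary (Region, TotalSales)
    SELECT Region, SUM(Amount)
    FROM dbo.Sales
    GROUP BY Region
    SELECT Region, TotalSales FROM #Summary
END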

Friday, February 24, 2012

extreme help with query of 2 tables into 1 long table

2 tables; user_id is the key
Table (A) 05_Users
user_id | first_name |last_name|title|dept
64|John|Doe|director|cis
65|Jane|Doe|ceo|fina
and
Table(B) 05_Users_Details
user_id | detail_cd | group_cd | detail_value
64|06|awdM0|null
64|07|awdD0|null
64|2005|awdY0|null
64|FreeText|awdTxt0|I enjoy work
64|10|awdM1|null
64|09|awdD1|null
64|2004|awdY1|null
64|FreeText|awdTxt1|still here
64|local|pfmLEVL1|null
64|natial|pfmLEVL1|null
64|aapm|pfmAAPM1|null
64|FreeText|pfmFREE1|profess
65|etc
I'm trying to create a query that will give me all user information in one
long table with the group_cd as a 'column title' and detail_cd as 'column
value', but if it finds the 'column value' of FreeText then detail_value
should be 'column value'.
So the table would look like this.
user_id | first_name |last_name|title|dept|awdM0|awdD0|awdY0|awdTxt0|awdM1|awdD1|awdY1|awdTxt1|pfmLEVL1|pfmLEVL1|pfmAAPM1|FREE TEXT
64|John|Doe|director|cis|06|07|2005|I enjoy work|10|09|2004|still here|local|natial|aapm|profess
65|Jane|Doe|ceo|fina etc
Some users have more information than other users and in these cases the
'column value' can be left as blank.
I don't need a webpage, but if it will help, will use.
IF YOU HAVE A BETTER WAY TO GET ALL THE INFORMATION ANY SUGGESTIONS WOULD BE
GREAT!
On Tue, 7 Jun 2005 10:26:02 -0700, BIGLU wrote:
(snip)
>IF YOU HAVE A BETTER WAY TO GET ALL THE INFORMATION ANY SUGGESTIONS WOULD BE
>GREAT!
Hi BIGLU,
First: The format you used to describe your data makes it very hard to
read and understand and almost impossible to reproduce. For future
postings, please include CREATE TABLE and INSERT statements for table
structure and sample data, as described here: www.aspfaq.com/5006.
Second: What you're trying to achieve looks like a pivot, or cross-tab
query. The front end/presentation layer is actually the best place for
that task. If you have to do it on the server, then try if you can adapt
the following to your needs:
SELECT u.UserID,
MAX(CASE WHEN d.DetailCD = 'awdM0' THEN d.detailValue END) AS awdM0,
MAX(CASE WHEN d.DetailCD = 'awdD0' THEN d.detailValue END) AS awdD0,
....
FROM Users AS u
INNER JOIN UserDetails AS d
ON d.UserID = u.UserID
GROUP BY u.UserID
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
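Adapting Hugo's pattern to the tables as posted might look roughly like this (table and column names come from the original post; the FreeText handling is an assumption based on the stated requirement, and the remaining group_cd columns would be added the same way):

SELECT u.user_id, u.first_name, u.last_name, u.title, u.dept,
    MAX(CASE WHEN d.group_cd = 'awdM0' THEN d.detail_cd END) AS awdM0,
    MAX(CASE WHEN d.group_cd = 'awdD0' THEN d.detail_cd END) AS awdD0,
    MAX(CASE WHEN d.group_cd = 'awdTxt0' THEN d.detail_value END) AS awdTxt0
    -- ...one MAX(CASE ...) per remaining group_cd value
FROM [05_Users] AS u
LEFT JOIN [05_Users_Details] AS d ON d.user_id = u.user_id
GROUP BY u.user_id, u.first_name, u.last_name, u.title, u.dept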
|||Hugo,
When you say "front end/presentation layer", can I import and do this in Access or
Excel? If so, can you give me a link where I can do this in either of those? I
know how to export (I think I do, lol), but I'll need help with the crosstab, etc.
Or is this something I can do in asp.net? or a third party component?
Thanks
|||On Thu, 9 Jun 2005 09:42:06 -0700, LU wrote:

>Hugo,
>When you say "front end/presentatio layer" can I import and do in access or
>excel?
Hi LU,
I must admit that I have little expertise with respect to front end
applications. But as far as I know, Access has some builtin
functionality to create a cross-tab table (look up "TRANSFORM" and
"PIVOT" in the online help, or use the crosstab query wizard). And Excel
can do crosstab reports as well.

>Or is this something I can do in asp.net?
Probably, but you'd better ask in a group for asp.net! <g>

>or a third party component?
Some third party applications that might help you generate the crosstab
at the server (though I still recommend against it!) may be found near
the end of this page: http://www.aspfaq.com/show.asp?id=2462
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)

Extraction software

Hi,
I need extraction software capable of capturing a table
(or tables) displayed on the web and converting it to a file as a list of
records with
comma separated values. The records will be converted to insert commands
after the cleansing process. The software will be used for the data
warehousing and mining project I have at UNF.
Thank you,
Mihaela
Google on "screen scraper" and you should get lots of information on these
kinds of products/techniques.
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic


Sunday, February 19, 2012

Extracting from linked tables into pivot

Hi there,
I've managed to come this far and I need a bit of help to finalise. I'm
extracting records correctly using the following:
select substring(D.MStockCode,1,3) as Style,
sum(case month(D.MLineShipDate)
when 1 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
end) as Jan,
sum(case month(D.MLineShipDate)
when 2 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
end) as Feb,
sum(case month(D.MLineShipDate)
when 3 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
end) as Mar,
sum(case month(D.MLineShipDate)
when 4 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
end) as Apr,
sum(case month(D.MLineShipDate)
when 5 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
end) as May,
sum(case month(D.MLineShipDate)
when 6 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
end) as Jun
from SorDetail D LEFT JOIN
SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
InvWarehouse W ON D.MStockCode = W.StockCode
where (M.OrderStatus = '1' or M.OrderStatus = 'S') and D.MBackOrderQty > 0
group by substring(D.MStockCode,1,3)
order by substring(D.MStockCode,1,3)
What I need to add is the quantity on hand from the InvWarehouse table. My
stock codes look something like this: 002-0300-10-WW-01 where the first 3
digits indicate the style (as in my code). In the InvWarehouse table there
are the same codes with a quantity on hand, and what I need is to sum the
total quantity on hand per style and add this field to the pivot (per style).
Thanking you in advance.
Kind regards,
Hi
You are already joining to the InvWarehouse table therefore (if I understand
your problem) your summation of the quantity should be an extension of what
you have!
select substring(D.MStockCode,1,3) as Style,
sum(case month(D.MLineShipDate)
when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int) else 0
end) as Jan,
sum(case month(D.MLineShipDate)
when 1 then w.quantity else 0
end) as JanQuantity,
...
from SorDetail D LEFT JOIN
SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
InvWarehouse W ON D.MStockCode = W.StockCode
where (M.OrderStatus = '1' or M.OrderStatus = 'S') and D.MBackOrderQty > 0
group by substring(D.MStockCode,1,3)
order by substring(D.MStockCode,1,3)
If this is not the case, posting DDL and example data would help (see
http://www.aspfaq.com/etiquette.asp?id=5006); the expected results from
the sample data would also be beneficial.
John
"CyberFox" wrote:

> Hi there,
> I've managed to come this far and I need a bit of help to finalise. I'm
> extracting records correctly using the following:
> select substring(D.MStockCode,1,3) as Style,
> sum(case month(D.MLineShipDate)
> when 1 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
> end) as Jan,
> sum(case month(D.MLineShipDate)
> when 2 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
> end) as Feb,
> sum(case month(D.MLineShipDate)
> when 3 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
> end) as Mar,
> sum(case month(D.MLineShipDate)
> when 4 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
> end) as Apr,
> sum(case month(D.MLineShipDate)
> when 5 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
> end) as May,
> sum(case month(D.MLineShipDate)
> when 6 then D.MBackOrderQty*substring(D.MStockCode,17,1) else 0
> end) as Jun
> from SorDetail D LEFT JOIN
> SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
> InvWarehouse W ON D.MStockCode = W.StockCode
> where (M.OrderStatus = '1' or M.OrderStatus = 'S') and D.MBackOrderQty > 0
> group by substring(D.MStockCode,1,3)
> order by substring(D.MStockCode,1,3)
> What I need to add is the quantity on hand from the InvWarehouse table. My
> stock codes look something like this: 002-0300-10-WW-01 where the first 3
> digits indicate the style (as in my code). In the InvWarehouse table there
> are the same codes with a quantity on hand, and what I need is to sum the
> total quantity on hand per style and add this field to the pivot (per styl
e).
> Thanking you in advance.
> Kind regards,|||Hi John,
The summation of the quantities is not date-dependent. Let me explain
exactly what I want:
The InvWarehouse table has a quantity on hand per stock item (this is the
quantity in stock at the current time, and is not date-dependent at all).
What I want to show is the stock item (actually the style number, which is
the first 3 digits of the stock item), it's quantity on hand (the stock
item's), and the outstanding sales orders per style (date-dependent).
Hope this clarifies.
Rgds,
"John Bell" wrote:
> Hi
> You are alredy joining to the InvWarehouse table therefore (if I understan
d
> your problem) your summation of the quantity should be an extension of wha
t
> you have!
> select substring(D.MStockCode,1,3) as Style,
> sum(case month(D.MLineShipDate)
> when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int) el
se 0
> end) as Jan,
> sum(case month(D.MLineShipDate)
> when 1 then w.quantity else 0
> end) as JanQuantity,
> ...
> from SorDetail D LEFT JOIN
> SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
> InvWarehouse W ON D.MStockCode = W.StockCode
> where (M.OrderStatus = '1' or M.OrderStatus = 'S') and D.MBackOrderQty > 0
> group by substring(D.MStockCode,1,3)
> order by substring(D.MStockCode,1,3)
> If this is not the case, Posting DDL and example data would help see
> http://www.aspfaq.com/etiquette.asp?id=5006 also the expected results from
> the sample data would be beneficial.
> John
>
> "CyberFox" wrote:
>|||Hi
DDL, Example data and expected output would have eliminated any ambiguity
when you post. These are untested:
If you just want to sum the QuantityInHand values using the same where
clause as your main query then you can do that with
SELECT substring(D.MStockCode,1,3) as Style,
sum(case month(D.MLineShipDate)
when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int)
else 0
end) as Jan,
...
SUM(D.QuantityInHand) AS InHand
from SorDetail D LEFT JOIN
SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
InvWarehouse W ON D.MStockCode = W.StockCode
where (M.OrderStatus = '1' or M.OrderStatus = 'S')
and D.MBackOrderQty > 0
group by substring(D.MStockCode,1,3)
order by substring(D.MStockCode,1,3)
If you don't want that restriction then a subquery may be needed:
SELECT substring(D.MStockCode,1,3) as Style,
sum(case month(D.MLineShipDate)
when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int)
else 0
end) as Jan,
...
( SELECT SUM(I.QuantityInHand) FROM InvWarehouse I WHERE
substring(D.MStockCode,1,3) = substring(I.MStockCode,1,3) ) AS InHand
from SorDetail D LEFT JOIN
SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
InvWarehouse W ON D.MStockCode = W.StockCode
where (M.OrderStatus = '1' or M.OrderStatus = 'S')
and D.MBackOrderQty > 0
group by substring(D.MStockCode,1,3)
order by substring(D.MStockCode,1,3)
or possibly using derived tables and joining them
SELECT A.Style, A.Jan, A.Feb,... B.Total
FROM
( SELECT substring(D.MStockCode,1,3) as Style,
sum(case month(D.MLineShipDate)
when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int)
else 0
end) as Jan,
...
from SorDetail D LEFT JOIN
SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
InvWarehouse W ON D.MStockCode = W.StockCode
where (M.OrderStatus = '1' or M.OrderStatus = 'S')
and D.MBackOrderQty > 0
group by substring(D.MStockCode,1,3) ) A
JOIN
( SELECT substring(D.MStockCode,1,3) as Style,
SUM ( D.QuantityInHand ) AS InHand
from SorDetail D LEFT JOIN
SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
InvWarehouse W ON D.MStockCode = W.StockCode
where (M.OrderStatus = '1' or M.OrderStatus = 'S')
and D.MBackOrderQty > 0
group by substring(D.MStockCode,1,3) ) B ON A.Style = B.Style
ORDER BY A.Style
John
"CyberFox" wrote:
> Hi John,
> The summation of the quantities is not date-dependent. Let me explain
> exactly what I want:
> The InvWarehouse table has a quantity on hand per stock item (this is the
> quantity in stock at the current time, and is not date-dependent at all).
> What I want to show is the stock item (actually the style number, which is
> the first 3 digits of the stock item), it's quantity on hand (the stock
> item's), and the outstanding sales orders per style (date-dependent).
> Hope this clarifies.
> Rgds,
> "John Bell" wrote:
>|||John,
I've tried the sub-query option, but it didn't do what I was hoping for. Let
me simplify and give you some examples of my data, if you don't mind:
SorDetail table:
Itemcode BackOrderQty OrderDate
002-0200-10-WW-02 100 01/01/06
002-0200-11-WW-02 150 02/01/06
002-0200-12-WW-02 150 01/02/06
010-0300-16-ED-03 100 01/01/06
010-0300-16-MK-01 200 01/01/06
010-0300-16-TR-02 100 01/03/06
InvWarehouse table
ItemCode QtyOnHand
002-0200-10-WW-02 2000
002-0200-11-WW-02 1400
002-0200-12-WW-02 1500
010-0300-16-ED-03 1000
010-0300-16-MK-01 1000
010-0300-16-TR-02 1000
I need the following (considering that the style = first 3 digits of the
item codes)
Style QtyOnHand JanOrders FebOrders MarOrders
002 4900 250 150 0
010 3000 300 0 100
Thank you very much for your help so far...
Rgds,
"John Bell" wrote:
> Hi
> DDL, Example data and expected output would have eliminated any ambiguity
> when you post. These are untested:
> If you just want to sum the QuantityInHand values using the same where
> clause as your main query then you can do that with
> SELECT substring(D.MStockCode,1,3) as Style,
> sum(case month(D.MLineShipDate)
> when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int)
> else 0
> end) as Jan,
> ...
> SUM(D.QuantityInHand) AS InHand
> from SorDetail D LEFT JOIN
> SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
> InvWarehouse W ON D.MStockCode = W.StockCode
> where (M.OrderStatus = '1' or M.OrderStatus = 'S')
> and D.MBackOrderQty > 0
> group by substring(D.MStockCode,1,3)
> order by substring(D.MStockCode,1,3)
> If you don't want that restriction then a subquery may be needed:
> SELECT substring(D.MStockCode,1,3) as Style,
> sum(case month(D.MLineShipDate)
> when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int)
> else 0
> end) as Jan,
> ...
> ( SELECT SUM(I.QuantityInHand) FROM InvWarehouse I WHERE
> substring(D.MStockCode,1,3) = substring(I.MStockCode,1,3) ) AS InHand
> from SorDetail D LEFT JOIN
> SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
> InvWarehouse W ON D.MStockCode = W.StockCode
> where (M.OrderStatus = '1' or M.OrderStatus = 'S')
> and D.MBackOrderQty > 0
> group by substring(D.MStockCode,1,3)
> order by substring(D.MStockCode,1,3)
> or possibly using derived tables and joining them
> SELECT A.Style, A.Jan, A.Feb,... B.Total
> FROM
> ( SELECT substring(D.MStockCode,1,3) as Style,
> sum(case month(D.MLineShipDate)
> when 1 then D.MBackOrderQty*CAST(substring(D.MStockCode,17,1) AS int)
> else 0
> end) as Jan,
> ...
> from SorDetail D LEFT JOIN
> SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
> InvWarehouse W ON D.MStockCode = W.StockCode
> where (M.OrderStatus = '1' or M.OrderStatus = 'S')
> and D.MBackOrderQty > 0
> group by substring(D.MStockCode,1,3) ) A
> JOIN
> ( SELECT substring(D.MStockCode,1,3) as Style,
> SUM ( D.QuantityInHand ) AS InHand
> from SorDetail D LEFT JOIN
> SorMaster M ON M.SalesOrder = D.SalesOrder LEFT JOIN
> InvWarehouse W ON D.MStockCode = W.StockCode
> where (M.OrderStatus = '1' or M.OrderStatus = 'S')
> and D.MBackOrderQty > 0
> group by substring(D.MStockCode,1,3) ) B ON A.Style = B.Style
> ORDER BY A.Style
> John
> "CyberFox" wrote:
>|||Hi
This post is inconsistent with the tables/columns that you have posted
previously so it is even more confusing, make sure that you read
http://www.aspfaq.com/etiquette.asp?id=5006 and post something usable.
With:
CREATE TABLE SorDetail ( Itemcode char(17), BackOrderQty
int, OrderDate datetime)
CREATE TABLE InvWarehouse ( Itemcode char(17), QtyOnHand
int )
INSERT INTO SorDetail ( Itemcode, BackOrderQty, OrderDate)
SELECT '002-0200-10-WW-02', 100, '20060101'
UNION ALL SELECT '002-0200-11-WW-02', 150, '20060102'
UNION ALL SELECT '002-0200-12-WW-02', 150, '20060201'
UNION ALL SELECT '010-0300-16-ED-03', 100, '20060101'
UNION ALL SELECT '010-0300-16-MK-01', 200, '20060101'
UNION ALL SELECT '010-0300-16-TR-02', 100, '20060103'
INSERT INTO InvWarehouse ( Itemcode, QtyOnHand )
SELECT '002-0200-10-WW-02', 2000
UNION ALL SELECT '002-0200-11-WW-02', 1400
UNION ALL SELECT '002-0200-12-WW-02', 1500
UNION ALL SELECT '010-0300-16-ED-03', 1000
UNION ALL SELECT '010-0300-16-MK-01', 1000
UNION ALL SELECT '010-0300-16-TR-02', 1000
My query:
SELECT substring(D.Itemcode,1,3) as Style,
sum(case month(D.OrderDate)
when 1 then D.BackOrderQty
else 0
end) as Jan,
sum(case month(D.OrderDate)
when 2 then D.BackOrderQty
else 0
end) as Feb,
sum(case month(D.OrderDate)
when 3 then D.BackOrderQty
else 0
end) as Mar,
( SELECT SUM(I.QtyOnHand) FROM InvWarehouse I WHERE
substring(D.Itemcode,1,3) = substring(I.Itemcode,1,3) ) AS InHand
from SorDetail D
LEFT JOIN InvWarehouse W ON D.Itemcode = W.Itemcode
WHERE D.BackOrderQty > 0
group by substring(D.Itemcode,1,3)
order by substring(D.Itemcode,1,3)
seems to give exactly what you required, although it gives me an error if I
change the column order, so using:
SELECT A.Style, B.QtyOnHand, A.Jan, A.Feb, A.Mar
FROM
( SELECT SUBSTRING(D.Itemcode,1,3) as Style,
SUM(CASE MONTH(D.OrderDate)
WHEN 1 THEN D.BackOrderQty
ELSE 0
END) AS Jan,
SUM(CASE MONTH(D.OrderDate)
WHEN 2 THEN D.BackOrderQty
ELSE 0
END) AS Feb,
SUM(CASE MONTH(D.OrderDate)
WHEN 3 THEN D.BackOrderQty
ELSE 0
END) AS Mar
FROM SorDetail D
LEFT JOIN InvWarehouse W ON D.Itemcode = W.Itemcode
WHERE D.BackOrderQty > 0
GROUP BY SUBSTRING(D.Itemcode,1,3) ) A
LEFT JOIN ( SELECT SUBSTRING(i.Itemcode,1,3) as Style,
SUM(I.QtyOnHand) AS QtyOnHand
FROM InvWarehouse I
GROUP BY SUBSTRING(I.Itemcode,1,3) ) B ON A.Style = B.Style
ORDER BY A.Style
may be a better option.
John
"CyberFox" wrote:
> John,
> I've tried the sub-query option, but it didn't do what I was hoping for. L
et
> me simplify and give you some examples of my data, if you don't mind:
> SorDetail table:
> Itemcode BackOrderQty OrderDate
> 002-0200-10-WW-02 100 01/01/06
> 002-0200-11-WW-02 150 02/01/06
> 002-0200-12-WW-02 150 01/02/06
> 010-0300-16-ED-03 100 01/01/06
> 010-0300-16-MK-01 200 01/01/06
> 010-0300-16-TR-02 100 01/03/06
> InvWarehouse table
> ItemCode QtyOnHand
> 002-0200-10-WW-02 2000
> 002-0200-11-WW-02 1400
> 002-0200-12-WW-02 1500
> 010-0300-16-ED-03 1000
> 010-0300-16-MK-01 1000
> 010-0300-16-TR-02 1000
> I need the following (considering that the style = first 3 digits of the
> item codes)
> Style QtyOnHand JanOrders FebOrders Mar
> 002 4900 250 150
0
> 010 3000 300 0
> 100
> Thank you very much for your help so far...
> Rgds,
> "John Bell" wrote:
>

extracting from 2 tables what doesn't exist in both

what is the best way of extracting, from two tables, just the rows that are not common to both?
I guess one way might be to firstly populate another table with what does
exist in both tables and then delete from that table what does exist in
table1, and you then have what exists in table1 but not in table2, then repeat
the process for table2.
this just seems incredibly clumsy and I'm sure there is probably a far
better way to code this.
your help much appreciated.
Try,
select *
from
(
select t1.c1, ..., t1.cn
from t1 left join t2
on t1.pk = t2.pk
where t2.pk is null
union
select t2.c1, ..., t2.cn
from t1 right join t2
on t1.pk = t2.pk
where t1.pk is null
) as t
If you do not mind duplicated rows, then use "union all" instead.
AMB
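For comparison, the same result can be sketched with NOT EXISTS, keeping the placeholder names t1, t2 and pk from the reply above (both tables are assumed to have compatible column lists):

-- Rows in t1 with no match in t2, plus rows in t2 with no match in t1
SELECT t1.* FROM t1 WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.pk = t1.pk)
UNION ALL
SELECT t2.* FROM t2 WHERE NOT EXISTS (SELECT 1 FROM t1 WHERE t1.pk = t2.pk)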
"sysbox27" wrote:

