
Sunday, February 26, 2012

Facing a problem increasing connection pool size in SQL Server

Hi all,

I am using a connection string like:

data source=RemoteHostName;initial catalog=myDb;password=sa;user id=sa;
Max pool size = 200;

And now a strange thing is happening: I am receiving this error:

Timeout expired. The timeout period elapsed prior to obtaining a connection
from the pool. This may have occurred because all pooled connections were in
use and max pool size was reached

SQL Server Activity Manager is telling me that only 100 connections are pooled, so I guess the max pool size is still 100: it is not being changed by my connection string, even though I am trying to raise the default of 100 to 200.

So I am stuck: how do I increase the max pool size? Is there any way? I am getting worried.

Any help?

Thanks and regards
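One thing worth double-checking before anything else: connections are pooled per unique connection string, and Max Pool Size only takes effect when it is part of the exact string the application opens connections with. If the two lines above are really two separate strings in the code (rather than one wrapped line), the pool keeps its default maximum of 100. As a single line, the string would be:

data source=RemoteHostName;initial catalog=myDb;password=sa;user id=sa;Max Pool Size=200;

Note also that a changed string creates a new pool; connections belonging to the old pool can linger in Activity Manager until they age out or the process restarts.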

Bill Vaughn is a well-regarded SQL Server expert. This article of his should help you out: The .NET Connection Pool Lifeguard -- Prevent pool overflows that can drown your applications.

Terri
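If the string checks out, the other classic cause of this timeout, and the subject of Bill's article, is connection leaks: connections that are opened but never closed, so the pool fills up regardless of its maximum. One quick way to see who is holding connections on a SQL Server 2000-era instance is to query the sysprocesses system table (system spids are 50 and below in that release):

SELECT hostname, program_name, loginame, COUNT(*) AS open_connections
FROM master..sysprocesses
WHERE spid > 50   -- skip system spids
GROUP BY hostname, program_name, loginame
ORDER BY open_connections DESC

If one application shows a steadily growing count, fix the leak before raising Max Pool Size.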

Extremely long processing time

I have a historical table that is over 100 million records in size.
Processing this table in AS takes an extremely long time (about 2 hours). Is
there any way to speed this up in AS?

Is it 2000 or 2005?
As for the processing, are your cubes optimized (i.e., are the dimension keys read from the fact table)? In other words, if you look at the SELECT query, does it have any join statements?
MC
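To make MC's question concrete, compare the two shapes of processing query below; every table and column name here is invented for illustration. An unoptimized cube has to join out to the dimension tables while reading the fact rows, whereas an optimized schema reads the dimension keys straight off the fact table:

-- unoptimized: joins to dimensions while scanning 100M+ fact rows
SELECT dp.product_key, dd.date_key, f.sales_amount
FROM fact_sales AS f
INNER JOIN dim_product AS dp ON dp.product_id = f.product_id
INNER JOIN dim_date AS dd ON dd.full_date = f.order_date

-- optimized: the dimension keys are carried on the fact table itself
SELECT f.product_key, f.date_key, f.sales_amount
FROM fact_sales AS f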
"Nestor" <n3570r@.yahoo.com> wrote in message
news:u4GDtMV9FHA.2040@.TK2MSFTNGP14.phx.gbl...
>I have a historical table that is over 100 million records in size.
>Processing this table in AS takes an extremely long time (about 2 hours). Is
>there any way to speed this up in AS?
>
Do you have a distinct count measure in the cube? (This causes the SQL statement to be sorted by that column.)
Have you created partitions?
Have you optimized the schema in the cube (AS2000)? (The SELECT statement used would then need fewer inner joins between your fact table and your dimensions.)
Is your SQL Server on another server, or the same one?
How many aggregations do you have in your cube?
What is your server? (CPU, memory...)
"Nestor" <n3570r@.yahoo.com> wrote in message
news:u4GDtMV9FHA.2040@.TK2MSFTNGP14.phx.gbl...
>I have a historical table that is over 100 million records in size.
>Processing this table in AS takes an extremely long time (about 2 hours). Is
>there any way to speed this up in AS?
>
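To make the distinct count point concrete: with a distinct count measure, the processing query AS generates is ordered by the distinct count column, so (with invented names) it looks roughly like the following, and sorting 100 million rows is a large part of the cost:

SELECT f.customer_id, f.product_key, f.date_key, f.sales_amount
FROM fact_sales AS f
ORDER BY f.customer_id   -- forced by the distinct count on customer_id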
The general rule of thumb is that a common server-class machine with a reasonable I/O subsystem will do about 1 million rows per minute. That varies based on many factors, such as the number of aggregates and the storage type. Your throughput is a bit low, but not unreasonable.
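As a quick sanity check on that rule of thumb: 100,000,000 rows at 1,000,000 rows per minute is about 100 minutes, i.e. roughly 1 hour 40 minutes, so a 2-hour run is somewhat below the baseline rate but in the same ballpark.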
For hints on how to improve, look at the AS Performance Guide at:
http://www.microsoft.com/technet/pro.../ansvcspg.mspx
Hope that helps.
Dave Wickert [MSFT]
dwickert@.online.microsoft.com
Program Manager
BI Systems Team
SQL BI Product Unit (Analysis Services)
This posting is provided "AS IS" with no warranties, and confers no rights.
"Nestor" <n3570r@.yahoo.com> wrote in message
news:u4GDtMV9FHA.2040@.TK2MSFTNGP14.phx.gbl...
>I have a historical table that is over 100 million records in size.
>Processing this table in AS takes an extremely long time (about 2 hours). Is
>there any way to speed this up in AS?
>


Friday, February 24, 2012

Extremely big database file... SQL Server error?

I have the following problem: the size of one of my production databases is too big (60 GB) compared with the rest.
I compared the big db ("A") with another one ("B") that contains the same tables.
I identified the largest table in db "A" (table X), which has approx. 325,000 records. In db "B", table X has approx. 88,000 records (less than a third of the records in "A").
I ran sp_spaceused (with updateusage = true) and obtained these values for the total amount of space used by data for table X:
db "A": 1570392 KB
db "B": 48688 KB
So although "A" has less than four times the records of "B", it uses over thirty times the space... why?
The tables in the two databases are exactly the same (a nonclustered, unique primary key index, same constraints, same columns). In both cases, the table has only one column with data (the same column) and the rest of the columns are null.
Both dbs are on SQL Server 2000, on the same server. There is also a maintenance plan that runs every night, backing up and shrinking these databases, but the size of "A" remains the same.
DBCC CHECKDB found 0 allocation errors and 0 consistency errors in db "A".
What is the problem? Could it be a SQL Server error?
I'll appreciate any help.
Ed. S.
Telematica Inc.
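For anyone repeating the measurement, the call looks like this (table name as in the post):

EXEC sp_spaceused @objname = 'X', @updateusage = 'true'

The data and index figures it returns are in KB, which is what the two numbers above are.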
Shrinking your database files has the effect of fragmenting your indexes.
If you're always shrinking files, then you're shooting yourself in the foot.
Leave the file as big as necessary and rebuild your indexes.
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"byteman" <byteman@.discussions.microsoft.com> wrote in message
news:40A188D8-05F2-4008-BE43-A41510FE83A0@.microsoft.com...
>I have the following problem: the size of one of my production databases is
>too big (60 GB) compared with the rest.
>I compared the big db ("A") with another one ("B") that contains the same
>tables.
>I identified the largest table in db "A" (table X), which has approx.
>325,000 records. In db "B", table X has approx. 88,000 records (less than a
>third of the records in "A").
>I ran sp_spaceused (with updateusage = true) and obtained these values for
>the total amount of space used by data for table X:
>db "A": 1570392 KB
>db "B": 48688 KB
>So although "A" has less than four times the records of "B", it uses over
>thirty times the space... why?
>The tables in the two databases are exactly the same (a nonclustered,
>unique primary key index, same constraints, same columns). In both cases,
>the table has only one column with data (the same column) and the rest of
>the columns are null.
>Both dbs are on SQL Server 2000, on the same server. There is also a
>maintenance plan that runs every night, backing up and shrinking these
>databases, but the size of "A" remains the same.
>DBCC CHECKDB found 0 allocation errors and 0 consistency errors in db "A".
>What is the problem? Could it be a SQL Server error?
>I'll appreciate any help.
>Ed. S.
>Telematica Inc.
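A sketch of what "rebuild your indexes" means in practice on SQL Server 2000; the table name is assumed, and passing an empty index name makes DBCC DBREINDEX rebuild every index on the table:

DBCC SHOWCONTIG ('X')        -- measure fragmentation before
DBCC DBREINDEX ('X', '', 90) -- rebuild all indexes on X with a 90% fill factor
DBCC SHOWCONTIG ('X')        -- and after, to confirm

Running sp_spaceused with updateusage afterwards should show how much of the 1.5 GB was fragmentation and dead space.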


Extrapolate the size of filegroups to 12, 24, and 36 months

Hi All,
I have a large database with two filegroups and I'm trying to extrapolate the size of the filegroups for 12, 24, and 36 months. I don't have historic data, but I have current data for Feb, Mar, and Apr. These months do not have the same number of days (Feb 28 days, Mar 31 days, Apr 30 days) and the data sizes are, e.g., Feb 4 GB, Mar 7 GB, Apr 5 GB.
How best do I go about extrapolating the data size to 12, 24, and 36 months?
Thanks.
Message posted via http://www.webservertalk.com

Hello, Naana
A shot-in-the-dark estimate would be: 85 GB for 12 months, 222 GB for 24 months, 411 GB for 36 months. I used "Edit / Fill / Series / Linear / Trend" in Excel to get these numbers (also accounting for the number of days in each month). However, for a more pertinent estimate, we would need more information about the kind of growth expected for this data. Maybe the data depends on the number of working days in each month (instead of all days); maybe it has other seasonal variations. However, I think this is not the best group for such estimations.
Razvan

Hi Razvan,
How did you get your estimate? I used the numbers (Feb 4 GB, Mar 7 GB, Apr 5 GB) that I provided earlier in Excel with "Edit / Fill / Series / Linear / Trend" and didn't get your numbers for 12, 24, and 36 months.
My estimate was Feb through Jan: the sum of 12 months, then the 12-month sum times 2 for 24 months and times 3 for 36 months.
Please advise.
Razvan Socol wrote:
>Hello, Naana
>A shot-in-the-dark estimate would be: 85 GB for 12 months, 222 GB for 24
>months, 411 GB for 36 months. I used "Edit / Fill / Series / Linear
>/ Trend" in Excel to get these numbers (also accounting for the number of
>days in each month). However, for a more pertinent estimate, we would need
>more information about the kind of growth expected for this data. Maybe
>the data depends on the number of working days in each month (instead of
>all days); maybe it has other seasonal variations. However, I think this
>is not the best group for such estimations.
>Razvan
Message posted via webservertalk.com
http://www.webservertalk.com/Uwe/Forum...amming/200605/1

I computed the average daily size for each month (approx. 0.14 GB/day for Feb, 0.23 GB/day for Mar, 0.17 GB/day for Apr) and then used "Fill / Series / ..." to extrapolate the daily size for the following months. Then I multiplied each daily size by the number of days in its month and summed these sizes (so I used the extrapolated sizes for the first three months as well, although we have real data for them).
I can send you the Excel file if you write me your e-mail address at: rsocol [at] gmail [dot] com
But I repeat: it's just a shot in the dark. Your data may or may not grow this way.
Razvan
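For anyone who wants to reproduce Razvan's calculation without Excel, here is a minimal T-SQL sketch of the same method: fit a least-squares line to the three observed daily rates (which is what Excel's linear trend does), project it forward, and sum month by month. The start month ('20060201') and temp-table names are assumptions; with these inputs it lands within a GB of the 85 / 222 / 411 GB figures above.

-- observed months: sequence number, days in month, size in GB (from the post)
CREATE TABLE #obs (m INT, days INT, gb FLOAT)
INSERT INTO #obs VALUES (1, 28, 4.0)  -- Feb
INSERT INTO #obs VALUES (2, 31, 7.0)  -- Mar
INSERT INTO #obs VALUES (3, 30, 5.0)  -- Apr

-- least-squares fit of daily rate (GB/day) against month number
DECLARE @n FLOAT, @sx FLOAT, @sy FLOAT, @sxx FLOAT, @sxy FLOAT
SELECT @n = COUNT(*), @sx = SUM(m), @sy = SUM(gb / days),
       @sxx = SUM(m * m), @sxy = SUM(m * gb / days)
FROM #obs

DECLARE @slope FLOAT, @icept FLOAT
SET @slope = (@n * @sxy - @sx * @sy) / (@n * @sxx - @sx * @sx)
SET @icept = (@sy - @slope * @sx) / @n

-- walk forward 36 months, multiplying each projected daily rate by the
-- actual day count of that month (start month assumed to be Feb 2006)
DECLARE @k INT, @days INT, @total FLOAT, @d0 DATETIME
SELECT @k = 1, @total = 0, @d0 = '20060201'
WHILE @k <= 36
BEGIN
    SET @days = DATEDIFF(day, DATEADD(month, @k - 1, @d0), DATEADD(month, @k, @d0))
    SET @total = @total + (@icept + @slope * @k) * @days
    IF @k = 12 OR @k = 24 OR @k = 36
        PRINT CONVERT(varchar(3), @k) + ' months: about ' + CONVERT(varchar(10), ROUND(@total, 0)) + ' GB'
    SET @k = @k + 1
END

DROP TABLE #obs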