
Monday, March 26, 2012

MDAC

What version of MDAC does a client PC need to have to
access SQL Server 2000 SP3a on a Windows 2000 SP3
server? MDAC 2.7 SP1?
What is the recommended procedure to make sure this
version is rolled out to users? How should that work?
What version comes with Windows XP?
Thanks,
Jim|||Versions of MDAC earlier than MDAC 2.7 SP1 can access SQL Server 2000 SP3.
I am not aware of a version of MDAC that will not access SQL Server 2000
SP3. You would not have the ability to use the new features of SQL Server
2000 but still would have access.
Windows XP ships with MDAC 2.7.
Rand
This posting is provided "as is" with no warranties and confers no rights.|||So earlier versions of MDAC are OK. With versions later than
what's installed on the SQL box you run the risk of not
being able to access all of the functionality on the SQL
Server.
Great. Thanks.
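For the rollout question, a hypothetical sketch of the client-side check: MDAC records its version in the registry under HKLM\SOFTWARE\Microsoft\DataAccess as the FullInstallVer value (e.g. "2.71.9030.9" for MDAC 2.7 SP1), which a login script could read with winreg.QueryValueEx and compare against a minimum. The comparison logic is the portable part:

```python
# Hypothetical rollout check. A script would read the registry value
# "FullInstallVer" under HKLM\SOFTWARE\Microsoft\DataAccess (Windows
# only, via winreg) and compare it to the required minimum version.

def parse_version(ver):
    """Turn a dotted version string like '2.71.9030.9' into a tuple
    of ints so versions compare numerically, not lexically."""
    return tuple(int(part) for part in ver.split("."))

def meets_minimum(installed, required):
    """True if the installed MDAC version is at least the required one."""
    return parse_version(installed) >= parse_version(required)

# MDAC 2.7 SP1 reports itself as 2.71.x; plain 2.7 (as shipped with
# Windows XP) and older releases sort lower.
print(meets_minimum("2.71.9030.9", "2.71"))  # True: 2.7 SP1 or later
print(meets_minimum("2.53.6200.0", "2.71"))  # False: needs an upgrade
```

The exact minimum string to enforce is an assumption for your environment; the point is to compare versions numerically rather than as text.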
>--Original Message--
>Versions of MDAC earlier than MDAC 2.7 SP1 can access
>SQL Server 2000 SP3.
>I am not aware of a version of MDAC that will not access
>SQL Server 2000
>SP3. You would not have the ability to use the new
>features of SQL Server
>2000 but still would have access.
>Windows XP ships with MDAC 2.7.
>Rand
>This posting is provided "as is" with no warranties and
>confers no rights.
>.
>

Monday, March 19, 2012

Maximum Table Size

We have been recommended by our database designers that 20 million rows is the
maximum number of rows a data warehouse table should hold on SQL Server.
What is the opinion of you guys on this benchmark? Have you seen tables
bigger than that? Did you notice any performance impact?
Our table is at 14 million rows and is about 14 GB; we are analysing all
opportunities at present and would like a second opinion please.
Regards,
Marc|||> What is the opinion of you guys on this benchmark? Have you seen tables
> bigger than that?
YES!

> Did you notice any performance impact?
There are always performance concerns. The guideline should be focused more
on proper indexing and row widths than on the number of rows.
A|||My recommendation is to sack your database designers and hire some people
who know what they are talking about. Seriously. The 20 million row comment
is pure rubbish.
There are SQL Server databases (not many though, because not many companies
have that much data) that have tables with billions of rows. 20 million rows
should not present any problems at all in a properly designed SQL Server
database, and I have seen numerous tables that contain that number of rows
or more.
14 GB in itself is not much for a SQL Server database, but if the
table only contains 14 million rows, that is 1000 bytes per row, a rather
large row size for a data warehouse fact table. A data warehouse fact table
should almost exclusively contain numeric columns, and these columns should
be designed to be as small as possible; usually they are 4-byte integers. If
that is the case you are talking about 250 columns in that table, which is
possible from a proper logical design point of view, but sounds a bit much
to me. SQL Server can of course easily handle 250 columns in a table, but it
is usually an indication of a bad database design to have that many
columns in one table.
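The back-of-envelope arithmetic above can be sketched as:

```python
# Back-of-envelope check of the numbers above: average bytes per row
# from table size and row count, and the implied column count if every
# column were a 4-byte integer (an idealized assumption for a fact table).

table_bytes = 14 * 10**9   # ~14 GB
row_count = 14_000_000

bytes_per_row = table_bytes / row_count
implied_int_columns = bytes_per_row / 4  # if all columns were 4-byte ints

print(round(bytes_per_row))        # 1000 bytes per row
print(round(implied_int_columns))  # 250 columns
```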
Jacco Schalkwijk
SQL Server MVP
"marcmc" <marcmc@.discussions.microsoft.com> wrote in message
news:55BE7712-1CAE-4E6A-9EEA-05286669D9D2@.microsoft.com...
> We have been recommended by our database designers that 20million rows is the
> maximum number of rows a data warehouse should be on SQLServer.
> What is the opinion of you guys on this benchmark? Have you seen tables
> bigger than that? Did you notice any performance impact.
> Our table is at 14million rows and is about 14gb, we are analysing all
> opportunities at present and would like a second opinion please.
> Regards,
> Marc|||Depends on a lot of things,
for example the wider the tables the slower the selects (fewer rows on a page),
CPU, disk, etc.
http://sqlservercode.blogspot.com/
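The rows-per-page point can be quantified roughly. A SQL Server 2000 data page is 8 KB with about 8060 bytes available for in-row data, so wider rows mean fewer rows per page and more pages touched per scan. A sketch that ignores per-row overhead and fill factor; the 400-byte "trimmed" row is a made-up comparison figure:

```python
# Rough sketch: fewer rows fit on an 8 KB page as rows get wider, so a
# scan of the same row count touches more pages. Uses ~8060 usable bytes
# per page (SQL Server 2000's in-row limit); ignores per-row overhead.

PAGE_BYTES = 8060

def pages_for(row_count, row_bytes):
    rows_per_page = PAGE_BYTES // row_bytes
    # ceiling division: a partial page still costs a full page of I/O
    return -(-row_count // rows_per_page)

# 18M rows at 3228 bytes/row vs a hypothetical trimmed 400-byte row
print(pages_for(18_000_000, 3228))  # 9000000 pages (2 rows/page)
print(pages_for(18_000_000, 400))   # 900000 pages (20 rows/page)
```

Ten times fewer pages for the same row count is the kind of difference that dwarfs any fixed row-count ceiling.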
"marcmc" wrote:

> We have been recommended by our database designers that 20million rows is the
> maximum number of rows a data warehouse should be on SQLServer.
> What is the opinion of you guys on this benchmark? Have you seen tables
> bigger than that? Did you notice any performance impact.
> Our table is at 14million rows and is about 14gb, we are analysing all
> opportunities at present and would like a second opinion please.
> Regards,
> Marc|||> We have been recommended by our database designers that 20million rows is the
> maximum number of rows a data warehouse should be on SQLServer.
If your database designers say this then what do they recommend you do about
it? Sounds like either a sales pitch or a lame excuse for poor performance.
There is no fixed limit on the number of rows in a SQL Server table. Even if
there were such a limitation it would be irrelevant in a DW scenario because
you can implement a partitioned view across many tables on many different
devices.
14GB is a fairly modest-sized data warehouse. DW on the terabyte scale is
pretty normal in SQL Server today.
David Portas
SQL Server MVP
--|||Thx Jacco,
we have a table 95 columns wide with 3228 characters.
The table is actually 18 million rows (my mistake); we have had some
performance issues with it, such as when linked to other large
tables (4 million+ records).
Do you think the number of records or the row size is more important?
Most of our columns are integers [IDs] but we do have some with
smalldatetime and one with a 19 char length!
We have people coming to talk Teradata/Oracle etc. but no one has yet
identified the database design flaws. How can we look at this in more
detail, especially in respect to the 20 million maximum rowcount table size?
Appreciate your input
Marc|||As David emphasised, there is no limit to the number of rows in a table in
SQL Server. That the (flawed) design and lack of performance tuning skills
of your database designers doesn't allow for reasonable performance with
20 million rows on your system is their fault and not SQL Server's.
It's almost impossible to give proper advice in a newsgroup post on database
design flaws and performance issues in what seems a reasonably large system.
The best thing you can do is get an independent consultant in for a few days
to check the system. It will cost some money upfront, but will save you
loads in the long run. A number of MVPs work as independent consultants, and
I can forward your details to them if you contact me offline.
Jacco Schalkwijk
SQL Server MVP
"marcmc" <marcmc@.discussions.microsoft.com> wrote in message
news:40A756CA-0E23-4F44-8EE0-D742C1EF9C99@.microsoft.com...
> thx jacco,
> we have a table 95 columns wide with 3228 characters.
> the table is actually 18million rows(my mistake), we have had some
> performance issues with it such as when linked to other large
> tables(4million+ records).
> Do you think the number of records or the row size is more important
> Most of our columns are integers [ID's] but we do have some with
> smalldatetime and one with a 19 char length!
> we have people coming to talk teradata/oracle etc etc but no one has yet
> identified the database design flaws yet. How can we look at this in more
> detail especially in respect to the 20million maximum rowcount table size!
> Appreciate your input
> Marc
>|||> Do you think the number of records or the row size is more important?
Neither. Optimal design and implementation are immeasurably more important.
Lousy design can destroy performance with only a few thousand rows. Since you
(or your namesake) just stated in another thread that you "always use
cursors" you may not need to look any further than that for an explanation of
why you can't scale.
David Portas
SQL Server MVP
--
"marcmc" wrote:

> thx jacco,
> we have a table 95 columns wide with 3228 characters.
> the table is actually 18million rows(my mistake), we have had some
> performance issues with it such as when linked to other large
> tables(4million+ records).
> Do you think the number of records or the row size is more important
> Most of our columns are integers [ID's] but we do have some with
> smalldatetime and one with a 19 char length!
> we have people coming to talk teradata/oracle etc etc but no one has yet
> identified the database design flaws yet. How can we look at this in more
> detail especially in respect to the 20million maximum rowcount table size!
> Appreciate your input
> Marc
>|||To add to what everyone else says, after design (and a 95-column-wide table
may or may not be a design issue, depending on whether this is a fact table;
if so, then 80+ dimensions may be an issue, but I digress) the hardware
is the key. Too often people who claim some fixed number as a maximum don't
think of a Windows server as scalable. A lot will depend on your disk
subsystem, for example. You might be doing your work on IDE drives, or a
slow RAID-5 array, or one of unlimited possibilities. When you start to
approach a large size/many users, the cost does go up greatly, but it is
likely not SQL Server's fault (as these vendors may tell you it is, since
they have a vested interest in you going to Oracle on their hardware).
As Jacco says in particular, you need someone to look at all of these
factors independent of a hardware/software vendor (i.e. they aren't
salespeople!) to get a valid look at what is going on.
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Arguments are to be avoided: they are always vulgar and often convincing."
(Oscar Wilde)
"marcmc" <marcmc@.discussions.microsoft.com> wrote in message
news:40A756CA-0E23-4F44-8EE0-D742C1EF9C99@.microsoft.com...
> thx jacco,
> we have a table 95 columns wide with 3228 characters.
> the table is actually 18million rows(my mistake), we have had some
> performance issues with it such as when linked to other large
> tables(4million+ records).
> Do you think the number of records or the row size is more important
> Most of our columns are integers [ID's] but we do have some with
> smalldatetime and one with a 19 char length!
> we have people coming to talk teradata/oracle etc etc but no one has yet
> identified the database design flaws yet. How can we look at this in more
> detail especially in respect to the 20million maximum rowcount table size!
> Appreciate your input
> Marc
>

Maximum Recommended Databases

I've read MSDN that the theoretical maximum number of databases per
SQL server instance is 32,767. But I'd like to know if there is a
real world recommendation from Microsoft. I know this sounds like a
ridiculous question, but I've inherited an instance with approximately
1,000 databases (and growing) and I'm trying to determine when I
should expect major performance issues (don't laugh). I have a plan
for reducing the number of databases, but it won't happen over night.
I know a lot of this depends on database usage, but I'm just looking
for any information I can find. Thanks in advance!
I have a client with over 5000 databases and still growing. The usage
pattern is such that only a few dozen or even a few hundred are heavily used
at any one time. The system is performing quite well.
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>
|||I don't think I'm going to provide any new information that you don't
already have. But this is kind of like asking how big can a database be, or
how many rows can I get into a table? As the number of databases goes up,
the usage pattern really determines how detrimental each additional database
is to performance (let's leave administration costs out of this for now).
So many factors are involved that it's impossible for anyone to come up with
a real number that would apply to your situation and my situation
identically, even if you could feasibly perform all the real-world tests
required to come up with such a number...
When should you expect performance issues? When your usage patterns exceed
your hardware's capabilities. It is absolutely impossible to tell you that
it will happen when you add 5 or 500 more databases. In fact, it could
conceivably happen after you /*remove*/ databases ... if users of the
remaining databases coincidentally change their usage patterns.
I have relatively high-volume systems with over 500 databases, and they work
very well. Would I want to throw another 500 databases on there? Not in
one shot, I don't think. But it's not the number itself that scares me.
It's the fact that 10 or 15 of those (given my current scenario) would
likely end up high transaction hitters that *could* start to destabilize the
system. On the other hand, if they're all duds, then 500 could be added
easily without any impact to the system.
Aaron Bertrand
SQL Server MVP
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>
|||On Mon, 06 Aug 2007 19:37:24 -0000, Chris Reeder <clreeder@.gmail.com>
wrote:

>I've read MSDN that the theoretical maximum number of databases per
>SQL server instance is 32,767. But I'd like to know if there is a
>real world recommendation from Microsoft. I know this sounds like a
>ridiculous question, but I've inherited an instance with approximately
>1,000 databases (and growing) and I'm trying to determine when I
>should expect major performance issues (don't laugh).
The critical problem would probably be that they will never share
pages in a working set, so caching will be less effective. But that's
if they're all active at once, which is unlikely ... isn't it?
A few more open file handles at the OS level, ... nah. Whatever
insight you can get on data locality and access patterns is probably
best, but hey, that's true even if it's all a single database, too!
Josh
|||> The critical problem would probably be that they will never share
> pages in a working set, so caching will be less effective.
Hmm, can you elaborate on this? When you have several databases, the data for each customer (or
whatever differentiates these databases) obviously needs to be in separate tables, per customer. So,
data for one customer is on a separate set of pages compared to some other customer, meaning data
retrieved for one customer cannot be used when querying for some other customer. But the same thing
would happen if the same set of tables were in one database (except for very small tables where mixed
extents come into play). But if this is re-modeled and all customers share the same table, we have a
different situation...

> A few more open file handles at the OS level
Interesting you mention this, since this can impact caching. With many databases, setting auto close
might be needed (especially on 32-bit due to OS limitations on open file handles). And when a
database is auto-closed, data in cache is removed (I know for sure this applies to plans; I'd have
to do a test to verify it applies to data).
Perhaps plan caching is a bigger issue? Assuming best practices are followed and stored procedures
are used, we would end up with duplicate procedures, one set for each database. Even if we
don't use stored procedures, we won't be able to re-use cached plans across databases...
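Tibor's plan-cache point can be sketched abstractly (this illustrates only the counting argument, not SQL Server's actual cache internals):

```python
# Abstract sketch: if plans are effectively keyed per database, N
# databases each running the same procedures hold N copies of every
# plan; one shared database holds a single reusable entry per proc.

def cache_entries(databases, procs, shared):
    cache = set()
    for db in databases:
        for proc in procs:
            key = proc if shared else (db, proc)  # cache key model
            cache.add(key)
    return len(cache)

dbs = [f"db{i}" for i in range(1000)]       # hypothetical names
procs = ["usp_GetOrders", "usp_GetCustomer"]
print(cache_entries(dbs, procs, shared=False))  # 2000 per-database plans
print(cache_entries(dbs, procs, shared=True))   # 2 shared plans
```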
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:799fb3l6bsc2its7r7mfma3kjqr65gekh1@.4ax.com...
> On Mon, 06 Aug 2007 19:37:24 -0000, Chris Reeder <clreeder@.gmail.com>
> wrote:
>
> The critical problem would probably be that they will never share
> pages in a working set, so caching will be less effective. But that's
> if they're all active at once, which is unlikely ... isn't it?
> A few more open file handles at the OS level, ... nah. Whatever
> insight you can get on data locality and access patterns is probably
> best, but hey, that's true even if it's all a single database, too!
> Josh
>
|||I have a client with 6500+ databases. Biggest issues are rolling out
changes, size of msdb database (make it BIG to begin with to avoid
fragmentation from autogrowths), and most 3rd party tools break down when
you try to use them against the server due to massive amounts of system data
they typically try to read. I had a developer build a multi-threaded
"database shotgun" for doing maintenance stuff. HUGE help!
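A minimal sketch of such a multi-threaded maintenance runner, with run_maintenance as a hypothetical stand-in for whatever the real job executes against each database (DBCC, index rebuilds, deploys) via your driver of choice:

```python
# Hypothetical "database shotgun": run the same maintenance step
# against thousands of databases using a small worker pool, so slow
# databases don't serialize the whole run.
from concurrent.futures import ThreadPoolExecutor

def run_maintenance(db_name):
    # placeholder: the real version would connect and run a script here
    return f"{db_name}: done"

def shotgun(databases, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order; exceptions surface when iterated
        return list(pool.map(run_maintenance, databases))

results = shotgun([f"customer_{i:04d}" for i in range(20)])
print(len(results))  # 20: one result per database
```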
TheSQLGuru
President
Indicium Resources, Inc.
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>
|||> they typically try to read. I had a developer build a multi-threaded
> "database shotgun" for doing maintenance stuff. HUGE help!
I use SQL Farm Combine for this.
http://www.sqlfarms.com/
|||On Tue, 7 Aug 2007 08:41:20 +0200, "Tibor Karaszi"
<tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote:

>Hmm, can you elaborate on this?
Just speculating wildly.
If they are a single database, there is at least a *chance* customers,
users, sessions, etc will share pages!

>Perhaps plan caching is a bigger issue?
Ugh, you're right!
But then, wouldn't this be quickly obvious to anyone running
hundreds++ of databases? And the initial comments mostly seem to be
that it runs pretty well.
Josh
|||> If they are a single database, there is at least a *chance* customers,
> users, sessions, etc will share pages!
Agreed. To give a more precise answer, we would have to know the data model after coalescing into
one database.

> Ugh, you're right!
> But then, wouldn't this be quickly obvious to anyone running
> hundreds++ of databases? And the initial comments mostly seem to be
> that it runs pretty well.
True. But "run pretty well" doesn't mean that it cannot run better... :-)
Of course, this as well depends on the app. If stored procedures are used, then I expect to see some
benefit of having, for a particular proc, only one plan in cache instead of one per database. If
procedures are not used, then chances are that we mostly have exact text matching of the query
string in cache, and re-use would be minimal even if all is in the same database.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:9acib3h5ct5mkgtoectjg2s9aqe8k5633k@.4ax.com...
> On Tue, 7 Aug 2007 08:41:20 +0200, "Tibor Karaszi"
> <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote:
>
> Just speculating wildly.
> If they are a single database, there is at least a *chance* customers,
> users, sessions, etc will share pages!
>
> Ugh, you're right!
> But then, wouldn't this be quickly obvious to anyone running
> hundreds++ of databases? And the initial comments mostly seem to be
> that it runs pretty well.
> Josh
>
>

Maximum Recommended Databases

I've read MSDN that the theoretical maximum number of databases per
SQL server instance is 32,767. But I'd like to know if there is a
real world recommendation from Microsoft. I know this sounds like a
ridiculous question, but I've inherited an instance with approximately
1,000 databases (and growing) and I'm trying to determine when I
should expect major performance issues (don't laugh). I have a plan
for reducing the number of databases, but it won't happen over night.
I know a lot of this depends on database usage, but I'm just looking
for any information I can find. Thanks in advance!I have a client with over 5000 databases and still growing. The usage
pattern is such that only a few dozen or even a few hundred are heavily used
at any one time. The system is performing quite well.
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>|||I don't think I'm going to provide any new information that you don't
already have. But this is kind of like asking how big can a database be, or
how many rows can I get into a table? As the number of databases goes up,
the usage pattern really determines how detrimental each additional database
is to performance (let's leave administration costs out of this for now).
So many factors are involved that it's impossible for anyone to come up with
a real number that would apply to your situation and my situation
identically, even if you could feasibly perform all the real-world tests
required to come up with such a number...
When should you expect performance issues? When your usage patterns exceed
your hardware's capabilities. It is absolutely impossible to tell you that
it will happen when you add 5 or 500 more databases. In fact, it could
conceivably happen after you /*remove*/ databases ... if users of the
remaining databases coincidentally change their usage patterns.
I have relatively high-volume systems with over 500 databases, and they work
very well. Would I want to throw another 500 databases on there? Not in
one shot, I don't think. But it's not the number itself that scares me.
It's the fact that 10 or 15 of those (given my current scenario) would
likely end up high transaction hitters that *could* start to destabilize the
system. On the other hand, if they're all duds, then 500 could be added
easily without any impact to the system.
Aaron Bertrand
SQL Server MVP
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>|||On Mon, 06 Aug 2007 19:37:24 -0000, Chris Reeder <clreeder@.gmail.com>
wrote:

>I've read MSDN that the theoretical maximum number of databases per
>SQL server instance is 32,767. But I'd like to know if there is a
>real world recommendation from Microsoft. I know this sounds like a
>ridiculous question, but I've inherited an instance with approximately
>1,000 databases (and growing) and I'm trying to determine when I
>should expect major performance issues (don't laugh).
The critical problem would probably be that they will never share
pages in a working set, so caching will be less effective. But that's
if they're all active at once, which is unlikely ... isn't it?
A few more open file handles at the OS level, ... nah. Whatever
insight you can get on data locality and access patterns is probably
best, but hey, that's true even if it's all a single database, too!
Josh|||> The critical problem would probably be that they will never share
> pages in a working set, so caching will be less effective.
Hmm, can you elaborate on this? When you have several databases, the data fo
r each customer (or
whatever differentiate these databases) obviously need to be in separate tab
les, per customer. So,
data for one customer is on a separate set of pages compared to some other c
ustomer meaning data
retrieved for one customer cannot be used when querying from some other cust
omer. But the same thing
would happen if the same set of table were in one database (except for very
small tables where mixed
extents come into play). But if this is re-modeled and all customers share t
he same table, we have a
different situation...

> A few more open file handles at the OS level
Interesting you mention this, since this can impact caching. With many datab
ases, setting auto close
might be needed (especially in 32 bit due to OS limitation and open file han
dles). And when a
database is auto closed, data in cache is removed (I know for sure this appl
ies to plans, I'd have
to do a test to verify it applies to data).
Perhaps plan caching is a bigger issue? Assuming best practices are followed
and stored procedures
are used. Now, we would end up with duplicate procedures, one set for each d
atabase. Even if we
don't use stored procedures, we won't be able to re-use cached plans across
databases...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:799fb3l6bsc2its7r7mfma3kjqr65gekh1@.
4ax.com...
> On Mon, 06 Aug 2007 19:37:24 -0000, Chris Reeder <clreeder@.gmail.com>
> wrote:
>
> The critical problem would probably be that they will never share
> pages in a working set, so caching will be less effective. But that's
> if they're all active at once, which is unlikely ... isn't it?
> A few more open file handles at the OS level, ... nah. Whatever
> insight you can get on data locality and access patterns is probably
> best, but hey, that's true even if it's all a single database, too!
> Josh
>|||I have a client with 6500+ databases. Biggest issues are rolling out
changes, size of msdb database (make it BIG to begin with to avoid
fragmentation from autogrowths), and most 3rd party tools break down when
you try to use them against the server due to massive amounts of system data
they typically try to read. I had a developer build a multi-threaded
"database shotgun" for doing maintenance stuff. HUGE help!
TheSQLGuru
President
Indicium Resources, Inc.
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
Maximum Recommended Databases

I've read on MSDN that the theoretical maximum number of databases per
SQL server instance is 32,767. But I'd like to know if there is a
real world recommendation from Microsoft. I know this sounds like a
ridiculous question, but I've inherited an instance with approximately
1,000 databases (and growing) and I'm trying to determine when I
should expect major performance issues (don't laugh). I have a plan
for reducing the number of databases, but it won't happen overnight.
I know a lot of this depends on database usage, but I'm just looking
for any information I can find. Thanks in advance!|||I have a client with over 5000 databases and still growing. The usage
pattern is such that only a few dozen or even a few hundred are heavily used
at any one time. The system is performing quite well.
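For anyone taking stock of a similar inheritance, a quick first step is to count the databases on the instance and see how many are actually online — a minimal sketch, assuming SQL Server 2005 or later (where the sys.databases catalog view is available):

```sql
-- Total number of databases on the instance.
SELECT COUNT(*) AS total_databases
FROM sys.databases;

-- Break the count down by state (ONLINE, OFFLINE, RECOVERING, ...).
SELECT state_desc, COUNT(*) AS database_count
FROM sys.databases
GROUP BY state_desc;
```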
--
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>|||I don't think I'm going to provide any new information that you don't
already have. But this is kind of like asking how big can a database be, or
how many rows can I get into a table? As the number of databases goes up,
the usage pattern really determines how detrimental each additional database
is to performance (let's leave administration costs out of this for now).
So many factors are involved that it's impossible for anyone to come up with
a real number that would apply to your situation and my situation
identically, even if you could feasibly perform all the real-world tests
required to come up with such a number...
When should you expect performance issues? When your usage patterns exceed
your hardware's capabilities. It is absolutely impossible to tell you that
it will happen when you add 5 or 500 more databases. In fact, it could
conceivably happen after you /*remove*/ databases ... if users of the
remaining databases coincidentally change their usage patterns.
I have relatively high-volume systems with over 500 databases, and they work
very well. Would I want to throw another 500 databases on there? Not in
one shot, I don't think. But it's not the number itself that scares me.
It's the fact that 10 or 15 of those (given my current scenario) would
likely end up high transaction hitters that *could* start to destabilize the
system. On the other hand, if they're all duds, then 500 could be added
easily without any impact to the system.
--
Aaron Bertrand
SQL Server MVP
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>|||On Mon, 06 Aug 2007 19:37:24 -0000, Chris Reeder <clreeder@.gmail.com>
wrote:
>I've read MSDN that the theoretical maximum number of databases per
>SQL server instance is 32,767. But I'd like to know if there is a
>real world recommendation from Microsoft. I know this sounds like a
>ridiculous question, but I've inherited an instance with approximately
>1,000 databases (and growing) and I'm trying to determine when I
>should expect major performance issues (don't laugh).
The critical problem would probably be that they will never share
pages in a working set, so caching will be less effective. But that's
if they're all active at once, which is unlikely ... isn't it?
A few more open file handles at the OS level, ... nah. Whatever
insight you can get on data locality and access patterns is probably
best, but hey, that's true even if it's all a single database, too!
Josh|||> The critical problem would probably be that they will never share
> pages in a working set, so caching will be less effective.
Hmm, can you elaborate on this? When you have several databases, the data for each customer (or
whatever differentiate these databases) obviously need to be in separate tables, per customer. So,
data for one customer is on a separate set of pages compared to some other customer meaning data
retrieved for one customer cannot be used when querying from some other customer. But the same thing
would happen if the same set of table were in one database (except for very small tables where mixed
extents come into play). But if this is re-modeled and all customers share the same table, we have a
different situation...
> A few more open file handles at the OS level
Interesting you mention this, since this can impact caching. With many databases, setting auto close
might be needed (especially in 32 bit due to OS limitation and open file handles). And when a
database is auto closed, data in cache is removed (I know for sure this applies to plans, I'd have
to do a test to verify it applies to data).
Perhaps plan caching is a bigger issue? Assuming best practices are followed and stored procedures
are used. Now, we would end up with duplicate procedures, one set for each database. Even if we
don't use stored procedures, we won't be able to re-use cached plans across databases...
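To see which databases already have auto close enabled — and to flip it for one of them — something along these lines works; a sketch, again assuming SQL Server 2005+, with CustomerDb42 being a hypothetical database name:

```sql
-- List databases with AUTO_CLOSE currently enabled.
SELECT name
FROM sys.databases
WHERE is_auto_close_on = 1;

-- Enable AUTO_CLOSE for a single (hypothetical) database.
ALTER DATABASE CustomerDb42 SET AUTO_CLOSE ON;
```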
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:799fb3l6bsc2its7r7mfma3kjqr65gekh1@.4ax.com...
> On Mon, 06 Aug 2007 19:37:24 -0000, Chris Reeder <clreeder@.gmail.com>
> wrote:
>>I've read MSDN that the theoretical maximum number of databases per
>>SQL server instance is 32,767. But I'd like to know if there is a
>>real world recommendation from Microsoft. I know this sounds like a
>>ridiculous question, but I've inherited an instance with approximately
>>1,000 databases (and growing) and I'm trying to determine when I
>>should expect major performance issues (don't laugh).
> The critical problem would probably be that they will never share
> pages in a working set, so caching will be less effective. But that's
> if they're all active at once, which is unlikely ... isn't it?
> A few more open file handles at the OS level, ... nah. Whatever
> insight you can get on data locality and access patterns is probably
> best, but hey, that's true even if it's all a single database, too!
> Josh
>|||I have a client with 6500+ databases. Biggest issues are rolling out
changes, size of msdb database (make it BIG to begin with to avoid
fragmentation from autogrowths), and most 3rd party tools break down when
you try to use them against the server due to massive amounts of system data
they typically try to read. I had a developer build a multi-threaded
"database shotgun" for doing maintenance stuff. HUGE help!
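A single-threaded version of that idea can be sketched with a cursor over sys.databases and dynamic SQL — the multi-threading would have to live in an external tool, and CHECKDB here is just a stand-in for whatever maintenance command you actually run:

```sql
-- Run a maintenance command against every online user database, one at a time.
DECLARE @db sysname, @sql nvarchar(500);
DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE state_desc = 'ONLINE'
      AND database_id > 4;  -- skip the system databases
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'DBCC CHECKDB (' + QUOTENAME(@db) + N') WITH NO_INFOMSGS;';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
```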
--
TheSQLGuru
President
Indicium Resources, Inc.
"Chris Reeder" <clreeder@.gmail.com> wrote in message
news:1186429044.319226.206990@.b79g2000hse.googlegroups.com...
> I've read MSDN that the theoretical maximum number of databases per
> SQL server instance is 32,767. But I'd like to know if there is a
> real world recommendation from Microsoft. I know this sounds like a
> ridiculous question, but I've inherited an instance with approximately
> 1,000 databases (and growing) and I'm trying to determine when I
> should expect major performance issues (don't laugh). I have a plan
> for reducing the number of databases, but it won't happen over night.
> I know a lot of this depends on database usage, but I'm just looking
> for any information I can find. Thanks in advance!
>|||> they typically try to read. I had a developer build a multi-threaded
> "database shotgun" for doing maintenance stuff. HUGE help!
I use SQL Farm Combine for this.
http://www.sqlfarms.com/|||On Tue, 7 Aug 2007 08:41:20 +0200, "Tibor Karaszi"
<tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote:
>> The critical problem would probably be that they will never share
>> pages in a working set, so caching will be less effective.
>Hmm, can you elaborate on this?
Just speculating wildly.
If they are a single database, there is at least a *chance* customers,
users, sessions, etc will share pages!
>Perhaps plan caching is a bigger issue?
Ugh, you're right!
But then, wouldn't this be quickly obvious to anyone running
hundreds++ of databases? And the initial comments mostly seem to be
that it runs pretty well.
Josh|||> If they are a single database, there is at least a *chance* customers,
> users, sessions, etc will share pages!
Agreed. To give a more precise answer, we would have to know the data model after coalescing into
one database.
>>Perhaps plan caching is a bigger issue?
> Ugh, you're right!
> But then, wouldn't this be quickly obvious to anyone running
> hundreds++ of databases? And the initial comments mostly seem to be
> that it runs pretty well.
True. But "run pretty well" doesn't mean that it cannot run better... :-)
Of course, this as well depends on the app. If stored procedures are used, then I expect to see some
benefit of having, for a particular proc, only one plan in cache instead of one per database. If
procedures are not used, then chances are that we mostly have exact text matching of the query
string in cache, and re-use would be minimal even if all is in the same database.
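One way to see that duplication in practice is to group the plan cache by statement text and count how many cached plans exist for the same statement — a sketch against the 2005+ plan-cache DMVs (dbid in sys.dm_exec_sql_text is populated for object plans such as stored procedures, so the per-database count is only meaningful for those):

```sql
-- Stored-procedure statements with more than one cached plan;
-- with one database per customer, expect near-identical procs to
-- show up once per database.
SELECT SUBSTRING(st.text, 1, 100) AS query_start,
       COUNT(DISTINCT st.dbid)    AS databases_with_a_plan,
       COUNT(*)                   AS cached_plans
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Proc'
GROUP BY SUBSTRING(st.text, 1, 100)
HAVING COUNT(*) > 1
ORDER BY cached_plans DESC;
```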
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"JXStern" <JXSternChangeX2R@.gte.net> wrote in message
news:9acib3h5ct5mkgtoectjg2s9aqe8k5633k@.4ax.com...
> On Tue, 7 Aug 2007 08:41:20 +0200, "Tibor Karaszi"
> <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote:
>> The critical problem would probably be that they will never share
>> pages in a working set, so caching will be less effective.
>>Hmm, can you elaborate on this?
> Just speculating wildly.
> If they are a single database, there is at least a *chance* customers,
> users, sessions, etc will share pages!
>>Perhaps plan caching is a bigger issue?
> Ugh, you're right!
> But then, wouldn't this be quickly obvious to anyone running
> hundreds++ of databases? And the initial comments mostly seem to be
> that it runs pretty well.
> Josh
>
>

Wednesday, March 7, 2012

Maximum # of databases on 1 sql server

Does anyone know the recommended maximum number of
databases that can be supported on one SQL install. Is
there a recommended maximum to ensure that performance is
not degraded.
Thanks|||32,767 PER INSTANCE OF SQL SERVER. Different editions of SQL Server support
a different maximum number of instances per machine.
"John" <jtscott@.mclanefs.com> wrote in message
news:05a801c3929f$8a76e950$a301280a@.phx.gbl...
> Does any one know the reccomended maximum number of
> databases that can be supported on on SQL install. Is
> their a recommended maximum to insure that performance in
> not degraded.
> Thanks|||We have servers with up to 150 databases, which represents over 100
applications. It's all about reducing interference (ie, keeping
transactions small, no !#!##%$ cursors, etc), and managing hardware
capacity. Go big on the hardware; if you're saving $$ from sharing, it's
worth it. Do not run at > 40% peak.
"John" <jtscott@.mclanefs.com> wrote in message
news:05a801c3929f$8a76e950$a301280a@.phx.gbl...
> Does any one know the reccomended maximum number of
> databases that can be supported on on SQL install. Is
> their a recommended maximum to insure that performance in
> not degraded.
> Thanks|||In addition:
Go above some 50-70 databases and you will notice that the tools (EM...) aren't that well suited.
And a large number of databases can be a sign of bad design. (Doesn't have to be, but can be.)
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"Kevin" <ReplyTo@.Newsgroups.only> wrote in message news:OfUQGNskDHA.424@.TK2MSFTNGP10.phx.gbl...
> We have servers with up to 150 databases, which represents over 100
> applications. It's all about reducing interference (ie, keeping
> transactions small, no !#!##%$ cursors, etc), and managing hardware
> capacity. Go big on the hardware, if you're saving $$ from sharing, it's
> worth it, do not run at > 40% peak.
>
> "John" <jtscott@.mclanefs.com> wrote in message
> news:05a801c3929f$8a76e950$a301280a@.phx.gbl...
> > Does any one know the reccomended maximum number of
> > databases that can be supported on on SQL install. Is
> > their a recommended maximum to insure that performance in
> > not degraded.
> >
> > Thanks
>