Monday, February 20, 2012

MAX() and DATEDIFF() question

I've written some software that listens to meters on a specified port and
writes XML files when any of those meters "speak." There are some 200 of
them, and each is programmed to "speak" at 10 second intervals, so there's a
fair amount of traffic.
Now what I've been asked to do is write a record to a database for each of
these meters at ONE MINUTE intervals. At first I simply set up a one minute
timer and pointed my software to the directory where the XML files live and
said "write a record for every file in there." But, alas, occasionally one
(or several) of the meters' files were being overwritten at that particular
time and thus weren't available.
So, I changed things so that each time a meter speaks, I check the database
to see if it has been a minute or more since the last database update for
that meter, and if it has I write a record out. Now this works fine, but it
uses an enormous amount of CPU time: at times nearly 50% of a dual 2 gig
Pentium 3 system is hijacked.
So, my question is: is my SQL (which calls the DATEDIFF() and MAX()
functions) the likely culprit for this inefficiency? My colleague believes
that it is, since (he says) calling MAX() forces a search through all the
records contained in the table (potentially a boatload). Here's the query:
SELECT DATEDIFF(second, MAX([LastUpdateTime]), '" + [CurrentTime] + "') from
" + MeterName
Thanks in advance for any advice on improving ANY of this!
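For reference, the per-reading check described above amounts to something
like the following per-meter batch. This is only a sketch: LastUpdateTime
and the DATEDIFF()/MAX() call come from the post, while the table name
Meter042, the [Reading] column, and the literal value are made up for
illustration.

-- Check whether a minute has passed since the last stored reading,
-- and write a new row if so (or if the table is still empty).
DECLARE @now datetime, @secondsSinceLast int
SET @now = GETDATE()

SELECT @secondsSinceLast = DATEDIFF(second, MAX([LastUpdateTime]), @now)
FROM Meter042                        -- hypothetical per-meter table

IF @secondsSinceLast IS NULL OR @secondsSinceLast >= 60
    INSERT INTO Meter042 ([LastUpdateTime], [Reading])  -- [Reading] is assumed
    VALUES (@now, 123.45)            -- the value just received from the meter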
Is there an index on the LastUpdateTime column? That might make a
significant difference.
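For example, a minimal sketch, assuming SQL Server and (as the query
suggests) one table per meter; the table name Meter042 is made up:

-- With an index on LastUpdateTime, MAX() can be answered from the end of
-- the index instead of scanning every row in the table.
CREATE INDEX IX_Meter042_LastUpdateTime ON Meter042 ([LastUpdateTime])

-- The existing query then becomes a cheap seek to the highest key:
SELECT DATEDIFF(second, MAX([LastUpdateTime]), GETDATE())
FROM Meter042

The same index would have to be created on each meter's table, of course.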
--
Graeme Malcolm
Principal Technologist
Content Master Ltd.
www.contentmaster.com
www.microsoft.com/mspress/books/6137.asp
"BCM" <BCM@.none.com> wrote in message news:4104f338$1@.news.unc.edu...
I've written some software that listens to meters on a specified port and
writes XML files when any of those meters "speak." There are some 200 of
them, and each is programmed to "speak" at 10 second intervals, so there's a
fair amount of traffic.
Now what I've been asked to do is write a record to a database for each of
these meters at ONE MINUTE intervals. At first I simply set up a one minute
timer and pointed my software to the directory where the XML files live and
said "write a record for every file in there." But, alas, occasionally one
(or several) of the meters' files were being overwritten at that particular
time and thus weren't available.
So, I changed things so that each time a meter speaks, I check the database
to see if it has been a minute or more since the last database update for
that meter, and if it has I write a record out. Now this works fine, but it
uses an enormous amount of CPU time: at times nearly 50% of a dual 2 gig
Pentium 3 system is hijacked.
So, my question is: is my SQL (which calls the DATEDIFF() and MAX()
functions) the likely culprit for this inefficiency? My colleague believes
that it is, since (he says) calling MAX() forces a search through all the
records contained in the table (potentially a boatload). Here's the query:
SELECT DATEDIFF(second, MAX([LastUpdateTime]), '" + [CurrentTime] + "') from
" + MeterName
Thanks in advance for any advice on improving ANY of this!

Hi,
Just a thought, but the way I'd approach it is to have a 'feeder' table:
whenever one of your meters 'speaks', you write the info to that table,
irrespective of whether there is already a record in there. Don't put any
indexes on this table, so writes are very quick. Also, don't close and
re-open the connection for each write; keep it open and re-use it. Then have
a 'job' on your database server run every minute that takes the latest
reading for each meter, updates the main table, and deletes the records in
the feeder table when it is finished. If you wrap this job in a transaction,
no writes can be made while the update is in progress. That is nice as far
as atomic transactions are concerned, but it could cause blocking for your
updater. Therefore, if your feeder table has an auto-incrementing field (an
IDENTITY column in T-SQL), the server job can read the maximum value when it
starts and process only the rows with an identity value <= that value,
thereby ignoring any new rows entered while the job is running, and finally
delete all records where the identity value is <= the value read at the start.
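A minimal sketch of such a job, assuming SQL Server and made-up table and
column names (MeterFeed as the unindexed feeder table with a FeedID IDENTITY
column, MeterReadings as the main table into which the once-a-minute rows
are inserted):

-- Read the identity high-water mark once, at the start of the job.
DECLARE @maxID int
SELECT @maxID = MAX(FeedID) FROM MeterFeed

BEGIN TRANSACTION

-- Copy the latest reading per meter (among the rows we are processing)
-- into the main table.
INSERT INTO MeterReadings (MeterName, ReadingTime, Reading)
SELECT f.MeterName, f.ReadingTime, f.Reading
FROM MeterFeed f
JOIN (SELECT MeterName, MAX(FeedID) AS LatestID
      FROM MeterFeed
      WHERE FeedID <= @maxID
      GROUP BY MeterName) latest
  ON f.FeedID = latest.LatestID

-- Rows written after @maxID was read are left alone for the next run.
DELETE FROM MeterFeed WHERE FeedID <= @maxID

COMMIT TRANSACTION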
If you need any more info or explanations of my ramblings just shout.
Graham
"BCM" <BCM@.none.com> wrote in message news:4104f338$1@.news.unc.edu...
> I've written some software that listens to meters on a specified port and
> writes XML files when any of those meters "speak." There are some 200 of
> them, and each is programmed to "speak" at 10 second intervals, so there's
a
> fair amount of traffic.
> Now what I've been asked to do is write a record to a database for each of
> these meters at ONE MINUTE intervals. At first I simply set up a one
minute
> timer and pointed my software to the directory where the XML files live
and
> said "write a record for every file in there." But, alas, occasionally one
> (or several) of the meters' files were being overwritten at that
particular
> time and thus weren't available.
> So, I changed things so that each time a meter speaks, I check the
database
> to see if it has been a minute or more since the last database update for
> that meter, and if it has I write a record out. Now this works fine, but
it
> uses an enormous amount of CPU time: at times nearly 50% of a dual 2 gig
> Pentium 3 system is hijacked.
> So, my question is: is my SQL (which calls the DATEDIFF() and MAX()
> functions) the likely culprit for this inefficiency? My colleague believes
> that it is, since (he says) calling MAX() forces a search through all the
> records contained in the table (potentially a boatload). Here's the query:
> SELECT DATEDIFF(second, MAX([LastUpdateTime]), '" + [CurrentTime] + "')
from
> " + MeterName
> Thanks in advance for any advice on improving ANY of this!
>
