
Drop a table based on the count of records in another table


BobThornton (Programmer)
Sep 12, 2022
I run table updates daily. What I want to know is: can I drop a table based on the count of records in another table?
I run a query that is built with CTEs; the last CTE's output is FINAL.
What I want to do is:
if the count of FINAL is greater than 1
then
drop table PREV (this contains the data from 2 days ago)
drop table CURR (this contains the data from the previous day)
select * from FINAL into CURR
else
if the count of FINAL is not greater than 1, or the query gets killed,
then exit
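In rough T-SQL terms, this is what I'm after (just a sketch, not tested; #Final here is only a placeholder for the materialized output of the last CTE, since a CTE can only be referenced by the statement that defines it):
[pre]
IF (SELECT COUNT(*) FROM #Final) > 1
BEGIN
    DROP TABLE PREV;                 -- data from 2 days ago
    DROP TABLE CURR;                 -- data from the previous day
    SELECT * INTO CURR FROM #Final;  -- re-create CURR from today's result
END
ELSE
    RETURN;                          -- too few rows, or the query was killed: leave everything alone
[/pre]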
 
Not really the answer to your question… but
That approach sounds like a very bad idea (read: bad database design, IMO).
If you do DROP table(s), that means to me you also need to CREATE those tables often, right? Every day?
[hairpull2]


---- Andy

"Hmm...they have the internet on computers now"--Homer Simpson
 
Thanks Andrejeck. No, the tables do not need to be re-created.
What approach would you take?
I used to drop the PREV table, move CURR to PREV, drop CURR, then load the results into CURR.
 
Looks to me like you have 3 tables with exactly the same structure (fields):
PREV - this contains the data from 2 days ago.
CURR - this contains the data from the previous day.
FINAL - today's data?

I would just have one table, let's name it BOB, and I would just add a DATE field (MY_DATE?) if it does not exist already. You could create simple VIEWs that would match your current code structure:
[pre]
PREV_V - Select * from BOB where MY_DATE = GETDATE() - 2
CURR_V - Select * from BOB where MY_DATE = GETDATE() - 1
FINAL_V - Select * from BOB where MY_DATE = GETDATE()
[/pre]
If you don't care about the data older than 2 days, just DELETE it. (I would keep it. You never know if you will need it in the future...) :)
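Something like this, just as a sketch (SQL Server syntax assumed, since your code uses SELECT INTO; BOB and MY_DATE are placeholder names):
[pre]
ALTER TABLE BOB ADD MY_DATE date NOT NULL DEFAULT (CAST(GETDATE() AS date));
GO
CREATE VIEW PREV_V  AS SELECT * FROM BOB WHERE MY_DATE = CAST(DATEADD(day, -2, GETDATE()) AS date);
GO
CREATE VIEW CURR_V  AS SELECT * FROM BOB WHERE MY_DATE = CAST(DATEADD(day, -1, GETDATE()) AS date);
GO
CREATE VIEW FINAL_V AS SELECT * FROM BOB WHERE MY_DATE = CAST(GETDATE() AS date);
GO
-- Housekeeping, only if you decide not to keep history:
DELETE FROM BOB WHERE MY_DATE < CAST(DATEADD(day, -2, GETDATE()) AS date);
[/pre]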

BobThornton said:
No the tables do not need to be re-created.
On the contrary, that's what you do ([red]re-create[/red] CURR table) by: [tt]
drop table CURR
select * from FINAL [red]into[/red] CURR[/tt]

---- Andy

"Hmm...they have the internet on computers now"--Homer Simpson
 
Thanks! The count of records is usually >200K, so keeping that many tables with that much data is the concern.
The current process is automated using TOAD, so I want to keep it automated.
I currently have -
[pre]
IF OBJECT_ID('AdHocData.MDCOutEd.FL_MCD_Uni_Prev') IS NOT NULL
    DROP TABLE AdHocData.MDCOutEd.FL_MCD_Uni_Prev

-- re-create PREV from yesterday's CURR
SELECT *
INTO AdHocData.MDCOutEd.FL_MCD_Uni_Prev
FROM AdHocData.MDCOutEd.FL_MCD_Uni_Curr

IF OBJECT_ID('AdHocData.MDCOutEd.FL_MCD_Uni_Curr') IS NOT NULL
    DROP TABLE AdHocData.MDCOutEd.FL_MCD_Uni_Curr

query

-- re-create CURR from today's FINAL
SELECT *
INTO AdHocData.MDCOutEd.FL_MCD_Uni_Curr
FROM FINAL
[/pre]

The problem I am having is that one of the DBAs kills the process, and/or the main table the query is pulling from is blank.
If the process is killed on back-to-back nights, my CURR and PREV tables will be blank, with no data to use until a successful run of the process happens.
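What I'd like to end up with is roughly this (sketch only, assuming the query step first writes FINAL out as a real or temp table), so a killed run or a blank source leaves the existing tables alone:
[pre]
IF (SELECT COUNT(*) FROM FINAL) > 1
BEGIN
    -- only roll PREV/CURR forward when today's result actually has rows
    ... the DROP / SELECT INTO steps from above ...
END
-- otherwise do nothing, so the last good CURR and PREV survive a killed or empty run
[/pre]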

Sorry for being a pain!
 
You are not 'being a pain' :) If we cannot have a conversation here, where can you (we?) do it?
>200K records is nothing to Oracle, if you set your table right. But that's just my opinion.

But any time you do [tt]SELECT * [red]INTO[/red] XYZ[/tt], you CREATE XYZ; if XYZ already exists, the statement fails. So by dropping and re-loading this way, you re-create the table every time you run it.
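For example (SQL Server; XYZ and SRC are just placeholder names):
[pre]
SELECT * INTO XYZ FROM SRC;          -- creates XYZ, fails if XYZ already exists
INSERT INTO XYZ SELECT * FROM SRC;   -- appends rows to an XYZ that already exists
[/pre]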

>one of the DBA's kills the process
Well, if you DROP and CREATE tables a lot, your DBA may be ticked off. [mad]

---- Andy

"Hmm...they have the internet on computers now"--Homer Simpson
 
BTW, instead of DROP TABLE, you may want to consider TRUNCATE TABLE.
It is like [tt]DELETE FROM Table[/tt], but you cannot do a ROLLBACK. And it takes a lot fewer resources than DROP and CREATE.
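For example, instead of the DROP + SELECT INTO pair, something like this (just a sketch, using your CURR table name; assumes FINAL is available as a table at that point):
[pre]
-- keep the table, just empty it and re-load it
TRUNCATE TABLE AdHocData.MDCOutEd.FL_MCD_Uni_Curr;

INSERT INTO AdHocData.MDCOutEd.FL_MCD_Uni_Curr
SELECT * FROM FINAL;
[/pre]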

Just a suggestion...

---- Andy

"Hmm...they have the internet on computers now"--Homer Simpson
 
And isn't the whole job done by renaming the tables? That would take the least effort.
Or, let me ask: when you're done with the process, CURR becomes tomorrow's PREV data, right? Unless updates made today make some of it more recent. Why don't you then have a new, empty table for each day?

All in all, I would never separate data that way, but if you do, why not take it to its logical conclusion? That would mean: drop PREV, rename CURR to PREV, rename the "today" table to CURR, and create a new, empty "today" table for the next day, as in the sketch below.
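Just as a sketch (SQL Server's sp_rename assumed; TODAY is a placeholder name for the per-day table):
[pre]
DROP TABLE PREV;                            -- discard the oldest day
EXEC sp_rename 'CURR', 'PREV';              -- yesterday's data becomes "2 days ago"
EXEC sp_rename 'TODAY', 'CURR';             -- today's load becomes "previous day"
SELECT * INTO TODAY FROM CURR WHERE 1 = 0;  -- new empty TODAY with the same structure
[/pre]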

Chriss
 
With your approach, Chriss, I still would NOT do DROP nor CREATE:

Truncate PREV, rename PREV to TEMP, rename CURR to PREV, rename TODAY to CURR, and rename TEMP to "today" for the next day.
[wiggle]
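Roughly (sketch only, same placeholder names as above):
[pre]
TRUNCATE TABLE PREV;               -- empty the oldest table instead of dropping it
EXEC sp_rename 'PREV', 'TEMP';     -- set the emptied table aside
EXEC sp_rename 'CURR', 'PREV';
EXEC sp_rename 'TODAY', 'CURR';
EXEC sp_rename 'TEMP', 'TODAY';    -- reuse the empty table for the next day's load
[/pre]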

But, at the end of the day, we keep the same number of records whether it's in 3 tables or one.
One table would be my way to go. Delete what you don't need any more, insert new records. Done.

---- Andy

"Hmm...they have the internet on computers now"--Homer Simpson
 
Thanks to both of you! You both are way over my head.
Chris, are you suggesting that I have one table, BOB?
Nightly, run the process, compare the nightly data to BOB, and only insert those records that do not match?
Would I have to match all 70 fields from the nightly data to BOB?
Thanks again for all of your help and guidance!
 
One table - BOB was my suggestion :)
You said you have >200K records (70 fields) per day?
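If you go with one table, there is no field-by-field matching needed; just tag every row with its load date and append it, something like this (sketch only; the column names are placeholders):
[pre]
INSERT INTO BOB (MY_DATE, field1, field2 /* ...the rest of the 70 fields... */)
SELECT CAST(GETDATE() AS date), field1, field2 /* ... */
FROM FINAL;
[/pre]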

---- Andy

"Hmm...they have the internet on computers now"--Homer Simpson
 
In my opinion, nobody (no human) can 'consume' 200K+ records of 70 fields of data per day, especially when all of it is gone 3 days later. I do know that some systems can produce this many records; that's OK. But (that's just my opinion) chances are you may need just 1 record per day, which would be an analysis of the 200K+ records: SUMs, averages, MINs, MAXes, etc. How are all those 200K+ records used now?
Wouldn't that be nice… [ponder]


---- Andy

"Hmm...they have the internet on computers now"--Homer Simpson
 
Bob, the major idea is that renaming tables, instead of moving data from one table to another, is a fast operation that doesn't shovel data around.

Isn't your problem that the whole process sometimes gets killed? Whether you do 3, 5, or even 100 table renames, it takes roughly zero seconds, compared to moving data around. That way you remove the problem itself, instead of figuring out how to recover after your process was interrupted as a too-long-running process. I can understand if you look at this in disbelief, but as long as the granularity of the data you need to shovel around is whole tables, it can be done with renames instead.

Chriss
 