
Can you force an error if insert limit exceeded?

Status
Not open for further replies.

Jocsta

Programmer
Jun 14, 2002
204
GB
I'm wondering if there is a way of forcing a rollback and an error if an SQL statement attempts to insert more than X rows into a table (it's not an empty table, but a table used as storage for the results of another query). The value X varies on a per-user basis, so I probably can't use a constraint on the table, and the query is run from a Java application over a JDBC connection.

At the moment we go ahead with the full insert, check how many rows were inserted once it completes, and roll it back if that exceeds the limit. I was just wondering if there is a better way: if the user has a limit of 1,000 rows but the other query returns millions, I'd rather fail as quickly as possible.
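For reference, the "insert everything, then check and roll back" approach described above can be sketched as a minimal runnable example. This uses Python with SQLite purely for illustration (the thread's actual stack is Java/JDBC against Oracle); the `source`/`results` tables and the `store_results` helper are hypothetical names, not from the original schema.

```python
import sqlite3

# Illustrative setup: a source table with 50 matching rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER)")
conn.execute("CREATE TABLE results (sid INTEGER, id INTEGER)")
conn.executemany("INSERT INTO source VALUES (?)", [(i,) for i in range(50)])
conn.commit()

def store_results(conn, sid, quota):
    """Run the full INSERT ... SELECT, then roll back if the quota was exceeded."""
    cur = conn.execute(
        "INSERT INTO results (sid, id) SELECT ?, id FROM source", (sid,)
    )
    if cur.rowcount > quota:
        conn.rollback()      # too many rows landed: undo the whole insert
        return False
    conn.commit()
    return True

ok = store_results(conn, 123, 10)   # 50 rows against a quota of 10: rolled back
count = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
```

The drawback the poster describes is visible here: all 50 rows (or 10,000,000) are inserted before the check fires, and only then discarded.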
 
I would suggest that you don't deliberately create an error condition.

If your insert is based on a query, then can the number of records not be compared with X first, and, if it is greater, the user advised that an error would occur and the activity cancelled?

I also question the business requirement here. What is this trying to achieve?

Regards

T
 
I'm oversimplifying the setup, but it works a little like this.
A query is run and the IDs of the records it matches are stored in a table as
INSERT INTO RESULTS (SID, ID) SELECT 123, ID FROM (...user-configured query)

This becomes the list of things the user will work on, and workflow runs over just the records in this list (maybe over a couple of days, or however long they want to keep the list). Up-to-date data isn't necessary in this case, but consistent results for a point in time are.

The user has a quota that limits how many things they can fetch into this results list. At the moment this is handled by checking how many rows were inserted; if that exceeds X, the insert is rolled back and the user is informed that they must refine their query further.

What I was hoping to achieve is an "early out" on this condition, to avoid the unnecessary inserts when, for example, they are allowed 1,000 rows but the query returns 10,000,000. I could run the query twice, once with a COUNT and then, if that's OK, again doing the insert, but I was looking for possible alternatives. In testing, though, the majority of selects fall under the limit, so I want a better way to handle the cases that don't without hurting the performance of the ones that do.

Hope that makes sense ?!?
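The "run the query twice" alternative mentioned above (a COUNT first, then the insert only if the count is within quota) can be sketched as follows. Again this is an illustrative Python/SQLite example, not the poster's Java/JDBC code, and the table and function names are hypothetical.

```python
import sqlite3

# Illustrative setup: 50 matching rows in the source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER)")
conn.execute("CREATE TABLE results (sid INTEGER, id INTEGER)")
conn.executemany("INSERT INTO source VALUES (?)", [(i,) for i in range(50)])
conn.commit()

def store_if_within_quota(conn, sid, quota):
    """COUNT first; only run the INSERT when the count fits the quota."""
    (n,) = conn.execute("SELECT COUNT(*) FROM source").fetchone()
    if n > quota:
        return False        # refuse before inserting a single row
    conn.execute(
        "INSERT INTO results (sid, id) SELECT ?, id FROM source", (sid,)
    )
    conn.commit()
    return True

refused = store_if_within_quota(conn, 123, 10)    # 50 > 10: refused up front
accepted = store_if_within_quota(conn, 123, 100)  # 50 <= 100: inserted
```

The trade-off is the one the poster anticipates: the (possibly expensive) user query effectively runs twice in the common case where the count passes.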
 

Try a limit on the query itself (using an analytic row number in an inline view, so the computed RN can be filtered on):
Code:
INSERT INTO matched_emp_table
SELECT employee_id, first_name, last_name
  FROM (SELECT employee_id, first_name, last_name
             , ROW_NUMBER() OVER (ORDER BY last_name, first_name) rn
          FROM employees
         WHERE {User conditions})
 WHERE rn <= {User limit};
[3eyes]



----------------------------------------------------------------------------
The person who says it can't be done should not interrupt the person doing it. -- Chinese proverb
 
That will just restrict it to the first rows (arbitrary ones, if unsorted) though; so assuming the data is names in alphabetical order, Zachary Zimminy is going to feel awfully ignored :)

That said, though, if the number of rows inserted equalled the total allowed, then that could be assumed to be a failure....

Seems so simple I'm about to bang my head against a wall. Thank you :)
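The early-out trick arrived at here can be sketched as a runnable example. One refinement worth noting: limiting the insert to quota + 1 rows (rather than quota) makes a result of exactly quota rows distinguishable from an overflow. This is Python/SQLite for illustration only; on Oracle the limit would be `ROWNUM <= :quota + 1` rather than `LIMIT`, and all names here are hypothetical.

```python
import sqlite3

# Illustrative setup: 50 matching rows in the source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER)")
conn.execute("CREATE TABLE results (sid INTEGER, id INTEGER)")
conn.executemany("INSERT INTO source VALUES (?)", [(i,) for i in range(50)])
conn.commit()

def store_with_early_out(conn, sid, quota):
    """Insert at most quota + 1 rows; if the sentinel (quota + 1)th row
    arrives, the query exceeded the quota, so roll back immediately."""
    cur = conn.execute(
        "INSERT INTO results (sid, id) "
        "SELECT ?, id FROM source LIMIT ?",
        (sid, quota + 1),
    )
    if cur.rowcount > quota:
        conn.rollback()      # overflow proven after quota + 1 rows, not millions
        return False
    conn.commit()
    return True

overflow = store_with_early_out(conn, 123, 10)    # 11 rows land, then roll back
leftover = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
ok = store_with_early_out(conn, 123, 100)         # 50 <= 100: kept
```

This gives the "fail as quickly as possible" behaviour the original post asked for: a 10,000,000-row query is abandoned after quota + 1 rows instead of being fully inserted and then rolled back.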
 
Then do it like this

Code:
INSERT INTO matched_emp_table
SELECT employee_id, first_name, last_name
  FROM (SELECT employee_id, first_name, last_name
          FROM employees
         WHERE {User conditions}
         ORDER BY employee_id)
 WHERE ROWNUM <= {User limit};

Bill
Lead Application Developer
New York State, USA
 
When I implemented this, I did actually use ROWNUM, and TOP x on SQL Server (probably shouldn't mention that name in this forum :) ), partly because they are what I am more familiar with.
I have dabbled with the analytic functions in the past, and they were a lot slower then; I haven't revisited them to check lately, but the other reason I went with ROWNUM was to avoid the sort that ROW_NUMBER() needs. Generally the searches from the first stage of this process don't have any ordering, as it's unnecessary at that point, and it's also faster without. When the results are viewed at a later, separate stage, formatting and ordering are applied then.

Thank you all again, though; it's much appreciated.
 