Why is the PRIMARY_KEY not deleted permanently?
This question already has an answer here:
- Auto-increment is not resetting in MySQL
The primary keys are unique identifiers for the rows. They shouldn't be recycled.
Consider this scenario: you insert a user with ID 123, someone later deletes that record, and then a new record is inserted and the RDBMS recycles the ID.
When you come back to look up user 123, you get someone else's record, not the one you wanted.
That is essentially an integrity problem, a violation of entity integrity.
That's true. The AUTO_INCREMENT counter is always last value + 1. You can change the current value of AUTO_INCREMENT, but you cannot set it lower than the current maximum.
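A sketch of that forward-only behavior, using Python's stdlib sqlite3 (SQLite's AUTOINCREMENT keyword and its sqlite_sequence counter stand in for MySQL's AUTO_INCREMENT here; the table name is invented). Raising the counter takes effect; lowering it below the current maximum is ignored:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
for v in "abc":
    con.execute("INSERT INTO t (v) VALUES (?)", (v,))   # ids 1, 2, 3

# Raise the counter (MySQL analogue: ALTER TABLE t AUTO_INCREMENT = 100):
con.execute("UPDATE sqlite_sequence SET seq = 100 WHERE name = 't'")
con.execute("INSERT INTO t (v) VALUES ('d')")
a1 = con.execute("SELECT max(id) FROM t").fetchone()[0]
print(a1)  # 101

# Trying to lower it below the current maximum has no effect:
con.execute("UPDATE sqlite_sequence SET seq = 0 WHERE name = 't'")
con.execute("INSERT INTO t (v) VALUES ('e')")
a2 = con.execute("SELECT max(id) FROM t").fetchone()[0]
print(a2)  # 102
```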
For more information read this thread: Changing the current count of an Auto Increment value in MySQL?
Another solution to your problem is to assign the primary key yourself:
INSERT INTO tableName (id, field1, field2) VALUES ((SELECT COUNT(*) + 1 FROM (SELECT * FROM tableName) AS tmp), 'value1', 'value2');
(MySQL does not allow the VALUES subquery to read the target table directly, hence the derived-table wrapper. Also note that COUNT(*) + 1 can collide with an existing id once rows have been deleted; MAX(id) + 1 is safer.)
The way most if not all auto-increment schemes work is that the DB remembers the last number assigned for each such field, and the next record inserted always gets +1. So it only has to remember 1 number: the last number assigned.
Suppose you inserted 5 records. They get 1, 2, 3, 4, 5. Now you delete 2 and 4. How would the database know to re-use 2 and 4 for the next two inserts?
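That scenario can be reproduced directly. A minimal sketch with Python's stdlib sqlite3 standing in for MySQL (table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v INTEGER)")
con.executemany("INSERT INTO t (v) VALUES (?)", [(i,) for i in range(5)])  # ids 1..5
con.execute("DELETE FROM t WHERE id IN (2, 4)")
con.execute("INSERT INTO t (v) VALUES (99)")   # new row does NOT take 2
ids = [r[0] for r in con.execute("SELECT id FROM t ORDER BY id")]
print(ids)  # [1, 3, 5, 6]
```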
It could, I suppose, scan through all the records in the table looking for the first hole in the sequence every time you did an insert. But then every insert would require reading every record in the table. What happens if the table has millions of records? An insert could go from taking a fraction of a millisecond to taking minutes.
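The "scan for the first hole" idea would amount to running something like this hypothetical query before every insert (sketched with stdlib sqlite3; no real engine allocates ids this way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t (id) VALUES (?)", [(1,), (3,), (5,)])

# Find the first gap: the smallest id whose successor is missing.
# This touches every row, which is exactly the cost the answer describes.
hole = con.execute("""
    SELECT MIN(t1.id) + 1 FROM t t1
    WHERE NOT EXISTS (SELECT 1 FROM t t2 WHERE t2.id = t1.id + 1)
""").fetchone()[0]
print(hole)  # 2
```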
It could keep a table of deleted records. Presumably it would just pull the first number off the table every time it did an insert. But still, every insert is now: Check the table. Any records? If so, take that number, delete the record. If not, take the next available number. The table would have to be synchronized so that if multiple users are adding records, we don't give out the same number twice. If there are a lot of deletes, it could potentially become quite large. Even if it's just one extra read every time we do an insert, we're now doing two operations instead of one: performance will be cut in half.
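A toy version of that hypothetical free-list scheme (stdlib sqlite3; the table and helper names are invented, and a real implementation would also need the synchronization the answer mentions). Note the extra read and write on every delete and insert:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
con.execute("CREATE TABLE free_ids (id INTEGER PRIMARY KEY)")  # the side table

def delete_row(rowid):
    con.execute("DELETE FROM t WHERE id = ?", (rowid,))
    con.execute("INSERT INTO free_ids (id) VALUES (?)", (rowid,))  # park the id

def next_id():
    row = con.execute("SELECT MIN(id) FROM free_ids").fetchone()
    if row[0] is not None:                        # a freed id exists: recycle it
        con.execute("DELETE FROM free_ids WHERE id = ?", (row[0],))
        return row[0]
    top = con.execute("SELECT COALESCE(MAX(id), 0) FROM t").fetchone()[0]
    return top + 1                                # otherwise take a fresh number

for v in "abc":
    con.execute("INSERT INTO t (id, v) VALUES (?, ?)", (next_id(), v))  # 1, 2, 3
delete_row(2)
first = next_id()
print(first)   # 2 -- the freed id is recycled
con.execute("INSERT INTO t (id, v) VALUES (?, 'd')", (first,))
second = next_id()
print(second)  # 4 -- free list empty again, back to MAX + 1
```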
Okay, we could handle the special case where the highest number so far assigned is deleted, and subtract one from our highest-number-so-far-assigned value. Doable, but is it worth making a special rule for that one case? How often do you delete the last record inserted? If deletes hit essentially at random, the chance that the deleted record is the newest one is small.
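Interestingly, SQLite's default rowid behavior (a plain INTEGER PRIMARY KEY, without the AUTOINCREMENT keyword) does implement roughly this special case: it usually allocates max(rowid) + 1, so deleting the newest row lets its number be handed out again. A sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# No AUTOINCREMENT keyword: SQLite picks max(rowid) + 1 for new rows.
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",), ("c",)])  # 1, 2, 3
con.execute("DELETE FROM t WHERE id = 3")      # delete the highest id
con.execute("INSERT INTO t (v) VALUES ('d')")
reused = con.execute("SELECT max(id) FROM t").fetchone()[0]
print(reused)  # 3 -- the deleted top number is reused
```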
There are distinct advantages to always assigning a new number:
One: Simplicity. The behavior is straight-forward and easy to predict. There are no special cases.
Two: Speed. As noted, alternatives require extra work. Maybe not a lot, but if we have to process just one additional record for each insert, we are cutting performance in half.
Three: We can use the assigned number to tell us the order in which records were added. High number records are always newer than low number records. I often find this handy when doing ad hoc queries and tracking down problems.
Four: We avoid potential mis-connections. Suppose you add a record to table A and it gets assigned, say, number 12. Then you add a record to table B that includes a reference to table A, so we insert that number 12. Let's suppose that for any of a variety of reasons you don't declare it as a foreign key. Then you delete record 12 from table A. So now you have this dangling reference in table B. That's bad. But imagine a new record is added to A and it gets a recycled number 12. Now we have a record in B that points to the wrong record in A. A dangling pointer is bad, but a wrong pointer is even worse. Customers get billed for someone else's charge, or the wrong person is arrested for the crime, etc.
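The "wrong pointer" hazard above is easy to demonstrate. A sketch with stdlib sqlite3 (table and column names invented): table B keeps a plain, undeclared reference into table A, A's id 12 is deleted and then recycled, and B silently points at the wrong row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE b (a_id INTEGER, note TEXT)")  # no FOREIGN KEY declared
con.execute("INSERT INTO a VALUES (12, 'Alice')")
con.execute("INSERT INTO b VALUES (12, 'invoice for Alice')")
con.execute("DELETE FROM a WHERE id = 12")         # b now dangles
con.execute("INSERT INTO a VALUES (12, 'Mallory')")  # id 12 recycled
who = con.execute("SELECT a.name FROM b JOIN a ON a.id = b.a_id").fetchone()[0]
print(who)  # Mallory -- the invoice now points at the wrong person
```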
And what would be the gain of a more complex system? The only gain I see is that we would make it less likely that we will run out of possible numbers. But if the sequence number is a 4-byte integer, there are 2 billion possible values. How many tables get 2 billion inserts over their lifetime? Of course if the table has 5 billion records you have a problem regardless of whether you try to re-use numbers. I suppose if you have some very high-volume queue, where new records are constantly added and old ones dropped, this could be an issue. Or if you are constantly deleting records and re-inserting them instead of doing updates in place. But frankly, I've been in this business for 30 years and I have never, ever, had a problem because an auto-sequence in a database ran out of numbers. I don't doubt that it's happened to someone somewhere, but it's just not a common problem. I don't think it's anywhere near common enough to junk up a clean, simple system.
You don't want to reuse numbers. There's no advantage to it. They're meaningless. Don't rely on the value of any ID having any given value.