The Hidden Cost of Login (Redo)
I was investigating an issue where a misconfigured connection pool caused a large number of user logins to an Oracle database through the listener. This post explains why a high login rate can be a problem.
Most people would first think of the cost of process creation, memory structure allocation and initialization, and so on. That is all true, but there is another important thing going on - one with the potential to affect all the other sessions (those already logged in) - the REDO!
This post will focus on what happens with REDO at the time of a new user login.
The Test Case
The original story was that "the database is 'slow' because of a lot of logins!". And I thought "that sounds unlikely". So I made a quick test case - a program that takes three parameters:
connect string (e.g. user/pass@host:1521/service )
number of minutes to run the test
number of parallel threads in which to run the test (always 1 in this blog post)
Each thread does only the following:
connect
select * from dual;
disconnect
... as many times as possible. The code of this test case is available as a GitHub Gist.
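The Gist has the actual implementation; as a rough Python sketch of what one thread's loop does, assuming an injected `connect` callable (a stand-in for a real driver call such as `oracledb.connect(...)` - the function name and signature here are illustrative, not the Gist's code):

```python
import time

def run_login_test(connect, duration_s):
    """Connect, run SELECT * FROM DUAL, disconnect - in a loop
    until duration_s seconds have elapsed; returns the number of
    successful logins."""
    logins = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        conn = connect()              # new login (the expensive part)
        cur = conn.cursor()
        cur.execute("select * from dual")
        cur.fetchall()
        cur.close()
        conn.close()                  # logoff
        logins += 1
    return logins
```

Each iteration is a full login/logoff cycle, which is exactly what makes the workload interesting from a redo perspective.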
Note that this test case was run on a fairly "slow" VM, which is intentional - you can spot performance issues faster on slower machines :)
ASH
Here's the one-hour ASH AAS (average active sessions per minute) chart for the period when the only load on the database was the given test case:


Notice the "orange" and "blue" colors? Those are log file parallel write and log file sync in this case. So, what was being written to redo?
(By the way, the ASH images are screenshots from the free APPM tool.)
Oracle Log Miner
To find out what was written to redo logs I decided to "craft" an archivelog by:
alter system archive log current
run the test for 1 minute
alter system archive log current
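One step the post doesn't show: before V$LOGMNR_CONTENTS can be queried, the crafted archived log has to be registered with LogMiner and a mining session started. A minimal sketch of the PL/SQL calls involved, wrapped in a Python helper purely for illustration (the helper function itself is hypothetical; the DBMS_LOGMNR calls are the standard ones):

```python
def logminer_setup(archivelog_path):
    """Return the PL/SQL blocks that register one archived log with
    LogMiner and start a mining session, using the online catalog
    as the dictionary."""
    return [
        # Register the crafted archived log as a new logfile list
        "BEGIN DBMS_LOGMNR.ADD_LOGFILE("
        f"LOGFILENAME => '{archivelog_path}', "
        "OPTIONS => DBMS_LOGMNR.NEW); END;",
        # Start mining; resolve object names via the online catalog
        "BEGIN DBMS_LOGMNR.START_LOGMNR("
        "OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG); END;",
    ]
```

Each returned block would be executed over a suitably privileged connection; after that, the queries below can be run against V$LOGMNR_CONTENTS.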
My test case said:
Threads: 1
Total time: 60.03 seconds
Total successful connections: 770
Total errors: 0
Total connections per second: 12.83
Now let's see what LogMiner has to say regarding operations on SYS.USER$ segment:
with
  user_ops as (
    select *
    from v$logmnr_contents
    where seg_owner = 'SYS' and seg_name = 'USER$'
  )
select c.operation, count(*)
from user_ops u
join v$logmnr_contents c on c.xid = u.xid
group by c.operation;
OPERATION COUNT(*)
--------------- ----------
START 770
UNSUPPORTED 770
COMMIT 770
The query first finds all the changes on the SYS.USER$ segment and then joins in all other operations from the same transactions as those changes (XID is the transaction id). This means we can confirm that the START, UNSUPPORTED, and COMMIT operations are part of the same transaction triggered by each login, not unrelated activity.
Notice how there are exactly as many transactions on SYS.USER$ as there are logins (770)? So, according to this, each login performs an "update" on SYS.USER$ and a COMMIT. In our "connection storm", we managed to perform 12 commits per second just by logging in 12 times per second. And those numbers can get even higher when users log in concurrently.
Unsupported Operation
The official documentation says about such operations: "Change was caused by operations not currently supported by LogMiner". So we need to resort to a redo dump next:
ALTER SYSTEM DUMP LOGFILE '/path/to/arch1_112_1223914063.dbf' SCN MIN 6365975 SCN MAX 6365975;
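Dump files for even a one-minute window get large, and counting change vectors by hand is impractical. A hypothetical little helper to tally the `OP:<layer>.<code>` tokens in the trace file (the parsing is a sketch based on the dump's CHANGE-line format, not an official parser):

```python
from collections import Counter

def count_redo_opcodes(trace_lines):
    """Tally OP:<layer>.<code> tokens from a redo log dump
    (the trace produced by ALTER SYSTEM DUMP LOGFILE)."""
    ops = Counter()
    for line in trace_lines:
        for token in line.split():
            if token.startswith("OP:"):
                ops[token[3:]] += 1
    return ops

# e.g. a CHANGE line as it appears in the dump:
sample = ["CHANGE #3 CON_ID:1 TYP:0 CLS:1 OP:11.5 ENC:0"]
print(count_redo_opcodes(sample))   # Counter({'11.5': 1})
```

Running this over the whole trace is how one would verify counts like "all 770 redo records contain OP:11.5".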
26ai
All 770 redo records contain this change on SYS.USER$:
CHANGE #3 ... OP:11.5 ...
ncol: 30 nnew: 1 size: 0
col 18: [ 1] 80
Column 18, if I'm not mistaken, is the SPARE1 column of the SYS.USER$ table. Value 0x80 is Oracle's internal representation of NUMBER 0. In DBA_USERS, this column is used with bitand() to check for many different flags.
But of those 770 redo records, 122 also contain, besides the described change, an update of column 23 - the SPARE6 column of SYS.USER$, which is of datatype DATE and backs DBA_USERS.LAST_LOGIN:
ncol: 30 nnew: 2 size: 0
col 18: [ 1] 80
col 23: [ 7] 78 7e 02 10 0b 35 0f
This tells us that even though my test case was doing about 12 logins per second, the LAST_LOGIN column was only updated about twice per second.
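Out of curiosity, the 7-byte DATE value above can be decoded by hand. Oracle's internal DATE format stores century and year in excess-100 notation and hour/minute/second in excess-1 notation; a small Python sketch:

```python
from datetime import datetime

def decode_oracle_date(b: bytes) -> datetime:
    """Decode Oracle's internal 7-byte DATE representation:
    century+100, year+100, month, day, hour+1, minute+1, second+1."""
    year = (b[0] - 100) * 100 + (b[1] - 100)
    return datetime(year, b[2], b[3], b[4] - 1, b[5] - 1, b[6] - 1)

# col 23: [ 7] 78 7e 02 10 0b 35 0f
print(decode_oracle_date(bytes.fromhex("787e02100b350f")))
# -> 2026-02-16 10:52:14
```

(Col 18's single byte 0x80 needs no such decoding - it is simply the NUMBER encoding of zero, as noted above.)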
19c
On 19.20.0, the situation is pretty similar for each login:
CHANGE #3 ... OP:11.19 ...
Array Update of 1 rows:
ncol: 28 nnew: 1 size: 0
col 23: [ 7] 78 7e 02 10 10 26 1e
So, instead of changing column 18, it either applies an empty change array or updates the last login date. Regardless, it still creates and commits a transaction for each login, just like 26ai does.
Older versions
I didn't test older versions, but it might be interesting to know that, according to the 12c DBA_USERS documentation, the LAST_LOGIN column was introduced in version 12c.
_disable_last_successful_login_time
This is an undocumented parameter which greatly reduces redo generation on logins. However, as with all undocumented parameters, you should only use it in accordance with Oracle Support guidance.
Here's the same test case with _disable_last_successful_login_time set:
select count(*) as update_count
from v$logmnr_contents
where seg_owner='SYS' and seg_name='USER$';
select count(*) as commit_count
from v$logmnr_contents
where operation='COMMIT';
UPDATE_COUNT
------------
0
COMMIT_COUNT
------------
9
So, truly no commit per login anymore. There are 9 commits, yes, but those were unrelated to the logins. The parameter not only disables the last successful login time update, but also the update of SPARE1 - after setting it, no USER$ transaction per login was observed in the LogMiner output.
Conclusion
In the tested 19c/26ai environments with default settings, each regular user login resulted in a recursive transaction and commit. During a connection storm, these login-driven commits add to the LGWR workload alongside commits from actual application activity. So the hidden cost isn't just the login itself - it's the commit latency impact on every other session. This kind of issue is quite rare, but I find it valuable to know that, by default, there is a transaction after each regular user login.