Feed aggregator

how to import sequence

Tom Kyte - Tue, 2020-08-11 18:46
The export is done at table level using 9.2.0.1: exp user/password tables=emp,foo file=test.dmp. During import, the sequences never get imported; this is the default behavior of Oracle. I would appreciate your advice on the following: 1. How can sequences be imported from a table-level export? 2. How can I capture the same sequence values as at the time of the export? Thanks
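A hedged workaround sketch, assuming access to user_sequences in the source schema: table-mode exp never includes sequences, so their DDL can be generated separately at export time, starting each re-created sequence at its recorded high-water mark (last_number already accounts for cached values): <code>
-- Run as the schema owner at export time to capture sequence state.
SELECT 'CREATE SEQUENCE ' || sequence_name ||
       ' START WITH '     || last_number   ||
       ' INCREMENT BY '   || increment_by  || ';'
FROM   user_sequences;
</code>
Running the generated statements after the table-level import restores the sequences at (or just past) their values from export time.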
Categories: DBA Blogs

Update in Oracle DB

Tom Kyte - Tue, 2020-08-11 18:46
Hi dear AskTOM team, and a great day to everyone. I have some confusion about the UPDATE statement in Oracle DB 12cR2. Let's assume we have 3 users: U1, U2, and U3. U1 has a table called TEST_1, and U2 and U3 both have the UPDATE privilege on that table. My question is: <b>If U2 and U3 try to update the same rows in that table at the same time, what will happen? How does Oracle control such processes?</b> Thanks beforehand!
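In short, Oracle serializes such updates with row-level locks: the second session blocks on the locked rows until the first session commits or rolls back, after which its update proceeds. A minimal two-session sketch (column names are hypothetical): <code>
-- Session U2:
UPDATE u1.test_1 SET val = 'A' WHERE id = 1;  -- takes a row lock on id = 1

-- Session U3, issued before U2 commits:
UPDATE u1.test_1 SET val = 'B' WHERE id = 1;  -- blocks, waiting on U2's transaction

-- Session U2:
COMMIT;  -- U3's update now proceeds; the last committed change wins
</code>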
Categories: DBA Blogs

Ranking based on time break

Tom Kyte - Tue, 2020-08-11 18:46
I want to rank truck exits based on breaks of more than 1 hour, e.g. for the data below:

TRUCK  EXIT
T1     10:00 PM
T2     10:05 PM
T3     12:00 PM
T4     12:05 PM
T5     12:10 PM
T6     12:20 PM

the result should be one row per group of exits separated by gaps of more than 1 hour:

10:00  10:05  1
12:00  12:20  2
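This is a classic gaps-and-islands problem. A hedged sketch, assuming a table trucks(truck, exit_time) with exit_time stored as a DATE: flag each row whose gap from the previous exit exceeds one hour, and turn the running sum of flags into the group rank: <code>
SELECT MIN(exit_time) AS grp_start,
       MAX(exit_time) AS grp_end,
       grp            AS rank_no
FROM (
  SELECT exit_time,
         -- running count of "gap > 1 hour" flags = group number
         1 + SUM(CASE
                   WHEN exit_time - LAG(exit_time) OVER (ORDER BY exit_time) > 1/24
                   THEN 1 ELSE 0
                 END) OVER (ORDER BY exit_time) AS grp
  FROM   trucks
)
GROUP BY grp
ORDER BY grp;
</code>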
Categories: DBA Blogs

Converting XML to JSON using Apex

Tom Kyte - Tue, 2020-08-11 18:46
Hello everyone. There is a table (xxln_vs_publish_stg) with an XMLTYPE column (xml_data) that stores XML data. I have a requirement to convert the XML data to JSON, and I am using apex_json.write for the conversion. While executing the logic below, I am getting the error: ORA-20987: APEX - JSON.WRITER.NOT_OPEN - Contact your application administrator. Can you please help me see what I am doing wrong? <code>
DECLARE
  l_xml          sys.xmltype;
  l_amount       BINARY_INTEGER := 32000;
  l_buffer       RAW(32000);
  l_pos          INTEGER := 1;
  l_stage        NUMBER;
  content        CLOB;
  content_blob   BLOB;
  content_length NUMBER;
BEGIN
  SELECT xml_data
    INTO l_xml
    FROM xxln_vs_publish_stg
   WHERE xml_data IS NOT NULL
     AND ROWNUM < 2;

  content := xmltype.getclobval(l_xml);
  xxln.convert_clob_to_blob(content, content_blob);
  content_length := dbms_lob.getlength(content_blob);
  dbms_output.put_line(content_length);

  apex_json.initialize_clob_output;

  IF dbms_lob.getlength(content_blob) < 32000 THEN
    apex_json.write(content);
  ELSE
    WHILE l_pos < content_length  -- DBMS_LOB.GETLENGTH(v_output_file_blob)
    LOOP
      dbms_lob.read(content_blob, l_amount, l_pos, l_buffer);
      apex_json.write(content);
      l_pos := l_pos + l_amount;
    END LOOP;
  END IF;

  dbms_output.put_line(apex_json.get_clob_output);
  apex_json.free_output;
END;
</code>
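For reference, a minimal apex_json writer pattern that avoids JSON.WRITER.NOT_OPEN: values are written inside an explicitly opened object (or array), and the CLOB output is fetched before freeing. This is a generic sketch of the API shape, not a drop-in fix for the block above: <code>
DECLARE
  l_json CLOB;
BEGIN
  apex_json.initialize_clob_output;
  apex_json.open_object;                  -- the writer needs an open object/array
  apex_json.write('status', 'converted'); -- named value inside the open object
  apex_json.close_object;
  l_json := apex_json.get_clob_output;
  apex_json.free_output;
  dbms_output.put_line(l_json);
END;
</code>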
Categories: DBA Blogs

Options to quickly access large portions of rows

Tom Kyte - Tue, 2020-08-11 18:46
Hello, Tom. We have a fact table that is partitioned by day and stores the last 90 days of data. Sometimes users of the application change the status of a record from 'ACTIVE' to 'CANCELED'. There are a lot of heavy analytical queries against that table that include full scans but only consider the 'ACTIVE' records. The share of 'CANCELED' records can vary greatly over time, from 5% to 60%. Right now the table has 37 million active records and 67 million canceled ones, so my full scan could be 3 times faster. My question is: what is the best option to quickly access all the active records? A B-tree index won't help, because there are too many rows to retrieve. A bitmap index seems to be a bad choice, since there are a lot of DML operations. I wanted to try subpartitioning by list and moving rows to a 'CANCELED' subpartition, but I immediately have concerns: There are 7 indexes on the table now. Moving a lot of rows between subpartitions would take a lot of time and could potentially fill up the undo if someone decides to change the status of tens of millions of rows at a time (users can and will do that). Since the table is partitioned by day, any blank space left after row movement in partitions older than today won't be reused or reclaimed, so a full scan will take just as much time. That makes the whole idea almost useless. I am afraid that shrinking the entire table could fill up the undo segment. I don't have an environment close in specs to our PROD environment, so I can't even really test my concerns about undo. Unfortunately, we can't upgrade to 12.2 for a few more months, so MOVE ONLINE is not available. Is there another option that I am missing, or should I just run shrink space partition by partition on a daily basis?
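For reference, a hedged sketch of the partition-by-partition shrink mentioned at the end (table and partition names are illustrative). Shrink requires row movement and ASSM tablespaces; the COMPACT step relocates rows without moving the high-water mark, so the final SHRINK SPACE pass is short: <code>
ALTER TABLE fact_t ENABLE ROW MOVEMENT;

-- Daily, for partitions older than today:
ALTER TABLE fact_t MODIFY PARTITION p_20200810 SHRINK SPACE COMPACT;
ALTER TABLE fact_t MODIFY PARTITION p_20200810 SHRINK SPACE;
</code>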
Categories: DBA Blogs

Dynamically passing sequence name to get currval

Tom Kyte - Tue, 2020-08-11 18:46
I am trying to get the currval of all the user sequences in the schema. When I run the SQL below, it gives me "invalid SQL statement". I am not sure if this is the right way to achieve it. Please advise. Assumption: the current values of the sequences are already set in the session. <code>
set serveroutput on;
declare
  sq   number;
  sqnm varchar2(50);
  stmt varchar2(1000);
  cursor sqnc is (select sequence_name from user_sequences);
begin
  for row in sqnc
  loop
    sqnm := row.sequence_name;
    stmt := 'SELECT' || sqnm ||'.currval into' || sq || 'from dual';
    execute immediate stmt;
    dbms_output_put_line(sqnm || ' ' ||sq);
  end loop;
end;
</code>
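A hedged corrected sketch: the INTO clause belongs to EXECUTE IMMEDIATE rather than inside the dynamic SQL string, the string needs spaces around the concatenated pieces, and dbms_output_put_line should be dbms_output.put_line: <code>
set serveroutput on
declare
  sq number;
begin
  for r in (select sequence_name from user_sequences)
  loop
    execute immediate
      'select ' || r.sequence_name || '.currval from dual'
      into sq;
    dbms_output.put_line(r.sequence_name || ' ' || sq);
  end loop;
end;
/
</code>
Note that currval still raises ORA-08002 for any sequence whose nextval hasn't been referenced in the session, which matches the stated assumption.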
Categories: DBA Blogs

Does Oracle have any feature similar to the "Always Encrypted" feature offered by SQL Server?

Tom Kyte - Tue, 2020-08-11 18:46
Hello. It would be great if you could help me here. Can you please share whether Oracle has any feature similar to the "Always Encrypted" feature offered by SQL Server? The link pasted at the end has information on "Always Encrypted". I understand that Oracle offers Data Redaction to mask data; however, my understanding is that users with high authorization can bypass it. Oracle also offers Database Vault to control data access; however, there will still be Oracle users who can see the data in the clear. It would be really helpful if you could share some pointers. Thanks, AB ------------------------------------------------------------------------------------------------------------------------------- Link: https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-database-engine?view=sql-server-ver15 Text from this link: Always Encrypted allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to the Database Engine (SQL Database or SQL Server). As a result, Always Encrypted provides a separation between those who own the data and can view it, and those who manage the data but should have no access. By ensuring on-premises database administrators, cloud database operators, or other high-privileged unauthorized users, can't access the encrypted data, Always Encrypted enables customers to confidently store sensitive data outside of their direct control. This allows organizations to store their data in Azure, and enable delegation of on-premises database administration to third parties, or to reduce security clearance requirements for their own DBA staff.
Categories: DBA Blogs

This is just a quick blog entry to

Kevin Closson - Tue, 2020-08-11 12:06

This is just a quick blog entry to direct readers to an Amazon Web Services blog post regarding Oracle Licensing options when deploying Oracle Database in AWS.

Licensing Options for Oracle Database Deployments in Amazon Web Services

 

SCN_TO_TIMESTAMP

Hemant K Chitale - Tue, 2020-08-11 09:23
A quick demo of SCN_TO_TIMESTAMP in 19c

 
oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 11 21:59:56 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Mon Aug 10 2020 16:08:38 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select scn_to_timestamp(5389994) from dual;

SCN_TO_TIMESTAMP(5389994)
---------------------------------------------------------------------------
11-AUG-20 09.53.44.000000000 PM

SQL>
SQL> select scn_to_timestamp(5389994-100000) from dual;

SCN_TO_TIMESTAMP(5389994-100000)
---------------------------------------------------------------------------
12-JUL-20 11.19.13.000000000 PM

SQL>
SQL> select scn_to_timestamp(32720) from dual;
select scn_to_timestamp(32720) from dual
*
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1


SQL>


If you query for an older SCN, you get an ORA-08181 error. But what is an "older SCN"?

Technically, Oracle frequently inserts new rows into SYS.SMON_SCN_TIME and deletes older rows.  This is the table that is queried by the SCN_TO_TIMESTAMP function.  So, if you query for an SCN no longer present in the table, you get an ORA-08181 error.

Does Oracle insert every SCN into this table? Of course not! Otherwise there would have been more than 5 million rows in the table in my database. It periodically inserts rows. When you run the SCN_TO_TIMESTAMP function, you get an approximate timestamp -- an estimate that Oracle derives from reading "nearby" rows.

Do not ever assume that SCN_TO_TIMESTAMP returns an exact timestamp for that SCN.

For a range of potential SCNs, you can query V$ARCHIVED_LOG for FIRST_TIME (which is still in DATE format, not TIMESTAMP) and FIRST_CHANGE# (which is the first SCN recorded for that ArchiveLog).
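For instance, a quick sketch of such a query:

select sequence#, first_change#, first_time
from   v$archived_log
order  by first_change#;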


Categories: DBA Blogs

Quick Intro to BOTO3

Pakistan's First Oracle Blog - Mon, 2020-08-10 03:37

I just published my very first tutorial video on YouTube: a quick introduction to AWS BOTO3 with a step-by-step walkthrough of a simple program. Please feel free to subscribe to my channel. Thanks. You can find the video here.

Categories: DBA Blogs

Real Time SQL Monitor using SQL Developer 20.2

Hemant K Chitale - Mon, 2020-08-10 03:27
Here are a few screenshots of using the Real Time SQL Monitor in SQL Developer 20.2 against a 19c database.  I use the MONITOR hint explicitly in the SQL statements to force them to be visible in the SQL Monitor.
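For example, a minimal statement carrying the hint (the table name is illustrative). Without the hint, monitoring normally kicks in only for parallel statements or statements consuming at least 5 seconds of combined CPU and I/O time:

select /*+ MONITOR */ count(*)
from   some_big_table;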

Note: The "B" after the "18" and "20" for I/O requests in the first two screenshots is *not* "Billion"




This is an INSERT statement

This shows the Execution Plan of the INSERT statement 

Here is a more complicated query with Parallel Execution (all 3 panes: Plan Statistics, Plan and Parallel Execution)


 

Categories: DBA Blogs

Ansible Configuration Management Tool

Online Apps DBA - Mon, 2020-08-10 01:46

Ansible is the most widely used tool for Configuration Management in the industry, since it is simple to use yet powerful enough to automate complex multi-tier IT application environments. Check out the blog at https://k21academy.com/devops21 to learn more about Ansible and other configuration management tools. This blog post covers: Configuration Management Configuration Management Tools […]

The post Ansible Configuration Management Tool appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Jenkins Overview and Installation Pre-requisites

Online Apps DBA - Mon, 2020-08-10 01:37

Jenkins, originally developed for continuous integration, is the most widely adopted solution for software process automation, continuous integration, and continuous delivery. Check out the blog at https://k21academy.com/devops20 to learn more about Jenkins and its concepts. This blog post covers: Jenkins Overview Jenkins Features Installation Pre-requisites Jenkins Concepts and much more. Begin your journey towards becoming […]

The post Jenkins Overview and Installation Pre-requisites appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

EBS R12.2 Upgrade Frequently Asked Questions

Online Apps DBA - Sun, 2020-08-09 03:00

Can you upgrade from DB 11g to 19c directly? Is there any direct path to upgrade from 11i to R12.2.9? If you are performing an EBS upgrade to R12.2, you might be facing questions like these. Well, we have compiled a series of FAQs that our trainees have raised during their upgrade practice. […]

The post EBS R12.2 Upgrade Frequently Asked Questions appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Tips To Prepare AZ-301 Exam Microsoft Certified Azure Architect Design

Online Apps DBA - Sun, 2020-08-09 02:47

Want to clear the AZ-301 Microsoft Certified Azure Architect Design exam? Worried about how and where to start preparing? Have a look at the blog at https://k21academy.com/az30112; it will help you prepare for and pass the AZ-301 Architect Design exam efficiently, from beginning to end. This […]

The post Tips To Prepare AZ-301 Exam Microsoft Certified Azure Architect Design appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DDL Script of partition in oracle

Tom Kyte - Fri, 2020-08-07 23:06
Hi Tom, How can I get the DDL scripts for a table partition and an index partition in Oracle? Thanks, Leon.
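A hedged sketch using DBMS_METADATA (owner and object names are illustrative). Partitions are not standalone objects, so GET_DDL returns the parent table's or index's complete CREATE statement, including all partition clauses: <code>
set long 1000000 pagesize 0
select dbms_metadata.get_ddl('TABLE', 'SALES',     'SCOTT') from dual;
select dbms_metadata.get_ddl('INDEX', 'SALES_IDX', 'SCOTT') from dual;
</code>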
Categories: DBA Blogs

Online table redefinition

Tom Kyte - Fri, 2020-08-07 23:06
The DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS procedure copies the dependent objects of the base table onto the interim table. What is the use of it? And what will happen if I do not run this step?
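If you skip this step, the interim table keeps none of the base table's indexes, triggers, constraints, or grants, so they are absent from the table after FINISH_REDEF_TABLE unless you re-create them yourself. A hedged sketch of a typical call (schema and table names are illustrative): <code>
declare
  l_errors pls_integer;
begin
  dbms_redefinition.copy_table_dependents(
    uname            => 'SCOTT',
    orig_table       => 'EMP',
    int_table        => 'EMP_INTERIM',
    copy_indexes     => dbms_redefinition.cons_orig_params,
    copy_triggers    => true,
    copy_constraints => true,
    copy_privileges  => true,
    ignore_errors    => false,
    num_errors       => l_errors);
  dbms_output.put_line('errors: ' || l_errors);
end;
/
</code>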
Categories: DBA Blogs

Mining Statspack and AWR Repositories for Historical Trends

Pythian Group - Fri, 2020-08-07 16:43

There are various ways that we can approach the analysis of a past database performance problem. The initial steps usually differ based on their scope. Is it limited to a certain SQL/process/task, or is it perceived as a database-wide slowdown? Additionally, is it occurring at the moment, or is it an event that occurred in the past?

In case the starting scope of analysis is database-wide, mining the Automatic Workload Repository (AWR) is one of the methods we can use to obtain historical performance trends. However, not all customers have access to it, either because it requires the Diagnostics Pack license on Enterprise Edition, or because they are running Oracle Standard Edition, where it’s not present.

In such cases, we can still use Statspack as a free alternative to AWR, even though it's not as sophisticated. One of Statspack's shortcomings is that it doesn't store Active Session History data, which we could otherwise use to drill down into the activity of particular sessions over time. With Statspack, we're missing this session-level granularity.

In this post, I’m going to present a script I use to get an overview of the workload dynamics of a database querying the Statspack repository. There’s also an AWR counterpart, as I’ll mention later in the post.

statspack_load_trends.sql

1. Script's Properties

Let’s summarize the script’s main properties:

  1. It queries the Statspack repository directly and doesn’t create any (temporary) objects, nor relies on any pre-created Statspack reports.
  2. We can use it to analyze a Statspack repository imported from another DB, containing data with a different DBID from that of the DB hosting the repository.
  3. We can analyze a period spanning instance restart(s). The script considers only adjacent snapshot pairs having the same startup_time value. Here, "adjacent" denotes two snapshots which belong to the same [DBID, instance number] pair, and whose SNAP_IDs are closest to each other when ordered. That's just to emphasize that the difference between two consecutive SNAP_IDs is not always one (think RAC with cached sequence values, an instance restart, or pressure on the shared pool).
2. Purpose

The script provides a quick high-level overview of the DB workload dynamics. It reports a combination of historical OS utilization statistics (stats$osstat), system time model statistics (stats$sys_time_model), and aggregated instance-level statistics (stats$sysstat) for a given period. Currently, it doesn't query stats$system_event for wait event details. Several time-based metrics are presented in the form of Average Active Sessions (AAS), calculated by simply dividing the observed time-based metric by the elapsed time in the observed period.
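As an illustration, here's a minimal sketch of that AAS calculation for the "DB time" statistic, assuming the standard Statspack repository tables (stats$snapshot, stats$sys_time_model, stats$time_model_statname) and ignoring the instance-restart handling the real script performs:

select s.snap_id,
       round(  (tm.value - lag(tm.value) over (order by s.snap_id)) / 1e6            -- DB time delta: microseconds -> seconds
             / ((s.snap_time - lag(s.snap_time) over (order by s.snap_id)) * 86400)  -- snapshot interval in seconds
            , 1) as aas_fg
from   stats$snapshot s,
       stats$sys_time_model tm,
       stats$time_model_statname sn
where  tm.snap_id         = s.snap_id
and    tm.dbid            = s.dbid
and    tm.instance_number = s.instance_number
and    sn.stat_id         = tm.stat_id
and    sn.stat_name       = 'DB time'
order  by s.snap_id;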

You can download the script here, and its AWR counterpart is available here. Before running the AWR version, make sure the database has the Diagnostics Pack license. The following explanation applies to both scripts.

Note: I got the idea for this script from various AWR mining scripts in Chapter 5 (Sizing Exadata) from the “Oracle Exadata Recipes: A Problem-Solution Approach” book. Additionally, the idea to calculate and display CPU core-normalized values for some of the CPU usage statistics originates from John Beresniewicz’s AWR1page project.

3. Calculating CPU Capacity

The script considers the number of CPU cores, and not threads (in case hyperthreading is enabled) to calculate the number of CPU seconds between two Statspack snapshots. Various publications explain the reasons for this approach, but to summarize: Depending on the workload type, hyperthreading can provide up to approx. 30% higher CPU throughput compared to non-hyperthread mode.

When the number of processes running on CPU approaches the number of CPU cores, the system might become CPU saturated/over-subscribed. At that point, its response time becomes unpredictable, and additional workload decreases the amount of useful work performed by the system.

4. Usage

The script produces a very wide output, so I recommend spooling it out to a file for easier viewing. Because Statspack tables have public synonyms, any user that has permission to select from the repository tables can run it.

Note: I’ve seen the script fail with errors like “ORA-00936: missing expression,” or “ORA-01722: invalid number” when used on databases running with cursor_sharing=FORCE. To avoid the error, I included the /*+ cursor_sharing_exact*/ hint in the script’s SELECT statement. Setting cursor_sharing=EXACT at the session-level is also a valid alternative.

SQL> spool load_trends.txt
SQL> @statspack_load_trends.sql

First, we provide the DBID and instance number we want to analyze. If we don’t provide an instance number, all of the instances for the provided DBID are considered:

Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id   |Inst Num|DB Name     |Instance    |Host
-----------|--------|------------|------------|-------------
 1558102526|       1|ORCL        |orcl1       |ol7-122-rac1
 1558102526|       2|ORCL        |orcl2       |ol7-122-rac2


Enter DBID to analyze - default "1558102526" :   /* enter DBID to analyze */
Enter instance number or "all" to analyze all instancs for DBID = 1558102526 - default "all" : /* report data for a specific RAC instance or all of them */

Next, we provide a time range to analyze.

5. Sample Output

Let’s check what the output looks like. Due to its width, and to fit the blog format, it’s segmented in several sections. Also, due to some (temporary) blog formatting limitations, I recommend viewing wide code sections by clicking “Open code in new window.”

  • “CPU #Cores;#Threads”: The number of CPU cores and threads (in case of enabled hyperthreading) reported by the OS.
  • “Tot CPU Time Avail [Cores] (s)”: The number of CPU seconds available between the two analyzed snapshots based on the number of CPU cores.
Instance|Snap ID        |Begin Snap     |End Snap       |Elapsed|            CPU|     Tot CPU Time|
  Number|start-end      |Time           |Time           |   Mins|#Cores;#Threads|Avail [Cores] (s)|
--------|---------------|---------------|---------------|-------|---------------|-----------------|
       1|19195-19196    |16-APR-20 17:00|16-APR-20 18:00|  59.98|         24; 48|        86,376.00|
        |19196-19197    |16-APR-20 18:00|16-APR-20 19:00|  60.00|         24; 48|        86,400.00|
        |19197-19198    |16-APR-20 19:00|16-APR-20 20:00|  60.00|         24; 48|        86,400.00|
        |19198-19199    |16-APR-20 20:00|16-APR-20 21:00|  60.02|         24; 48|        86,424.00|
        |19199-19200    |16-APR-20 21:00|16-APR-20 22:00|  59.98|         24; 48|        86,376.00|

Note: One hour between snapshots is probably excessive, but that’s beyond the scope of this post.

Time Model Statistics: stats$sys_time_model

The next section reports the time spent in the database in the form of Average Active Sessions (AAS). For completeness, and to better understand what the figures represent, I’m including how various statistics from stats$sys_time_model are related:

                DB time = CPU + Wait time spent by foreground sessions
background elapsed time = CPU + Wait time spent by background sessions
                 DB CPU = CPU time spent by foreground sessions
    background cpu time = CPU time spent by background sessions

Considering the above, we can calculate the AAS figures as follows:

       Total AAS = ("DB time" + "background elapsed time")/elapsed_time
Total AAS on CPU = ("DB CPU" + "background cpu time")/elapsed_time

FG AAS = "DB time" / elapsed_time
BG AAS = "background elapsed time" / elapsed_time

FG AAS on CPU = "DB CPU" / elapsed_time
BG AAS on CPU = "background cpu time" / elapsed_time

Total AAS in wait = "Total AAS" - "Total AAS on CPU"

FG AAS in wait = ("DB time" - "DB CPU") / elapsed_time
BG AAS in wait = ("background elapsed time" - "background cpu time") / elapsed_time

Columns reporting CPU-related figures display two values: The “usual” AAS value, and the “core-normalized Average Active Sessions” value, using the acronym “NPC”. If the core-normalized value approaches (or even crosses) the value of “1,” the system could potentially be CPU saturated:

  • “AAS [FG+BG]”: The number of AAS considering foreground and background sessions.
  • “AAS [FG]”: The number of AAS considering foreground sessions only.
  • “AAS [BG]”: The number of AAS considering background sessions only.

Continuing with the CPU component of the above three mentioned metrics:

  • “AAS on CPU [FG+BG]”: The number of AAS on CPU considering foreground and background sessions, followed by core-normalized AAS on CPU (NPC).
  • “AAS on CPU [FG]”: Same as above, though only considering foreground sessions.
  • “AAS on CPU [BG]”: Same as above, though only considering background sessions.

The “AAS wait” figures follow the same principle as the above-mentioned columns, and “AAS RMAN CPU” represents AAS performing “RMAN CPU time (backup/restore)” activities.

Returning to our sample output:

      AAS|      AAS|      AAS|AAS on CPU| AAS on CPU|AAS on  |AAS on CPU|AAS on  |AAS on CPU| AAS wait| AAS wait| AAS wait|             |
  [FG+BG]|     [FG]|     [BG]|[FG+BG]   |[FG+BG] NPC|CPU [FG]|  [FG] NPC|CPU [BG]|[BG] NPC  |  [FG+BG]|     [FG]|     [BG]|AAS RMAN CPU |
---------|---------|---------|----------|-----------|--------|----------|--------|----------|---------|---------|---------|-------------|
     96.4|     94.3|      2.0|       8.5|        0.4|     8.3|       0.3|     0.2|       0.0|     87.9|     86.0|      1.9|          0.0|
     32.9|     31.6|      1.3|      10.3|        0.4|    10.1|       0.4|     0.2|       0.0|     22.5|     21.4|      1.1|          0.0|
     59.4|     58.9|      0.6|      23.3|        1.0|    23.2|       1.0|     0.1|       0.0|     36.2|     35.7|      0.5|          0.0|
     13.3|     12.9|      0.5|       5.8|        0.2|     5.7|       0.2|     0.1|       0.0|      7.5|      7.1|      0.4|          0.0|
     23.0|     22.2|      0.8|       6.0|        0.3|     5.9|       0.2|     0.1|       0.0|     17.0|     16.3|      0.7|          0.0|

The first line reports 94.3 foreground AAS, out of which only 8.3 were on CPU, and 86 were in various waits. Looking at the third line, the situation changes, as out of the 58.9 AAS, 23.3 were on CPU, and 35.7 in waits. Checking the per-CPU-core-normalized value, we see it reports 1, which means the machine might be approaching or has already crossed CPU saturation. We also see that there was no RMAN activity occurring during that time. Background processes also spent most of their time in waits, rather than on CPU.

For convenience, we have displayed the number of seconds consumed by foreground sessions, breaking them further down into CPU and wait components, and reporting the relative percentages. This is basically the same information we saw in the previous section, just expressed as time instead of AAS:

  DB Time (s)            DB CPU (s)                      |
[FG CPU+WAIT]  =           [FG CPU] +          [FG WAIT] |
---------------------------------------------------------|
    339,555.62 =      29,924.51  9% +     309,631.11 91% |
    113,683.70 =      36,469.52 32% +      77,214.18 68% |
    211,880.46 =      83,404.47 39% +     128,475.99 61% |
     46,325.13 =      20,692.78 45% +      25,632.35 55% |
     79,966.07 =      21,274.38 27% +      58,691.69 73% |
OS Statistics – stats$osstat

Next, OS statistics from stats$osstat are displayed. "Tot OS Load@end_snap" is the OS load recorded at the time of the end snapshot's creation. The other four columns represent Average Active Processes (AAP), which is simply the measured time of each named statistic divided by the elapsed time in the observed period. As above, the normalized per-core value is also reported for the BUSY statistic (the sum of USER+SYS), and the meaning is the same: if the value approaches 1, the system might be CPU saturated.

In our sample report, the third line reports 23.9 processes on CPU, or 1 per CPU core (that's considering all OS processes, not only Oracle's). That correlates with the "AAS on CPU [FG+BG]" figure in the third line of the earlier snippet. Because in this particular case the machine is dedicated to one Oracle instance, it used all of the available CPU:

       Tot OS|           |AAP OS  |   AAP OS|   AAP OS|   AAP OS|
Load@end_snap|AAP OS BUSY|BUSY NPC|     USER|      SYS|   IOWAIT|
-------------|-----------|--------|---------|---------|---------|
         37.0|        9.2|     0.4|      8.2|      0.9|     12.1|
         76.8|       10.8|     0.4|      9.8|      0.9|      5.3|
          9.4|       23.9|     1.0|     22.9|      0.9|      2.3|
          4.3|        6.2|     0.3|      5.7|      0.5|      1.4|
          4.8|        6.4|     0.3|      5.6|      0.7|      4.7|
System Statistics: stats$sysstat

Finally, stats$sysstat reports various system statistics. I won’t describe their meaning because that’s beyond the scope of this post. It’s worth noting that apart from “Logons Current,” almost all other statistics are expressed in units of work per second. The only exceptions are statistics related to parallel operations. Because their usage usually pertains to “heavy-duty” DDL/DML tasks, we don’t expect to see many such operations per second. Thus, the whole snapshot interval seems a more appropriate time-frame to report the number of occurrences of such events.

    Logons|        |      User|            |SQL*Net roundtrips|SQL*Net roundtrips|   Bytes received via|     Bytes sent via|   Bytes received via|
   Current|Logons/s|   calls/s|  Executes/s|  to/from client/s|  to/from dblink/s|SQL*Net from client/s|SQL*Net to client/s|SQL*Net from dblink/s|
----------|--------|----------|------------|------------------|------------------|---------------------|-------------------|---------------------|
     556.0|     9.5|     872.9|       692.2|             723.0|           4,575.6|          1,846,238.6|        6,305,967.2|          1,177,004.3|
     527.0|    16.2|   1,008.0|       639.4|             828.5|           5,773.2|          2,462,067.1|        7,760,807.5|          1,453,024.0|
     607.0|    18.5|     738.8|       588.3|             556.1|           5,618.1|          1,986,647.1|        3,644,026.9|          1,448,627.4|
     427.0|     9.2|     873.3|       910.0|             716.4|           5,972.3|          2,691,244.6|        4,067,039.1|          1,532,389.7|
     418.0|     7.4|     719.9|       627.8|             588.5|           7,471.6|          2,564,916.7|        3,773,344.1|          1,852,806.9|

   
     Bytes sent via|Cluster wait|Session logical|      DB block|    Consistent| Consistent reads |  Physical|  Physical|Physical read|Physical write|
SQL*Net to dblink/s|      time/s|        reads/s|     changes/s|   changes/sec|undo rec applied/s|   reads/s|  writes/s|IO requests/s| IO requests/s|
-------------------|------------|---------------|--------------|--------------|------------------|----------|----------|-------------|--------------|
          576,510.8|         0.0|      339,009.8|      31,353.8|       3,062.0|           4,002.4|  47,349.4|   1,879.1|      2,621.5|         448.5|
          726,935.7|         0.0|      487,469.9|      48,874.4|         487.3|             563.9|  31,277.7|   2,127.6|      5,021.6|         526.8|
          707,648.6|         0.0|      343,665.8|      38,862.0|         379.4|             362.9|  37,057.7|     777.7|      1,949.2|         265.2|
          751,698.7|         0.0|      288,724.2|      26,163.7|         618.0|             435.8|  14,001.6|     823.3|        828.5|         274.1|
          940,096.3|         0.0|      335,631.4|      24,500.0|         198.5|             211.9|  53,625.8|     638.2|      2,451.6|         227.5|
     


    Parses|      Hard|     Parse|     Parse|      User|       User|     Redo size|          Redo|      Rollback changes|
   total/s|  parses/s|describe/s|failures/s| commits/s|rollbacks/s|       bytes/s|      writes/s|undo records applied/s|
----------|----------|----------|----------|----------|-----------|--------------|--------------|----------------------|
     142.0|     103.9|       0.0|       0.0|     158.2|        0.8|  46,951,095.1|         137.8|                   0.1|
     143.4|     100.1|       0.0|       0.3|     155.0|        0.8|  49,017,168.1|         170.3|                   1.2|
     135.9|      89.2|       0.0|       0.1|     143.3|        0.8|  11,513,858.2|         149.7|                   0.1|
     141.1|     109.8|       0.0|       0.0|     284.4|        0.8|   9,513,089.1|         226.2|                   0.1|
     123.0|      93.4|       0.0|       0.0|     175.5|        0.9|   7,462,206.6|         169.3|                   0.3|
     

     
           Queries|    DML statements|     PX oper not|PX oper downgraded|PX oper downgraded|PX oper downgraded|PX oper downgraded|PX oper downgraded|
parallelized/Total|parallelized/Total|downgraded/Total|   to serial/Total|75 to 99 pct/Total|50 to 75 pct/Total|25 to 50 pct/Total| 1 to 25 pct/Total|
------------------|------------------|----------------|------------------|------------------|------------------|------------------|------------------|
           1,912.0|               3.0|         1,989.0|              38.0|               0.0|               1.0|               4.0|               1.0|
           2,450.0|               6.0|         2,551.0|              10.0|               0.0|               0.0|               0.0|               0.0|
           2,477.0|              13.0|         2,584.0|               9.0|               0.0|               0.0|               0.0|               0.0|
           1,553.0|               3.0|         1,646.0|               9.0|               0.0|               0.0|               0.0|               0.0|
           1,390.0|               2.0|         1,487.0|               8.0|               0.0|               0.0|               0.0|               0.0|
6. Visualizing Results

When comparing a period where there was an issue to one where the database was running fine, or simply when checking for trends, it's more convenient to plot the results. That's an easy way to get an overview of how certain metrics changed over time, or how they compare across the nodes of a RAC database.

To ease that task, the two scripts contain two ways of formatting columns: one for plotting/charting purposes, and one displaying column headings on two lines for a more user-friendly format (the format used in the descriptions above). Depending on your needs, uncomment the appropriate block of column formatting commands in the script. You can plot the results using a third-party utility, such as Tableau, which was used to produce the graphs in the following sections.

CPU Usage Distribution and Limits Across Nodes

The graph below represents the average number of OS processes on CPU, normalized to the CPU core count for each of the three nodes on a RAC system. As noted above, when the normalized value per core crosses the value of "1," the host might be oversubscribed on CPU. Nodes 2 and 3 are usually below the value of 1. However, spikes in usage on node 1 might require further investigation. Also, there seems to be an imbalance in CPU usage across nodes:

[Graph: Number of OS processes on CPU, normalized to the CPU core count]

Breakdown of Foreground and Background Sessions on CPU and Wait Components, Expressed as AAS

When investigating a problematic time period, we can quickly get a high-level overview of the relation between CPU and waits experienced by foreground and background sessions:

[Graph: Average Active Sessions breakdown to foreground and background CPU and wait components]

Observing Waits After Applying a Fix to Reduce Them

After applying "a fix" on April 16th, the time spent waiting by foreground sessions decreased substantially. CPU demand also decreased.

[Graph: Foreground CPU vs waits]

 

Comparing the Number of AAS on CPU Normalized to CPU Core Count at the OS and DB Level on a Three-Node RAC DB

The observed DB isn't using all/most of the CPU available on the nodes, and there's "something else" using it at the OS level. That's visible on the graph for node 1, between May 6th at 7-9 PM, where CPU usage at the OS level increased, but that was not the case for DB sessions. Additionally, because we have the normalized per CPU core value displayed, we can see that node 1 crosses the value of 1 quite often.

[Graph: Comparison of Average Active Sessions between OS and DB level]

Logons per Second and Number of Logged-in Sessions per Node

RAC nodes 1 and 2 already have a relatively high number of logons per second at ~5, whereas node 3 is even higher at ~23. Additionally, there's a large imbalance in the number of established sessions on node 3 compared to nodes 1 and 2. Because each node has 8 physical cores (not visible from the graphs below), the ~2500 established sessions represent a potential risk should too many of them try to become active at the same time. Overall, it seems a connection pooling review is in order for this system.

[Graph: Session trends]

A final example, from a different server but still related to connection management: detecting a large drop in the number of established sessions and correlating it with the number of logons per second:

[Graph: Session trends]

7. Other Mining Tools

Some time ago, Maris Elsins published a post describing a set of handy scripts to mine the AWR repository. Make sure to check it out!

To conclude, here's a list of some free mining utilities. These are mostly created for AWR, but some exist for Statspack. Some of them parse a pre-created set of AWR/Statspack reports; others connect directly to the database and extract/analyze data from there. The script presented in this post might not offer the same functionality as those utilities, but for some of my use cases it complemented them by providing a customized set of pre-calculated, workload-related figures.

Categories: DBA Blogs

openssl: strange error.... (on the first look)

Dietrich Schroff - Fri, 2020-08-07 14:13
A few days ago I wanted to check the certificate of an IP address. No big deal - so I did:
schroff@zerberus:~$ openssl s_client -showcerts  -connect 82.165.229.87.87:443 

140011908769088:error:2008F002:BIO routines:BIO_lookup_ex:system
lib:../crypto/bio/b_addr.c:726:Name or service not known
connect:errno=22
So I opened Google to find a solution, but found nothing really helpful.

?

The answer was very easy: had I read the command line carefully, I would have spotted my error:

THE IP ADDRESS WAS INVALID

I had written an IPv4 address with five octets instead of four...

With a correct IPv4 address, the command worked as expected:
schroff@zerberus:~$ openssl s_client -showcerts  -connect 82.165.229.87:443 #
CONNECTED(00000003)
Can't use SSL_get_servername
depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = GeoTrust RSA CA 2018
verify return:1
depth=0 C = DE, ST = Rheinland-Pfalz, L = Montabaur, O = 1&1 Mail & Media GmbH, CN = gmx.net
verify return:1
---

Total weight selection of rows

Tom Kyte - Fri, 2020-08-07 04:46
I have 1000 cubes, each weighing between 360 and 430 grams. I need to choose a set of cubes with a total weight of 50,000 grams. Do you have any idea how to do it in SQL or PL/SQL? <code>
CREATE TABLE cubes (
  id     VARCHAR2(100 CHAR),
  weight NUMBER(10,3)
);

BEGIN
  FOR i IN 1..1000 LOOP
    INSERT INTO cubes VALUES (i, round(dbms_random.value(360, 430), 3));
  END LOOP;
  COMMIT;
END;
</code>
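Picking an exact-sum subset is a subset-sum problem, which plain SQL can only approximate cheaply. A hedged greedy sketch: take the heaviest cubes while the running total stays at or below 50,000 grams; the remaining gap (less than one cube's weight) can then be closed by swapping in a cube of matching weight, if one exists: <code>
SELECT id, weight, running_total
FROM (
  SELECT id, weight,
         -- cumulative weight, heaviest cubes first (id breaks ties)
         SUM(weight) OVER (ORDER BY weight DESC, id
                           ROWS UNBOUNDED PRECEDING) AS running_total
  FROM   cubes
)
WHERE  running_total <= 50000
ORDER  BY running_total;
</code>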
Categories: DBA Blogs
