DBA Blogs

Embedding Machine Learning Models in the Oracle Database: Create an ONNX model

DBASolved - Mon, 2024-04-15 09:20

  This post is the first of a three-part series where I’m going to show you how to use pre-configured […]

The post Embedding Machine Learning Models in the Oracle Database: Create an ONNX model appeared first on DBASolved.

Categories: DBA Blogs

Video on Client Side Failover in RAC

Hemant K Chitale - Thu, 2024-04-11 01:54

 I've posted a new video demonstrating Client Side Failover as defined in the tnsnames.ora file.
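
For reference, a minimal sketch of what such a tnsnames.ora entry can look like (host and service names are hypothetical): with FAILOVER=ON, the client walks the address list until a connection attempt succeeds.

MYDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (LOAD_BALANCE = OFF)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip.example.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = mydb_svc)
    )
  )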

Categories: DBA Blogs

Catastrophic Database Failure -- Deletion of Control and Redo Files

Tom Kyte - Wed, 2024-04-10 14:26
We recently had a database failure that resulted in data loss after an Oracle 19.3.0.0.0 database had both its control files and redo log files deleted. Please note that I am not a DBA, but simply an analyst who supports the system that sits on this Oracle database. Any amount of data loss is fairly serious, and I am wondering how we avoid this in the future. Before the control and redo files were deleted, we had an event wherein the drive this database is on filled up. This caused the database to stop writing transactions and prevented users from accessing the application. Once space was freed on this drive, the database operated normally for several hours until...the redo and control files were deleted. What would have caused the control and redo files to be deleted? In trying to figure out what happened, it was suggested that if we had expanded the drive's capacity in response to its becoming full, the later data loss would not have happened. Does Tom agree with that sentiment? Are these two events linked (disk drive nearly full and later data loss), or are they symptoms of two different things?
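
As general background on limiting the blast radius of exactly this failure: control files and redo log members are normally multiplexed across more than one location, and the current locations are easy to check. A hedged sketch (the views are standard; the output is environment-specific):

SELECT name FROM v$controlfile;

SELECT group#, member FROM v$logfile ORDER BY group#;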
Categories: DBA Blogs

Is 'SELECT * FROM :TABLE_NAME;' available?

Tom Kyte - Tue, 2024-04-09 20:06
Is 'SELECT * FROM :TABLE_NAME;' available?
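
For background, a bind variable can only supply a value, never an identifier such as a table name, so the statement above cannot work as written. A minimal sketch of the usual workaround (EMP is a hypothetical table; DBMS_ASSERT guards the concatenation against SQL injection):

DECLARE
  l_table VARCHAR2(128) := 'EMP';   -- would arrive as a parameter in practice
  l_cnt   NUMBER;
BEGIN
  -- identifiers cannot be bound, so validate the name and concatenate it
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(l_table)
    INTO l_cnt;
  DBMS_OUTPUT.PUT_LINE(l_table || ' has ' || l_cnt || ' rows');
END;
/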
Categories: DBA Blogs

Is fragmentation an issue?

Tom Kyte - Mon, 2024-04-08 07:06
Hi all, I have about 1000 tables. Some of them have had rows deleted and updated, so fragmentation has built up. How do I determine which tables are fragmented?
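
A hedged sketch of a common first pass (assumes an 8k block size and freshly gathered statistics; the 40% waste threshold is arbitrary): compare the blocks each table has allocated below its high-water mark with the blocks its rows would actually need.

SELECT table_name, blocks, num_rows, avg_row_len,
       ROUND(num_rows * avg_row_len / 8192) AS approx_needed_blocks
FROM   user_tables
WHERE  blocks > 128                                 -- ignore tiny tables
AND    num_rows * avg_row_len / 8192 < blocks * 0.6 -- > 40% apparent waste
ORDER  BY blocks DESC;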
Categories: DBA Blogs

Does migrating 4k block size tablespaces to an 8k database cause a performance impact?

Tom Kyte - Mon, 2024-04-08 07:06
I am migrating an 11g database cross-endianness from on-prem to ExaCS. The on-prem database db_block_size is 4k and all the tablespaces are also of 4k block size. Since I cannot provision a non-standard block size database in OCI, I am worried about the performance impact caused by the different block size. Please help me understand what database block size would be recommended for the scenario below.

-----------------------------------------------------------
Source : ON_PREM
-----------------------------------------------------------
Platform / ID            : AIX-Based Systems (64-bit) / 6
Version                  : 11.2.0.4.0
Size                     : 17 TB
db_block_size            : 4k
All Tablespaces BLK Size : 4k
-----------------------------------------------------------
Target : OCI - EXACS
-----------------------------------------------------------
Platform / ID            : LINUX / 13
Version                  : 11.2.0.4.0
Size                     : 17 TB
db_block_size            : 8K
APP Tablespaces BLK Size : 4k
SYSTEM/SYSAUX/TEMP/UNDO  : 8K

Phase 1: Migrate from AIX 11g to ExaCS 11g
Phase 2: 19c upgrade and multitenant (due to business requirements we have to split the migration and the upgrade)

Questions:
1. Can we guarantee that there will be no performance impact from the difference between tablespace and database block size if the db_4k_cache_size parameter is set to an adequately large value?
2. Or is it better to go with the same 4k block size as the source on-premises database?

Of course application regression testing and RAT will be included, but testing both cases is not feasible, hence I am reaching out for expert advice.
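
As background for question 1, a hedged sketch of what keeping 4k tablespaces in an 8k database involves (sizes and names are illustrative): a separate 4k buffer cache has to be configured before any 4k tablespace can be used, and it does not share memory with the default 8k cache, so it must be sized deliberately.

ALTER SYSTEM SET db_4k_cache_size = 2G SCOPE=BOTH SID='*';

CREATE TABLESPACE app_data_4k
  DATAFILE '+DATA' SIZE 10G AUTOEXTEND ON
  BLOCKSIZE 4K;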
Categories: DBA Blogs

Restore from Archivelog only

Tom Kyte - Mon, 2024-04-08 07:06
Hello Sir, I was able to get one scenario to work. I had a VM (server) running Oracle 19c with just 1 table of 5 records, and I took a backup of the whole VM (disk backup). I then added a new table with 3 records to my db (having ensured the db is in archivelog mode) and ran:

rman target /
backup database plus archivelog;

Next I added 2 more records and noted the system time, say 2024-04-06 15:33:55 (so I can restore up to this time). So basically a new table with 5 records. Once all this was done I ran the command below:

backup incremental level 1 database plus archivelog;

Then I deleted my VM and restored the first old copy of my VM backup (the one that had 1 table with 5 records). After the VM restore I followed the steps below and got point-in-time recovery to work up to 2024-04-06 15:33:55 (at which point I should have 2 tables with 5 records each). The main step I had missed earlier was RESTORING the control file, since I was doing the restore on a different (new VM) server:

SQL>  shutdown abort;
RMAN> startup nomount;
RMAN> RESTORE CONTROLFILE FROM '/mnt/orabkup1/snapcf_ev.f';
RMAN> startup mount;
RMAN> run {
        SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
        RESTORE DATABASE;
        RECOVER DATABASE;
        sql 'ALTER DATABASE OPEN RESETLOGS';
      }

Everything was good here, and with this approach point-in-time recovery worked. I had been missing the restore of the control file.

Now for the scenario I still cannot get to work, where I am sure I am making a very basic mistake (maybe I don't understand the archivelog and redo log properly). The scenario I want to make work is: I have a VM backup (disk backup) at the level of 1 table and 5 records. Then I create a 2nd table, add say 2 records to it, and this time I take only an ARCHIVELOG backup; then I add 3 more records, back up the incremental archivelogs, and note the time (let's assume 2024-04-06 15:33:55), with the following steps:

backup archivelog all;
insert into xxx VALUES(3,'Line 1');
insert into xxx VALUES(4,'Line 1');
commit;
backup incremental level 1 archivelog all;

Here I have not done "backup database plus archivelog" (assuming all those new inserts would be in the redo log and maybe in the archivelog?). Now I delete this VM, restore from disk backups a new VM from backup1 (where only 1 table with 5 records exists), and simply run the following:

shutdown abort;
startup nomount;
RESTORE CONTROLFILE FROM '/mnt/orabkup1/snapcf_ev.f';
startup mount;
run {
  SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
  RESTORE ARCHIVELOG ALL;
  RECOVER DATABASE;
  sql 'ALTER DATABASE OPEN RESETLOGS';
}

But unfortunately it complains:

ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'D:\BASE\MYDB\DATA\SYSTEM01.DBF'

Not sure why this, as I was thinking that Arc...
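
A hedged sketch of the direction a fix can take (the path is from the post above; details depend on what the restored control file knows about the backup pieces). Since the datafiles come from the VM snapshot, there is nothing for RESTORE DATABASE to restore; the archived logs only need to be visible to RMAN and applied on top of the snapshot datafiles:

CATALOG START WITH '/mnt/orabkup1/';   # register backup pieces / archived logs

run {
  SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
  # no RESTORE DATABASE here: the snapshot datafiles are the starting point
  RECOVER DATABASE;    # applies redo from each datafile's checkpoint SCN forward
  sql 'ALTER DATABASE OPEN RESETLOGS';
}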
Categories: DBA Blogs

Returning data in EXECUTE IMMEDIATE with dynamic values in USING clause

Tom Kyte - Mon, 2024-04-08 07:06
Hi Team, I have the below scenario. Step#1) User clicks into a particular App UI screen. Step#2) User selects multiple filters on the UI - say filter1, filter2 - which correspond to table columns. Step#3) For each filter selected, the user enters data - say Mark (for filter1), Will (for filter2) - based on which the search is performed on the respective filters (aka table columns). Step#4) The user inputs from Steps#2 and 3 are passed to a PL/SQL API which returns the desired SQL result in a paginated manner (pageSize: 50). The user inputs from Steps#2 and 3 will be dynamic. I have tried to implement this using native dynamic SQL, but it looks like I have hit a dead end here. I am able to use dynamic values in the USING clause, but not able to return the data from the SELECT statement with EXECUTE IMMEDIATE. I shared above a LiveSQL link which has a reproducible test case. If I comment out the line "BULK COLLECT INTO l_arId, l_arName, l_arType" in the procedure, the block executes successfully. But I need the result set from the SELECT statement in the procedure as output. Looking for some advice here. Thanks a bunch!
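
For what it's worth, EXECUTE IMMEDIATE can return multiple rows when BULK COLLECT INTO is combined with USING. A minimal sketch under assumed names (table, columns and binds are illustrative; pagination via OFFSET/FETCH needs 12c or later):

DECLARE
  TYPE t_id_tab   IS TABLE OF NUMBER;
  TYPE t_name_tab IS TABLE OF VARCHAR2(100);
  l_ids   t_id_tab;
  l_names t_name_tab;
  l_sql   VARCHAR2(4000);
BEGIN
  l_sql := 'SELECT ar_id, ar_name FROM articles
            WHERE  col1 = :b1 AND col2 = :b2
            ORDER  BY ar_id
            OFFSET :b3 ROWS FETCH NEXT 50 ROWS ONLY';
  EXECUTE IMMEDIATE l_sql
    BULK COLLECT INTO l_ids, l_names
    USING 'Mark', 'Will', 0;   -- filter values and the page offset as binds
  DBMS_OUTPUT.PUT_LINE(l_ids.COUNT || ' rows fetched');
END;
/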
Categories: DBA Blogs

19c doesn't allow truncation of data that is longer than the column's char(40) definition

Tom Kyte - Mon, 2024-04-08 07:06
We have an application that was written to insert a variable that is char(50) into a column that is defined as char(40). In Oracle 11g (I know this is very old) it would merely truncate the last 10 characters without issue. However, Oracle 19c doesn't allow this and raises an exception (which I believe should've always been the case). Where can I find documentation of this restriction and when it was changed, and is there a way around this other than changing the program code? Oracle 11 truncated the extra 10 characters in the statement below:

ADBBGNX_ADDRESS_LINE_1 := agentrecord.producerrec.businessAddressLine1;

Oracle 19 throws an exception with a NULL error status.
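
If changing the application code globally is not an option but the assignment itself can be touched, a hedged sketch of the usual fix is an explicit SUBSTR (names mirror the statement above; lengths are illustrative):

DECLARE
  adbbgnx_address_line_1 CHAR(40);
  l_source               CHAR(50) := 'an address line longer than forty characters';
BEGIN
  -- keep the first 40 characters explicitly instead of relying on truncation
  adbbgnx_address_line_1 := SUBSTR(l_source, 1, 40);
END;
/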
Categories: DBA Blogs

AnythingLLM - Any LLM, Any Document, Full Control, Full Privacy, Local, Secure

Pakistan's First Oracle Blog - Sun, 2024-04-07 18:14

This video shows how to locally install AnythingLLM to privately, securely, and remotely run any LLM with any RAG document. It all runs locally with zero required internet connectivity.

Categories: DBA Blogs

Oracle to Power BI

Tom Kyte - Thu, 2024-04-04 10:26
How do I connect an Oracle database / data set to Power BI? I have already searched Google and YouTube but I can't get it to work. Please help. Thank you.
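
For reference, Power BI Desktop's built-in Oracle connector requires the Oracle client software to be installed on the same machine; the Server field then accepts a tnsnames.ora alias or an EZConnect-style string. A hypothetical example of such a Server value:

dbhost.example.com:1521/orclpdb1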
Categories: DBA Blogs

Compiling Java class

Tom Kyte - Thu, 2024-04-04 10:26
Hi, We are trying to compile the following class in Oracle Database 23c. However, we are encountering a surprising error (we are not using ANY database link):

Error report -
ORA-04054: database link MYOPTIMIUM does not exist
04054. 00000 - "database link %s does not exist"
*Cause:    During compilation of a PL/SQL block, an attempt was made to use a non-existent database link.
*Action:   Either use a different database link or create the database link.

Java class:

create or replace and compile java source named "llm" as
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.sql.Clob;
import java.sql.SQLException;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class llm {
    public static String postPrompt(String appSource, String appSourceID, String targetllm,
                                    String param4, Clob pprompt) throws IOException, SQLException {
        try (Reader reader = pprompt.getCharacterStream()) {
            if (reader != null) {
                try (BufferedReader bufferedReader = new BufferedReader(reader)) {
                    StringBuilder stringBuilder = new StringBuilder();
                    String line;
                    while ((line = bufferedReader.readLine()) != null) {
                        stringBuilder.append(line);
                    }
                    String clobData = stringBuilder.toString();
                    ProcessBuilder pb = new ProcessBuilder("python3",
                        "/opt/oracle/Optimium/python/app/optimium/invokellm.py",
                        appSource, appSourceID, targetllm, param4, clobData);
                    Process p = pb.start();
                    StringBuilder out = new StringBuilder();
                    BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    String thisLine;
                    while ((thisLine = in.readLine()) != null) {
                        out.append(thisLine);
                    }
                    return out.toString();
                }
            }
        }
        return "";
    }
}
/

Thanks
Sammeer
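
A hedged diagnostic sketch (the link name is taken from the error text above): ORA-04054 at compile time usually means something the session resolves turns into a database link reference, so it can help to look for leftover links or synonyms pointing at MYOPTIMIUM:

SELECT owner, db_link, host FROM all_db_links;

SELECT owner, synonym_name, db_link
FROM   all_synonyms
WHERE  db_link IS NOT NULL;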
Categories: DBA Blogs

How to Install OpenDevin Locally

Pakistan's First Oracle Blog - Sat, 2024-03-30 20:00

This is a step-by-step, easy tutorial to locally install OpenDevin, an open-source project aiming to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects.

Commands Used:


Pre-requisites:


Update OS:

sudo apt-get update


Install git:

sudo apt-get install git


Install Nodejs:

sudo apt-get install nodejs


Install npm:

sudo apt install npm


Install Rust:

curl --proto '=https' --tlsv1.3 https://sh.rustup.rs -sSf | sh


Install docker:

sudo apt-get update

sudo apt-get install ca-certificates curl

sudo install -m 0755 -d /etc/apt/keyrings

sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc

sudo chmod a+r /etc/apt/keyrings/docker.asc


# Add the repository to Apt sources:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin


Install conda:

wget https://repo.anaconda.com/archive/Anaconda3-2022.05-Linux-x86_64.sh

bash Anaconda3-2022.05-Linux-x86_64.sh


Install uvicorn:

sudo apt install uvicorn


Backend Commands:

git clone https://github.com/OpenDevin/OpenDevin.git

cd OpenDevin

conda create -n opendevin python=3.11

conda activate opendevin 

docker ps

docker pull ghcr.io/opendevin/sandbox

which python

python --version

/home/ubuntu/anaconda3/envs/opendevin/bin/python -m pip install pipenv

/home/ubuntu/anaconda3/envs/opendevin/bin/python -m pipenv install -v

/home/ubuntu/anaconda3/envs/opendevin/bin/python -m pipenv shell

uvicorn opendevin.server.listen:app --port 3000


Frontend Commands:

python -m pipenv requirements > requirements.txt && python -m pip install -r requirements.txt

PYTHONPATH=`pwd` /home/ubuntu/anaconda3/envs/opendevin/bin/python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

Categories: DBA Blogs

Video on Host Names, SCAN and IP Addresses

Hemant K Chitale - Sat, 2024-03-30 04:31

 I've posted a new video on Host Names, SCAN and Virtual IPs in RAC (using a 2-node 19c Cluster).
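
A hedged sketch of commands that show these pieces on a live cluster (cluster and node names are hypothetical):

srvctl config scan                 # the SCAN name and its SCAN VIPs
srvctl config vip -node node1      # the virtual IP of one node
nslookup mycluster-scan            # a SCAN name should resolve to three addresses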

Categories: DBA Blogs

Question - increase size of number column

Tom Kyte - Mon, 2024-03-25 12:21
We just hit a 2.1 billion row count on a table with an INT primary key. This is the worst thing to happen :( Does anyone know if we can do the ALTER without requiring space on the DB for the entire table?
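
For what it's worth, a hedged sketch (names and precisions are illustrative): in Oracle, INT maps to NUMBER(38), which holds far more than 2.1 billion, so the ceiling often lives in client code using a signed 32-bit integer rather than in the column. If the column itself was declared with a small precision (say NUMBER(10)), widening it is a dictionary-only change and needs no space to rewrite the table:

ALTER TABLE big_table MODIFY (id NUMBER(19,0));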
Categories: DBA Blogs

SQL Performance differences with STATS gather

Tom Kyte - Mon, 2024-03-25 12:21
We have seen many situations in our environment where a SQL statement was running badly but the plan for the query had not changed. When we gather stats for the associated table, we see that the same query performs significantly better. However, there is no change in the PHV of the execution plan. My question is: if the PHV stays the same, that means the execution plan remains the same, so why is the performance varying? Are table statistics used by the optimizer even after the plan is generated?
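
A hedged sketch of a first check (the SQL_ID is a placeholder): the same PLAN_HASH_VALUE can coexist with several child cursors and very different runtime statistics, so comparing per-execution work before and after the stats gather is more telling than the PHV alone.

SELECT sql_id, child_number, plan_hash_value, executions,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_elapsed_secs,
       ROUND(buffer_gets  / NULLIF(executions, 0))          AS avg_buffer_gets
FROM   v$sql
WHERE  sql_id = '&sql_id';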
Categories: DBA Blogs

AUD$ EXP/TRUNCATE/IMP (feasibility)

Tom Kyte - Mon, 2024-03-25 12:21
We are going to MOVE the tablespace of the AUD$ table in PROD. Purpose: the AUD$ table is totally fragmented and the CLEANUP / PURGE runs very slowly, even with a maximum batch size of 1,000,000. In our test environment we had some issues using the API (DBMS_AUDIT_MGMT) to move the tablespace of AUD$. And we are using the STANDARD AUDIT TRAIL:

SELECT * FROM dba_audit_mgmt_config_params
WHERE audit_trail = 'STANDARD AUDIT TRAIL'
  AND parameter_name = 'DB AUDIT TABLESPACE';

DB AUDIT TABLESPACE    CLARITYAUDIT    STANDARD AUDIT TRAIL

The MOVE did work after we got an action plan from Oracle Support to fix the issue in the TEST environment, so that the MOVE went through via the API. Now we are planning to do the tablespace MOVE of AUD$ in PROD (online!). But we need a fallback plan, in case the MOVE hangs or the data in the AUD$ table becomes inconsistent. The fallback plan is:

1) EXP the data in a downtime window (audit trail disabled) and keep the dump file on the server, with parameter "DATA_ONLY", as the metadata (table DDL) would still be there.
2) Run the tablespace MOVE in PROD via the API (DBMS_AUDIT_MGMT).
3) If it goes through and AUD$ is accessible and purgeable, we are good. If not, we truncate the data in AUD$ and IMP the saved data (from the EXP dump file), again with parameter "DATA_ONLY".

I hope that is clear enough. The question now is whether step 3 would work or not. We are also planning to test step 3 in our TEST environment, but we are concerned whether this action plan (especially step 3) could impact PROD if we needed to go for the fallback plan. We would also appreciate any other action plan / option to save and recover the data in AUD$ in the above scenario. Thank you! Ali
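
For reference, a hedged sketch of the supported API call for relocating the standard audit trail (the target tablespace name is illustrative), which moves AUD$ to the named tablespace:

BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'CLARITYAUDIT_NEW');
END;
/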
Categories: DBA Blogs

How does Oracle Database provide the user$.user#

Tom Kyte - Mon, 2024-03-25 12:21
Hi Toms, eventually, after many years, I came across a question I had never considered. I have to deal with a customer who uses user$.user# for application purposes. That is, after creating a user, the application stores the user# within application table columns, say USR_ID, which subsequently leads to the requirement that user$.user# has to match USR_ID. In consequence, if you have to migrate that application via expdp / impdp (we have to, as we migrate from Solaris to Linux), these IDs won't match anymore, as the users on the new database are created with different user$.user# values. You do not have to tell me that THAT application needs "some redesign"... However, I have some questions regarding user$.user#. As far as I have seen / read, when creating a new user with the usual "create user" statement, the new user's user# is provided by the Oracle RDBMS as the user# of "_NEXT_USER". _NEXT_USER's user# serves as a high-water mark; even when dropping the user again, _NEXT_USER's user# won't decrease (it looks like an Oracle-maintained sequence is used), so creating and dropping users leads to unused ranges of numbers in user$.user#. Questions: - Which sequence provides the number of _NEXT_USER? - Is there any way to reset it? - Or is there any way to influence user$.user#, or the number that is provided by the RDBMS to be stored as user$.user#? => I assume this may result in corruption, but perhaps there is a way. Thanks and best regards - dietmar
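
A hedged sketch for poking at this (SYS-owned internals, unsupported territory): the high-water-mark row itself can at least be inspected, even though the mechanism maintaining it is not documented.

SELECT user#, name FROM sys.user$ WHERE name = '_NEXT_USER';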
Categories: DBA Blogs

The Serve in #Pickleball

The Oracle Instructor - Sun, 2024-03-24 12:24

The best news first: the serve in pickleball is easy to learn and to execute. This is unlike tennis or table tennis, say, where you typically spend a lot of time practicing your serve in order to be competitive. The rules also make it considerably harder for the serve to become a game-deciding "weapon". Aces, or a forcing advantage straight off the serve, are therefore quite rare.

Rules

The positioning rules apply to the volley serve as well as to the drop serve:

At the moment the ball meets the paddle during the serve, the serving player's feet must be behind the baseline and within the imaginary extensions of the lines bounding that half of the court:

The player's upper body and the ball may be inside the court at that moment. The player may also step into the court immediately after the ball has left the paddle.

But not before:

Even touching the baseline with the tip of your foot during the serve is a fault.

Nor may you stand arbitrarily far out to the side:

Here the left foot is outside the extended lines of the left half of the court, which is why this serve would not be legal.

Who serves when, and to where, and what about the NVZ?

The serve must land in the diagonally opposite service court, and the ball must not land in the NVZ. The lines count as part of the NVZ: a ball on the back line of the NVZ is a fault. Likewise, the lines count as part of the service court: a ball on the baseline or on the sideline is not a fault. If the point is won, the server and their partner switch sides. If the first server loses the rally, their partner continues serving from the side where they are currently standing. If the second server also loses the rally, the serve passes to the other team. I covered the scoring and the players' positioning in this article.

Volley Serve

This is currently the most popular serve, and it was originally the only serve allowed. The ball is struck directly out of the hand.

The paddle head must move from low to high, as shown in 4-3.

The paddle head must not be above the wrist at the moment the ball is struck (4-1 shows the correct execution; 4-2 is a frequently seen fault).

In addition, the ball must be struck below the server's waist, as shown in 4-3.

The image above comes from the USAP Official Rulebook.

All the volley serve rules essentially exist to ensure that this serve does not become a game-deciding advantage. In the following clip we see a serve by Anna Leigh Waters, currently the best female player in the world:

This type of serve is the one most often seen among the pros: volley serve, topspin, played deep into the court. But we also see that her opponent returns the serve without much trouble. Aces, or points won directly off the serve, are quite rare in pickleball, in stark contrast to tennis and table tennis.

Drop Serve

This type of serve has only been allowed since 2021. Apart from the foot-positioning rules described above, the drop serve has just one additional rule: the ball must be dropped from the hand. Tossing it up, or throwing/pushing it downward, is not allowed. In particular, the ball may then be struck in any manner. That makes this serve especially beginner-friendly, because it is hard to violate the rules with it.

I have already dedicated this article to the drop serve.

Rules aside, how should you serve?

There are two basic schools of thought here:

One says that because you rarely win a point outright with the serve anyway, you should simply play it as safely as possible into the back third of the court:

Into the back third, because otherwise the returners can too easily hit a strong return and keep the serving team pinned at the back. With the yellow target zone, it is unlikely that the serve goes out.

The other school (to which I also belong) says: you can afford to take some risk with the serve. After all, the returning team cannot score a point. It is fine if 2 out of 10 serves go out while the remaining 8 make it hard for the returners to keep us at the back; the occasional outright point should come of it, too. That is why my target zones look like this:

Most of my serves go toward target zone 1, with the occasional one to 2 and 3. The red zones are considerably closer to the lines than the yellow one, which of course increases the risk of the ball going out.

In general, the following holds for the serve in pickleball:

Depth matters more than pace or spin. A relaxed, high-arcing ball lobbed into the back third of the service court gives the returner more trouble than hard topspin into the middle. Short serves work occasionally as a surprise weapon; otherwise they only make it easier for the returner to move up to the NVZ.
Categories: DBA Blogs
