Feed aggregator

Connecting to MySQL Database Service (MDS) via DBeaver

DBASolved - Mon, 2022-05-16 10:37

With every new service on any cloud platform, the need to make connections is essential. This is the case with […]

The post Connecting to MySQL Database Service (MDS) via DBeaver appeared first on DBASolved.

Categories: DBA Blogs

A quick way of generating Informatica PowerCenter Mappings from a template

Rittman Mead Consulting - Mon, 2022-05-16 04:52
Generating Informatica PowerCenter Content - the Options

In our blogs we have discussed the options for Oracle Data Integrator (ODI) content generation here and here. Our go-to method is to use the ODI Java SDK, which allows querying, manipulating and generating new ODI content.

Can we do the same with Informatica PowerCenter? In the older PC versions there was the Design API that enabled browsing the repository and creating new content. However, I have never used it. My impression is that Oracle APIs are more accessible than Informatica APIs in terms of documentation, help available online and availability for download and tryout.
If we want to browse the PowerCenter repository content, there is an easy way - query the repository database. But what about content generation? Who will be brave or foolish enough to insert records directly into a repository database!? Fortunately, there is a way, and a fairly easy one, if you don't mind doing a bit of Python scripting.

Generate PowerCenter Mappings - an Overview

Selective Informatica PC repository migrations are done via XML export and import - it is easy and mostly fool-proof. If we can generate XMLs for import, then we have found a way of auto-generating PowerCenter content. Informatica seems to support this approach by giving us nice, descriptive error messages if something is wrong with import XMLs. Only completely valid XMLs will import successfully. I have never managed to corrupt my Informatica repository with a dodgy XML import.

Let us look at an example - we need to extract a large number of OLTP tables to a Staging schema. The source and staging tables have very similar structures, except the staging tables have MD5 codes based on all non-key source fields to simplify change data capture (CDC) and also have the extract datetime.
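The MD5-based change data capture can be sketched in Python. This is an illustration only: the row layout, field names and the `|` separator are assumptions, not part of the actual mappings.

```python
import hashlib

def md5_of_row(row: dict, key_fields: set) -> str:
    """Concatenate all non-key field values and return an MD5 hex digest.

    Field order is fixed by sorting the field names so the digest is
    stable across runs; comparing digests detects changed rows for CDC.
    """
    non_key_values = [str(row[f]) for f in sorted(row) if f not in key_fields]
    return hashlib.md5('|'.join(non_key_values).encode('utf-8')).hexdigest()

# Hypothetical source row: only NAME and PRICE feed the digest, ID is the key.
row = {'ID': 42, 'NAME': 'Widget', 'PRICE': '9.99'}
print(md5_of_row(row, key_fields={'ID'}))
```

If the digest stored on the staging row differs from the digest computed on the incoming source row, the row has changed and needs to be re-extracted.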

  1. We start by creating a single mapping in Designer, test it, make sure we are 100% happy with it before proceeding further;
  2. We export the mapping in XML format and in the XML file we replace anything unique to the source and target table and their fields with placeholder tags: [[EXAMPLE_TAG]]. (See the XML template example further down.)
  3. Before we generate XMLs for all needed mappings, we need to import Source and Target table definitions from the databases. (We could, if we wanted, generate Source and Target XMLs ourselves, but PC Designer allows us to import tables in bulk, which is quicker and easier than generating the XMLs.)
  4. We export all Sources into a single XML file, e.g. sources.xml. Same with all the Targets - they go into targets.xml. (You can select multiple objects and export in a single XML in Repository Manager.) The Source XML file will serve as a driver for our Mapping generation - all Source tables in the sources.xml file will have a Mapping generated for them.
  5. We run a script that iterates through all source tables in the source XML, looks up its target in the targets XML and generates a mapping XML. (See the Python script example further down.) Note that both the Source and Target XML become part of the Mapping XML.
  6. We import the mapping XMLs. If we import manually via the Designer, we still save time in comparison to implementing the mappings in Designer one by one. But we can script the imports, thus getting both the generation and import done in minutes, by creating an XML Control File as described here.
Scripting Informatica PowerCenter Mapping generation

A further improvement to the above would be reusable Session generation. We can generate Sessions in the very same manner as we generate Mappings.

The Implementation

An example XML template for a simple Source-to-Staging mapping that includes Source, Source Qualifier, Expression and Target:

<?xml version="1.0" encoding="UTF-8"?>
<FOLDER NAME="Extract" GROUP="" OWNER="Developer" SHARED="NOTSHARED" DESCRIPTION="" PERMISSIONS="rwx---r--" UUID="55321111-2222-4929-9fdc-bd0dfw245cd3">

            <TABLEATTRIBUTE NAME ="Sql Query" VALUE =""/>
            <TABLEATTRIBUTE NAME ="User Defined Join" VALUE =""/>
            <TABLEATTRIBUTE NAME ="Source Filter" VALUE =""/>
            <TABLEATTRIBUTE NAME ="Number Of Sorted Ports" VALUE ="0"/>
            <TABLEATTRIBUTE NAME ="Tracing Level" VALUE ="Normal"/>
            <TABLEATTRIBUTE NAME ="Select Distinct" VALUE ="NO"/>
            <TABLEATTRIBUTE NAME ="Is Partitionable" VALUE ="NO"/>
            <TABLEATTRIBUTE NAME ="Pre SQL" VALUE =""/>
            <TABLEATTRIBUTE NAME ="Post SQL" VALUE =""/>
            <TABLEATTRIBUTE NAME ="Output is deterministic" VALUE ="NO"/>
            <TABLEATTRIBUTE NAME ="Output is repeatable" VALUE ="Never"/>

            <!-- remaining template content (Source, Source Qualifier, Expression
                 and Target definitions with their [[...]] placeholder tags)
                 omitted here for brevity -->
</FOLDER>

Python script snippets for generating Mapping XMLs based on the above template:

  1. To translate database types to Informatica data types:
mapDataTypeDict = {
	"nvarchar": "nstring",
	"date": "date/time",
	"timestamp": "date/time",
	"number": "decimal",
	"bit": "nstring"
}

2. Set up a dictionary of tags:

xmlReplacer = {
	"[[SOURCE]]": "",
	"[[TARGET]]": "",
	"[[MAPPING_NAME]]": "",
	"[[MD5_EXPRESSION]]": "",
	"[[SRC_2_SQ_CONNECTORS]]": "",
	"[[SQ_2_EXP_CONNECTORS]]": ""
}

3. We use the Source tables we extracted in a single XML file as our driver for Mapping creation:

import xml.etree.ElementTree as ET

sourceXmlFilePath = '.\\sources.xml'

# go down the XML tree to individual Sources
sourceTree = ET.parse(sourceXmlFilePath)
sourcePowerMart = sourceTree.getroot()
sourceRepository = list(sourcePowerMart)[0]
sourceFolder = list(sourceRepository)[0]

for xmlSource in sourceFolder:
	# generate a Mapping for each Source;
	# we also need to go down to the Field level:
	for sourceField in xmlSource:
		# field-level operations go here
		pass

4. Generate tag values. This particular example is a column-level tag: a column connector between the Source Qualifier and the Expression:


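A minimal sketch of such connector generation follows. The transformation instance names (`SQ_EXTRACT`, `EXP_MD5`) and the field names are assumptions for illustration, not values from the original template; the resulting list is what gets joined into the `[[SQ_2_EXP_CONNECTORS]]` tag in the next step.

```python
# Build one CONNECTOR element per port, linking the Source Qualifier port
# to the Expression port of the same name.
connectorTemplate = ('<CONNECTOR FROMFIELD ="{field}" FROMINSTANCE ="SQ_EXTRACT" '
                     'FROMINSTANCETYPE ="Source Qualifier" TOFIELD ="{field}" '
                     'TOINSTANCE ="EXP_MD5" TOINSTANCETYPE ="Expression"/>')

sqToExpConnectors = []
for fieldName in ['CUSTOMER_ID', 'CUSTOMER_NAME', 'LAST_UPDATED']:
    sqToExpConnectors.append(connectorTemplate.format(field=fieldName))
```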
5. We assign our tag values to the tag dictionary entries:

xmlReplacer["[[SQ_2_EXP_CONNECTORS]]"] = '\n'.join(sqToExpConnectors)

6. We replace the tags in the XML Template with the values from the dictionary:

for replaceTag in xmlReplacer.keys():
	mappingXml = mappingXml.replace(replaceTag, xmlReplacer[replaceTag])
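Putting the steps together, the whole generation can be sketched as a loop that writes one importable mapping XML per source table. The one-element template and the hard-coded table list are illustrative assumptions; in practice the template is the full exported mapping XML and the table list is driven by sources.xml.

```python
import pathlib
import tempfile

# Stand-in for the exported mapping template read from disk.
mappingTemplate = '<MAPPING ISVALID ="YES" NAME ="[[MAPPING_NAME]]"/>'

outDir = pathlib.Path(tempfile.mkdtemp())
for tableName in ['CUSTOMERS', 'ORDERS']:   # would come from sources.xml
    # fill the tag dictionary for this table...
    xmlReplacer = {'[[MAPPING_NAME]]': 'm_STG_' + tableName}
    # ...replace the tags in a copy of the template...
    mappingXml = mappingTemplate
    for replaceTag in xmlReplacer.keys():
        mappingXml = mappingXml.replace(replaceTag, xmlReplacer[replaceTag])
    # ...and write out one XML file per mapping, ready for import.
    (outDir / ('m_STG_' + tableName + '.xml')).write_text(mappingXml, encoding='utf-8')
```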

If you are interested in finding out more about our approach to generating Informatica content, please contact us.

Categories: BI & Warehousing

Data Annotation with SVG and JavaScript

Andrejus Baranovski - Mon, 2022-05-16 01:35
I explain how to build a simple data annotation tool with SVG and JavaScript in an HTML page. The sample code renders two boxes in SVG on top of a receipt image. You will learn how to select and switch between annotation boxes. Enjoy!


Maximum number of concurrent sessions in multi instance database

Tom Kyte - Sun, 2022-05-15 23:46
Hi, we have Oracle 12c on 2 instances. I know GV$LICENSE gives the maximum number of concurrent sessions since the start of each instance, but is there a way to get the maximum number of sessions accessing the database from both instances together? Syed
Categories: DBA Blogs

Index on XMLTYPE with XPATH Expression including a XPATH Function

Tom Kyte - Sun, 2022-05-15 23:46
Is there a way to create an index for an XPath expression that includes an XPath function? Please note that the XMLType index creation fails on Oracle LiveSQL.
Categories: DBA Blogs

Cannot Upload git-upload-pack error while cloning Azure Git Repository

Tom Kyte - Sun, 2022-05-15 23:46
Hi,

Background and requirement: I am working for a firm that uses Oracle SQL Developer for cleaning and manipulating the data residing in the Oracle Database. We use Microsoft Azure for complete lifecycle management and work planning, so we decided to use an Azure-hosted Git repository to host our code remotely and leverage its version control capabilities. We have a Git repository on Azure and are trying to clone it in Oracle SQL Developer.

Steps followed to clone the existing remote repository in Oracle SQL Developer:
1. Go to the Teams menu.
2. Hover over Git.
3. Select the Clone option.
4. After the Clone from Git wizard opens, enter the correct repository URL, username and password.
5. We work on a VPN, so I have set the corresponding proxy settings too. Testing the proxy gives a success message, so there is no issue with the proxy settings.
6. Click Next to fetch the remote repository branches. The error appears at this stage.

Error that occurred: a popup titled "Validation failed" with the content https://<remote repo url>/_git/<remote repo name>: cannot open git-upload-pack.

Troubleshooting methods tried:
1. A lot of troubleshooting suggestions online said that setting sslVerify to false in the local Git config could help. Did that, no gain.
2. Tried cloning my personal Git repository to test the Git integration in Oracle SQL Developer. It was able to fetch the remote branches successfully, so the error only comes up while cloning an Azure repository.
3. Looked at almost all the solution links online, but most of them were for Eclipse. Since both Eclipse and SQL Developer are Java-based applications, I tried those resolutions too, but most of them again concerned setting SSL Verify to false.

In the end I have raised the issue here. Hoping to find some help. Thanks in advance.
Categories: DBA Blogs



FORCE_LOGGING in Autonomous Database

Tom Kyte - Fri, 2022-05-13 16:46
Is FORCE_LOGGING enabled at the CDB level in ADB-S? I checked that FORCE_LOGGING is not enabled at the PDB level or at the tablespace level.
Categories: DBA Blogs

Find Circular References in UDTs

Tom Kyte - Fri, 2022-05-13 16:46
The latest Oracle docs have the following design tip under "Circular Dependencies Among Types": avoid creating circular dependencies among types. In other words, do not create situations in which a method of type T returns a type T1, which has a method that returns a type T. https://docs.oracle.com/en/database/oracle/oracle-database/21/adobj/design-consideration-tips-and-techniques.html Attached is a link to LiveSQL that exhibits a very simple circular dependency that will likely have issues recompiling during a Data Pump operation. Assuming we already have a large application that the compiler is having issues with, is there a query we can use to find instances where T1 references T2 and T2 references T1? We would also need to find them a few generations apart (T1 references T2, T2 references T3, T3 references T1). The reference may be either in an attribute (REF) or in a subprogram (parameter or return type). This would allow us to find which types may need to be changed to be brought in line with the latest documentation. Thanks in advance for your help.
Categories: DBA Blogs

Select XMLQuery XML parsing error with ampersands

Tom Kyte - Fri, 2022-05-13 16:46
Hi Tom and Team, I guess this issue is related to the namespace, but as I don't know this area well, could you help me solve the error when running this select, please?

with testTable as (
    select xmltype('<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Body>
            <ns5:MT_Consulta_pedidos_pagamento xmlns:ns2="urn:Cpy.com/Model/ConsultaPedidosDevolucao/v0" xmlns:ns3="urn:Cpy.com/Model/AtualizaStatusPagamento/v0" xmlns:ns4="urn:Cpy.com/Model/AtualizaItensDevolvidosCancelados/v0" xmlns:ns5="urn:Cpy.com/Model/ConsultaPedidosPagamento/v0">
                <codigo_empresa>&Empresa</codigo_empresa>
                <numero_pedido_venda>&Pedido</numero_pedido_venda>
                <codigo_loja>&Loja</codigo_loja>
                <numero_componente>&Componente</numero_componente>
            </ns5:MT_Consulta_pedidos_pagamento>
        </soap:Body>
    </soap:Envelope>') xml_val
    from dual
)
select xmlquery('/soap' passing xml_val returning content) as dados
from testTable;
Categories: DBA Blogs

Patch Oracle GoldenGate Microservices using RESTful APIs

DBASolved - Fri, 2022-05-13 08:10

In 2017, Oracle introduced the world to Oracle GoldenGate Microservices through the release of Oracle GoldenGate 12c ( Upon the […]

The post Patch Oracle GoldenGate Microservices using RESTful APIs appeared first on DBASolved.

Categories: DBA Blogs

How can we execute a SQL script file in SQL trigger and output of this SQL script execution into the log file?

Tom Kyte - Thu, 2022-05-12 22:26
How can we execute a SQL script file from a trigger and capture the output of that execution in a log file? We are automating the execution of one of our SQL script files: once data is inserted into the table, we want the trigger to execute the SQL script file. Regards, Abhishek Bhargava
Categories: DBA Blogs

PLSQL nested procedure hides resolution of an outer procedure

Tom Kyte - Thu, 2022-05-12 22:26
declare
    type t1 is record (f1 number);
    type t2 is record (f1 number);
    v1 t1;
    v2 t2;
    procedure q(p1 in t1) is begin null; end q;
    procedure p(p1 in t1, p2 in t2) is
        procedure q(p2 in t2) is begin null; end q;
    begin
        q(p1);
        q(p2);
    end p;
begin
    p(v1, v2);
end;
/

Procedure p has a nested procedure with the same name as an outer procedure (q). PL/SQL cannot resolve the call to q, raising the error PLS-00306: wrong number or types of arguments in call to 'Q'. If I move the nested procedure to an outer scope, the block runs OK:

declare
    type t1 is record (f1 number);
    type t2 is record (f1 number);
    v1 t1;
    v2 t2;
    procedure q(p1 in t1) is begin null; end q;
    procedure q(p2 in t2) is begin null; end q;
    procedure p(p1 in t1, p2 in t2) is
    begin
        q(p1);
        q(p2);
    end p;
begin
    p(v1, v2);
end;
/

It seems that the local procedure q(t2) hides the outer q(t1), even though they have different signatures. Are there any reasons for that behaviour? Thanks, Eddy
Categories: DBA Blogs

External table in a PL/SQL procedure

Tom Kyte - Thu, 2022-05-12 22:26
Hi Tom. My task: move several dozen text file imports from SQL*Loader (on AIX) into callable PL/SQL procedures. The text files are static in structure with daily refreshes of the contents. The contents are loaded into individual tables in our 19c EE database. The solution appeared to be external tables, so I created a proof-of-concept example that worked as expected as stand-alone code. So far, so good:

SELECT * FROM all_directories WHERE directory_name = 'CONNECT2'; -- returns /connect2.

CREATE TABLE MY_EXT_TBL
(
    CUSIP        VARCHAR2(25 BYTE),
    DESCRIPTION  VARCHAR2(200 BYTE),
    QTY          NUMBER(18,5),
    ACCOUNT      VARCHAR2(100 BYTE)
)
ORGANIZATION EXTERNAL
(
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY CONNECT2
    ACCESS PARAMETERS
    (
        RECORDS DELIMITED BY NEWLINE
        BADFILE CONNECT2:'MY_EXT_TBL%a_%p.bad'
        LOGFILE CONNECT2:'MY_EXT_TBL%a_%p.log'
        DISCARDFILE CONNECT2:'MY_EXT_TBL%a_%p.discard'
        FIELDS TERMINATED BY '|'
        MISSING FIELD VALUES ARE NULL
        (CUSIP, DESCRIPTION, QTY, ACCOUNT)
    )
    LOCATION ('exttabletestfile.txt')
)
REJECT LIMIT UNLIMITED;
-- Table MY_EXT_TBL created.

SELECT COUNT(*) FROM MY_EXT_TBL; -- Returns 65159. Matches file row count.

It was when I attempted to move the working code into a procedure that things went sour. This example shows a very basic case (no log, bad, or discard files) and hints at the hazards of going that route. I accepted that challenge, but after trying every combination of single and double quotes around file names without success, I am stumped. This feels harder than it should be. If external tables in a stored procedure are a valid, if tricky, solution, could you please demonstrate a working example? Or should I be using UTL_FILE instead? Or something else? Best regards, Dexter
Categories: DBA Blogs

Configure of Oracle Data Miner repository in SQL Developer Desktop to work with Autonomous Database

Tom Kyte - Thu, 2022-05-12 04:06
I was looking at this article: https://blogs.oracle.com/machinelearning/post/oracle-data-miner-now-available-for-autonomous-database. Is Data Miner also supported on ADW? If so, I am looking for a tutorial on setting up Oracle Data Miner to use with ADW. In particular, I am struggling with the setup of the Data Miner connection / user with SYS privileges to install the Data Miner repository. I am using SQL Developer on macOS.
Categories: DBA Blogs



table with 900 million records with 2 clob fields and weighing 5tera and without indexes

Tom Kyte - Wed, 2022-05-11 09:46
Greetings Oracle DB gurus. I would like a recommendation on this subject. The database weighs 7 TB in total, but 5 of those 7 TB is just the audit table. That table holds only 3 years of data (the business needs to keep all the data), has more than 900 million records and 2 CLOB fields, and it is a movement table. We have had several incidents related to this table: slowness in the database when inserting into it (its CLOB fields sometimes hold 10 million characters; I don't know if that is related), and apart from that we have run out of disk space, tablespace, or data file space. The log fills up so fast that the alerts don't even arrive before the disk is full. This table is used by several applications at the same time and saves all the activities that users perform; the CLOB fields hold the details of the activities. The business wants to pull reports from this table, yet the table has only one index.

Here is the structure of the table:

CREATE TABLE EBTDEV.ADMIN_AUDIT
(
    ID                 NUMBER NOT NULL,
    EVENT_TYPE         NUMBER(1, 0),
    OWNER_ID           NUMBER,
    OWNER              VARCHAR2(100 BYTE),
    OWNER_PERMISSIONS  CLOB,
    EVENT_DESCRIPTION  VARCHAR2(200 BYTE),
    OBJECT_TYPE        VARCHAR2(100 BYTE),
    OBJECT_ID          NUMBER,
    BEFORE             CLOB,
    AFTER              CLOB,
    TERMINAL           VARCHAR2(100 BYTE),
    EVENT_DATE         TIMESTAMP(6),
    AGENCY             VARCHAR2(10 BYTE),
    PORTAL             VARCHAR2(20 BYTE),
    UPD_FILE_DW        TIMESTAMP(6)
);

And this is the only index it has:

CREATE INDEX EBTDEV.IX_EVENT_DT_UPD_FILE_DW ON EBTDEV.ADMIN_AUDIT (EVENT_DATE ASC, UPD_FILE_DW ASC);

My question: what is your recommendation to improve performance when creating reports, and to optimize the table so that we no longer have database space issues and slowness?
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator