Feed aggregator

Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 10 Regions Latest Sao Paulo @ Brazil

Online Apps DBA - Mon, 2019-08-26 08:58

New Region Added: Sao Paulo @ Brazil. In 2019, up to mid-August, Oracle added 6 new regions to its Gen 2 Cloud (OCI), with a lot more in the pipeline. This means you now have 10 regions in total: 4 with 3 availability domains and 6 with a single availability domain. If you want to get full […]

The post Region & Availability Domain (AD) in Oracle Cloud Infrastructure (OCI): 10 Regions Latest Sao Paulo @ Brazil appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Troubleshooting

Jonathan Lewis - Mon, 2019-08-26 06:19

A recent thread on the Oracle Developer Community starts with the statement that a query is taking a very long time (with the question “how do I make it go faster?” implied rather than asked). It’s 12.1.0.2 (not that that’s particularly relevant to this blog note), and we have been given a number that quantifies “very long time” (again not particularly relevant to this blog note – but worth mentioning because your “slow” might be my “wow! that was fast” and far too many people use qualitative adjectives when the important detail is quantitative). The query had already been running for 15 hours – and here it is:


SELECT 
        OWNER, TABLE_NAME 
FROM
        DBA_LOGSTDBY_NOT_UNIQUE 
WHERE
        (OWNER, TABLE_NAME) NOT IN (
                SELECT 
                        DISTINCT OWNER, TABLE_NAME 
                        FROM     DBA_LOGSTDBY_UNSUPPORTED
        ) 
AND     BAD_COLUMN = 'Y'

There are many obvious suggestions anyone could make for things to do to investigate the problem – start with the execution plan, check whether the object statistics are reasonably representative, run a trace with wait state tracing enabled to see where the time goes; but sometimes there are a couple of very simple observations you can make that point you to simple solutions.

Looking at this query we can recognise that it’s (almost certainly) about a couple of Oracle data dictionary views (which means it’s probably very messy under the covers with a horrendous execution plan) and, as I’ve commented from time to time in the past, Oracle Corp. developers create views for their own purposes so you should take great care when you re-purpose them. This query also has the very convenient feature that it looks like two simpler queries stitched together – so a very simple step in trouble-shooting, before going into any fine detail, is to unstitch the query and run the two parts separately to see how much data they return and how long they take to complete:


SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE BAD_COLUMN = 'Y'

SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED

It’s quite possible that the worst-case scenario for the total run time of the original query could be reduced to the sum of the run times of these two queries. One strategy to achieve this would be a rewrite of the form:

select  * 
from    (
        SELECT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_NOT_UNIQUE WHERE BAD_COLUMN = 'Y'
        minus
        SELECT DISTINCT OWNER, TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED
)

Unfortunately the immediately obvious alternative may be illegal thanks to things like duplicates (which disappear in MINUS operations) or NULLs (which can make ALL the data “disappear” in some cases). In this case the original query might be capable of returning duplicates of (owner, table_name) from dba_logstdby_not_unique which would collapse to a single occurrence each in my rewrite – so my version of the query is not logically equivalent (unless the definition of the view enforces uniqueness); on the other hand, tracking back through the original thread to the MoS article where this query comes from, we can see that even if the query could return duplicates we don’t actually need to see them.
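
As a minimal illustration of the NULL trap – a throwaway sketch, with two single-column tables invented purely for the demonstration:

create table t_child  (id number);
create table t_parent (id number);

insert into t_child  values (1);
insert into t_parent values (null);

-- NOT IN returns no rows: the NULL makes the predicate "unknown" for every row
select  id
from    t_child
where   id not in (select id from t_parent);

-- MINUS still returns the row with id = 1
select id from t_child
minus
select id from t_parent;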

And this is the point of the blog note – it’s a general principle (that happens to be a very obvious strategy in this case): if a query takes too long, how does it compare with a simplified version of the query that might be a couple of steps short of the final target? If it’s easy to spot the options for simplification, and if the simplified version operates efficiently, then isolate it (using a no_merge hint if necessary), and work forwards from there. Just be careful that your rewrite remains logically equivalent to the original (if it really needs to).

In the case of this query, the two parts took 5 seconds and 9 seconds to complete, returning 209 rows and 815 rows respectively. Combining the two queries with a minus really should get the required result in no more than 14 seconds.

Footnote

The “distinct” in the second query is technically redundant, as the minus operation applies a sort unique operation to both intermediate result sets before comparing them. Similarly, the “distinct” was redundant when the second query was used for the “in subquery” construction – again there would be an implied uniqueness operation if the optimizer decided to do a simple unnest of the subquery.


DevOps for Oracle DBA

Pakistan's First Oracle Blog - Sun, 2019-08-25 00:21
DevOps is a natural evolution for Oracle database administrators, or sysadmins of any kind. The key to remaining relevant in the industry is to embrace DevOps, now and in the near future.

The good news is that if you are an Oracle DBA, you already have a solid foundation. You have worked with an enterprise, world-class database system and are aware of high availability, disaster recovery, performance optimization, and troubleshooting. Having said that, there is still a lot to learn, and unlearn, to become a DevOps engineer.


You will need to look beyond Oracle, the Linux shell, and the core-competency mantra. You will need to learn a proper programming language such as Python. You will need to learn a software engineering framework like the Agile methodology, and you will need to learn tools such as Git. Above all, you will need to unlearn the idea that you only manage the database. As a DevOps engineer in today's Cloud era, you will be responsible for end-to-end delivery.


Without Cloud skills, it's impossible to transition from an Oracle DBA to a DevOps role. Regardless of the cloud provider, you must know networking, compute, storage, and infrastructure as code. You already know the database side of things, but you should now learn a decent amount about other databases, as you will be expected to migrate and manage them in the cloud.


So any public cloud (AWS, Azure, or GCP), plus a programming language (Python, Go, or NodeJS), plus Agile concepts, infrastructure as code (Terraform or CloudFormation), and a plethora of supporting tools such as code repositories and pipelines will be required to become an acceptable DevOps engineer.


Becoming obsolete by merely staying an Oracle DBA is not an option, so buckle up and start your DevOps journey today.
Categories: DBA Blogs

Alfresco – Share Clustering fail with ‘Ignored XML validation warning’

Yann Neuhaus - Sat, 2019-08-24 10:00

In a recent project on Alfresco, I had to set up a Clustering environment. It all went smoothly, but I did face one single issue with the setup of the Clustering on the Alfresco Share layer. That’s something I had never faced before, and you will understand why below.

Initially, to set up the Alfresco Share Clustering, I used the sample file packaged in the distribution zip (e.g.: alfresco-content-services-distribution-6.1.0.5.zip):

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hz="http://www.hazelcast.com/schema/spring"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                http://www.hazelcast.com/schema/spring
                https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd">

   <!--
        Hazelcast distributed messaging configuration - Share web-tier cluster config
        - see http://www.hazelcast.com/docs.jsp
        - and specifically http://docs.hazelcast.org/docs/2.4/manual/html-single/#SpringIntegration
   -->
   <!-- Configure cluster to use either Multicast or direct TCP-IP messaging - multicast is default -->
   <!-- Optionally specify network interfaces - server machines likely to have more than one interface -->
   <!-- The messaging topic - the "name" is also used by the persister config below -->
   <!--
   <hz:topic id="topic" instance-ref="webframework.cluster.slingshot" name="slingshot-topic"/>
   <hz:hazelcast id="webframework.cluster.slingshot">
      <hz:config>
         <hz:group name="slingshot" password="alfresco"/>
         <hz:network port="5801" port-auto-increment="true">
            <hz:join>
               <hz:multicast enabled="true"
                     multicast-group="224.2.2.5"
                     multicast-port="54327"/>
               <hz:tcp-ip enabled="false">
                  <hz:members></hz:members>
               </hz:tcp-ip>
            </hz:join>
            <hz:interfaces enabled="false">
               <hz:interface>192.168.1.*</hz:interface>
            </hz:interfaces>
         </hz:network>
      </hz:config>
   </hz:hazelcast>

   <bean id="webframework.cluster.clusterservice" class="org.alfresco.web.site.ClusterTopicService" init-method="init">
      <property name="hazelcastInstance" ref="webframework.cluster.slingshot" />
      <property name="hazelcastTopicName"><value>slingshot-topic</value></property>
   </bean>
   -->

</beans>


I obviously uncommented the whole section and configured it properly for the Share Clustering. The above content is only the default/sample content, nothing more.

Once configured, I restarted Alfresco but it failed with the following messages:

24-Aug-2019 14:35:12.974 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
24-Aug-2019 14:35:12.974 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
24-Aug-2019 14:35:12.988 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor [/opt/tomcat/conf/Catalina/localhost/share.xml]
Aug 24, 2019 2:35:15 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Aug 24, 2019 2:35:15 PM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
Aug 24, 2019 2:35:15 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
2019-08-23 14:35:16,052  WARN  [factory.xml.XmlBeanDefinitionReader] [localhost-startStop-1] Ignored XML validation warning
 org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; schema_reference.4: Failed to read schema document 'https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd', because 1) could not find the document; 2) the document could not be read; 3) the root element of the document is not <xsd:schema>.
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.warning(ErrorHandlerWrapper.java:100)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:392)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:306)
	at java.xml/com.sun.org.apache.xerces.internal.impl.xs.traversers.XSDHandler.reportSchemaErr(XSDHandler.java:4218)
  ... 69 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
	... 89 more
...
2019-08-23 14:35:16,067  ERROR [web.context.ContextLoader] [localhost-startStop-1] Context initialization failed
 org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from relative location [surf-config.xml]
Offending resource: class path resource [web-application-config.xml]; nested exception is org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from URL location [classpath*:alfresco/web-extension/*-context.xml]
Offending resource: class path resource [surf-config.xml]; nested exception is org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
  ... 33 more
Caused by: org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from URL location [classpath*:alfresco/web-extension/*-context.xml]
Offending resource: class path resource [surf-config.xml]; nested exception is org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
	... 42 more
Caused by: org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException: Line 18 in XML document from file [/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml] is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:397)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:335)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:303)
	... 44 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 18; columnNumber: 92; cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hz:topic'.
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:204)
	at java.xml/com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:135)
	at java.xml/com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:396)
	... 64 more
...
24-Aug-2019 14:35:16.196 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
24-Aug-2019 14:35:16.198 SEVERE [localhost-startStop-1] org.apache.catalina.core.StandardContext.startInternal Context [/share] startup failed due to previous errors
Aug 24, 2019 2:35:16 PM org.apache.catalina.core.ApplicationContext log
...


As you can see above, the message is pretty clear: there is a problem within the file “/opt/tomcat/shared/classes/alfresco/web-extension/custom-slingshot-application-context.xml” which is causing Share to fail to start properly. The first warning message points you directly to the issue: “Failed to read schema document ‘https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd’”.

After checking the content of the sample file and comparing it with a working one, I found out what was wrong. To solve this specific issue, you can simply replace “https://hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd” with “http://www.hazelcast.com/schema/spring/hazelcast-spring-2.4.xsd”. Please note the two differences in the URL:

  • Switch from “https” to “http”
  • Switch from “hazelcast.com” to “www.hazelcast.com”


The issue was actually caused by the fact that this installation was completely offline, with no access to the internet. Because of that, Spring wasn’t able to fetch the XSD file to validate the definitions in the context file. The solution is therefore to switch the URL to http with www.hazelcast.com, so that Spring’s internal resolution can recognise it and use the local copy of the file for the validation instead of looking for it online.

As mentioned previously, I never faced this issue before for two main reasons:

  • I usually don’t use the sample files provided by Alfresco; I always prefer to build my own
  • I mainly install Alfresco on servers which have internet access (outgoing communications allowed)


Once the URL is corrected, Alfresco Share is able to start and the Clustering is configured properly:

24-Aug-2019 14:37:22.558 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
24-Aug-2019 14:37:22.558 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
24-Aug-2019 14:37:22.573 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDescriptor Deploying configuration descriptor [/opt/tomcat/conf/Catalina/localhost/share.xml]
Aug 24, 2019 2:37:24 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Aug 24, 2019 2:37:25 PM org.apache.catalina.core.ApplicationContext log
INFO: No Spring WebApplicationInitializer types detected on classpath
Aug 24, 2019 2:37:25 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n1.domain' to address(es): [10.10.10.10]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Resolving domain name 'share_n2.domain' to address(es): [127.0.0.1, 10.10.10.11]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [share_n1.domain/10.10.10.10, share_n2.domain/10.10.10.11, share_n2.domain/127.0.0.1]
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Prefer IPv4 stack is true.
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.AddressPicker
INFO: Picked Address[share_n2.domain]:5801, using socket ServerSocket[addr=/0:0:0:0:0:0:0:0,localport=5801], bind any local is true
Aug 24, 2019 2:37:28 PM com.hazelcast.system
INFO: [share_n2.domain]:5801 [slingshot] Hazelcast Community Edition 2.4 (20121017) starting at Address[share_n2.domain]:5801
Aug 24, 2019 2:37:28 PM com.hazelcast.system
INFO: [share_n2.domain]:5801 [slingshot] Copyright (C) 2008-2012 Hazelcast.com
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n2.domain]:5801 [slingshot] Address[share_n2.domain]:5801 is STARTING
Aug 24, 2019 2:37:28 PM com.hazelcast.impl.TcpIpJoiner
INFO: [share_n2.domain]:5801 [slingshot] Connecting to possible member: Address[share_n1.domain]:5801
Aug 24, 2019 2:37:28 PM com.hazelcast.nio.ConnectionManager
INFO: [share_n2.domain]:5801 [slingshot] 54991 accepted socket connection from share_n1.domain/10.10.10.10:5801
Aug 24, 2019 2:37:29 PM com.hazelcast.impl.Node
INFO: [share_n2.domain]:5801 [slingshot] ** setting master address to Address[share_n1.domain]:5801
Aug 24, 2019 2:37:35 PM com.hazelcast.cluster.ClusterManager
INFO: [share_n2.domain]:5801 [slingshot]

Members [2] {
	Member [share_n1.domain]:5801
	Member [share_n2.domain]:5801 this
}

Aug 24, 2019 2:37:37 PM com.hazelcast.impl.LifecycleServiceImpl
INFO: [share_n2.domain]:5801 [slingshot] Address[share_n2.domain]:5801 is STARTED
2019-08-23 14:37:37,664  INFO  [web.site.ClusterTopicService] [localhost-startStop-1] Init complete for Hazelcast cluster - listening on topic: share_hz_test
...


This article, Alfresco – Share Clustering fail with ‘Ignored XML validation warning’, appeared first on the dbi services blog.

Rittman Mead at Oracle OpenWorld 2019

Rittman Mead Consulting - Fri, 2019-08-23 07:52

Oracle OpenWorld is coming soon! 16th-20th September at the Moscone Center, San Francisco. It's Oracle's biggest conference, and I'll represent Rittman Mead there with the talk "Become a Data Scientist", exploring how Oracle Analytics Cloud can speed any analyst's path to data science. If you are an analyst looking to take your first steps in data science, or a manager trying to understand how to optimize your business analytics workforce, look no further: this presentation is your kickstarter!


For an introduction to the topic, have a look at my blog post series: episodes I, II and III.

If you're at OOW2019 and you see me around, don't hesitate to stop me! I’d be pleased to speak with you about OAC, Analytics, ML, and more important topics like food or wine as well!

Categories: BI & Warehousing

Optimizer Tricks 1

Jonathan Lewis - Fri, 2019-08-23 06:39

I’ve got a number of examples of clever little tricks the optimizer can do to transform your SQL before starting in on the arithmetic of optimisation. I was prompted to publish this one by a recent thread on ODC. It’s worth taking note of these tricks when you spot one, as background knowledge of what’s possible makes it much easier to interpret and trouble-shoot from execution plans. I’ve labelled this one “#1” since I may publish a few more examples in the future, and then I’ll have to catalogue them – but I’m not making any promises about that.

Here’s a table definition, and a query that’s hinted to use an index on that table.


rem
rem     Script:         optimizer_tricks_01.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2019
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0
rem             11.2.0.4
rem

create table t1 (
        v1      varchar2(10),
        v2      varchar2(10),
        v3      varchar2(10),
        padding varchar2(100)
);

create index t1_i1 on t1(v1, v2, v3);


explain plan for
select
        /*+ index(t1 (v1, v2, v3)) */
        padding 
from 
        t1
where
        v1 = 'ABC'
and     nvl(v3,'ORA$BASE') = 'SET2'
;

select * from table(dbms_xplan.display);

The query uses the first and third columns of the index, but wraps the third column in an nvl() function. Because of the hint the optimizer will generate a plan with an index range scan, but the question is – what will the Predicate Information tell us about Oracle’s use of my two predicates:


Plan hash value: 3320414027

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |     1 |    66 |     0   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    66 |     0   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |     1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V1"='ABC')
       filter(NVL("V3",'ORA$BASE')='SET2')

The nvl() test is used during the index range scan (from memory I think much older versions of Oracle would have postponed the predicate test until they had accessed the table itself). This means Oracle will do a range scan over the whole section of the index where v1 = ‘ABC’, testing every index entry it finds against the nvl() predicate.

But what happens if we modify column v3 to be NOT NULL? (“alter table t1 modify v3 not null;”) Here’s the new plan:


Plan hash value: 3320414027

---------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |     1 |    66 |     0   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    66 |     0   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | T1_I1 |     1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V1"='ABC' AND "V3"='SET2')
       filter("V3"='SET2')


The optimizer will decide that with the NOT NULL status of the column the nvl() function can be eliminated and the predicate can be replaced with a simple column comparison. At this point the v3 predicate can now be used to reduce the number of index entries that need to be examined by using a type of skip-scan/iterator approach, but Oracle still has to test the predicate against the index entries it walks through – so the predicate still appears as a filter predicate as well.

You might notice, by the way, that the Plan hash value does not change as the predicate use changes – even though the change in use of predicates could make a huge difference to the performance. (As indicated in the comments at the top of the script, I’ve run this model against 11.2.0.4 – which is the version used in the ODC thread – and 19.3.0.0: the behaviour is the same in both versions, and the Plan hash value doesn’t change from version to version.)

Footnote

The reason why I decided to publish this note is that the original thread on the ODC forums reported the following contradictory details – an index definition and the optimizer’s use of that index as shown in the predicate section of the plan:


Index column name      Column position
---------------------- ----------------
FLEX_VALUE_SET_ID      1
PARENT_FLEX_VALUE      2
RANGE_ATTRIBUTE        3
CHILD_FLEX_VALUE_LOW   4
CHILD_FLEX_VALUE_HIGH  5
ZD_EDITION_NAME        6

---------------------------------------------------------------------------
|* 17 |      INDEX RANGE SCAN             | FND_FLEX_VALUE_NORM_HIER_U1   |
---------------------------------------------------------------------------
  17 - access("FLEX_VALUE_SET_ID"=:B1 AND NVL("ZD_EDITION_NAME",'ORA$BASE')='SET2')  
       filter((NVL("ZD_EDITION_NAME",'ORA$BASE')='SET2'  ..... lots more bits of filter predicate.

Since the expression nvl(zd_edition_name, ‘ORA$BASE’) = ‘SET2’ appears as an access predicate and a filter predicate it must surely be a column in the index. So either this isn’t the definition of the index being used or, somehow, there’s a trick that allows zd_edition_name to appear as a column name in the index when it really means nvl(zd_edition_name,’ORA$BASE’) at run-time. (And if there is I want to know what it is – edition-based redefinition and tricks with virtual columns spring to mind, but I avoid thinking about complicated explanations when a simpler one might be available.)
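
As a purely speculative sketch of the virtual-column idea (all the names here – t2, zd_edition_vc, t2_i1 – are invented, and this doesn’t reproduce the plain column name reported in the index definition; it simply shows how an indexed nvl() expression can surface as an access predicate):

create table t2 (
        flex_value_set_id       number,
        zd_edition_name         varchar2(30),
        padding                 varchar2(100)
);

-- a virtual column wrapping the nvl() expression ...
alter table t2 add (
        zd_edition_vc generated always as (nvl(zd_edition_name,'ORA$BASE')) virtual
);

-- ... included in an index:
create index t2_i1 on t2(flex_value_set_id, zd_edition_vc);

-- a query written against the raw expression should now be able to use the
-- index, with the nvl() expression appearing in the access predicates:
explain plan for
select  padding
from    t2
where   flex_value_set_id = 1
and     nvl(zd_edition_name,'ORA$BASE') = 'SET2';

select * from table(dbms_xplan.display);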


sql_patch

Jonathan Lewis - Wed, 2019-08-21 10:49

This note is a short follow-up to a note I wrote some time ago about validating foreign key constraints where I examined the type of SQL Oracle generates internally to do the validation between parent and child tables.  In that article I suggested (before testing) that you could create an SQL patch for the generated SQL to over-ride the plan taken by Oracle – a plan dictated to some extent by hints (including a “deprecated” ordered hint) embedded in the code. I did say that the strategy might not work for SQL optimised by SYS, but it turned out that it did.

Here’s a little script I ran to test a few variations on the theme:


declare
        v1      varchar2(128);
begin
        v1 :=   dbms_sqldiag.create_sql_patch(
                        sql_id  => 'g2z10tbxyz6b0',
                        name    => 'validate_fk',
                        hint_text => 'ignore_optim_embedded_hints'
--                      hint_text => 'parallel(a@sel$1 8)'      -- worked
--                      hint_text => 'parallel(8)'              -- worked
--                      hint_text => q'{opt_param('_fast_full_scan_enabled' 'false')}'  -- worked
                );
        dbms_output.put_line(v1);
end;
/

I’ve tested this on 12.2.0.1 and 19.3.0.0, but for earlier versions of Oracle, and depending on what patches you’ve applied, you will need to modify the code.

The SQL_ID represents the query for my specific tables, of course, so you will have to do a test run to find the query and SQL_ID for the validation you want to do. This is what the statement for my parent/child pair looked like (cosmetically adjusted):

select /*+ all_rows ordered dynamic_sampling(2) */ 
        A.rowid, :1, :2, :3
from
        "TEST_USER"."CHILD" A , 
        "TEST_USER"."PARENT" B 
where
        ("A"."OBJECT_ID" is not null) 
and     ("B"."OBJECT_ID"(+) = "A"."OBJECT_ID")
and     ("B"."OBJECT_ID" is null)
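
If you need to find the SQL_ID of the statement for your own parent/child pair, one option is a quick search of v$sql just after running the validation – a sketch only, since the literal has to match whatever text your version of Oracle generates:

select  sql_id, sql_text
from    v$sql
where   sql_text like 'select /*+ all_rows ordered dynamic_sampling(2) */%';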

The patch that the script creates simply tells Oracle to ignore the embedded hints (in particular I don’t want that ordered hint), but I’ve left a few other options in the text, commenting them out.

Without the patch I got the following plan:


Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  NESTED LOOPS ANTI (cr=399 pr=279 pw=0 time=47801 us starts=1 cost=70 size=22000 card=1000)
    100000     100000     100000   INDEX FAST FULL SCAN CHI_FK_PAR (cr=250 pr=247 pw=0 time=19943 us starts=1 cost=32 size=1700000 card=100000)(object id 73191)
     10000      10000      10000   INDEX UNIQUE SCAN PAR_PK (cr=149 pr=32 pw=0 time=3968 us starts=10000 cost=0 size=49995 card=9999)(object id 73189)

Rerunning the validation test with the patch in place I got the following plan – clearly the patch had had an effect.

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  HASH JOIN RIGHT ANTI (cr=246 pr=242 pw=0 time=96212 us starts=1 cost=39 size=22000 card=1000)
     10000      10000      10000   INDEX FAST FULL SCAN PAR_PK (cr=24 pr=23 pw=0 time=1599 us starts=1 cost=4 size=50000 card=10000)(object id 73235)
    100000     100000     100000   INDEX FAST FULL SCAN CHI_FK_PAR (cr=222 pr=219 pw=0 time=27553 us starts=1 cost=32 size=1700000 card=100000)(object id 73237)
(object id 73229)

Don’t worry too much about the fact that in my tiny example, with a very new, nicely structured data set, the original plan was a little faster. In a production environment, creating a hash table from the parent keys and probing it with the child keys may reduce the CPU usage and random I/O quite dramatically.

Bear in mind that the best possible plan may depend on many factors, such as the number of child rows per parent, the degree to which the parent and child keys arrive in sorted (or random) order, and then you have to remember that Oracle gets a little clever with the original anti-join (note that there are only 10,000 probes for 100,000 child rows – there’s an effect similar to the scalar subquery caching going on there), so trying to patch the plan the same way for every parent/child pair may not be the best strategy.

If you want to drop the patch after playing around with this example a call to execute dbms_sqldiag.drop_sql_patch(name=>’validate_fk’) will suffice.
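
In other words:

begin
        dbms_sqldiag.drop_sql_patch(name => 'validate_fk');
end;
/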


Useful Linux commands for an Oracle DBA

Yann Neuhaus - Wed, 2019-08-21 09:00
Introduction

Oracle & Linux is a great duet: very powerful, very scriptable. Here are several commands that make my life easier. These tools seem to be widespread on most Linux distributions.

watch with diff

It has been my favorite tool for a long time. watch can repeat a command indefinitely until you stop it with Ctrl+C, and it’s even more useful with the --diff parameter: all the differences since the last run are highlighted. For example, if you want to monitor a running backup, try this:

watch -n 60 --diff 'sqlplus -s /nolog @check_backup; echo ; du -hs /backup'

The check_backup.sql being:


conn / as sysdba
set feedback off
set lines 150
set pages 100
col status for a30
alter session set NLS_DATE_FORMAT="DD/MM-HH24:MI:SS";
select  start_time "Start",
        round(input_bytes/1024/1024,1) "Source MB",
        round(output_bytes/1024/1024,1) "Backup MB",
        input_type "Type",
        status "Status",
        round(elapsed_seconds/60,1) "Min",
        round(compression_ratio,1) "Ratio"
from    v$rman_backup_job_details
where   start_time >= SYSDATE-1
order by 1 desc;
exit;

Every minute (60 seconds), this checks the RMAN backup views for the amount of data already backed up, and the amount of data in your backup folder.

Very convenient to keep an eye on things without actually repeating the commands.

Truncate a logfile in one simple command

Oracle generates a lot of logfiles, some of which can reach several GB and fill up your filesystem. How do you quickly empty a big logfile without removing it? Simply use the true command:

true > listener.log

Run a SQL script on all the running databases

Do you need to check something on every database running on your system, or perhaps make the same change to all of these databases? A single line will do the job:

for a in `ps -ef | grep pmon | grep -v grep | awk '{print $8}' | cut -c 10- | sort`; do . oraenv <<< $a; sqlplus -s / as sysdba @my_script.sql >> output.log; done

Don’t forget to put an exit at the end of your SQL script my_script.sql. Running this script through Ansible will extend the scope even further and save hours of work.

Copy a folder to another server

scp is fine for copying a single file or multiple files inside a folder, but copying a folder recursively to a remote server with scp is more complicated: you would normally need to build a tarfile for that purpose. A clever solution is to use tar without creating any archive on the source server, piping it instead to the destination server. Very useful and efficient, with just one line:

tar cf - source_folder | ssh oracle@192.168.50.167 "cd destination_folder_for_source_folder; tar xf -"

Of course, you will need +rwx on destination_folder_for_source_folder for the oracle user on 192.168.50.167.

Check the network speed – because you need to check

As an Oracle DBA you probably have to deal with performance: not a problem, it’s part of your job. But are you sure your database system is running at full network speed? You probably didn’t check that, but low network speed could be the root cause of some performance issues. This concerns copper-based networks.

Today’s servers handle 10Gb/s Ethernet speeds but can also work at 1Gb/s, depending on the network behind the servers. You should be aware that you can still find 100Mb/s network speeds, for example if the network port of the switch attached to your server has been limited for some reason (needed for the server connected to this port before yours, for example). While 1Gb/s is probably enough for most databases, 100Mb/s is clearly inadequate, and most recent servers will not even handle 100Mb/s correctly. Your Oracle environment may work, but don’t expect a high level of performance, as your databases will have to wait for the network to send packets. Don’t forget that 1Gb/s gives you about 100-120MBytes/s in real conditions, while 100Mb/s only allows 10-12MBytes/s: the “Fast Ethernet” of the 90’s…

Checking the network speed is easy, with ethtool.

[root@oda-x6-2 ~]# ethtool btbond1
Settings for btbond1:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 1000Mb/s <= Network speed is OK
Duplex: Full
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

In the case of a network bonding interface, please also check the real interfaces associated with the bond; all the network interfaces belonging to the bond need to have the same network speed:

[root@oda-x6-2 ~]# ethtool em1
Settings for em1:
Supported ports: [ TP ]
Supported link modes:  100baseT/Full   <= This network interface physically supports 100Mb/s
                       1000baseT/Full  <= also 1Gb/s
                       10000baseT/Full <= and 10Gb/s
Supported pause frame use: Symmetric
Supports auto-negotiation: Yes
Advertised link modes: 100baseT/Full
1000baseT/Full
10000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Speed: 1000Mb/s <= Network speed is 1Gb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: external
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes <= This interface is connected to a switch

Conclusion

Hope this helps!

This article, Useful Linux commands for an Oracle DBA, appeared first on the dbi services blog.

Join View

Jonathan Lewis - Tue, 2019-08-20 06:39

It’s strange how one thing leads to another when you’re trying to check some silly little detail. This morning I wanted to find a note I’d written about the merge command and “stable sets”, and got to a draft about updatable join views that I’d started in 2016 in response to a question on OTN (as it was at the time), which finally led me to a model I’d written in 2008 showing that the manuals were wrong.

Since the manual – even the 19c manual – is still wrong regarding the “Delete Rule” for updatable (modifiable) join views I thought I’d quickly finish off the draft and post the 2008 script. Here’s what the manual says about deleting from join views (my emphasis on “exactly”):

Rows from a join view can be deleted as long as there is exactly one key-preserved table in the join. The key preserved table can be repeated in the FROM clause. If the view is defined with the WITH CHECK OPTION clause and the key preserved table is repeated, then the rows cannot be deleted from the view.

But here’s a simple piece of code to model a delete from a join view that breaks the rule:


rem
rem     Script:         delete_join.sql 
rem     Dated:          Dec 2008
rem     Author:         J P Lewis
rem

create table source
as
select level n1
from dual
connect by level <= 10
/ 
 
create table search
as
select level n1
from dual
connect by level <= 10
/ 

alter table source modify n1 not null;
alter table search modify n1 not null;

create unique index search_idx on search(n1);
-- create unique index source_idx on source(n1)

I’ve set up a “source” and a “search” table with 10 rows each and the option for creating unique indexes on each table for a column that’s declared non-null. Initially, though, I’ve only created the index on search to see what happens when I run a couple of “join view” deletes using “ANSI” syntax.

prompt  ===============================
prompt  Source referenced first in ANSI
prompt  ===============================

delete from (select * from source s join search s1 on s.n1 = s1.n1);
select count(1) source_count from source;
select count(1) search_count from search;
rollback;
 
prompt  ===============================
prompt  Search referenced first in ANSI
prompt  ===============================

delete from (select * from search s join source s1 on s.n1 = s1.n1);
select count(1) source_count from source;
select count(1) search_count from search;
rollback;

With just one of the two unique indexes in place the order of the tables in the inline view makes no difference to the outcome. Thanks to the unique index on search any row in the inline view corresponds to exactly one row in the source table, while a single row in the search table could end up appearing in many rows in the view – so the delete implicitly has to operate as “delete from source”. So both deletes will result in the source_count being zero, and the search_count remaining at 10.

If we now repeat the experiment but create BOTH unique indexes, both source and search will be key-preserved in the join. According to the manual the delete should produce some sort of error. In fact the delete works in both cases – but the order that the tables appear makes a difference. When source is the first table in the in-line view the source_count drops to zero and the search_count stays at 10; when search is the first table in the in-line view the search_count drops to zero and the source_count stays at 10.

I wouldn’t call this totally unreasonable – but it’s something you need to know if you’re going to use the method, and something you need to document very carefully in case someone editing your code at a later date (or deciding that they could add a unique index) doesn’t realise the significance of the table order.
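
If you want to see how Oracle has classified the columns before trusting a join view with DML, one option is to create a named view and query user_updatable_columns. Here’s a sketch reusing the source/search tables above (the view name v_join is invented for the illustration):

create or replace view v_join as
select  s.n1  source_n1,
        s1.n1 search_n1
from    source s, search s1
where   s.n1 = s1.n1;

select  column_name, updatable, insertable, deletable
from    user_updatable_columns
where   table_name = 'V_JOIN';

With both unique indexes in place I would expect columns from both sides to be reported as modifiable – which is exactly the ambiguity described above.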

This does lead on to another important test – is it the order that the tables appear in the from clause that matters, or the order they appear in the join order that Oracle uses to optimise the query? (We hope – and expect – that it’s the join order as written, not the join order as optimised, otherwise the effect of the delete could change from day to day as the optimizer chose different execution plans!) To confirm my expectation I switched to traditional Oracle syntax with hints (still with unique indexes on both tables), writing a query with search as the first table in the from clause, but hinting the inline view to vary the optimised join order.


prompt  ============================================
prompt  Source hinted as leading table in join order 
prompt  ============================================

delete from (
        select 
                /*+ leading(s1, s) */
                * 
        from 
                search s,
                source s1 
        where
                s.n1 = s1.n1
        )
;

select count(1) source_count from source; 
select count(1) search_count from search;
rollback;

prompt  ============================================
prompt  Search hinted as leading table in join order 
prompt  ============================================

delete from (
        select 
                /*+ leading(s, s1) */
                * 
        from 
                search s,
                source s1 
        where
                s.n1 = s1.n1
        )
;

select count(1) source_count from source; 
select count(1) search_count from search;
rollback;

In both cases the rows were deleted from search (the first table in the from clause). And, to answer the question you should be asking, I did check the execution plans to make sure that the hints had been effective:


============================================
Source hinted as leading table in join order
============================================

------------------------------------------------------------------
| Id  | Operation           | Name       | Rows  | Bytes | Cost  |
------------------------------------------------------------------
|   0 | DELETE STATEMENT    |            |    10 |    60 |     1 |
|   1 |  DELETE             | SEARCH     |       |       |       |
|   2 |   NESTED LOOPS      |            |    10 |    60 |     1 |
|   3 |    INDEX FULL SCAN  | SOURCE_IDX |    10 |    30 |     1 |
|*  4 |    INDEX UNIQUE SCAN| SEARCH_IDX |     1 |     3 |       |
------------------------------------------------------------------

============================================
Search hinted as leading table in join order
============================================

------------------------------------------------------------------
| Id  | Operation           | Name       | Rows  | Bytes | Cost  |
------------------------------------------------------------------
|   0 | DELETE STATEMENT    |            |    10 |    60 |     1 |
|   1 |  DELETE             | SEARCH     |       |       |       |
|   2 |   NESTED LOOPS      |            |    10 |    60 |     1 |
|   3 |    INDEX FULL SCAN  | SEARCH_IDX |    10 |    30 |     1 |
|*  4 |    INDEX UNIQUE SCAN| SOURCE_IDX |     1 |     3 |       |
------------------------------------------------------------------

Summary

Using updatable join views to handle deletes can be very efficient but the manual’s statement of the “Delete Rule” is incorrect. It is possible to have several key-preserved tables in the view that you’re using, and if that’s the case you need to play safe and ensure that the table you want to delete from is the first table in the from clause. This means taking steps to eliminate the risk of someone editing some code at a later date without realising the importance of the table order.


Upgrading from OpenLeap to SLES15

Yann Neuhaus - Mon, 2019-08-19 13:58

Sometimes business plans change and you may need to move your OpenLeap 15 server to the supported version, SUSE Linux Enterprise Server 15. The upgrade has become really easy with version 15, and it can be performed online, so your server does not need to be offline during the upgrade.

So let’s have a look at the upgrade.

First of all, you need a SUSE subscription; we will help you with this, just send us a message.
As soon as you have it, you can go on with the upgrade.

Let’s start by checking the current version running on the server.

openleap:~ $ cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.0"
ID="opensuse-leap"
ID_LIKE="suse opensuse"
VERSION_ID="15.0"
PRETTY_NAME="openSUSE Leap 15.0"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.0"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"

Now we can install SUSEConnect, so we can register the system in the next step.

openleap:~ $ zypper in SUSEConnect
Retrieving repository 'openSUSE-Leap-15.0-Update' metadata ..............................................................................................................................................................[done]
Building repository 'openSUSE-Leap-15.0-Update' cache ...................................................................................................................................................................[done]
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 3 NEW packages are going to be installed:
  SUSEConnect rollback-helper zypper-migration-plugin

3 new packages to install.
Overall download size: 138.9 KiB. Already cached: 0 B. After the operation, additional 213.9 KiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package SUSEConnect-0.3.17-lp150.2.14.1.x86_64                                                                                                                                 (1/3), 100.9 KiB (176.3 KiB unpacked)
Retrieving: SUSEConnect-0.3.17-lp150.2.14.1.x86_64.rpm ..................................................................................................................................................................[done]
Retrieving package rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch                                                                                                             (2/3),  22.6 KiB ( 19.9 KiB unpacked)
Retrieving: rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch.rpm ..................................................................................................................................[done (7.9 KiB/s)]
Retrieving package zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch                                                                                                       (3/3),  15.5 KiB ( 17.6 KiB unpacked)
Retrieving: zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch.rpm ..............................................................................................................................[done (253 B/s)]
Checking for file conflicts: ............................................................................................................................................................................................[done]
(1/3) Installing: SUSEConnect-0.3.17-lp150.2.14.1.x86_64 ................................................................................................................................................................[done]
(2/3) Installing: rollback-helper-1.0+git20181218.5394d6e-lp150.3.3.1.noarch ............................................................................................................................................[done]
(3/3) Installing: zypper-migration-plugin-0.11.1520597355.bcf74ad-lp150.1.1.noarch ......................................................................................................................................[done]
openleap:~ # 

Register your system with the SUSE Customer Center, so you get full access to the repositories. This step is mandatory; otherwise it’s not possible to upgrade.

openleap:~ $ SUSEConnect -r REGISTRATION_CODE -p SLES/15/x86_64
Registering system to SUSE Customer Center

Announcing system to https://scc.suse.com ...

Activating SLES 15 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

Now check for the available extensions and the commands to activate them using SUSEConnect:

openleap:~ $ SUSEConnect --list-extensions
AVAILABLE EXTENSIONS AND MODULES

    Basesystem Module 15 x86_64
    Activate with: SUSEConnect -p sle-module-basesystem/15/x86_64

        Containers Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-containers/15/x86_64

        Desktop Applications Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-desktop-applications/15/x86_64

            Development Tools Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-development-tools/15/x86_64

            SUSE Linux Enterprise Workstation Extension 15 x86_64
            Activate with: SUSEConnect -p sle-we/15/x86_64 -r ADDITIONAL REGCODE

        SUSE Cloud Application Platform Tools Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-cap-tools/15/x86_64

        SUSE Linux Enterprise Live Patching 15 x86_64
        Activate with: SUSEConnect -p sle-module-live-patching/15/x86_64 -r ADDITIONAL REGCODE

        SUSE Package Hub 15 x86_64
        Activate with: SUSEConnect -p PackageHub/15/x86_64

        Server Applications Module 15 x86_64
        Activate with: SUSEConnect -p sle-module-server-applications/15/x86_64

            Legacy Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-legacy/15/x86_64

            Public Cloud Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-public-cloud/15/x86_64

            SUSE Linux Enterprise High Availability Extension 15 x86_64
            Activate with: SUSEConnect -p sle-ha/15/x86_64 -r ADDITIONAL REGCODE

            Web and Scripting Module 15 x86_64
            Activate with: SUSEConnect -p sle-module-web-scripting/15/x86_64


REMARKS

(Not available) The module/extension is not enabled on your RMT/SMT
(Activated)     The module/extension is activated on your system

MORE INFORMATION

You can find more information about available modules here:
https://www.suse.com/products/server/features/modules.html

In case you need more modules, you can now add any module you want. Please keep in mind that the extensions require a separate subscription. For my needs, the base module is enough.

openleap:~ $ SUSEConnect -p sle-module-basesystem/15/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-basesystem 15 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

The next step is the upgrade itself. As the output is quite huge, I have put in some [***] placeholders.

openleap:~ $ zypper dup --force-resolution
Warning: You are about to do a distribution upgrade with all enabled repositories. Make sure these repositories are compatible before you continue. See 'man zypper' for more information about this command.
Refreshing service 'Basesystem_Module_15_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_x86_64'.
Loading repository data...
Warning: Repository 'openSUSE-Leap-15.0-Update-Non-Oss' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Computing distribution upgrade...

The following 11 NEW packages are going to be installed:
  dejavu-fonts glibc-locale-base google-opensans-fonts issue-generator kernel-default-4.12.14-lp150.12.67.1 man-pages man-pages-posix release-notes-sles rpcgen yast2-vm zypper-search-packages-plugin

The following 286 packages are going to be upgraded:
  NetworkManager NetworkManager-lang PackageKit PackageKit-backend-zypp PackageKit-gstreamer-plugin PackageKit-gtk3-module PackageKit-lang aaa_base aaa_base-extras apparmor-abstractions 
[***]
[***]
  yast2-storage-ng yast2-users

The following 288 packages have no support information from their vendor:
  NetworkManager NetworkManager-lang PackageKit PackageKit-backend-zypp PackageKit-gstreamer-plugin PackageKit-gtk3-module PackageKit-lang aaa_base aaa_base-extras apparmor-abstractions apparmor-docs apparmor-parser
[***]
[***]

The following package is not supported by its vendor:
  zypper-search-packages-plugin

286 packages to upgrade, 11 new.
Overall download size: 322.4 MiB. Already cached: 0 B. After the operation, additional 343.9 MiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package issue-generator-1.6-1.1.noarch                                                                                                                                       (1/297),  28.0 KiB ( 25.6 KiB unpacked)
Retrieving: issue-generator-1.6-1.1.noarch.rpm ..........................................................................................................................................................................[done]
Retrieving package man-pages-4.16-3.3.1.noarch                                                                                                                                          (2/297),   
[***]
[***]

Executing %posttrans scripts ............................................................................................................................................................................................[done]
There are some running programs that might use files deleted by recent upgrade. You may wish to check and restart some of them. Run 'zypper ps -s' to list these programs.

Disable the openLeap repository to prevent warnings when using zypper (this is optional).

openleap:~ $ zypper lr -d
Repository priorities are without effect. All enabled repositories share the same priority.

#  | Alias                                                                       | Name                                      | Enabled | GPG Check | Refresh | Priority | Type   | URI                                                                                                                                                                                                                                                        | Service       
---+-----------------------------------------------------------------------------+-------------------------------------------+---------+-----------+---------+----------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------
 1 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Debuginfo-Pool          | SLE-Module-Basesystem15-Debuginfo-Pool    | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product_debug?8YsR5pv4h6qQr15qW8KWqXRBK0MDd9EONPOcnYjrQyXxeU4PVhIX5FRdwf5ziU1Oa8rdtuE2W4NyVotHhKeQrdvQMM9OQ3sEllMJno1VxgQEPq-1QyaCv24cSZsg2H21-d3hQqkxXD3iUKRgNTqHGtkRHHCN71yMa28   | Basesystem_Module_15_x86_64
 2 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Debuginfo-Updates       | SLE-Module-Basesystem15-Debuginfo-Updates | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Updates/SLE-Module-Basesystem/15/x86_64/update_debug?jjKAgTm0AAAAAAAAq_jTGwRAkx4zc8EQV0ANMjmrFjIoJBofX8ETJPW9qS9ojjVsnoDNK1TRGjk5t31J0Y9Bv_KRzpdYdJVmoH_gO-WaIo-dsZHiDXUm9fjYvLJcjsm0TidUzPnNkAqDAQsPZGZUUCXrek3JjRZl        | Basesystem_Module_15_x86_64
 3 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Pool                    | SLE-Module-Basesystem15-Pool              | Yes     | (r ) Yes  | No      |   99     | rpm-md | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product?MbepfbRQy5WToAHi4xjhC2KOqjwW00ax8Xj23W9iMukhhWz78BKVY5sSDHiT4nurfz1JyHJrqcqpZiJU-PdajPthp3lQx4hyu-5FzifML0ALTTvKY6XEYA7qlwbn0E6fmA_iSbMl2JOWvZDpeQUZtMlCjQI                 | Basesystem_Module_15_x86_64
 4 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Source-Pool             | SLE-Module-Basesystem15-Source-Pool       | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Module-Basesystem/15/x86_64/product_source?86sSfrO8KT3dMsapcn4ihtYRbSwy2kunffEZ6oUiH-vBC-0IkEZQPniCPn63-DeOwlX9brw3vR-BqMNjC9KiOAq0JR0aHZUcyHP5sGhjitLFGTx9zUYo3F4u0KNC3rqIq2WGq-kZEhLm1s2U-vVJHpr6x5RWmMjuBDAe | Basesystem_Module_15_x86_64
 5 | Basesystem_Module_15_x86_64:SLE-Module-Basesystem15-Updates                 | SLE-Module-Basesystem15-Updates           | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | https://updates.suse.com/SUSE/Updates/SLE-Module-Basesystem/15/x86_64/update?WzCCey-NrSLfBHonPxWuaTXt1QuGMemPZsFEhtMfDC_jKtn5XUsqbdI8JZ9D6YNveeYrthpKY2uLTOIB_vtbMQsQUblAr2dU4D59yIBIjZv1l91CLeZD2z61oLPc7ad0UkZjl9R_e6bSNAGP8oz94Fp5                      | Basesystem_Module_15_x86_64
 6 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Debuginfo-Pool    | SLE-Product-SLES15-Debuginfo-Pool         | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product_debug?xtsT1GSwugZaHDGElBaTczgwJS79hgJDOy_tkzInodgbplBttQlatgP5rI0SnLQqLCw5WsfSqBIyN_tnMVZn4ZLJ3S3ENBDiZsYhg0vGZf7ILMix03bcXoHEKlzAYRntcEIx877RvS7DDHAAR4cj1V5gzcu6               | SUSE_Linux_Enterprise_Server_15_x86_64
 7 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Debuginfo-Updates | SLE-Product-SLES15-Debuginfo-Updates      | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Updates/SLE-Product-SLES/15/x86_64/update_debug?tkJ9rVV33hinQtEBnPYH_5D8OCs1ZtB4WEQFAShIaq1yN6Lwg2-W2Zu2AFALp5Jk3Oh1g1XVBqEOSPnSgACvcCIWuXr_cRfirUHEwbNqIcaSwcjxGjJYdhsb97t01_X-LPT0FDiGGezP64HheC_CzdV6xA                   | SUSE_Linux_Enterprise_Server_15_x86_64
 8 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Pool              | SLE-Product-SLES15-Pool                   | Yes     | (r ) Yes  | No      |   99     | rpm-md | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product?887kGBgH3AfONFY1X3wVkuYn_5nm8sTKex06X1JSRI9gXQNqJioSBea5sAECwbVhqs510L3YRdVlVLgsavZ9D8PPplk8S_oEvhWEQdS-jfFH9dTKcukF09RkjliWQkcaNHkFzY4uQWbHzXJYekkn                             | SUSE_Linux_Enterprise_Server_15_x86_64
 9 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Source-Pool       | SLE-Product-SLES15-Source-Pool            | No      | ----      | ----    |   99     | NONE   | https://updates.suse.com/SUSE/Products/SLE-Product-SLES/15/x86_64/product_source?XhlzrvfoPp1qTZqv1hErqkUwBGOoZMYY4RAS-c78IKoacswAmOXTemuxa8ZiAFfopgedlQfewbcC7_gxUERoKGdlcW7E4WaqpcuSDYh-xlJr2SG9-4OuxPDToPfZ1CgvDDZIAlqIyXDKGcwvl3EjALH9msDNHg            | SUSE_Linux_Enterprise_Server_15_x86_64
10 | SUSE_Linux_Enterprise_Server_15_x86_64:SLE-Product-SLES15-Updates           | SLE-Product-SLES15-Updates                | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | https://updates.suse.com/SUSE/Updates/SLE-Product-SLES/15/x86_64/update?j0Qh2SfH11scgFCBmZI3R9B4GMJWAh5l6C0P7_jtUle_3dAATzJ2wwwo3SR_dOpn4bBYL4wSkD9bMuCRJlzcmWSkeh1W06Rz8Jsq1KysLODXqUtsBgeE5Tju1Pf-XTpNJF1RQMRRRmb_Tj8RPA                                 | SUSE_Linux_Enterprise_Server_15_x86_64
11 | openSUSE-Leap-15.0-1                                                        | openSUSE-Leap-15.0-1                      | No      | ----      | ----    |   99     | rpm-md | cd:///?devices=/dev/disk/by-id/ata-VBOX_CD-ROM_VB0-01f003f6                                                                                                                                                                                                |               
12 | repo-debug                                                                  | openSUSE-Leap-15.0-Debug                  | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.0/repo/oss/                                                                                                                                                                                        |               
13 | repo-debug-non-oss                                                          | openSUSE-Leap-15.0-Debug-Non-Oss          | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                    |               
14 | repo-debug-update                                                           | openSUSE-Leap-15.0-Update-Debug           | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.0/oss/                                                                                                                                                                                                   |               
15 | repo-debug-update-non-oss                                                   | openSUSE-Leap-15.0-Update-Debug-Non-Oss   | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/debug/update/leap/15.0/non-oss/                                                                                                                                                                                               |               
16 | repo-non-oss                                                                | openSUSE-Leap-15.0-Non-Oss                | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                          |               
17 | repo-oss                                                                    | openSUSE-Leap-15.0-Oss                    | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/distribution/leap/15.0/repo/oss/                                                                                                                                                                                              |               
18 | repo-source                                                                 | openSUSE-Leap-15.0-Source                 | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.0/repo/oss/                                                                                                                                                                                       |               
19 | repo-source-non-oss                                                         | openSUSE-Leap-15.0-Source-Non-Oss         | No      | ----      | ----    |   99     | NONE   | http://download.opensuse.org/source/distribution/leap/15.0/repo/non-oss/                                                                                                                                                                                   |               
20 | repo-update                                                                 | openSUSE-Leap-15.0-Update                 | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.0/oss/                                                                                                                                                                                                         |               
21 | repo-update-non-oss                                                         | openSUSE-Leap-15.0-Update-Non-Oss         | Yes     | (r ) Yes  | Yes     |   99     | rpm-md | http://download.opensuse.org/update/leap/15.0/non-oss/                                                                                                                                                                                                     |               
openleap:~ # zypper mr -d 21
Repository 'repo-update-non-oss' has been successfully disabled.

Check for orphaned packages and remove them. In my case the one-liner below fails with “Too few arguments” because there are no orphaned packages left to pass to zypper rm, as the check afterwards confirms.

openleap:~ $ zypper rm $(zypper --no-refresh packages --orphaned | gawk '{print $5}'  | tail -n +5 )
Too few arguments.
At least one package name is required.
remove (rm) [OPTIONS] <capability> ...

Remove packages with specified capabilities. A capability is NAME[.ARCH][OP<VERSION>], where OP is
one of <, <=, =, >=, >.

  Command options:

-r, --repo <alias|#|URI>    Load only the specified repository.
-t, --type <type>           Type of package (package, patch, pattern, product).
-n, --name                  Select packages by plain name, not by capability.
-C, --capability            Select packages by capability.
-u, --clean-deps            Automatically remove unneeded dependencies.
-U, --no-clean-deps         No automatic removal of unneeded dependencies.
-D, --dry-run               Test the removal, do not actually remove.
    --details               Show the detailed installation summary.
-y, --no-confirm            Don't require user interaction. Alias for the --non-interactive global
                            option.

  Solver options:

    --debug-solver          Create a solver test case for debugging.
    --force-resolution      Force the solver to find a solution (even an aggressive one) rather than
                            asking.
    --no-force-resolution   Do not force the solver to find solution, let it ask.

openleap:~ $ zypper --no-refresh packages --orphaned
Loading repository data...
Reading installed packages...
No packages found.
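A more defensive variant of that clean-up one-liner (a sketch, assuming the column layout of the zypper table output stays the same) only calls zypper rm when the orphan list is non-empty:

orphans=$(zypper --no-refresh packages --orphaned | gawk '{print $5}' | tail -n +5)
# zypper rm requires at least one package name, so guard the call
if [ -n "$orphans" ]; then
    zypper rm $orphans
fi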

My whole migration lasted about 30 minutes. But then, this is really a small server.

And in the end you have to reboot anyway.

openleap:~ $ systemctl reboot

Let’s check whether we are really running a SLES15 server now.

openleap:~ # cat /etc/os-release
NAME="SLES"
VERSION="15"
VERSION_ID="15"
PRETTY_NAME="SUSE Linux Enterprise Server 15"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15"

Looks good! The system is running SLES15 now, and you can enjoy the full support and services of SUSE.

Cet article Upgrading from OpenLeap to SLES15 est apparu en premier sur Blog dbi services.

Linux: Configuring hosts per ssh in parallel: pssh

Dietrich Schroff - Fri, 2019-08-16 15:11
If you have to set up several hosts in such a way that a number of commands has to be executed on each node, then you should consider pssh:

http://manpages.ubuntu.com/manpages/bionic/man1/parallel-ssh.1.html

The installation is straight forward:
root@zerberus:~# apt-get install pssh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  btrfs-tools geoip-database-extra libcryptui0a libjs-openlayers seahorse-daemon
Use 'apt autoremove' to remove them.
The following NEW packages will be installed:
  pssh
0 upgraded, 1 newly installed, 0 to remove and 8 not upgraded.
Need to get 29.0 kB of archives.
After this operation, 135 kB of additional disk space will be used.
Get:1 http://de.archive.ubuntu.com/ubuntu bionic/universe amd64 pssh all 2.3.1-1 [29.0 kB]
Fetched 29.0 kB in 0s (71.0 kB/s)
Selecting previously unselected package pssh.
(Reading database ... 488993 files and directories currently installed.)
Preparing to unpack .../archives/pssh_2.3.1-1_all.deb ...
Unpacking pssh (2.3.1-1) ...
Setting up pssh (2.3.1-1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
But executing it on Ubuntu is a little bit tricky:
If you want to do a test, you have to know that Ubuntu has renamed the binary to parallel-ssh; pssh is not known on the command line:

schroff@zerberus:~$ parallel-ssh -i -H "localhost 127.0.0.1" ls -l .bashrc
[1] 00:04:48 [SUCCESS] 127.0.0.1
-rw-r--r-- 1 schroff schroff 3815 Jul 14  2017 .bashrc
[2] 00:04:48 [SUCCESS] localhost
-rw-r--r-- 1 schroff schroff 3815 Jul 14  2017 .bashrc
Please also note the syntax highlighting of the output, which I found very helpful.
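For more than a couple of hosts you would typically keep the host list in a file instead of passing -H. A quick sketch (the hostnames are placeholders; key-based SSH authentication to each host is assumed):

cat > hosts.txt <<'EOF'
server1.example.com
server2.example.com
EOF
# -h reads the host file, -l sets the remote user, -i prints each host's output inline
parallel-ssh -i -h hosts.txt -l root "uname -r"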

“No Primary Key item has been defined for form region”

Jeff Kemp - Fri, 2019-08-16 02:30

Otherwise reported as “WWV_FLOW_FORM_REGION.NO_PRIMARY_KEY_ITEM” or merely “Internal error”, this bug caused me to waste hours of investigation, twice, because a simple Google search was not returning any results. This post is merely to remedy that situation.

On an APEX 19.1 page with a new Form region combined with an Interactive Grid region, when the user makes a change to a record and Submits the page, they get this error.

This is caused by bug 29612553 – FORM REGION DML PROCESS THROWS INTERNAL ERROR WHEN INTERACTIVE GRID DML PROCESS IS ON THE SAME PAGE and fixed by APEX patch 29369869.

Basic Replication -- 1 : Introduction

Hemant K Chitale - Thu, 2019-08-15 23:24
Basic Replication, starting with Read Only Snapshots, has been available in Oracle since V7. This was done with the "CREATE SNAPSHOT" command.

In 8i, the term was changed from "Snapshot" to "Materialized View"  and the "CREATE MATERIALIZED VIEW" command was introduced, while "CREATE SNAPSHOT" was still supported.

Just as CREATE SNAPSHOT is still available in 19c,  DBMS_SNAPSHOT.REFRESH is also available.
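Purely as a hedged illustration (object names invented, running against the classic SCOTT schema):

sqlplus -s scott/tiger <<'EOF'
-- the V7-era syntax, still accepted in 19c:
create snapshot emp_snap as select * from emp;

-- the equivalent syntax since 8i:
create materialized view emp_mv
        refresh force on demand
        as select * from emp;

-- the legacy refresh call, also still available:
execute dbms_snapshot.refresh('EMP_MV')
EOF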

Not that I recommend that you use CREATE SNAPSHOT and DBMS_SNAPSHOT anymore.  DBAs and Developers should have been using CREATE MATERIALIZED VIEW and DBMS_REFRESH since 8i.

In the next few blog posts (this will be a very short series) I will explore Basic Replication.  Let me know if you want to see it in 11.2 and 12c as well.



Categories: DBA Blogs

Find Docker Container IP Address?

DBASolved - Wed, 2019-08-14 11:38

This is just a simple post for later reference, in case I need it …

In setting up some docker containers for testing Oracle GoldenGate, I needed to find the IP address of the container where my database was running (I keep my database in a separate container in order not to rebuild it every time).

To find the address of my database container, I had to use the docker “inspect” command. This command returns low-level information on Docker objects.

The syntax is as follows:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name_or_id>

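For example, with a hypothetical container named oracle-db (the name and the resulting address are made up):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' oracle-db
172.17.0.2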
Enjoy!!!!

Categories: DBA Blogs

Identifying the TNS_ADMIN for an deployment in GoldenGate Microservices

DBASolved - Wed, 2019-08-14 10:30

Setting up network routing/access with Oracle GoldenGate Microservices (12.3 and later) can be an interesting experience. As many in the Oracle space know, you simply have to set TNS_ADMIN to point to the location where your sqlnet.ora and tnsnames.ora files are located. This would normally look like this:


export TNS_ADMIN=${ORACLE_HOME}/network/admin


or


export TNS_ADMIN=${ORA_CLIENT_HOME}/network/admin

 

These examples still work for Oracle GoldenGate Classic; however, when we start looking at this through the lens of Microservices, it changes a bit. Within the Oracle GoldenGate Microservices architecture the TNS_ADMIN environment variable has to be set “per deployment”. Depending on the number of deployments within the architecture, it is possible to have 1+N TNS_ADMIN variables.

As an illustration, it would look something like this:

As you can see, this is specific to the Microservices architecture and to how network routing is set up for individual deployments.

Setting TNS_ADMIN

How do you set the TNS_ADMIN environment variable for each deployment? It is quite simple when you are building a deployment using the Oracle GoldenGate Configuration Assistant (OGGCA). Prior to running OGGCA, you can set the TNS_ADMIN variable at the OS level and OGGCA will pick it up for that run and the configuration of that specific deployment.

Optionally, you don’t have to set it at the OS level. During the OGGCA walkthrough, you will be able to set the variable manually. OGGCA will not move past the environment variables step until it is provided.

Changing TNS_ADMIN

After building a deployment, you may want to change the location of your network-related files. This can be done either from the HTML5 web page for the deployment or from the REST API.

To change TNS_ADMIN from the HTML5 pages within Oracle GoldenGate Microservices, you need to start at the ServiceManager Overview page. At the bottom of this page, there is a section called “Deployments”.

Then select the deployment you want to work with. After clicking on the deployment name, you should now be on the “Deployment Information” page. This page has two tabs at the top. The first tab is related to details of the deployment. The second tab is related to configurations for the deployment.

The second tab, Configurations, is where you can set or change the environment variables for the deployment. In this case, we want to modify the TNS_ADMIN environment variable.

 

To the right of the variable in the “Actions” column, click on the pencil icon. This will allow you to edit the environment variable. Change it to the new location and save it. You may need to restart the deployment (hint: that step is on the ServiceManager Overview page).

At this point, you should now be able to change the location of your TNS_ADMIN variable. This is also handy for Oracle GoldenGate Microservices on Marketplace as well … just saying.

Using REST API

The same process can be done quickly using the REST API. The sample code below is only a sample and has not been tested. Use at your own risk!

curl -X PATCH \
  https://<ip_address>/services/v2/deployments/alpha \
  -H 'cache-control: no-cache' \
  -d '{
    "oggHome":"/opt/app/oracle/product/19.1.0/oggcore_1",
    "oggEtcHome":"/opt/app/oracle/gg_deployments/Atlanta/etc",
    "oggVarHome":"/opt/app/oracle/gg_deployments/Atlanta/var",
    "environment":{
        "tns_admin":"/opt/app/oracle/product/18.1.0/network/admin"
    },
    "status":"restart"
}'
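A hypothetical follow-up call (equally untested; the credentials and the -k flag for a self-signed certificate are assumptions) to read the deployment back and confirm the change:

curl -s -k -u oggadmin:password -X GET \
  https://<ip_address>/services/v2/deployments/alpha \
  -H 'cache-control: no-cache'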

Enjoy!!!

Categories: DBA Blogs

gather_system_stats

Jonathan Lewis - Wed, 2019-08-14 08:20

What happens when you execute dbms_stats.gather_system_stats() with the ‘Exadata’ option?
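The call in question is presumably the obvious one-liner, run with suitable privileges (shown here wrapped in a sqlplus heredoc):

sqlplus -s / as sysdba <<'EOF'
execute dbms_stats.gather_system_stats('EXADATA')
EOF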

Here’s what my system stats look like (12.2.0.1 test results) after doing so. (The code to generate the two different versions is at the end of the note).


System Stats
============
Status: COMPLETED
Timed: 13-Aug-2019 15:00:00 - 13-Aug-2019 15:00:00
--------------------------------------------------
CPUSPEED        :
CPUSPEEDNW      :          918
IOSEEKTIM       :           10
IOTFRSPEED      :      204,800
MAXTHR          :
MBRC            :          128
MREADTIM        :
SLAVETHR        :
SREADTIM        :

PL/SQL procedure successfully completed.

MBRC       :          128
MREADTIM   :
SREADTIM   :
CPUSPEED   :
CPUSPEEDNW :          918
IOSEEKTIM  :           10
IOTFRSPEED :      204,800
MAXTHR     :
SLAVETHR   :

PL/SQL procedure successfully completed.

All the code does is set the MBRC, IOSEEKTIM, and IOTFRSPEED to fixed values and the only real gather is the CPUSPEEDNW. The parameters showing blanks are deliberately set null by the procedure – before I called the gather_system_stats() every parameter had a value. You could also check the SQL trace file (with bind capture enabled) to see the statements that deliberately set those parameters to null if you want more proof.

What are the consequences of this call (assuming you haven’t also done something with the calibrate_io() procedure)? Essentially Oracle now has information that says a single (8KB) block read request will take marginally over 10 milliseconds, and a multiblock read request of 1MB will take just over 15 milliseconds: in other words “tablescans are great, don’t use indexes unless they’re really precisely targeted”. To give you a quantitative feel for the numbers: given the choice between doing a tablescan of 1GB to pick 1,500 randomly scattered rows and using a perfect index the optimizer would choose the index.

To explain the time calculations: Oracle has set an I/O seek time of 10 ms, and a transfer rate of 204,800 bytes per ms (200 MB/s), with the guideline that a “typical” multiblock read is going to achieve 128 blocks. So the optimizer believes a single block read will take 10 + 8192/204800 ms = 10.04ms, while a multiblock read request for 1MB will take 10 + 1048576/204800 ms = 15.12 ms.

It’s also important to note that Oracle will use the 128 MBRC value in its calculation of the cost of the tablescan – even if you’ve set the db_file_multiblock_read_count parameter for the session or system to something smaller; and if you have set the db_file_multiblock_read_count that’s the maximum size of multiblock read that the run-time engine will use.

Code

Here are the two procedures I used to report the values above. You will only need the privilege to execute the dbms_stats package for the second one, but you’ll need the privilege to access the SYS table aux_stats$ to use the first. The benefit of the first one is that it can’t go out of date as versions change.


rem
rem     Script:         get_system_stats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2002
rem
rem     Last tested
rem             18.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

set linesize 180
set trimspool on
set pagesize 60
set serveroutput on

spool get_system_stats

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        for r1 in (
                select  rownum rn, pname
                from    sys.aux_stats$
                where   sname = 'SYSSTATS_MAIN'
        ) loop
                dbms_stats.get_system_stats(m_status, m_start, m_stop, r1.pname, m_value);
                if r1.rn = 1 then
                        dbms_output.put_line('System Stats');
                        dbms_output.put_line('============');
                        dbms_output.put_line('Status: ' || m_status);
                        dbms_output.put_line(
                                'Timed: ' ||
                                to_char(m_start,'dd-Mon-yyyy hh24:mi:ss') ||
                                ' - ' ||
                                to_char(m_stop ,'dd-Mon-yyyy hh24:mi:ss')
                        );
                        dbms_output.put_line('--------------------------------------------------');
                end if;
                dbms_output.put_line(rpad(r1.pname,15) ||  ' : ' || to_char(m_value,'999,999,999'));
        end loop;
end;
/

declare
        m_value         number;
        m_status        varchar2(64);
        m_start         date;
        m_stop          date;
begin
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MBRC', m_value);
        dbms_output.put_line('MBRC       : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MREADTIM', m_value);
        dbms_output.put_line('MREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SREADTIM', m_value);
        dbms_output.put_line('SREADTIM   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEED', m_value);
        dbms_output.put_line('CPUSPEED   : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'CPUSPEEDNW', m_value);
        dbms_output.put_line('CPUSPEEDNW : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOSEEKTIM', m_value);
        dbms_output.put_line('IOSEEKTIM  : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'IOTFRSPEED', m_value);
        dbms_output.put_line('IOTFRSPEED : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'MAXTHR', m_value);
        dbms_output.put_line('MAXTHR     : ' || to_char(m_value,'999,999,999'));
        dbms_stats.get_system_stats(m_status, m_start, m_stop, 'SLAVETHR', m_value);
        dbms_output.put_line('SLAVETHR   : ' || to_char(m_value,'999,999,999'));
end;
/

spool off

Menu Popup with Declarative List

Jeff Kemp - Tue, 2019-08-13 23:04

In the past when I’ve needed to add half a dozen or more buttons to a page, I’ve sometimes encased them in a Collapsible region so that the user can slide them out of the way instead of clogging up the screen. Recently however I’ve started (sparingly) using a Menu Popup, as per this tutorial. The issue I have with this method, however, is that the menu items are defined in a shared component (a List) which means it’s not defined on a per-page basis.

Some of the actions simply need to do a Submit on the page, which is simple enough: set the URL Target to something like:

javascript:apex.page.submit('DO_SOMETHING');

In other cases, the action needs to do something more specific to the page, e.g. show a region:

apex.theme.openRegion("popupQuestion")

Or the action might need to navigate to another page, passing parameters based on specific items on the page. This means the list, defined in Shared Components, now has hardcoded elements that are only useful for that one page; more to the point, they are defined outside of the page – I’d rather that everything specific to a page is defined within that page’s definition.

The approach I’m using now is to use a custom trigger. Each list item has its URL Target set to something like:

javascript:apex.event.trigger(document,'menuAction','OPEN_POPUP');

The third parameter is set to a unique code that the page can use to identify which menu option was chosen. This parameter will be passed to this.data in the custom trigger’s event handler.

On the page, I have a Dynamic Action with the following attributes:

  • Event: Custom
  • Custom Event: menuAction
  • Selection Type: JavaScript Expression
  • JavaScript Expression: document
  • True Action: Execute JavaScript Code, e.g.:
switch(this.data) {
  case 'OPEN_POPUP':
    apex.theme.openRegion("popupQuestion");
    break;

  default:
    apex.page.submit({request:this.data,showWait:true});
}

Note that to do a simple Submit on the page, all I need to do is set the request on the third parameter of the menu item’s URL. If I want to do something slightly different for a particular request, I can put an extra “case” in the JavaScript code to handle it.

The benefit of this approach is that this trigger becomes the jumping-off point for all such menu actions for this page. In theory I could re-use the same List on multiple pages (if the items in the list are generic enough) but have different behaviour occur for each menu item specific to each page. The only challenge with this approach might be if you needed some conditions on each menu item, e.g. so they are shown or hidden in specific circumstances. If the condition for a menu item references a particular page item the List will no longer be generic and re-usable. For this reason, I usually still use a separate List for each menu for each page.

Perhaps in a future release of APEX we will gain the ability to define a List on a Page instead of in Shared Components. In the meantime, if you are interested in all the details on this method (including a solution for implementing a redirect to another page, or to open a modal page), refer to this tip.

Multi-table

Jonathan Lewis - Tue, 2019-08-13 07:34

Here’s a problem (and I think it should be called a bug) that I first came across about 6 years ago, then forgot for a few years until it reappeared some time last year and then again a few days ago. The problem has been around for years (getting on for decades), and the first mention of it that I’ve found is MoS Bug 2891576, created in 2003, referring back to Oracle 9.2.0.1. The problem still exists in Oracle 19.2 (tested on LiveSQL).

Here’s the problem: assume you have a pair of tables (call them parent and child) with a referential integrity constraint connecting them. If the constraint is enabled and not deferred then the following code may fail, and if you’re really unlucky it may only fail on rare random occasions:


insert all
        into parent({list of parent columns}) values({list of source columns})
        into child ({list of child columns})  values({list of source columns})
select
        {list of columns}
from    {source}
;

The surprising Oracle error is “ORA-02291: integrity constraint ({owner.constraint_name}) violated – parent key not found”, and the reason is simple (and documented in MoS note 265826.1 Multi-table Insert Can Cause ORA-02291: Integrity Constraint Violated for Master-Detail tables): the order in which the insert operations take place is “indeterminate” so that child rows may be inserted before their parent rows (and for the multi-table insert the constraint checks are not postponed until the statement completes as they are, for instance, for updates to a table with a self-referencing RI constraint).

Two possible workarounds are suggested in Doc ID 265826.1

  • drop the foreign key constraint and recreate it after the load,
  • make the foreign key constraint deferrable and defer it before the insert so that it is checked only on commit (or following an explicit call to make it immediate)

The second option would probably be preferable to the first but it’s still not a very nice thing to do and could leave your database temporarily exposed to errors that are hard to clean up. There are some details of the implementation of deferrable constraints in the comments of this note on index rebuilds if you’re interested in the technicalities.
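As a minimal sketch of the second workaround, reusing the table and constraint names from the simple example further down (connection details assumed):

sqlplus -s test_user/test_pass <<'EOF'
-- a constraint cannot be altered to deferrable; it has to be recreated
alter table child drop constraint fk1;
alter table child add constraint fk1
        foreign key (id) references parent (id)
        deferrable initially immediate;

set constraint fk1 deferred;

insert all
        into parent ( id ) values ( id )
        into child  ( id ) values ( id )
select 100 id from dual;

set constraint fk1 immediate;   -- the FK is checked here, after both inserts
commit;
EOF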

A further option which seems to work is to create a (null) “before row insert” trigger on the parent table – this appears to force the parent into a pattern of single row inserts and the table order of insertion then seems to behave. Of course you do pay the price of an increase in the volume of undo and redo. On the down-side, Bug 2891576 MULTITABLE INSERT FAILS WITH ORA-02291 WHEN FK & TRIGGER ARE PRESENT can also be found on MoS, leading 265826.1 to suggest disabling triggers if their correctness is in some way dependent on the order in which your tables are populated. That dependency threat should be irrelevant if the trigger is a “do nothing” trigger. Sadly there’s a final note that I should mention: Bug 16133798 : INSERT ALL FAILS WITH ORA-2291 reports the issue as “Closed: not a bug”.

There is a very simple example in the original bug note demonstrating the problem, but it didn’t work on the version of Oracle where I first tested it, so I’ve modified it slightly to get it working on a fairly standard install. (I suspect the original was executed on a database with a 4KB block size.)


drop table child purge;
drop table parent purge;

create table parent (id number primary key);

create table child  (id number, v1 varchar2(4000),v2 varchar2(3920));
alter table child add constraint fk1 foreign key (id) references parent (id);
 
create or replace trigger par_bri
before insert on parent
for each row
begin
        null;
end;
/

insert all
        into parent ( id ) values ( id )
        into child  ( id ) values ( id )
select  100 id from dual
;

In the model above, and using an 8KB block in ASSM, the code as is resulted in an ORA-02291 error. With the varchar2(3920) changed to varchar2(3919) the insert succeeded, and when I kept the varchar2(3920) but created the trigger the insert also succeeded.

Fiddling around in various ways, and taking some slightly more realistic table definitions, here’s an initial setup to demonstrate the “randomness” of the failure (tested on various versions up to 18.3.0.0):


rem
rem     Script:         insert_all_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem             10.2.0.5
rem              9.2.0.8
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        lpad(rownum,10,'0')             small_vc,
        lpad(rownum,100,'0')            medium_vc,
        lpad(rownum,200,'0')            big_vc
from
        generator       v1
;

create table parent(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint par_pk primary key(id)
)
segment creation immediate
;

create table child(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint chi_pk primary key(id),
        constraint chi_fk_par foreign key (id) references parent(id)
)
segment creation immediate
;

create table child2(
        id              number,
        small_vc        varchar2(10),
        medium_vc       varchar2(100),
        big_vc          varchar2(200),
        constraint ch2_pk primary key(id),
        constraint ch2_fk_par foreign key (id) references parent(id)
)
segment creation immediate
;

I’ve created a “source” table t1, and three “target” tables – parent, child and child2. Table parent has a declared primary key and both child and child2 have a referential integrity constraint to parent. I’m going to do a multi-table insert selecting from t1 and spreading different columns across the three tables.

Historical note: When I first saw the “insert all” option of multi-table inserts I was delighted with the idea that it would let me query a de-normalised source data set just once and insert the data into a normalised set of tables in a single statement – so (a) this is a realistic test from my perspective and (b) it has come as a terrible disappointment to discover that I should have been concerned about referential integrity constraints (luckily very few systems had them at the time I last used this feature in this way).

The multi-table insert I’ve done is as follows:


insert all
        into parent(id, small_vc)  values(id, small_vc)
        into child (id, medium_vc) values(id, medium_vc)
        into child2(id, medium_vc) values(id, medium_vc)
--      into child2(id, big_vc)    values(id, big_vc)
select
        id, small_vc, medium_vc, big_vc
from
        t1
where
        rownum <= &m_rows_to_insert
;

You’ll notice that I’ve allowed user input to dictate the number of rows selected for insertion and I’ve also allowed for an edit to change the column that gets copied from t1 to child2. Although it’s not visible in the create table statements, I’ve also tested the effect of varying the size of the big_vc column in t1.

Starting with the CTAS and multi-table insert as shown the insert runs to completion if I select 75 rows from t1, but if I select 76 rows the insert fails with “ORA-02291: integrity constraint (TEST_USER.CHI_FK_PAR) violated – parent key not found”. If I change the order of the inserts into child and child2 the violated constraint is TEST_USER.CH2_FK_PAR – so Oracle appears to be applying the inserts in the order they appear in the statement in some circumstances.

Go back to the original order of inserts for child and child2, but use the big_vc option for child2 instead of the medium_vc. In this case the insert succeeds for 39 rows selected from t1, but fails reporting constraint TEST_USER.CH2_FK_PAR when selecting 40 rows. Change the CTAS to define big_vc as lpad(rownum,195) and the insert succeeds with 40 rows selected and fails on 41 (still on the CH2_FK_PAR constraint); change big_vc to lpad(rownum,190) and the insert succeeds on 41 rows selected, fails on 42.

My hypothesis on what’s happening is this: each table in the multitable insert list gets a buffer of 8KB (maybe matching one Oracle block if we were to try different block sizes). As the statement executes the buffers will fill and, critically, when the buffer is deemed to be full (or full enough) it is applied to the table – so if a child buffer fills before the parent buffer is full you can get child rows inserted before their parent, and it looks like Oracle isn’t postponing foreign key checking to the end of statement execution as it does with other DML – it’s checking as each array is inserted.

Of course there’s a special boundary condition, and that’s why the very first test with 75 rows succeeds – neither of the child arrays gets filled before we reach the end of the t1 selection, so Oracle safely inserts the arrays for parent, child and child2 in that order. The same boundary condition occurs in the first of every other pair of tests that I’ve commented on.

When we select 76 rows from t1 in the first test the child and child2 arrays hit their limit and Oracle attempts to insert the child rows first – but the parent buffer is far from full so its rows are not inserted and the attempted insert results in the ORA-02291 error. Doing a bit of rough arithmetic the insert was for 76 rows totalling something like: 2 bytes for the id, plus a length byte, plus 100 bytes for the medium_vc plus a length byte, totalling 76 * 104 = 7,904 bytes.

When we switch to using the big_vc for child2 the first array to fill is the child2 array, and we have 3 sets of results as we shorten big_vc:

  • 40 * ((1 + 2) + (1 + 200)) = 8160
  • 41 * ((1 + 2) + (1 + 195)) = 8159
  • 42 * ((1 + 2) + (1 + 190)) = 8148

While I’m fairly confident that my “8KB array” hypothesis is in the right ballpark I know I’ve still got some gaps to explain – I don’t like the fact that I’ve got a break point around 7,900 in the first example and something much closer to 8,192 in the other three examples.  I could try to get extra precision by running up a lot more examples with different numbers and lengths of columns to get a better idea of where the error is appearing – but I’m sufficiently confident that the idea is about right so I can’t persuade myself to make the effort to refine it. An example of an alternative algorithm (which is actually a better fit though a little unexpected) is to assume that the normal 5 byte row overhead (column count, lock byte, flags and 2-byte row directory entry) has been included in the array sizing code, and the insert takes place at the point incoming row breaks, or just touches, the limit. In this case our 4 results would suggest the following figures:

  • 75 * 109 = 8175
  • 39 * 209 = 8151
  • 40 * 204 = 8160
  • 41 * 199 = 8159

With these numbers we can see 8KB (8,192 bytes) very clearly, and appreciate that the one extra row would take us over the critical limit, hence triggering the insert and making the array space free to hold the row.

Bottom Line

If you’re using the multi-table “insert all” syntax and have referential integrity declared between the various target tables then you almost certainly need to ensure that the foreign key constraints are declared as deferrable and then deferred as the insert takes place, otherwise you may get random (and, until now, surprisingly inexplicable) ORA-02291 foreign key errors.

A possible alternative workaround is to declare a “do nothing” before row insert trigger on the top-level table, as this seems to switch the process into single-row inserts on the top-most parent, forcing the other array inserts to take place with their parent row, using small array sizes and protecting against the foreign key error. This is not an officially sanctioned workaround, though, and may only have worked by accident in the examples I tried.

It is possible, if the 8KB working array hypothesis is correct, that you will never see the ORA-02291 if the volume of data (number of rows * row length) for the child rows of any given parent row is always less than the size of the parent row – but that might be a fairly risky thing to hope for in a production system. It might be much better to pay the overhead of deferred foreign key checking than to have a rare, unpredictable error appear.

 

Patching or reimaging your ODA?

Yann Neuhaus - Mon, 2019-08-12 08:30
Introduction

One of the key features of the ODA (Oracle Database Appliance) is the ability to patch the entire stack every three months, the goal being to keep all the components up-to-date. Most customers won’t patch that often, but one patch a year is quite a good average. But when the time for patching comes, so does the anxiety for the DBA. And it’s totally justified.

Why ODA patching can eventually be a nightmare?

First of all, patching all the products is a complex operation. ODA is not a real appliance: it’s classic hardware composed of parts from various vendors, with nearly standard software, including Linux, Grid Infrastructure, ASM, ACFS and database engines. And all these products need to be patched together. If you were allowed to patch the ODA components separately, it could last quite a long time. Yes, Oracle provides a single patch for the ODA, but it’s just a bundle of dozens of patches. It’s easier to apply and the patches are certified together, but it’s still a complex operation to bring all the modules to the target patch level. This is why you can encounter multiple problems. For example, if you installed your own RPMs onto the system (the patch may be unable to update the OS), if you lack some free space (the patch may be unable to complete), if your databases have a specific configuration, or possibly if you discover that there is a bug in the patch related to the version you come from and linked to your ODA model.

Also, some of the patches are not cumulative, meaning that you cannot directly upgrade to the latest version. You sometimes need to apply 4 or 5 patches to upgrade, making the patching even more uncertain.

Starting from these facts, you may think about reimaging, and you’re probably right.

What are the advantages and drawbacks of reimaging?

For sure, reimaging has a lot of advantages:

  • Guarantee of success (you start from scratch)
  • Cleaner ODA (no question about that)
  • Make sure you are able to do the reimage (in case you really need it)
  • Make sure your documentation is good (and this is the only way to validate it!)
  • Avoid later problems left behind by a patch that did not work correctly

These are the drawbacks:

  • Longer than a single patch successfully applied on the first try (who knows)
  • Need to erase everything and restart as if it were a new ODA
  • You need to know how your ODA was installed and configured (not so simple if someone did the job for you)
  • You probably need another ODA with Data Guard or DBVisit to limit the downtime
Can reimaging be quicker than patching?

Patching lasts about 3 hours if everything is OK. But that’s only for one patch, and only if everything is OK. From my patching experience, you should probably plan 1 day for the first ODA you patch.

Reimaging also lasts about 3 hours (more or less depending on your ODA version). But that’s only for reinstalling the software, without any database. You will need to restore all your databases and redo everything you did at the first deployment: copy your scripts, set up your additional software, restore your crontabs and your specific configuration, put back monitoring, and so on.

So, reimaging is probably longer, but you are quite sure to redeploy your ODA in a known time. This is a strong argument. “It will take 8 hours” is always better than “it would take between 3 and 8 hours. Or maybe more. If I succeed”.

How to proceed with patches?

If you need to patch regularly, try to apply the patch on an ODA you can live without. If something goes wrong, you can decide to reimage very quickly instead of opening an SR on MOS. Please don’t get stuck because a patch is not applying correctly; it’s a waste of time.

If you patch every year, consider redeploying instead of patching. It’s probably more work but it will take the same amount of time, with a success guarantee (you will love that point). Also, you will ensure that you are able to reimage completely. Reimaging is sometimes also needed if you move your ODA to another datacenter with a network change, so you could have to reimage even for reasons other than patching.

How to make sure that you are able to reimage?

This is the key: be able to reimage

Rules to follow:

  • restrict the access on your ODA to only people concerned about the appliance
  • document every change you make on the server, even a simple chmod
  • never use the GUI to deploy the appliance: deploy your ODA using odacli and save the deployment json file outside of the ODA
  • never use the GUI to create the databases: create the database with odacli and backup the used parameters in the documentation
  • use scripts to configure your databases (avoid one-shot changes)
  • install other products only if necessary: do you really need a backup tool on ODA? NFS backups are great and easy to configure without installing anything
  • install RPMs manually only from Oracle ISOs, and only if needed
  • do everything from the command line and avoid using vi. Text editors prevent you from being able to repeat the exact same operation. For example, replace vi /etc/fstab by echo "srv-nfs:/orabackups /backup nfs rw,bg,hard,nolock,nointr" >> /etc/fstab
  • always consider your ODA not so critical by having the possibility to restore your database elsewhere (understand: on another ODA), or adopt Data Guard or DBVisit for all your databases that cannot afford to be down for hours (even development databases are production for developers!)
  • keep the install zipfiles corresponding to your version somewhere secured to avoid searching for them on MOS the day you need to reimage

Regarding the scripts, I always create a scripts folder in /home/oracle on ODA, and each database has 3 dedicated scripts to speed up the database recreation if needed: create_SID.sh, configure_SID.sql and tbs_SID.sql. The first script is for odacli database creation, the first SQL script is for specific configuration (controlfile multiplexing for example, disabling the recycle bin or enabling archive_lag_target, etc.), and the second SQL script is for tablespace creation. The goal is to be able to recreate the database even for a Datapump-based restore. Make sure to back up these scripts somewhere else.
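A minimal sketch of that convention (the script names follow the pattern above; the odacli options are illustrative only and vary by ODA release, so check odacli create-database -h):

mkdir -p /home/oracle/scripts
cat > /home/oracle/scripts/create_MYDB.sh <<'EOF'
#!/bin/bash
# recreate the MYDB database exactly as at first deployment
odacli create-database -n MYDB -cl OLTP -s odb2 -r ACFS
EOF
chmod +x /home/oracle/scripts/create_MYDB.sh
# configure_MYDB.sql : post-creation settings (archive_lag_target, recycle bin, ...)
# tbs_MYDB.sql       : application tablespace creation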

A few words about RPMs: for me the best way to install additional RPMs on ODA is to download the Oracle Linux ISO corresponding to the version on your ODA (the ISO you would use if you needed to deploy a normal server), mount the ISO on your ODA and pick up only the RPMs you need from it (you can also keep these few RPMs in /home/oracle/RPMs).
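For example (a sketch; the ISO name and the chosen package are illustrative):

# mount the Oracle Linux ISO read-only and install a single RPM from it
mount -o loop,ro /tmp/OracleLinux-R6-U9-Server-x86_64-dvd.iso /mnt
cp /mnt/Packages/screen-*.rpm /home/oracle/RPMs/
yum -y localinstall /home/oracle/RPMs/screen-*.rpm
umount /mnt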

Conclusion

Reimaging should always be considered as an alternative to patching, or even as the better way. Companies that have already integrated this are happier with their ODAs, and are getting the best from these appliances.

Cet article Patching or reimaging your ODA? est apparu en premier sur Blog dbi services.

Video : Vagrant : Oracle Database Build (19c on OL8)

Tim Hall - Mon, 2019-08-12 02:18

Today’s video is an example of using Vagrant to perform an Oracle database build.

In this example I was using Oracle 19c on Oracle Linux 8. It also installs APEX 19.1, ORDS 19.2, SQLcl 19.2, with ORDS running on Tomcat 9 and OpenJDK 12.

If you’re new to Vagrant, there is an introduction video here. There’s also an article if you prefer to read that.

If you want to play around with some of my other Vagrant builds, you can find them here.

If you want to read about some of the individual pieces that make up this build, you can find them here.

The star of today’s video is Noel Portugal. It’s been far too long since I’ve seen you dude!

Cheers

Tim…

