Feed aggregator

ADWC new OCI interface

Yann Neuhaus - Sun, 2018-06-17 14:51

A few things have changed about the Autonomous Data Warehouse Cloud service recently, and I found the communication around it not so clear, so here is a short post about what I had to do to start the service again. The service has always been in the OCI data centers but was managed with the classic management interface. It has recently been migrated to the new interface.
Note that ADWC here is the name I gave to my service. It seems that the Autonomous Data Warehouse Cloud service is now referred to by the acronym ADW.

The service itself did not have any outage; the migration concerns only the interface. However, once the migration is done, you cannot use the old interface anymore. I went to the old interface with the URL I had bookmarked, tried to start the service, and got a ‘last activity START_SERVICE failed’ error message without additional detail.

You can forget the old bookmark (such as https://psm-tenant.console.oraclecloud.com/psmui/faces/paasRunner.jspx?serviceType=ADWC); you now have to use the new one (such as https://console.us-ashburn-1.oraclecloud.com/a/db/adws/ocid1.autonomousdwdatabase.oc1.iad.al-long-IAD-identifier).

So I logged in to the console at https://console.us-ashburn-1.oraclecloud.com (my service is in the Ashburn-1 region). There I provided the tenant name (which was the cloud account in the old interface); it can also be provided in the URL, as in https://console.us-ashburn-1.oraclecloud.com/?tenant=tenant. I selected oracleidentitycloudservice as the ‘identity provider’, entered my username and password, and I was on the OCI console.

From the top-left menu, I can go to Autonomous Data Warehouse. I see nothing until I choose the compartment in the ‘list scope’. The ADWC service I had created in the old interface is in the ‘tenant (root)’ compartment. There I can start the service.

The previous PSM command line interface cannot be used anymore. We need to install the OCI CLI:

$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

You will need the Tenancy ID (Tenancy OCID: ocid1.tenancy.oc1..aaaaaaaa…, which you find at the bottom of each page in the console) and the User ID (User OCID: ocid1.user.oc1..aaaaaaa…, which you find in the ‘Users’ menu). All these ‘OCIDs’ are documented in https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm
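
Once the CLI is installed, you configure it with these OCIDs and an API signing key, and you can then start the service from the command line. A minimal sketch, assuming the db autonomous-data-warehouse commands shipped with the current OCI CLI release (the OCID is abbreviated and must be replaced with your own):

$ oci setup config
$ oci db autonomous-data-warehouse start --autonomous-data-warehouse-id ocid1.autonomousdwdatabase.oc1.iad.abuwcljrb...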

If you used the REST API, note that it has changed completely. You now have to POST to something like:

/20160918/autonomousDataWarehouses/ocid1.autonomousdwdatabase.oc1.iad.abuwcljrb.../actions/start

where the OCID is the database one, which you can copy from the console.
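
For reference, this path is relative to the regional Database service endpoint and, like every OCI REST call, the request must carry an OCI signature. A hedged sketch of the full request (Ashburn endpoint; OCID abbreviated; signature fields truncated):

POST https://database.us-ashburn-1.oraclecloud.com/20160918/autonomousDataWarehouses/ocid1.autonomousdwdatabase.oc1.iad.abuwcljrb.../actions/start
Authorization: Signature version="1",keyId="<tenancy-ocid>/<user-ocid>/<key-fingerprint>",algorithm="rsa-sha256",...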

 

The article ADWC new OCI interface appeared first on Blog dbi services.

Global Temporary Table in a PDB

Hemant K Chitale - Sun, 2018-06-17 10:45
Where and how is space consumed by a Global Temporary Table created in a Pluggable Database?

In a 12c MultiTenant Database, each Pluggable Database (PDB) has its own Temporary Tablespace. So, a GTT (Global Temporary Table) in a PDB is local to the associated Temporary Tablespace.

Let me be clear.  The "Global" does *not* mean that the table is
(a) available across all PDBs   (it is restricted to that PDB alone)
(b) available to all schemas (it is restricted to the owner schema alone, unless privileges are granted to other database users as well)
(c) populated with data visible to other sessions (data in a GTT is visible only to the session that populated it)

The "global" really means that the definition is created once and available across multiple sessions, each session having a "private" copy of the data.
The "temporary" means that the data does not persist.  If the table is defined as "on commit delete rows", rows are not visible after a COMMIT is issued.  If the table is defined as "on commit preserve rows", rows remain only for the life of the session.  In either case, a TRUNCATE can also be used to purge rows.


Here, I connect to a particular PDB, create a GTT and then populate it.

$sqlplus hemant/hemant@PDBHKC

SQL*Plus: Release 12.2.0.1.0 Production on Sun Jun 17 23:06:28 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Sun Jun 17 2018 23:02:29 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create global temporary table my_gtt
2 (id_number number, object_name varchar2(128))
3 on commit preserve rows
4 /

Table created.

SQL>
SQL> select distinct sid from v$mystat;

SID
----------
36

SQL>
SQL> select serial# from v$session where sid=36;

SERIAL#
----------
4882

SQL>


Another session can see that the table exists (without any corresponding "permanent" tablespace) but cannot see any data in it.

SQL> select temporary, tablespace_name
2 from user_tables
3 where table_name = 'MY_GTT'
4 /

T TABLESPACE_NAME
- ------------------------------
Y

SQL> select count(*) from my_gtt;

COUNT(*)
----------
0


Let's look for information on the Temporary Tablespace / Segment usage (querying from the second session).

SQL> select sid, serial#, sql_id    
2 from v$session
3 where username = 'HEMANT';

SID SERIAL# SQL_ID
---------- ---------- -------------
36 4882
300 34315 739nwj7sjgaxp

SQL> select username, session_num, sql_id, tablespace, contents, segtype, con_id, sql_id_tempseg
2 from v$tempseg_usage;

USERNAME SESSION_NUM SQL_ID TABLESPA CONTENTS SEGTYPE CON_ID SQL_ID_TEMPSE
-------- ----------- ------------- -------- --------- --------- ---------- -------------
HEMANT 4882 92ac4hmu9qgw3 TEMP TEMPORARY DATA 6 3t82sphjrt73h

SQL> select sql_id, sql_text
2 from v$sql
3 where sql_id in ('92ac4hmu9qgw3','3t82sphjrt73h');

SQL_ID
-------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
92ac4hmu9qgw3
select serial# from v$session where sid=36


SQL>


So, SID 36 is the session that populated the GTT and identified its own SID (36) and SERIAL# (4882), which we can see as the user of the Temporary Segment when querying from the second session (SID 300).

What about the size of the temporary segment populated by SESSION_NUM (i.e. SERIAL#) 4882?
Again, querying from the second session.

SQL> select extents, blocks, sql_id, sql_id_tempseg 
2 from v$tempseg_usage
3 where session_num=4882;

EXTENTS BLOCKS SQL_ID SQL_ID_TEMPSE
---------- ---------- ------------- -------------
4 512 92ac4hmu9qgw3 3t82sphjrt73h

SQL>


Now, let's "grow" the GTT with more rows (and then query from the other session).

SQL> insert into my_gtt select * from my_gtt;

72638 rows created.

SQL>
SQL> l
1 select extents, blocks, sql_id, sql_id_tempseg
2 from v$tempseg_usage
3* where session_num=4882
SQL> /

EXTENTS BLOCKS SQL_ID SQL_ID_TEMPSE
---------- ---------- ------------- -------------
8 1024 gfkbdvpdb3qvf 3t82sphjrt73h

SQL> select sql_text from v$sql where sql_id = 'gfkbdvpdb3qvf';

SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
insert into my_gtt select * from my_gtt

SQL>


So, the increased space allocation in the Temporary Segment is from the growth of the GTT. Let's grow it further.

SQL> INSERT INTO MY_GTT select * from MY_GTT;

145276 rows created.

SQL> /

290552 rows created.

SQL>
SQL> select extents, blocks, sql_id, sql_id_tempseg
2 from v$tempseg_usage
3 where session_num=4882
4 /

EXTENTS BLOCKS SQL_ID SQL_ID_TEMPSE
---------- ---------- ------------- -------------
29 3712 2c3sccf0pj5g1 3t82sphjrt73h

SQL> select sql_text, executions from v$sql where sql_id = '2c3sccf0pj5g1';

SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
EXECUTIONS
----------
INSERT INTO MY_GTT select * from MY_GTT
2


SQL>


So, the growth of the GTT results in increased space allocation in the Temporary Segment.

What happens if I truncate the GTT ?

SQL> truncate table my_gtt;

Table truncated.

SQL>
SQL> select extents, blocks, sql_id, sql_id_tempseg
2 from v$tempseg_usage
3 where session_num=4882;

no rows selected

SQL>
SQL> select * from v$tempseg_usage;

no rows selected

SQL>


Temp Space is released by the TRUNCATE of the GTT.

I invite you to try this with a GTT created with ON COMMIT DELETE ROWS and see what happens before and after the COMMIT.
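
For example, a quick sketch of that experiment (the table is a copy of the same definition; the row source user_objects is illustrative):

create global temporary table my_gtt_del
(id_number number, object_name varchar2(128))
on commit delete rows
/

insert into my_gtt_del select object_id, object_name from user_objects;

-- From the second session, v$tempseg_usage now shows a temporary segment
-- for this session's SESSION_NUM.

commit;

-- From the second session, the temporary segment is gone, and in the
-- populating session the rows are gone too:

select count(*) from my_gtt_del;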

.
.
.

Categories: DBA Blogs

CDN Support in Oracle JET

Andrejus Baranovski - Sun, 2018-06-17 02:12
With the recent releases of Oracle JET, CDN support can be enabled easily in your app. By default, a JET app is set to download all JET toolkit scripts and static files from the same host where the application is hosted. You can track this easily through the network monitor; you should see files such as ojknockout.js fetched from the same host.


CDN can be enabled by changing the use property from local to cdn in path_mapping.json and restarting the app:
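
The relevant fragment of path_mapping.json looks something like this (a sketch only; the exact cdns block and the version in the prefix depend on your JET release):

{
  "use": "cdn",
  "cdns": {
    "jet": {
      "prefix": "https://static.oracle.com/cdn/jet/v5.1.0/default/js",
      "css": "https://static.oracle.com/cdn/jet/v5.1.0/default/css",
      "config": "bundles-config.js"
    }
  }
}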


After this change, you should see all JET toolkit content downloaded from the static.oracle.com host.


The benefit: you reduce the load on your host, from which only application-specific files will be downloaded, while the JET toolkit code is downloaded from the external Oracle host. The same is achievable on your own host, but JET toolkit content downloaded from the Oracle host is compressed out of the box (another benefit).

Docker: How to limit memory

Dietrich Schroff - Sat, 2018-06-16 14:46
When starting your container, you can limit its RAM usage simply by adding
-m 4M

(this limits the memory to 4 megabytes).

To check this simply run:

docker run -it -m=4M  --rm alpine /bin/ash

and on your docker machine check the following entry:

alpine:~# cat /sys/fs/cgroup/memory/docker/4ce0403caf667e7a6d446eac3820373aefafe4e73463357f680d7b38a392ba62/memory.limit_in_bytes 
4194304
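
You can also cross-check the limit from the Docker host itself (the container ID is the one shown in the cgroup path above):

$ docker inspect -f '{{.HostConfig.Memory}}' 4ce0403caf66
4194304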


5 Things That Will Definitely Make You Consider Wireframing

Nilesh Jethwa - Fri, 2018-06-15 23:08

Most web developers tend to skip the wireframing process and go straight to design. There are two reasons this happens: one, the web developer does not know what wireframing is; or two, he knows what it is but is afraid … Continue reading →

Credit: MockupTiger Wireframes

Jürgen Schuster: APEX Distinguished Community Member

Joel Kallman - Fri, 2018-06-15 18:21


I just got back from the ODTUG Kscope18 conference in Orlando, Florida where, once again, the global APEX community descended.  During the Sunday Symposium at this conference, I had the privilege of honoring Jürgen Schuster, who has been the catalyst and engine for so many positive things in this wonderful APEX community.

For the past few months, I was unsure of what words I could use to introduce this award to Jürgen, which accurately and humbly conveyed the breadth of his impact.  But the words just came to me late one evening in early May, and I used them almost verbatim during the Sunday Symposium.  I would like to share these words here, for everyone else in our global community to appreciate the impact he has had, along with the sacrifices he has personally made.

To quote Jürgen: live APEX and prosper.



ODTUG Kscope18 Conference
Sunday Symposium, June 10, 2018

The APEX Community is awesome, and it's awesome to be here and be a part of this conference. This is my 12th Kscope and I firmly believe that this is where the APEX community really got its start. Additionally, some of my greatest friends in the world are in this room, and I’m really grateful to be here.

There are really so many people who have made this community special:  from the many plug-in developers, people who record training videos, members who write blog posts and books, create presentations with technical content and share them, manage and organize meetups, organize entire conferences or technical days or user group meetings or organize tracks at Kscope!  There are those who create Web sites: builtwithapex.com, translate-apex.com, for example. There are open source sites dedicated to APEX and the Oracle Database. There are testing frameworks and security analysis tools.  And the list goes on and on.

Many of these people who have worked so hard on all of these contributions are here in this room today. But among the many people who have given of their time and talent to the APEX community, there is one person who stands out. And I have two words to describe this person: passionate and selfless.

This person has spent a lot of time and a lot of their own personal money to carry the message of APEX across the globe.  It's pretty extraordinary.

I'm sure everyone is going to know this person when I start citing some of their amazing work.

I’m sure you've seen these coveted APEX stickers before. It was the genius of this person to create a sticker for the APEX community. And instead of using some low-quality, cheap-looking, shoddy sticker, he opted for something classy, high-quality, enduring. At a personal cost of more than $1 per sticker, this person has supplied them and shipped them all over the globe to anyone who asks.  At his own personal expense.

The next challenge? This person, with the help of others in this room, created apex.world - the APEX community site - being your one-stop portal for all things related to the APEX community - Slack channels, jobs, plug-ins, news, newsletters, awards, tweets, and more.  There are more than 3,300 members today on apex.world.  It’s the central starting point for the APEX community.

How many of you know how to create your own podcast and host it on iTunes? I don't. Neither did this person. Out of his own pocket, he paid a professional to educate him and show him everything you need to know to prepare and publish on iTunes. And since that time, he has recorded, edited and published 20 episodes of the Oracle APEX Talkshow.

The contributions go on - ODTUG APEX On Air Webinars, APEX Meetup organizer, a web site he created dedicated to APEX dynamic actions, and on. And almost all of these contributions have cost him his own money, and he’s done it for no real personal gain.

Remember the two words I started with:  passionate and selfless.

The person I’m proudly referring to is Jürgen Schuster, an energetic and passionate freelance consultant from Munich, Germany. We wanted to recognize Jürgen and his many generous contributions to the community.

Please join me in congratulating Jürgen Schuster, as we proudly honor him with the first ever APEX Distinguished Community Member Award.



Sqlldr is throwing OCI.dll exception with Oracle 12.2 Instant Client

Tom Kyte - Fri, 2018-06-15 11:06
Hi, I have downloaded the Oracle 12.2 Instant Client (both SQL*Plus and Tools) on my Windows 7 64-bit system from http://www.oracle.com/technetwork/topics/winx64soft-089540.html. I have unzipped all the files to C:\oracle122, along with my tnsname...
Categories: DBA Blogs

Converting rows to columns

Tom Kyte - Fri, 2018-06-15 11:06
Hi Tom, hope you are good. I have a requirement where I need to display rows as columns. Suppose there are 2 rows with 4 columns each; then the result should display 2*4, i.e. 8 columns. Is it possible using just SQL? Thanks
Categories: DBA Blogs

Oracle Goldengate

Tom Kyte - Fri, 2018-06-15 11:06
What is the advantage of Goldengate over Streams? Oracle Goldengate has a high license cost compared to Streams. So why should an organization use Goldengate for their data replication needs and not Streams? Does Goldengate have an advantage which is wo...
Categories: DBA Blogs

What is the relationship of CPU, Memories against DB performances?

Tom Kyte - Fri, 2018-06-15 11:06
Hi Tom, I frequently get asked quite a number of times when planning a new server setup for the creation of databases: How many CPU cores should I get? How much memory should I get? Normally I'll answer them, just get the highest cores & me...
Categories: DBA Blogs

DBMS_FILE_TRANSFER.PUT_FILE multiple "source_file_name"

Tom Kyte - Fri, 2018-06-15 11:06
Hi I am using Datapump to export dump file from a database and while exporting the dumpfile, I am splitting that dumpfile into multiple files. Now I want to transfer those files to another server using DBMS_FILE_TRANSFER.PUT_FILE. I know ...
Categories: DBA Blogs

UTL_FILE.FCOPY not working in FOR LOOP <file read error>

Tom Kyte - Fri, 2018-06-15 11:06
Hi There, I have a PIPELINED function which retrieves me filenames which I feed to UTL_FILE.FCOPY like below: <code>DECLARE PROCEDURE copy_var_templates (p_var_report_name st_string) IS lkv_template_dir CONSTANT st_string := 'T...
Categories: DBA Blogs

Side-effects when working with associative array in pl/sql

Tom Kyte - Fri, 2018-06-15 11:06
I've noticed a strange side-effect when working with associative arrays in PL/SQL. Basically, it appears that when an element of the array is passed to a procedure as "in out nocopy", then after the procedure finishes, Oracle copies the possibly updated element b...
Categories: DBA Blogs

Extracting attributes from JSON documents

Tom Kyte - Fri, 2018-06-15 11:06
Hi all, I have a question about accessing a JSON array along with normal columns, like below, <code> (Reports : [( 'reportname': 'abc', 'Sort order':'abc', 'sortlabel':'name', 'columns' :[ ( 'component' : 'q_test1', ...
Categories: DBA Blogs

Get a JSON from a SQL query

Tom Kyte - Fri, 2018-06-15 11:06
Hello! Just a question. Is it possible to write a query that returns a JSON code? If yes, could you give me a brief example? Thanks!
Categories: DBA Blogs

Index-Organized Materialized View with different primary key than the master table?

Tom Kyte - Fri, 2018-06-15 11:06
Dear Oracle-Team, we need a daily snapshot of the company's personal data for our software. For that reason we want to use an index-organized materialized view (with a daily 'refresh complete'). Unfortunately there are two user id's for every empl...
Categories: DBA Blogs

Italian Core Banking Market Takes Major Leap Forward with Cabel and Oracle

Oracle Press Releases - Fri, 2018-06-15 07:00
Press Release
Italian Core Banking Market Takes Major Leap Forward with Cabel and Oracle Invest Banca S.p.A. is the first Italian bank to implement Oracle FLEXCUBE, localized and integrated by banking outsourcer Cabel Industry S.p.A.

Redwood Shores, Calif.—Jun 15, 2018

Cabel, an IT service provider for the financial services market in Italy since 1985, and Oracle Financial Services Global Business Unit, announced the availability today of Oracle FLEXCUBE for the Italian market. Oracle FLEXCUBE is a core banking solution that has been adopted by more than 600 financial institutions around the world.

Cabel and Oracle have collaborated since 2016 to localize the Oracle FLEXCUBE solution to improve the process of marketing new products and services to the Italian market, where client requirements are evolving rapidly. In recent months, the Oracle FLEXCUBE solution has been adapted to the regulations governing the Italian banking market and is now fully able to support the typical activity of the Italian banking system.

“The attitude of the banking system towards innovation is changing and at the same time there is a growing interest in the world of fintech. Invest Banca, thanks to the Oracle FLEXCUBE solution, has taken a decisive step forward. We went live with this Open Banking Platform May 7th and Oracle FLEXCUBE now allows us to easily and efficiently integrate with a series of specialized solutions already in use by our retail and institutional clients; moreover, it allows us to keep pace with ever more demanding banking regulations, such as MiFID, PSD2 and GDPR. It also facilitates our interaction and experimentation with the latest technology advances such as Robo-Advisor, artificial intelligence, data science, social trading and blockchain,” said Stefano Sardelli, Managing Director of Invest Banca.

Cabel implemented the Italian version of Oracle FLEXCUBE, making it possible to integrate it into an already live and running banking system covering other banking operations. The solution can be outsourced or used on premise.

"This is a radically innovative solution, because it is a technology that facilitates the creation of lean products and services that are independent and based on completely different and more modern logic than traditional core banking systems in Italy,” said Francesco Bosio, President of Cabel Holding S.p.A.

“Oracle’s strategy is to work with leading local partners, who bring local domain skills to our best-in-class global solutions," said Chet Kamat, Senior Vice President, Banking, Oracle Financial Services. "Cabel is an innovation-oriented company and we chose to work with Cabel knowing it could fully utilize our modern, flexible technology to respond to the changes imposed by the digital age. As a result, Italian banks will see a significant improvement in their own productivity and market offerings—and their customers will get the benefit of excellent customer experience.”

Contact Info
Stefano Cassola
Oracle
39 022 495 9032
stefano.cassola@oracle.com
Sara D’Agati
Cabel Industry S.p.A.
39 339 8610096
sara.dagati@hfilms.net
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Stefano Cassola

  • 39 022 495 9032

Sara D’Agati

  • 39 339 8610096

How to replace old Exadata storage cells with the new X7-2 storage cells, without downtime

Alejandro Vargas - Fri, 2018-06-15 03:32

Lately I have had to help various customers replace their old storage cells with the new X7-2 ones.

There are huge benefits in doing this, the X7 has 25.5TB of flash, 12 x 10TB disks and 192 GB of DDR4 Memory.

X7-2 hardware

The question my customers asked the most was: Can we do the migration from our old storage to the new X7 without downtime and without risk?

The answer was: YES!

To do this, I prepared and implemented a procedure that covers, step by step, how to migrate critical data from production databases while those databases are online, without downtime and without risk.

So far I've done two of these migrations in 2 weeks, one in Haifa and one in Istanbul.

The Haifa migration was run on a machine without customers working on it. The Istanbul migration was implemented on a critical production database under full customer load.

The customer was very happy to see how the data was transferred in a fast and safe way without affecting the online business.

This is the power of both Exadata and ASM: a migration that only a few years ago would have imposed a tremendous planning effort, and most probably required downtime, can now be run online without affecting the performance of critical workloads!

In summary, the steps of the migration include the following:

1. Attach the new cells to the Exadata and set up the ILOM with the final IP on the customer network

2. Connect via ILOM to the new storage and set up the network with the customer values

3. Upgrade the new storage servers to latest Exadata storage version

4. Customize the new servers to reflect customer preferences: mail alerts, writeback, ASR server, etc.

5. Create celldisks and griddisks to match existing diskgroups

6. Extend existing disk groups into the new cells and wait for first rebalance to complete

7. Once the first rebalance completes, drop the failgroups of the old cells and wait for the second rebalance to complete

8. Once the second rebalance completes, flush the flash cache on the old cells, drop their griddisks and celldisks, and shut down the cells

9. Check the free space available on the new cells and increase the size of the griddisks to use all of it, as required

10. On the ASM instance, resize all griddisks in the disk groups where you increased the griddisk size, and wait for the third rebalance to complete (the key commands are sketched below).
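
As an illustration of steps 6, 7 and 10, the key ASM commands look something like this (a sketch; the diskgroup, cell IP and failgroup names are illustrative, and the rebalance power should match what your workload tolerates):

-- step 6: extend the diskgroup onto the new cells (first rebalance)
alter diskgroup DATA add disk 'o/192.168.10.9/DATA_CD_*_newcell01' rebalance power 32;

-- step 7: drop the disks of an old cell by failgroup (second rebalance)
alter diskgroup DATA drop disks in failgroup OLDCELL01 rebalance power 32;

-- step 10: after growing the griddisks, resize all disks (third rebalance)
alter diskgroup DATA resize all rebalance power 32;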


Categories: DBA Blogs

ChitChat for OBIEE - Now Available as Open Source!

Rittman Mead Consulting - Fri, 2018-06-15 03:20

ChitChat is the Rittman Mead commentary tool for OBIEE. ChitChat enhances the BI experience by bridging conversational capabilities into the BI dashboard, increasing ease-of-use and seamlessly joining current workflows. From tracking the history behind analytical results to commenting on specific reports, ChitChat provides a multi-tiered platform built into the BI dashboard that creates a more collaborative and dynamic environment for discussion.

Today we're pleased to announce the release into open-source of ChitChat! You can find the github repository here: https://github.com/RittmanMead/ChitChat

Highlights of the features that ChitChat provides include:

  • Annotate - ChitChat's multi-tiered annotation capabilities allow BI users to leave comments where they belong, at the source of the conversation inside the BI ecosystem.

  • Document - ChitChat introduces the ability to include documentation inside your BI environment for when you need more than a comment. Keeping key materials inside the dashboard gives the right people access to key information without searching.

  • Share - ChitChat allows you to bring attention to important information on the dashboard using the channel or workflow manager you prefer.

  • Verified Compatibility - ChitChat has been tested against popular browsers, operating systems, and database platforms for maximum compatibility.

Getting Started

In order to use ChitChat you will need OBIEE 11.1.1.7.x, 11.1.1.9.x or 12.2.1.x.

First, download the application and unzip it to a convenient location on the OBIEE server, such as a home directory or the desktop.

See the Installation Guide for full detail on how to install ChitChat.

Database Setup

Build the required database tables using the installer:

cd /home/federico/ChitChatInstaller  
java -jar SocializeInstaller.jar -Method:BuildDatabase -DatabasePath:/app/oracle/oradata/ORCLDB/ORCLPDB1/ -JDBC:"jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1" -DatabaseUser:"sys as sysdba" -DatabasePassword:password -NewDBUserPassword:password1  

The installer will create a new user (RMREP) and the tables required for the application to operate correctly. The -DatabasePath flag tells the installer where to place the datafiles for ChitChat on your database server. -JDBC indicates which JDBC driver to use, followed by a colon and the JDBC string to connect to your database. -DatabaseUser specifies the user to access the database with. -DatabasePassword specifies the password for the user previously given. -NewDBUserPassword indicates the password for the new user (RMREP) being created.

WebLogic Data Source Setup

Add a Data Source object to WebLogic using WLST:

cd /home/federico/ChitChatInstaller/jndiInstaller  
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./create-ds.py

To use this script, modify the ds.properties file using the method of your choice. The following parameters must be updated to reflect your installation: domain.name, admin.url, admin.userName, admin.password, datasource.target, datasource.url and datasource.password.
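
A sketch of what ds.properties could contain (all values illustrative; the JDBC URL here matches the database used during installation above):

domain.name=bi_domain
admin.url=t3://obieeserver:9500
admin.userName=weblogic
admin.password=Password01
datasource.target=bi_server1
datasource.url=jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1
datasource.password=password1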

Deploying the Application on WebLogic

Deploy the application to WebLogic using WLST:

cd /home/federico/ChitChatInstaller  
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./deploySocialize.py

To use this script, modify the deploySocialize.py file using the method of your choice. The first line must be updated with the username, password and URL to connect to your WebLogic Server instance. The second parameter of the deploy command must be updated to reflect your ChitChat access location.
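
For example, the relevant lines of deploySocialize.py might end up looking like this (credentials, URL and path are illustrative; connect() and deploy() per standard WLST):

connect('weblogic', 'Password01', 't3://obieeserver:9500')
deploy('Socialize', '/home/federico/ChitChatInstaller/Socialize.war', targets='bi_server1')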

Configuring the Application

ChitChat requires several configuration parameters to allow the application to operate successfully. To change the configuration, you must log in to the database schema as the RMREP user and update the values manually in the APPLICATION_CONSTANT table.

See the Installation Guide for full detail on the available configuration and integration options.

Enabling the Application

To use ChitChat, you must add a small block of code on any given dashboard (in a new column on the right-side of the dashboard) where you want to have the application enabled:

<rm id="socializePageParams"  
user="@{biServer.variables['NQ_SESSION.USER']}"  
tab="@{dashboard.currentPage.name}"  
page="@{dashboard.name}">  
</rm>  
<script src="/Socialize/js/dashboard.js"></script>  

Congratulations! You have successfully installed the Rittman Mead commentary tool. To use the application to its fullest capabilities, please refer to the User Guide.

Problems?

Please raise any issues on the github issue tracker. This is open source, so bear in mind that it's no-one's "job" to maintain the code - it's open to the community to use, benefit from, and maintain.

If you'd like specific help with an implementation, Rittman Mead would be delighted to assist - please do get in touch with Jon Mead or DM us on Twitter @rittmanmead to get access to our Slack channel for support about ChitChat.

Please contact us on the same channels to request a demo.

Categories: BI & Warehousing

Convert a WebLogic Cluster from configured to dynamic

Yann Neuhaus - Fri, 2018-06-15 00:14

Unless the servers in the cluster are asymmetric, which is not recommended anyway, dynamic clusters have many advantages over configured clusters:

  1. Ensure Cluster Member Uniformity
  2. Easily add new servers to manage more traffic
  3. Automatically adapt to load to add/remove managed servers
  4. Can still contain configured servers, even if that is not recommended, as per point 1
Server template

A server template defines a set of attributes. A change in a template is propagated to all servers that depend on it. A dynamic cluster can be based on a server template.

Here is an overview of the configured to dynamic change:

  1. Write down all customized parameters of the servers that are members of the cluster.
  2. Create new dynamic cluster
  3. Carry over all settings. There are two specificities with dynamic clusters:
    1. Listen port and SSL port which can be either:
      1. Static, meaning all servers of the cluster will have the same port. This is the best option when you have a one-server-to-one-machine mapping
      2. Calculated, meaning each server will listen on a different port, in steps of 1. For example, if the first port is set to 7000, server 1 will listen on 7001, server 2 on 7002, and so on
    2. Machine binding: Use of a specific (filtered) subset of machines from the cluster
  4. Create new server template
Procedure
  1. In left tree, go in Environment, Clusters, Server Templates
  2. Lock & Edit the configuration and click New
    1. Name the template:
      5 - Server Template Name
    2. In the server template list, select the newly created template to customize the parameters.
    3. In General tab, select the Cluster that needs to be converted:
      6 - Server Template Name
    4. Save
  3. In Environment, Clusters, Servers tab, set Server Template name:
    Cluster settings - servers tab
  4. Save and Activate the changes
  5. In Environment, Servers folder, dynamically created servers will be displayed (4 in your example):
    7 - Servers List
    Note that servers are distributed across the available machines in round robin and that the listen and SSL ports are incremented.
  6. Then, start these new servers and test that the application is running correctly.
  7. Finally, stop the configured managed servers by selecting “When work completes”:
    8 - stop when complete

The cluster is now dynamic, and you can easily add or remove managed servers from it.
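
If you prefer scripting over console clicks, the creation of the server template and dynamic cluster can be sketched in WLST, loosely following Oracle's documented sample (names, port and server count are illustrative; verify the DynamicServers attributes against your WebLogic version):

connect('weblogic', 'password', 't3://adminhost:7001')
edit()
startEdit()
cd('/')
cmo.createServerTemplate('my-server-template')
cd('/ServerTemplates/my-server-template')
cmo.setListenPort(7000)
cd('/')
cmo.createCluster('my-dynamic-cluster')
cd('/ServerTemplates/my-server-template')
cmo.setCluster(getMBean('/Clusters/my-dynamic-cluster'))
cd('/Clusters/my-dynamic-cluster/DynamicServers/my-dynamic-cluster')
cmo.setServerTemplate(getMBean('/ServerTemplates/my-server-template'))
cmo.setServerNamePrefix('managed-server-')
cmo.setMaximumDynamicServerCount(4)
cmo.setCalculatedListenPorts(true)
cmo.setCalculatedMachineNames(true)
save()
activate()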

What Next?

This was a quick overview of how to convert a configured cluster to a dynamic cluster. As we saw, it still requires manual intervention to add or remove servers from the cluster.

Coming with 12.2.1, WebLogic introduces a new feature called “elasticity”. This feature allows the number of managed servers in the cluster to scale automatically based on user-defined policies.

Thanks to WebLogic Diagnostic Framework (WLDF) policies, it is possible to monitor memory, CPU usage and threads, and then trigger a scale-up or scale-down action.

 

The article Convert a WebLogic Cluster from configured to dynamic appeared first on Blog dbi services.
