ODTUG Aggregator ODTUG Blogs http://localhost:8080 Tue, 22 May 2018 09:45:01 +0000 http://aggrssgator.com/ Announcing the 2018 ODTUG Innovation Award Nominations https://www.odtug.com/p/bl/et/blogaid=803&source=1 Introducing this year's 2018 ODTUG Innovation Award Nominations! ODTUG https://www.odtug.com/p/bl/et/blogaid=803&source=1 Tue May 22 2018 08:38:16 GMT-0400 (EDT) Simple CQRS – Tweets to Apache Kafka to Elastic Search Index using a little Node code https://technology.amis.nl/2018/05/22/simple-cqrs-tweets-to-apache-kafka-to-elastic-search-index-using-a-little-node-code/ <p>Put simply – CQRS (Command Query Responsibility Segregation) is an architecture pattern that recognizes the fact that it may be wise to separate the database that processes data manipulations from the engines that handle queries. When data retrieval requires special formats, scale, availability, TCO, location, search options and response times, it is worth considering introducing additional databases to handle those specific needs. These databases can provide data in a way that caters to the special needs of specific consumers – by offering data in a filtered, preprocessed or aggregated shape, with higher availability, at closer physical distance, with support for special search patterns and with better performance and scalability.</p> <p>A note of caution: you only introduce CQRS in a system if there is a clear need for it – not because you feel obliged to implement such a shiny, much-talked-about pattern, or because you feel as if everyone should have it. CQRS is not a simple thing – especially in existing systems, packaged applications and legacy databases. Detecting changes and extracting data from the source, transporting and converting the data and applying the data in a reliable, fast enough way with the required level of consistency is not trivial.</p> <p>In many of my conference presentations, I show demonstrations with running software.
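<p>Before the running software: the essence of the pattern can be sketched in a handful of lines of Node code. This is a hypothetical, in-memory illustration only – in the demo that follows, a Kafka topic takes the place of the event hand-off and an Elastic Search index takes the place of the read model:</p>

```javascript
// Hypothetical in-memory CQRS sketch (illustration only, not the demo code).
// Command side: the only thing it does is record changes as events.
const eventLog = [];
// Query side: a separate read model, shaped for fast lookups.
const readModelByAuthor = new Map();

function handleRecordTweetCommand(command) {
  const event = { eventType: 'tweetEvent', author: command.author, text: command.text };
  eventLog.push(event);
  // In the demo this hand-off happens asynchronously, via a Kafka topic.
  project(event);
}

function project(event) {
  // Pre-aggregate per author so the query side can answer without scanning.
  const count = (readModelByAuthor.get(event.author) || 0) + 1;
  readModelByAuthor.set(event.author, count);
}

handleRecordTweetCommand({ author: 'lucas', text: 'hello #jeeconf' });
handleRecordTweetCommand({ author: 'lucas', text: 'hello #oraclecode' });
console.log(readModelByAuthor.get('lucas')); // 2
```

The command side stays simple and authoritative (the event log), while the query side is free to keep its data in whatever shape the consumers need.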
I do so to better clarify what I am talking about, to allow the audience to try things out for themselves – and because doing demos usually is fun. A frequent element in these demos is Twitter, because it is well known and because it allows the audience to participate: I can invite an audience to tweet using an agreed hashtag and their tweets trigger the demo or at least make an appearance. In this article, I discuss one of these demos – showing an example of CQRS. The picture shows the outline: tweets are consumed by a Node application. Each tweet is converted to an event on a Kafka Topic. This event is consumed by a second Node application (potentially one of multiple instances in a Kafka Consumer Group, to allow for more scalability). This Node application creates a new record in an Elastic Search index – the Query destination in this little CQRS spiel. The out-of-the-box dashboard tool Kibana allows us to quickly inspect and analyse the tweet records. Additionally, we can create an advanced query service on top of Elastic Search.</p> <p>This article shows the code behind this demo. This code was prepared for the JEEConf 2018 conference in Kyiv, Ukraine &#8211; and can be found on GitHub: <a title="https://github.com/lucasjellema/50-shades-of-data-jeeconf2018-kyiv/tree/master/twitter-kafka-demo" href="https://github.com/lucasjellema/50-shades-of-data-jeeconf2018-kyiv/tree/master/twitter-kafka-demo">https://github.com/lucasjellema/50-shades-of-data-jeeconf2018-kyiv/tree/master/twitter-kafka-demo</a>.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-7.png?ssl=1"><img width="702" height="385" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-7.png?resize=702%2C385&#038;ssl=1" border="0" data-recalc-dims="1"></a></p> <p>The main elements in the demo:</p> <p>1. 
Kafka Topic tweets-topic (in my demo, this topic is created in Oracle Cloud Event Hub Service, a managed Kafka cloud service)</p> <p>2. Node application that consumes from Twitter – and publishes to the Kafka topic </p> <p>3. (Postman Collection to create) Elastic Search Index plus custom mapping (primarily to extract a proper creation date-time value from a date string) (in my demo, this Elastic Search Index is created in an Elastic Search instance running in a Docker Container on Oracle Container Cloud)</p> <p>4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the index. In this demo, the Node application is also running on Oracle Cloud (Application Container Cloud), but that does not have to be the case</p> <p>5. Kibana dashboard on top of the Tweets Index. In my demo, Kibana is also running in a Docker container in Oracle Container Cloud</p> <p></p> <h3>1. Kafka Tweets Topic on Oracle Event Hub Cloud Service</h3> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-8.png?ssl=1"><img width="702" height="318" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-8.png?resize=702%2C318&#038;ssl=1" border="0" data-recalc-dims="1"></a></p> <p>After completing the wizard, the topic is created and can be accessed by producers and consumers.</p> <p></p> <h3>2. Node application that consumes from Twitter – and publishes to the Kafka topic </h3> <p>The Node application consists of an index.js file that handles HTTP requests – for health checking – and consumes from Twitter and publishes to a Kafka Topic. It uses a file twitterconfig.js (not included) that contains the secret details of a Twitter client. 
The contents of this file should look like this – and should contain your own Twitter Client Details:</p> <p><pre class="brush: jscript; title: ; notranslate">
// CHANGE THIS **************************************************************
// go to https://apps.twitter.com/ to register your app
var twitterconfig = {
  consumer_key: 'mykey',
  consumer_secret: 'mysecret',
  access_token_key: 'at-key',
  access_token_secret: 'at-secret'
};

module.exports = {twitterconfig};
</pre> </p> <p>The index.js file requires the npm libraries <em>kafka-node</em> and <em>twit</em> as well as <em>express</em> and <em>http</em> for handling HTTP requests. </p> <p>The code can be said to be divided in three parts:</p> <ul> <li>initialization, create HTTP server and handle HTTP requests</li> <li>Consume from Twitter</li> <li>Publish to Kafka</li> </ul> <p>Here are the three code sections:</p> <p><pre class="brush: jscript; title: ; notranslate">
const express = require('express');
var http = require('http');
const app = express();
var PORT = process.env.PORT || 8144;
const server = http.createServer(app);
var APP_VERSION = "0.0.3";
const startTime = new Date();
const bodyParser = require('body-parser');
app.use(bodyParser.json());
var tweetCount = 0;

app.get('/about', function (req, res) {
  var about = {
    "about": "Twitter Consumer and Producer to " + TOPIC_NAME,
    "PORT": process.env.PORT,
    "APP_VERSION": APP_VERSION,
    "Running Since": startTime,
    "Total number of tweets processed": tweetCount
  };
  res.json(about);
});

server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});
</pre> </p> <p></p> <p>Code for consuming from Twitter – in this case for the hash tags #jeeconf, #java and #oraclecode:</p> <p><pre class="brush: jscript; title: ; notranslate">
var Twit = require('twit');
const { twitterconfig } = require('./twitterconfig');

var T = new Twit({
  consumer_key: twitterconfig.consumer_key,
  consumer_secret: twitterconfig.consumer_secret,
  access_token: twitterconfig.access_token_key,
  access_token_secret: twitterconfig.access_token_secret,
  timeout_ms: 60 * 1000,
});

var twiterHashTags = process.env.TWITTER_HASHTAGS || '#oraclecode,#java,#jeeconf';
var tracks = { track: twiterHashTags.split(',') };
let tweetStream = T.stream('statuses/filter', tracks);
tweetstream(tracks, tweetStream);

function tweetstream(hashtags, tweetStream) {
  console.log("Started tweet stream for hashtag #" + JSON.stringify(hashtags));
  tweetStream.on('connected', function (response) {
    console.log("Stream connected to twitter for #" + JSON.stringify(hashtags));
  });
  tweetStream.on('error', function (error) {
    console.log("Error in Stream for #" + JSON.stringify(hashtags) + " " + error);
  });
  tweetStream.on('tweet', function (tweet) {
    produceTweetEvent(tweet);
  });
}
</pre> </p> <p></p> <p>Code for publishing to the Kafka Topic <em>a516817-tweetstopic</em>:</p> <p><pre class="brush: jscript; title: ; notranslate">
const kafka = require('kafka-node');
const APP_NAME = "TwitterConsumer";

var EVENT_HUB_PUBLIC_IP = process.env.KAFKA_HOST || '129.1.1.116';
var TOPIC_NAME = process.env.KAFKA_TOPIC || 'a516817-tweetstopic';

var Producer = kafka.Producer;
var client = new kafka.Client(EVENT_HUB_PUBLIC_IP);
var producer = new Producer(client);
KeyedMessage = kafka.KeyedMessage;

producer.on('ready', function () {
  console.log("Producer is ready in " + APP_NAME);
});
producer.on('error', function (err) {
  console.log("failed to create the client or the producer " + JSON.stringify(err));
});

let payloads = [
  { topic: TOPIC_NAME, messages: '*', partition: 0 }
];

function produceTweetEvent(tweet) {
  var hashtagFound = false;
  try {
    // find out which of the original hashtags { track: ['oraclecode', 'java', 'jeeconf'] }
    // is in the hashtags for this tweet; that is the one for the tagFilter property.
    // select one other hashtag from tweet.entities.hashtags to set in property hashtag
    var tagFilter = "#jeeconf";
    var extraHashTag = "liveForCode";
    for (var i = 0; i &lt; tweet.entities.hashtags.length; i++) {
      var tag = '#' + tweet.entities.hashtags[i].text.toLowerCase();
      console.log("inspect hashtag " + tag);
      var idx = tracks.track.indexOf(tag);
      if (idx &gt; -1) {
        tagFilter = tag;
        hashtagFound = true;
      } else {
        extraHashTag = tag;
      }
    }//for
    if (hashtagFound) {
      var tweetEvent = {
        "eventType": "tweetEvent",
        "text": tweet.text,
        "isARetweet": tweet.retweeted_status ? "y" : "n",
        "author": tweet.user.name,
        "hashtag": extraHashTag,
        "createdAt": tweet.created_at,
        "language": tweet.lang,
        "tweetId": tweet.id,
        "tagFilter": tagFilter,
        "originalTweetId": tweet.retweeted_status ? tweet.retweeted_status.id : null
      };
      eventPublisher.publishEvent(tweet.id, tweetEvent);
      tweetCount++;
    }// if hashtag found
  } catch (e) {
    console.log("Exception in publishing Tweet Event " + JSON.stringify(e));
  }
}

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event) {
  km = new KeyedMessage(eventKey, JSON.stringify(event));
  payloads = [
    { topic: TOPIC_NAME, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(data));
  });
}//publishEvent
</pre> </p> <p></p> <h3>3. (Postman Collection to create) Elastic Search Index plus custom mapping </h3> <p>Preparation of an Elastic Search environment is done through REST API calls. 
These can be made from code, from the command line (using curl) or from a tool such as Postman. In this case, I have created a Postman collection with a number of calls to prepare the Elastic Search index <em>tweets</em>. </p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-9.png?ssl=1"><img width="702" height="386" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-9.png?resize=702%2C386&#038;ssl=1" border="0" data-recalc-dims="1"></a></p> <p></p> <p>The following requests are relevant:</p> <ul> <li>Check if the Elastic Search server is healthy: GET {{ELASTIC_HOME}}:9200/_cat/health</li> <li>Create the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets</li> <li>Create the mapping for the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets/_mapping/doc</li> </ul> <p>The body for the last request is shown here:</p> <p><pre class="brush: jscript; title: ; notranslate">
{
  "properties": {
    "author": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
    },
    "createdAt": {
      "type": "date",
      "format": "EEE MMM dd HH:mm:ss ZZ yyyy"
    },
    "eventType": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
    },
    "hashtag": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
    },
    "isARetweet": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
    },
    "language": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
    },
    "tagFilter": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
    },
    "text": {
      "type": "text",
      "fields": { "keyword": { "type": "keyword", "ignore_above": 256 } }
    },
    "tweetId": { "type": "long" }
  }
}
</pre> </p> <p>The custom aspect of the mapping is primarily there to extract a proper creation date-time value from Twitter's date string.</p> <p></p> <h3>4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the elastic search index </h3> <p>The tweetListener.js file contains the code for two main purposes: handle HTTP requests (primarily for health checks) and consume events from the Kafka Topic for <em>tweets</em>. This file requires the npm modules express, http and kafka-node for this. It also imports the local module <em>model</em> from the file model.js. This module writes Tweet records to the Elastic Search index. 
It uses the npm module <em>elasticsearch</em> for this.</p> <p>The code in tweetListener.js is best read in two sections:</p> <p>First section for handling HTTP requests:</p> <p><pre class="brush: jscript; title: ; notranslate">
const express = require('express');
var https = require('https'),
    http = require('http');
const app = express();
var PORT = process.env.PORT || 8145;
const server = http.createServer(app);
var APP_VERSION = "0.0.3";
const startTime = new Date(); // needed by the /about resource below
const bodyParser = require('body-parser');
app.use(bodyParser.json());
var tweetCount = 0;

app.get('/about', function (req, res) {
  var about = {
    "about": "Twitter Consumer from " + SOURCE_TOPIC_NAME,
    "PORT": process.env.PORT,
    "APP_VERSION": APP_VERSION,
    "Running Since": startTime,
    "Total number of tweets processed": tweetCount
  };
  res.json(about);
});

server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});
</pre> </p> <p>Second section for consuming Kafka events from the tweets topic – and invoking the model module for each event:</p> <p><pre class="brush: jscript; title: ; notranslate">
var kafka = require('kafka-node');
var async = require('async'); // used in the SIGINT handler below
var model = require("./model");

var tweetListener = module.exports;

var subscribers = [];
tweetListener.subscribeToTweets = function (callback) {
  subscribers.push(callback);
}

// var kafkaHost = process.env.KAFKA_HOST || "192.168.188.102";
// var zookeeperPort = process.env.ZOOKEEPER_PORT || 2181;
// var TOPIC_NAME = process.env.KAFKA_TOPIC || 'tweets-topic';
var KAFKA_ZK_SERVER_PORT = 2181;
var SOURCE_KAFKA_HOST = '129.1.1.116';
var SOURCE_TOPIC_NAME = 'a516817-tweetstopic';

var consumerOptions = {
  host: SOURCE_KAFKA_HOST + ':' + KAFKA_ZK_SERVER_PORT,
  groupId: 'consume-tweets-for-elastic-index',
  sessionTimeout: 15000,
  protocol: ['roundrobin'],
  fromOffset: 'latest' // equivalent of auto.offset.reset; valid values are 'none', 'latest', 'earliest'
};

var topics = [SOURCE_TOPIC_NAME];
var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumer1' }, consumerOptions), topics);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);

function onMessage(message) {
  console.log('%s read msg Topic="%s" Partition=%s Offset=%d', this.client.clientId, message.topic, message.partition, message.offset);
  console.log("Message Value " + message.value);
  subscribers.forEach((subscriber) =&gt; {
    subscriber(message.value);
  });
}

function onError(error) {
  console.error(error);
  console.error(error.stack);
}

process.once('SIGINT', function () {
  async.each([consumerGroup], function (consumer, callback) {
    consumer.close(true, callback);
  });
});

tweetListener.subscribeToTweets((message) =&gt; {
  var tweetEvent = JSON.parse(message);
  tweetCount++;
  // ready to elastify tweetEvent
  console.log("Ready to put on Elastic " + JSON.stringify(tweetEvent));
  model.saveTweet(tweetEvent).then((result, error) =&gt; {
    console.log("Saved to Elastic " + JSON.stringify(result) + ' Error? ' + JSON.stringify(error));
  });
});
</pre> </p> <p>The file model.js connects to the Elastic Search server and saves tweets to the tweets index when so requested. Very straightforward – and without any exception handling, for example for the case where the Elastic Search server does not accept a record or is simply unavailable. 
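<p>Should this ever need to be hardened beyond a demo, a bounded-retry wrapper around the save call would be a first step. A sketch – <em>saveWithRetry</em> is a hypothetical helper, not part of the demo code:</p>

```javascript
// Hypothetical sketch: retry an async save a bounded number of times before
// giving up - the kind of handling the demo deliberately omits.
async function saveWithRetry(saveFn, record, attempts = 3, delayMs = 500) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await saveFn(record);
    } catch (e) {
      console.error(`save attempt ${i} failed: ${e.message}`);
      if (i === attempts) throw e; // out of attempts: propagate the error
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Example with a flaky save that fails twice and then succeeds:
let calls = 0;
async function flakySave(record) {
  calls++;
  if (calls < 3) throw new Error('elasticsearch cluster is down!');
  return record;
}

saveWithRetry(flakySave, { tweetId: 1 }, 3, 10)
  .then((saved) => console.log('saved tweet ' + saved.tweetId + ' after ' + calls + ' attempts'));
```

The subscriber in tweetListener.js could then call saveWithRetry(model.saveTweet, tweetEvent) instead of invoking model.saveTweet directly.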
Remember: this is just the code for a demo.</p> <p><pre class="brush: jscript; title: ; notranslate">
var tweetsModel = module.exports;
var elasticsearch = require('elasticsearch');

var ELASTIC_SEARCH_HOST = process.env.ELASTIC_CONNECTOR || 'http://129.150.114.134:9200';

var client = new elasticsearch.Client({
  host: ELASTIC_SEARCH_HOST,
});

client.ping({
  requestTimeout: 30000,
}, function (error) {
  if (error) {
    console.error('elasticsearch cluster is down!');
  } else {
    console.log('Connection to Elastic Search is established');
  }
});

tweetsModel.saveTweet = async function (tweet) {
  try {
    var response = await client.index({
      index: 'tweets',
      id: tweet.tweetId,
      type: 'doc',
      body: tweet
    });
    console.log("Response: " + JSON.stringify(response));
    return tweet;
  } catch (e) {
    console.error("Error in Elastic Search - index document " + tweet.tweetId + ":" + JSON.stringify(e));
  }
}
</pre> </p> <p></p> <h3>5. Kibana dashboard on top of the Tweets Index </h3> <p>Kibana is an out-of-the-box application, preconfigured in my case for the colocated Elastic Search server. 
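<p>The advanced query service mentioned earlier could start from plain search requests against this index. As a sketch, here is a query body for "the latest original (non-retweet) tweets for one tagFilter value" – <em>buildTweetQuery</em> is a hypothetical helper, with field names following the custom mapping shown above:</p>

```javascript
// Hypothetical helper: build an Elastic Search query body for the tweets index.
// Field names (tagFilter, isARetweet, createdAt) follow the custom mapping;
// the .keyword sub-fields allow exact matching on the text fields.
function buildTweetQuery(tagFilter, maxResults) {
  return {
    size: maxResults,
    sort: [{ createdAt: { order: 'desc' } }],
    query: {
      bool: {
        filter: [{ term: { 'tagFilter.keyword': tagFilter } }],
        must_not: [{ term: { 'isARetweet.keyword': 'y' } }]
      }
    }
  };
}

// This body would be passed to the elasticsearch client, for example:
// client.search({ index: 'tweets', body: buildTweetQuery('#jeeconf', 10) })
const body = buildTweetQuery('#jeeconf', 10);
console.log(body.size, body.query.bool.filter[0].term['tagFilter.keyword']); // 10 #jeeconf
```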
Once I provide the name of the index I am interested in – <em>tweets</em> – Kibana immediately shows an overview of (selected time ranges in) this index. The peaks in the screenshot indicate May 19th and 20th, when JEEConf was taking place in Kyiv, where I presented this demo:</p> <p></p> <p><a href="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-10.png?ssl=1"><img width="702" height="325" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-10.png?resize=702%2C325&#038;ssl=1" border="0" data-recalc-dims="1"></a></p> <p>The same results in the Twitter UI:</p> <p><a href="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-11.png?ssl=1"><img width="644" height="412" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-11.png?resize=644%2C412&#038;ssl=1" border="0" data-recalc-dims="1"></a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/05/22/simple-cqrs-tweets-to-apache-kafka-to-elastic-search-index-using-a-little-node-code/">Simple CQRS &ndash; Tweets to Apache Kafka to Elastic Search Index using a little Node code</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=48819 Tue May 22 2018 08:29:00 GMT-0400 (EDT) Demonstrating Oracle SQL Developer Web: CREATE & ALTER TABLE https://www.thatjeffsmith.com/archive/2018/05/demonstrating-oracle-sql-developer-web-create-alter-table/ <p>In two previous posts, I have:</p> <ul> <li><a href="https://www.thatjeffsmith.com/archive/2018/05/announcing-oracle-sql-developer-web/" rel="noopener" target="_blank">Introduced Oracle SQL Developer Web</a> and did a quick demo of the Worksheet</li> <li><a 
href="https://www.thatjeffsmith.com/archive/2018/05/demonstrating-oracle-sql-developer-web-the-data-modeler/" rel="noopener" target="_blank">Demonstrated the data modeler</a> diagramming feature.</li> </ul> <p>Today, I want to show you our CREATE and EDIT TABLE dialogs.</p> <p>While I aim for 10 minute videos, I had to go into overtime, and came out at 13 minutes. But as a bonus, you get to see me think in real time as I cocktail-napkin-style &#8216;design&#8217; my table. </p> <p>But before I show the video, some people have been asking &#8211; </p> <h3>Where can I get and use SQL Developer Web!?!</h3> <p>The answer is, of course, in the <a href="http://cloud.oracle.com/database" rel="noopener" target="_blank">Oracle Cloud</a>! Sign up for one of our DBaaS subscriptions. In the future, we will be adding this interface to many of our other database-centric services AND we will be making this available for our on-premises Oracle Database customers (via ORDS). So, stay tuned for more news in this space.</p> <p>In other, shorter words, &#8220;SQL Developer Web is Cloud-First, not Cloud-Only.&#8221; &#8212;<em> insert legalese and disclaimers here</em>. </p> <h3>The Demo!</h3> <p>If you&#8217;re a subscriber to the blog &#8211; thank you! &#8211; and reading this post in your INBOX, you&#8217;ll need to open the post directly on my site, as the RSS feeds don&#8217;t render the embedded YouTube videos in many of your email clients. Or you can of course also subscribe and hit the notification button for my <a href="https://www.youtube.com/c/JeffSmiththat" rel="noopener" target="_blank">YouTube channel</a> to know immediately when I post new content. 
</p> <p><iframe width="853" height="480" src="https://www.youtube.com/embed/G6-MB2uOqEQ" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p> thatjeffsmith https://www.thatjeffsmith.com/?p=6637 Mon May 21 2018 11:15:21 GMT-0400 (EDT) Rapidly spinning up a VM with Ubuntu and Docker–on my Windows machine using Vagrant and VirtualBox https://technology.amis.nl/2018/05/21/rapidly-spinning-up-a-vm-with-ubuntu-and-docker-on-my-windows-machine-using-vagrant-and-virtualbox/ <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-2.png?ssl=1"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-2.png?resize=583%2C412&#038;ssl=1" alt="image" width="583" height="412" border="0" data-recalc-dims="1" /></a>I have a Windows laptop. And of course I want to work with Docker containers. Using the Docker Quickstart Terminal is one way of doing so, and to some extent that works fine. But whenever I want more control over the Linux environment that runs the Docker host, or want to run multiple such environments in parallel, I like to just run VMs under my own control and use them to run Docker inside.</p> <p>The easiest way for me to create and run Docker-enabled VMs is using a combination of Vagrant and VirtualBox. VirtualBox runs the VM and takes care of networking from and to the VM, as well as mapping local directories on the Windows host machine into the VM. Vagrant runs on the Windows machine as a command line tool. It interacts with the VirtualBox APIs to create, start, pause, resume and stop the VMs. Based on simple declarative definitions – text files – it will configure the VM and take care of it.</p> <p>In this article, I share the very simple Vagrant script that I am using to spin up and manage VMs in which I run Docker containers. 
Vagrant takes care of installing Docker into the VM, of configuring the Network, for mapping a local host directory into the VM and for creating a larger-than-normal disk for the Ubuntu VM. I will briefly show how to create/start the VM, SSH into it to create a terminal session, run a Docker file from the Windows host to run a container and to halt and restart.</p> <p>The prerequisites for following along: have a recent version of Vagrant and VirtualBox installed.</p> <p>To create and run the VM, write the following Vagrantfile to a directory: <a title="https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2" href="https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2">https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2</a> , for example using</p> <p><em>git clone https://gist.github.com/7593677f6d03285236c8f0391f1a78c2.git</em><br /> <script src="https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2.js"></script></p> <p>Open a Windows command line and cd to that directory. <a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-3.png?ssl=1"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-3.png?resize=702%2C353&#038;ssl=1" alt="image" width="702" height="353" border="0" data-recalc-dims="1" /></a></p> <p>Then type</p> <p><em>vagrant up</em></p> <p>This will run Vagrant and have it process the local vagrantfile. 
The base VM image is downloaded – if it does not already exist on your Windows host and subsequently Vagrant engages with VirtualBox to create the VM according to the configuration settings.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-4.png?ssl=1"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-4.png?resize=467%2C412&#038;ssl=1" alt="image" width="467" height="412" border="0" data-recalc-dims="1" /></a></p> <p>When the VM is running, Vagrant will do the next processing steps: provisioning Docker and Docker Compose in the VM.</p> <p><a href="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/SNAGHTMLd51f82.png?ssl=1"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="SNAGHTMLd51f82" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/SNAGHTMLd51f82_thumb.png?resize=702%2C366&#038;ssl=1" alt="SNAGHTMLd51f82" width="702" height="366" border="0" data-recalc-dims="1" /></a>Finally, if there is a docker-compose.yml file in the current directory, it will be run by docker compose inside the VM; if there is none, an ugly error message is shown – but the VM will still be created and end up running.</p> <p>&nbsp;</p> <p>When vagrant up is complete, the VM is running, Docker is running and if any containers were created and started by Docker Compose, then they will be running as well.</p> <p>Using</p> <p><em>vagrant ssh</em></p> <p>(from the command line interface and still from the same directory) we can create a terminal session into the VM.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/SNAGHTML3ed88f.png?ssl=1"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="SNAGHTML3ed88f" 
src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/SNAGHTML3ed88f_thumb.png?resize=390%2C412&#038;ssl=1" alt="SNAGHTML3ed88f" width="390" height="412" border="0" data-recalc-dims="1" /></a></p> <p>Using</p> <p><em>docker ps</em></p> <p>we can check if any containers were started. And we can start a(nother) container if we feel like it, such as:</p> <p><em>docker run busybox echo &#8220;hello from busybox&#8221;</em></p> <p><a href="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-5.png?ssl=1"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-5.png?resize=702%2C130&#038;ssl=1" alt="image" width="702" height="130" border="0" data-recalc-dims="1" /></a></p> <p>The directory on the Windows host from which we ran vagrant up and vagrant ssh is mapped into the vm, to /vagrant. Using</p> <p><em>ls /vagrant</em></p> <p>we can check on the files in that directory that are available from within the VM.</p> <p>&nbsp;</p> <p>We can for example build a Docker container from a Docker file in that directory.</p> <p>Using</p> <p><em>exit</em></p> <p>we can leave the Vagrant SSH session. The VM keeps on running. We can return into the VM using vagrant ssh again. We can have multiple sessions into the same VM – by just starting multiple command line sessions in Windows, navigating to the same directory and running vagrant ssh in each session.</p> <p>Using</p> <p><em>vagrant halt</em></p> <p>we stop the VM. 
Its state is saved and we can continue from that state at a later point, simply by running <em>vagrant up</em> again.</p> <p>With <em>vagrant pause</em> and <em>vagrant resume</em> we can create a snapshot of the VM in mid flight and at a later moment (which can be after a restart of the host system) continue where we left off.</p> <p>Using</p> <p><em>vagrant destroy</em></p> <p>you can completely remove the VM, releasing the host (disk) resources that were consumed by it.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image-6.png?ssl=1"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/image_thumb-6.png?resize=661%2C94&#038;ssl=1" alt="image" width="661" height="94" border="0" data-recalc-dims="1" /></a></p> <h2>Resources</h2> <p>Vagrant Documentation: <a title="https://www.vagrantup.com/docs/" href="https://www.vagrantup.com/docs/">https://www.vagrantup.com/docs/</a></p> <p>Download Vagrant: <a title="https://www.vagrantup.com/downloads.html" href="https://www.vagrantup.com/downloads.html">https://www.vagrantup.com/downloads.html</a></p> <p>Download Oracle VirtualBox: <a title="https://www.virtualbox.org/wiki/Downloads" href="https://www.virtualbox.org/wiki/Downloads">https://www.virtualbox.org/wiki/Downloads</a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/05/21/rapidly-spinning-up-a-vm-with-ubuntu-and-docker-on-my-windows-machine-using-vagrant-and-virtualbox/">Rapidly spinning up a VM with Ubuntu and Docker&ndash;on my Windows machine using Vagrant and VirtualBox</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=48805 Mon May 21 2018 09:22:03 GMT-0400 (EDT) Announcing July/August Australian Dates: “Oracle Indexing Internals and Best Practices” Seminar 
https://richardfoote.wordpress.com/2018/05/21/announcing-july-august-australian-dates-oracle-indexing-internals-and-best-practices-seminar/ I&#8217;m very excited to announce new Australian dates for my highly acclaimed &#8220;Oracle Indexing Internals and Best Practices&#8221; seminar. This is a must attend seminar of benefit to not only DBAs, but also to Developers, Solution Architects and anyone else interested in designing, developing or maintaining high performance Oracle-based applications. It’s a fun, but intense, [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5624 Mon May 21 2018 03:58:17 GMT-0400 (EDT) New XPS 15 : The Wait is Over http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/qCzCg7Yf_fI/ <p>Followers of the blog will know I&#8217;ve been moaning about my MacBook Pro and macOS for a while now, and talking about making a switch back to Windows. That time will arrive soon, because I&#8217;ve just ordered one of these.</p> <p><a href="http://www.dell.com/en-uk/shop/laptops-notebooks-and-2-in-1-laptops/new-xps-15/spd/xps-15-9570-laptop/cnx97007"><img class="alignleft wp-image-8055" src="https://oracle-base.com/blog/wp-content/uploads/2018/05/dell-xps-15.png" alt="" width="200" height="119" /></a>It&#8217;s a <a href="http://www.dell.com/en-uk/shop/laptops-notebooks-and-2-in-1-laptops/new-xps-15/spd/xps-15-9570-laptop/cnx97007">Dell XPS 15&#8243;</a> with 32G RAM, 1TB M.2 drive and an i9 (6 core) processor.</p> <p>It&#8217;s a little over the top, but I tend to hold on to laptops for quite a while, assuming they work properly. I might have gone down-market a bit if Dell had released something in the middle range. In the UK they currently have low spec or mega spec in the new 15&#8243; range, and I&#8217;m getting increasingly worried about my current MBP, so I just went for it. Working for a university has the distinct advantage that I get a fantastic Higher Education discount from Dell when buying kit for home use. 
We also get an OK discount from Apple, but who cares&#8230;</p> <p>This will be my main desktop and travel laptop, so I&#8217;ll be interested to see how it stacks up. I know a couple of people with the 2017 model and they say it is awesome, so on paper this looks like it will be great, assuming it works. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I was tempted to go for one of the 13&#8243; versions, which <a href="https://connor-mcdonald.com/">Connor McDonald</a> recommended. The extra portability would be nice, but having recently spent some time working from just the laptop with no extra screen, I would go mad on such a small screen, no matter how good the resolution was.</p> <p>Of course I&#8217;ve bought a <a href="https://www.dell.com/en-uk/shop/accessories/apd/452-bcov">dock</a> for home and I already have a great monitor, so hopefully it should all slot into the setup nicely. I probably won&#8217;t get to use it for the next couple of conferences because of delivery dates, setup and understanding what adapters I need to connect to the real world. I&#8217;m not carrying the dock around with me. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I&#8217;ll no doubt write about the experience as it happens. I&#8217;m using Windows 10 at work, so I don&#8217;t think that will be an issue as it is working out fine. It&#8217;s always a bit of a concern when switching over to a new bit of kit. What if you get &#8220;the bad one&#8221;, which has certainly happened with this last MBP.
Also, I&#8217;ve got my setup documented, but I always worry I will miss something out&#8230; <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Fingers crossed this will work out&#8230;</p> <p>Cheers</p> <p>Tim&#8230;</p> <p>PS. For context, you might want to read my post <a href="/blog/2018/02/04/mac-updates-disaster-again-and-a-return-to-windows-desktop/">here</a> before you tell me how great your preferred desktop OS is&#8230; <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/05/21/new-xps-15-the-wait-is-over/">New XPS 15 : The Wait is Over</a> was first posted on May 21, 2018 at 7:36 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/qCzCg7Yf_fI" height="1" width="1" alt=""/> Tim... 
https://oracle-base.com/blog/?p=8053 Mon May 21 2018 02:36:28 GMT-0400 (EDT) New Video : BACKUP AS COPY PLUGGABLE DATABASE https://hemantoracledba.blogspot.com/2018/05/new-video-backup-as-copy-pluggable.html <div dir="ltr" style="text-align: left;" trbidi="on">I have published a new YouTube video on the RMAN "<a href="https://youtu.be/wxbzlHvH8pM" target="_blank">BACKUP AS COPY PLUGGABLE DATABASE</a>" command.<br />.<br />.<br />.<br /><br /></div> Hemant K Chitale tag:blogger.com,1999:blog-1931548025515710472.post-3489779333014370943 Sun May 20 2018 10:42:00 GMT-0400 (EDT) Chrome 68, HTTPS , Let’s Encrypt and ORDS http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/2SMCtONvyBc/ <p>In February Google released a <a href="https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html">post about Chrome 68</a>, due for release in July, which will increase the pressure to adopt HTTPS for all websites because of this behaviour change.</p> <p><img class="alignnone size-full wp-image-8051" src="https://oracle-base.com/blog/wp-content/uploads/2018/05/chrome68-https.png" alt="" width="564" height="161" /></p> <p>Basically HTTP sites will be marked as insecure, rather than just getting the (i) symbol.</p> <p>Recently I&#8217;ve seen a bunch of sponsored posts talking about this in an attempt to sell certificates. GoDaddy are pushing the advertising hard. I just wanted to remind people there is a free alternative called <a href="https://letsencrypt.org/">Let&#8217;s Encrypt</a> you might want to consider.</p> <h2>Let&#8217;s Encrypt</h2> <p>I&#8217;ve been using HTTPS for a few years now, but over a year ago I switched to using the free Let&#8217;s Encrypt service to get my certificates and so far I&#8217;ve had no problems. I wrote about this in a blog post <a href="/blog/2017/03/15/lets-encrypt-free-certificates-on-oracle-linux-certbot/">here</a>. 
That links to this article about using CertBot to automate the certificate renewal, which includes the Apache HTTP Server config.</p> <ul> <li><a href="/articles/linux/letsencrypt-free-certificates-on-oracle-linux">Let’s Encrypt – Free Certificates on Oracle Linux (CertBot)</a></li> </ul> <p>The article also links to this article about configuring HTTPS for Tomcat, which includes an example of using a Let&#8217;s Encrypt certificate.</p> <ul> <li><a href="/articles/linux/apache-tomcat-enable-https#certs-and-keys">Apache Tomcat : Enable HTTPS &#8211; Using Certificates and Keys</a></li> </ul> <p>I always run Oracle REST Data Services (ORDS) under Tomcat, so this is how I HTTPS enable ORDS. If you would prefer to run ORDS in standalone mode, but still want to use a real certificate <a href="http://krisrice.io">Kris Rice</a> has your back with this article.</p> <ul> <li><a href="http://krisrice.io/2018-05-09-ORDS-and-lets_encrypt/">ORDS and Let&#8217;s Encrypt</a></li> </ul> <p>Of course, you shouldn&#8217;t be having direct traffic to Tomcat servers or standalone ORDS services you care about. They should be sitting behind some form of reverse proxy, or a load balancer acting as a reverse proxy, which is performing the SSL termination. In my company, we have the real certificates on the load balancers, which perform the SSL termination, then re-encrypt to speak to the services below them.</p> <h2>Thoughts</h2> <p>In general I think the push towards HTTPS is a good thing, but I do have a few reservations.</p> <ul> <li>There are plenty of sites, like my own, that don&#8217;t really do anything that requires encrypted connections. You are just there to read publicly available stuff. Marking them as insecure seems a little stupid to me. 
<strong>Update</strong>: As pointed out in the comments, it does make it harder for people to intercept and change the information during transit.</li> <li>A bigger beef is the fact that anything with a valid HTTPS certificate is marked as &#8220;Secure&#8221;. If you work in IT you understand this just means the connection is secure, but what does it mean to other people? I could understand it if some people thought it meant it was a safe website to visit, when it means nothing of the sort. If HTTPS is the new &#8220;normal&#8221;, I think the browser should stop marking it as secure, and only flag when it is insecure. <strong>Update</strong>: It seems this is going to change (<a href="https://blog.chromium.org/2018/05/evolving-chromes-security-indicators.html">here</a>). Thanks to <a href="http://blog.sydoracle.com/">Gary</a> for pointing this out.</li> <li>It worries me that Google can make this decision and the rest of the world has to jump. This all started when they began to alter index ranking based on the presence of HTTPS, which is why I first enabled HTTPS on my website about 4-5 years ago I think. Now the Chrome market share of about 60% is such that they can make big changes like this without having to get buy in from the rest of the world. The motives are good, but I don&#8217;t like it.</li> <li>I&#8217;m not saying you shouldn&#8217;t pay for certificates. My company still does. I&#8217;m just saying you have a choice, especially if it is something that you do for fun like this website. In this case the free option is always the good one. 
<img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></li> </ul> <p>Happy encrypting&#8230;</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/05/19/chrome-68-https-lets-encrypt-and-ords/">Chrome 68, HTTPS , Let&#8217;s Encrypt and ORDS</a> was first posted on May 19, 2018 at 7:59 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/2SMCtONvyBc" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8050 Sat May 19 2018 02:59:37 GMT-0400 (EDT) My Performance & Troubleshooting scripts (TPT) for Oracle are now in GitHub and open sourced https://blog.tanelpoder.com/2018/05/18/my-performance-troubleshooting-scripts-tpt-for-oracle-are-now-in-github-and-open-sourced/ <p>I have uploaded my <strong>TPT-oracle</strong> scripts to GitHub and have formally open sourced them under Apache 2.0 license as well. This allows companies to embed this software in their toolsets and processes &amp; distribute them without a worry from legal departments.</p> <p>The repository is here:</p> <ul> <li><a href="https://github.com/tanelpoder/tpt-oracle" target="_blank" rel="noopener">https://github.com/tanelpoder/tpt-oracle</a></li> </ul> <p>Now you can &#8220;git clone&#8221; this repository once and just &#8220;git pull&#8221; every now and then to see what updates &amp; fixes I have made.</p> <p>Also if you like my scripts, make sure you &#8220;Star&#8221; this repository in Github too &#8211; the more stars it gets, the more updates I will commit! 
;-)</p> <p><a href="https://github.com/tanelpoder" target="_blank" rel="noopener"><img class="alignnone wp-image-3812 size-large" src="https://i2.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/github_star.png?resize=800%2C202" alt="" width="800" height="202" srcset="https://i2.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/github_star.png?resize=1024%2C258&amp;ssl=1 1024w, https://i2.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/github_star.png?resize=300%2C76&amp;ssl=1 300w, https://i2.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/github_star.png?resize=768%2C194&amp;ssl=1 768w, https://i2.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/github_star.png?resize=50%2C13&amp;ssl=1 50w, https://i2.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/github_star.png?w=1600 1600w" sizes="(max-width: 800px) 100vw, 800px" data-recalc-dims="1" /></a></p> <p>While &#8220;git clone&#8221; is a recommended method for getting your own workstation copy of the repository now, your servers might not have git installed (and no direct internet access), so you can still download a zipfile of everything in this repo too:</p> <ul> <li><a href="https://github.com/tanelpoder/tpt-oracle/archive/master.zip">https://github.com/tanelpoder/tpt-oracle/archive/master.zip</a></li> </ul> <p>You can still directly access individual scripts too using links like the ones below. 
For example, if you want to run fish.sql to display an awesome <em>SQL fish</em> in sqlplus, you can download this:</p> <ul> <li><a href="https://raw.githubusercontent.com/tanelpoder/tpt-oracle/master/fish.sql" target="_blank" rel="noopener">https://raw.githubusercontent.com/tanelpoder/tpt-oracle/master/fish.sql</a></li> </ul> <p><img class="alignnone size-large wp-image-3811" src="https://i0.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/fish.png?resize=800%2C831" alt="" width="800" height="831" srcset="https://i0.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/fish.png?resize=986%2C1024&amp;ssl=1 986w, https://i0.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/fish.png?resize=289%2C300&amp;ssl=1 289w, https://i0.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/fish.png?resize=768%2C797&amp;ssl=1 768w, https://i0.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/fish.png?resize=48%2C50&amp;ssl=1 48w, https://i0.wp.com/blog.tanelpoder.com/wp-content/uploads/2018/05/fish.png?w=1600 1600w" sizes="(max-width: 800px) 100vw, 800px" data-recalc-dims="1" /></p> <p>Or if you want to run something from a subdirectory, like ash/dashtop.sql for showing <em>ASH top</em> from the historical ASH data in DBA_HIST views, you can download this script from the ASH subdirectory:</p> <ul> <li><a href="https://raw.githubusercontent.com/tanelpoder/tpt-oracle/master/ash/dashtop.sql" target="_blank" rel="noopener">https://raw.githubusercontent.com/tanelpoder/tpt-oracle/master/ash/dashtop.sql</a></li> </ul> <p>Example output below:</p> <pre>SQL&gt; @ash/dashtop sql_opname,event2 username='SYS' DATE'2018-04-19' DATE'2018-04-20' Total Seconds AAS %This SQL_OPNAME EVENT2 FIRST_SEEN LAST_SEEN --------- ------- ------- -------------------- ------------------------------------------ ------------------- ------------------- 4930 .1 83% ON CPU 2018-04-19 18:00:04 2018-04-19 23:48:08 430 .0 7% SELECT ON CPU 2018-04-19 18:01:04 2018-04-19 23:49:48 290 .0 5% SELECT 
acknowledge over PGA limit 2018-04-19 18:00:34 2018-04-19 23:23:50 60 .0 1% UPSERT ON CPU 2018-04-19 18:00:04 2018-04-19 22:00:15 50 .0 1% UPSERT acknowledge over PGA limit 2018-04-19 18:00:04 2018-04-19 23:13:47 30 .0 1% CALL METHOD ON CPU 2018-04-19 18:00:24 2018-04-19 21:03:19 30 .0 1% control file sequential read 2018-04-19 18:56:42 2018-04-19 21:47:21 30 .0 1% log file parallel write 2018-04-19 21:03:19 2018-04-19 22:13:39 20 .0 0% CALL METHOD acknowledge over PGA limit 2018-04-19 18:00:24 2018-04-19 22:01:55 20 .0 0% DELETE db file sequential read 2018-04-19 20:46:54 2018-04-19 22:00:35 20 .0 0% SELECT db file sequential read 2018-04-19 22:01:05 2018-04-19 22:01:35 10 .0 0% INSERT ON CPU 2018-04-19 19:50:28 2018-04-19 19:50:28 10 .0 0% INSERT acknowledge over PGA limit 2018-04-19 20:43:12 2018-04-19 20:43:12 10 .0 0% SELECT db file scattered read 2018-04-19 23:03:55 2018-04-19 23:03:55 10 .0 0% LGWR any worker group 2018-04-19 21:03:19 2018-04-19 21:03:19 10 .0 0% control file parallel write 2018-04-19 21:05:59 2018-04-19 21:05:59 16 rows selected. 
</pre> <p>Now that I have this stuff in Github, I plan to update my scripts a bit more regularly &#8211; and you can follow the repository to get real time updates whenever I push something new.</p> <p>As a next step I&#8217;ll convert my blog from WordPress to static hosting (Hugo) hopefully over this weekend, so you might see a few blog template/webserver glitches in the next few days.</p> <div class="crp_related "><h4>Related Posts</h4><ul><li><a href="https://blog.tanelpoder.com/2015/11/10/troubleshooting-another-complex-performance-issue-oracle-direct-path-inserts-and-seg-contention/" ><span class="crp_title">Troubleshooting Another Complex Performance Issue &#8211; Oracle direct path inserts and&hellip;</span></a></li><li><a href="https://blog.tanelpoder.com/2015/10/09/advanced-oracle-troubleshooting-v2-5-with-12c-stuff-too/" ><span class="crp_title">Advanced Oracle Troubleshooting v2.5 (with 12c stuff too)</span></a></li><li><a href="https://blog.tanelpoder.com/2017/11/29/advanced-oracle-troubleshooting-seminar-in-2018/" ><span class="crp_title">Advanced Oracle Troubleshooting seminar in 2018!</span></a></li><li><a href="https://blog.tanelpoder.com/seminar/aot-setup/" ><span class="crp_title">Installation and Setup for the Advanced Oracle Troubleshooting class</span></a></li><li><a href="https://blog.tanelpoder.com/2015/04/24/advanced-oracle-troubleshooting-guide-part-12-control-file-parallel-reads-causing-enq-sq-contention-waits/" ><span class="crp_title">Advanced Oracle Troubleshooting Guide &#8211; Part 12: control file reads causing enq: SQ&hellip;</span></a></li></ul><div class="crp_clear"></div></div> Tanel Poder http://blog.tanelpoder.com/?p=3810 Fri May 18 2018 16:13:22 GMT-0400 (EDT) ODTUG User Group Integrating New Tools to Be More Data Driven https://www.odtug.com/p/bl/et/blogaid=802&source=1 ODTUG User Group Integrating New Tools to Be More Data Driven:Oracle Essbase Cloud and Data Visualization Support Annual User Group Conference Planning ODTUG 
https://www.odtug.com/p/bl/et/blogaid=802&source=1 Fri May 18 2018 14:36:56 GMT-0400 (EDT) SQL Developer Tips & Tricks: The GIF https://www.thatjeffsmith.com/archive/2018/05/sql-developer-tips-tricks-the-gif/ <p>Not able to make my Tips &#038; Tricks talk at the Great Lakes Oracle User Group conference this week?</p> <p>Don&#8217;t have the 50 minutes to watch my recorded <a href="https://www.youtube.com/watch?v=-FkM3ByuPYA&#038;t=12s" rel="noopener" target="_blank">YouTube version</a>?</p> <p>Can you spare 90 seconds?</p> <p>You don&#8217;t even need to turn the sound on.</p> <p>Just watch this GIF.</p> <div id="attachment_6633" style="width: 810px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/tips-gif.gif"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/tips-gif.gif" alt="" width="800" height="450" class="size-full wp-image-6633" /></a><p class="wp-caption-text">5 or 6 tricks, in a single GIF!</p></div> <p>To recap:</p> <ul> <li>ALT+G to do a <a href="https://www.thatjeffsmith.com/archive/2017/04/searching-source-code-your-views-in-sql-developer/" rel="noopener" target="_blank">search</a></li> <li>Use <a href="https://www.thatjeffsmith.com/archive/2014/05/how-can-i-see-tables-from-two-different-connections-in-oracle-sql-developer/" rel="noopener" target="_blank">Doc Tab Groups</a> to see multiple things at once</li> <li>Ctrl+Space to <a href="https://www.thatjeffsmith.com/archive/2015/03/continued-improvements-to-code-completion-insight/" rel="noopener" target="_blank">auto-complete</a></li> <li><a href="https://www.thatjeffsmith.com/archive/2011/11/sql-developer-quick-tip-filtering-your-data-grids/" rel="noopener" target="_blank">Filter grids</a> with WHERE clause predicates</li> <li>Drag and drop items to get a comma-delimited list</li> <li>Ctrl+F7 to <a href="https://www.thatjeffsmith.com/archive/2018/04/18-1-new-formatting-option-right-align-query-keywords/" rel="noopener"
target="_blank">format your code</a></li> </ul> thatjeffsmith https://www.thatjeffsmith.com/?p=6632 Fri May 18 2018 10:40:34 GMT-0400 (EDT) Bitmap Join Indexes https://jonathanlewis.wordpress.com/2018/05/18/bitmap-join-indexes-2/ <p>I&#8217;ve been prompted by <a href="http://www.freelists.org/post/oracle-l/bitmap-index-is-not-used-when-the-actual-consistent-gets-is-lower"><em><strong>a recent question on the ODC database forum</strong></em></a> to revisit <a href="https://jonathanlewis.wordpress.com/2013/12/09/bitmap-join-indexes/"><em><strong>a note I wrote nearly five years ago</strong></em></a> about bitmap join indexes and their failure to help with join cardinalities. At the time I made a couple of unsupported claims and suggestions without supplying any justification or proof. Today&#8217;s article finally fills that gap.</p> <p>The problem is this &#8211; I have a column which exhibits an extreme skew in its data distribution, but it&#8217;s in a &#8220;fact&#8221; table where most columns are meaningless ids and I have to join to a dimension table on its primary key to translate an <em><strong>id</strong></em> into a <em><strong>name</strong></em>. While there is a histogram on the column in the fact table the information in the histogram ceases to help if I do the join to the dimension and query by name, and the presence of a bitmap join index doesn&#8217;t make any difference. 
Let&#8217;s see this in action &#8211; some of the code follows a different pattern and format from my usual style because I started by copying and editing the example supplied in the database forum:</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: bitmap_join_4.sql rem Author: Jonathan Lewis rem Dated: May 2018 rem rem Last tested rem 12.2.0.1 rem 12.1.0.2 rem 11.2.0.4 rem rem Notes: rem Bitmap join indexes generate virtual columns on the fact table rem but you can't get stats on those columns - which means if the rem data is skewed you can have a histogram on the raw column but rem you don't have a histogram on the bitmap virtual column. rem drop table t1; drop table dim_table; create table dim_table (type_code number, object_type varchar2(10)); insert into dim_table values (1,'TABLE'); insert into dim_table values (2,'INDEX'); insert into dim_table values (3,'VIEW'); insert into dim_table values (4,'SYNONYM'); insert into dim_table values (5,'OTHER'); alter table dim_table add constraint dim_table_pk primary key (type_code) using index; exec dbms_stats.gather_table_stats(user,'dim_table',cascade=&gt;true); create table t1 nologging as select object_id, object_name, decode(object_type, 'TABLE',1,'INDEX',2,'VIEW',3,'SYNONYM',4,5) type_code from all_objects where rownum &lt;= 50000 -- &gt; comment to bypass wordpress format issue ; insert into t1 select * from t1; insert into t1 select * from t1; insert into t1 select * from t1; create bitmap index t1_b1 on t1(dt.object_type) from t1, dim_table dt where t1.type_code = dt.type_code ; exec dbms_stats.gather_table_stats(null, 't1', cascade=&gt;true, method_opt=&gt;'for all columns size 254'); select dt.object_type, count(*) from t1, dim_table dt where t1.type_code = dt.type_code group by dt.object_type order by dt.object_type ; </pre> <p>I&#8217;ve started with a dimension table that lists 5 type codes and has a primary key on that type code; then I&#8217;ve used 
<em><strong>all_objects</strong></em> to generate a table of 400,000 rows using those type codes, and I&#8217;ve created a bitmap join index on the fact (<em><strong>t1</strong></em>) table based on the dimension (<em><strong>dim_table</strong></em>) table column. By choice the distribution of the five codes is massively skewed so after gathering stats (including histograms on all columns) for the table I&#8217;ve produced a simple aggregate report of the data showing how many rows there are of each type &#8211; by name. Here are the results &#8211; with the execution plan from 12.1.0.2 showing the benefit of the <em>&#8220;group by placement&#8221;</em> transformation:</p> <pre class="brush: plain; title: ; notranslate"> OBJECT_TYP COUNT(*) ---------- ---------- INDEX 12960 OTHER 150376 SYNONYM 177368 TABLE 12592 VIEW 46704 5 rows selected. ------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost | ------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 735 | | 1 | SORT GROUP BY | | 5 | 125 | 735 | |* 2 | HASH JOIN | | 5 | 125 | 720 | | 3 | VIEW | VW_GBF_7 | 5 | 80 | 717 | | 4 | HASH GROUP BY | | 5 | 15 | 717 | | 5 | TABLE ACCESS FULL| T1 | 400K| 1171K| 315 | | 6 | TABLE ACCESS FULL | DIM_TABLE | 5 | 45 | 2 | ------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access(&quot;ITEM_1&quot;=&quot;DT&quot;.&quot;TYPE_CODE&quot;) </pre> <p>Having established the basic result we can now examine some execution plans to see how well the optimizer is estimating cardinality for queries relating to that skewed distribution. 
I&#8217;m going to generate the execution plans for a simple select of all the rows of type &#8216;TABLE&#8217; &#8211; first by code, then by name, showing the execution plan of each query:</p> <pre class="brush: plain; title: ; notranslate"> explain plan for select t1.object_id from t1 where t1.type_code = 1 ; select * from table(dbms_xplan.display(null,null,'outline')); -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 12592 | 98K| 281 (8)| 00:00:01 | |* 1 | TABLE ACCESS FULL| T1 | 12592 | 98K| 281 (8)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(&quot;T1&quot;.&quot;TYPE_CODE&quot;=1) </pre> <p>Thanks to the histogram I generated on the <em><strong>type_code</strong></em> column, the optimizer&#8217;s estimate of the number of rows is very accurate.
So how well does the optimizer handle the join statistics:</p> <pre class="brush: plain; title: ; notranslate"> prompt ============= prompt Unhinted join prompt ============= explain plan for select t1.object_id from t1, dim_table dt where t1.type_code = dt.type_code and dt.object_type = 'TABLE' ; select * from table(dbms_xplan.display(null,null,'outline')); -------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 80000 | 1328K| 287 (10)| 00:00:01 | |* 1 | HASH JOIN | | 80000 | 1328K| 287 (10)| 00:00:01 | |* 2 | TABLE ACCESS FULL| DIM_TABLE | 1 | 9 | 2 (0)| 00:00:01 | | 3 | TABLE ACCESS FULL| T1 | 400K| 3125K| 277 (7)| 00:00:01 | -------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - access(&quot;T1&quot;.&quot;TYPE_CODE&quot;=&quot;DT&quot;.&quot;TYPE_CODE&quot;) 2 - filter(&quot;DT&quot;.&quot;OBJECT_TYPE&quot;='TABLE') </pre> <p>Taking the default execution path the optimizer&#8217;s estimate of rows identified by type <em><strong>name</strong> </em>is 80,000 &#8211; which is one fifth of the total number of rows. 
Oracle knows that the <em><strong>type_code</strong></em> is skewed in <em><strong>t1</strong></em>, but at compile time doesn&#8217;t have any idea which <em><strong>type_code</strong></em> corresponds to type &#8216;TABLE&#8217;, so it&#8217;s basically using the number of distinct values to dictate the estimate.</p> <p>We could try hinting the query to make sure it uses the bitmap join index &#8211; just in case this somehow helps the optimizer (and we&#8217;ll see in a moment why we might have this hope, and why it is forlorn):</p> <pre class="brush: plain; title: ; notranslate"> prompt =================== prompt Hinted index access prompt =================== explain plan for select /*+ index(t1 t1_b1) */ t1.object_id from t1, dim_table dt where t1.type_code = dt.type_code and dt.object_type = 'TABLE' ; select * from table(dbms_xplan.display(null,null,'outline')); --------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 80000 | 625K| 687 (1)| 00:00:01 | | 1 | TABLE ACCESS BY INDEX ROWID BATCHED| T1 | 80000 | 625K| 687 (1)| 00:00:01 | | 2 | BITMAP CONVERSION TO ROWIDS | | | | | | |* 3 | BITMAP INDEX SINGLE VALUE | T1_B1 | | | | | --------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - access(&quot;T1&quot;.&quot;SYS_NC00004$&quot;='TABLE') </pre> <p>The plan tells us that the optimizer now realises that it doesn&#8217;t need to reference the dimension table at all &#8211; all the information it needs is in the <em><strong>t1</strong></em> table and its bitmap join index &#8211; but it still comes up with an estimate of 80,000 for the number of rows. 
The predicate section tells us what to do next &#8211; it identifies a system-generated column, which is the virtual column underlying the bitmap join index: let&#8217;s see what the stats on that column look like:</p> <pre class="brush: plain; title: ; notranslate"> select column_name, histogram, num_buckets, num_distinct, num_nulls, sample_size from user_tab_cols where table_name = 'T1' order by column_id ; COLUMN_NAME HISTOGRAM NUM_BUCKETS NUM_DISTINCT NUM_NULLS SAMPLE_SIZE -------------------- --------------- ----------- ------------ ---------- ----------- OBJECT_ID HYBRID 254 50388 0 5559 OBJECT_NAME HYBRID 254 29224 0 5560 TYPE_CODE FREQUENCY 5 5 0 400000 SYS_NC00004$ NONE 4 rows selected. </pre> <p>There are no stats on the virtual column &#8211; and Oracle won&#8217;t try to collect any, and even if you write some in (using <em><strong>dbms_stats.set_column_stats</strong></em>) it won&#8217;t use them for the query. The optimizer seems to be coded to use the number of distinct keys from the index in this case.</p> <h3>Workaround</h3> <p>It&#8217;s very disappointing that there seems to be no official way to work around this problem &#8211; but Oracle has their own (undocumented) solution to the problem that comes into play with OLAP &#8211; the hint <em><strong>/*+ precompute_subquery() */</strong></em>. 
It&#8217;s possible to tell the optimizer to execute certain types of subquery as the first stage of optimising a query, then changing the query to take advantage of the resulting data:</p> <pre class="brush: plain; title: ; notranslate"> explain plan for select /*+ qb_name(main) precompute_subquery(@subq) */ t1.object_id from t1 where t1.type_code in ( select /*+ qb_name(subq) */ dt.type_code from dim_table dt where dt.object_type = 'TABLE' ) ; select * from table(dbms_xplan.display(null,null,'outline')); -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 12592 | 98K| 281 (8)| 00:00:01 | |* 1 | TABLE ACCESS FULL| T1 | 12592 | 98K| 281 (8)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(&quot;T1&quot;.&quot;TYPE_CODE&quot;=1) </pre> <p>Oracle hasn&#8217;t optimized the query I wrote, instead it has executed the subquery, derived a (very short, in this case) list of values, then optimized and executed the query I first wrote using the constant(s) returned by the subquery. And you can&#8217;t see the original subquery in the execution plan. Of course, with the literal values in place, the cardinality estimate is now correct.</p> <p>It&#8217;s such a pity that this hint is undocumented, and one that you shouldn&#8217;t use in production.</p> <p>&nbsp;</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18291 Fri May 18 2018 09:29:49 GMT-0400 (EDT) Emerging Tech Unconference Session & Changing Landscape Panels https://www.odtug.com/p/bl/et/blogaid=801&source=1 The tech world is constantly evolving and so is ODTUG Kscope! 
This year, we are introducing a new Emerging Tech unconference breakout session and careers and changing landscape panels for DBA/Developers and EPM/BI. ODTUG https://www.odtug.com/p/bl/et/blogaid=801&source=1 Thu May 17 2018 13:38:59 GMT-0400 (EDT) JDev/ADF sample - Microservice Approach for Web Development - Micro Frontends http://andrejusb-samples.blogspot.com/2018/05/jdevadf-sample-microservice-approach.html <div dir="ltr" style="text-align: left;" trbidi="on"><ul><li><a href="https://andrejusb.blogspot.lt/2018/05/microservice-approach-for-web.html" target="_blank">Microservice Approach for Web Development - Micro Frontends</a>. Wondering what micro frontends term means? Check micro frontends description here. Simply speaking, micro frontend must implement business logic from top to bottom (database, middleware and UI) in isolated environment, it should be reusable and pluggable into main application UI shell. There must be no shared variables between micro frontends. Advantage - distributed teams can work on separate micro frontends, this improves large and modular system development. There is runtime advantage too - if one of the frontends stops working, main application should continue to work.</li></ul><ol>Download - <a href="https://github.com/abaranovskis-redsamurai/warsaw" target="_blank">GitHub</a></ol></div> Andrejus Baranovskis tag:blogger.com,1999:blog-4301764760924839143.post-8698757766798328164 Thu May 17 2018 13:26:00 GMT-0400 (EDT) SOA Suite 12c in Docker containers. Only a couple of commands, no installers, no third party scripts https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/ <p>For developers, installing a full blown local SOA Suite environment has never been a favorite (except for a select few). It is time consuming and requires you to download and run various installers after each other. 
If you want to start clean (and you haven&#8217;t taken precautions), you may have to start all over again.</p> <p>There is a new and easy way to get a SOA Suite environment up and running without downloading any installers, in only a couple of commands, without depending on scripts provided by any party other than Oracle. The resulting environment consists of an Oracle Enterprise Edition database, an Admin Server and a Managed Server, all running in separate Docker containers with ports exposed to the host. The 3 containers can run together within an 8 GB RAM VM.</p> <p>The documentation Oracle provides in its Container Registry for the SOA Suite images should be used as a base, but since you will encounter some errors if you follow it, you can use this blog post to help you solve them quickly.</p> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?ssl=1"><img data-attachment-id="48773" data-permalink="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/capture01-15/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?fit=1540%2C691&amp;ssl=1" data-orig-size="1540,691" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture01" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?fit=300%2C135&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?fit=702%2C315&amp;ssl=1" 
class="aligncenter size-medium wp-image-48773" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?resize=300%2C135&#038;ssl=1" alt="" width="300" height="135" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?resize=300%2C135&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?resize=768%2C345&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?resize=1024%2C459&amp;ssl=1 1024w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?w=1540&amp;ssl=1 1540w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture01.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p><span id="more-48766"></span></p> <h1>A short history</h1> <h2>QuickStart and different installers</h2> <p>In the 11g days, a developer who wanted to run a local environment needed to install a database (usually XE), WebLogic Server and the SOA Infrastructure, run the Repository Creation Utility (RCU) and install one or more of SOA, BPM and OSB. In 12c, the SOA Suite QuickStart was introduced. The QuickStart uses an Apache Derby database instead of the Oracle database and lacks features like ESS, a split Admin Server / Managed Server, NodeManager and several other features, making this environment not really comparable to customer environments. If you wanted to install a standalone version, you still needed to go through all the manual steps or automate them yourself (with response files for the installers and WLST files for domain creation). 
As an alternative, during these times, Oracle has been so kind as to provide VirtualBox images (like <a href="http://www.oracle.com/technetwork/middleware/soasuite/learnmore/soa-vm-2870913.html">this one</a> or <a href="http://www.oracle.com/technetwork/middleware/soasuite/learnmore/prebuiltvm-soasuite122110-3070567.html">this one</a>) with everything pre-installed. For more complex set-ups, Edwin Biemond / Lucas Jellema have provided <a href="https://technology.amis.nl/2014/07/31/rapid-creation-of-virtual-machines-for-soa-suite-12-1-3-server-run-time-environment-leveraging-vagrant-puppet-and-biemond/">Vagrant files and blog posts</a> to quickly create a 12c environment.</p> <h2>Docker</h2> <p>One of the benefits of running SOA Suite in Docker containers is that the software is isolated in the container. You can quickly remove and recreate domains. Also, in general, Docker is more resource efficient compared to, for example, VMware, VirtualBox or Oracle VM, and the containers are easily shippable to other environments/machines.</p> <h2>Dockerfiles</h2> <p>Docker has become very popular and there have been several efforts to run SOA Suite in Docker containers. At first these efforts were made by people who created their own Dockerfiles and used the installers and response files to create images. Later Oracle provided their own Dockerfiles, but you still needed the installers from <a href="https://edelivery.oracle.com">edelivery.oracle.com</a> and had to build the images yourself. The official Oracle-provided Docker files can be found on GitHub <a href="https://github.com/oracle/docker-images">here</a>.</p> <h2>Container Registry</h2> <p>Oracle introduced its <a href="https://container-registry.oracle.com/">Container Registry</a> recently (at the start of 2017). The Container Registry is a Docker Registry which contains prebuilt images, not just Dockerfiles. 
First the Oracle Database appeared, then WebLogic and the SOA Infrastructure, and now (May 2018) the complete SOA Suite.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?ssl=1"><img data-attachment-id="48775" data-permalink="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/capture02-14/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?fit=942%2C788&amp;ssl=1" data-orig-size="942,788" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture02" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?fit=300%2C251&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?fit=702%2C587&amp;ssl=1" class="aligncenter size-medium wp-image-48775" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?resize=300%2C251&#038;ssl=1" alt="" width="300" height="251" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?resize=300%2C251&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?resize=768%2C642&amp;ssl=1 768w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture02.png?w=942&amp;ssl=1 942w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p>How do you use this? You link your OTN account to the Container Registry. This needs to be done only once. 
Next you can accept the license agreement for the images you would like to use. The Container Registry contains a useful description with every image on how to use it and what can be configured. Keep in mind that since the Container Registry has recently been restructured, names of images have changed and not all manuals have been updated yet. That is also why you want to tag the images, so you can access them locally in a consistent way.</p> <h1>Download and run!</h1> <p>For SOA Suite, you need to accept the agreement for the Enterprise Edition database and SOA Suite. You don&#8217;t need the SOA Infrastructure; it is part of the SOA Suite image.</p> <h2>Login</h2> <pre class="brush: plain; title: ; notranslate">docker login -u OTNusername -p OTNpassword container-registry.oracle.com</pre> <h2>Pull, tag, create env files</h2> <p>Pulling the images can take a while&#8230; (it can take hours on Wi-Fi). The commands for pulling differ slightly from the examples given in the image documentation in the Container Registry because image names have recently changed. For consistent access, tag them.</p> <h3>Database</h3> <pre class="brush: plain; title: ; notranslate">docker pull container-registry.oracle.com/database/enterprise:12.2.0.1 docker tag container-registry.oracle.com/database/enterprise:12.2.0.1 oracle/database:12.2.0.1-ee</pre> <p>The database requires a configuration file. However, the settings in this file are not correctly applied by the installation that is executed when a container is created from the image. 
I&#8217;ve updated the configuration file to reflect what is actually created:</p> <pre class="brush: plain; title: ; notranslate">db.env.list ORACLE_SID=orclcdb ORACLE_PDB=orclpdb1 ORACLE_PWD=Oradoc_db1</pre> <h3>SOA Suite</h3> <pre class="brush: plain; title: ; notranslate">docker pull container-registry.oracle.com/middleware/soasuite:12.2.1.3 docker tag container-registry.oracle.com/middleware/soasuite:12.2.1.3 oracle/soa:12.2.1.3</pre> <p>The Admin Server also requires a configuration file:</p> <pre class="brush: plain; title: ; notranslate">adminserver.env.list CONNECTION_STRING=soadb:1521/ORCLPDB1.localdomain RCUPREFIX=SOA1 DB_PASSWORD=Oradoc_db1 DB_SCHEMA_PASSWORD=Welcome01 ADMIN_PASSWORD=Welcome01 MANAGED_SERVER=soa_server1 DOMAIN_TYPE=soa</pre> <p>As you can see, you can use the same database for multiple SOA schemas, since the RCU prefix is configurable.</p> <p>The Managed Server also requires a configuration file:</p> <pre class="brush: plain; title: ; notranslate">soaserver.env.list MANAGED_SERVER=soa_server1 DOMAIN_TYPE=soa ADMIN_HOST=soaas ADMIN_PORT=7001</pre> <p>Make sure the Managed Server mentioned in the Admin Server configuration file matches the Managed Server in the Managed Server configuration file. The Admin Server installation creates a boot.properties for the Managed Server. If the server name does not match, the Managed Server will not boot.</p> <h2>Create local folders and network</h2> <p>Since you might not want to lose your domain or database files when you remove your container and start it again, you can create a location on your host machine where the domain will be created and the database can store its files. Make sure the user running the containers has userid/groupid 1000 for the below commands to allow the user access to the directories. Run the below commands as root. 
They differ slightly from the manual since errors will occur if SOAVolume/SOA does not exist.</p> <pre class="brush: plain; title: ; notranslate">mkdir -p /scratch/DockerVolume/SOAVolume/SOA chown 1000:1000 /scratch/DockerVolume/SOAVolume/ chmod -R 700 /scratch/DockerVolume/SOAVolume/</pre> <p>Create a network for the database and SOA servers:</p> <pre class="brush: plain; title: ; notranslate">docker network create -d bridge SOANet</pre> <h2>Run</h2> <h3>Start the database</h3> <p>You&#8217;ll first need the database. You can run it by:</p> <pre class="brush: plain; title: ; notranslate">#Start the database docker run --name soadb --network=SOANet -p 1521:1521 -p 5500:5500 -v /scratch/DockerVolume/SOAVolume/DB:/opt/oracle/oradata --env-file /software/db.env.list oracle/database:12.2.0.1-ee</pre> <p>This installs and starts the database. db.env.list, which is described above, should be in /software in this case.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?ssl=1"><img data-attachment-id="48770" data-permalink="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/capture05-8/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?fit=846%2C760&amp;ssl=1" data-orig-size="846,760" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture05" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?fit=300%2C270&amp;ssl=1" 
data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?fit=702%2C631&amp;ssl=1" class="aligncenter size-medium wp-image-48770" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?resize=300%2C270&#038;ssl=1" alt="" width="300" height="270" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?resize=300%2C270&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?resize=768%2C690&amp;ssl=1 768w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture05.png?w=846&amp;ssl=1 846w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <h3>SOA Suite</h3> <p>In the examples documented, it is indicated that you can run the Admin Server and the Managed Server in separate containers. You can, and they will start up. However, the Admin Server cannot manage the Managed Server and the WebLogic Console / EM don&#8217;t show the Managed Server status. The configuration in the Docker container uses a single machine with a single host-name and indicates that both the Managed Server and the Admin Server run there. To fix this, I suggest two easy workarounds.</p> <h4>Port forwarding. Admin Server and Managed Server in separate containers</h4> <p>You can create a port-forward from the Admin Server to the Managed Server. This allows the WebLogic Console / EM and Admin Server to access the Managed Server at &#8216;localhost&#8217; within the Docker container on port 8001.</p> <pre class="brush: plain; title: ; notranslate">#This command starts an interactive shell which runs the Admin Server. Wait until it is up before continuing! 
docker run -i -t --name soaas --network=SOANet -p 7001:7001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list oracle/soa:12.2.1.3</pre> <p><img data-attachment-id="48769" data-permalink="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/capture04-8/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?fit=1526%2C482&amp;ssl=1" data-orig-size="1526,482" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture04" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?fit=300%2C95&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?fit=702%2C221&amp;ssl=1" class="aligncenter wp-image-48769 size-large" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?resize=702%2C221&#038;ssl=1" alt="" width="702" height="221" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?resize=1024%2C323&amp;ssl=1 1024w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?resize=300%2C95&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?resize=768%2C243&amp;ssl=1 768w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?w=1526&amp;ssl=1 1526w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture04.png?w=1404&amp;ssl=1 
1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></p> <pre class="brush: plain; title: ; notranslate">#This command starts an interactive shell which runs the Managed Server. docker run -i -t --name soams --network=SOANet -p 8001:8001 --volumes-from soaas --env-file /software/soaserver.env.list oracle/soa:12.2.1.3 &quot;/u01/oracle/dockertools/startMS.sh&quot;</pre> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture03.png?ssl=1"><img data-attachment-id="48768" data-permalink="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/capture03-10/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture03.png?fit=723%2C185&amp;ssl=1" data-orig-size="723,185" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture03" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture03.png?fit=300%2C77&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture03.png?fit=702%2C180&amp;ssl=1" class="aligncenter wp-image-48768 size-medium" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture03.png?resize=300%2C77&#038;ssl=1" alt="" width="300" height="77" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture03.png?resize=300%2C77&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture03.png?w=723&amp;ssl=1 723w" sizes="(max-width: 300px) 100vw, 
300px" data-recalc-dims="1" /></a></p> <pre class="brush: plain; title: ; notranslate">#The below commands install and run socat to do the port mapping from Admin Server port 8001 to Managed Server port 8001 docker exec -u root soaas yum -y install socat docker exec -d -u root soaas /usr/bin/socat TCP4-LISTEN:8001,fork TCP4:soams:8001</pre> <p>The container is very limited. It does not contain executables for ping, netstat, wget, ifconfig, iptables and several other common tools. socat seemed an easy solution (easier than iptables or SSH tunnels) to do port forwarding and it worked nicely.</p> <h4>Admin Server and Managed Server in a single container</h4> <p>An alternative is to run both the Managed Server and the Admin Server in the same container. Here you start the Admin Server with both configuration files so all environment variables are available. Once the Admin Server is started, the Managed Server can be started in a separate shell with docker exec.</p> <pre class="brush: plain; title: ; notranslate">#Start Admin Server docker run -i -t --name soaas --network=SOANet -p 7001:7001 -p 8001:8001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list --env-file /software/soaserver.env.list oracle/soa:12.2.1.3</pre> <pre class="brush: plain; title: ; notranslate">#Start Managed Server docker exec -it soaas &quot;/u01/oracle/dockertools/startMS.sh&quot;</pre> <h4>Start the NodeManager</h4> <p>If you like (but you don&#8217;t have to), you can start the NodeManager in both set-ups like this:</p> <pre class="brush: plain; title: ; notranslate">docker exec -d soaas &quot;/u01/oracle/user_projects/domains/InfraDomain/bin/startNodeManager.sh&quot;</pre> <p>The NodeManager runs on port 5658.</p> <h1>How does it look?</h1> <p>A normal SOA Suite environment.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?ssl=1"><img data-attachment-id="48787" 
data-permalink="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/capture08-7/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?fit=1893%2C812&amp;ssl=1" data-orig-size="1893,812" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture08" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?fit=300%2C129&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?fit=702%2C301&amp;ssl=1" class="aligncenter size-medium wp-image-48787" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?resize=300%2C129&#038;ssl=1" alt="" width="300" height="129" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?resize=300%2C129&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?resize=768%2C329&amp;ssl=1 768w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?resize=1024%2C439&amp;ssl=1 1024w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture08.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p><a href="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?ssl=1"><img data-attachment-id="48786" 
data-permalink="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/capture07-7/" data-orig-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?fit=1888%2C659&amp;ssl=1" data-orig-size="1888,659" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture07" data-image-description="" data-medium-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?fit=300%2C105&amp;ssl=1" data-large-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?fit=702%2C245&amp;ssl=1" class="aligncenter size-medium wp-image-48786" src="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?resize=300%2C105&#038;ssl=1" alt="" width="300" height="105" srcset="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?resize=300%2C105&amp;ssl=1 300w, https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?resize=768%2C268&amp;ssl=1 768w, https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?resize=1024%2C357&amp;ssl=1 1024w, https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/05/Capture07.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/05/17/soa-suite-12c-in-docker-containers-only-a-couple-of-commands-no-installers-no-third-party-scripts/">SOA Suite 12c in Docker containers. 
Only a couple of commands, no installers, no third party scripts</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Maarten Smeets https://technology.amis.nl/?p=48766 Thu May 17 2018 11:58:30 GMT-0400 (EDT) Datascape Podcast Episode 27 – Oracle Cloud vs. the World https://blog.pythian.com/episode-27-oracle-cloud-vs-world/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>On today’s episode, we are going to take a closer look at the Oracle Cloud and see how Oracle&#8217;s cloud strategy stacks up against other cloud offerings. We will also dig deeper and try to understand some of the key technologies that Oracle has to offer. To do so, we have invited two Datascape regulars back on the podcast, Bjoern Rost and Simon Pane. Bjoern is an Oracle ACE Director and one of Pythian’s top Oracle experts. He has been working in IT for 15 years, and because of his love and passion for the Oracle Database, Bjoern has specialized in it and currently works as a Principal Consultant at Pythian. Our other guest, Simon, is an accomplished Principal Consultant, who has developed a multitude of complex solutions for Pythian clients. Simon is an Oracle ACE and Oracle Certified Professional and has experience with literally thousands of database environments at hundreds of client sites ranging from small single database implementations to large corporate enterprises. Stay tuned as we discuss the array of products that Oracle has to offer, how they integrate with Oracle Cloud, how they relate to other cloud offerings and which is best for your specific software needs. 
All this and more inside today’s episode.</p> <p><strong> Key Points From This Episode:</strong></p> <p>• Discover more about how the Oracle Cloud works.<br /> • The two different clouds offered by Oracle; Classic and OCI.<br /> • Learn how Oracle positions their cloud as not being “locked-in”.<br /> • Key technologies offered by the Oracle Cloud.<br /> • Why Oracle Cloud is the best place to run more complicated Oracle platforms.<br /> • Understand why Oracle Cloud is not really for the home hobbyist.<br /> • The value proposition for clients to use Oracle Cloud.<br /> • What the Autonomous Data Warehouse is, and what it really isn’t.<br /> • The rise of autonomous features within Oracle products.<br /> • Why there is a lack of interest from younger folks and students in Oracle.<br /> • Languages and API’s used on Oracle Cloud for automation.<br /> • The lack of standardization and integration between Oracle products.<br /> • Learn more about the migration services available with Oracle Cloud.<br /> • Hybrid options for running the database partly on-prem, partly in the Oracle Cloud.<br /> • The main areas of interest for customers of the Oracle Cloud.<br /> • Exploring the decision criteria for a typical Oracle database installation.<br /> • Understanding licensing in the Oracle Cloud.<br /> • And much more!</p> <p><iframe width="100%" height="166" scrolling="no" frameborder="no" allow="autoplay" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/442180278&#038;color=%23ff5500&#038;auto_play=false&#038;hide_related=false&#038;show_comments=true&#038;show_user=true&#038;show_reposts=false&#038;show_teaser=true"></iframe></p> <p><strong>Links Mentioned in Today’s Episode:</strong></p> <p><a href="https://ca.linkedin.com/in/bjoern-rost-1b1b5036">Bjoern Rost </a><br /> <a href="https://twitter.com/brost">Bjoern on Twitter</a><br /> <a href="https://www.linkedin.com/in/simonpane/">Simon Pane</a><br /> <a 
href="https://twitter.com/simonpane">Simon on Twitter</a><br /> <a href="https://www.oracle.com/database/">Oracle Database</a><br /> <a href="https://cloud.oracle.com/home">Oracle Cloud</a><br /> <a href="https://cloud.oracle.com/en_US/datawarehouse">Autonomous Data Warehouse Cloud service</a><br /> <a href="https://azure.microsoft.com">Azure</a><br /> <a href="https://cloud.google.com/">Google Cloud Platform (GCP)</a><br /> <a href="https://aws.amazon.com/">Amazon Web Service (AWS)</a><br /> <a href="https://www.digitalocean.com/">DigitalOcean</a><br /> <a href="https://www.oracle.com/ca-en/engineered-systems/exadata/index.html">Oracle Exadata</a><br /> <a href="https://www.snowflake.net">Snowflake</a><br /> <a href="https://cloud.google.com/bigquery/">BigQuery</a><br /> <a href="https://console.aws.amazon.com/redshift/home">Redshift</a><br /> <a href="https://azure.microsoft.com/en-us/solutions/data-warehouse/">Azure Data Warehouse</a><br /> <a href="https://aws.amazon.com/rds/aurora/">Aurora</a><br /> <a href="http://www.oracle.com/technetwork/database/availability/dataguardoverview-098960.html">Oracle Data Guard</a><br /> <a href="https://aws.amazon.com/rds/">Amazon Relational Database Service (RDS)</a><br /> <a href="https://www.oracle.com/openworld/index.html">Oracle OpenWorld 2018</a><br /> <a href="https://cloud.oracle.com/developer_service">Oracle Developer Cloud service</a><br /> <a href="https://jenkins.io/">Jenkins</a><br /> <a href="https://kubernetes.io/">Kubernetes</a><br /> <a href="https://www.terraform.io/">Terraform</a><br /> <a href="https://kafka.apache.org/">Kafka</a><br /> <a href="https://www.confluent.io/">Confluent</a><br /> <a href="https://cloud.oracle.com/event-hub">Event Hub Cloud Service</a><br /> <a href="https://developer.oracle.com/code">Oracle Code</a><br /> <a href="https://developer.oracle.com/">Oracle Developers Portal</a><br /> <a 
href="http://www.oracle.com/us/products/applications/peoplesoft-enterprise/overview/index.html">PeopleSoft</a><br /> <a href="http://www.oracle.com/us/products/applications/jd-edwards-enterpriseone/overview/index.html">J.D. Edwards</a><br /> <a href="https://aws.amazon.com/ebs/">Amazon Elastic Block Store (EBS)</a><br /> <a href="https://cloud.oracle.com/en_US/ravello">Ravello</a><br /> <a href="https://blog.pythian.com/cost-management-expenditure-alerts-oracle-cloud/">Cost management (expenditure alerts) in the Oracle Cloud</a></p> <p>&nbsp;</p> </div></div> Chris Presley https://blog.pythian.com/?p=104115 Thu May 17 2018 10:10:41 GMT-0400 (EDT) Microsoft Azure Managed Services Marketplace, Azure Data Factory V2 and More https://blog.pythian.com/cloudscape-podcast-in-review-episode-1-microsoft-azure-managed-services-marketplace-azure-data-factory-v2-and-more/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><a href="https://pythian.com/experts/chris-presley/">Chris Presley</a> recently kicked off his brand new Cloudscape podcast. 
This podcast focuses on different cloud vendors and shares their unique takes, approaches, and strategies as they relate to their areas of expertise.</p> <p>I was glad to be one of his guests in an episode where I shared some news on what was taking place in the world of Microsoft Azure.</p> <p>In this episode, we discussed the following topics as they relate to Microsoft Azure:</p> <p>Security Center<br /> Managed Service Marketplace<br /> Data Factory v2<br /> New Data Features<br /> Analysis Service as a Service<br /> Meltdown Vulnerability<br /> Azure Security Center<br /> Azure Security Center released new log analysis, new search tools, and more dashboarding.<br /> We’re always seeing info about threats and hacks and even though these things are constantly happening, some still think that cloud providers cannot secure their data better than they do.<br /> The fact of the matter is that many are not aware of how vulnerable their data may be, whether they are on prem or in the cloud.</p> <p>For example, I was accessing files through a VM I use for testing with SQL Server, which serves as my own AdventureWorks database. Once I opened it with a Remote Desktop Protocol (RDP) port on the internet, I started receiving alerts that somebody out of the Netherlands was port scanning my VM because they saw that my RDP port was open. After that, they started using the RDP port to constantly try to log into my VM using brute force. This shows us that these types of things can easily fly under the radar without us realizing that it’s happening.<br /> Luckily, I was aware of what was going on, so I set up a port range in my firewall for the VM. I chose to do this not out of fear of getting hacked, but because it was killing my machine&#8217;s CPU due to the constant logging of events.<br /> As I learned through that experience, it’s a great tool, with the main goal of making people more aware of all the threats out there. 
It seems that many providers are going in the direction of having a type of security dashboard approach for their customers.</p> <p>Managed Service Marketplace<br /> Azure Marketplace allows a user to package applications, and the latest innovation is that they have added the option to deploy a managed service solution. This will allow you to choose a solution from the marketplace that may not be a SaaS offering. It can be a product that you&#8217;re deploying with your Azure subscription, but it can also be managed by a third party.<br /> It also enables economic relationships between the consumer and those running their cloud operation. It used to be that the marketplace was only for software, but now you can pick a VM that has licensed software and a set of services attached to it.<br /> Azure Data Factory V2<br /> For anyone not familiar, Azure Data Factory v2 was released a few months ago. This release is a huge leap beyond what Data Factory v1 offered. In the past, I would tell people that Azure didn&#8217;t have a decent ETL offering; now that v2 is out, with the visual tools it’s even easier for people to develop, especially for people who have an SSIS background. People who have worked with Microsoft’s data stack ETL for years find that working with ADFv2 is simple because they are very similar.</p> <p>Essentially, it’s a drag-and-drop type of approach. The visual tools make it even easier for SSIS developers to adapt and work graphically, which many enjoy in ETL tools.<br /> At this point, the service can host SSIS as well. For clients who are 100% in the cloud, there&#8217;s really no reason not to adopt it, which is a good thing because v1 had the unfortunate situation of making people decide whether to go with the limitations of v1 or just put up some ETL software in a VM.
Now, v2 takes this issue away and users can build an entire end-to-end data platform solution with PaaS services.</p> <p>New Azure Data Features, Compatibility Level Default, and Analysis Services Changes<br /> This new compatibility level default makes it so every time you create an Azure SQL database, it automatically goes to compatibility level 140. It&#8217;s the same compatibility level as SQL 2017. The big difference is that this is the default, so if someone were to create something and start developing, they will automatically be on that level. Once there, the database will have access to the latest T-SQL and optimizer fixes.<br /> The interesting part about this is that the compatibility level was available in the cloud long before it became the default. It&#8217;s now a “cloud first” type of development, so when compatibility level 150 rolls out for SQL 2018, we&#8217;ll likely see it as optional first.<br /> The takeaway here is that you don&#8217;t have to worry so much about all the new versions and patching. These things will continuously roll out on their own without you having to take action.<br /> Analysis Services as a Service<br /> The Analysis Services managed offering is now deployed in even more regions around the world. You can now build end-to-end data platform solutions that are 100% platform as a service. Previously, if I needed an analytical model, I would have had to go in with a third-party tool or run Analysis Services from Microsoft in a VM. Now, we have Analysis Services as a service and it runs in even more regions so people can just use the PaaS service instead of having to deploy on a VM.<br /> Keep in mind, there are two ways to run Analysis Services. The first way is multi-dimensional, a cube style. The second way is with a tabular model. So it&#8217;s not something that’s new, but the number of people in the field who still deploy multi-dimensional cubes is surprising.
The cloud service doesn&#8217;t do multi-dimensional models at this point, but it is on the work queue of the Microsoft team.<br /> Meltdown Vulnerability<br /> Azure was affected by the recent Meltdown vulnerability, just like every other cloud provider. This exposed content from memory space that wasn’t yours, so you could potentially see the memory space of another VM that belonged to another customer. This was the main thing that got patched right away. The fix was quick; it didn’t take longer than a day.</p> <p>***</p> <p>This was a summary of the Azure topics we discussed during the podcast. Chris also welcomed Greg Baker (Amazon Web Services) and John Laham (Google Cloud Platform), who discussed topics related to their expertise.</p> <p>Listen to the entire conversation <a href="https://blog.pythian.com/cloudscape-ep1-cloud-vendor-news-january-2018/">here</a> and be sure to subscribe to the podcast to be notified when a new episode has been released.</p> <hr /> <p>Learn more about Pythian&#8217;s services for <a href="https://pythian.com/sql-server-consulting/">Microsoft</a> and <a href="https://pythian.com/microsoft-azure/">Microsoft Azure</a>.</p> </div></div> Warner Chaves https://blog.pythian.com/?p=104131 Wed May 16 2018 16:16:35 GMT-0400 (EDT) RDP Issue – CredSSP Encryption Oracle Remediation https://blog.pythian.com/rdp-issue-credssp-encryption-oracle-remediation/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><h1>Introduction</h1> <p>We have recently had some issues accessing a few client servers and found the cause to be a Windows security update released earlier in May 2018.
The problem is that when you try to RDP to a server, you can receive an error similar to this:</p> <blockquote><p> An authentication error has occurred.<br /> The function requested is not supported</p> <p>Remote computer: &lt;computer/hostname&gt;<br /> This could be due to CredSSP encryption oracle remediation.<br /> For more information, see https://go.microsoft.com/fwlink/?linkid=866660 </p></blockquote> <div id="attachment_104142" style="width: 493px" class="wp-caption aligncenter"><img src="https://blog.pythian.com/wp-content/uploads/rdpissue_error.png" alt="" width="483" height="209" class="size-full wp-image-104142" srcset="https://blog.pythian.com/wp-content/uploads/rdpissue_error.png 483w, https://blog.pythian.com/wp-content/uploads/rdpissue_error-465x201.png 465w, https://blog.pythian.com/wp-content/uploads/rdpissue_error-350x151.png 350w" sizes="(max-width: 483px) 100vw, 483px" /><p class="wp-caption-text">RDP error received</p></div> <p>The error is the result of an update, Common Vulnerabilities and Exposures <a href="https://support.microsoft.com/en-us/help/4093492/credssp-updates-for-cve-2018-0886-march-13-2018" target="_blank">CVE-2018-0886</a>, being applied to the client machine and not the target servers (the client&#8217;s production servers). This update adjusts the configuration of the credentials delegation on the system. It changes the configuration from <strong>vulnerable</strong> to <strong>mitigated</strong>, which causes RDP access to be blocked if the system you are connecting to is set to a lower configuration.</p> <h1>CVE Details</h1> <p><em>This update applies to Windows 7 and up for desktop and Windows Server 2008 and higher.</em></p> <p>In Windows Server 2016 and 2012 R2, we found this update included in the May rollup update. The following are the two KB links for Windows 8.1 up to Windows Server 2016.
If these get applied to your Windows 8.1 or Windows 10 desktop and not the servers, you will lose RDP access:</p> <ul> <li><a href="https://support.microsoft.com/en-us/help/4103723/windows-10-update-kb4103723" rel="noopener" target="_blank">KB4103723</a> &#8211; Windows 10 (1607), Windows Server 2016</li> <li><a href="https://support.microsoft.com/en-us/help/4103725/windows-81-update-kb4103725" rel="noopener" target="_blank">KB4103725</a> &#8211; Windows 8.1, Windows Server 2012 R2</li> </ul> <h1>Resolution</h1> <p>The end goal is to apply the update to all of the target servers so the security vulnerability is patched properly. If you utilize a management system for Windows Update (e.g., WSUS), you can push the update to the specific targets using that service. The update requires a reboot of the target server to take effect.</p> <p>An interim approach is to set the credential delegation back to vulnerable on your workstation; this restores access until you can apply the same patch to your servers.</p> <h3>Change Credential Delegation to Vulnerable</h3> <p>You will need to do this logged in as a domain account that has elevated privileges on the workstation or server. Open a run prompt (Windows Key + R) and enter <code>gpedit.msc</code>.
Go to Computer Configuration > Administrative Templates > System > Credentials Delegation:</p> <div id="attachment_104143" style="width: 308px" class="wp-caption aligncenter"><img src="https://blog.pythian.com/wp-content/uploads/rdpissue_gpedit_1.png" alt="" width="298" height="406" class="size-full wp-image-104143" /><p class="wp-caption-text">Credential Delegation policy</p></div> <p>Open the setting &#8220;Encryption Oracle Remediation,&#8221; then select &#8220;Enabled&#8221; and set the &#8220;Protection Level&#8221; to &#8220;Vulnerable&#8221;:</p> <div id="attachment_104144" style="width: 450px" class="wp-caption aligncenter"><img src="https://blog.pythian.com/wp-content/uploads/rdpissue_gpedit_2.png" alt="" width="440" height="384" class="size-full wp-image-104144" srcset="https://blog.pythian.com/wp-content/uploads/rdpissue_gpedit_2.png 440w, https://blog.pythian.com/wp-content/uploads/rdpissue_gpedit_2-350x305.png 350w" sizes="(max-width: 440px) 100vw, 440px" /><p class="wp-caption-text">Protection Level to Vulnerable</p></div> <p>Once you click OK you will then be able to RDP to the target servers again.</p> <h3>Add Registry Key</h3> <p>If you are not able to access the Group Policy editor on the source/client machine, you can simply add a registry value to perform the same task as above and temporarily regain access to your servers (the CredSSP <code>Parameters</code> key may need to be created first):</p> <pre lang="powershell"> New-Item -Path 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters' -Force
New-ItemProperty -Path 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters' -Name AllowEncryptionOracle -Value 2 -PropertyType DWORD -Force </pre> <h1>Post Patch Deployment</h1> <p>Once you have pushed the patch out to the servers you will need to &#8220;unconfigure&#8221; the Group Policy. Simply go back into that setting and select &#8220;Not Configured&#8221; and click OK.
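If the reverted policy does not seem to apply right away, a refresh can be forced from an elevated prompt (this is the standard Windows policy-refresh command; a new RDP session may still be needed afterwards):

```shell
gpupdate /force
```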
You will then regain access to all the servers again.</p> <p>If you used the registry option, you can remove the registry value with the following command:</p> <pre lang="powershell"> Remove-ItemProperty -Path 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters' -Name AllowEncryptionOracle -Force </pre> </div></div> Shawn Melton https://blog.pythian.com/?p=104141 Wed May 16 2018 10:29:33 GMT-0400 (EDT) Oratop utility https://gavinsoorma.com/2018/05/oratop-utility/ <p>Oratop is a utility very similar in nature to the Unix &#8216;top&#8217; utility.</p> <p>It provides an almost real-time overview of both RAC as well as single-instance database performance and can be used in combination with top to get a more complete overview of system performance and identify and monitor any activity causing bottlenecks.</p> <p>Oratop is available as part of the <strong>Trace File Analyzer</strong> (TFA) Collector package and can also be downloaded via the MOS note <strong>1500864.1</strong>.</p> <p>What information does oratop show us? A lot!</p> <p>For example:</p> <ul> <li>Database instance activity like number of user sessions logged in, number of active sessions, SGA/PGA allocation and usage, I/O activity, Flash Recovery Area and TEMP usage</li> <li>Top 5 Wait Events &#8211; both real-time as well as since instance start</li> <li>Top user sessions and processes as well as sessions blocking others.
The database events sessions are waiting on, as well as latch activity</li> <li>Top SQL_IDs and SQL statements, elapsed time, rows returned</li> <li>SQL Execution Plan based on SQL_ID</li> <li>Tablespace usage information</li> <li>ASM Disk Group space usage information</li> </ul> <p>And much more!</p> <p>We can launch oratop via the TFA menu interface as shown below:</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/oratop2.png"><img class="aligncenter wp-image-8158" src="https://gavinsoorma.com/wp-content/uploads/2018/05/oratop2.png" alt="" width="526" height="296" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/oratop2.png 683w, https://gavinsoorma.com/wp-content/uploads/2018/05/oratop2-300x169.png 300w" sizes="(max-width: 526px) 100vw, 526px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/oratop3.png"><img class="aligncenter wp-image-8159" src="https://gavinsoorma.com/wp-content/uploads/2018/05/oratop3.png" alt="" width="443" height="433" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/oratop3.png 545w, https://gavinsoorma.com/wp-content/uploads/2018/05/oratop3-300x293.png 300w" sizes="(max-width: 443px) 100vw, 443px" /></a></p> <p>&nbsp;</p> <p>We can also launch oratop from the command line via tfactl.</p> <p>/u01/app/12.1.0.2/grid/tfa/aunmmorac1n1/tfa_home/bin/<strong>tfactl oratop -database &#8220;tppsnm1&#8221;</strong></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop1.png"><img class="aligncenter size-full wp-image-8140" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop1.png" alt="" width="1402" height="350" /></a></p> <p>&nbsp;</p> <p>There are a number of switches available in the oratop utility to enable us to toggle the information that is displayed.</p> <p>Let us have a look at some of them.</p> <p>By default, oratop will display the Top 5 Wait Events since instance start.
To view the information real-time we can use &#8216;<strong>d</strong>&#8216;.</p> <p>The Events are now shown real-time (RT) instead of the default (C).</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop2.png"><img class="aligncenter size-full wp-image-8140" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop2.png" alt="" width="1402" height="350" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop2.png 1402w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop2-300x75.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop2-768x192.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop2-1024x256.png 1024w" sizes="(max-width: 1402px) 100vw, 1402px" /></a></p> <p>&nbsp;</p> <p>Use &#8216;<strong>k</strong>&#8216; to toggle between Events/Latches and File Number and Data Block.<br /> <a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop3.png"><img class="aligncenter size-full wp-image-8141" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop3.png" alt="" width="1365" height="302" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop3.png 1365w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop3-300x66.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop3-768x170.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop3-1024x227.png 1024w" sizes="(max-width: 1365px) 100vw, 1365px" /></a></p> <p>&nbsp;</p> <p>Use &#8216;<strong>m</strong>&#8216; to toggle between display of Programs and display of Modules.</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop4.png"><img class="aligncenter size-full wp-image-8142" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop4.png" alt="" width="1400" height="369" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop4.png 1400w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop4-300x79.png 300w, 
https://gavinsoorma.com/wp-content/uploads/2018/05/otop4-768x202.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop4-1024x270.png 1024w" sizes="(max-width: 1400px) 100vw, 1400px" /></a><br /> <a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop5.png"><img class="aligncenter size-full wp-image-8143" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop5.png" alt="" width="1478" height="471" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop5.png 1478w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop5-300x96.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop5-768x245.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop5-1024x326.png 1024w" sizes="(max-width: 1478px) 100vw, 1478px" /></a></p> <p>Use &#8216;<strong>s</strong>&#8216; to toggle between Process Mode (default) and SQL Mode.</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop6.png"><img class="aligncenter size-full wp-image-8144" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop6.png" alt="" width="1367" height="372" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop6.png 1367w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop6-300x82.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop6-768x209.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop6-1024x279.png 1024w" sizes="(max-width: 1367px) 100vw, 1367px" /></a></p> <p>&nbsp;</p> <p>Use &#8216;<strong>t</strong>&#8216; to view Tablespace information.<br /> <a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop10.png"><img class="aligncenter size-full wp-image-8148" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop10.png" alt="" width="1309" height="803" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop10.png 1309w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop10-300x184.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop10-768x471.png 
768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop10-1024x628.png 1024w" sizes="(max-width: 1309px) 100vw, 1309px" /></a></p> <p>&nbsp;</p> <p>Use &#8216;<strong>a</strong>&#8216; for ASM Disk Group information.</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop11.png"><img class="aligncenter size-full wp-image-8149" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop11.png" alt="" width="1473" height="736" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop11.png 1473w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop11-300x150.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop11-768x384.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop11-1024x512.png 1024w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop11-940x470.png 940w" sizes="(max-width: 1473px) 100vw, 1473px" /></a></p> <p>&nbsp;</p> <p>Use &#8216;<strong>x</strong>&#8216;  to display the SQL Execution Plan for a specific SQL ID.</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop12.png"><img class="aligncenter size-full wp-image-8150" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop12.png" alt="" width="1430" height="748" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop12.png 1430w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop12-300x157.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop12-768x402.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop12-1024x536.png 1024w" sizes="(max-width: 1430px) 100vw, 1430px" /></a></p> <p>&nbsp;</p> <p>Use &#8216;<strong>i</strong>&#8216; to change the refresh or data gathering interval.</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop13.png"><img class="aligncenter size-full wp-image-8151" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop13.png" alt="" width="1399" height="428" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop13.png 1399w, 
https://gavinsoorma.com/wp-content/uploads/2018/05/otop13-300x92.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop13-768x235.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop13-1024x313.png 1024w" sizes="(max-width: 1399px) 100vw, 1399px" /></a></p> <p>&nbsp;</p> <p>Use &#8216;<strong>h</strong>&#8216; to display the Help menu. We can view the Help menu for each section individually.</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop14.png"><img class="aligncenter wp-image-8152" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop14.png" alt="" width="964" height="398" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop14.png 1459w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop14-300x124.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop14-768x317.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop14-1024x423.png 1024w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop14-825x340.png 825w" sizes="(max-width: 964px) 100vw, 964px" /></a><br /> <a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop15.png"><img class="aligncenter wp-image-8153" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop15.png" alt="" width="966" height="404" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop15.png 1115w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop15-300x126.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop15-768x322.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop15-1024x429.png 1024w" sizes="(max-width: 966px) 100vw, 966px" /></a><br /> <a href="https://gavinsoorma.com/wp-content/uploads/2018/05/otop16.png"><img class="aligncenter wp-image-8154" src="https://gavinsoorma.com/wp-content/uploads/2018/05/otop16.png" alt="" width="934" height="531" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/otop16.png 1082w, 
https://gavinsoorma.com/wp-content/uploads/2018/05/otop16-300x171.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop16-768x437.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/otop16-1024x582.png 1024w" sizes="(max-width: 934px) 100vw, 934px" /></a></p> Gavin Soorma https://gavinsoorma.com/?p=8155 Wed May 16 2018 03:23:28 GMT-0400 (EDT) ODTUG Kscope18 Update #3 https://www.odtug.com/p/bl/et/blogaid=800&source=1 We are really counting down to the conference now. Less than four weeks to go until the best Oracle technical conference of the year! Are you ready? ODTUG https://www.odtug.com/p/bl/et/blogaid=800&source=1 Tue May 15 2018 14:24:52 GMT-0400 (EDT) Time for #GLOC, #SQLSatDallas, #DataSummit18 http://dbakevlar.com/2018/05/time-for-gloc-sqlsatdallas-datasummit18/ <p>The next nine days, I’m traveling to three cities for four events. We’ll just call this the 9-3-4 gauntlet of speaker life. I booked this travel as four one-way flights to get the itinerary I needed to make the most of my schedule and will have breaks between each event to make sure I don’t kill myself my last two weeks at Delphix.</p> <p><a href="http://dbakevlar.com/2018/05/time-for-gloc-sqlsatdallas-datasummit18/fb46db14-5cab-4755-87bd-062e0e7a1a9e/" rel="attachment wp-att-7947"><img class="alignnone size-full wp-image-7947" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/05/FB46DB14-5CAB-4755-87BD-062E0E7A1A9E.gif?resize=595%2C364" alt="" width="595" height="364" data-recalc-dims="1" /></a></p> <p><b>GLOC</b></p> <p>Today I’m heading to the Great Lakes Oracle Conference (https://gloc.neooug.org/), where I get to present on DevOps and hang out with all my Oracle peeps. GLOC is the second largest regional Oracle user group event and closing fast on my region’s event, RMOUG Training Days.
They’ve done a great job embracing APEX into their program in the last year and I’m looking forward to being in Cleveland, a city that’s really starting to come into its own since the olden days, when the river caught on fire….(yeah, I went there.)</p> <p><b>SQL Saturday Dallas</b></p> <p>I’ll depart early on Thursday morning to arrive in Dallas, TX in time for the second event, the Dallas SQL User Group’s Speaker Idol (https://www.meetup.com/North-Texas-SQL-Server-User-Group/events/250612710/). I’ll be a judge, scoring and offering constructive (as well as insightful) feedback to speakers at this event. I promised I would, so don’t be afraid to come and speak! I attended my first Speaker Idol at Summit 2017 and really enjoyed the presentations, the quality of constructive advice to the speakers and the opportunity to receive that type of feedback.</p> <p>Dallas happens to be home to a number of great people in the MSSQL community and I’ll get to hang out with Mindy Cornett, Amy Herald, Jen and Sean McGown, who reside in the area, and I’ll be staying at an AirBnB with Tracy Borrgiano. I also have a pair of gorgeous Dr. Marten oxfords for Ms. Angela Tidwell. Yes, they fit in my 20in suitcase with 9 days’ worth of my own belongings.</p> <p>I’ll be staying the weekend, as I’m then speaking at SQL Saturday Dallas (http://www.sqlsaturday.com/734/eventhome.aspx), talking on DevOps, but this time to the MSSQL community. I’ll have Sunday to relax (could have gone home, but too late now!), maybe hang out with friends, but definitely check out the area. Jessica Sharp and Mary Elizabeth McNeeley will be in attendance, so it will be great to see everyone while I’m in Dallas.</p> <p><b>Data Summit 2018</b></p> <p>Monday I head back up north to Boston for DBTA’s Data Summit conference (http://www.dbta.com/DataSummit/2018/Default.aspx).
This time I’m speaking on DevOps with Big Data, focused on DataOps or in other words, “You can’t do anything if you can’t get your data along with it.” I’m kind of bummed I didn’t plan more time to visit in Boston, as it’s one of my favorite historical spots to invest time in. As some know, I’m a history buff and it just doesn’t seem right to not visit some of the awesome historical sites in Boston, but hell, after 9 days on the road, I’m going to be more than ready to head home!</p> <p>I’ll be one of the first on the plane next Thursday so I can get home and spend the next number of days working on my latest “hobby”- the trailer. I have some trim to put down for the new floors I installed this last weekend and painting to do. After that, it’s move in time! Stay tuned…</p> <p>Tags: <a href="http://dbakevlar.com/tag/big-data/" rel="tag">Big Data</a>, <a href="http://dbakevlar.com/tag/conferences/" rel="tag">Conferences</a>, <a href="http://dbakevlar.com/tag/oracle/" rel="tag">oracle</a>, <a href="http://dbakevlar.com/tag/sql-server/" rel="tag">SQL Server</a></p> <hr style="color:#EBEBEB" /><small>Copyright © <a href="http://dbakevlar.com">DBA Kevlar</a> [<a href="http://dbakevlar.com/2018/05/time-for-gloc-sqlsatdallas-datasummit18/">Time for #GLOC, #SQLSatDallas, #DataSummit18</a>], All Right Reserved. 2018.</small><br> dbakevlar http://dbakevlar.com/?p=7946 Tue May 15 2018 13:35:12 GMT-0400 (EDT) Trace File Analyzer 18.1.1 https://gavinsoorma.com/2018/05/trace-file-analyzer-18-1-1/ <p><strong>Trace File Analyzer Collector</strong> also known as <strong>TFA</strong> is a diagnostic collection utility which greatly simplifies the diagnostic data collection for both Oracle Database as well as Oracle Clusterware/Grid Infrastructure RAC environments.</p> <p>Trace File Analyzer provides a central and single interface for all diagnostic data collection and analysis.</p> <p>When a problem occurs, TFA collects all the relevant data at the time of the problem and consolidates data even across multiple nodes in a clustered Oracle RAC environment. Only the relevant diagnostic data is collected and can be packaged and uploaded to Oracle Support and this leads to faster resolution times. All the required diagnostic data is collected via a single <em><strong>tfactl</strong></em> command instead of having to individually look for the required diagnostic information across a number of database and cluster alert logs, trace files or dump files.</p> <p>In addition to the core functionality of gathering, consolidating and processing diagnostic data, Trace File Analyzer comes bundled with a number of support tools which enable us to obtain a lot of other useful information like upgrade readiness, health checks for both Engineered as well as non-Engineered systems, OS performance graphs, Top SQL queries etc.</p> <p>Oracle Trace File Analyzer is shipped along with Oracle Grid Infrastructure (from version 11.2.0.4).
<strong>However, it is recommended to download the latest TFA version</strong>, available via My Oracle Support Note <strong>1513912.1</strong>, since the TFA bundled with Oracle Grid Infrastructure does not include many of the newer features and bug fixes and, more importantly, the Oracle Database Support Tools bundle.</p> <p>Oracle releases new versions of TFA several times a year; the most current version, Trace File Analyzer 18.1.1, is now available for download via MOS Note 1513912.1.</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa1.png"><img class="aligncenter wp-image-8124" src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa1.png" alt="" width="646" height="391" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa1.png 1145w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa1-300x181.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa1-768x464.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa1-1024x619.png 1024w" sizes="(max-width: 646px) 100vw, 646px" /></a></p> <p>&nbsp;</p> <p>Select the version appropriate to your operating system.
Note that TFA supports Oracle databases and Grid Infrastructure versions <strong>11.2 upwards</strong>.</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa2.png"><img class="aligncenter wp-image-8125" src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa2.png" alt="" width="513" height="182" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa2.png 1024w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa2-300x107.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa2-768x273.png 768w" sizes="(max-width: 513px) 100vw, 513px" /></a></p> <p>&nbsp;</p> <p><strong>Installation</strong></p> <p>For a new installation, the recommended location is /opt/oracle.tfa and in case an existing version of TFA exists, it will be upgraded as part of the installation.</p> <p>It is recommended to carry out the installation as the root user.</p> <p>If root access is not available, the installation can be carried out by the ORACLE_HOME owner, but this installation will cause TFA to function with lower capabilities. Functionalities like automatic collections and collection from remote hosts will not be available as well as collection and analysis of files not readable by the ORACLE_HOME owner like /var/log/messages and log files pertaining to certain clusterware daemon processes.</p> <p>The TFA download also includes Java Runtime Environment (JRE) version 1.8 which is required for running TFA.</p> <p>To install TFA, download the appropriate platform specific zip file, copy it to the required machine and unzip. Then execute the file <strong>installTFA&lt;platform&gt;.</strong></p> <p>For example:</p> <p>[root@autprorac1 oracle]# <strong>unzip TFA-LINUX_v18.1.1.zip</strong></p> <p>Archive:  TFA-LINUX_v18.1.1.zip</p> <p>inflating: README.txt</p> <p>inflating: installTFA-LINUX</p> <p># <strong>./installTFA-LINUX</strong></p> <p>The installation will prompt if a local install or a cluster install is going to be carried out. 
A cluster installation does require password-less SSH user equivalency for the root user to all cluster nodes. If this is not already configured, then the installation optionally sets up password-less SSH user equivalency for the root user account.</p> <p>&nbsp;</p> <p><strong>Running TFA</strong></p> <p>The Oracle TFA has a daemon process which is configured to start automatically on system start up. It runs from init on UNIX systems or init/upstart/systemd on Linux and in the case of Microsoft Windows runs as a Windows Service.</p> <p>To start or stop Oracle Trace File Analyzer daemon manually we can use the <strong>tfactl start</strong> or <strong>tfactl stop</strong> commands.</p> <p>We can also enable or disable the automatic restarting of the Oracle Trace File Analyzer daemon via the tfactl disable or tfactl enable commands.</p> <p>TFA is invoked via the tfactl command which in turn can be run from the command line or from within the Shell interface or via the TFA Menu interface.</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa3.png"><img class="aligncenter wp-image-8126" src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa3.png" alt="" width="597" height="360" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa3.png 751w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa3-300x181.png 300w" sizes="(max-width: 597px) 100vw, 597px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa4.png"><img class="aligncenter wp-image-8127" src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa4.png" alt="" width="582" height="360" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa4.png 734w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa4-300x186.png 300w" sizes="(max-width: 582px) 100vw, 582px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa5.png"><img class="aligncenter wp-image-8128" 
src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa5.png" alt="" width="859" height="477" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa5.png 1178w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa5-300x167.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa5-768x426.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa5-1024x569.png 1024w" sizes="(max-width: 859px) 100vw, 859px" /></a></p> <p>&nbsp;</p> <p><strong>Automatic Collections</strong></p> <p>TFA is configured to collect diagnostic information automatically for a number of specific Oracle errors and we can also configure it to collect diagnostics for any other user-defined Oracle errors as well.</p> <p>For instance the following Oracle and Cluster errors would have automatic diagnostic collection enabled:</p> <p>ORA-297(01|02|03|08|09|10|40)</p> <p>ORA-00600</p> <p>ORA-07445</p> <p>CRS-016(07|10|11|12)</p> <p>When TFA detects a problem, it would collect the necessary and relevant diagnostic data related to the problem going back by default to the past 12 hours and would also trim the log files it collects to gather only the bare amount of information required for problem diagnosis.</p> <p>It then would collect and package the diagnostic data and also consolidate the data on one node in case of a clustered environment. 
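Enabling this automatic-collection behaviour and the accompanying notification comes down to a couple of tfactl settings. A minimal sketch, assuming you run as root (or a user granted TFA access) and using the parameters shown later in this article; the email address is a placeholder:

```shell
# Sketch: turn on automatic diagnostic collection and email notification.
# Run as root or a user granted TFA access; the address is a placeholder.
tfactl set autodiagcollect=ON                   # automatic collections (ON by default)
tfactl set notificationAddress=dba@example.com  # notify when a problem is detected
tfactl print config                             # confirm the current settings
```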
The diagnostic data is then stored in the TFA repository and, if configured, TFA can send an email notification that a problem has been detected and that the packaged diagnostic collection is now available for upload to Oracle Support.</p> <p>&nbsp;</p> <p><strong>On-demand Analysis and Diagnostic Collection</strong></p> <p>In addition to the automatic collection which is configured by default, we can use TFA to analyze all logs and identify any recent errors by performing an on-demand collection and analysis of diagnostic data.</p> <p>We can collect diagnostic data based on a search string such as &#8216;ORA-00600&#8217; and also specify the time duration or interval in the past for which the diagnostic data should be analyzed.</p> <p>&nbsp;</p> <p><strong>Oracle Database support tools bundle</strong></p> <p>This is only available when TFA is downloaded from My Oracle Support via note 1513912.1.</p> <p><strong>ORAchk</strong> and <strong>EXAchk</strong>: Performs health checks as well as upgrade readiness checks of the entire stack for both Engineered and non-Engineered systems</p> <p><strong>oswatcher</strong>: Utility to capture and store performance metrics from the operating system</p> <p><strong>procwatcher</strong>: Monitors and collects stack traces of database and clusterware processes using tools like oradebug and pstack</p> <p><strong>oratop</strong>: Utility similar to the unix OS utility top which gives a real-time overview of performance from a database perspective and can be used in combination with the unix top utility to get a more complete overview of system performance</p> <p><strong>summary, alertsummary, events</strong>: High-level configuration summary as well as event details, along with a summary of events from clusterware and database alert logs across all nodes</p> <p><strong>param</strong>: Finds and displays database and OS parameters that match a specified pattern</p> <p><strong>changes</strong>: Reports system
changes for a given period of time, including database parameters, operating system parameters, and the patches applied</p> <p>Other utilities and tools: <strong>vi, ls, grep, ps</strong></p> <p>&nbsp;</p> <p><strong>One Command Service Request Data Collections</strong></p> <p>Very often when we raise a Service Request with Oracle Support, we are asked to provide additional log and trace files to help Oracle Support better diagnose the problem. Collecting the various log and trace files individually can be a laborious task and we may miss collecting an important log file required by Oracle Support.</p> <p>Oracle TFA now provides a <strong>single command SRDC</strong> (Service Request Data Collection) to collect exactly what is needed by Oracle Support (as well as the DBA) to diagnose a specific problem.</p> <p>A wide variety of SRDCs are available covering Oracle errors like ORA-00600, ORA-07445, database performance problems (dbperf), database resource problems (dbunixresource), database install and upgrade problems (dbinstall, dbupgrade), database storage problems (dbasm) etc.</p> <p>Based on the SRDC, TFA scans and analyzes the relevant log and trace files it requires and then trims them to contain only the required diagnostic information.
The data is then packaged into a zip file which can be then uploaded to Oracle Support.</p> <p>For example, the TFA command <strong>tfactl diagcollect -srdc dbperf</strong> will generate a bundled package containing files like the AWR report, ADDM report, ASH report, OSWatcher and ORAchk performance related checks.</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa6.png"><img class="aligncenter wp-image-8130" src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa6.png" alt="" width="812" height="345" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa6.png 1041w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa6-300x127.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa6-768x326.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa6-1024x435.png 1024w" sizes="(max-width: 812px) 100vw, 812px" /></a></p> <p>&nbsp;</p> <p><strong>Trace File Analyzer Repository</strong></p> <p>TFA stores all diagnostic data collections in the repository and the size of the repository is the lower of the value 10GB or 50% of available directory free disk space. The location of the repository is the sub-directory <strong>tfa/repository</strong> under the Trace File Analyzer installation top level directory.</p> <p>The amount of data collected in the repository is determined by the Trace Level parameter which defaults to the value 1. The possible values are in the range 1-4 and a higher value will obviously lead to the repository being filled at a faster rate.</p> <p>The Oracle TFA daemon process monitors and automatically purges the repository when the free space falls below 1 GB and by default purges collections older than 12 hours. 
This can also be configured by the parameter <strong>minagetopurge</strong>.</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa7.png"><img class="aligncenter wp-image-8133" src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa7.png" alt="" width="653" height="540" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa7.png 900w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa7-300x248.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa7-768x635.png 768w" sizes="(max-width: 653px) 100vw, 653px" /></a></p> <p>&nbsp;</p> <p><strong>Trace File Analyzer Command Examples</strong></p> <ul> <li>Viewing System and Cluster Summary</li> </ul> <p style="padding-left: 60px;"><strong>tfactl summary</strong></p> <ul> <li>To find all errors in the last one day</li> </ul> <p style="padding-left: 60px;"><strong>tfactl analyze -last 1d</strong></p> <ul> <li>To find all occurrences of a specific error (in this case ORA-00600 errors)</li> </ul> <p style="padding-left: 60px;"><strong>tfactl analyze -search &#8220;ora-00600&#8221; -last 8h</strong></p> <ul> <li>To set the notification email to use</li> </ul> <p style="padding-left: 60px;"><strong>tfactl set notificationAddress=joeblogs@oracle.com</strong></p> <ul> <li>Enable or disable Automatic collections (ON by default)</li> </ul> <p style="padding-left: 60px;"><strong>tfactl set autodiagcollect=OFF</strong></p> <ul> <li>Adjusting the Diagnostic Data Collection Period</li> </ul> <p style="padding-left: 60px;"><strong>tfactl diagcollect -last 1h</strong></p> <p style="padding-left: 60px;"><strong>tfactl diagcollect -from &#8220;2018-03-21&#8221;</strong></p> <p style="padding-left: 60px;"><strong>tfactl diagcollect -from &#8220;2018-03-21&#8221; -to &#8220;2018-03-22&#8221;</strong></p> <ul> <li>Analyze, trim and zip all files updated in the last 12 hours, including Cluster Health Monitor and OSWatcher data, from across all nodes in the cluster</li> </ul> <p style="padding-left:
60px;"><strong>tfactl diagcollect -all -last 12h</strong></p> <ul> <li>Run collection from specific nodes in a RAC cluster</li> </ul> <p style="padding-left: 60px;"><strong>tfactl diagcollect -last 1d -node rac01</strong></p> <ul> <li>Run collection for a specific database</li> </ul> <p style="padding-left: 60px;"><strong>tfactl diagcollect -database hrdb -last 1d</strong></p> <ul> <li>Uploading collections to Oracle Support</li> </ul> <p style="padding-left: 30px;">Execute <strong>tfactl setupmos</strong> to configure Oracle Trace File Analyzer with MOS user name and password followed by</p> <p style="padding-left: 60px;"><strong>tfactl diagcollect -last 1d -sr 1234567</strong></p> <ul> <li>Search database alert logs for the string &#8220;ORA-&#8221; from the past one day</li> </ul> <p style="padding-left: 60px;"><strong>tfactl analyze -search &#8220;ORA-&#8221; -comp db -last 1d</strong></p> <ul> <li>Display a summary of events collected from all alert logs and system logs from the past six hours</li> </ul> <p style="padding-left: 60px;"><strong>tfactl analyze -last 6h</strong></p> <ul> <li>View the summary of a TFA deployment.
This will display cluster node information as well as information related to database and grid infrastructure software homes like version, patches installed, databases running etc.</li> </ul> <p style="padding-left: 60px;"><strong>tfactl summary</strong></p> <ul> <li>Grant access to a user</li> </ul> <p style="padding-left: 60px;"><strong>tfactl access add -user oracle</strong></p> <ul> <li>List users with TFA access</li> </ul> <p style="padding-left: 60px;"><strong>tfactl access lsusers</strong></p> <ul> <li style="padding-left: 30px;">Run orachk</li> </ul> <p style="padding-left: 60px;"><strong>tfactl run orachk</strong></p> <ul> <li>Display current configuration settings</li> </ul> <p style="padding-left: 60px;"><strong>tfactl print config</strong></p> <p style="padding-left: 60px;"><a href="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa8.png"><img class="aligncenter wp-image-8134" src="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa8.png" alt="" width="738" height="483" srcset="https://gavinsoorma.com/wp-content/uploads/2018/05/tfa8.png 861w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa8-300x197.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/05/tfa8-768x503.png 768w" sizes="(max-width: 738px) 100vw, 738px" /></a></p> Gavin Soorma https://gavinsoorma.com/?p=8123 Tue May 15 2018 03:07:08 GMT-0400 (EDT) Why is my Forms 12c Application so slow? 
https://blog.pythian.com/why-is-my-forms-12c-application-so-slow/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><h3>After a recent Upgrade</h3> <p>Users cannot work with the forms application anymore; it has been a few weeks since an upgrade to Forms 12c.<br /> The usual suspects have been investigated.</p> <ul> <li>The database is not waiting on forms transactions</li> <li>The application server is not in any distress, and yes, we have fluctuations in server CPU usage, but well within limits</li> <li>The user PCs (running the forms in a Browser) are not seeing any issues like CPU/memory limitations</li> <li><strong>The forms are very slow in navigation and at odd times during the day they &#8220;freeze&#8221;</strong></li> </ul> <p>What next?</p> <h3>Re-visit the Basics</h3> <p>The forms application started life as a &#8220;thick client&#8221; that would connect directly to the database. Today, it is a very different implementation (OHS, Weblogic, Listener Servlets, frmweb processes, Applets).</p> <p>When a form is written, it is likely to use &#8220;triggers&#8221; and &#8220;program-units&#8221; to carry out application logic and to display rows of data in data blocks that are tied to a query involving one or more database tables. These triggers may be &#8220;fired&#8221; when the user navigates from one item to another, when the forms instance starts, when a new row of data is inserted, or in countless other ways. In a well-designed form, triggers would typically call program units. Users can also use shortcut keys to add new rows, delete rows, commit or rollback data.</p> <p>If you have used a forms application, you will understand its simplicity and complexity.
You (usually) do not scroll up or down the page like when viewing a spreadsheet, but you view a subset of the rows of data and the rows move up or down in response to keystrokes.</p> <p>A common misconception of forms applications is that since they work in a browser, they are built with HTML/javascript much like the web page on this blog. Unlike other applications, forms do use the browser, but all their work is actually done inside a <strong>forms applet</strong> that runs on a Java virtual machine provided by the browser.</p> <p>In Forms 12c, there is now a <a href="http://www.oracle.com/technetwork/developer-tools/forms/documentation/forms12clientdeploymentoptions-3030579.pdf">stand-alone client</a> that can be used to start the applet.</p> <p>The forms applet is able to display the screen, capture the users&#8217; keystrokes, show the form icons, status bar menus, and is a fixed view. It is able to respond to basic OS events and relies on its much smarter companion, the <strong>frmweb</strong> process that lives on the forms server, to actually talk with the database and process the logic of the form. So the forms applet sends small chunks of info <strong>every few milliseconds</strong> and receives responses from the frmweb process. They chatter a lot!</p> <p>A colleague of mine was incredulous when he looked at the network traffic (the application with about 50 users logged in <strong>pings the forms server 200 times a second!</strong>)</p> <h3>So why is it slow?</h3> <p>In essence, if the application is not responding quickly, we have these classical areas to review:</p> <ul> <li>The applet is slow to respond to the user input</li> <li>The frmweb process is slow to respond to the events received from the Applet</li> <li>The frmweb process is waiting on the database to provide it data</li> <li>Other unknown reasons</li> </ul> <p>Instead of us guessing our way through this, let&#8217;s get some data.
Oracle provides an easy way to trace the forms sessions as they are running.<br /> The trace files are written on the Forms server and are labelled with the frmweb process ID.</p> <p>The trace location in Forms 12c is</p> <pre><strong>$DOMAIN_HOME/system_components/FORMS/forms1/trace</strong></pre> <pre style="font-size: small;">[oracle@oraserver1 karun]$ ls -ltr $DOMAIN_HOME/system_components/FORMS/forms1/trace | tail -3 ... -rw-r----- 1 oracle dba 5882480 May 6 10:50 forms_26737.trc -rw-r----- 1 oracle dba 759326 May 6 18:00 forms_17824.trc -rw-r----- 1 oracle dba 1209818 May 6 18:15 forms_16979.trc [oracle@oraserver1 karun]$</pre> <p><strong>The easy way to start a trace</strong></p> <p>Use the Enterprise Manager Control:<br /> Select a session and click on <strong>Enable Tracing</strong><br /> <img class="alignnone size-full wp-image-104104" src="https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-22.05.22.png" alt="" width="551" height="447" srcset="https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-22.05.22.png 551w, https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-22.05.22-465x377.png 465w, https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-22.05.22-350x284.png 350w" sizes="(max-width: 551px) 100vw, 551px" /><br /> Choose the tracegroup (I usually choose to run with debug as this has all the details).<br /> After a few minutes of tracing you can click on <strong>Disable Tracing</strong> button.<br /> And EM can also show you the trace file contents.</p> <p><strong>The other easy way to start a trace</strong></p> <p>This is what we can also do when the EM is not accessible ( or you do not really like the pointy-clicky stuff)<br /> Start the form <strong>on the client PC</strong> with this URL:</p> <pre>http://&lt;server&gt;:&lt;port&gt;/forms/frmservlet?config=&lt;your_app_config&gt;<strong>&amp;record=forms&amp;tracegroup=debug</strong></pre> <p>Work for a few minutes and close the 
application.<br /> Having generated trace files, we now have to read them on the forms application server after we translate them to text.</p> <p><strong>$JAVA_HOME/bin/java -classpath $ORACLE_HOME/jlib/frmxlate.jar oracle.forms.diagnostics.Xlate datafile=$DOMAIN_HOME/system_components/FORMS/forms1/trace/forms_17824.trc outputfile=trace_my_session_20180506.txt outputclass=WriteOutTEXT</strong></p> <p>I can also use different options if I want to translate the trace file to HTML or XML with</p> <pre>outputclass=<strong>WriteOutHTML</strong></pre> <pre>outputclass=<strong>WriteOutXML</strong></pre> <h3>What can traces tell us</h3> <p>A typical trace file is an engineer&#8217;s delight. To give you an idea, this is a snippet</p> <pre style="font-size: small;">9615 [BUILTIN.END,3] Timestamp=53160, StartEvent=9614, Duration=0 9616 [BUILTIN.START,3] Timestamp=53160, EndEvent=9617, Name=SET_ITEM_INSTANCE_PROPERTY [Arguments] Type=In Position=1 DataType=INTEGER Value=131081 Type=In Position=2 DataType=NUMBER Value=0 Type=In Position=3 DataType=NUMBER Value=1526 Type=In Position=4 DataType=STRING Value=BACKGROUND Type=In Position=5 DataType=STRING Value=NULL 9617 [BUILTIN.END,3] Timestamp=53160, StartEvent=9616, Duration=0 9618 [Local_PU.END,2] Timestamp=53160, StartEvent=9613, Duration=0 9619 [TRIGGER.END,1] Timestamp=53160, StartEvent=9612, Duration=0 <strong>9620 [NETWORK.WRITE] Timestamp=53160, StartEvent=9601, Duration=0, Packets=1, Bytes=742</strong> <strong>9621 [NETWORK.READ] Timestamp=53220, EndEvent=9625, Duration=0, Packets=1, Bytes=270</strong> 9622 [ECID] Timestamp=53220, Value=XXXXXXXXXX #9623 [Key] Timestamp=53220, FormName=XXX__YYY(3), KeyPressed=Up 9624 [ERROR] Timestamp=53220, Msg= Error Message: FRM-40100: At first record. 
9625 [NETWORK.WRITE] Timestamp=53220, StartEvent=9621, Duration=0, Packets=1, Bytes=94</pre> <h3>Fun with traces</h3> <ul> <li>I extracted the NETWORK events timestamps.</li> </ul> <pre style="font-size: medium;">[oracle@oraserver1 karun]$ <strong>grep "NETWORK" trace_my_session_20180506.txt | tail -200</strong></pre> <pre style="font-size: small;">13378 [NETWORK.READ] Timestamp=185260, EndEvent=13397, Duration=0, Packets=1, Bytes=312 13397 [NETWORK.WRITE] Timestamp=185260, StartEvent=13378, Duration=0, Packets=1, Bytes=650 13398 [NETWORK.READ] Timestamp=185390, EndEvent=13417, Duration=0, Packets=1, Bytes=315 13417 [NETWORK.WRITE] Timestamp=185390, StartEvent=13398, Duration=0, Packets=1, Bytes=704 13418 [NETWORK.READ] Timestamp=185520, EndEvent=13437, Duration=0, Packets=1, Bytes=318 ... ... </pre> <ul> <li>I graphed the timestamp data. I am interested in the timestamp intervals.</li> </ul> <p>I am showing the timestamp in the X-axis and (Timestamp <strong>Next</strong> &#8211; Timestamp <strong>previous</strong>) on the Y-axis. <img class="alignnone size-full wp-image-104101" src="https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.23.37.png" alt="" width="663" height="409" srcset="https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.23.37.png 663w, https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.23.37-465x287.png 465w, https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.23.37-350x216.png 350w" sizes="(max-width: 663px) 100vw, 663px" /></p> <p>Notice the typical <strong>network timestamp interval is definitely above 100ms with spikes.</strong> In my head, this is telling us that the frmweb process on the server is idle for ~100ms while the next payload from the applet is yet to reach it!</p> <p>Yes, it correlates with reality &#8212; the users are really complaining about the application being unresponsive to the keyboard input when they use this machine. 
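The grep-and-graph step above can also be scripted directly in the shell. A small sketch (the helper name is mine, and the trace file name is the one from the translation step; the trace format is the translated text shown earlier):

```shell
# Compute gaps (in ms) between successive NETWORK event timestamps of a
# translated Forms trace. Reads the trace text on stdin and prints one
# interval per pair of adjacent NETWORK events.
network_intervals() {
  grep "NETWORK" \
    | sed -n 's/.*Timestamp=\([0-9][0-9]*\).*/\1/p' \
    | awk 'NR > 1 { print $1 - prev } { prev = $1 }'
}
# Usage: network_intervals < trace_my_session_20180506.txt
```

Feeding the resulting column of intervals into any plotting tool reproduces the graphs shown here.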
It &#8220;freezes&#8221; too!</p> <p>As a comparison, we ran<strong> the same form</strong> and gathered a trace from another machine <strong>in a different part of the network</strong>.</p> <p><img class="alignnone size-full wp-image-104102" src="https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.37.55.png" alt="" width="667" height="413" srcset="https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.37.55.png 667w, https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.37.55-465x288.png 465w, https://blog.pythian.com/wp-content/uploads/Screen-Shot-2018-05-09-at-21.37.55-350x217.png 350w" sizes="(max-width: 667px) 100vw, 667px" /></p> <p>Here the <strong>network timestamp interval</strong><strong> is around 50ms</strong> and it does spike a few times to be around 75ms.</p> <p>On this machine the Users are able to work, but they do see a bit of freezing off and on. We are getting somewhere!<br /> <strong>The traces point to yet another (previously overlooked) contributor to the slow response.</strong></p> <h3>In conclusion</h3> <p>After some more digging, we found that the &#8220;slow&#8221; machines (running the applet) are in the network segments <strong>with really high latency</strong>.</p> <p>TL&amp;DR: Forms 12c  traces are a valuable source of information, easy to gather!<br /> Instead of guessing why your form applications are slow and applying arbitrary fixes, please consider using the trace data next time.</p> <p>Call us at Pythian anytime you need us to help with slow Forms 12c applications.</p> <p><em>My apologies to the folks using windows as their forms server &#8212; I am sure you get the idea. 
The Windows commands and file locations are very similar.</em></p> <p>PS: In case you are wondering <strong>what an acceptable network timestamp interval is:</strong></p> <p>I would welcome your actual test results in your responses, and if I hear more we could build another blog post.<br /> For people interested in keyboard latency numbers, I found <a href="https://danluu.com/keyboard-latency/">this</a>.<br /> Open question: Is the keyboard + ~10ms the lowest possible network timestamp interval?</p> </div></div> Karun Dutt https://blog.pythian.com/?p=104099 Mon May 14 2018 16:22:36 GMT-0400 (EDT) Oracle Active Data Guard Overview & Architecture http://oracle-help.com/dataguard/oracle-active-data-guard-overview-architecture/ <p>We have seen 3 types of Standby Databases:</p> <ol> <li>Physical Standby Database</li> <li>Logical Standby Database</li> <li>Snapshot Standby Database</li> </ol> <p>To know more about them, see <a href="http://oracle-help.com/dataguard/oracle-dataguard/"><span style="color: #0000ff;"><strong>Oracle Dataguard</strong></span></a></p> <p>Oracle 11g comes with a new option: <strong>Oracle Active Data Guard.</strong></p> <p>Oracle Active Data Guard is an optional license for Oracle Database Enterprise Edition. Active Data Guard enables advanced capabilities that extend basic Data Guard functionality.
Oracle Active Data Guard allows us to run real-time queries against the standby database.</p> <p><strong>Architecture:</strong></p> <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png"><img data-attachment-id="4354" data-permalink="http://oracle-help.com/dataguard/oracle-active-data-guard-overview-architecture/attachment/active-data-guard1-2/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?fit=809%2C281" data-orig-size="809,281" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="active data guard1" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?fit=300%2C104" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?fit=809%2C281" class="alignnone wp-image-4354 size-full" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?resize=809%2C281" alt="" width="809" height="281" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?w=809 809w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?resize=300%2C104 300w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?resize=768%2C267 768w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?resize=60%2C21 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/active-data-guard1-1.png?resize=150%2C52 150w" sizes="(max-width:
809px) 100vw, 809px" data-recalc-dims="1" /></a></p> <p>Oracle Active Data Guard gives us read-only access while redo is being applied. Redo is transmitted from the primary database to the standby database server, where it is first written to standby redo log files and then applied to the standby database.</p> <p>On the standby side, we can see each committed transaction of the primary database as soon as the standby database is in sync with the primary database.</p> <p>Functionality we can use from Active Data Guard:</p> <ul class="listicons"> <li><strong>Real-Time Query</strong> &#8211; Offload read-only workloads to an up-to-date standby database</li> <li><strong>Automatic Block Repair</strong> &#8211; Automatic repair of physical corruption transparent to the user</li> <li><strong>Far Sync</strong> &#8211; Zero data loss protection across any distance</li> <li><strong>Standby Block Change Tracking</strong> &#8211; Enable incremental backups on an active standby</li> <li><strong>Active Data Guard Rolling Upgrade</strong> &#8211; Make it simple to reduce planned downtime</li> <li><strong>Global Database Services</strong> &#8211; Load balancing and service management across replicated databases.</li> <li><strong>Application Continuity</strong> &#8211; Make outages transparent to users.</li> </ul> <p>Stay tuned for <strong>more articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a
href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/oracle-active-data-guard-overview-architecture/">Oracle Active Data Guard Overview &amp; Architecture</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4352 Mon May 14 2018 15:02:43 GMT-0400 (EDT) Converting a Snapshot Standby Database to a Physical Standby Database http://oracle-help.com/dataguard/converting-a-snapshot-standby-database-to-a-physical-standby-database/ <p>In the previous post, we converted a <span style="color: #0000ff;"><strong><a style="color: #0000ff;" href="http://oracle-help.com/dataguard/converting-a-physical-standby-database-to-a-snapshot-standby-database/">Physical Standby to</a> Snapshot Standby</strong></span>.
In this post, we will convert the Snapshot Standby back to a Physical Standby.</p> <p>Prerequisite: a snapshot standby database already exists.</p> <table style="height: 73px; width: 362.883px;"> <tbody> <tr> <td style="width: 141px;">Primary Database</td> <td style="width: 201.883px;">Snapshot Standby database</td> </tr> <tr> <td style="width: 141px;">mgr</td> <td style="width: 201.883px;">mgr</td> </tr> </tbody> </table> <p>Step 1 : Check Primary Database Information :</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR READ WRITE PRIMARY</pre><p>Step 2 : Check Snapshot Database Information :</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR READ WRITE SNAPSHOT STANDBY</pre><p>Step 3 : Shut down standby database :</p><pre class="crayon-plain-tag">SQL&gt; SHUTDOWN IMMEDIATE; Database closed. Database dismounted. ORACLE instance shut down. SQL&gt;</pre><p>Step 4 : Start standby database in mount state :</p><pre class="crayon-plain-tag">SQL&gt; STARTUP MOUNT ORACLE instance started. Total System Global Area 392495104 bytes Fixed Size 2253584 bytes Variable Size 176164080 bytes Database Buffers 209715200 bytes Redo Buffers 4362240 bytes Database mounted.
SQL&gt;</pre><p>Step 5 : Convert database to physical standby database :</p><pre class="crayon-plain-tag">SQL&gt; ALTER DATABASE CONVERT TO PHYSICAL STANDBY; Database altered.</pre><p>Step 6 : View information of physical standby database :</p><pre class="crayon-plain-tag">SQL&gt; SELECT NAME,OPEN_MODE,DATABASE_ROLE FROM V$DATABASE; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR MOUNTED PHYSICAL STANDBY SQL&gt;</pre><p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/converting-a-snapshot-standby-database-to-a-physical-standby-database/">Converting a Snapshot Standby Database to a Physical Standby Database</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4330 Mon May 14 2018 14:31:01 GMT-0400 (EDT) Converting a Physical Standby Database to a Snapshot Standby Database
http://oracle-help.com/dataguard/converting-a-physical-standby-database-to-a-snapshot-standby-database/ <p>In the previous post, we can read about <span style="color: #0000ff;"><strong><a style="color: #0000ff;" href="http://oracle-help.com/dataguard/snapshot-standby-databases-overview-architecture/">Snapshot Standby Overview</a></strong></span>. In this post, we can convert to Snapshot Standby.</p> <p>Prerequisites :</p> <p>Physical Standby Database is already created and synchronized with Primary Database .</p> <p>Database Details :<br /> <a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg"><img data-attachment-id="4485" data-permalink="http://oracle-help.com/dataguard/converting-a-physical-standby-database-to-a-snapshot-standby-database/attachment/detstb/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?fit=673%2C93" data-orig-size="673,93" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1526342095&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="detstb" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?fit=300%2C41" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?fit=673%2C93" class="alignnone size-full wp-image-4485" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?resize=673%2C93" alt="" width="673" height="93" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?w=673 673w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?resize=300%2C41 300w, 
https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?resize=60%2C8 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/detstb.jpg?resize=150%2C21 150w" sizes="(max-width: 673px) 100vw, 673px" data-recalc-dims="1" /></a><br /> Step 1: Check Primary database :</p><pre class="crayon-plain-tag">[oracle@localhost admin]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.4.0 Production on Fri Apr 27 15:09:14 2018 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production With the Partitioning, Oracle Label Security, OLAP, Data Mining, Oracle Database Vault and Real Application Testing options SQL&gt; select name,open_mode from v$database; NAME OPEN_MODE --------- -------------------- MGR READ WRITE SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR READ WRITE PRIMARY SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 56</pre><p>Step 2 : Check Standby Database :</p><pre class="crayon-plain-tag">[oracle@test1 ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.4.0 Production on Fri Apr 27 15:10:33 2018 Copyright (c) 1982, 2013, Oracle. All rights reserved. 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR MOUNTED PHYSICAL STANDBY SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 56</pre><p>Step 3 : Stop recovery.</p><pre class="crayon-plain-tag">SQL&gt; RECOVER MANAGED STANDBY DATABASE CANCEL; Media recovery complete.</pre><p>Step 4 : Convert physical standby to snapshot standby using following command:</p><pre class="crayon-plain-tag">SQL&gt; alter database convert to snapshot standby; Database altered. SQL&gt;</pre><p>check alert log</p><pre class="crayon-plain-tag">alter database convert to snapshot standby Starting background process RVWR Fri Apr 27 16:44:30 2018 RVWR started with pid=35, OS id=14563 Allocated 3981120 bytes in shared pool for flashback generation buffer Created guaranteed restore point SNAPSHOT_STANDBY_REQUIRED_04/27/2018 16:44:30 Killing 4 processes with pids 11523,11517,11519,11521 (all RFS) in order to disallow current and future RFS connections. Requested by OS process 11541 Begin: Standby Redo Logfile archival End: Standby Redo Logfile archival RESETLOGS after incomplete recovery UNTIL CHANGE 1216480 Waiting for all non-current ORLs to be archived... All non-current ORLs have been archived. 
Resetting resetlogs activation ID 1906664927 (0x71a565df) Online log /u01/oracle/fast_recovery_area/STD_MGR/onlinelog/o1_mf_1_ffcscxwk_.log: Thread 1 Group 1 was previously cleared Online log /u01/oracle/fast_recovery_area/STD_MGR/onlinelog/o1_mf_2_ffcsczdm_.log: Thread 1 Group 2 was previously cleared Online log /u01/oracle/fast_recovery_area/STD_MGR/onlinelog/o1_mf_3_ffcsd0p0_.log: Thread 1 Group 3 was previously cleared Standby became primary SCN: 1216478 Fri Apr 27 16:44:33 2018 Setting recovery target incarnation to 4 CONVERT TO SNAPSHOT STANDBY: Complete - Database mounted as snapshot standby Completed: alter database convert to snapshot standby Fri Apr 27 16:44:42 2018 ARC1: Becoming the 'no SRL' ARCH</pre><p><strong>View Snapshot Database Information :</strong></p> <p>Step 5 : Check role of snapshot database :</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR READ WRITE SNAPSHOT STANDBY</pre><p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud 
DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/converting-a-physical-standby-database-to-a-snapshot-standby-database/">Converting a Physical Standby Database to a Snapshot Standby Database</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4292 Mon May 14 2018 14:28:20 GMT-0400 (EDT) Activating a Snapshot Standby Database:Issues and Cautions and Target Restriction http://oracle-help.com/dataguard/activating-a-snapshot-standby-databaseissues-and-cautions-and-target-restriction/ <div>We have already seen an overview of the <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/dataguard/snapshot-standby-databases-overview-architecture/"><strong>snapshot standby database.</strong></a></span></div> <div></div> <div><strong>But while converting a snapshot standby database, there are a few things we need to consider.</strong></div> <div></div> <ol> <li><strong>Corruption of log files:</strong> As we already know, a snapshot standby accepts redo log files from the primary while it is in snapshot mode, but it does not apply that redo. So, if a redo log file at the standby database is corrupted, we cannot discover it until the snapshot standby is converted back to a physical standby and managed recovery is started. If a flashback log file is lost or corrupted, it might prevent the reverse conversion <strong>[snapshot standby &#8211; physical standby].</strong></li> <li><strong>Time constraint in case of failover:</strong> In the worst case, if the primary database crashes while the physical standby is converted into a snapshot standby, we need time to convert it back to a physical standby and then from the physical standby to the PRIMARY role.
And if a lot of redo needs to be applied, converting the snapshot standby back to a physical standby becomes a lengthy process.</li> </ol> <div></div> <div><strong>Target Restrictions:</strong></div> <div></div> <div>A snapshot standby database gives many benefits, but there are some restrictions when we want to activate it.</div> <ol> <li>When your database runs in Maximum Protection mode and you want to use a snapshot standby, at least two standby databases must be configured for the primary; otherwise you cannot convert a physical standby database to a snapshot standby database.</li> <li>A <strong>snapshot standby</strong> database cannot be the target of a switchover.</li> <li>A <strong>snapshot standby</strong> database cannot be a fast-start failover target.</li> </ol> <p>The post <a rel="nofollow"
href="http://oracle-help.com/dataguard/activating-a-snapshot-standby-databaseissues-and-cautions-and-target-restriction/">Activating a Snapshot Standby Database:Issues and Cautions and Target Restriction</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4331 Mon May 14 2018 14:23:48 GMT-0400 (EDT) Snapshot Standby Databases: Overview & Architecture http://oracle-help.com/dataguard/snapshot-standby-databases-overview-architecture/ <p><strong>Snapshot Standby Database:</strong></p> <p>We have already seen the types of standby database in Data Guard: <span style="text-decoration: underline;color: #0000ff"><strong><a style="color: #0000ff;text-decoration: underline" href="http://oracle-help.com/dataguard/oracle-dataguard/">Oracle Dataguard</a></strong></span></p> <p>In this article, we are going to look at the Snapshot Standby Database in depth.</p> <p>A <strong>snapshot standby</strong> database stays in read/write mode, that is, it is a fully updatable database. A snapshot standby is created by converting a physical standby database into a snapshot standby database.</p> <p>When a standby database is converted into a snapshot standby, it receives redo data from the primary database but does not apply it. The redo data is kept on the standby database server.
Once snapshot standby database is converted back into a physical standby database , it discards all local updates done in a snapshot database and then applies redo data kept in a standby server.</p> <p><strong>Architecture : </strong></p> <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png"><img data-attachment-id="4325" data-permalink="http://oracle-help.com/dataguard/snapshot-standby-databases-overview-architecture/attachment/snapshot-standby/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?fit=704%2C513" data-orig-size="704,513" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="snapshot standby" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?fit=300%2C219" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?fit=704%2C513" class="alignnone wp-image-4325" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?resize=579%2C423" alt="" width="579" height="423" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?resize=300%2C219 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?resize=60%2C44 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?resize=150%2C109 150w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/snapshot-standby.png?w=704 704w" sizes="(max-width: 579px) 100vw, 579px" data-recalc-dims="1" 
/></a></p> <p>We can see in the above diagram that every process that works in Data Guard works here as well, except the <strong>MRP</strong> process. When the database is converted from a physical standby to a snapshot standby, the <strong>MRP</strong> process is put on hold.</p> <p>Redo is generated on the primary and recorded on the standby in archive log files.</p> <p>When a physical standby is converted into a snapshot standby, an implicit guaranteed restore point is created for the standby database. Flashback Database is used to facilitate this functionality, and the redo data is kept on hold.</p> <p>As the snapshot standby is in fully <strong>updatable mode</strong>, we can run transactions on the standby database. We can say the snapshot standby is a cloned copy of the primary database.</p> <p>After completing our task, when we convert the standby back into a physical standby, all DML performed since the recorded restore point is discarded. The DML is rolled back, and the remaining redo is applied by the MRP process.</p> <p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p>
<p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/snapshot-standby-databases-overview-architecture/">Snapshot Standby Databases: Overview &amp; Architecture</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4324 Mon May 14 2018 14:20:52 GMT-0400 (EDT) SQL Loader in Oracle http://oracle-help.com/oracle-database/sql-loader-in-oracle/ <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg"><img data-attachment-id="4317" data-permalink="http://oracle-help.com/oracle-database/sql-loader-in-oracle/attachment/images-5-5/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg?fit=169%2C299" data-orig-size="169,299" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="images (5)" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg?fit=169%2C299" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg?fit=169%2C299" class="wp-image-4317 alignleft" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg?resize=81%2C144" alt="" width="81" height="144" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg?w=169 169w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg?resize=34%2C60 34w, 
https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-5.jpg?resize=85%2C150 85w" sizes="(max-width: 81px) 100vw, 81px" data-recalc-dims="1" /></a>Today we are going to learn about &#8220;SQL Loader&#8221;. The journey of Oracle Database does not end here. Let&#8217;s have a look at the technical definition of SQL Loader.</p> <p><strong>What is SQL Loader?</strong></p> <p>SQL*Loader is the primary method for quickly populating Oracle tables with data from external files. Its powerful data parsing engine puts few limitations on the format of the data in the datafile. SQL*Loader is invoked when you specify the sqlldr command or use the Enterprise Manager interface.</p> <p>SQL*Loader is an integral feature of Oracle databases and is available in all configurations.</p> <p>Follow these steps to use SQL*Loader:</p> <ol> <li>Create a file and give it a name (e.g. data_1.txt)</li> <li>Insert some data into it. Here we have inserted the following into data_1.txt:</li> </ol> <p></p><pre class="crayon-plain-tag">1, Anuradha 2, Priya 3, Jack 4, Himanshu 5, Jissy 6, Joel</pre><p></p> <ol> <li>Create a SQL*Loader control file and give it a name.
Here we have created a control file named LOADER_1.CTL:</li> </ol> <p></p><pre class="crayon-plain-tag">load data infile 'c:\data_1.txt' into table info_1 fields terminated by ',' (id,name)</pre><p>Now create the table in the database:</p><pre class="crayon-plain-tag">SQL&gt;CREATE TABLE info_1(id number,name varchar(30)); [Under Scott user]</pre><p>Run sqlldr:</p><pre class="crayon-plain-tag">C:\&gt;sqlldr userid=scott/abc#23 CONTROL=c:\LOADER_1.CTL</pre><p>Now we can fetch the data from the table:</p><pre class="crayon-plain-tag">SELECT * FROM info_1;</pre><p>Thanks for giving valuable time to add new gems to Oracle’s treasure.</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/sql-loader-in-oracle/">SQL Loader in Oracle</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4316 Mon May 14 2018 14:16:05 GMT-0400 (EDT) Managing Role in Oracle http://oracle-help.com/oracle-database/managing-role-in-oracle/ <p><a
href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png"><img data-attachment-id="4314" data-permalink="http://oracle-help.com/oracle-database/managing-role-in-oracle/attachment/images-3-10/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png?fit=199%2C253" data-orig-size="199,253" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="images (3)" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png?fit=199%2C253" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png?fit=199%2C253" class="wp-image-4314 alignleft" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png?resize=116%2C147" alt="" width="116" height="147" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png?w=199 199w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png?resize=47%2C60 47w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3-1.png?resize=118%2C150 118w" sizes="(max-width: 116px) 100vw, 116px" data-recalc-dims="1" /></a>Today we are going to have look at Role in Oracle database. It is the most important part of Oracle database. User Privileges and Roles is the most common task that is performed by Oracle DBA. 
With roles and privileges, we can easily point out which user has which responsibilities in the database.</p> <p>Let’s start with roles in Oracle.</p> <p><strong>Creating a Role:</strong></p><pre class="crayon-plain-tag">create role OracleHelp;</pre><p><strong>Assign privileges to a role:</strong></p><pre class="crayon-plain-tag">grant create session,create table to OracleHelp;</pre><p>Assign more privileges to the role:</p><pre class="crayon-plain-tag">SQL&gt; Create table test(id number); SQL&gt; grant select,insert,update on test to OracleHelp;</pre><p>Add another layer to the hierarchy:</p><pre class="crayon-plain-tag">SQL&gt; CREATE ROLE manager; SQL&gt; GRANT OracleHelp TO manager; SQL&gt; GRANT DELETE ON test TO manager;</pre><p>Assigning a role to a user:</p><pre class="crayon-plain-tag">GRANT OracleHelp TO scott; GRANT manager TO Allen;</pre><p>Granting a system privilege:</p><pre class="crayon-plain-tag">GRANT CREATE SESSION TO manager WITH ADMIN OPTION;</pre><p>Revoke a role from a user:</p><pre class="crayon-plain-tag">REVOKE manager FROM Allen;</pre><p>Drop a role:</p><pre class="crayon-plain-tag">DROP ROLE manager;</pre><p>Obtaining role information:</p> <ul> <li>DBA_ROLES</li> <li>DBA_ROLE_PRIVS</li> <li>ROLE_ROLE_PRIVS</li> <li>DBA_SYS_PRIVS</li> <li>ROLE_SYS_PRIVS</li> <li>ROLE_TAB_PRIVS</li> <li>SESSION_ROLES</li> </ul> <p>Thanks for giving valuable time to add new gems to Oracle’s treasure.</p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle">https://t.me/helporacle</a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span
class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/managing-role-in-oracle/">Managing Role in Oracle</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4313 Mon May 14 2018 14:10:29 GMT-0400 (EDT) Flashback in Oracle http://oracle-help.com/oracle-database/flashback-in-oracle/ <p>We are going to have a look at another part of the Oracle database. Flashback is the topic of our discussion today. Let&#8217;s start with its introduction and then an example.</p> <p>Being a DBA, we have to familiarize ourselves with Flashback technology. It is considered one of the best features of Oracle Database.</p> <p><strong>What is flashback technology?
</strong></p> <p>Let&#8217;s have a look at the technical definition of flashback technology.</p> <p>Oracle Flashback Technology is a group of Oracle Database features that let you view past states of database objects or return database objects to a previous state without using point-in-time media recovery.</p> <p>We can perform multiple tasks with the help of flashback technology; some of them are mentioned below:</p> <ul style="list-style-type: disc;"> <li>It performs queries that return past data.</li> <li>We can perform queries that return metadata showing a detailed history of changes to the database.</li> <li>Recovering tables or rows to a previous point in time is the most beautiful feature of flashback technology.</li> <li>It automatically tracks and archives transactional data changes.</li> <li>It rolls back a transaction and its dependent transactions while the database remains online.</li> </ul> <p>Before using flashback technology, we have to set some locations and parameters so Oracle Database can give us better solutions.</p> <p><strong>RVWR Background Process</strong></p> <p>A new RVWR background process is started when Flashback Database is enabled. It is similar to the LGWR (log writer) process.
The new process writes Flashback Database data to the Flashback Database logs.</p> <p><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png"><img data-attachment-id="4309" data-permalink="http://oracle-help.com/oracle-database/flashback-in-oracle/attachment/capture-18/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?fit=471%2C250" data-orig-size="471,250" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?fit=300%2C159" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?fit=471%2C250" class="wp-image-4309 aligncenter" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?resize=410%2C217" alt="" width="410" height="217" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?resize=300%2C159 300w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?resize=60%2C32 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?resize=150%2C80 150w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-3.png?w=471 471w" sizes="(max-width: 410px) 100vw, 410px" data-recalc-dims="1" /></a></p> <p>Enabling Flashback Database:</p> <p>Make sure the database is in archive mode and FLASHBACK_ON Yes</p><pre class="crayon-plain-tag">SQL&gt;SELECT flashback_on, log_mode FROM v$database;</pre><p>Configure the recovery area(if necessary) by 
setting the two parameters:</p> <ul> <li>db_recovery_file_dest</li> <li>db_recovery_file_dest_size</li> </ul> <p>Open the database in MOUNT mode and turn on the flashback feature:</p><pre class="crayon-plain-tag">SQL&gt; STARTUP MOUNT;
SQL&gt; ALTER DATABASE ARCHIVELOG;   [If not in archive mode]
SQL&gt; ALTER DATABASE FLASHBACK ON;
SQL&gt; ALTER DATABASE OPEN;</pre><p>Test Case</p><pre class="crayon-plain-tag">SQL&gt; create table test_flashback(name varchar(30));
SQL&gt; insert into test_flashback values('TEST BEFORE');
SQL&gt; commit;
SQL&gt; select to_char(sysdate,'dd-mm-yy hh24:mi:ss') from dual;
SQL&gt; SELECT current_scn FROM v$database;
SQL&gt; insert into test_flashback values('TEST AFTER');
SQL&gt; commit;
SQL&gt; select * from test_flashback;
SQL&gt; drop table test_flashback;
SQL&gt; shutdown immediate;
SQL&gt; startup mount;
SQL&gt; FLASHBACK DATABASE to timestamp to_timestamp('16-05-2018 13:59:45', 'DD-MM-YYYY HH24:MI:SS');
OR
SQL&gt; FLASHBACK DATABASE TO SCN 3726625;
SQL&gt; ALTER DATABASE OPEN RESETLOGS;
SQL&gt; SELECT * FROM test_flashback;</pre><p>Another Example:</p><pre class="crayon-plain-tag">SQL&gt; conn Oraclehelp/Oraclehelp
SQL&gt; create table test_flash(id number);
SQL&gt; commit;
SQL&gt; drop table test_flash;
SQL&gt; flashback table test_flash to before drop;
SQL&gt; select * from test_flash;</pre><p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, send us an invitation or follow us:</span></p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle">https://t.me/helporacle</a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span 
class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/flashback-in-oracle/">Flashback in Oracle</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4307 Mon May 14 2018 14:04:40 GMT-0400 (EDT) Loading Tables with Oracle GoldenGate and REST APIs https://dbasolved.com/2018/05/14/loading-tables-with-oracle-goldengate-and-rest-apis/ <p dir="auto">With Oracle GoldenGate 12c (12.3.0.1.x), you can now quickly load empty target tables with data from your source database. You could always do this in previous releases of Oracle GoldenGate, but the process has now been simplified using REST APIs and some scripting know-how. In this post, I’m going to show you, high level, how you can use the REST APIs and a bit of scripting to do an initial load of two tables with a single command.</p> <p>In previous releases of Oracle GoldenGate, a similar task could be done, but it required you to include the Oracle Database Export/Import data pumps or some other drawn-out process. With this new process, you can effectively get around that and only need to use trail files to perform the initial load. </p> <p>In this scenario, I have two tables with a total of 14,000 records in them. This will be a small example of an initial load, but you should get the idea behind how this will work. This approach will also work for adding tables into an existing replication scheme. 
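</p> <p>To make the idea concrete before looking at the architecture: the Microservices REST approach boils down to building a small JSON document and POSTing it to the Administration Service. The sketch below only assembles and prints the kind of payload involved in registering an initial-load extract; the extract name, table names and endpoint in it are illustrative assumptions, not taken from my script.</p>

```javascript
// Hedged sketch: the payload shape, extract name, table names and the
// endpoint in the comment below are illustrative assumptions, not copied
// from FB_InitialLoad.sh. The point is that each GoldenGate Microservices
// action reduces to a small JSON document sent over HTTP.
const payload = {
  description: 'File-Based Initial Load of two tables',
  config: [
    'EXTRACT ILOAD1',                               // assumed initial-load extract name
    'USERIDALIAS ggadmin DOMAIN OracleGoldenGate',  // assumed credential alias
    'EXTFILE ./dirdat/il, MEGABYTES 500',
    'TABLE SOE.ORDERS;',                            // assumed table names
    'TABLE SOE.CUSTOMERS;'
  ],
  status: 'stopped'
};

// A real run would POST this to the Administration Service, e.g. with cURL:
//   curl -u oggadmin -X POST https://ogg-host:9011/services/v2/extracts/ILOAD1 \
//        -H 'Content-Type: application/json' -d @payload.json
console.log(JSON.stringify(payload, null, 2));
```

<p>Because every step is just &#8220;build JSON, POST it&#8221;, the whole initial-load flow can be driven from cURL and a few shell functions, which is exactly what the script further down does.</p> <p>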
</p> <p>The diagram below illustrates how the architecture would look with an existing GoldenGate capture running and incorporating a File-Based Initial Load process to load a few tables.</p> <p>Image 1:</p> <p><img src="https://curtisbl.files.wordpress.com/2018/05/fb_initialload_arch.jpg?w=833&#038;h=263" align="middle" width="833" height="263" class="aligncenter"></p> <p dir="ltr">This may look a bit confusing, but it is quite simple to understand. The red items are the GoldenGate extract, trails (local and remote), and the GoldenGate replicat. This is an existing replication stream. The GoldenGate extract captures from the source database, moving transactions to the local trail file (aa). Then the Distribution Service picks up and reads the local trail and ships the transactions across the GoldenGate Path to the Receiver Service. The Receiver Service then writes to the remote trail (ab), where the GoldenGate replicat processes the transactions into the target database. Pretty simple: this is a continuous replication of transactions.</p> <p dir="ltr">Now, you want to set up just a few new tables, but do not want to take the day or two it would take to configure, export, import, apply and then catch up. Along the bottom is the initial load path (green), using a File-Based approach to initially load tables. This process is what I’ve scripted out using cURL and shell scripts. Normally, you would spend time doing an export/import for the table(s) that you want to move to the target system after setting up the initial load extract. </p> <p dir="ltr">Using the Oracle GoldenGate Microservices architecture, this initial load process can be simplified and done very quickly. Below is a link to a script which I wrote to perform a File-Based Initial Load within Oracle GoldenGate Microservices. </p> <p dir="ltr"><a href="https://github.com/dbasolved/OGG_MA_ITEMS/blob/master/FB_InitialLoad.sh" target="_blank">FB_InitialLoad.sh</a> &lt;— Use at your own risk! 
This is only an example script of how this can be done.</p> <p dir="ltr">What this script does is create the File-Based Initial Load process and populate the two tables I’ve identified in the target system. </p> <p dir="ltr">Everything I needed to build has been reduced to functions that I can call when needed within the script. Granted, this script is very simple, but it orchestrates the whole initial-load process for the tables I wanted. After the tables have been loaded, they can be merged into the existing replication stream.</p> <p dir="ltr">Enjoy!!!</p> Bobby Curtis http://dbasolved.com/?p=1895 Mon May 14 2018 13:11:12 GMT-0400 (EDT) Pythian’s Vice President of Business Development Recognized as One of CRN’s 2018 Women of the Channel https://blog.pythian.com/pythians-vice-president-business-development-recognized-one-crns-2018-women-channel/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>OTTAWA, ON, May 14, 2018 – <a href="http://www.pythian.com">Pythian</a>, a global IT company that helps businesses leverage disruptive data technologies to better compete, announced today that CRN, a brand of The Channel Company, has named <a href="https://pythian.com/leadership-team/vanessa-simmons/">Vanessa Simmons</a>, Vice President of Business Development at Pythian, to its prestigious 2018 Women of the Channel list. The executives who comprise this annual list span the IT channel, representing vendors, distributors, solution providers and other organizations that figure prominently in the channel ecosystem. 
Each is recognized for her outstanding leadership, vision and unique role in driving channel growth and innovation.</p> <p>CRN editors select the Women of the Channel honorees based on their professional accomplishments, demonstrated expertise and ongoing dedication to the IT channel.</p> <p>Vanessa’s passion for technology and talent for relationship building have led her to her current role as VP of Business Development at Pythian. In 2013 she launched the Pythian Business Development team, which she has built from the ground up. For the past three years she has led Business Developments’ strategy which has focused on supporting Pythian&#8217;s transformation into a cloud-first/cloud-forward technology company. She has actively built strong relationships with <a href="https://pythian.com/google-cloud-platform/">Google</a>, <a href="https://pythian.com/microsoft-azure/">Microsoft</a> and <a href="https://pythian.com/amazon-web-services/">Amazon</a> that have helped drive Pythian&#8217;s cloud focus forward to its current state —the fastest growing practice area of the business. </p> <p>In September of 2017, Vanessa orchestrated a highly successful &#8220;Business Development Cloud Throwdown Challenge.&#8221; Business Development teamed up with Learning and Development to offer to fund cloud-related training with a goal of certification by December 31. Within 10 minutes of the announcement, the 20 allocated spots were taken. Within a few hours, nearly eighty people had registered, and all were accepted into the program due to the tremendous response. Pythian’s cloud partners provided support with training vouchers to offset some of the costs. Participants said they felt engaged, invigorated and re-motivated. 
</p> <p>Vanessa’s mission going forward is &#8220;Better, faster, more in the cloud.&#8221; &#8220;Better at how we approach and deliver cloud projects and communicate with partners and customers; faster at getting projects started up, even as more come in; and helping more clients get started on their cloud journey.&#8221;</p> <p>“This accomplished group of leaders is steadily guiding the IT channel into a prosperous new era of services-led business models and deep, strategic partnerships,” said Bob Skelley, CEO of The Channel Company. “CRN’s 2018 Women of the Channel list honors executives who are driving channel progress through a number of achievements—exemplary partner programs, innovative product development and marketing, effective team-building, visionary leadership and accelerated sales growth—as well as advocacy for the next generation of women channel executives.” </p> <p>“Vanessa is an incredible asset to Pythian and the driving force behind the growth and success of our public cloud partner ecosystem,” said Keith Millar, SVP of Pythian’s Business Services. “She has played a tremendous role in solidifying and strengthening these relationships and regularly advocates on their behalf both internally and externally with an extraordinary amount of energy and enthusiasm.”</p> <p><a href="https://wotc.crn.com/wotc2018-details.htm?w=582&#038;itc=refresh">The 2018 Women of the Channel</a> list will be featured in the June issue of CRN Magazine and online at www.CRN.com/wotc. </p> <p>About Pythian<br /> Pythian is a global IT company that helps businesses leverage disruptive data technologies to better compete. Our services and software solutions unleash the power of cloud, data and analytics to drive better business outcomes. Our 20 years in data, commitment to hiring the best talent, and our deep technical and business expertise allow us to meet our client promise of using technology to deliver the best outcomes faster. 
</p> <p>About the Channel Company<br /> The Channel Company enables breakthrough IT channel performance with our dominant media, engaging events, expert consulting and education, and innovative marketing services and platforms. As the channel catalyst, we connect and empower technology suppliers, solution providers and end users. Backed by more than 30 years of unequaled channel experience, we draw from our deep knowledge to envision innovative new solutions for ever-evolving challenges in the technology marketplace. </p> <p>CRN is a registered trademark of The Channel Company, LLC. All rights reserved. </p> <p>The Channel Company Contact:<br /> Kim Sparks<br /> The Channel Company<br /> (508) 416-1193<br /> ksparks@thechannelco.com </p> <p>Pythian Contact:<br /> Lesley Slack<br /> Pythian<br /> (613) 818- 6855<br /> slack@pythian.com<br /> ###</p> </div></div> Pythian News https://blog.pythian.com/?p=104126 Mon May 14 2018 10:09:02 GMT-0400 (EDT) LEAP#389 Two-Stage Amp Design https://blog.tardate.com/2018/05/leap389-two-stage-amp-design.html <p>Reviewing techniques for two-stage CE amplifier design. Gobsmacked that my calcs put me within 3% of actual performance, and I got a workable Class A amplifier to boot! 
As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Electronics101/BJT/TwoStageCommonEmitterAmplifier">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Electronics101/BJT/TwoStageCommonEmitterAmplifier"><img src="https://leap.tardate.com/Electronics101/BJT/TwoStageCommonEmitterAmplifier/assets/TwoStageCommonEmitterAmplifier_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/05/leap389-two-stage-amp-design.html Mon May 14 2018 09:20:37 GMT-0400 (EDT) KeePass 2.39.1 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/o-ae4AIh-xU/ <p><img class="size-full wp-image-5429 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2015/08/keepass.jpg" alt="" width="79" height="79" />I just noticed KeePass 2.39.1 was released about a week ago.</p> <p><a href="https://keepass.info/download.html">Downloads</a> and <a href="https://keepass.info/news/n180506_2.39.html">Changelog</a> available from the usual places.</p> <p>You can read about how I use KeePass and <a href="https://keepassxc.org/download/#linux">KeePassXC</a> on my Mac, Windows and Android devices <a href="https://oracle-base.com/blog/2012/08/11/adventures-with-dropbox-and-keepass/">here</a>.</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/05/14/keepass-2-39-1/">KeePass 2.39.1</a> was first posted on May 14, 2018 at 8:51 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/o-ae4AIh-xU" height="1" width="1" alt=""/> Tim... 
https://oracle-base.com/blog/?p=8045 Mon May 14 2018 03:51:59 GMT-0400 (EDT) Oracle Code : Warsaw – The Journey Home http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/65lddczI1rs/ <p><a href="https://developer.oracle.com/code/warsaw"><img class="size-full wp-image-7058 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2017/04/oracle_code.png" alt="" width="188" height="189" /></a>I woke up at silly o&#8217;clock to begin my journey home. I checked out of the hotel and got a taxi to the airport, where I breezed through check-in and security and found myself at the boarding gate 2 hours before the flight. Another hour in bed would have been nice&#8230; <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>As usual, out came the laptop and I played catch-up on the blog and some of the other stuff I had missed during the conference.</p> <p>The flight from Warsaw to Frankfurt was a little under 2 hours. I don&#8217;t think I&#8217;ve flown with LOT before, and it was quite a nice experience. The plane had a clean and modern interior with power sockets at every seat, which was cool. I didn&#8217;t have an aisle seat, but the flight wasn&#8217;t full, so I was able to move to one. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I had a 90-minute stopover at Frankfurt before starting the one-hour flight home to Birmingham. That flight was easy, even though I had a window seat.</p> <p>A quick taxi ride home and <a href="https://developer.oracle.com/code/warsaw">Oracle Code : Warsaw</a> was complete.</p> <p>Thanks to the Oracle Code crew for inviting me to the event, and to the Oracle Developer Champion and Oracle ACE Programs for making this possible for me. 
Most importantly, thanks to the attendees and speakers for coming to the event and making it all happen!</p> <p>The posts for this event were:</p> <ul> <li><a href="https://oracle-base.com/blog/2018/05/11/oracle-code-warsaw-the-journey-begins/">Oracle Code : Warsaw &#8211; The Journey Begins</a></li> <li><a href="https://oracle-base.com/blog/2018/05/12/oracle-code-warsaw/">Oracle Code : Warsaw</a></li> <li>Oracle Code : The Journey Home (this post)</li> </ul> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/05/12/oracle-code-warsaw-the-journey-home/">Oracle Code : Warsaw &#8211; The Journey Home</a> was first posted on May 12, 2018 at 5:04 pm.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/65lddczI1rs" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8036 Sat May 12 2018 12:04:32 GMT-0400 (EDT) Oracle Code : Warsaw http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/pGpD8rP6oBM/ <p><a href="https://developer.oracle.com/code/warsaw"><img class="size-full wp-image-7058 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2017/04/oracle_code.png" alt="" width="188" height="189" /></a><a href="https://developer.oracle.com/code/warsaw">Oracle Code : Warsaw</a> started for me with my first presentation of the day as I was in the first block after the keynotes&#8230;</p> <p>My first session was about <a href="https://oracle-base.com/articles/misc/analytic-functions">Analytic Functions</a>. It&#8217;s a little difficult to predict the makeup of the Oracle Code crowds. In some cities you get predominantly Oracle developers, while in others it&#8217;s the opposite. 
As a result, you never know how what you are doing will be received until you get there. I shouldn&#8217;t have been concerned as the room was full. I had a little glitch at the start, which was caused by my laptop switching between the hotel and event wifi. Once I sorted that, the connection to my Oracle Cloud DBaaS service was fine, which meant I was able to run through my demos. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Next I watched &#8220;Database DevOps and Agile Development with Open-Source Utilities&#8221; by <a href="https://twitter.com/SusanDuncanOr">Susan Duncan</a>, which was another standing room only session. This included a demo of Oracle Developer Cloud Service, a freebie when you buy other Oracle Cloud services, and it looked pretty good. The demo was of the full lifecycle of an incident from logging through to release of a fix, which included database changes managed by <a href="https://flywaydb.org/">FlyWay</a>, with a quick mention of <a href="https://www.liquibase.org/">LiquiBase</a> and <a href="http://utplsql.org/">utPL/SQL</a>.</p> <p><img class="alignleft wp-image-8033" src="https://oracle-base.com/blog/wp-content/uploads/2018/05/chris-thalinger.png" alt="" width="200" height="266" />After lunch I went to watch &#8220;Graal: How to Use the New JVM JIT Compiler in Real Life&#8221; by <a href="https://twitter.com/christhalinger">Chris Thalinger</a>. I finally got to see this presentation, having clashed with Chris&#8217; session slot at all previous events. I&#8217;m trying to think of something to say to make it sound like I understood what he was talking about, but between you and me it was a complete mystery to me. He did some awesome &#8220;Jazz Hands&#8221; though! 
<img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> The session was a live comparison of Graal with an unmodified JVM, showing examples of potential performance improvements, and examples of where performance is no better too. I guess the take-home message that will impress most people is Twitter run all their Scala microservices in production on Graal and it&#8217;s saving them a bundle of cash because of improved performance&#8230;</p> <p>Next up was <a href="https://twitter.com/ewanslater">Ewan Slater</a> with &#8220;Honey I Shrunk the Container&#8221;, who amongst other things talked about using <a href="https://github.com/oracle/smith">Smith</a> to produce microcontainers, which looks really interesting. In one example he was able to shrink a container from about 850 meg to about 85 meg, which is pretty darn impressive. It&#8217;s definitely more impressive than <a href="https://oracle-base.com/articles/linux/docker-quick-tips#reduce-image-size">&#8211;squash</a>.</p> <p>After that it was me with my session on <a href="https://oracle-base.com/articles/misc/an-introduction-to-json-support-in-the-oracle-database">REST enabling the database</a>. I think this was a case of preaching to the converted, but I did get some questions at the end. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>After my session I got chatting to some folks, so I missed the last session of the day, which meant that Oracle Code : Warsaw was over for me. Thanks to everyone that supported the event, including the Oracle Code crew, the other speakers and of course the attendees!</p> <p>In the evening we went into town to get some food and I was introduced to a drink called The Terminator, which tasted really nice, but was rather deadly. 
I think it contained more alcohol than I normally drink in about 2 years&#8230; I was also given a shot of some vodka which was incredibly smooth. Despite feeling rather inebriated, I was sensible enough to switch back to water and juice for the rest of the evening. The photos of me with the empty vodka bottle and some bison grass (from the bottle) in my mouth were staged. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I was intending to be in bed really early as I needed to be up in the morning at 04:45 for my flight. I got back to the hotel at about midnight, so that didn&#8217;t work out so well&#8230; Thanks to the <a href="http://poug.org/en/">POUG</a> folks for taking us out for the evening. It was much appreciated!</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/05/12/oracle-code-warsaw/">Oracle Code : Warsaw</a> was first posted on May 12, 2018 at 6:12 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/pGpD8rP6oBM" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8030 Sat May 12 2018 01:12:54 GMT-0400 (EDT) Connect to Snowflake with Node.js Driver http://dbaontap.com/2018/05/11/connect-snowflake-node-js-driver/ <p>This will be the first entry into my Snowflake toolbox. Today I am going to set my MacBook Pro up to connect to a Snowflake data warehouse via the Snowflake Node.js driver. This could come in handy when needing to move data between systems where direct connections are not available and you need an intermediary. 
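</p> <p>As a taste of what the post covers, here is a minimal, hedged sketch of the driver setup. The account, user, warehouse and database values below are placeholder assumptions, and the driver calls themselves are shown as comments rather than executed; only the connection-options object is exercised:</p>

```javascript
// Hedged sketch of using the snowflake-sdk Node.js driver.
// All identifiers below (account, user, warehouse, database) are
// placeholder assumptions, not real credentials.
const connectionOptions = {
  account: 'xy12345.us-east-1',                  // assumed Snowflake account identifier
  username: 'DEMO_USER',
  password: process.env.SNOWSQL_PWD || 'change-me',
  warehouse: 'DEMO_WH',
  database: 'DEMO_DB',
  schema: 'PUBLIC'
};

// With the driver installed (npm install snowflake-sdk), the flow is:
//   const snowflake = require('snowflake-sdk');
//   const connection = snowflake.createConnection(connectionOptions);
//   connection.connect((err, conn) => {
//     if (err) throw err;
//     conn.execute({
//       sqlText: 'select current_version()',
//       complete: (err, stmt, rows) => console.log(rows)
//     });
//   });
console.log(Object.keys(connectionOptions).join(','));
```

<p>With the real driver in place, that createConnection/connect/execute flow is all the intermediary machine needs in order to push data in and pull data out.</p> <p>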
...</p> <p>The post <a rel="nofollow" href="http://dbaontap.com/2018/05/11/connect-snowflake-node-js-driver/">Connect to Snowflake with Node.js Driver</a> appeared first on <a rel="nofollow" href="http://dbaontap.com">dbaonTap</a>.</p> DB http://dbaontap.com/?p=1534 Fri May 11 2018 13:18:50 GMT-0400 (EDT) Demonstrating Oracle SQL Developer Web: the Data Modeler https://www.thatjeffsmith.com/archive/2018/05/demonstrating-oracle-sql-developer-web-the-data-modeler/ <p>In this 10 minute video, you get:</p> <ul> <li>super quick recap of the Worksheet</li> <li>building a new diagram</li> <li>moving stuff around, changing the colors</li> <li>generating DDL</li> <li>generating a data dictionary report</li> <li>saving and searching diagrams</li> </ul> <p><iframe width="720" height="405" src="https://www.youtube.com/embed/NUB1LxdrsA8" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p> <p>This was my 50th post on YouTube. If you like what you see, be sure to subscribe there so you don&#8217;t miss any updates. Your subscriptions will also prove to my kids that I AM A REAL YOUTUBER. Well, probably not. But thanks anyway. 
</p> <p>I got a question on YouTube, and since I can&#8217;t embed a GIF there to demonstrate, I&#8217;ll do it here:</p> <h3>Can we move the relationship lines?</h3> <p>YES.</p> <div id="attachment_6635" style="width: 810px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/sdweb-move-lines.gif"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/sdweb-move-lines.gif" alt="" width="800" height="450" class="size-full wp-image-6635" /></a><p class="wp-caption-text">Move the boxes (with the lines) or just the lines &#8211; it&#8217;s your choice.</p></div> thatjeffsmith https://www.thatjeffsmith.com/?p=6631 Fri May 11 2018 11:32:45 GMT-0400 (EDT) Exciting updates from Microsoft Build 2018 https://blog.pythian.com/favorite-news-microsoft-build-2018/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>The Build conference is Microsoft&#8217;s premier event targeted at software developers. Over the years, a lot of big technology announcements have been made at this conference and this year was no exception. As expected, Azure keeps taking the spotlight over the other Microsoft developer platforms and the company is clearly making very large investments to expand not only its global footprint but the depth and breadth of all its services.</p> <p>In this blog post I want to highlight a few of the most exciting updates and news that came out of the event and what my take is on them. This is not an exhaustive list of course. So let&#8217;s get started!</p> <p><strong>Planet Database</strong><br /> I&#8217;m going to start with a topic that is near and dear to my heart. If you follow my blog posts you will know I&#8217;m a huge fan of Azure&#8217;s NoSQL database: Cosmos DB. And wow did the team deliver this year! Let&#8217;s check all the improvements announced:</p> <ul> <li><em>Multi-master global replication:</em> this is the big one. 
Imagine your database in California is also your database in Germany. What one set of users is doing is constantly being replicated to the other one, all with policy based conflict detection and respecting all of Cosmos DB consistency level guarantees. I believe a new generation of global applications will be built on top of this platform. If you have 10 minutes to spare today, you need to check out Dr. Rimma Nehme&#8217;s amazing &#8220;geo-replicated paint&#8221; demo <a href="https://youtu.be/rd0Rd8w3FZ0?t=2h59m50s" target="_blank" rel="noopener">right here</a>.</li> <li><em>Sharing throughput across containers:</em> this will allow clients to have different collections, graphs, etc all sharing a pool of Request Units. This was a blocker for clients that wanted to implement multiple data containers but the pricing model was making it cost prohibitive.<a ref="magnificPopup" href="https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/1997af72-3a1b-4ef6-b192-65dc4d0c3aa6.png" target="_blank" rel="noopener"><img src="https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/1997af72-3a1b-4ef6-b192-65dc4d0c3aa6.png" alt="Request Unit Sharing" /></a></li> <li><em>BulkExecutor library:</em> a .NET library to do data changes in bulk. This was a previous pain point because you had to roll your own solution and there was no difference in the backend. With this new library now we have an official way to do bulk changes and also the Cosmos Db team estimates as much as a 10x throughput improvement. Great stuff!</li> </ul> <p><strong>Always Secure</strong><br /> Security and compliance have become a continuous improvement process for our customers. Cloud providers are forced to keep up to prevent clients from simply being forced to take their services somewhere else over compliance reasons. 
Here, there were some really good announcements as well:</p> <ul> <li><em>Cosmos DB on VNet endpoint:</em> yes this is Cosmos DB related but again it shows Microsoft&#8217;s commitment to continue moving away from public endpoints and into secure internal service connections.</li> <li><em>Global VNet peering:</em> you can now peer VNets across Azure regions transparently. Again this is not only more secure but also allows for bigger, scalable application architectures. For example, microservices that can transparently and securely cross VNet boundaries.</li> <li><em>Azure DDOS protection:</em> built-in capability of Azure to protect against Distributed Denial of Service attacks. It&#8217;s a wild west out there on the Internet where someone can rent a botnet and try to bring down any public site. Anything providers can do to kill this type of attack is great. <a ref="magnificPopup" href="https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/7976ade1-1db1-437f-b04f-9dcc2130bfdd.png"><img src="https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/7976ade1-1db1-437f-b04f-9dcc2130bfdd.png" alt="Azure DDOS protection" /></a></li> <li><em>Confidential Compute platform:</em> in tandem with the latest security improvements on Intel Xeon processors, Azure is rolling out a new set of VMs (DC series) that will be able to do compute on Trusted Execution Environments. Basically, your data remains encrypted at all points, even down to the CPU level, where the Intel hardware extensions can do computation on it. This means the cloud provider has absolutely zero visibility of your data at any point in time. Many industries will be moving in this direction and I&#8217;m glad to see Azure stepping forward first.</li> </ul> <p><strong>New Services</strong></p> <p>There are two new services I want to highlight here:</p> <ul> <li><em>Azure CDN:</em> Azure now has a first party service for content delivery all over the world. 
This was provided previously as a third party offering, but like other services (MySQL DbaaS for example), there were likely several customers that wanted this to be officially provided by Microsoft. And of course, other cloud providers have this as a core offering so competitiveness plays a role as well.<a ref="magnificPopup" href="https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/00695393-a018-49f3-b1d2-7e7fd59ec3c5.png" target="_blank" rel="noopener"><img src="https://azurecomcdn.azureedge.net/mediahandler/acomblog/media/Default/blog/00695393-a018-49f3-b1d2-7e7fd59ec3c5.png" alt="Azure CDN" /></a></li> <li><em>Azure Blockchain Workbench:</em> blockchain is everywhere and nowhere at the same time. What I mean by that is that there is large industry interest on what type of new applications will be built on distributed ledger technology, but there is not a large pool of experienced developers in this area to make all these ideas a reality. Microsoft is working to fix this by providing a turnkey solution that will allow you to deploy, manage and code against an existing blockchain platform (Ethereum, Hyperledger, etc) easily all using APIs that abstract the infrastructure complexity away. Basically, trying to bring blockchain development to the masses. I&#8217;m a big proponent of distributed ledger systems where there is a good use case for them and I&#8217;m excited about lowering the barrier of entry for building one.</li> </ul> <p><strong><br /> Service Improvements</strong><br /> A lot of existing services also got some well deserved announcements:</p> <ul> <li><em>Azure Standard Load Balancer:</em> the capabilities of the load balancing service have been greatly expanded. 
There is better VNet support and a big scale improvement, with up to 1,000 VMs supported in the backend load-balanced pool.</li> <li><em>Azure Event Hubs Kafka endpoint:</em> it&#8217;s no secret that Kafka is becoming ubiquitous in most data platform architectures due to its robust message processing scalability and extensibility. To capitalize on this, Microsoft has taken its existing Event Hubs service and built a compatibility layer for Kafka. This means your existing applications will be using Event Hubs, but they will get a Kafka endpoint and will think they are talking to a Kafka cluster. This is an interesting move to entice people to migrate to a fully managed, simpler service instead of running Kafka on your own or through HDInsight.</li> <li><em>Azure Functions improvements:</em> Microsoft knows serverless computing is a big deal, and they also know that AWS has a big head start here with their Lambda service. As such, there are big things happening both with Azure Functions and Azure Event Grid: better monitoring, better diagnostics, and support for long-running stateful operations &#8211; all of which open the service to more use cases.</li> <li><em>Cognitive Services improvements:</em> there is just too much here to list in this blog post. You can see the full list <a href="https://azure.microsoft.com/en-us/blog/microsoft-empowers-developers-with-new-and-updated-cognitive-services/" target="_blank" rel="noopener">here</a>. I want to highlight it though because it just shows that Microsoft is very serious about their AI investment, and their approach is to bundle most AI research and operations inside easy-to-use APIs. The goal, of course, is to have developers add AI capabilities to their applications without having to be AI specialists themselves.
On top of this, Microsoft is opening up their research pipeline for developers to play around with what they have cooking on their <a href="https://labs.cognitive.microsoft.com/" target="_blank" rel="noopener">Cognitive Services Labs</a>.<img class="alignnone size-full wp-image-104122" src="https://blog.pythian.com/wp-content/uploads/cognitivelabs.png" alt="" width="1444" height="690" srcset="https://blog.pythian.com/wp-content/uploads/cognitivelabs.png 1444w, https://blog.pythian.com/wp-content/uploads/cognitivelabs-465x222.png 465w, https://blog.pythian.com/wp-content/uploads/cognitivelabs-350x167.png 350w" sizes="(max-width: 1444px) 100vw, 1444px" /></li> </ul> <p>Again, this was just a roundup of my favorite announcements; however, there was a lot more to get excited about. If you want to read about all of it, make sure to head over to the <a href="https://azure.microsoft.com/en-us/blog/" target="_blank" rel="noopener">official Azure Blog</a> and check out the very large amount of new announcement content. I hope you are as excited as I am about what we will be able to build together in the near future. Cheers!</p> </div></div> Warner Chaves https://blog.pythian.com/?p=104121 Fri May 11 2018 11:13:12 GMT-0400 (EDT) Skip Scan 3 https://jonathanlewis.wordpress.com/2018/05/11/skip-scan-3/ <p>If you&#8217;ve come across any references to the &#8220;index skip scan&#8221; operation for execution plans you&#8217;ve probably got some idea that this can appear when the number of distinct values for the first column (or columns &#8211; since you can skip multiple columns) is small.
If so, what do you make of this demonstration:</p> <pre class="brush: plain; title: ; notranslate">
rem
rem     Script:  skip_scan_cunning.sql
rem     Author:  Jonathan Lewis
rem     Dated:   May 2018
rem

begin
        dbms_stats.set_system_stats('MBRC',16);
        dbms_stats.set_system_stats('MREADTIM',10);
        dbms_stats.set_system_stats('SREADTIM',5);
        dbms_stats.set_system_stats('CPUSPEED',1000);
end;
/

create table t1
nologging
as
with generator as (
        select rownum id
        from   dual
        connect by level &lt;= 1e4 -- &gt; comment to avoid WordPress format issue
)
select
        rownum                  id,
        rownum                  id1,
        rownum                  id2,
        lpad(rownum,10,'0')     v1,
        lpad('x',150,'x')       padding
/*
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
*/
from
        generator       v1,
        generator       v2
where
        rownum &lt;= 1e6 -- &gt; comment to avoid WordPress format issue
;

create index t1_i1 on t1(id1, id2);

begin
        dbms_stats.gather_table_stats(
                ownname     =&gt; user,
                tabname     =&gt; 'T1',
                method_opt  =&gt; 'for all columns size 1'
        );
end;
/
</pre> <p>For repeatability I&#8217;ve set some system statistics, but if you&#8217;ve left the system stats to default you should see the same effect. All I&#8217;ve done is create a table and an index on that table. The way I&#8217;ve defined the <em><strong>id1 </strong></em>and <em><strong>id2</strong></em> columns means they could individually support unique constraints and the index clearly has 1 million distinct values for <em><strong>id1</strong></em> in the million index entries.
So what execution plan do you think I&#8217;m likely to get from the following simple query:</p> <pre class="brush: plain; title: ; notranslate">
set serveroutput off
alter session set statistics_level = all;

prompt  =======
prompt  Default
prompt  =======

select  id
from    t1
where   id2 = 999
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));
</pre> <p>You&#8217;re probably not expecting an index skip scan to appear, but given the title of this posting you may have a suspicion that it will; so here&#8217;s the plan I got running this test on 12.2.0.1:</p> <pre class="brush: plain; title: ; notranslate">
SQL_ID  8r5xghdx1m3hn, child number 0
-------------------------------------
select id from t1 where id2 = 999

Plan hash value: 400488565

-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |  2929 (100)|      1 |00:00:00.17 |    2932 |      5 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 |  2929   (1)|      1 |00:00:00.17 |    2932 |      5 |
|*  2 |   INDEX SKIP SCAN                   | T1_I1 |      1 |      1 |  2928   (1)|      1 |00:00:00.17 |    2931 |      4 |
-----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(&quot;ID2&quot;=999)
       filter(&quot;ID2&quot;=999)
</pre> <p>So, an index skip scan doesn&#8217;t require a small number of distinct values for the first column of the index (unless you&#8217;re running a version older than 11.2.0.2 where a code change appeared that could be disabled by setting fix_control 9195582 off).</p> <p>When the optimizer doesn&#8217;t do what you expect it&#8217;s
always worth hinting the code to follow the plan you were expecting &#8211; so here&#8217;s the effect of hinting a full tablescan (which happened to do direct path reads):</p> <pre class="brush: plain; title: ; notranslate">
SQL_ID  bxqwhsjwqfm7q, child number 0
-------------------------------------
select /*+ full(t1) */ id from t1 where id2 = 999

Plan hash value: 3617692013

----------------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |  3317 (100)|      1 |00:00:00.12 |   25652 |  25635 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |      1 |  3317   (3)|      1 |00:00:00.12 |   25652 |  25635 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(&quot;ID2&quot;=999)
</pre> <p>Note that the cost is actually higher than the cost of the indexed access path.  For reference you need to know that the <em><strong>blocks</strong></em> statistic for the table was 25,842 while the number of index leaf blocks was 2,922. The latter figure (combined with a couple of other details regarding the <em><strong>clustering_factor</strong></em> and undeclared uniqueness of the index) explains why the cost of the skip scan was only 2,928: the change that appeared in 11.2.0.2 limited the I/O cost of an index skip scan to the total number of leaf blocks in the index.
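The arithmetic behind the two I/O costs can be sketched roughly as follows &#8211; a back-of-envelope Python sketch using the figures quoted in this post (25,842 table blocks, 2,922 leaf blocks, and the system statistics set at the top), not Oracle&#8217;s exact internal formula; the remaining gap to the reported costs is essentially the CPU component:

```python
import math

# Figures quoted in the post (assumptions taken from its output)
blocks      = 25842    # table blocks statistic
leaf_blocks = 2922     # index leaf blocks
mbrc        = 16       # MBRC system statistic
mreadtim    = 10.0     # multi-block read time (ms)
sreadtim    = 5.0      # single-block read time (ms)

# Tablescan I/O cost: number of multi-block reads needed to scan the
# table, scaled by the ratio of multi-block to single-block read time.
tablescan_io = math.ceil(blocks / mbrc) * (mreadtim / sreadtim)

# Post-11.2.0.2, the skip scan I/O cost is capped at the number of
# index leaf blocks (plus a handful of root/branch and table visits).
skip_scan_io = leaf_blocks

print(tablescan_io)   # ~3232 -- close to the reported 3317 (rest is CPU)
print(skip_scan_io)   # 2922  -- close to the reported 2928
```

With these numbers the capped skip scan comes out cheaper than the tablescan, which is why the optimizer picks it.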
The tablescan cost (with my system stats) was basically my table block count divided by 16 (to get the number of multi-block reads) and then doubled (because the multiblock read time is twice the single block read time).</p> <p>As a quick demo of how older versions of Oracle would behave after setting <strong><em>&#8220;_fix_control&#8221;=&#8217;9195582:OFF&#8217;</em></strong>:</p> <pre class="brush: plain; title: ; notranslate">
SQL_ID  bn0p9072w9vfc, child number 1
-------------------------------------
select /*+ index_ss(t1) */ id from t1 where id2 = 999

Plan hash value: 400488565

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        | 1001K(100) |      1 |00:00:00.13 |    2932 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |      1 | 1001K  (1) |      1 |00:00:00.13 |    2932 |
|*  2 |   INDEX SKIP SCAN                   | T1_I1 |      1 |      1 | 1001K  (1) |      1 |00:00:00.13 |    2931 |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(&quot;ID2&quot;=999)
       filter(&quot;ID2&quot;=999)
</pre> <p>The cost of the skip scan is now a little over 1,000,000 &#8211; corresponding (approximately) to the 1 million index probes that will have to take place. You&#8217;ll notice that the number of buffer visits recorded is 2931 for the index operation, though: this is the result of the run-time optimisation that keeps buffers pinned very aggressively for skip scan &#8211; you might expect to see a huge number of visits recorded as &#8220;buffer is pinned count&#8221;, but for some reason that doesn&#8217;t happen.
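The shape of the old costing can be approximated as one index probe per distinct value of the skipped column &#8211; again a hedged back-of-envelope sketch, not Oracle&#8217;s exact formula:

```python
# Sketch of the pre-11.2.0.2 (uncapped) skip scan cost model: price one
# probe of "id1 = :b1 and id2 = 999" (root and branch pinned, so roughly
# one leaf-block visit each) and multiply by ndv(id1).
ndv_id1        = 1_000_000   # id1 is effectively unique across 1M rows
cost_per_probe = 1           # approximate cost of a single index probe

legacy_skip_scan_cost = ndv_id1 * cost_per_probe
print(legacy_skip_scan_cost)  # 1000000 -- in line with the 1001K in the plan
```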
The cost is essentially Oracle calculating (with pinned root and branch) the cost of <em>&#8220;id1 = {constant} and id2 = 999&#8221;</em> and multiplying by <em><strong>ndv(id1)</strong></em>.</p> <h3>Footnote:</h3> <p>Ideally, of course, the optimizer ought to work out that an index fast full scan followed by a table access ought to have a lower cost &#8211; using multi-block reads rather than walking the index in leaf block order one block at a time (which is what this particular skip scan will have to do) &#8211; but that&#8217;s not (yet) an acceptable execution plan, though it does now appear as a plan for deleting data.</p> <h3>tl;dr</h3> <p>If you have an index that is very much smaller than the table, you may find examples where the optimizer does what appears to be an insanely stupid index skip scan when you were expecting a tablescan or, possibly, some other less efficient index to be used. There is a rationale for this, but such a plan may be much more CPU- and read-intensive than it really ought to be.</p> <p>&nbsp;</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18278 Fri May 11 2018 09:26:56 GMT-0400 (EDT) dynamic linesize in 18.1 https://laurentschneider.com/wordpress/2018/05/dynamic-linesize-in-18-1.html <p>Whenever you select and describe in sqlplus it looks ugly</p> <p>default: pagesize 14 linesize 80<br /> <img src="https://laurentschneider.com/wp-content/uploads/2018/05/lin-def.png"/></p> <p>change the default: it is often too large or too narrow<br /> <img src="https://laurentschneider.com/wp-content/uploads/2018/05/lin-2000.png"/><br /> <img src="https://laurentschneider.com/wp-content/uploads/2018/05/lin-60.png"/></p> <p>Let&#8217;s try WINDOW in sqlplus 18.1, which is available for download on Solaris / Linux / Windows<br /> <pre><code>
SQL&gt; set lin window
SQL&gt; sho lin
linesize 95 WINDOW
SQL&gt; sho pages
pagesize 86
</code></pre></p> <p>And look at the result:<br /> <img
src="https://laurentschneider.com/wp-content/uploads/2018/05/lin-window.png"/></p> <p>Almost perfect <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I think it would be nicer if DESC didn&#8217;t have this empty line<br /> <img src="https://laurentschneider.com/wp-content/uploads/2018/05/lin-94.png"/></p> <p>The cool thing with SET LINESIZE WINDOW is that it is dynamic (as I tested with CMD.EXE/windows and XTERM/Linux). If your window is too narrow, you make it bigger, re-run your select, and it looks nicer </p> <p>but&#8230; pagesize cannot be set to NON-WINDOW<br /> <pre><code>
SQL&gt; set lin window
SQL&gt; set pages 50000
SQL&gt; sho pages
pagesize 21
SQL&gt;
</code></pre></p> Laurent Schneider https://laurentschneider.com/?p=2599 Fri May 11 2018 07:49:32 GMT-0400 (EDT) Oracle Code : Warsaw – The Journey Begins http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/SIxUj3lx4ZI/ <p><a href="https://developer.oracle.com/code/warsaw"><img class="size-full wp-image-7058 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2017/04/oracle_code.png" alt="" width="188" height="189" /></a>For a change it was a normal wake-up time for me. The advantage of flying late morning is you don&#8217;t have to get up so early. The disadvantage is the traffic. I left an hour earlier than usual, just to make sure, and it paid off. I missed some of the traffic, but there were some questionable decisions by my taxi driver. He seemed like a nice guy, but his SatNav was taking us on a rather strange route, and when he chose to ignore it, it seemed to be for all the wrong reasons, like he was speaking on his phone and missing the turn&#8230; Despite the long time and erratic route, the price was the same as normal.
Odd&#8230; <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Having started off super early I arrived in plenty of time, so much so that the Brussels Airlines desk wasn&#8217;t open. Despite this delay, I managed to get through security pretty quickly, grabbed some food and a drink and parked at a table for about 90 minutes to do some work.</p> <p>The first flight of the day was Birmingham to Brussels. We took off on time and it took about 55 minutes, so no drama there. The lady in the seat behind had a really shrill laugh, which I couldn&#8217;t block out with headphones. I noticed a number of people turning to look, so I wasn&#8217;t the only person it was annoying.</p> <p>I had a 2.5 hour stop at Brussels, so not surprisingly I got the laptop out etc.</p> <p>The 2 hour flight from Brussels to Warsaw was delayed a little, but it didn&#8217;t make much difference to our arrival time. I was meant to wait for <a href="https://twitter.com/brendantierney">Brendan</a> to get a taxi, but instead Brendan was waiting for me. We got an Uber to the hotel, then it was pretty much time for the speaker dinner. I was going to duck out of this, but got persuaded. It was a good evening. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>As I mentioned in a <a href="https://oracle-base.com/blog/2018/05/02/oracle-code-warsaw-poland/">previous post</a> on the subject, I had agreed to do a second presentation to fill an empty slot. I went through that presentation a couple of days previously and wasn&#8217;t happy with it, so I spent the evening doing some work to tailor it more to the Oracle Code audience, who are not all Oracle techies&#8230;</p> <p>Tomorrow (probably today when this gets released) is <a href="https://developer.oracle.com/code/warsaw">Oracle Code : Warsaw</a>.
See you there!</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/05/11/oracle-code-warsaw-the-journey-begins/">Oracle Code : Warsaw &#8211; The Journey Begins</a> was first posted on May 11, 2018 at 5:44 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/SIxUj3lx4ZI" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8019 Fri May 11 2018 00:44:54 GMT-0400 (EDT) VirtualBox 5.2.12 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/dHt6Wwex4dU/ <p><img class="size-full wp-image-4959 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2015/05/virtualbox.jpg" alt="" width="129" height="145" /><a href="https://www.virtualbox.org/">VirtualBox</a> 5.2.12 has been released for some platforms.</p> <p>The <a href="https://www.virtualbox.org/wiki/Downloads">downloads</a> and <a href="https://www.virtualbox.org/wiki/Changelog#2">changelog</a> are in the usual places.</p> <p>I did the upgrade to my MacBook Pro in Brussels Airport and I&#8217;ve just done the upgrade to my Windows 10 PC at work.</p> <p>The upgrade went fine on macOS, but I ran into a little glitch with the Windows 10 upgrade. The upgrade itself seemed successful, but no VMs would run once the upgrade was complete. <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I did an uninstall, followed by an install again, then everything was fine on Windows 10 too.</p> <p>I mostly use <a href="https://www.vagrantup.com/">Vagrant</a> for managing my VMs these days, so my first reaction was it was a Vagrant issue, but it wasn&#8217;t. No drama though. 
All working now&#8230; <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Happy upgrading.</p> <p>Cheers</p> <p>Tim…</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/05/10/virtualbox-5-2-12/">VirtualBox 5.2.12</a> was first posted on May 10, 2018 at 10:04 pm.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/dHt6Wwex4dU" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8025 Thu May 10 2018 17:04:32 GMT-0400 (EDT) Cloudscape Episode 4: April 2018 roundup of the key AWS, GCP and Azure updates https://blog.pythian.com/cloudscape-episode-4-april-2018-roundup-key-aws-gcp-azure-updates/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>On this month’s edition of the Cloudscape Podcast we are once again joined by our expert panel of Warner Chaves, Greg Baker and John Laham. Our guests will be helping us through all the developments, news and updates from the world of the cloud. We have loads of new things to cover in the episode with Google, Microsoft and Amazon all adding a hefty amount of products and updates to already existing services. Warner takes us through Microsoft’s SQL Server, Data Factory, Blob Storage, SQL Database and Data Warehouse and the ins and outs of each area. John gives us the lowdown on Google and Kubernetes, unpacking Stack Driver and looking at the ways in which Google is leading the pack in terms of transparency. Greg shares his insights on Amazon’s many advancements and shows us just how helpful the new Cost Explorer service will be amongst many additional developments. 
So if you want the latest from the cloud, tune in and get it all!</p> <p><strong>Key points from this episode:</strong></p> <p>• The latest updates to SQL Server.<br /> • Common operating systems for running AWS and why.<br /> • Some information on AWS’ new Cost Explorer.<br /> • Warner takes us through the newest improvements in Azure Data Factory.<br /> • Kubernetes&#8217; latest updates for the month.<br /> • The pay as you go pricing model for Stack Driver on Kubernetes.<br /> • Comparing Stack Driver with other similar services and looking at integration.<br /> • The price model for Stack Driver and Cloud Watch.<br /> • Improvements in capacity and price on Azure Blob Storage.<br /> • AWS’ new Open Data Registry called RODA, which was just announced.<br /> • Greg shares the AWS blockchain developments.<br /> • Two new changes on Azure SQL Database.<br /> • The steps Google is taking towards better transparency and operational visibility.<br /> • Amazon’s new convenient multi-account developments.<br /> • Amplify from AWS and its new increased functionality.<br /> • The changes Microsoft has made to SQL Data Warehouse.<br /> • Google’s compute engine enhancements and what this means for reliability.<br /> • Google’s advancements in communicable AI.<br /> • The team’s thoughts on device scanners for documents.<br /> • And much more!</p> <p>Links Mentioned in Today’s Episode:</p> <p><a href="https://www.linkedin.com/in/gregbaker2/">Greg Baker</a><br /> <a href="https://pythian.com/experts/john-laham/">John Laham</a><br /> <a href="https://mvp.microsoft.com/en-us/PublicProfile/5001385?fullName=Warner%20Chaves">Warner Chaves</a><br /> <a href="https://www.ubuntu.com/">Ubuntu</a><br /> <a href="https://aws.amazon.com/about-aws/whats-new/2017/12/introducing-amazon-linux-2/">Linux Candidate Two</a><br /> <a href="https://www.redhat.com/en">Red Hat</a><br /> <a href="https://www.centos.org/">CentOS</a><br /> <a href="https://xubuntu.org/">Xubuntu</a><br /> <a
href="https://linuxmint.com/">Mint</a><br /> <a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/">AWS Cost Explorer</a><br /> <a href="https://azure.microsoft.com/en-us/services/data-factory/">Azure Data Factory</a><br /> <a href="https://kubernetes.io/">Kubernetes </a><br /> <a href="https://www.cloudfoundry.org/">Cloud Foundry</a><br /> <a href="https://www.openshift.com/">OpenShift</a><br /> <a href="https://www.openservicebrokerapi.org/">Open Service Broker</a><br /> <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/monitoring">Stack Driver Kubernetes Monitoring</a><br /> <a href="https://www.datadoghq.com/">Datadog</a><br /> <a href="https://aws.amazon.com/cloudwatch/">CloudWatch</a><br /> <a href="https://azure.microsoft.com/en-us/services/storage/blobs/">Blob Storage</a><br /> <a href="https://evernote.com/">Evernote</a><br /> <a href="https://www.onenote.com/?public=1&amp;wdorigin=ondcauth2&amp;wdorigin=ondc">One Note</a><br /> <a href="https://aws.github.io/aws-amplify/">AWS Amplify</a><br /> <a href="https://graphql.org/learn/">GraphQL</a><br /> <a href="https://aws.amazon.com/appsync/">App Sync</a><br /> <a href="https://azure.microsoft.com/en-us/services/sql-data-warehouse/">Azure SQL Data Warehouse</a><br /> <a href="https://getrocketbook.com/">Rocketbook</a></p> <p><img class="alignnone wp-image-104116" src="https://blog.pythian.com/wp-content/uploads/cloudscape-episode-4.jpg-2.jpg" alt="" width="600" height="300" srcset="https://blog.pythian.com/wp-content/uploads/cloudscape-episode-4.jpg-2.jpg 800w, https://blog.pythian.com/wp-content/uploads/cloudscape-episode-4.jpg-2-465x233.jpg 465w, https://blog.pythian.com/wp-content/uploads/cloudscape-episode-4.jpg-2-350x175.jpg 350w" sizes="(max-width: 600px) 100vw, 600px" /></p> </div></div> Chris Presley https://blog.pythian.com/?p=104108 Thu May 10 2018 13:33:54 GMT-0400 (EDT) Be the Change You Want to See In the (Tech) World 
http://dbakevlar.com/2018/05/be-the-change-you-want-to-see-in-the-tech-world/ <blockquote><p>The only thing that stays the same is change&#8230;.</p></blockquote> <p>As the time flies by and the world turns, I often am jarred back to reality when I discover how the little things we do can have a larger impact on the world around us.</p> <p>Flying back from the Interop ITX and StarEast conferences last week, I was reminded of this.  While reading the latest copy of <a href="https://www.wired.com/">Wired</a>, I came across a story on <a href="http://www.designtechhighschool.org/">D-Tech High School</a>.  This is the school that resides on the Oracle headquarters campus, in its own building(s). The story interviewed different students to discuss how they were making an impact on the technical world and how STEM schools like D-Tech were changing the future.  What struck me was that the lead-in picture and student interview was with a young woman I&#8217;d worked with two years earlier as part of the Oracle Education Foundation.</p> <p>When I was at Oracle, I&#8217;d volunteered some of my time to writing and teaching one class&#8217;s curriculum and technical content using Python and Raspberry Pis.  There wasn&#8217;t a school onsite at this time.  In fact, they&#8217;d announced the campus build at a ceremony just a few weeks before the two-week onsite program at Oracle.  Of the 24 students in our class, only four were female.  That I was the one who wrote the curriculum and instruction on writing code was something the Oracle Education Foundation highlighted to these female students.  Unlike the other three students, Vani stood out because she already knew what she wanted and had found her &#8220;voice&#8221;.
Young women are less likely to be assertive due to cultural expectations, and I recognized right off that Vani was already beyond that.</p> <p style="text-align: center;"><a href="http://dbakevlar.com/2018/05/be-the-change-you-want-to-see-in-the-tech-world/vani_suresh/" rel="attachment wp-att-7943"><img class="alignnone wp-image-7943" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/05/vani_suresh.jpg?resize=408%2C544" alt="" width="408" height="544" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/05/vani_suresh.jpg?resize=768%2C1024 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/05/vani_suresh.jpg?resize=225%2C300 225w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/05/vani_suresh.jpg?w=1300 1300w" sizes="(max-width: 408px) 100vw, 408px" data-recalc-dims="1" /></a></p> <p>At the end of the first week, they had me present to the class on why I had chosen a career in technology, what led me to success, and why I give back to the younger generation.  As this is a topic I feel very passionately about, it resonated with the students who may not have previously considered being part of the technical industry outside of what&#8217;s commonly presented.  I was a real-world example of a woman in technology.</p> <p>I refuse to take any credit for what this young woman has accomplished or what she will do, but I do hope that, by setting an example as a deep technical specialist and a strong technical mentor, I gave her one more example to draw from.  With the challenges in retaining women in technology, we often discuss the &#8220;death by a 1000 pin pricks&#8221; of small slights, oversights and cultural impacts that deter women from staying in the industry.  I discovered a long time ago that small, positive changes appear to have the most significant ability to inoculate against reaching the dreaded 1,000.
This was the focus of the talk I gave to those students &#8211; offering positive inspiration that a job doesn&#8217;t have to just be a job and that the benefits outweigh any drawbacks.</p> <p><strong>It also reminded me of what I felt most passionate about in my career.</strong></p> <ul> <li>deep technical challenges and pursuits</li> <li>giving the next generation the opportunity to discover a passion for technology</li> <li>using data to offer answers to what challenges us today</li> </ul> <p>Although I have a number of skills that companies appreciate and need to help them be successful, I am also aware of what gives me the greatest satisfaction.  I had distinct goals put in front of me when I started, and as these goals have been reached or surpassed, I&#8217;ve made the careful decision to take on a new opportunity outside of Delphix.</p> <p><strong>This new opportunity offers me the following:</strong></p> <ul> <li>A new technology focus to challenge me.</li> <li>Work with the technical education (K-12) sector.</li> <li>Using the data from schools and institutions to help our next generation succeed.</li> </ul> <p><span style="font-size: 18pt;"><em><strong>I will start at Microsoft on May 28th, as a TSP in the AI and Power BI team for Tech Ed.</strong></em>  </span></p> <p>I will continue to speak at events, blog and write articles, but my technical focus will change towards doing more with your data vs. how to get your data where it needs to be to take advantage of it.  If you&#8217;re in the BI space, save me a spot, I&#8217;m coming to join you.  If you&#8217;re into AI, you know that AI can&#8217;t do much without data to learn from.  This is a huge new world and I&#8217;m thrilled to become part of it.</p> <p>I want to thank those who recommended me for the position and those who believed in me; you know how excited I am to be taking on this challenge.
I&#8217;m now working hard, transitioning with Delphix to make sure the momentum to make Delphix a household name continues.  Too many companies are challenged by how to accomplish the next big thing and how to make data part of that success.</p> <p>&nbsp;</p> <br><br><img src="https://i2.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/ico-tag.png?w=650" border="0" align="absmiddle" data-recalc-dims="1"> Tags:&nbsp;&nbsp;<a href="http://dbakevlar.com/tag/delphix/" rel="tag">Delphix</a>, <a href="http://dbakevlar.com/tag/microsoft/" rel="tag">Microsoft</a>, <a href="http://dbakevlar.com/tag/new-job/" rel="tag">New Job</a>, <a href="http://dbakevlar.com/tag/wit/" rel="tag">WIT</a><hr style="color:#EBEBEB" /><small>Copyright © <a href="http://dbakevlar.com">DBA Kevlar</a> [<a href="http://dbakevlar.com/2018/05/be-the-change-you-want-to-see-in-the-tech-world/">Be the Change You Want to See In the (Tech) World</a>], All Right Reserved.
2018.</small><br> dbakevlar http://dbakevlar.com/?p=7941 Thu May 10 2018 09:55:18 GMT-0400 (EDT) Orientation to Cassandra Nodetool https://blog.pythian.com/orientation-cassandra-nodetool/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Nodetool is a broadly useful tool for managing Cassandra clusters. A large percentage of questions concerning Cassandra can easily be answered with a nodetool function.</p> <p>Having been developed over time by a diverse open source community, the nodetool commands can seem at first glance to be defined within a minimally consistent syntax. On closer inspection, the individual commands can be organized into several overlapping buckets.</p> <p>The first grouping consists of commands to view (<i>get</i>) or change (<i>set</i>) configuration variables. An example pair is <i>getlogginglevels</i> and <i>setlogginglevel</i>. By default, logging is set to INFO, midway in the available range of ALL, TRACE, DEBUG, INFO, WARN, ERROR, and OFF. Running <i>nodetool getlogginglevels </i>will display the currently set value.</p> <p>Other get/set (sometimes prefixed as <i>enable</i>/<i>disable</i>) commands can be set either at startup or while Cassandra is running. For example, incremental backups can be enabled in the startup configuration file cassandra.yaml by setting <i>incremental_backups=true</i>. Alternatively, they can be started or stopped using nodetool, with the commands <i>nodetool enablebackup</i> and <i>nodetool disablebackup</i>. In general, though, most configuration values are either set in startup configuration files or set dynamically using nodetool; there is little overlap.</p> <p>Several nodetool commands can be used to get insight into status of the Cassandra node, cluster, or even data. Two very basic informational commands are <i>nodetool status</i> and <i>nodetool info</i>. <i>Nodetool status</i> provides a brief output of node state (up, down, joining cluster, etc.), IP addresses, and datacenter location. 
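The get/set and informational commands described above can be sketched as a short session. This is a minimal sketch, assuming a running Cassandra node with <code>nodetool</code> on the PATH; the logger name passed to <code>setlogginglevel</code> is illustrative:

```shell
# View current logger levels (defaults to INFO)
nodetool getlogginglevels

# Raise verbosity for one logger at runtime; reverts on restart
nodetool setlogginglevel org.apache.cassandra DEBUG

# Toggle incremental backups without editing cassandra.yaml
nodetool enablebackup
nodetool statusbackup        # check whether incremental backups are enabled
nodetool disablebackup

# Basic node and cluster health
nodetool status              # per-node state, load, tokens, datacenter
nodetool info                # memory utilization, uptime, caches
```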
<i>Nodetool info</i> provides a more detailed view of key status variables. It is a convenient way to see memory utilization, for example.</p> <p>Although the tool is named <i>nodetool</i>, not all commands apply to nodes. For example, <i>nodetool describecluster</i> provides information about the cluster &#8212; snitch and partitioner type, name, and schema versions. For another example, <i>nodetool netstats </i>provides information about communication among nodes.</p> <p>Nodetool is useful not only for basic configuration and information; it is also a powerful tool for cluster operations and data management. The operations tasks of shutting down a node within a cluster or doing maintenance on a live node are made easier with commands like <i>nodetool drain </i>(flushes writes from memory to disk and shuts off connections, so the commitlog need not be replayed on restart) and <i>nodetool disablegossip </i>(makes the node invisible to the cluster). Data management tasks are made easier with commands like <i>nodetool repair</i> to sync data among nodes (perhaps due to missed writes across the cluster) and <i>nodetool garbagecollect</i> to remove deleted data.</p> <p>Now that I have provided an orientation to nodetool, in future posts I will describe how to combine various information, set/get, and management commands to do common tasks such as backups, performance tuning, and upgrades.</p> <hr /> <p>Learn more about <a href="https://pythian.com/cassandra-consulting/">Pythian services for Cassandra</a>.</p> </div></div> Valerie Parham-Thompson https://blog.pythian.com/?p=104030 Thu May 10 2018 09:12:09 GMT-0400 (EDT) ROAD TO RUSSIA 2018 http://noriegaaoracleexpert.blogspot.com/2018/05/camino-rusia-2018.html <div dir="ltr" style="text-align: left;" trbidi="on"><div style="font-family: &quot;Helvetica Neue&quot;; font-stretch: normal; line-height: normal; text-align: center;"><b><span style="color: yellow; font-size: x-large;">THE CAFETEROS VERY OPTIMISTIC ABOUT REACHING NEW HEIGHTS</span></b></div><div 
class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-UlflSaF4joo/WvObidodnuI/AAAAAAAATvM/vQ8ofR-SCYg78YoMIvabpTowbAO6KRZ9wCLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.34.01%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="580" data-original-width="850" height="436" src="https://2.bp.blogspot.com/-UlflSaF4joo/WvObidodnuI/AAAAAAAATvM/vQ8ofR-SCYg78YoMIvabpTowbAO6KRZ9wCLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.34.01%2BPM.png" width="640" /></a></div>
<div style="font-family: &quot;Helvetica Neue&quot;; font-stretch: normal; line-height: normal;"><b><span style="color: red;">First-round schedule, Russia 2018 (Group Stage)</span></b></div>
<div><b><span style="color: #f3f3f3;">Group A</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Thursday, June 14 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Russia</b> vs <b>Saudi Arabia</b>, in Moscow (Luzhniki Stadium)</span></li>
<li><span style="color: #f3f3f3;">Friday, June 15 (5 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Egypt</b> vs <b>Uruguay</b>, in Yekaterinburg (Ekaterinburg Arena)</span></li>
<li><span style="color: #f3f3f3;">Tuesday, June 19 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Russia</b> vs <b>Egypt</b>, in Saint Petersburg (Saint Petersburg Stadium)</span></li>
<li><span style="color: #f3f3f3;">Wednesday, June 20 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Uruguay</b> vs <b>Saudi Arabia</b>, in Rostov-on-Don (Rostov Arena)</span></li>
<li><span style="color: #f3f3f3;">Monday, June 25 (6 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Uruguay</b> vs <b>Russia</b>, in Samara (Samara Arena)</span></li>
<li><span style="color: #f3f3f3;">Monday, June 25 (5 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Saudi Arabia</b> vs <b>Egypt</b>, in Volgograd (Volgograd Arena)</span></li>
</ul>
<div><b><span style="color: #f3f3f3;">Group B</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Friday, June 15 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Portugal</b> vs <b>Spain</b>, in Sochi (Fisht Stadium)</span></li>
<li><span style="color: #f3f3f3;">Friday, June 15 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Morocco</b> vs <b>Iran</b>, in Saint Petersburg (Saint Petersburg Stadium)</span></li>
<li><span style="color: #f3f3f3;">Wednesday, June 20 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Iran</b> vs <b>Spain</b>, in Kazan (Kazan Arena)</span></li>
<li><span style="color: #f3f3f3;">Wednesday, June 20 (3 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Portugal</b> vs <b>Morocco</b>, in Moscow (Luzhniki Stadium)</span></li>
<li><span style="color: #f3f3f3;">Monday, June 25 (8 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Spain</b> vs <b>Morocco</b>, in Kaliningrad (Kaliningrad Stadium)</span></li>
<li><span style="color: #f3f3f3;">Monday, June 25 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Iran</b> vs <b>Portugal</b>, in Saransk (Mordovia Arena)</span></li>
</ul>
<div><b><span style="color: #f3f3f3;">Group C</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Saturday, June 16 (1 p.m. local time / 6 a.m. ET / 3 a.m. PT): <b>France</b> vs <b>Australia</b>, in Kazan (Kazan Arena)</span></li>
<li><span style="color: #f3f3f3;">Saturday, June 16 (7 p.m. local time / 12 p.m. ET / 9 a.m. PT): <b>Peru</b> vs <b>Denmark</b>, in Saransk (Mordovia Arena)</span></li>
<li><span style="color: #f3f3f3;">Thursday, June 21 (8 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>France</b> vs <b>Peru</b>, in Yekaterinburg (Ekaterinburg Arena)</span></li>
<li><span style="color: #f3f3f3;">Thursday, June 21 (4 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Denmark</b> vs <b>Australia</b>, in Samara (Samara Arena)</span></li>
<li><span style="color: #f3f3f3;">Tuesday, June 26 (5 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Denmark</b> vs <b>France</b>, in Moscow (Luzhniki Stadium)</span></li>
<li><span style="color: #f3f3f3;">Tuesday, June 26 (5 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Australia</b> vs <b>Peru</b>, in Sochi (Fisht Stadium)</span></li>
</ul>
<div><a href="https://www.cnet.com/es/imagenes/conoce-el-balon-oficial-de-la-copa-mundial-de-rusia-2018/" target="_blank"><span style="color: #f3f3f3;"><img alt="fifa-world-cup-2018-balon-oficial.jpg" /></span></a></div>
<div><a href="https://www.cnet.com/es/imagenes/conoce-el-balon-oficial-de-la-copa-mundial-de-rusia-2018/" target="_blank"><b><span style="color: #f3f3f3;">25</span></b></a></div>
<div><a href="https://www.cnet.com/es/imagenes/conoce-el-balon-oficial-de-la-copa-mundial-de-rusia-2018/" target="_blank"><b><span style="color: #f3f3f3;">See the official ball of the 2018 World Cup in Russia [photos]</span></b></a></div>
<div><b><span style="color: #f3f3f3;">Group D</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Saturday, June 16 (4 p.m. local time / 9 a.m. ET / 6 a.m. PT): <b>Argentina</b> vs <b>Iceland</b>, in Moscow (Spartak Stadium)</span></li>
<li><span style="color: #f3f3f3;">Saturday, June 16 (9 p.m. local time / 3 p.m. ET / 12 p.m. PT): <b>Croatia</b> vs <b>Nigeria</b>, in Kaliningrad (Kaliningrad Stadium)</span></li>
<li><span style="color: #f3f3f3;">Thursday, June 21 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Argentina</b> vs <b>Croatia</b>, in Nizhny Novgorod (Nizhny Novgorod Stadium)</span></li>
<li><span style="color: #f3f3f3;">Friday, June 22 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Nigeria</b> vs <b>Iceland</b>, in Volgograd (Volgograd Arena)</span></li>
<li><span style="color: #f3f3f3;">Tuesday, June 26 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Iceland</b> vs <b>Croatia</b>, in Rostov-on-Don (Rostov Arena)</span></li>
<li><span style="color: #f3f3f3;">Tuesday, June 26 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Nigeria</b> vs <b>Argentina</b>, in Saint Petersburg (Saint Petersburg Stadium)</span></li>
</ul>
<div><b><span style="color: #f3f3f3;">Group E</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Sunday, June 17 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Brazil</b> vs <b>Switzerland</b>, in Rostov-on-Don (Rostov Arena)</span></li>
<li><span style="color: #f3f3f3;">Sunday, June 17 (4 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Costa Rica</b> vs <b>Serbia</b>, in Samara (Samara Arena)</span></li>
<li><span style="color: #f3f3f3;">Friday, June 22 (8 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Serbia</b> vs <b>Switzerland</b>, in Kaliningrad (Kaliningrad Stadium)</span></li>
<li><span style="color: #f3f3f3;">Friday, June 22 (3 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Brazil</b> vs <b>Costa Rica</b>, in Saint Petersburg (Saint Petersburg Stadium)</span></li>
<li><span style="color: #f3f3f3;">Wednesday, June 27 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Serbia</b> vs <b>Brazil</b>, in Moscow (Spartak Stadium)</span></li>
<li><span style="color: #f3f3f3;">Wednesday, June 27 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Switzerland</b> vs <b>Costa Rica</b>, in Nizhny Novgorod (Nizhny Novgorod Stadium)</span></li>
</ul>
<div><b><span style="color: #f3f3f3;">Group F</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Sunday, June 17 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Germany</b> vs <b>Mexico</b>, in Moscow (Luzhniki Stadium)</span></li>
<li><span style="color: #f3f3f3;">Monday, June 18 (3 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Sweden</b> vs <b>Korea</b>, in Nizhny Novgorod (Nizhny Novgorod Stadium)</span></li>
<li><span style="color: #f3f3f3;">Saturday, June 23 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Korea</b> vs <b>Mexico</b>, in Rostov-on-Don (Rostov Arena)</span></li>
<li><span style="color: #f3f3f3;">Saturday, June 23 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Germany</b> vs <b>Sweden</b>, in Sochi (Fisht Stadium)</span></li>
<li><span style="color: #f3f3f3;">Wednesday, June 27 (7 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Mexico</b> vs <b>Sweden</b>, in Yekaterinburg (Ekaterinburg Arena)</span></li>
<li><span style="color: #f3f3f3;">Wednesday, June 27 (5 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Korea</b> vs <b>Germany</b>, in Kazan (Kazan Arena)</span></li>
</ul>
<div><b><span style="color: #f3f3f3;">Group G</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Monday, June 18 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Belgium</b> vs <b>Panama</b>, in Sochi (Fisht Stadium)</span></li>
<li><span style="color: #f3f3f3;">Monday, June 18 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Tunisia</b> vs <b>England</b>, in Volgograd (Volgograd Arena)</span></li>
<li><span style="color: #f3f3f3;">Saturday, June 23 (3 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Belgium</b> vs <b>Tunisia</b>, in Moscow (Spartak Stadium)</span></li>
<li><span style="color: #f3f3f3;">Sunday, June 24 (3 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>England</b> vs <b>Panama</b>, in Nizhny Novgorod (Nizhny Novgorod Stadium)</span></li>
<li><span style="color: #f3f3f3;">Thursday, June 28 (8 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>England</b> vs <b>Belgium</b>, in Kaliningrad (Kaliningrad Stadium)</span></li>
<li><span style="color: #f3f3f3;">Thursday, June 28 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Panama</b> vs <b>Tunisia</b>, in Saransk (Mordovia Arena)</span></li>
</ul>
<div><b><span style="color: #f3f3f3;">Group H</span></b></div>
<ul>
<li><span style="color: #f3f3f3;">Tuesday, June 19 (6 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Poland</b> vs <b>Senegal</b>, in Moscow (Spartak Stadium)</span></li>
<li><span style="color: #f3f3f3;">Tuesday, June 19 (3 p.m. local time / 8 a.m. ET / 5 a.m. PT): <b>Colombia</b> vs <b>Japan</b>, in Saransk (Mordovia Arena)</span></li>
<li><span style="color: #f3f3f3;">Sunday, June 24 (8 p.m. local time / 11 a.m. ET / 8 a.m. PT): <b>Japan</b> vs <b>Senegal</b>, in Yekaterinburg (Ekaterinburg Arena)</span></li>
<li><span style="color: #f3f3f3;">Sunday, June 24 (9 p.m. local time / 2 p.m. ET / 11 a.m. PT): <b>Poland</b> vs <b>Colombia</b>, in Kazan (Kazan Arena)</span></li>
<li><span style="color: #f3f3f3;">Thursday, June 28 (6 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Senegal</b> vs <b>Colombia</b>, in Samara (Samara Arena)</span></li>
<li><span style="color: #f3f3f3;">Thursday, June 28 (5 p.m. local time / 10 a.m. ET / 7 a.m. PT): <b>Japan</b> vs <b>Poland</b>, in Volgograd (Volgograd Arena)</span></li>
</ul>
<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-t_vp09Q4uUo/WvObny-oPWI/AAAAAAAATvY/r5HzXuMYydwiQ_D42EVCUBPLqMGtTReIACLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B9.02.13%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1030" data-original-width="1600" height="412" src="https://1.bp.blogspot.com/-t_vp09Q4uUo/WvObny-oPWI/AAAAAAAATvY/r5HzXuMYydwiQ_D42EVCUBPLqMGtTReIACLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B9.02.13%2BPM.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-6NKbgbCuXVI/WvObxXpp-ZI/AAAAAAAATvg/CADNFCg3RkQ6_u-5TlMwE-cZU8fspB0UgCLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.34.01%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="580" data-original-width="850" height="436" src="https://2.bp.blogspot.com/-6NKbgbCuXVI/WvObxXpp-ZI/AAAAAAAATvg/CADNFCg3RkQ6_u-5TlMwE-cZU8fspB0UgCLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.34.01%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-9EVAuKOMMYo/WvObxd8cHRI/AAAAAAAATvc/Xf3Bj9bwzC0x9DOKLbgf8N8DZeqBSw-pACLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.35.15%2BPM.png" imageanchor="1" 
style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="881" data-original-width="1600" height="352" src="https://1.bp.blogspot.com/-9EVAuKOMMYo/WvObxd8cHRI/AAAAAAAATvc/Xf3Bj9bwzC0x9DOKLbgf8N8DZeqBSw-pACLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.35.15%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-7Mw-i5fJXSo/WvObxVSdqkI/AAAAAAAATvk/MVlgiTk2R5EDfFOM1VaF9juQQd06041MgCLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.43.15%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1128" data-original-width="1338" height="538" src="https://1.bp.blogspot.com/-7Mw-i5fJXSo/WvObxVSdqkI/AAAAAAAATvk/MVlgiTk2R5EDfFOM1VaF9juQQd06041MgCLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.43.15%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-PPQkZfa-MSE/WvObx2AhsYI/AAAAAAAATvo/nfo5Q79avIU4NQXcwY1aguDcOG7sHn7-gCLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.47.27%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1466" data-original-width="1600" height="586" src="https://2.bp.blogspot.com/-PPQkZfa-MSE/WvObx2AhsYI/AAAAAAAATvo/nfo5Q79avIU4NQXcwY1aguDcOG7sHn7-gCLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.47.27%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-mKeFtxoDLPA/WvObyq4sibI/AAAAAAAATvw/RZDaKhstIhAStQtMG99o9wQTUr9WcrFCACLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.50.40%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="988" data-original-width="1518" height="416" 
src="https://1.bp.blogspot.com/-mKeFtxoDLPA/WvObyq4sibI/AAAAAAAATvw/RZDaKhstIhAStQtMG99o9wQTUr9WcrFCACLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.50.40%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-fGUnXMZEwpQ/WvOby0lB-QI/AAAAAAAATv0/mutzmb5R_O0O13TNxFIIeggqeAGyLkXZQCLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.51.44%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1182" data-original-width="1560" height="484" src="https://1.bp.blogspot.com/-fGUnXMZEwpQ/WvOby0lB-QI/AAAAAAAATv0/mutzmb5R_O0O13TNxFIIeggqeAGyLkXZQCLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.51.44%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-DMwOkM7Fr3U/WvObywZMZrI/AAAAAAAATv4/wTQ_mUfZvsUSIhmfffDkR4uqkTIMfs5cACLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.55.14%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1184" data-original-width="1378" height="549" src="https://3.bp.blogspot.com/-DMwOkM7Fr3U/WvObywZMZrI/AAAAAAAATv4/wTQ_mUfZvsUSIhmfffDkR4uqkTIMfs5cACLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.55.14%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-hlcskxbzgFA/WvObzv7lhEI/AAAAAAAATv8/AhjOLcUp-DEYKmxQzQ6ZbDeUkeW3VPJcACLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B8.59.26%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="389" data-original-width="1600" height="154" src="https://1.bp.blogspot.com/-hlcskxbzgFA/WvObzv7lhEI/AAAAAAAATv8/AhjOLcUp-DEYKmxQzQ6ZbDeUkeW3VPJcACLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B8.59.26%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://1.bp.blogspot.com/-GesPjghS4zs/WvObz9Uo5WI/AAAAAAAATwA/_ild1crl5hgnwZJWwTX35xwah38TnVFhACLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B9.00.18%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1054" data-original-width="1600" height="420" src="https://1.bp.blogspot.com/-GesPjghS4zs/WvObz9Uo5WI/AAAAAAAATwA/_ild1crl5hgnwZJWwTX35xwah38TnVFhACLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B9.00.18%2BPM.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-9XmzrnKr89g/WvOb0HrPn-I/AAAAAAAATwE/0v7QLHNvWggLTdFshURMdo3wdTQOkf9cgCLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B9.02.13%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1030" data-original-width="1600" height="412" src="https://3.bp.blogspot.com/-9XmzrnKr89g/WvOb0HrPn-I/AAAAAAAATwE/0v7QLHNvWggLTdFshURMdo3wdTQOkf9cgCLcBGAs/s640/Screen%2BShot%2B2018-05-09%2Bat%2B9.02.13%2BPM.png" width="640" /></a></div></div> Anthony D Noriega tag:blogger.com,1999:blog-4535123449935735221.post-6594176457469427576 Wed May 09 2018 21:13:00 GMT-0400 (EDT) Cloudscape Podcast 2 in Review – A Deeper Dive into the Latest in Microsoft Azure https://blog.pythian.com/cloudscape-podcast-2-review-deeper-dive-latest-microsoft-azure/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">I recently joined </span><a href="https://pythian.com/experts/chris-presley/"><span style="font-weight: 400;">Chris Presley</span></a><span style="font-weight: 400;"> on episode 2 of his </span><a
href="https://blog.pythian.com/?s=cloudscape"><span style="font-weight: 400;">Cloudscape</span></a><span style="font-weight: 400;"> podcast to share what is new in the world of Microsoft Azure, and we discussed the following:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Cosmos DB’s Default Encryption</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Azure VNet Endpoints for SQL DB</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Azure Event Grid GA Release</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Azure M-Series VMs</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Cosmos DB Graph API GA Release</span></li> </ul> <p><b>Cosmos DB&#8217;s Default Encryption</b></p> <p><span style="font-weight: 400;">We started out by discussing Cosmos DB’s default encryption at rest. As you may know, Cosmos DB, Azure SQL DB and Azure SQL Data Warehouse are all encrypted at rest by default. And it is not just the default: Microsoft has decided that data-engine services must be encrypted, with no option to opt out. This is likely necessary for Microsoft because of liability and compliance concerns, but it is also positive for clients.</span></p> <p><span style="font-weight: 400;">Given the importance and increasing coverage of data-security regulations like the GDPR, anything the provider can do to help customers be compliant is a good idea. Of course, this is still a shared responsibility, so while the service is encrypted at rest, it is still up to the client to mask or encrypt sensitive fields in the database itself.</span></p> <p><b><br /> Azure VNet Endpoints for SQL DB</b></p> <p><span style="font-weight: 400;">The lack of VNet endpoints has been a thorn in the side of Azure SQL Database, specifically for those of us who are data professionals trying to migrate people to the service. The reason?
For a very long time, Azure did not have a way to restrict access to your Azure SQL database to only clients inside a particular virtual network. And this was a big gap when compared to AWS, for example.</span></p> <p><span style="font-weight: 400;">Amazon already has full support for VPC, so you can make an Amazon Relational Database Service (RDS) instance accessible only from a specific VPC</span><span style="font-weight: 400;">—</span><span style="font-weight: 400;">Azure SQL Database did not have this functionality until now. Previously, it was all public endpoints protected by firewall IP rules. As you can imagine, this is very cumbersome to manage because if any IP changes, you’d have to go back and refresh your rules and deal with compliance. For some clients, this was a non-starter because their security standards didn’t allow for public endpoints, even with a firewall.</span></p> <p><span style="font-weight: 400;">The VNet endpoint lets you associate a particular VNet with an Azure SQL Database. Any resource inside the VNet can see the Azure SQL Database. You can also filter it down further, so only specific resources inside that VNet can touch a particular Azure SQL Database. If you choose to do so, you can then kill all Internet access to the database.</span></p> <p><b><br /> Azure Event Grid GA Release</b></p> <p><span style="font-weight: 400;">Event Grid is an event-routing service, similar to Amazon’s Simple Notification Service (SNS). Many people use Amazon SNS for the same purpose, but the difference with Event Grid is that it’s more tightly coupled directly into the services</span><span style="font-weight: 400;">—</span><span style="font-weight: 400;">so you don’t need to use an intermediate service to hook it up.
</span></p> <p><span style="font-weight: 400;">The main way to go serverless in Azure is through Azure Functions, but rather than doing it this way, they plan to integrate the services that consume directly into Event Grid so you can use a service like Logic App. </span></p> <p><span style="font-weight: 400;">For those not familiar with Logic App, it does serverless computing through a Microsoft graphical interface. Rather than having direct functions, this would take the event and forward it to Logic App. This way, routine operations that require very simple compute can be achieved entirely without code, using Event Grid and a Logic App, for example.</span></p> <p><span style="font-weight: 400;">To date, I haven’t seen a lot of widespread use of Logic App because, in my opinion, many people don’t realize what the service can do or how powerful it really is, but like all of these services, they take a while to reach wide adoption. </span></p> <p><span style="font-weight: 400;">Overall, I think a service like Event Grid will make it even easier to start using other services. This is about making serverless more powerful in Azure and increasing the capacity of what you can build without deploying infrastructure.</span></p> <p>&nbsp;</p> <p><b>Azure M-Series Virtual Machines</b></p> <p><span style="font-weight: 400;">Azure announced the M-Series, which is the biggest virtual machine you can now get in Azure compute. This machine will give you 128 vCPUs and up to four terabytes of RAM. You can hook up eight network cards and get up to 30 gigabits of bandwidth. Its IO bandwidth is in the 160,000 IOPS range. It’s a massive, expensive, and powerful machine.</span></p> <p><span style="font-weight: 400;">The purpose of this machine seems to be to let people run really big in-memory database workloads or to entice people to host SAP HANA on Azure.
There is a lot of competitive pressure, because providers will continue ramping up the largest VM that you can put in the cloud, which I think is good for consumers in the end.</span></p> <p>&nbsp;</p> <p><b>Cosmos DB Graph API GA Release</b></p> <p><span style="font-weight: 400;">The thing with graph databases is that they seem to have stayed in the realm of academic exercises for the most part. Even though the modeling and the semantics are powerful, when it’s time to move into production, we have performance, high availability, encryption, and security to consider.</span></p> <p><span style="font-weight: 400;">This is what Cosmos DB is trying to streamline for Graph. If you need to design something to represent relationships in social networks, hardware topologies, or routing solutions, a graph is more natural to interact with than relational. Put it on top of Cosmos DB and you immediately get the HA, geo-replication, elasticity, etc.</span></p> <p><span style="font-weight: 400;">The reason people have been building these systems on relational databases is that relational is the default hammer everybody reaches for, but it doesn’t necessarily lend itself to really good modeling for these types of solutions. It also doesn’t lend itself to solving some of the native graph problems like route traversals and path optimization. For those problems, you usually end up with a monster SQL query that nobody really understands.</span></p> <p><span style="font-weight: 400;">The key point here is that Cosmos DB gives you a graph data-modeling experience, but at the same time it maintains all the production-grade capabilities built into Cosmos DB, such as replication, request units, and encryption. Hopefully, this will enable a new generation of graph applications.
This is something we don’t see very often, so I’m hoping that we’ll see adoption.</span></p> <p>&nbsp;</p> <p><span style="font-weight: 400;">***</span></p> <p><span style="font-weight: 400;">This was a summary of the Azure topics we discussed during the podcast. Chris also welcomed </span><a href="https://www.linkedin.com/in/gregbaker2/"><span style="font-weight: 400;">Greg Baker</span></a><span style="font-weight: 400;"> (Amazon Web Services) and </span><a href="https://pythian.com/experts/john-laham/"><span style="font-weight: 400;">John Laham</span></a><span style="font-weight: 400;"> (Google Cloud Platform), who discussed topics related to their areas of expertise. </span></p> <p>&nbsp;</p> <p><span style="font-weight: 400;">Listen to the full conversation <a href="https://blog.pythian.com/cloudscape-podcast-episode-2-february-2018-roundup-key-aws-gcp-azure-updates/">here</a> and be sure to subscribe to the podcast to be notified when a new episode is released.</span></p> </div></div> Warner Chaves https://blog.pythian.com/?p=104085 Wed May 09 2018 13:30:55 GMT-0400 (EDT) Conference review Percona Live Santa Clara 2018 https://blog.pythian.com/conference-review-percona-live-santa-clara-2018/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Percona Live Santa Clara, an annual event where open-source database users, developers and enthusiasts come together, was held in April at the Santa Clara convention centre.
Pythian was well represented once more with no fewer than five presentations and a total of nine attendees.</p> <p><img class="alignnone size-full wp-image-104093" src="https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1-1.jpg" alt="" width="3998" height="1815" srcset="https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1-1.jpg 3998w, https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1-1-465x211.jpg 465w, https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1-1-350x159.jpg 350w" sizes="(max-width: 3998px) 100vw, 3998px" /></p> <p>This year the conference was condensed to two days of breakout sessions and one day of tutorials. Though it was shorter in length, the organizers broadened their horizons by including not only MySQL and MongoDB tracks, but this year they even put together a full PostgreSQL track. Moving from MySQL only to multiple technologies inspired this year&#8217;s tagline: a polyglot persistence conference. The increase in the number of sessions allowed for a lot more options, but the condensed schedule made it much harder to choose which sessions to attend!</p> <p>My observation from <a href="https://blog.pythian.com/percona-live-europe-2017-review/">last year&#8217;s European conference in Dublin</a>, that ProxySQL and ClickHouse were hot topics, held true again in 2018. ProxySQL was the winner of the <a href="http://mysqlawards.org/mysql-community-awards-2018-the-winners/">MySQL Community Award</a> for the second year in a row. René Cannaò, ProxySQL&#8217;s main author, was present to accept the award and confirmed his commitment to making the software even more feature-rich than it already is. We look forward to the upcoming 2.0 release.</p> <p>The community dinner is traditionally held on the Tuesday night of the conference. Pythian is, and has been for many years now, the proud organizer of this fine event.
Our open-source director, Derek Downey, had arranged for a limited-edition t-shirt for every attendee. The venue was again Pedro’s restaurant, where a Mexican buffet was waiting for us. We had a good turnout and good fun. Olé!</p> <p><img class="alignnone wp-image-104092 size-full" src="https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1.jpg" alt="" width="3236" height="1765" srcset="https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1.jpg 3236w, https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1-465x254.jpg 465w, https://blog.pythian.com/wp-content/uploads/Image-uploaded-from-iOS-1-350x191.jpg 350w" sizes="(max-width: 3236px) 100vw, 3236px" /></p> <p>Last but not least, Percona announced that they will say goodbye to Santa Clara and move the conference to another location next year. They kept the destination to themselves, but the guesswork in the community is in full swing. I’ve heard suggestions from Hawaii to Toronto, and even Cleveland showed up on the list. We’ll have to wait and see what it is going to be.</p> <p>But before we do, next up is <a href="https://www.percona.com/blog/2018/04/05/percona-live-europe-2018-save-the-date/">Percona Live Europe</a>, which will be held in Frankfurt, Germany, November 5-8 this year.
Pythian is looking forward to meeting you there!</p> </div></div> Matthias Crauwels https://blog.pythian.com/?p=104090 Wed May 09 2018 12:03:04 GMT-0400 (EDT) Developing for the Oracle Database https://orastory.wordpress.com/2018/05/09/developing-for-the-oracle-database/ <p>1 Coding Standards<br /> This document does not claim to offer a prescriptive guide on the minutiae of coding standards.</p> <p>Coding standards and naming conventions, particularly in SQL and PL/SQL, are an emotive and divisive subject.</p> <p>This is largely the fault of a historical absence of mature IDEs and a lack of any rigorous, generally accepted standards as seen with other languages.</p> <p>Many developers still hand-craft scripts and routines, and the various tools available often have differences, subtle or otherwise, in what built-in formatting they can offer.</p> <p>Good developers can adapt, good standards can adapt.</p> <p>The most important objectives for coding standards are to:<br /> • Make development faster and debugging easier<br /> • Make other people’s code easier to understand<br /> • Limit bugs</p> <p>The following expectations support these objectives:<br /> • Where possible use SQL before using PL/SQL<br /> • Code will be reasonably formatted and legible, preferably with a consistent style within the module at least.<br /> • It is preferred but not mandated that Oracle keywords be capitalized – e.g. CREATE OR REPLACE PACKAGE, SELECT, UPDATE, DELETE, FROM &#8211; and lowercase used for objects and columns, etc.<br /> • In SELECT statements, tables should be aliased – this provides a very small benefit to the optimizer when parsing but also prevents bugs, particularly in subqueries.<br /> • Procedures, cursors and variables, etc. should be properly scoped – e.g.
public vs private, global vs local, parameter scope, etc.<br /> • Meaningful names will be given for code items<br /> • Reasonable naming might include some sort of prefixed or suffixed indicator of purpose, e.g. k_ for constants, l_ or v_ for local variables, g_ for global variables, p_ for procedure, f_ for function, _pkg for package, i_ for in parameters, o_ for out parameters, io_ for in out parameters.<br /> • Package and procedure level comments should indicate why/when a particular program was changed, but SVN, or another code repository, is the appropriate mechanism for code control.<br /> • Code comments should be used when they add value.<br /> • Excessive commenting and stating the obvious should be avoided – such comments are often better handled by refactoring the code concerned into an appropriately named private routine (procedure/function), e.g. function f_format_swift_string.<br /> • CamelCase is not considered appropriate for the database, as identifiers are stored in the data dictionary in uppercase.<br /> • Package headers and bodies should be checked into separate files for clarity and to prevent unnecessary recompilation of unchanged code and dependencies (version dependent)<br /> • Performance should be built in and evidence of such documented.</p> <p>2 Writing Optimal SQL</p> <p>2.1 Key points<br /> Writing optimal SQL should be relatively simple, but many people struggle, particularly when making the transition from an object/attribute language to a set-based language like SQL.</p> <p>The key tenets of performant database code in Oracle are:<br /> • Think in sets.<br /> • Think about how the database might be able to process and apply logic to a set of data with great efficiency.<br /> • Ask the right question in the best way.<br /> • Know your data.</p> <p>In support, when thinking about database code and SQL operations:</p> <p>• If the query needs to be long/big, make it long/big.<br /> • Bulk operations are critical, row-by-row operations are a cardinal
performance sin.<br /> • Eliminate data at the earliest opportunity.<br /> • Sort on the smallest possible set – if possible avoid aggregations, sorting and distinct operations on the largest sets of data.<br /> • Use bind variables when you require shareable SQL and when bind variables make sense.<br /> • Use literals when literals make sense.<br /> • Use a mix of binds and literals if appropriate.<br /> • Avoid PL/SQL in SQL.<br /> • Be careful of applying functions (TRUNC, etc.) to columns in the WHERE clause.<br /> • User-defined functions which are called from SQL and which themselves contain SQL are, almost without exception, unacceptable.<br /> • Never rely on implicit datatype conversion. Use the correct datatypes for parameters and where possible convert parameters NOT columns.</p> <p>2.2 Thinking in Sets</p> <p>Crucial.</p> <p>For further reading on thinking in sets, see:<br /> <a href="http://explainextended.com/2009/07/12/double-thinking-in-sql/" rel="nofollow">http://explainextended.com/2009/07/12/double-thinking-in-sql/</a></p> <p>2.3 What’s the question?</p> <p>When writing SQL, focus on the question being asked by the SQL statement.</p> <p>If you put the question into words as a comment before a complex SQL statement, then this can often add value for the next developer.</p> <p>Often the most performant version of a SQL statement is the one which asks the question at hand in the most natural way.</p> <p>To this end, proper consideration needs to be given to:<br /> • Subqueries &#8211; EXISTS / IN / NOT EXISTS / NOT IN<br /> • Set-operators – MINUS, UNION, UNION ALL, INTERSECT<br /> • Use of DISTINCT – often an indication of a wrong SQL statement or poor design<br /> • Common Table Expressions:</p> <p>Often it can help to use Common Table Expressions (CTEs), aka the WITH clause, to separate the main logic of the query from the subsequent fetching of additional data/attributes, e.g.</p> <p>WITH main_logic AS<br /> (SELECT &#8230;<br /> FROM
&#8230;<br /> WHERE &#8230;)<br /> SELECT ml.*, x.this, y.that, z.something_else<br /> FROM main_logic ml<br /> , &#8230;<br /> WHERE &#8230;;</p> <p>2.4 To ANSI or Not To ANSI</p> <p>Another divisive subject is ANSI SQL vs Oracle join syntax.</p> <p>Again, this guide should not seek to be prescriptive on the preference of one over the other.</p> <p>The bottom line should be that if a developer finds it easier to write a correct and optimal SQL statement using one rather than the other, then that is what matters most.</p> <p>There are some SQL statement constructs which are more conveniently written in ANSI – the FULL OUTER JOIN, for example.</p> <p>It is also true that the optimizer always transforms ANSI SQL to the equivalent Oracle syntax, and there are some limitations to the optimizer’s other complex query transformations when using ANSI SQL.</p> <p>And unfortunately there are bugs in both.</p> <p>2.5 Eliminate Early</p> <p>Where there are predicates (WHERE clauses) which can significantly reduce the dataset early, check that they are being applied early enough in the execution plan (more information to follow), and check whether the SQL statement might be rephrased or reconstructed (CTE/WITH) to make sure they are applied at an appropriate stage.</p> <p>2.6 Sort / Aggregate on the smallest possible dataset</p> <p>This is similar to eliminating early.
Sorting and aggregating requires memory and, under certain conditions, can spill to expensive (unscalable) disk operations.<br /> Wherever possible, do the sort or aggregation on the smallest set of rows (not necessarily applicable to the ORDER BY clause of a query).</p> <p>2.7 What’s the big deal with PL/SQL functions called from SQL?</p> <p>The bottom line is that it’s about performance.</p> <p>We could get into a whole argument about reusability vs performance, but performance wins in the end.</p> <p>Often the correct mechanism for reusability in the Oracle database is not a function but a view joined appropriately in the main SQL.</p> <p>Functions cause a relatively expensive context switch between the SQL and PL/SQL engines.</p> <p>In the name of reusability, functions encourage row-by-row operations and discourage thinking in sets.</p> <p>If the function itself contains SQL, then this SQL will not be part of the read consistency mechanism of the calling statements, which can be problematic.</p> <p>If you absolutely have to, have to, have to use functions in SQL, then think again.</p> <p>If you really, really do, then please look at deterministic functions and consider wrapping the function call in a (select from dual) to expose the potential benefits of subquery caching for functions called with repeated parameters.</p> <p>2.8 What about simple functions like TRUNC in the WHERE clause?</p> <p>Using functions on columns in the WHERE clause can prevent the optimizer from using an index or from pruning a partition unless a function-based index is in place on the column.</p> <p>For this reason, it is often best to avoid this sort of construct:<br /> WHERE TRUNC(some_date_column) = TO_DATE('01-NOV-2013','DD-MON-YYYY')</p> <p>In favour of this:<br /> WHERE some_date_column &gt;= TO_DATE('01-NOV-2013','DD-MON-YYYY')<br /> AND some_date_column &lt; TO_DATE('02-NOV-2013','DD-MON-YYYY')</p> <p>2.9 Using the correct datatype, be explicit</p>
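The date-range rewrite in section 2.8 above can be checked empirically. As a rough sketch (using Python with SQLite standing in for Oracle, and SQLite's date() in place of TRUNC, purely so the example is self-contained; the table and index names are invented for illustration), the function-wrapped predicate forces a full scan while the equivalent range predicate can use the index:

```python
# Sketch: SQLite stands in for Oracle here (an assumption for portability);
# date() plays the role of TRUNC. The principle is identical: wrapping an
# indexed column in a function hides it from the index, while a plain range
# predicate on the bare column does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, placed_at TEXT, note TEXT)")
conn.execute("CREATE INDEX idx_orders_placed_at ON orders(placed_at)")
conn.executemany(
    "INSERT INTO orders (placed_at, note) VALUES (?, ?)",
    [("2013-11-01 09:%02d:00" % i, "n%d" % i) for i in range(60)],
)

def access_path(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the access path
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Function applied to the column: the index cannot be used -> full scan
bad = access_path("SELECT * FROM orders WHERE date(placed_at) = '2013-11-01'")
# Equivalent range predicate on the bare column: index range scan
good = access_path(
    "SELECT * FROM orders WHERE placed_at >= '2013-11-01' "
    "AND placed_at < '2013-11-02'")
print(bad)   # a full table scan
print(good)  # a search via idx_orders_placed_at
```

The same experiment against Oracle (comparing the TRUNC predicate with the range predicate via DBMS_XPLAN) shows the same shift from a full scan to an index range scan.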
<p>Performance problems related to using incorrect datatypes are common.</p> <p>The optimizer will implicitly add functions to make sure the datatypes on both sides of the predicate match.</p> <p>Always convert date-like parameters to DATEs where the column datatype is also DATE.</p> <p>Never rely on implicit datatype conversion.</p> <p>3 Execution Plans &amp; Metrics – What we want, why we want it and how to get it<br /> We have flown through some aspects of how to have a better chance of writing an optimal SQL statement.</p> <p>3.1 How can we tell if it’s optimal?</p> <p>Run it.</p> <p>Run it twice to rule out the effect of uncontrollable factors like OS caching, SAN caching, etc.</p> <p>Run it on representative data.</p> <p>Run it on current volumes.</p> <p>Run it on expected future volumes.</p> <p>Then what?</p> <p>In order to validate that our SQL statement is likely to perform well, what we want is the actual execution plan used and preferably the actual rowsource metrics.</p> <p>3.2 How?</p> <p>3.2.1 Serial (i.e. non-parallel) Execution Plans</p> <p>In general, the following is usually a good approach across a variety of tools – SQL Developer, Toad and SQL*Plus for example:<br /> alter session set statistics_level = all;<br /> -- bind setup<br /> var bind1 number<br /> exec :bind1 := …;<br /> &#8230;<br /> -- run target sql statement<br /> select ….<br /> -- fetch execution plan and metrics<br /> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));</p> <p>Then run as a script.</p> <p>Firstly, for getting actual execution metrics we can do one of two things prior to running the SQL statement concerned:<br /> 1. Add the /*+ gather_plan_statistics */ hint to the SQL, or<br /> 2.
In the same session, run alter session set statistics_level = all;</p> <p>Then run the target SQL and immediately afterwards run this select:</p> <p>select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));</p> <p>This is a convenient wrapper to get the execution plan and metrics from V$SQL_PLAN.</p> <p>The first parameter is SQL_ID and by passing in NULL, we default to the previous SQL_ID run in this session.</p> <p>The second parameter is CURSOR_CHILD_NO and this should be the child number of the previous SQL_ID.<br /> The third parameter is the FORMAT, and the 'ALLSTATS LAST' format says to get all statistics for the last execution.</p> <p>If this works, it should produce output which is examined in more detail in section 6.</p> <p>3.2.2 What if this doesn’t work?</p> <p>If you find you don’t have the privilege to run these commands – you need access to V$SESSION, for example, to use DBMS_XPLAN.DISPLAY_CURSOR – then you need to be granted it. There is no reason for that privilege not to be given.</p> <p>Otherwise the approach above is effective 90% of the time.</p> <p>For parallel execution plans, see section 3.2.3 below.</p> <p>However, specifically in SQL Developer, there are some recursive operations run by the tool, which means that SQL Developer runs some internal commands such that when our DBMS_XPLAN statement runs, the previous SQL_ID is no longer our target SQL statement.</p> <p>There is one such example in SQL Developer 3 related to timestamp columns which affects the test script when running everything as a script (F5). In this case, there are two alternatives. Firstly, run the individual commands in SQL Developer as Run Statement (F9 / Ctrl + Enter).
Alternatively, just comment out the timestamp columns in the SELECT part of the statement, for the purposes of this exercise.</p> <p>Furthermore, in SQL Developer 4 there are further changes to recursive operations which seem to affect some statements.</p> <p>In all such cases, if the output of DBMS_XPLAN.DISPLAY_CURSOR is not the execution plan of the statement being profiled, then the approach should be to identify the SQL statement in the shared pool (look for matching SQL_TEXT in V$SQL) and plug the specific SQL_ID into the first argument of the DBMS_XPLAN call (no need to rerun the target SQL statement).</p> <p>3.2.3 Parallel Execution Plans</p> <p>For parallel execution plans, the approach of using DBMS_XPLAN.DISPLAY_CURSOR with the format of 'ALLSTATS LAST' is not appropriate because it fetches the execution metrics from the last execution of the statement – which is the Query Coordinator (QC) – and does not include the metrics of the parallel slaves.</p> <p>A better approach for parallel execution plans is to use Real-Time SQL Monitoring, and the easiest way to do this is to run the following and capture the output report:<br /> select dbms_sqltune.report_sql_monitor('&lt;sql_id&gt;') from dual;</p> <p>This requires you to identify the SQL_ID of the target SQL statement from V$SQL (seek matching text in the SQL_TEXT/SQL_FULLTEXT columns).<br /> It may also require you to add the /*+ monitor */ hint to your SQL statement, as by default monitoring only kicks in on executions which last longer than a certain number of seconds (five, by default)
and for statements which are less than a certain length.</p> <p>3.2.4 When all else fails</p> <p>Fall back on SQL Trace.<br /> Alter session set sql_trace = true;<br /> SELECT….<br /> Alter session set sql_trace = false;</p> <p>This produces a trace file on the database server and the trace file name can be identified by:<br /> select * from v$diag_info where name = 'Default Trace File';</p> <p>This can be run through TKPROF to get the execution metrics, but TKPROF can also lie about the execution plan so this should be double-checked in V$SQL_PLAN or by using DBMS_XPLAN.</p> <p>In rare circumstances, and if all the above alternatives are unavailable or impractical for some reason, only then might EXPLAIN PLAN or AUTOTRACE be acceptable.</p> <p>For example, in any modern version of Oracle, you can do the following:<br /> explain plan for select…;<br /> select * from table(dbms_xplan.display);</p> <p>Now this is not useless but, for numerous reasons, EXPLAIN PLAN cannot be trusted and is not sufficient for our purposes.</p> <p>AUTOTRACE also does not tell the truth (because it itself relies on EXPLAIN PLAN).<br /> EXPLAIN PLAN is an estimate of what the execution plan for a SQL statement will be.</p> <p>It doesn’t peek at binds.</p> <p>It assumes all binds are VARCHAR2.</p> <p>3.3 Why are we doing this?</p> <p>We want this information documented as part of the change, attached to the Jira or whatever tool is used for change management, and included in any code review.</p> <p>The most effective mechanism for tuning SQL is “Tuning by Cardinality Feedback”: <a href="http://www.centrexcc.com/Tuning%20by%20Cardinality%20Feedback.pdf" rel="nofollow">http://www.centrexcc.com/Tuning%20by%20Cardinality%20Feedback.pdf</a></p> <p>This follows the principle that:<br /> “if an access plan is not optimal it is because the cardinality estimate for one or more of the row sources is grossly incorrect”<br /> and<br /> “the cbo (cost-based optimizer) does an excellent job of finding the best
access plan for a given sql provided it is able to accurately estimate the cardinalities of the row sources in the plan”</p> <p>By gathering the actual execution plan and the actual execution metrics, we can show whether the optimizer was accurate in its estimations. If it was accurate then, from a developer perspective and for the purposes of most code reviews, there is a good likelihood that the SQL is good enough for the optimizer to do a good job with.</p> <p>4 Interpretation of Execution Plans and Execution Metrics</p> <p>If we’ve been lucky, we should have the actual execution plan and the actual execution metrics.</p> <p>4.1 What are we looking for? How do we interpret it?</p> <p>Providing a thorough guide on how to interpret most variations of execution plans is beyond the scope of this guide, although we will provide a basic guide in Appendix A.</p> <p>Essentially, what we want to see in the execution metrics is that the optimizer’s estimates are broadly accurate.</p> <p>How accurate?</p> <p>In general, we shouldn’t necessarily be overly concerned until we get to a factor of 10x or even more.</p> <p>Estimated 100K rows, Actual 1M rows – probably not too bothered.</p> <p>Estimate of 1 row, Actual 10000 rows – likely to be significant inefficiencies in join order, join mechanism and/or access path.</p> <p>And when we are looking at estimates vs actuals, we need to consider the “Starts”, so what we are looking for is that “Starts * E-Rows” is in the right ballpark compared to “A-Rows”.
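This ballpark check is mechanical enough to sketch in a few lines of code. The helper below is illustrative only (it is not part of the original guide); the plan rows are modelled as simple dictionaries and the 10x threshold follows the rule of thumb above:

```python
def misestimated(plan_rows, factor=10):
    """Return plan line ids where Starts * E-Rows differs from A-Rows
    by more than the given factor."""
    flagged = []
    for row in plan_rows:
        est = row["starts"] * row["e_rows"]
        # Treat 0 as 1, mirroring the optimizer's minimum estimate of 1.
        ratio = max(est, 1) / max(row["a_rows"], 1)
        if ratio > factor or ratio < 1 / factor:
            flagged.append(row["id"])
    return flagged

# Accurate estimate (as in the guide's hash join example): nothing flagged.
good = [{"id": 2, "starts": 1, "e_rows": 10000, "a_rows": 10000}]
# Estimate of 1 vs actual 10000 (as in the nested loop example): flagged.
bad = [{"id": 3, "starts": 1, "e_rows": 1, "a_rows": 10000}]
print(misestimated(good), misestimated(bad))  # [] [3]
```

Multiplying by Starts before comparing is the important detail: an inner rowsource started 10000 times with E-Rows of 1 and A-Rows of 10000 is a perfectly accurate estimate.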
For more information, please see Appendix A.</p> <p>Here are a couple of examples:<br /> SQL_ID fst03j2p1czpb, child number 0<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> select * from t1 , t2 where t1.col1 = t2.col1<br /> Plan hash value: 1838229974</p> <p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> | 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.03 | 1172 |<br /> |* 1 | HASH JOIN | | 1 | 10000 | 10000 |00:00:00.03 | 1172 |<br /> | 2 | TABLE ACCESS FULL| T1 | 1 | 10000 | 10000 |00:00:00.01 | 576 |<br /> | 3 | TABLE ACCESS FULL| T2 | 1 | 10000 | 10000 |00:00:00.01 | 596 |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-</p> <p>Predicate Information (identified by operation id):<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;</p> <p>1 &#8211; access(&#8220;T1&#8243;.&#8221;COL1&#8243;=&#8221;T2&#8221;.&#8221;COL1&#8243;)</p> <p>In the above, the estimates are accurate so there is a very good chance that this is a good plan.</p> <p>Here’s another, this time not so good because the estimate of rows in T1 was 1 whereas the actual was 10000.</p> <p>This led the optimizer to choose an index access path over a full table scan and a NESTED LOOP rather than a HASH JOIN.<br /> SQL_ID 9quvuvkf8tzwj, child number 0<br /> 
&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> select /*+ cardinality(t1 1) */ * from t1 , t2 where t1.col1 =<br /> t2.col1</p> <p>Plan hash value: 931243032</p> <p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<br /> | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;<br /> | 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.04 | 12640 |<br /> | 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 | 12640 |<br /> | 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 | 2640 |<br /> | 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 | 596 |<br /> |* 4 | INDEX UNIQUE SCAN | SYS_C00446778 | 10000 | 1 | 10000 |00:00:00.01 | 2044 |<br /> | 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 | 10000 |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;</p> <p>Predicate Information (identified by operation id):<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;</p> <p>4 &#8211; access(&#8220;T1&#8243;.&#8221;COL1&#8243;=&#8221;T2&#8221;.&#8221;COL1&#8243;)</p> <p>4.2 Do I need to worry about things like NESTED LOOPS vs HASH JOINS?</p> <p>For the purposes of this exercise, no but the more knowledge the better.</p> <p>Accuracy of estimates should be sufficient.</p> <p>The remainder of the 
information should be attached to the change tool for review.</p> <p>5 Appendix A: Basic guide to reading an execution plan</p> <p>Using the following execution plan from a two table join:<br /> SQL_ID 9quvuvkf8tzwj, child number 0<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> select /*+ cardinality(t1 1) */ * from t1 , t2 where t1.col1 =<br /> t2.col1</p> <p>Plan hash value: 931243032</p> <p>&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> | 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.04 |<br /> | 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 |<br /> | 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 |<br /> | 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 |<br /> |* 4 | INDEX UNIQUE SCAN | SYS_C.. 
| 10000 | 1 | 10000 |00:00:00.01 |<br /> | 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-</p> <p>Predicate Information (identified by operation id):<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;</p> <p>4 &#8211; access("T1"."COL1"="T2"."COL1")</p> <p>There are four key elements:<br /> • The SQL statement<br /> • The SQL ID – a hash value of the SQL statement, usually consistent between databases and even across versions<br /> • The execution plan<br /> • The predicate section – not to be overlooked. It can highlight issues with implicit functions and datatype conversions amongst other things</p> <p>For the execution plan itself there are a number of elements to be concerned with:<br /> • Optimizer – all modern versions of Oracle use the Cost-based Optimizer (CBO). This uses statistics and cost calculations to choose a best-cost plan for execution.</p> <p>• Cost – Cost is an estimated indicator of time which the optimizer uses to compare execution plan possibilities, usually choosing the lowest-cost plan. However, to all intents and purposes, developers should ignore it.</p> <p>• Cardinality – An estimate of the number of rows for a particular rowsource, for a particular join, etc. Exposed in the execution plan as E-Rows for estimates and A-Rows for actuals. When comparing E-Rows to A-Rows it is important to take Starts into account, i.e. to compare “Starts * E-Rows” to A-Rows. Nested loop operations, for example, will have multiple starts for the inner/probed rowsource.</p> <p>• Parent:child operations – An execution plan is generally a succession of parent:child operations – follow and match the indentation.
A join mechanism should have two children.</p> <p>• Join mechanism – A join mechanism joins two rowsources. There are a variety of mechanisms but in general there are two main methods, depending on the cardinalities:</p> <p>o NESTED LOOP – Essentially a FOR LOOP – for each row in the outer/driving rowsource, probe the inner/probed rowsource. Generally used for low cardinality rowsources.</p> <p>o HASH JOIN – Build a hash table from the outer/driving rowsource based on the join key(s), then probe it with rows from the inner rowsource. Generally used for high cardinality rowsources. If the cardinality estimate is too low, the work area sizes used for hashing may be too small and spill to temp space on disk – slow and unscalable.</p> <p>• Join order – Depending on the cardinalities, the optimizer can choose to join T1 to T2 or T2 to T1. The number of permutations for join order is N! where N is the number of tables being joined. The optimizer will limit itself to a maximum number of permutations to evaluate.</p> <p>• Access path – how the data is fetched from the table, i.e. by index via various different index access mechanisms or by tablescan, etc.</p> <p>• Buffers – A measure of logical IO.
See below.</p> <p>5.1 What happens first in the execution plan?<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> | 0 | SELECT STATEMENT | | 1 | | 10000 |00:00:00.04 |<br /> | 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 |<br /> | 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 |<br /> | 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 |<br /> |* 4 | INDEX UNIQUE SCAN | SYS_C.. | 10000 | 1 | 10000 |00:00:00.01 |<br /> | 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 |<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br /> Predicate Information (identified by operation id):<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;<br /> 4 &#8211; access("T1"."COL1"="T2"."COL1")</p> <p>There are a couple of exceptions but in general the execution plan starts at the first operation without a child.</p> <p>So, following the indentation, the first operation without a child is:<br /> | 3 | TABLE ACCESS FULL | T1 | 1 | 1 | 10000 |00:00:00.01 |</p> <p>This is the outer/driving rowsource of the parent operation at:<br /> | 2 | NESTED LOOPS | | 1 | 1 | 10000 |00:00:00.02 |</p> <p>For each row in this rowsource, we probe the inner rowsource:<br /> |* 4 | INDEX UNIQUE SCAN | SYS_C..
| 10000 | 1 | 10000 |00:00:00.01 |</p> <p>Which is actually an index lookup on the primary key using the predicate:<br /> 4 &#8211; access("T1"."COL1"="T2"."COL1")</p> <p>The data produced by this join is then used in the parent operation:<br /> | 1 | NESTED LOOPS | | 1 | | 10000 |00:00:00.04 |</p> <p>Which uses the rowids from the unique index/primary key for table T2 to get the actual table data from T2:<br /> | 5 | TABLE ACCESS BY INDEX ROWID| T2 | 10000 | 1 | 10000 |00:00:00.02 |</p> <p>5.2 Red flags?</p> <p>• Row estimates of 1:<br /> o The minimum row estimate is 1 and in some cases this actually means 0.<br /> o If this is not a primary key access and the actual row count really isn’t 0 or 1, then are there any statistics for this object?<br /> o Row estimates of 0/1 where the actual number of rows is significantly more than 1 can cause significant performance problems.</p> <p>• MERGE JOIN CARTESIAN + BUFFER SORT – particularly where the estimate is 1. Can be particularly detrimental if the actual rows are greater than 1. Rarely a good operation and can be symptomatic of a missing join condition.</p> <p>• Implicit datatype conversions.</p> <p>• Nested loop operations where the inner/probed table/rowsource is a FULL segment scan.</p> <p>• VIEW operations – symptomatic of a non-mergeable view, which may or may not be a problem.</p> <p>• FILTER operations where the row-by-row operation is significant.</p> <p>5.3 Is there anything else to look out for?</p> <p>Yes: the Buffers column is a measure of logical IO.</p> <p>When comparing different ways of doing things when tuning SQL, one of the key measures that should be targeted is a reduction in logical IO.</p> <p>If one approach uses significantly less logical IO compared to another approach then that is significant.
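One rough way to frame logical IO efficiency is buffer gets per row returned. This small sketch is not from the original guide, and the interpretations in the comments are rules of thumb rather than hard limits:

```python
def gets_per_row(buffer_gets, rows_returned):
    # Logical IO (consistent gets) per row returned by the query.
    return buffer_gets / max(rows_returned, 1)

# The well-estimated hash join example earlier: 1172 buffer gets for 10000 rows.
print(gets_per_row(1172, 10000))       # well under 1 get per row - efficient
# 100 rows from 100 million gets - a strong hint the access path is poor.
print(gets_per_row(100_000_000, 100))  # 1000000.0
```

As with the estimate check, the ratio is only a prompt for investigation; whether it matters depends on the business requirement and the query's impact on other users.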
The statement with the lower IO is likely to be better, is more likely to benefit from having more of the data it’s interested in cached and is less likely to impact other queries and the caching of other data.</p> <p>There should probably be a rule of thumb about the ratio of logical IO to rows fetched. The difficulty is picking the right indicators.</p> <p>If a query selects 100 rows from 100 million buffer gets and those all-important estimates are reasonably accurate, this should be a strong signal that perhaps the indexes are not optimal for that particular query.</p> <p>As a rule of thumb, a ratio of a couple of consistent gets or less per row is damn good. 100,000s or millions may well be an indicator of significant inefficiencies.</p> <p>But, as always, it depends.</p> <p>It also significantly depends on whether the query itself is fast enough for the business requirement and whether it has the potential to impact other users of the database.</p> <p>Furthermore, one lone query is unlikely to justify a new index but that is beyond the scope of this guide.</p> <p>5.4 Further Reading</p> <p>A 10053 Trace provides a very detailed walkthrough of the optimizer’s process of coming up with the execution plan. 
Not for the faint-hearted.<br /> <a href="http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-explain-the-explain-plan-052011-393674.pdf" rel="nofollow">http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-explain-the-explain-plan-052011-393674.pdf</a><br /> Jonathan Lewis: Cost-based Oracle Fundamentals: <a href="http://www.apress.com/9781590596364" rel="nofollow">http://www.apress.com/9781590596364</a></p> <p>6 Appendix B Interacting with the database<br /> The Oracle database is a very powerful piece of software.</p> <p>It’s also likely to be one of the most expensive pieces of software in an application’s tech stack.</p> <p>The keys to a performant Oracle database also differ significantly from those for other vendors.</p> <p>How you best approach something in Sybase or SQL Server is not necessarily how you should approach it in Oracle.</p> <p>One classic example is the use of temporary tables.</p> <p>Developers should know how to get the best out of a particular database.</p> <p>To treat it like a bit bucket or a “slow, dumb backup datastore” is to waste money and resources.</p> <p>6.1 Vendor-specific Database features</p> <p>Application developers should not be overly wary of using a feature particular to the Oracle database.
Some tools can make it difficult to use vendor-specific features or optimizations, but an investment of time and effort to do so can reap significant performance benefits.</p> <p>Whilst such wariness might be relevant for third-party product developers who have to write software that can be installed on different database vendors, this is largely not true of enterprises writing internal software systems.</p> <p>It is unlikely that the Oracle database on a particular system will be replaced by another vendor’s database.</p> <p>It is far more likely that a Java component interacting with the database will eventually be replaced by a C# component, or that the usage of the Oracle database will be deprecated in favour of caching and NoSQL technologies, so if you’re going to use SQL, use Oracle-specific features where they offer benefit.</p> <p>6.2 Round-tripping</p> <p>The default fetchsize for the Oracle JDBC driver is 10 (and SQL*Plus defaults to a similarly small arraysize of 15). These defaults are almost never appropriate for general usage, as many SQL statements can be expected to fetch significantly more than 10 rows, and therefore significant gains can be made by increasing this setting beyond the default.</p> <p>The issue is not only about roundtrips across the network; it’s also related to the logical IO that a query needs to do. If you ask for just 10 rows, the database will do all the IO it needs to do to fetch the first ten rows.
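The round-trip side of this is easy to quantify. This arithmetic sketch (not from the original article) simply assumes one network round trip per fetched batch of rows:

```python
import math

def round_trips(total_rows, fetchsize):
    # One network round trip per batch of fetched rows.
    return math.ceil(total_rows / fetchsize)

# Fetching 10,000 rows with the default fetchsize of 10:
print(round_trips(10_000, 10))   # 1000 round trips
# Raising the fetchsize to 500 cuts this dramatically:
print(round_trips(10_000, 500))  # 20 round trips
```

The logical IO side of the argument continues below: round trips are only part of the cost of a small fetchsize.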
When you ask for the next 10 rows, the server process on the database might well have to do a logical read of some of the same blocks as the previous fetch, which can lead to significant inefficiencies compared to a larger fetchsize.</p> <p>6.3 Abstraction &amp; Interfacing with the database</p> <p>Abstraction is a principle that is put on a pedestal in the middle tier and yet often abandoned when interacting with the database.</p> <p>Put simply, if SQL is embedded in Java code then this introduces unnecessary dependencies on the database model and limits the ability to make subtle changes to the SQL or to the model without changing the application server code and doing an app server release.</p> <p>Views, procedures and packages can all provide an interface to the database and the data model.</p> <p>6.4 It’s all about the data.</p> <p>Interacting with data appropriately, regardless of database vendor, is crucial.</p> <p>Think in Sets.</p> <p>Also consider the success of Engineered Systems like Oracle Exadata.</p> <p>One of the things that Exadata focuses on, for example, is being able to eliminate redundant data as early as possible.</p> <p>This means that the logic in the storage cells can eliminate the data before it even gets to the traditional database memory, before it goes anywhere near the network, long before it goes up to the application.</p> <p>And it can do this with significant degrees of parallelism, usually with far more efficiency than similar processing in application threads.</p> <p>Why is this relevant?</p> <p>Eliminate early.</p> <p>Let the database do the work it was designed to do.</p> <p>Applications should let the database give them the smallest set of data that they need and should not bring excessive amounts of data into the middle tier for elimination and aggregation there.</p> <p>Volumes of data are exploding.
The best chances of scaling efficiently to deal with these volumes of data are to interact with the data appropriately.</p> Dom Brooks http://orastory.wordpress.com/?p=2347 Wed May 09 2018 10:49:20 GMT-0400 (EDT) Tracking applied Patches in WebLogic Server outfile http://dirknachbar.blogspot.com/2018/05/tracking-applied-patches-in-weblogic.html With a small trick you can track your applied patches in your Oracle Software Home on your Oracle WebLogic Server in the outfile.<br /><br />Simply add&nbsp;-Dweblogic.log.DisplayPatchInfo=true to your already existing setUserOverrides.sh or create a new setUserOverrides.sh in your $DOMAIN_HOME/bin directory.<br /><br /><pre class="brush:bash"> # Display applied patches in WebLogic Server outfile<br />JAVA_OPTIONS="$JAVA_OPTIONS -Dweblogic.log.DisplayPatchInfo=true "<br /></pre><br />After that just restart your WebLogic Server and you will find in the outfile of your WebLogic Server following entries:<br /><br /><pre class="brush:bash"> # Snippet from outfile<br />&lt;May 9, 2018 9:15:30 AM CEST&gt; &lt;Info&gt; &lt;Management&gt; &lt;BEA-141107&gt; &lt;Version: WebLogic Server 12.2.1.3.0 Thu Aug 17 13:39:49 PDT 2017 1882952<br />OPatch Patches:<br />27342434;21933966;Thu Apr 26 16:14:03 CEST 2018;WLS PATCH SET UPDATE 12.2.1.3.180417<br />26355633;21447583;Thu Aug 31 14:26:20 CEST 2017;One-off<br />26287183;21447582;Thu Aug 31 14:26:10 CEST 2017;One-off<br />26261906;21344506;Thu Aug 31 14:25:53 CEST 2017;One-off<br />26051289;21455037;Thu Aug 31 14:25:48 CEST 2017;One-off&gt;<br /></pre><br />The provided data in the outfile is in following format:<br /><br /><ol><li>Patch Number</li><li>Unique Patch ID</li><li>On which date and time the Patch was applied</li><li>Patch description</li></ol>To crosscheck just run an opatch lsinventory in order to validate the provided data in your outfile:<br /><br /><pre class="brush:bash"> cd $ORACLE_HOME/OPatch<br />./opatch lsinventory | grep applied<br /><br />Patch 27342434 : applied 
on Thu Apr 26 16:14:03 CEST 2018<br />Patch 26355633 : applied on Thu Aug 31 14:26:20 CEST 2017<br />Patch 26287183 : applied on Thu Aug 31 14:26:10 CEST 2017<br />Patch 26261906 : applied on Thu Aug 31 14:25:53 CEST 2017<br />Patch 26051289 : applied on Thu Aug 31 14:25:48 CEST 2017<br /></pre><br />As you can see, the provided patch data in the outfile of your WebLogic Server is exactly the same as in the opatch utility.<br /><br /><br /> Dirk Nachbar tag:blogger.com,1999:blog-4344684978957885806.post-6639009739227708730 Wed May 09 2018 03:43:00 GMT-0400 (EDT) Deploying a Spring Boot Application on a Pivotal Container Service (PKS) Cluster on GCP http://feedproxy.google.com/~r/blogspot/PEqWE/~3/vq9bQhOh9ic/deploying-spring-boot-application-on.html I have been "<b>cf pushing</b>" for as long as I can remember so with Pivotal Container Service (PKS) let's walk through the process of deploying a basic Spring Boot Application with a PKS cluster running on GCP.<br /><br /><b>Few assumptions:</b><br /><br />1. PKS is already installed as shown by my Operations Manager UI below<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-N3YyVq3kl6I/WugsjMFhksI/AAAAAAAABHo/pFPS2Hccce4JDmG8pXGNi7wVxZG2he1nwCLcBGAs/s1600/Screen%2BShot%2B2018-05-01%2Bat%2B6.58.37%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="604" data-original-width="1600" height="120" src="https://4.bp.blogspot.com/-N3YyVq3kl6I/WugsjMFhksI/AAAAAAAABHo/pFPS2Hccce4JDmG8pXGNi7wVxZG2he1nwCLcBGAs/s320/Screen%2BShot%2B2018-05-01%2Bat%2B6.58.37%2BPM.png" width="320" /></a></div><br /><br />2. 
A PKS Cluster already exists as shown by the command below<br /><br /><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ pks list-clusters</span><br /><span style="color: #3d85c6;"><br /></span><span style="color: #3d85c6;">Name&nbsp; &nbsp; &nbsp; &nbsp; Plan Name&nbsp; UUID&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Status&nbsp; &nbsp; &nbsp;Action</span><br /><span style="color: #3d85c6;">my-cluster&nbsp; small&nbsp; &nbsp; &nbsp; 1230fafb-b5a5-4f9f-9327-55f0b8254906&nbsp; succeeded&nbsp; CREATE</span><br /><br /><b>Example:</b><br /><br />We will be using this Spring Boot application at the following GitHub URL<br /><br />&nbsp;&nbsp;<a href="https://github.com/papicella/springboot-actuator-2-demo">https://github.com/papicella/springboot-actuator-2-demo</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-_JH949fJzGw/WuguPWt4QvI/AAAAAAAABH0/WAFTwwDpO5I7GYhQwsQBTqyCawmHCx7OwCLcBGAs/s1600/Screen%2BShot%2B2018-05-01%2Bat%2B7.05.43%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="966" data-original-width="1446" height="213" src="https://3.bp.blogspot.com/-_JH949fJzGw/WuguPWt4QvI/AAAAAAAABH0/WAFTwwDpO5I7GYhQwsQBTqyCawmHCx7OwCLcBGAs/s320/Screen%2BShot%2B2018-05-01%2Bat%2B7.05.43%2BPM.png" width="320" /></a></div><br />1. 
In this example my Spring Boot application has what is required within my Maven pom.xml file to allow me to create a Docker image as shown below<br /><pre class="brush: xml"> <br />&lt;!-- tag::plugin[] --&gt;<br /> &lt;plugin&gt;<br /> &lt;groupId&gt;com.spotify&lt;/groupId&gt;<br /> &lt;artifactId&gt;dockerfile-maven-plugin&lt;/artifactId&gt;<br /> &lt;version&gt;1.3.6&lt;/version&gt;<br /> &lt;configuration&gt;<br /> &lt;repository&gt;${docker.image.prefix}/${project.artifactId}&lt;/repository&gt;<br /> &lt;buildArgs&gt;<br /> &lt;JAR_FILE&gt;target/${project.build.finalName}.jar&lt;/JAR_FILE&gt;<br /> &lt;/buildArgs&gt;<br /> &lt;/configuration&gt;<br /> &lt;/plugin&gt;<br /> &lt;!-- end::plugin[] --&gt;<br /><br /> &lt;plugin&gt;<br /> &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;<br /> &lt;artifactId&gt;maven-dependency-plugin&lt;/artifactId&gt;<br /> &lt;executions&gt;<br /> &lt;execution&gt;<br /> &lt;id&gt;unpack&lt;/id&gt;<br /> &lt;phase&gt;package&lt;/phase&gt;<br /> &lt;goals&gt;<br /> &lt;goal&gt;unpack&lt;/goal&gt;<br /> &lt;/goals&gt;<br /> &lt;configuration&gt;<br /> &lt;artifactItems&gt;<br /> &lt;artifactItem&gt;<br /> &lt;groupId&gt;${project.groupId}&lt;/groupId&gt;<br /> &lt;artifactId&gt;${project.artifactId}&lt;/artifactId&gt;<br /> &lt;version&gt;${project.version}&lt;/version&gt;<br /> &lt;/artifactItem&gt;<br /> &lt;/artifactItems&gt;<br /> &lt;/configuration&gt;<br /> &lt;/execution&gt;<br /> &lt;/executions&gt;<br />&lt;/plugin&gt;<br /></pre><br />2.
Once a docker image was built I then pushed that to Docker Hub as shown below<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-zx2YiN_wznQ/WvKB5wqsNqI/AAAAAAAABIU/0nUxo9yQC9QkQ3nrqWYVxXY0zmBT0MPyACLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B2.04.52%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1039" data-original-width="1600" height="206" src="https://3.bp.blogspot.com/-zx2YiN_wznQ/WvKB5wqsNqI/AAAAAAAABIU/0nUxo9yQC9QkQ3nrqWYVxXY0zmBT0MPyACLcBGAs/s320/Screen%2BShot%2B2018-05-09%2Bat%2B2.04.52%2BPM.png" width="320" /></a></div><br /><br />3. Now we will need a PKS cluster as shown below before we can continue<br /><br /><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ <b>pks cluster my-cluster</b></span><br /><span style="color: #3d85c6;"><br /></span><span style="color: #3d85c6;">Name:&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;my-cluster</span><br /><span style="color: #3d85c6;">Plan Name:&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; small</span><br /><span style="color: #3d85c6;">UUID:&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1230fafb-b5a5-4f9f-9327-55f0b8254906</span><br /><span style="color: #3d85c6;">Last Action:&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; CREATE</span><br /><span style="color: #3d85c6;">Last Action State:&nbsp; &nbsp; &nbsp; &nbsp; succeeded</span><br /><span style="color: #3d85c6;">Last Action Description:&nbsp; Instance provisioning completed</span><br /><span style="color: #3d85c6;">Kubernetes Master Host:&nbsp; &nbsp;cluster1.pks.pas-apples.online</span><br /><span style="color: #3d85c6;">Kubernetes Master Port:&nbsp; &nbsp;8443</span><br /><span style="color: #3d85c6;">Worker Instances:&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;3</span><br /><span style="color: #3d85c6;">Kubernetes Master IP(s):&nbsp; 192.168.20.10</span><br /><br />4. 
Now we want to wire "<b>kubectl</b>" using a command as follows<br /><br /><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ <b>pks get-credentials my-cluster</b></span><br /><span style="color: #3d85c6;"><br /></span><span style="color: #3d85c6;">Fetching credentials for cluster my-cluster.</span><br /><span style="color: #3d85c6;">Context set for cluster my-cluster.</span><br /><span style="color: #3d85c6;"><br /></span><span style="color: #3d85c6;">You can now switch between clusters by using:</span><br /><span style="color: #3d85c6;">$kubectl config use-context <cluster-name></cluster-name></span><br /><br /><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ <b>kubectl cluster-info</b></span><br /><span style="color: #3d85c6;">Kubernetes master is running at https://cluster1.pks.pas-apples.online:8443</span><br /><span style="color: #3d85c6;">Heapster is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/heapster/proxy</span><br /><span style="color: #3d85c6;">KubeDNS is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/kube-dns/proxy</span><br /><span style="color: #3d85c6;">monitoring-influxdb is running at https://cluster1.pks.pas-apples.online:8443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy</span><br /><span style="color: #3d85c6;"><br /></span><span style="color: #3d85c6;">To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.</span><br /><br />5. Now we are ready to deploy a Spring Boot workload to our cluster. 
To do that lets download the YAML file below<br /><br /><a href="https://github.com/papicella/springboot-actuator-2-demo/blob/master/lb-withspringboot.yml">https://github.com/papicella/springboot-actuator-2-demo/blob/master/lb-withspringboot.yml</a><br /><br />Once downloaded create a deployment as follows<br /><br /><span style="color: #e69138;">$ kubectl create -f lb-withspringboot.yml</span><br /><br /><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ <b>kubectl create -f lb-withspringboot.yml</b></span><br /><span style="color: #3d85c6;">service "spring-boot-service" created</span><br /><span style="color: #3d85c6;">deployment "spring-boot-deployment" created</span><br /><br />6. Now let’s verify our deployment using some kubectl commands as follows<br /><br />$ kubectl get deployment spring-boot-deployment<br />$ kubectl get pods<br />$ kubectl get svc<br /><div><br /></div><div><div><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ <b>kubectl get deployment spring-boot-deployment</b></span></div><div><span style="color: #3d85c6;">NAME&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;DESIRED&nbsp; &nbsp;CURRENT&nbsp; &nbsp;UP-TO-DATE&nbsp; &nbsp;AVAILABLE&nbsp; &nbsp;AGE</span></div><div><span style="color: #3d85c6;">spring-boot-deployment&nbsp; &nbsp;1&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 1&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1m</span></div><div><span style="color: #3d85c6;"><br /></span></div><div><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ <b>kubectl get pods</b></span></div><div><span style="color: #3d85c6;">NAME&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;READY&nbsp; &nbsp; &nbsp;STATUS&nbsp; &nbsp; RESTARTS&nbsp; &nbsp;AGE</span></div><div><span style="color: #3d85c6;">spring-boot-deployment-ccd947455-6clwv&nbsp; &nbsp;1/1&nbsp; &nbsp; 
&nbsp; &nbsp;Running&nbsp; &nbsp;0&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 2m</span></div><div><span style="color: #3d85c6;"><br /></span></div><div><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ <b>kubectl get svc</b></span></div><div><span style="color: #3d85c6;">NAME&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; TYPE&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;CLUSTER-IP&nbsp; &nbsp; &nbsp; &nbsp;EXTERNAL-IP&nbsp; &nbsp; &nbsp;PORT(S)&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; AGE</span></div><div><span style="color: #3d85c6;">kubernetes&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; ClusterIP&nbsp; &nbsp; &nbsp; 10.100.200.1&nbsp; &nbsp; &nbsp;&lt;none&gt;&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 443/TCP&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 23m</span></div><div><span style="color: #3d85c6;">spring-boot-service&nbsp; &nbsp;LoadBalancer&nbsp; &nbsp;10.100.200.137&nbsp; &nbsp;35.197.187.43&nbsp; &nbsp;8080:31408/TCP&nbsp; &nbsp;2m</span></div></div><div><br /></div><div>7.&nbsp;Using the external IP address that GCP exposed for us, we can access our Spring Boot application on port 8080, as shown below.
In this example</div><div><br /></div><div><span style="color: #e69138;">http://35.197.187.43:8080/</span></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-Rwyu2dMTVII/WvKGmD_g8NI/AAAAAAAABIs/IBl-qcOpTNg4LapCMHRtdvMJONHZWIgvQCLcBGAs/s1600/Screen%2BShot%2B2018-05-09%2Bat%2B2.25.52%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="505" data-original-width="1476" height="108" src="https://2.bp.blogspot.com/-Rwyu2dMTVII/WvKGmD_g8NI/AAAAAAAABIs/IBl-qcOpTNg4LapCMHRtdvMJONHZWIgvQCLcBGAs/s320/Screen%2BShot%2B2018-05-09%2Bat%2B2.25.52%2BPM.png" width="320" /></a></div><br /><br /><b>RESTful End Point</b><br /><br /><span style="color: #3d85c6;">pasapicella@pas-macbook:~$ http http://35.197.187.43:8080/employees/1</span><br /><span style="color: #3d85c6;">HTTP/1.1 200</span><br /><span style="color: #3d85c6;">Content-Type: application/hal+json;charset=UTF-8</span><br /><span style="color: #3d85c6;">Date: Wed, 09 May 2018 05:26:19 GMT</span><br /><span style="color: #3d85c6;">Transfer-Encoding: chunked</span><br /><span style="color: #3d85c6;"><br /></span><span style="color: #3d85c6;">{</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; "_links": {</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; &nbsp; &nbsp; "employee": {</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; "href": "http://35.197.187.43:8080/employees/1"</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; &nbsp; &nbsp; },</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; &nbsp; &nbsp; "self": {</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; "href": "http://35.197.187.43:8080/employees/1"</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; &nbsp; &nbsp; }</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; },</span><br /><span style="color: #3d85c6;">&nbsp; &nbsp; "name": "pas"</span><br 
/><span style="color: #3d85c6;">}</span><br /><br /><b>More Information</b><br /><br />Using PKS<br /><a href="https://docs.pivotal.io/runtimes/pks/1-0/using.html">https://docs.pivotal.io/runtimes/pks/1-0/using.html</a><br /><br /><div class="blogger-post-footer">http://feeds.feedburner.com/TheBlasFromPas</div> Pas Apicella tag:blogger.com,1999:blog-6527688743456205256.post-5952934954598005959 Wed May 09 2018 01:31:00 GMT-0400 (EDT) OCM 12C Journey http://oracle-help.com/articles/ocm-12c-journey/ <p>I wrote two <strong>OCM exams</strong> back to back. I wanted to take the <strong>2-day OCM 12C</strong> exam, but there was no schedule in India for the <strong>2-day OCM 12C</strong> exam, so first I took the 2-day OCM 11G exam on 9th &amp; 10th April, and then I took the <strong>OCM 12C Upgrade</strong> exam on 23rd April in India. After 18 long days, the result was out: I had officially passed <strong>12C OCM</strong>, <span style="color: #0000ff;"><a style="color: #0000ff;" href="https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&amp;get_params=p_exam_id:12COCMU"><span style="text-decoration: underline;"><strong>Oracle Database 12c Certified Master</strong></span></a></span>. I had been preparing for this exam for more than 15 months, but the real preparation started almost 6 years back, when I started my DBA career.</p> <p>The exam contains <strong>4 skill sets</strong>, which you have to complete in <strong>1 day</strong>.
To earn this certification you need to get overall <strong>59.17%</strong> marks.</p> <ul> <li><strong>Minimum Skill-set wise Passing Scores:</strong> <ul> <li><strong>General Database and Network Administration, and Backup Strategy: 55.56%</strong></li> <li><strong>Data and Performance Management: 31.74%</strong></li> <li><strong>Data Guard: 43.39% </strong></li> <li><strong>Grid Infrastructure and RAC: 44.28%</strong></li> </ul> </li> </ul> <p>This was the result of the journey: <strong>Oracle Certified Master 12c</strong></p> <p><strong><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg"><img data-attachment-id="4423" data-permalink="http://oracle-help.com/articles/ocm-12c-journey/attachment/ocm12cert/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?fit=758%2C579" data-orig-size="758,579" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1525823876&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="ocm12cert" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?fit=300%2C229" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?fit=758%2C579" class="size-full wp-image-4423 aligncenter" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?resize=758%2C579" alt="" width="758" height="579" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?w=758 758w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?resize=300%2C229 300w, 
https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?resize=60%2C46 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12cert.jpg?resize=150%2C115 150w" sizes="(max-width: 758px) 100vw, 758px" data-recalc-dims="1" /></a></strong></p> <p>After earning this certificate, I achieved all level of certification in Oracle.</p> <ul> <li>Oracle Certified Master <strong>OCM</strong></li> <li><strong>OCM Cloud &amp; OCM MAA</strong></li> <li>Oracle Certified Professional <strong>OCP</strong></li> <li>Oracle Certified Expert <strong>OCE</strong></li> <li>Oracle Certified Specialist <strong>OCS</strong></li> <li>Oracle Certified Associate <strong>OCA</strong></li> </ul> <p><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg"><img data-attachment-id="4431" data-permalink="http://oracle-help.com/articles/ocm-12c-journey/attachment/logo-2/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?fit=686%2C453" data-orig-size="686,453" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1525825759&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="logo" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?fit=300%2C198" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?fit=686%2C453" class="alignnone size-full wp-image-4431" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?resize=686%2C453" alt="" width="686" height="453" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?w=686 686w, 
https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?resize=300%2C198 300w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?resize=60%2C40 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/logo.jpg?resize=150%2C99 150w" sizes="(max-width: 686px) 100vw, 686px" data-recalc-dims="1" /></a></p> <p><strong>To earn OCM credentials, we need to complete following steps</strong></p> <p>Step 1: Candidate must be OCM 11g</p> <p>Step 2: Candidate must pass the OCM exam</p> <p>Step 3:  Submit Fulfillment Kit Request</p> <p><strong>As per Oracle OCM website below is the exam environment for 12c OCM</strong></p> <ul> <li>Oracle Linux Release 6.5 64 bit</li> <li>Mozilla Browser, Text (emacs, gedit) and vi editors</li> <li>Shell environment: bash, csh</li> <li>Use either CLI or GUI environment and tools when available</li> <li>Oracle Database 12<em>c</em> Enterprise Edition Release 12.1.0.2.0  64 bit</li> <li>Oracle Grid Infrastructure 12<em>c </em>Release 1 (12.1.0.2)</li> <li>Oracle Enterprise Manager Cloud Control 12<em>c</em> Rel 4</li> </ul> <p><strong>How to prepare for the exam?</strong></p> <ul> <li><strong>Oracle Documentation</strong>-&gt; Download the offline documentation for Oracle Database 12cR1 and Enterprise Manager 12.1 <ul> <li><strong><a href="http://download.oracle.com/docs/cds/E24628_01.zip">Oracle Enterprise Manager 12.1</a></strong></li> <li><strong><a href="http://docs.oracle.com/cds/E11882_01.zip">Oracle Database, 12</a><a href="http://docs.oracle.com/cds/E11882_01.zip"><em>c</em></a><a href="http://download.oracle.com/docs/cds/database/121.zip">Release 1 (12.1)</a> </strong></li> </ul> </li> <li><a href="http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=609&amp;get_params=dc:D94327"><strong>Oracle Database 12c: OCM Exam Preparation Workshop Ed 1</strong></a></li> <li><strong><a href="http://www.dbarj.com.br/en/ocm-12c-preparation/">Blog for OCM 12c Preparation</a> from Rodrigo 
Jorge</strong></li> </ul> <p><strong>How to register for the exam?</strong></p> <p>If you want to enroll for the exam, please use the link below.</p> <p><strong>Exam Number:</strong><span class="examId">12COCMU</span></p> <p><strong>Exam Title:</strong> <span class="examTitleH2">Oracle Database 12c Certified Master Upgrade Exam</span></p> <p><a href="https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&amp;get_params=p_exam_id:12COCMU">https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&amp;get_params=p_exam_id:12COCMU</a></p> <p>Note: This exam is not conducted by Pearson, so we need to check the OCM schedule from the link below.</p> <p><a href="http://education.oracle.co.uk/html/oracle/28US/SCHED_SP_OCM.htm">View A Worldwide OCM Schedule</a></p> <p>Below are the exam details related to scoring and pricing (which may vary by country):</p> <p><strong><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg"><img data-attachment-id="4422" data-permalink="http://oracle-help.com/articles/ocm-12c-journey/attachment/ocm12c/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?fit=453%2C413" data-orig-size="453,413" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1525823744&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="ocm12c" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?fit=300%2C274" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?fit=453%2C413" class="size-full wp-image-4422 aligncenter" 
src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?resize=453%2C413" alt="" width="453" height="413" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?w=453 453w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?resize=300%2C274 300w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?resize=60%2C55 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/05/ocm12c.jpg?resize=150%2C137 150w" sizes="(max-width: 453px) 100vw, 453px" data-recalc-dims="1" /></a></strong></p> <p><strong>Special Thanks</strong></p> <p><strong><a href="http://www.linkedin.com/in/SirDBaaSJoelPerez">Joel Perez</a></strong> is an <strong>Oracle OCM</strong> and <strong>ACED</strong> who gave me the opportunity to write with him and motivated me to learn new things. I’m very thankful to him, as this was the kick-off for my studies.</p> <p>I am also thankful to the <strong>OracleHelp Team</strong>.</p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, send us an invitation or follow us:</span></p> <p><strong>Telegram Channel: <a href="https://t.me/helporacle">https://t.me/helporacle</a></strong></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong></span></span><span class="s1"><span class="s2"> LinkedIn: </span></span><span class="s1"><span class="s2"><a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong></span><span class="s1"> LinkedIn: </span><span class="s1"><strong><a href="http://www.linkedin.com/in/SirDBaaSJoelPerez">Joel Perez’s Profile</a></strong></span></p> <p>LinkedIn Group: <strong><em><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud 
DBAAS</a></em></strong></p> <p>Facebook Page: <strong><em><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></em></strong></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/ocm-12c-journey/">OCM 12C Journey</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=4421 Tue May 08 2018 14:29:41 GMT-0400 (EDT) Announcing Oracle SQL Developer Web! https://www.thatjeffsmith.com/archive/2018/05/announcing-oracle-sql-developer-web/ <p>We are now live in the Oracle Cloud with Oracle SQL Developer Web.</p> <p>Wait, what&#8217;s SQL Developer Web?</p> <p>It&#8217;s a browser based version of Oracle SQL Developer powered by Oracle REST Data Services.</p> <p>If you are a Database Cloud Service customer in the Oracle Cloud, it&#8217;s rolling out now to those subscribers. </p> <p>If you&#8217;d like to know more and see a quick demo, I made you a video <img src="https://s.w.org/images/core/emoji/2.4/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p><iframe width="720" height="405" src="https://www.youtube.com/embed/asHlUW-Laxk" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p> thatjeffsmith https://www.thatjeffsmith.com/?p=6628 Tue May 08 2018 10:42:44 GMT-0400 (EDT) 20 Indexes https://jonathanlewis.wordpress.com/2018/05/08/20-indexes/ <p>If your system had to do a lot of distributed queries there&#8217;s a limit on indexes that might affect performance: when deriving an execution plan for a distributed query the optimizer will consider a maximum of twenty indexes on each remote table. 
If you have any tables with a ridiculous number of indexes (various 3rd party accounting and CRM systems spring to mind) and if you drop and recreate indexes on those tables in the wrong order then execution plans may change for the simple reason that the optimizer is considering a different subset of the available indexes.</p> <p>Although the limit is stated in the manuals (a few lines into a section on <a href="https://docs.oracle.com/cd/B19306_01/server.102/b14231/ds_admin.htm#i1008788"><em><strong>managing statement transparency</strong></em></a>) there is no indication about which 20 indexes the optimizer is likely to choose &#8211; a couple of experiments, with tracing enabled and shared pool flushes, give a fairly strong indication that it&#8217;s the last 20 indexes created (or, to be more explicit, the ones with the 20 highest <em><strong>object_id</strong></em> values).</p> <p>Here&#8217;s a little code to help demonstrate the point &#8211; first just the table and index creation</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: indexes_20.sql rem Author: Jonathan Lewis rem Dated: Apr 2008 rem rem Last tested rem 12.2.0.1 rem create table t1 as with generator as ( select --+ materialize rownum id from all_objects where rownum &lt;= 3000 -- &gt; comment to avoid WordPress format issue ) select mod(rownum,trunc(5000/1)) n01, mod(rownum,trunc(5000/2)) n02, mod(rownum,trunc(5000/3)) n03, mod(rownum,trunc(5000/4)) n04, mod(rownum,trunc(5000/5)) n05, mod(rownum,trunc(5000/6)) n06, mod(rownum,trunc(5000/7)) n07, mod(rownum,trunc(5000/8)) n08, mod(rownum,trunc(5000/9)) n09, mod(rownum,trunc(5000/10)) n10, mod(rownum,trunc(5000/11)) n11, mod(rownum,trunc(5000/12)) n12, mod(rownum,trunc(5000/13)) n13, mod(rownum,trunc(5000/14)) n14, mod(rownum,trunc(5000/15)) n15, mod(rownum,trunc(5000/16)) n16, mod(rownum,trunc(5000/17)) n17, mod(rownum,trunc(5000/18)) n18, mod(rownum,trunc(5000/19)) n19, mod(rownum,trunc(5000/20)) n20, 
mod(rownum,trunc(5000/21)) n21, mod(rownum,trunc(5000/22)) n22, mod(rownum,trunc(5000/23)) n23, mod(rownum,trunc(5000/24)) n24, rownum id, rpad('x',40) padding from generator v1, generator v2 where rownum &lt;= 1e5 -- &gt; comment to avoid WordPress format issue ; -- -- Typo, I missed the semi-colon at the end of this line. -- See comment 3. -- alter table t1 add constraint t1_pk primary key(id) create table t2 as with generator as ( select --+ materialize rownum id from all_objects where rownum &lt;= 3000 -- &gt; comment to avoid WordPress format issue ) select mod(rownum,trunc(5000/1)) n01, mod(rownum,trunc(5000/2)) n02, mod(rownum,trunc(5000/3)) n03, mod(rownum,trunc(5000/4)) n04, mod(rownum,trunc(5000/5)) n05, mod(rownum,trunc(5000/6)) n06, mod(rownum,trunc(5000/7)) n07, mod(rownum,trunc(5000/8)) n08, mod(rownum,trunc(5000/9)) n09, mod(rownum,trunc(5000/10)) n10, mod(rownum,trunc(5000/11)) n11, mod(rownum,trunc(5000/12)) n12, mod(rownum,trunc(5000/13)) n13, mod(rownum,trunc(5000/14)) n14, mod(rownum,trunc(5000/15)) n15, mod(rownum,trunc(5000/16)) n16, mod(rownum,trunc(5000/17)) n17, mod(rownum,trunc(5000/18)) n18, mod(rownum,trunc(5000/19)) n19, mod(rownum,trunc(5000/20)) n20, mod(rownum,trunc(5000/21)) n21, mod(rownum,trunc(5000/22)) n22, mod(rownum,trunc(5000/23)) n23, mod(rownum,trunc(5000/24)) n24, rownum id, rpad('x',40) padding from generator v1, generator v2 where rownum &lt;= 1e5 -- &gt; comment to avoid WordPress format issue ; create index t2_a21 on t2(n21); create index t2_a22 on t2(n22); create index t2_a23 on t2(n23); create index t2_a24 on t2(n24); create index t2_z01 on t2(n01); create index t2_z02 on t2(n02); create index t2_z03 on t2(n03); create index t2_z04 on t2(n04); create index t2_z05 on t2(n05); create index t2_z06 on t2(n06); create index t2_z07 on t2(n07); create index t2_z08 on t2(n08); create index t2_z09 on t2(n09); create index t2_z10 on t2(n10); create index t2_i11 on t2(n11); create index t2_i12 on t2(n12); create index t2_i13 on 
t2(n13); create index t2_i14 on t2(n14); create index t2_i15 on t2(n15); create index t2_i16 on t2(n16); create index t2_i17 on t2(n17); create index t2_i18 on t2(n18); create index t2_i19 on t2(n19); create index t2_i20 on t2(n20); alter index t2_a21 rebuild; alter index t2_a22 rebuild; alter index t2_a23 rebuild; alter index t2_a24 rebuild; begin dbms_stats.gather_table_stats( ownname =&gt; user, tabname =&gt;'t1', method_opt =&gt; 'for all columns size 1', cascade =&gt; true ); dbms_stats.gather_table_stats( ownname =&gt; user, tabname =&gt;'t2', method_opt =&gt; 'for all columns size 1', cascade =&gt; true ); end; / </pre> <p>I&#8217;m going to use a loopback database link to join &#8220;local&#8221; table <em><strong>t1</strong></em> to &#8220;remote&#8221; table <em><strong>t2</strong></em> on all 24 of the <em><strong>nXX</strong></em> columns. I&#8217;ve created indexes on all the columns, messing around with index names, order of creation, and rebuilding, to cover possible selection criteria such as alphabetical order, ordering by <em><strong>data_object_id</strong></em> (rather than <em><strong>object_id</strong></em>), even ordering by name of indexed columns(!).</p> <p>Now the code to run a test:</p> <pre class="brush: plain; title: ; notranslate"> define m_target=orcl@loopback alter session set events '10053 trace name context forever'; set serveroutput off select t1.id, t2.id, t2.padding from t1 t1, t2@&amp;m_target t2 where t1.id = 99 and t2.n01 = t1.n01 and t2.n02 = t1.n02 and t2.n03 = t1.n03 and t2.n04 = t1.n04 and t2.n05 = t1.n05 and t2.n06 = t1.n06 and t2.n07 = t1.n07 and t2.n08 = t1.n08 and t2.n09 = t1.n09 and t2.n10 = t1.n10 /* */ and t2.n11 = t1.n11 and t2.n12 = t1.n12 and t2.n13 = t1.n13 and t2.n14 = t1.n14 and t2.n15 = t1.n15 and t2.n16 = t1.n16 and t2.n17 = t1.n17 and t2.n18 = t1.n18 and t2.n19 = t1.n19 and t2.n20 = t1.n20 /* */ and t2.n21 = t1.n21 and t2.n22 = t1.n22 and t2.n23 = t1.n23 and t2.n24 = t1.n24 ; select * from 
table(dbms_xplan.display_cursor(null,null,'outline')); alter session set events '10053 trace name context off'; </pre> <p>I&#8217;ve used a substitution variable for the name of the database link &#8211; it&#8217;s a convenience I have with all my distributed tests, a list of possible defines at the top of the script depending on which database I happen to be using at the time &#8211; then enabled the optimizer (10053) trace, set serveroutput off so that I can pull the execution plan from memory most easily, then executed the query.</p> <p>Here&#8217;s the execution plan &#8211; including the Remote section and Outline.</p> <pre class="brush: plain; title: ; notranslate"> ------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT| ------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 270 (100)| | | | | 1 | NESTED LOOPS | | 1 | 243 | 270 (6)| 00:00:01 | | | |* 2 | TABLE ACCESS FULL| T1 | 1 | 101 | 268 (6)| 00:00:01 | | | | 3 | REMOTE | T2 | 1 | 142 | 2 (0)| 00:00:01 | ORCL@~ | R-&gt;S | ------------------------------------------------------------------------------------------- Outline Data ------------- /*+ BEGIN_OUTLINE_DATA IGNORE_OPTIM_EMBEDDED_HINTS OPTIMIZER_FEATURES_ENABLE('12.2.0.1') DB_VERSION('12.2.0.1') ALL_ROWS OUTLINE_LEAF(@&quot;SEL$1&quot;) FULL(@&quot;SEL$1&quot; &quot;T1&quot;@&quot;SEL$1&quot;) FULL(@&quot;SEL$1&quot; &quot;T2&quot;@&quot;SEL$1&quot;) LEADING(@&quot;SEL$1&quot; &quot;T1&quot;@&quot;SEL$1&quot; &quot;T2&quot;@&quot;SEL$1&quot;) USE_NL(@&quot;SEL$1&quot; &quot;T2&quot;@&quot;SEL$1&quot;) END_OUTLINE_DATA */ Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter(&quot;T1&quot;.&quot;ID&quot;=99) Remote SQL Information (identified by operation id): ---------------------------------------------------- 3 - 
SELECT &quot;N01&quot;,&quot;N02&quot;,&quot;N03&quot;,&quot;N04&quot;,&quot;N05&quot;,&quot;N06&quot;,&quot;N07&quot;,&quot;N08&quot;,&quot;N09&quot;,&quot;N10&quot;,&quot;N11&quot;,&quot;N1 2&quot;,&quot;N13&quot;,&quot;N14&quot;,&quot;N15&quot;,&quot;N16&quot;,&quot;N17&quot;,&quot;N18&quot;,&quot;N19&quot;,&quot;N20&quot;,&quot;N21&quot;,&quot;N22&quot;,&quot;N23&quot;,&quot;N24&quot;,&quot;ID&quot;,&quot;PA DDING&quot; FROM &quot;T2&quot; &quot;T2&quot; WHERE &quot;N01&quot;=:1 AND &quot;N02&quot;=:2 AND &quot;N03&quot;=:3 AND &quot;N04&quot;=:4 AND &quot;N05&quot;=:5 AND &quot;N06&quot;=:6 AND &quot;N07&quot;=:7 AND &quot;N08&quot;=:8 AND &quot;N09&quot;=:9 AND &quot;N10&quot;=:10 AND &quot;N11&quot;=:11 AND &quot;N12&quot;=:12 AND &quot;N13&quot;=:13 AND &quot;N14&quot;=:14 AND &quot;N15&quot;=:15 AND &quot;N16&quot;=:16 AND &quot;N17&quot;=:17 AND &quot;N18&quot;=:18 AND &quot;N19&quot;=:19 AND &quot;N20&quot;=:20 AND &quot;N21&quot;=:21 AND &quot;N22&quot;=:22 AND &quot;N23&quot;=:23 AND &quot;N24&quot;=:24 (accessing 'ORCL@LOOPBACK' ) </pre> <p>There&#8217;s a little oddity with the plan &#8211; specifically in the Outline: there&#8217;s a <em>&#8220;full(t2)&#8221;</em> hint which is clearly inappropriate and isn&#8217;t consistent with the cost of 2 for the REMOTE operation reported in the body of the plan. Fortunately the SQL forwarded to the &#8220;remote&#8221; database doesn&#8217;t include this hint and (you&#8217;ll have to take my word for it) used an indexed access path into the table.</p> <p>Where, though, is the indication that Oracle considered only 20 indexes? 
It&#8217;s in the 10053 trace file under the <em>&#8220;Base Statistical Information&#8221;</em> section in the subsection headed <em>&#8220;Index Stats&#8221;</em>:</p> <pre class="brush: plain; title: ; notranslate"> Index Stats:: Index: 0 Col#: 20 (NOT ANALYZED) LVLS: 1 #LB: 204 #DK: 250 LB/K: 1.00 DB/K: 400.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 19 (NOT ANALYZED) LVLS: 1 #LB: 204 #DK: 263 LB/K: 1.00 DB/K: 380.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 18 (NOT ANALYZED) LVLS: 1 #LB: 205 #DK: 277 LB/K: 1.00 DB/K: 361.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 17 (NOT ANALYZED) LVLS: 1 #LB: 205 #DK: 294 LB/K: 1.00 DB/K: 340.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 16 (NOT ANALYZED) LVLS: 1 #LB: 205 #DK: 312 LB/K: 1.00 DB/K: 320.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 15 (NOT ANALYZED) LVLS: 1 #LB: 205 #DK: 333 LB/K: 1.00 DB/K: 300.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 14 (NOT ANALYZED) LVLS: 1 #LB: 206 #DK: 357 LB/K: 1.00 DB/K: 280.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 13 (NOT ANALYZED) LVLS: 1 #LB: 206 #DK: 384 LB/K: 1.00 DB/K: 260.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 12 (NOT ANALYZED) LVLS: 1 #LB: 206 #DK: 416 LB/K: 1.00 DB/K: 240.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 11 (NOT ANALYZED) LVLS: 1 #LB: 206 #DK: 454 LB/K: 1.00 DB/K: 220.00 CLUF: 
2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 10 (NOT ANALYZED) LVLS: 1 #LB: 207 #DK: 500 LB/K: 1.00 DB/K: 200.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 9 (NOT ANALYZED) LVLS: 1 #LB: 207 #DK: 555 LB/K: 1.00 DB/K: 180.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 8 (NOT ANALYZED) LVLS: 1 #LB: 207 #DK: 625 LB/K: 1.00 DB/K: 160.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 7 (NOT ANALYZED) LVLS: 1 #LB: 208 #DK: 714 LB/K: 1.00 DB/K: 140.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 6 (NOT ANALYZED) LVLS: 1 #LB: 208 #DK: 833 LB/K: 1.00 DB/K: 120.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 5 (NOT ANALYZED) LVLS: 1 #LB: 208 #DK: 1000 LB/K: 1.00 DB/K: 100.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 4 (NOT ANALYZED) LVLS: 1 #LB: 208 #DK: 1250 LB/K: 1.00 DB/K: 80.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 3 (NOT ANALYZED) LVLS: 1 #LB: 209 #DK: 1666 LB/K: 1.00 DB/K: 60.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 2 (NOT ANALYZED) LVLS: 1 #LB: 209 #DK: 2500 LB/K: 1.00 DB/K: 40.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 Index: 0 Col#: 1 (NOT ANALYZED) LVLS: 1 #LB: 209 #DK: 5000 LB/K: 1.00 DB/K: 20.00 CLUF: 2002.00 NRW: 0.00 SSZ: 0.00 LGR: 0.00 CBK: 0.00 GQL: 0.00 CHR: 0.00 KQDFLG: 0 BSZ: 0 KKEISFLG: 0 </pre> <p>We have 20 indexes listed, and while 
they&#8217;re all called <em>&#8220;Index 0&#8221;</em> (and reported as <em>&#8220;Not Analyzed&#8221;</em>) we can see from their column definitions that they are (in reverse order) the indexes on columns <em><strong>n01</strong></em> through to <em><strong>n20</strong></em> &#8211; i.e. the last 20 indexes created. The optimizer has created its plan based only on its knowledge of these indexes.</p> <p>We might ask whether this matters or not &#8211; after all when the remote SQL gets to the remote database the remote optimizer is going to (re-)optimize it anyway and do the best it can with it, so at run-time Oracle could still end up using remote indexes that the local optimizer didn&#8217;t know about. So let&#8217;s get nasty and give the local optimizer a problem:</p> <pre class="brush: plain; title: ; notranslate"> create index t2_id on t2(id); select t1.id, t2.id, t2.padding from t1 t1, t2@&amp;m_target t2 where t1.id = 99 and t2.n01 = t1.n01 ; </pre> <p>I&#8217;ve created one more index on <em><strong>t2</strong></em>, which means the local optimizer is going to &#8220;forget&#8221; about the index that was the previous 20th index on the most recently created list for <em><strong>t2</strong></em>. That&#8217;s the index on <em><strong>(n01)</strong></em>, which would have been a very good index for this query. 
If this query were to run locally the optimizer would do a nested loop from <em><strong>t1</strong></em> to <em><strong>t2</strong></em> using the index on <em><strong>(n01)</strong></em> &#8211; but the optimizer no longer knows about that index, so we get the following plan:</p> <pre class="brush: plain; title: ; notranslate"> ------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT| ------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 538 (100)| | | | |* 1 | HASH JOIN | | 20 | 1140 | 538 (7)| 00:00:01 | | | |* 2 | TABLE ACCESS FULL| T1 | 1 | 9 | 268 (6)| 00:00:01 | | | | 3 | REMOTE | T2 | 100K| 4687K| 268 (6)| 00:00:01 | ORCL@~ | R-&gt;S | ------------------------------------------------------------------------------------------- Outline Data ------------- /*+ BEGIN_OUTLINE_DATA IGNORE_OPTIM_EMBEDDED_HINTS OPTIMIZER_FEATURES_ENABLE('12.2.0.1') DB_VERSION('12.2.0.1') ALL_ROWS OUTLINE_LEAF(@&quot;SEL$1&quot;) FULL(@&quot;SEL$1&quot; &quot;T1&quot;@&quot;SEL$1&quot;) FULL(@&quot;SEL$1&quot; &quot;T2&quot;@&quot;SEL$1&quot;) LEADING(@&quot;SEL$1&quot; &quot;T1&quot;@&quot;SEL$1&quot; &quot;T2&quot;@&quot;SEL$1&quot;) USE_HASH(@&quot;SEL$1&quot; &quot;T2&quot;@&quot;SEL$1&quot;) END_OUTLINE_DATA */ Predicate Information (identified by operation id): --------------------------------------------------- 1 - access(&quot;T2&quot;.&quot;N01&quot;=&quot;T1&quot;.&quot;N01&quot;) 2 - filter(&quot;T1&quot;.&quot;ID&quot;=99) Remote SQL Information (identified by operation id): ---------------------------------------------------- 3 - SELECT &quot;N01&quot;,&quot;ID&quot;,&quot;PADDING&quot; FROM &quot;T2&quot; &quot;T2&quot; (accessing 'ORCL@LOOPBACK' ) </pre> <p>Oracle is going to do a hash join and apply the join predicate late. 
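</p>

<p>The selection rule demonstrated above can be sketched as a toy model &#8211; illustrative Python only, not Oracle internals; the index keys mirror the demo schema, and &#8220;creation order&#8221; stands in for ascending object_id:</p>

```python
# Toy model of the rule shown above (illustrative Python, NOT Oracle code):
# for a remote table, the local optimizer considers only the 20 indexes with
# the highest object_id values, i.e. the 20 most recently created.

def visible(creation_order, limit=20):
    """Return the set of index keys the local optimizer would consider."""
    return set(creation_order[-limit:])

# Creation order from the demo: the indexes on n21..n24 first, then n01..n20
# (created as t2_z01..t2_z10 and t2_i11..t2_i20).
order = ["n21", "n22", "n23", "n24"] + [f"n{c:02d}" for c in range(1, 21)]
assert visible(order) == {f"n{c:02d}" for c in range(1, 21)}

# Creating t2_id on (id) gives it the newest (highest) object_id ...
order.append("id")
assert "id" in visible(order)
# ... and the index on (n01) silently drops out of the candidate set.
assert "n01" not in visible(order)
```

<p>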
Although the remote optimizer can sometimes rescue us from a mistake made by the local optimizer and use indexes that the local optimizer doesn&#8217;t know about, there are times when the remote SQL generated by the local optimizer is so rigidly associated with the expected plan that there&#8217;s no way the remote optimizer can work around the assumptions made by the local optimizer.</p> <p>So when you create (or drop and recreate) an index, it&#8217;s just possible that a distributed plan will have to change because the local optimizer is no longer aware of an index that exists at the remote site.</p> <h3>tl;dr</h3> <p>Be very cautious about dropping and recreating indexes if the table in question</p> <ol> <li>has more than 20 indexes</li> <li>and is used at the remote end of a distributed execution plan</li> </ol> <p>The optimizer will consider only 20 of the indexes on the table, choosing the ones with the highest object_ids. If you drop and recreate an index then it gets a new (highest) object_id and a plan may change because the index that Oracle was previously using is no longer in the top 20.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18256 Tue May 08 2018 07:53:28 GMT-0400 (EDT) 18c Scalable Sequences Part III (Too Much Rope) https://richardfoote.wordpress.com/2018/05/08/18c-scalable-sequences-part-iii-too-much-rope/ I previously looked in Part I and Part II at how Scalable Sequences, officially released in 18c, can reduce index contention issues by automatically assigning a 6 digit prefix to the sequence value based on the instance ID and session ID of the session.
We need to be careful and consider this 6 digit prefix if [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5620 Tue May 08 2018 03:06:26 GMT-0400 (EDT) How to Hide Schema Notation in your Data Modeler Data Dictionary Reports https://www.thatjeffsmith.com/archive/2018/05/how-to-hide-schema-notation-in-your-data-modeler-data-dictionary-reports/ <p>Someone asked, hey, I know how to hide the schema notation in our diagrams</p> <p>You can too &#8211; <a href="https://www.thatjeffsmith.com/archive/2013/01/how-to-hide-schema-notation-from-tables-in-oracle-sql-developer-data-modeler/" rel="noopener" target="_blank">READ THIS POST</a> &#8211; </p> <p>&#8230;but (there is always a BUT), how can we hide it from the data dictionary reports as well?</p> <p>The answer is &#8211; you need to manage the report template.</p> <p>When you open the Report dialog, switch to the Custom Templates.</p> <div id="attachment_6624" style="width: 576px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/custom-template1.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/custom-template1.png" alt="" width="566" height="728" class="size-full wp-image-6624" /></a><p class="wp-caption-text">Click the &#8216;Manage&#8217; button.</p></div> <p>We give you two custom templates to play with, &#8216;Table_one_level_list_Props&#8217; and &#8216;Tables_2_Levels&#8217;.</p> <p>Pick one.</p> <p>Click the Edit button.</p> 
<div id="attachment_6625" style="width: 761px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/custom-template2.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/custom-template2.png" alt="" width="751" height="364" class="size-full wp-image-6625" /></a><p class="wp-caption-text">This button.</p></div> <p>Cool.</p> <p>Now, you&#8217;re going to see a lot of properties. The first one on both the &#8216;Columns&#8217; and &#8216;PK, UKs, and Indexes&#8217; tables has a property name of &#8216;Schema Object.&#8217; </p> <p>Remove that from the report property by using the Left arrow button, and save the report design.</p> <div id="attachment_6626" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/no-schema-report.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/no-schema-report.png" alt="" width="1024" height="571" class="size-full wp-image-6626" /></a><p class="wp-caption-text">Remove, save, run report using this custom template.</p></div> <p>Once you&#8217;re back to the Reports dialog, make sure the right Custom Reports template is assigned and generate the report.</p> <p>Voila, no mention of a SCHEMA, anywhere.</p> <div id="attachment_6627" style="width: 1091px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/no-schema-report2.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/no-schema-report2.png" alt="" width="1081" height="760" class="size-full wp-image-6627" /></a><p class="wp-caption-text">This isn&#8217;t a schema-less design&#8230;that&#8217;s&#8230;DIFFERENT.</p></div> thatjeffsmith https://www.thatjeffsmith.com/?p=6623 Mon May 07 2018 16:33:43 GMT-0400 (EDT) Container orchestrators and persistent volumes – Part 1: DC/OS https://blog.pythian.com/container-orchestrators-persistent-volumes-part-1-dc-os/ <div 
class="l-submain"><div class="l-submain-h g-html i-cf"><p><img class="alignleft wp-image-103914" src="https://blog.pythian.com/wp-content/uploads/image4.jpg" alt="Container orchestrators" width="190" height="127" srcset="https://blog.pythian.com/wp-content/uploads/image4.jpg 612w, https://blog.pythian.com/wp-content/uploads/image4-465x310.jpg 465w, https://blog.pythian.com/wp-content/uploads/image4-180x120.jpg 180w, https://blog.pythian.com/wp-content/uploads/image4-350x233.jpg 350w" sizes="(max-width: 190px) 100vw, 190px" /></p> <p><i style="font-size: large;">One key fac</i><i style="font-size: large;">tor on running stateful services (i.e., database servers) on container-centric, managed environments is portability: you need the dataset to follow your container instance. I spent some time reviewing how external persistent volumes are handled when working with DC/OS in AWS.</i></p> <h3></h3> <h3></h3> <h3 id="h.izv906wdj2fv" class="c1 subtitle"><span class="c7">DC/OS on AWS in 15 minutes</span></h3> <p class="c3">I first started deploying the latest version of <span class="c16"><a class="c12" href="https://www.google.com/url?q=https://dcos.io/&amp;sa=D&amp;ust=1523635781099000">DC/OS</a></span> on AWS using the <span class="c16"><a class="c12" href="https://www.google.com/url?q=https://dcos.io/install/&amp;sa=D&amp;ust=1523635781099000">cloud formation template</a></span><span class="c0"> provided by Mesosphere. Master HA is not relevant or required for a lab so I went with the Single master deployment.</span></p> <p class="c3">Once your DC/OS CloudFormation deployment is ready you will get a URL for the web console in the output section, under <span class="c4 c13">DnsAddress</span><span class="c0">. 
You can use Google, Github or Microsoft SSO credentials to log into the console.<br /> </span></p> <p><img class="wp-image-103915 alignright" src="https://blog.pythian.com/wp-content/uploads/image3-2.png" alt="DC/OS MySQL" width="359" height="465" srcset="https://blog.pythian.com/wp-content/uploads/image3-2.png 609w, https://blog.pythian.com/wp-content/uploads/image3-2-465x602.png 465w, https://blog.pythian.com/wp-content/uploads/image3-2-350x453.png 350w" sizes="(max-width: 359px) 100vw, 359px" /></p> <p><img class="alignleft size-full wp-image-103913" src="https://blog.pythian.com/wp-content/uploads/image5-2.png" alt="DC/OS service" width="1829" height="361" srcset="https://blog.pythian.com/wp-content/uploads/image5-2.png 1829w, https://blog.pythian.com/wp-content/uploads/image5-2-465x92.png 465w, https://blog.pythian.com/wp-content/uploads/image5-2-350x69.png 350w" sizes="(max-width: 1829px) 100vw, 1829px" /></p> <h3 id="h.izv906wdj2fv" class="c1 subtitle"><span class="c7">Experimenting with external persistent volumes</span></h3> <p class="c3">Let’s deploy a simple MySQL instance from the DC/OS catalog (<span class="c4"><b>DC/OS GUI -&gt; Catalog -&gt; Search -&gt; MySQL</b></span><span class="c0">). </span></p> <p class="c3"><span class="c0">In the service configuration, we will make the data volume “persistent” and “external”. </span></p> <p class="c3">After launching the service you will notice that an EBS volume was created with the name specified in the “volume name” field. 
That volume is also mounted on the slave node where the MySQL service is running (to see the IP of the node, go to <strong><span class="c4">Services -&gt; mysql</span></strong>).</p> <p>&nbsp;</p> <p class="c3"><span class="c0">You can also see any volumes associated with a service by clicking in the Volumes tab:</span></p> <p><img class="alignleft size-full wp-image-103917" src="https://blog.pythian.com/wp-content/uploads/image1-2.png" alt="DC/OS volumes" width="1820" height="287" srcset="https://blog.pythian.com/wp-content/uploads/image1-2.png 1820w, https://blog.pythian.com/wp-content/uploads/image1-2-465x73.png 465w, https://blog.pythian.com/wp-content/uploads/image1-2-350x55.png 350w" sizes="(max-width: 1820px) 100vw, 1820px" /></p> <p>&nbsp;</p> <p class="c3">So at this point we have a MySQL instance running within a container on node 10.0.0.160, and since we requested a persistent, external volume and we are in AWS, DC/OS took care of it, provisioned an EBS volume and made the necessary arrangements for the data directory to be stored there. Pretty neat isn’t it?</p> <p class="c3"><span class="c0">Now let’s see how DC/OS deals with node failures by forcing one on the node where the MySQL instance is running (10.0.0.160 in this case). To make things more interesting, I created a table, and inserted some data.</span></p> <p class="c3"><span class="c0">Something worth mentioning here is that the CloudFormation template used will make slave nodes part of autoscale groups, so if you try to stop/terminate the instance, new instances will be provisioned. 
Furthermore, if the instance is restarted, the service will be migrated, but it may die if the failed server comes back to life as DC/OS will not be able to detach the EBS volume.</span></p> <p class="c3"><span class="c0">I was able to get the MySQL service migrated by restarting the node, but as you can see in the image below, I ran into the issue mentioned above: the service failed and retried until I shut down the 10.0.0.160 instance so the volume could be detached.</span></p> <p><img class="alignleft size-full wp-image-103916" src="https://blog.pythian.com/wp-content/uploads/image2-3.png" alt="DC/OS service restart" width="1824" height="411" srcset="https://blog.pythian.com/wp-content/uploads/image2-3.png 1824w, https://blog.pythian.com/wp-content/uploads/image2-3-465x105.png 465w, https://blog.pythian.com/wp-content/uploads/image2-3-350x79.png 350w" sizes="(max-width: 1824px) 100vw, 1824px" /></p> <p class="c3"><span class="c0">After DC/OS detects that the node is unreachable, it will start migrating the service to an online node. After the service is marked as Healthy (Green) again, we can check the slave node IP where it is running. Furthermore, if we check the EBS volume, it will now appear as attached to the new slave node, and the data is accessible again.</span></p> <p><img class="alignleft size-full wp-image-103912" src="https://blog.pythian.com/wp-content/uploads/image6-1.png" alt="EBS volumes" width="1750" height="85" srcset="https://blog.pythian.com/wp-content/uploads/image6-1.png 1750w, https://blog.pythian.com/wp-content/uploads/image6-1-465x23.png 465w, https://blog.pythian.com/wp-content/uploads/image6-1-350x17.png 350w" sizes="(max-width: 1750px) 100vw, 1750px" /><br /> &nbsp;</p> <h3 id="h.izv906wdj2fv" class="c1 subtitle"><span class="c7">REXRay</span></h3> <p class="c3">REXRay is the technology in charge of provisioning, mounting and unmounting the EBS volumes to EC2 instances and making the mount point available to the container. 
You will find rexray running as a daemon on the DC/OS slave nodes and containers running with the <span class="c5"><i>--volume-driver=rexray</i> </span>parameter. The REXRay configuration file specifies “ebs” as the service, and the CloudFormation template assigns the necessary roles to the compute instances for <span class="c0">interaction with the EBS APIs. Running a docker inspect on the running MySQL instance will reveal the EBS mount point, the associated container path and the driver in use:</span></p> <pre class="brush: bash; title: ; notranslate">
ip-10-0-0-54 ~ # docker inspect 182c97e9d4e4 | jq .[].Mounts
[
  {
    &quot;Name&quot;: &quot;mysql&quot;,
    &quot;Source&quot;: &quot;/var/lib/libstorage/volumes/mysql/data&quot;,
    &quot;Destination&quot;: &quot;/var/lib/mysql&quot;,
    &quot;Driver&quot;: &quot;rexray&quot;,
    &quot;Mode&quot;: &quot;rw&quot;,
    &quot;RW&quot;: true,
    &quot;Propagation&quot;: &quot;rprivate&quot;
  },
</pre> <h3 id="h.izv906wdj2fv" class="c1 subtitle"><span class="c7">Conclusions</span></h3> <p class="c3">We proved<span class="c0"> that DC/OS will automatically and transparently take care of moving your external persistent volumes (EBS in this case) to the slave nodes where the associated service tasks are running. 
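As an aside, the Mounts structure that jq extracts in the docker inspect output above is plain JSON, so the same check is easy to script. A minimal Python sketch over a trimmed copy of that output (the snippet below, not live docker output):

```python
import json

# Trimmed copy of the `docker inspect` output shown above.
inspect_output = json.loads("""
[{"Mounts": [{"Name": "mysql",
              "Source": "/var/lib/libstorage/volumes/mysql/data",
              "Destination": "/var/lib/mysql",
              "Driver": "rexray",
              "Mode": "rw",
              "RW": true,
              "Propagation": "rprivate"}]}]
""")

# Equivalent of: docker inspect <container> | jq .[].Mounts
# filtered down to the rexray-managed mounts.
rexray_mounts = [
    (m["Source"], m["Destination"])
    for container in inspect_output
    for m in container["Mounts"]
    if m["Driver"] == "rexray"
]
print(rexray_mounts)
# [('/var/lib/libstorage/volumes/mysql/data', '/var/lib/mysql')]
```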
It is important to highlight that EBS volumes cannot be detached from running instances.</span></p> <p class="c3">Finally, even though you could deploy a standalone database instance as described here, Galera cluster may be a better choice for running MySQL on DC/OS (as explained in this <a href="https://www.percona.com/live/18/sessions/running-database-services-on-dcos">Percona Live 2018 session</a>).</p> <p>&nbsp;</p> </div></div> Gabriel Ciciliani https://blog.pythian.com/?p=103910 Mon May 07 2018 14:44:40 GMT-0400 (EDT) Build an Integrated Replicat using JSON https://dbasolved.com/2018/05/07/build-a-integrated-replicat-using-json/ <p dir="auto">In a previous post, I showed you how to build an Integrated Extract (IE) and Distribution Path by using JSON and cURL. In this post, let&#8217;s look at how you can build an Integrated Replicat (IR) in the same manner.</p> <p dir="auto">To build a replicat using JSON, the JSON document is made up of the following 8 sections:</p> <p dir="auto">Config &#8211; Details for the associated parameter file<br />Source &#8211; Where the replicat should read transactions from<br />Credentials &#8211; What credentials in the credential store should be used<br />Checkpoint &#8211; What checkpoint table is used by the replicat<br />Mode &#8211; What type of replicat will be built<br />Registration &#8211; Register the replicat with the database<br />Begin &#8211; At what timeframe the replicat should start<br />Status &#8211; Whether the replicat should be started or not</p> <p dir="auto">The resulting JSON document would look like the following:</p> <p dir="ltr">{<br /> &quot;config&quot;:[<br /> &quot;Replicat REPTS&quot;,<br /> &quot;UseridAlias TGGATE&quot;,<br /> &quot;Map SOE.*, Target SOE.*;&quot;<br /> ],<br /> &quot;source&quot;:{<br /> &quot;name&quot;:&quot;bc&quot;<br /> },<br /> &quot;credentials&quot;:{<br /> &quot;alias&quot;:&quot;TGGATE&quot;<br /> },<br /> &quot;checkpoint&quot;:{<br /> &quot;table&quot;:&quot;ggate.checkpoints&quot;<br /> },<br /> &quot;mode&quot;:{<br /> &quot;type&quot;:&quot;integrated&quot;,<br /> &quot;parallel&quot;: true<br /> },<br /> &quot;registration&quot;: &quot;standard&quot;,<br /> &quot;begin&quot;:&quot;now&quot;,<br /> &quot;status&quot;:&quot;stopped&quot;<br />}</p> <p dir="ltr">Now that you have a valid JSON document, the Integrated Replicat can be built with a cURL command as follows:</p> <p>curl -X POST \<br /> <a href="http://localhost:17001/services/v2/replicats/REPTS" target="_blank">http://localhost:17001/services/v2/replicats/REPTS</a> \<br /> -H 'Cache-Control: no-cache' \<br /> -d '{<br /> &quot;config&quot;:[<br /> &quot;Replicat REPTS&quot;,<br /> &quot;UseridAlias TGGATE&quot;,<br /> &quot;Map SOE.*, Target SOE.*;&quot;<br /> ],<br /> &quot;source&quot;:{<br /> &quot;name&quot;:&quot;bc&quot;<br /> },<br /> &quot;credentials&quot;:{<br /> &quot;alias&quot;:&quot;TGGATE&quot;<br /> },<br /> &quot;checkpoint&quot;:{<br /> &quot;table&quot;:&quot;ggate.checkpoints&quot;<br /> },<br /> &quot;mode&quot;:{<br /> &quot;type&quot;:&quot;integrated&quot;,<br /> &quot;parallel&quot;: true<br /> },<br /> &quot;registration&quot;: &quot;standard&quot;,<br /> &quot;begin&quot;:&quot;now&quot;,<br /> &quot;status&quot;:&quot;stopped&quot;<br />}'</p> <p dir="ltr">Just like the Integrated Extract (IE) and Distribution Service, the Integrated Replicat (IR) is created in a stopped state. 
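The same request can also be assembled and sent from a short script rather than cURL. A sketch using only Python's standard library (the endpoint and payload are the ones shown above; the actual POST is left commented out so nothing is sent without a live deployment, and any authentication your environment requires is omitted):

```python
import json
import urllib.request

# The eight sections of the replicat document described above.
replicat = {
    "config": ["Replicat REPTS", "UseridAlias TGGATE", "Map SOE.*, Target SOE.*;"],
    "source": {"name": "bc"},
    "credentials": {"alias": "TGGATE"},
    "checkpoint": {"table": "ggate.checkpoints"},
    "mode": {"type": "integrated", "parallel": True},
    "registration": "standard",
    "begin": "now",
    "status": "stopped",
}

# POST to the endpoint used in the cURL example above.
req = urllib.request.Request(
    "http://localhost:17001/services/v2/replicats/REPTS",
    data=json.dumps(replicat).encode("utf-8"),
    headers={"Cache-Control": "no-cache", "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a live deployment
```

Building the document as a Python dict first makes it easy to validate (or template) the eight sections before anything touches the server.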
At this point, you can start the IR and validate what changes need to be made to ensure replication happens.</p> <p dir="ltr">Enjoy!!!</p> Bobby Curtis http://dbasolved.com/?p=1882 Mon May 07 2018 14:15:00 GMT-0400 (EDT) Cassandra open-source log analysis in Kibana, using filebeat, modeled in Docker https://blog.pythian.com/cassandra-open-source-log-analysis-kibana-using-filebeat-modeled-docker/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">I was recently asked to set up a solution for Cassandra open-source log analysis to include in an existing Elasticsearch-Logstash-Kibana (ELK) stack. After some research on more of the newer capabilities of the technologies, I realized I could use &#8220;beats&#8221; in place of the heavier logstash processes for basic monitoring. This basic monitoring would not involve extensive log transformation.</span></p> <p><span style="font-weight: 400;">The code to run this demo is available to clone or fork at </span><a href="https://github.com/pythian/cassandra-elk"><span style="font-weight: 400;">https://github.com/pythian/cassandra-elk</span></a><span style="font-weight: 400;">. The only other requirement is Docker (I am using Docker version 18.05.0-ce-rc1) &#8212; using Docker for Mac or Docker for Windows will be most convenient. </span></p> <p><span style="font-weight: 400;">In a typical production system, you would already have Cassandra running, but all the pieces are included in the Docker stack here so you can start from zero. The model here assumes ELK and a Cassandra cluster are running in your environment, and you need to stream the Cassandra logs into your monitoring system. </span></p> <p><span style="font-weight: 400;">In this setup, the Cassandra logs are being ingested into Elasticsearch and visualized via Kibana. 
I have included some ways to see data at each step of the workflow in the final section below.</span></p> <p><b>Start the containers:</b></p> <pre><span style="font-weight: 400;">docker-compose up -d </span></pre> <p><span style="font-weight: 400;">(Note: The cassandra-env.sh included with this test environment limits the memory used by the setup via MAX_HEAP_SIZE and HEAP_NEWSIZE, allowing it to be run on a laptop with small memory. This would not be the case in production.)</span></p> <p><b>Set up the test Cassandra cluster:</b></p> <p><span style="font-weight: 400;">As the Docker containers are starting up, it can be convenient to see resource utilization via ctop:</span></p> <p><img class="alignnone wp-image-104079 size-full" src="https://blog.pythian.com/wp-content/uploads/image3-4.png" alt="Example of ctop resource monitor for Docker containers in open-source log analysis for Cassandra" width="826" height="181" srcset="https://blog.pythian.com/wp-content/uploads/image3-4.png 826w, https://blog.pythian.com/wp-content/uploads/image3-4-465x102.png 465w, https://blog.pythian.com/wp-content/uploads/image3-4-350x77.png 350w" sizes="(max-width: 826px) 100vw, 826px" /></p> <p><b>Set up the filebeat software</b></p> <p><span style="font-weight: 400;">Do the following on each Cassandra node.</span></p> <p><b>1. Download the software</b></p> <p><span style="font-weight: 400;">You would likely not need to install curl in your environment, but the Docker images used here are bare-bones by design. 
The </span><i><span style="font-weight: 400;">apt update</span></i><span style="font-weight: 400;"> statement is also necessary since typically repos are cleared of files after the requested packages are installed via the Dockerfile.</span></p> <pre><span style="font-weight: 400;">apt update</span> <span style="font-weight: 400;">apt install curl -y</span> <span style="font-weight: 400;">curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.3-amd64.deb</span> <span style="font-weight: 400;">dpkg -i filebeat-6.2.3-amd64.deb</span></pre> <p><span style="font-weight: 400;">For other operating systems, see: </span><a href="https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html"><span style="font-weight: 400;">https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html</span></a><span style="font-weight: 400;">.</span></p> <p>&nbsp;</p> <p><b>2. Configure filebeat</b></p> <p><span style="font-weight: 400;">The beats software allows for basic filtering and transformation via this configuration file. Put the below in /etc/filebeat/filebeat.yml.</span></p> <p><span style="font-weight: 400;">(This is edited from an example at: </span><a href="https://github.com/thelastpickle/docker-cassandra-bootstrap/blob/master/cassandra/config/filebeat.yml"><span style="font-weight: 400;">https://github.com/thelastpickle/docker-cassandra-bootstrap/blob/master/cassandra/config/filebeat.yml</span></a><span style="font-weight: 400;">.)</span></p> <p><span style="font-weight: 400;">The values in the output.elasticsearch and setup.kibana are their respective IP addresses and port numbers. For filebeat.prospectors &#8212; a </span><i><span style="font-weight: 400;">prospector</span></i><span style="font-weight: 400;"> manages all the log inputs &#8212; two types of logs are used here, the system log and the garbage collection log. For each, we will exclude any compressed (.zip) files. 
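Before looking at the file itself, it may help to see what the multiline settings in it actually do. Below is a small Python sketch of the grouping rule (filebeat's own implementation differs; the pattern is the one used in the configuration, and `negate: true` with `match: after` means non-matching lines are appended to the previous matching line):

```python
import re

# A new log record starts with a severity level (same pattern as the config).
pattern = re.compile(r"^TRACE|DEBUG|WARN|INFO|ERROR")

raw_lines = [
    "WARN  [ReadStage-1] 2018-05-07 14:01:09 long GC pause",
    "java.lang.RuntimeException: example stack trace",   # continuation line
    "    at org.apache.cassandra.service...",            # continuation line
    "INFO  [main] 2018-05-07 14:01:10 startup complete",
]

# negate: true, match: after -> lines that do NOT match the pattern are
# appended to the most recent matching line (stitching stack traces together).
records = []
for line in raw_lines:
    if pattern.match(line):
        records.append(line)
    elif records:
        records[-1] += "\n" + line

print(len(records))  # 2: the stack trace is folded into the WARN record
```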
The multiline* settings define how multiple lines in the log files are handled. Here, the log manager will find lines that start with any of the patterns shown and append the following lines not matching the pattern until it reaches a new match. More options available at: <span style="font-weight: 400;"><a href="https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html">https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html</a>.</span></p> <pre>output.elasticsearch:
    enabled: true
    hosts: ["172.16.238.31:9200"]

setup.kibana:
    host: "172.16.238.33:5601"

filebeat.prospectors:
    - input_type: log
      paths:
        - "/var/log/cassandra/system.log*"
      document_type: cassandra_system_logs
      exclude_files: ['\.zip$']
      multiline.pattern: '^TRACE|DEBUG|WARN|INFO|ERROR'
      multiline.negate: true
      multiline.match: after

    - input_type: log
      paths:
        - "/var/log/cassandra/debug.log*"
      document_type: cassandra_debug_logs
      exclude_files: ['\.zip$']
      multiline.pattern: '^TRACE|DEBUG|WARN|INFO|ERROR'
      multiline.negate: true
      multiline.match: after</pre> <p>&nbsp;</p> <p><b>3. Set up Kibana dashboards</b></p> <pre>filebeat setup --dashboards</pre> <p>&nbsp;</p> <p><span style="font-weight: 400;">Example output:</span></p> <pre>Loaded dashboards</pre> <p>&nbsp;</p> <p><b>4. Start the beat</b></p> <pre>service filebeat start</pre> <p>&nbsp;</p> <p><span style="font-weight: 400;">Example output:</span></p> <pre>2018-04-12T20:43:03.798Z  INFO  instance/beat.go:468  Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-04-12T20:43:03.799Z  INFO  instance/beat.go:475  Beat UUID: 2f43562f-985b-49fc-b229-83535149c52b
2018-04-12T20:43:03.800Z  INFO  instance/beat.go:213  Setup Beat: filebeat; Version: 6.2.3
2018-04-12T20:43:03.801Z  INFO  elasticsearch/client.go:145  Elasticsearch url: http://172.16.238.31:9200
2018-04-12T20:43:03.802Z  INFO  pipeline/module.go:76  Beat name: C1
Config OK</pre> <p>&nbsp;</p> <p><b>View the graphs:</b></p> <p><span style="font-weight: 400;">Then view the Kibana graphs in a 
local browser at: </span><a href="http://localhost:5601"><span style="font-weight: 400;">http://localhost:5601</span></a><span style="font-weight: 400;">. </span></p> <p>&nbsp;</p> <p><span style="font-weight: 400;">Run some sample load against one of the nodes to get more logs to experiment with:</span></p> <pre><span style="font-weight: 400;">cassandra-stress write n=20000 -pop seq=1..20000 -rate threads=4</span></pre> <p><img class="alignnone wp-image-104077 size-full" src="https://blog.pythian.com/wp-content/uploads/image1-4.png" alt="Example output from Cassandra-stress being used to populate test data" width="720" height="171" srcset="https://blog.pythian.com/wp-content/uploads/image1-4.png 720w, https://blog.pythian.com/wp-content/uploads/image1-4-465x110.png 465w, https://blog.pythian.com/wp-content/uploads/image1-4-350x83.png 350w" sizes="(max-width: 720px) 100vw, 720px" /></p> <p><span style="font-weight: 400;">Here are some sample queries to run in Kibana:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">message:WARN*</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">message:(ERROR* OR WARN*)</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">message:(ERROR* OR WARN*) AND beat.hostname:DC1C2</span></li> </ul> <p>&nbsp;</p> <p><span style="font-weight: 400;">You can also filter the display by choosing from the available fields on the left.</span></p> <p><img class="alignnone wp-image-104078 size-full" src="https://blog.pythian.com/wp-content/uploads/image2-5.png" alt="Kibana dashboard example display" width="1020" height="550" srcset="https://blog.pythian.com/wp-content/uploads/image2-5.png 1020w, https://blog.pythian.com/wp-content/uploads/image2-5-465x251.png 465w, https://blog.pythian.com/wp-content/uploads/image2-5-350x189.png 350w" sizes="(max-width: 1020px) 100vw, 1020px" /></p> <p>&nbsp;</p> <p><span style="font-weight: 400;">If you would like to see what the logs look like at 
each step of the workflow, view logs within the Cassandra container in /var/log/cassandra like this:</span></p> <pre><span style="font-weight: 400;">tail /var/log/cassandra/debug.log</span></pre> <p><span style="font-weight: 400;">Example output:</span></p> <pre><span style="font-weight: 400;">WARN  [PERIODIC-COMMIT-LOG-SYNCER] 2018-05-07 14:01:09,216 NoSpamLogger.java:94 - Out of 0 commit log syncs over the past 0.00s with average duration of Infinityms, 1 have exceeded the configured commit interval by an average of 80.52ms</span></pre> <p>&nbsp;</p> <p><span style="font-weight: 400;">View this data stored in Elasticsearch (in JSON format) in a browser like this:</span></p> <p><a href="http://localhost:9200/_search?q=(message:(ERROR*%20OR%20WARN*)%20AND%20beat.hostname:DC1C2)"><span style="font-weight: 400;">http://localhost:9200/_search?q=(message:(ERROR*%20OR%20WARN*)%20AND%20beat.hostname:DC1C2)</span></a></p> <p><span style="font-weight: 400;">Example output:</span></p> <p><img class="alignnone size-full wp-image-104082" src="https://blog.pythian.com/wp-content/uploads/image4-3.png" alt="" width="836" height="570" srcset="https://blog.pythian.com/wp-content/uploads/image4-3.png 836w, https://blog.pythian.com/wp-content/uploads/image4-3-465x317.png 465w, https://blog.pythian.com/wp-content/uploads/image4-3-350x239.png 350w" sizes="(max-width: 836px) 100vw, 836px" /></p> </div></div> Valerie Parham-Thompson https://blog.pythian.com/?p=104076 Mon May 07 2018 13:27:43 GMT-0400 (EDT) Managing AWR in Active Data Guard Standby Databases http://oracledba.blogspot.com/2018/05/managing-awr-in-adg-standby-databases.html <div class="separator" style="clear: both; text-align: center;"><a imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-QVzhl1vvf70/WvBzm6xeR_I/AAAAAAAFssU/5VE6UQGUeUYwiTRwwZAlDAJUWrmj6H7pQCLcBGAs/s400/awr3.png" width="400" height="216" data-original-width="483" data-original-height="261" 
/></a></div><div class="separator" style="clear: both; text-align: center;"></div><b>Problem</b><br />- Automatic Workload Repository (AWR) snapshots cannot be taken in a read-only standby environment.<br />- Performance monitoring and analysis are limited to basic STATSPACK functionality.<br /><br /><b>Solution</b><br />In Oracle Database <b>12.2</b>, the AWR framework is enhanced to support capture of remote snapshots from any generic database, including Active Data Guard (ADG) databases. This framework is called the Remote Management Framework (RMF).<br />• A target catalog database collects snapshots from the remote databases (sources).<br />• Snapshots can be collected automatically or manually.<br />• AWR tables on the catalog database accumulate snapshot data from all sources via database links.<br />• Source databases must be registered on the catalog via the new DBMS_WORKLOAD_REPOSITORY.REGISTER_REMOTE_DATABASE API.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://draft.blogger.com/null" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="734" data-original-width="1515" height="308" src="https://3.bp.blogspot.com/-ePXO9KTQW4g/WvBreAYJbOI/AAAAAAAFsrw/dKfMsmXiCuU0pswFWS2oopMkvz9ouF_RwCLcBGAs/s640/awr2.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://draft.blogger.com/null" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="716" data-original-width="1462" height="312" src="https://4.bp.blogspot.com/-HyUxXYhEX54/WvBrrtcVBRI/AAAAAAAFsr0/DeIqr3afVoAYdA-8uRPVxD8Dldl54KWgQCLcBGAs/s640/awr22.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; 
text-align: center;"></div><br />These are the basic steps for setting up the RMF topology for generating AWR on the physical standby:<br />1. Configure database nodes to add to the topology.<br />2. Create the topology.<br />3. Register database nodes with the topology.<br />4. Create remote snapshots.<br />5. Generate the AWR.<br /><br />In this example, we set the repository in the primary database; therefore it is called the target system. The standby database is called the source system.<br />In the following example, the primary database name is pdb and the standby database name is sdb. For clarity, the SQL prompt was changed from SQL&gt; to the database role Primary/Standby (for example: set sqlprompt "PRIMARY&gt; ").<br />The RMF APIs are declared in the PL/SQL package DBMS_UMF. All the AWR-related operations in RMF can be performed only by the SYS$UMF user. Since the SYS$UMF user is locked by default, it must be unlocked before deploying the RMF topology:<br /><blockquote>PRIMARY&gt; alter user sys$umf identified by sysumf account unlock;</blockquote>Create the database link between the primary database and the standby database and vice versa:<br /><blockquote>PRIMARY&gt; create DATABASE LINK DBLINK_pdb_to_sdb CONNECT TO sys$umf IDENTIFIED BY sysumf using 'sdb';<br />PRIMARY&gt; create DATABASE LINK DBLINK_sdb_to_pdb CONNECT TO sys$umf IDENTIFIED BY sysumf using 'pdb';</blockquote>Each database node in a topology must be assigned a unique name (the default is DB_UNIQUE_NAME):<br /><blockquote>PRIMARY&gt; exec DBMS_UMF.configure_node ('PDB');</blockquote>Since the standby database is remote to the target system (the primary database), we register it via the corresponding database link:<br /><blockquote>STANDBY&gt; exec DBMS_UMF.configure_node ('sdb', 'DBLINK_sdb_to_pdb');</blockquote>Create the RMF topology and designate the node on which it is executed as the destination node for that topology:<br /><blockquote>PRIMARY&gt; exec DBMS_UMF.create_topology 
('Topology_1');<br />PRIMARY&gt; select * from DBA_UMF_TOPOLOGY;<br />PRIMARY&gt; select * from DBA_UMF_REGISTRATION;</blockquote>Register the standby database in the AWR using the RMF:<br /><blockquote>PRIMARY&gt; exec DBMS_WORKLOAD_REPOSITORY.register_remote_database(node_name=&gt;'sdb');<br />PRIMARY&gt; select * from DBA_UMF_SERVICE;</blockquote>Create a remote snapshot using the RMF:<br /><blockquote>PRIMARY&gt; exec DBMS_WORKLOAD_REPOSITORY.CREATE_REMOTE_SNAPSHOT('sdb');</blockquote>Create AWR Report.<br /><blockquote class="tr_bq">PRIMARY&gt; @?/rdbms/admin/awrrpti.sql<br />Specify the Report Type<br />~~~~~~~~~~~~~~~~~~~~~~~<br />AWR reports can be generated in the following formats. Please enter the name of the format at the prompt. Default value is 'html'.<br /><br />'html' HTML format (default)<br />'text' Text format<br />'active-html' Includes Performance Hub active report<br /><br />Enter value for report_type:<br /><br />Type Specified: html<br /><br />Instances in this Workload Repository schema<br />~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~<br />DB Id Inst Num DB Name Instance Host<br />------------ ---------- --------- ---------- ------<br /><b>3810102760 1 PDB sdb Local_host</b><br />* 3393159014 1 PDB pdb Remote_Host<br />Enter value for dbid: <b>3810102760</b></blockquote><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://draft.blogger.com/null" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="543" data-original-width="1528" height="228" src="https://4.bp.blogspot.com/-vQpDfJ6j_ZU/WvBo30FSeVI/AAAAAAAFsrk/9JOzEgqAdYgo2SmlMMtPhY7WcumXgFhCgCLcBGAs/s640/Capture2.JPG" width="640" /></a></div><br />Source:<br />Gathering Database Statistics - <a href="https://docs.oracle.com/database/122/TGDBA/gathering-database-statistics.htm#TGDBA232">https://docs.oracle.com/database/122/TGDBA/gathering-database-statistics.htm#TGDBA232</a><br />DBMS_UMF - <a 
href="https://docs.oracle.com/en/database/oracle/oracle-database/12.2/arpls/DBMS_UMF.html">https://docs.oracle.com/en/database/oracle/oracle-database/12.2/arpls/DBMS_UMF.html </a><br />DBMS_WORKLOAD_REPOSITORY - <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/DBMS_WORKLOAD_REPOSITORY.html">https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/DBMS_WORKLOAD_REPOSITORY.html </a><br />Oracle Active Data Guard: Power, Speed, Ease, and Protection - <a href="http://www.oracle.com/technetwork/database/availability/con6531-oracle-active-data-guard-3334919.pdf">http://www.oracle.com/technetwork/database/availability/con6531-oracle-active-data-guard-3334919.pdf </a> Yossi Nixon tag:blogger.com,1999:blog-6061714.post-8174786804151429844 Mon May 07 2018 11:43:00 GMT-0400 (EDT) ODTUG Kscope18 Updates https://www.odtug.com/p/bl/et/blogaid=796&source=1 This week's ODTUG Kscope18 updates includes the announcement of the Thursday Deep Dives, the Daily Events, special hotel room rate, and the Oracle usability activity sign up. ODTUG https://www.odtug.com/p/bl/et/blogaid=796&source=1 Mon May 07 2018 09:03:40 GMT-0400 (EDT) FBIs don’t exist https://jonathanlewis.wordpress.com/2018/05/07/fbis-dont-exist/ <p>This is a reprint (of a reprint) of <a href="http://www.jlcomp.demon.co.uk/no_fbi.html"><em><strong>a note I wrote more than 11 years ago</strong></em></a> on my old website. 
I&#8217;ve decided to republish it on the blog simply because one day I&#8217;ll probably decide to stop paying for the website given how old all the material is and this article makes an important point about the need (at least some of the time) for accuracy in the words you use to describe things.</p> <p style="text-align:center;">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-</p> <h3>There&#8217;s no such thing as a function-based index.</h3> <p>Well, okay, that’s what the manuals call them but it would be so much better if they were called <em>“indexes with virtual columns”</em> &#8211; because that’s what they are and that’s a name that would eliminate confusion.</p> <p>To demonstrate what I mean, ask yourself this question: <em>“Can the rule based optimizer use a function-based index ?”</em>. The answer is <em>‘Yes’</em>, as the following code fragment demonstrates:</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: fbi_rule.sql rem Author: Jonathan Lewis rem Dated: Jan 2005 rem create table t1 as select rownum id, dbms_random.value(0,500) n1, rpad('x',10) small_vc, rpad('x',100) padding from all_objects where rownum &lt;= 3000 ; create index t1_i1 on t1(id, trunc(n1)); set autotrace traceonly explain select /*+ rule */ small_vc from t1 where id = 55 and trunc(n1) between 1 and 10 ; set autotrace off Execution Plan ---------------------------------------------------------- 0 SELECT STATEMENT Optimizer=HINT: RULE 1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T1' 2 1 INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) </pre> <p>Last time I asked an audience if the rule-based optimizer (RBO) could use a <strong><em>function-based index</em></strong>, most of them thought the answer was <em>‘No’</em>. 
Even the Oracle manuals make the same mistake &#8211; for example in the <a href="https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_indexes.htm#ADFNS257"><em><strong>10g Release 2 Application Developers Guide p5-8</strong></em></a>, one of the restrictions on <strong><em>function-based indexes</em></strong> is <em>“Only cost based optimization can use function-based indexes”</em>.</p> <p>If I had asked the audience <em>“Can the rule-based optimizer use an index which includes a virtual column ?”</em> I wonder how many of them would have paused for thought, then asked themselves what would happen if the index started with <em>“ordinary”</em> columns and the <em>“function-based”</em> bit was later on in the index.</p> <p>The manuals should, of course, state: <em>“The rule-based optimizer cannot take advantage of any virtual columns in an index, or of any columns that follow the first virtual column”.</em> Given a correct name and a correct description of functionality you can then conclude that if the first column is a <strong><em>virtual column</em></strong> the rule-based optimizer won’t use the index.</p> <p>I’m not suggesting, by the way, that you should be using the rule-based optimizer, or even that this specific example of functionality is going to be particularly beneficial to many people (RBO still uses the <strong><em>“trunc(n1)”</em></strong> as a <strong><em>filter predicate</em></strong> after reaching the table rather than as an <strong><em>access predicate</em></strong> – or even <strong><em>filter predicate</em></strong> &#8211; on the index); but it does demonstrate how easy it is for the wrong name, or terminology, to distract people from the truth.</p> <p>And here’s another thought for Oracle Corporation. 
Since it seems to be easy to implement <strong><em>virtual columns</em></strong> (there is a hidden entry for each such column in the data dictionary, and the text of the function defining the column appears as the <strong><em>default value</em></strong>), why should they exist only in indexes? Why can’t we have virtual columns which aren’t indexed, so that we can collect statistics on a virtual column and give the optimizer some information about the data distribution of some commonly used expression that we don’t actually want to build an index on?</p> <p>(<strong><em>Update Jan 2007</em></strong> – this is likely to happen in 11g according to ‘sneak preview’ presentations made by Oracle at OW2006.)</p> <p>P.S. There really are <strong><em>function-based indexes</em></strong> in Oracle. But Oracle Corp. calls them <strong><em>domain indexes</em></strong> (or <strong><em>co-operative indexes</em></strong>) and tells you that the things you build them with are <strong><em>operators</em></strong>, not <strong><em>functions</em></strong> &#8230; which actually makes them <strong><em>operator-based indexes</em></strong>!</p> <p style="text-align:center;">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-</p> <h3> Footnote (May 2018)</h3> <p>I&#8217;ve updated the reference to the 10g manuals (chapter 5 page 8) to include a URL, but the URL is for 11gR2 since the only 10g manual I could find online was the full pdf download. 
It&#8217;s  interesting to note what restrictions on the use of &#8220;function-based&#8221; indexes are reported in this manual, and I&#8217;m not sure that all of them were true at the time, and I&#8217;m fairly sure that some of them must be false by now, which is why it&#8217;s always good to have test scripts that you can run as you upgrade.</p> <p>There is an interesting variation over time for this example:</p> <ul> <li>In 9.2.0.8 and 10.2.0.5 the predicate on <em><strong>trunc(n1)</strong></em> is a filter predicate on the table</li> <li>In 11.1.0.7 the predicate <em><strong>trunc(n1)</strong></em> became an access predicate in the index</li> <li>In 11.2.0.4 the optimizer (finally) declined to use the index under the rule hint (but introduced a strange side effect &#8230; more about that later)</li> </ul> <p>Execution plan from 11.1.0.7</p> <pre class="brush: plain; title: ; notranslate"> Execution Plan ---------------------------------------------------------- Plan hash value: 1429545322 --------------------------------------------- | Id | Operation | Name | --------------------------------------------- | 0 | SELECT STATEMENT | | | 1 | TABLE ACCESS BY INDEX ROWID| T1 | |* 2 | INDEX RANGE SCAN | T1_I1 | --------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access(&quot;ID&quot;=55 AND TRUNC(&quot;N1&quot;)&gt;=1 AND TRUNC(&quot;N1&quot;)&lt;=10) Note ----- - rule based optimizer used (consider using cbo) </pre> <p>In passing &#8211; the change in the execution plan from 10g to 11.1 to 11.2 does mean that anyone still using the rule-based optimizer could find that an upgrade makes a difference to rule-based execution plans.</p> <p>As well as ignoring the index, 11.2.0.4 did something else that was new. 
I happened to have a second index on the table defined as (n1, trunc(id)); this had no impact on the execution plan in all the previous versions of Oracle. Apart from switching to a full tablescan, 11.2.0.4 also introduced an extra predicate:</p> <pre class="brush: plain; title: ; notranslate"> PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------------------------------------------------ Plan hash value: 3617692013 ---------------------------------- | Id | Operation | Name | ---------------------------------- | 0 | SELECT STATEMENT | | |* 1 | TABLE ACCESS FULL| T1 | ---------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(TRUNC(&quot;N1&quot;)&lt;=10 AND TRUNC(&quot;N1&quot;)&gt;=1 AND TRUNC(&quot;ID&quot;)=TRUNC(55) AND &quot;ID&quot;=55) Note ----- - rule based optimizer used (consider using cbo) </pre> <p>Some piece of code somewhere must have been looking at the second &#8220;function-based index&#8221; &#8211; or, at least, its virtual column definition &#8211; to be able to generate that trunc(id) = trunc(55) predicate. This was a detail introduced in 11.2.0.2, affected by fix control 9263333: <em>&#8220;generate transitive predicates for virtual column expressions&#8221;</em>. 
It&#8217;s possible that a change like this could result in changes in execution plan due to the extra predicates &#8211; even under rule-based optimisation.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18246 Mon May 07 2018 04:24:22 GMT-0400 (EDT) Start transport and Application of redo in Data Guard http://oracle-help.com/dataguard/start-transport-and-application-of-redo-in-data-guard/ <p>In a previous post, we have seen various methods to <strong>create a Physical Standby Database.</strong></p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="liybKofLBx"><p><a href="http://oracle-help.com/dataguard/create-standby-database-using-duplicate-database-for-standby-rman-command/">Create Standby Database using duplicate database for standby RMAN command</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/dataguard/create-standby-database-using-duplicate-database-for-standby-rman-command/embed/#?secret=liybKofLBx" data-secret="liybKofLBx" width="600" height="338" title="&#8220;Create Standby Database using duplicate database for standby RMAN command&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>In this post, we will see how archive logs are transferred to and applied on a <strong>standby database</strong>, and the processes concerned with redo transport and recovery.</p> <p>There are two main processes in Data Guard Physical Standby:</p> <ol> <li>RFS</li> <li>MRP</li> </ol> <p>1. <strong>RFS Process:</strong> This process is responsible for archive log transfer. 
When the log switch occurs at the Primary database, the <strong>LNS</strong> process of the <strong>primary</strong> <strong>database</strong> captures redo and sends it to the Standby using Oracle Net.</p> <p>Let us see an example.</p> <p><strong>Step 1:</strong> Switch logfile at the Primary Database:</p><pre class="crayon-plain-tag">SQL&gt; ALTER SYSTEM SWITCH LOGFILE; System altered.</pre><p>Check the <strong>alert log</strong> of the standby database:</p><pre class="crayon-plain-tag">[oracle@test1 testdb]$ tail -15f /u01/oracle/diag/rdbms/std_mgr/mgr/trace/alert_mgr.log RFS[8]: Selected log 4 for thread 1 sequence 37 dbid 1905869882 branch 972570193 Wed Apr 25 15:20:44 2018 Archived Log entry 34 added for thread 1 sequence 37 ID 0x71a565df dest 4: Wed Apr 25 15:23:59 2018 Archived Log entry 35 added for thread 1 sequence 38 ID 0x71a565df dest 4: Wed Apr 25 15:23:59 2018 RFS[7]: Selected log 4 for thread 1 sequence 39 dbid 1905869882 branch 972570193 Wed Apr 25 15:27:05 2018 Archived Log entry 36 added for thread 1 sequence 39 ID 0x71a565df dest 4: Wed Apr 25 15:27:05 2018 RFS[7]: Selected log 5 for thread 1 sequence 40 dbid 1905869882 branch 972570193 Wed Apr 25 15:39:34 2018 Archived Log entry 37 added for thread 1 sequence 40 ID 0x71a565df dest 4: Wed Apr 25 15:39:34 2018 RFS[7]: Selected log 4 for thread 1 sequence 41 dbid 1905869882 branch 972570193</pre><p>Here we can see that whenever a log switch occurs at the Primary, the <strong>RFS</strong> process adds an <strong>archive log</strong> entry on the standby database.</p> <p>Let&#8217;s check it in the <strong>v$managed_standby</strong> view.</p><pre class="crayon-plain-tag">SQL&gt; select process ,pid,sequence# from v$managed_standby; PROCESS PID SEQUENCE# --------- ---------- ---------- ARCH 4000 34 ARCH 4002 35 ARCH 4008 37 ARCH 4006 0 ARCH 4004 36 ARCH 4010 38 ARCH 4012 39 ARCH 4014 40 RFS 12609 0 RFS 12603 0 RFS 12605 41 RFS 12607 0 12 rows selected.</pre><p><strong>2.</strong> <strong>MRP Process: </strong>The MRP process applies archived 
redo log information, captured by the RFS process at the standby, to the physical standby database.</p> <p>How to start MRP?</p> <p>We can start <strong>MRP</strong> [<strong>recovery</strong>] using <strong>RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION</strong>.</p><pre class="crayon-plain-tag">SQL&gt; RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION; Media recovery complete.</pre><p>As we have used <strong>disconnect from session</strong>, recovery is started in a background session. If we omit it, recovery starts in a <strong>foreground</strong> session.</p> <p>Once we start recovery, the <strong>MRP</strong> process wakes up and applies all archive logs pending since the <strong>last applied archive log</strong>.</p> <p>We can also check the recovery process in the alert log.</p><pre class="crayon-plain-tag">RFS[7]: Selected log 4 for thread 1 sequence 44 dbid 1905869882 branch 972570193 Media Recovery Log /u01/arc/mgr/stdby/1_42_972570193.dbf Media Recovery Log /u01/arc/mgr/stdby/1_43_972570193.dbf Media Recovery Waiting for thread 1 sequence 44 (in transit) Wed Apr 25 15:56:56 2018 Archived Log entry 41 added for thread 1 sequence 44 ID 0x71a565df dest 4: Wed Apr 25 15:56:56 2018 RFS[7]: Selected log 5 for thread 1 sequence 45 dbid 1905869882 branch 972570193 Wed Apr 25 15:56:58 2018 Media Recovery Log /u01/arc/mgr/stdby/1_44_972570193.dbf</pre><p>Check the recovery process with a <strong>SQL</strong> query.</p><pre class="crayon-plain-tag">SQL&gt; select Process,pid,sequence# from v$managed_standby where process like 'MRP%' or process like 'RFS%'; PROCESS PID SEQUENCE# --------- ---------- ---------- RFS 12609 0 RFS 12603 0 RFS 12605 44 RFS 12607 0 MRP0 13649 44 SQL&gt; / PROCESS PID SEQUENCE# --------- ---------- ---------- RFS 12609 0 RFS 12603 0 RFS 12605 45 RFS 12607 0 MRP0 13649 44 SQL&gt; / PROCESS PID SEQUENCE# --------- ---------- ---------- RFS 12609 0 RFS 12603 0 RFS 12605 45 RFS 12607 0 MRP0 13649 45</pre><p>Now, stopping the recovery 
process.</p> <p>We can stop recovery process using <strong>recover managed standby database cancel</strong> command.</p><pre class="crayon-plain-tag">SQL&gt; RECOVER MANAGED STANDBY DATABASE CANCEL; Media recovery complete.</pre><p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/start-transport-and-application-of-redo-in-data-guard/">Start transport and Application of redo in Data Guard</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4282 Sun May 06 2018 15:51:41 GMT-0400 (EDT) Create Physical Standby Database using RMAN Backup Restore http://oracle-help.com/dataguard/create-physical-standby-database-using-rman-backup-restore/ <p>We have seen preparing <span style="color: #0000ff;"><a style="color: #0000ff;" 
href="http://oracle-help.com/dataguard/setting-the-parameter-on-primary-database-for-physical-standby-database/"><span style="text-decoration: underline;"><strong>Primary Database for Dataguard</strong></span></a></span> and creating <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/oracle-12c/oracle-12cr2/oracle-net-configuration-for-data-guard/"><strong>Oracle network service</strong></a></span> on both sides.</p> <p>In this article, we will see <strong>Physical Standby database</strong> creation and configuration using <strong>RMAN backup and restore</strong>.</p> <p><strong>Step 1:</strong> Connect to the <strong>Primary database</strong> and check if <strong>recovery</strong> <strong>area</strong> has enough space configured or not.</p><pre class="crayon-plain-tag">SQL&gt; show parameter db_recovery NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ db_recovery_file_dest string /u01/oracle/fast_recovery_area db_recovery_file_dest_size big integer 4182M SQL&gt; exit Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options</pre><p><strong>Step 2:</strong> Connect to <strong>RMAN</strong> and take backup :</p><pre class="crayon-plain-tag">[oracle@test1 oradata]$ rman target / Recovery Manager: Release 11.2.0.4.0 - Production on Wed Apr 18 17:54:27 2018 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. 
connected to target database: TESTDB (DBID=2756866105) RMAN&gt; backup database plus archivelog; Starting backup at 18-APR-18 current log archived using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=1 device type=DISK channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=2 RECID=1 STAMP=973538201 input archived log thread=1 sequence=3 RECID=2 STAMP=973791335 input archived log thread=1 sequence=4 RECID=3 STAMP=973791681 input archived log thread=1 sequence=5 RECID=4 STAMP=973791684 input archived log thread=1 sequence=6 RECID=5 STAMP=973792480 channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_annnn_TAG20180418T175441_ffgg8b13_.bkp tag=TAG20180418T175441 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08 Finished backup at 18-APR-18 Starting backup at 18-APR-18 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00001 name=/u01/oracle/oradata/testdb/system01.dbf input datafile file number=00002 name=/u01/oracle/oradata/testdb/sysaux01.dbf input datafile file number=00003 name=/u01/oracle/oradata/testdb/undotbs01.dbf input datafile file number=00004 name=/u01/oracle/oradata/testdb/users01.dbf channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp tag=TAG20180418T175449 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:01:05 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set including current control file in 
backup set including current SPFILE in backup set channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_ncsnf_TAG20180418T175449_ffggbswc_.bkp tag=TAG20180418T175449 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 Finished backup at 18-APR-18 Starting backup at 18-APR-18 current log archived using channel ORA_DISK_1 channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=7 RECID=6 STAMP=973792563 channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_annnn_TAG20180418T175603_ffggbvmn_.bkp tag=TAG20180418T175603 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 Finished backup at 18-APR-18 RMAN&gt;</pre><p><strong>Step 3:</strong> Create <strong>standby control file</strong> from the primary database and create <strong>pfile</strong> from <strong>spfile</strong>.</p><pre class="crayon-plain-tag">SQL&gt; ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/u01/std_testdb.ctl'; Database altered. 
SQL&gt; CREATE PFILE FROM SPFILE; File created.</pre><p><strong>Step 4:</strong> Change the following <strong>parameters</strong> in the pfile.</p><pre class="crayon-plain-tag">CHANGE FOLLOWING PARAMETER IN PFILE : *.db_unique_name='std_testdb' *.fal_server='testdb' *.log_archive_dest_2='SERVICE=testdb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testdb'</pre><p><strong>Step 5:</strong> Connect to the Standby database server and create the <strong>necessary</strong> directories.</p><pre class="crayon-plain-tag">[oracle@localhost ~]$ mkdir -p /u01/oracle/oradata/testdb/ [oracle@localhost ~]$ mkdir -p /u01/oracle/fast_recovery_area/TESTDB/ [oracle@localhost ~]$ mkdir -p /u01/oracle/admin/testdb/adump [oracle@localhost ~]$ mkdir -p /u01/arc/testdb/stdby</pre><p><strong>Step 6</strong>: Transfer the standby control file to the standby database and rename it as defined in the <strong>control_files</strong> initialization parameter.</p><pre class="crayon-plain-tag">[oracle@localhost ~]$ scp 192.168.1.16:/u01/std_testdb.ctl /u01/oracle/oradata/testdb/ oracle@192.168.1.16's password: std_testdb.ctl 100% 9520KB 9.3MB/s 00:01 [oracle@localhost ~]$ cd /u01/oracle/oradata/testdb/ [oracle@localhost testdb]$ ll total 9520 -rw-r----- 1 oracle oinstall 9748480 Apr 18 18:11 std_testdb.ctl [oracle@localhost testdb]$ cp std_testdb.ctl control01.ctl [oracle@localhost testdb]$ cp std_testdb.ctl control02.ctl</pre><p><strong>Step 7:</strong> Transfer the <strong>backup</strong> to the Standby database server:</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ rsync -azvrh --progress 192.168.1.16:/u01/oracle/fast_recovery_area/TESTDB/ /u01/oracle/fast_recovery_area/TESTDB/ oracle@192.168.1.16's password: receiving incremental file list ./ backupset/ backupset/2018_04_18/ backupset/2018_04_18/o1_mf_annnn_TAG20180418T175441_ffgg8b13_.bkp 90.85M 100% 9.48MB/s 0:00:09 (xfer#1, to-check=3/8) backupset/2018_04_18/o1_mf_annnn_TAG20180418T175603_ffggbvmn_.bkp 494.59K 100% 2.09MB/s 0:00:00 (xfer#2, 
to-check=2/8) backupset/2018_04_18/o1_mf_ncsnf_TAG20180418T175449_ffggbswc_.bkp 9.83M 100% 18.86MB/s 0:00:00 (xfer#3, to-check=1/8) backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp 1.06G 100% 8.36MB/s 0:02:01 (xfer#4, to-check=0/8) onlinelog/ sent 102 bytes received 275.61M bytes 1.78M bytes/sec total size is 1.17G speedup is 4.23 [oracle@localhost testdb]$</pre><p><strong>Step 8:</strong> Transfer <strong>pfile</strong> to standby database :</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ scp 192.168.1.16:$ORACLE_HOME/dbs/inittestdb.ora $ORACLE_HOME/dbs/ oracle@192.168.1.16's password: inittestdb.ora 100% 1291 1.3KB/s 00:00</pre><p><strong>Step 9:</strong> Transfer <strong>password</strong> file to standby database.</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ scp 192.168.1.16:$ORACLE_HOME/dbs/orapwtestdb $ORACLE_HOME/dbs/ oracle@192.168.1.16's password: orapwtestdb 100% 1536 1.5KB/s 00:00</pre><p><strong>Step 10:</strong> Connect to Standby database and <strong>create spfile from pfile.</strong></p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.4.0 Production on Wed Apr 18 18:26:04 2018 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to an idle instance. SQL&gt; create spfile from pfile; File created.</pre><p><strong>Step 11:</strong> In standby database connect to <strong>RMAN</strong> and start the database in <strong>mount</strong> stage.</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ rman target / Recovery Manager: Release 11.2.0.4.0 - Production on Wed Apr 18 18:29:16 2018 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. 
connected to target database (not started) RMAN&gt; startup mount Oracle instance started database mounted Total System Global Area 663908352 bytes Fixed Size 2256192 bytes Variable Size 465568448 bytes Database Buffers 192937984 bytes Redo Buffers 3145728 bytes RMAN&gt;</pre><p><strong>Step 12</strong>: <strong>Restore</strong> database using restore database command.</p><pre class="crayon-plain-tag">RMAN&gt; restore database; Starting restore at 18-APR-18 Starting implicit crosscheck backup at 18-APR-18 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=51 device type=DISK Crosschecked 4 objects Finished implicit crosscheck backup at 18-APR-18 Starting implicit crosscheck copy at 18-APR-18 using channel ORA_DISK_1 Finished implicit crosscheck copy at 18-APR-18 searching for all files in the recovery area cataloging files... no files cataloged using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00001 to /u01/oracle/oradata/testdb/system01.dbf channel ORA_DISK_1: restoring datafile 00002 to /u01/oracle/oradata/testdb/sysaux01.dbf channel ORA_DISK_1: restoring datafile 00003 to /u01/oracle/oradata/testdb/undotbs01.dbf channel ORA_DISK_1: restoring datafile 00004 to /u01/oracle/oradata/testdb/users01.dbf channel ORA_DISK_1: reading from backup piece /u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp channel ORA_DISK_1: piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp tag=TAG20180418T175449 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:01:15 Finished restore at 18-APR-18 RMAN&gt;</pre><p><strong>Step 13:</strong> Connect to <strong>SQL</strong> prompt of standby database and create <strong>redo 
log files</strong>.</p><pre class="crayon-plain-tag">SQL&gt; alter system set standby_file_management=manual; System altered. SQL&gt; alter database add logfile ('/u01/oracle/oradata/testdb/redo01.log') size 512m; Database altered. SQL&gt; alter database add logfile ('/u01/oracle/oradata/testdb/redo02.log') size 512m; Database altered. SQL&gt; alter database add logfile ('/u01/oracle/oradata/testdb/redo03.log') size 512m; Database altered. SQL&gt; alter system set standby_file_management=AUTO; System altered. SQL&gt;</pre><p><strong>Check Standby database synchronization with the Primary database</strong></p> <p><strong>Step 14:</strong> Connect to the Primary database and check the role of the primary database.</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- TESTDB READ WRITE PRIMARY</pre><p><strong>Step 15:</strong> Connect to Standby database and check the role of the database.</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- TESTDB MOUNTED PHYSICAL STANDBY</pre><p><strong>Step 16:</strong> Check maximum archive log sequence from the primary.</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 96</pre><p><strong>Step 17:</strong> Check maximum archive log sequence from standby database.</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 96</pre><p><strong>Step 18:</strong> Switch logfile at primary database :</p><pre class="crayon-plain-tag">SQL&gt; alter system switch logfile; System altered.</pre><p><strong>Step 19:</strong> Again check max archive log sequence at the standby database.</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 
97</pre><p>The physical standby database has been created successfully using RMAN backup and restore.</p> <p>Stay tuned for <strong>more articles on Oracle Data Guard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to stay updated with all our articles, s</span>end us an invitation or follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/create-physical-standby-database-using-rman-backup-restore/">Create Physical Standby Database using RMAN Backup Restore</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4122 Sun May 06 2018 15:46:32 GMT-0400 (EDT) Create Standby Database using duplicate database for standby RMAN command http://oracle-help.com/dataguard/create-standby-database-using-duplicate-database-for-standby-rman-command/ <p>We have already seen preparing the <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/dataguard/setting-the-parameter-on-primary-database-for-physical-standby-database/"><span style="text-decoration: underline;"><strong>Primary 
Database for Dataguard</strong></span></a></span> and creating <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/oracle-12c/oracle-12cr2/oracle-net-configuration-for-data-guard/"><strong>Oracle network service</strong></a></span> on both sides.</p> <p>In this article, we will see Physical Standby creation using RMAN <strong>duplicate database for standby </strong>command.</p> <p><strong>Note:</strong> For this process, we need to copy password file from primary side to standby side under <strong>$ORACLE_HOME/dbs</strong></p> <p>Database Detail :</p> <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg"><img data-attachment-id="4398" data-permalink="http://oracle-help.com/dataguard/create-standby-database-using-duplicate-database-for-standby-rman-command/attachment/det/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?fit=669%2C108" data-orig-size="669,108" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1525690100&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="det" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?fit=300%2C48" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?fit=669%2C108" class="alignnone size-full wp-image-4398" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?resize=669%2C108" alt="" width="669" height="108" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?w=669 669w, 
https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?resize=300%2C48 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?resize=60%2C10 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/05/det.jpg?resize=150%2C24 150w" sizes="(max-width: 669px) 100vw, 669px" data-recalc-dims="1" /></a></p> <p><b>Connect to the Primary Database:</b></p> <p><strong>Step 1:</strong> Take a <strong>backup</strong>:</p> <p>Connect to <strong>RMAN</strong> and take a <strong>full</strong> database backup using the RMAN <strong>backup database plus archivelog</strong> command.</p><pre class="crayon-plain-tag">[oracle@test1 oradata]$ rman target / Recovery Manager: Release 11.2.0.4.0 - Production on Wed Apr 18 17:54:27 2018 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. connected to target database: TESTDB (DBID=2756866105) RMAN&gt; backup database plus archivelog; Starting backup at 18-APR-18 current log archived using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=1 device type=DISK channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=2 RECID=1 STAMP=973538201 input archived log thread=1 sequence=3 RECID=2 STAMP=973791335 input archived log thread=1 sequence=4 RECID=3 STAMP=973791681 input archived log thread=1 sequence=5 RECID=4 STAMP=973791684 input archived log thread=1 sequence=6 RECID=5 STAMP=973792480 channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_annnn_TAG20180418T175441_ffgg8b13_.bkp tag=TAG20180418T175441 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08 Finished backup at 18-APR-18 Starting backup at 18-APR-18 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: 
specifying datafile(s) in backup set input datafile file number=00001 name=/u01/oracle/oradata/testdb/system01.dbf input datafile file number=00002 name=/u01/oracle/oradata/testdb/sysaux01.dbf input datafile file number=00003 name=/u01/oracle/oradata/testdb/undotbs01.dbf input datafile file number=00004 name=/u01/oracle/oradata/testdb/users01.dbf channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp tag=TAG20180418T175449 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:01:05 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set including current control file in backup set including current SPFILE in backup set channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_ncsnf_TAG20180418T175449_ffggbswc_.bkp tag=TAG20180418T175449 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 Finished backup at 18-APR-18 Starting backup at 18-APR-18 current log archived using channel ORA_DISK_1 channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=7 RECID=6 STAMP=973792563 channel ORA_DISK_1: starting piece 1 at 18-APR-18 channel ORA_DISK_1: finished piece 1 at 18-APR-18 piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_annnn_TAG20180418T175603_ffggbvmn_.bkp tag=TAG20180418T175603 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 Finished backup at 18-APR-18 RMAN&gt;</pre><p><strong>Step 2</strong>: Transfer backup :</p> <p>Connect to <strong>standby database</strong> and <strong>transfer backup</strong> from Primary to Standby server.</p><pre 
class="crayon-plain-tag">[oracle@localhost testdb]$ rsync -azvrh --progress 192.168.1.16:/u01/oracle/fast_recovery_area/TESTDB/ /u01/oracle/fast_recovery_area/TESTDB/ oracle@192.168.1.16's password: receiving incremental file list ./ backupset/ backupset/2018_04_18/ backupset/2018_04_18/o1_mf_annnn_TAG20180418T175441_ffgg8b13_.bkp 90.85M 100% 9.48MB/s 0:00:09 (xfer#1, to-check=3/8) backupset/2018_04_18/o1_mf_annnn_TAG20180418T175603_ffggbvmn_.bkp 494.59K 100% 2.09MB/s 0:00:00 (xfer#2, to-check=2/8) backupset/2018_04_18/o1_mf_ncsnf_TAG20180418T175449_ffggbswc_.bkp 9.83M 100% 18.86MB/s 0:00:00 (xfer#3, to-check=1/8) backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp 1.06G 100% 8.36MB/s 0:02:01 (xfer#4, to-check=0/8) onlinelog/ sent 102 bytes received 275.61M bytes 1.78M bytes/sec total size is 1.17G speedup is 4.23</pre><p><strong>Step 3 :</strong> Transfer <strong>pfile</strong> :</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ scp 192.168.1.16:$ORACLE_HOME/dbs/inittestdb.ora $ORACLE_HOME/dbs/ oracle@192.168.1.16's password: inittestdb.ora 100% 1291 1.3KB/s 00:00</pre><p><strong>Step 4 :</strong> Transfer <strong>password</strong> file.</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ scp 192.168.1.16:$ORACLE_HOME/dbs/orapwtestdb $ORACLE_HOME/dbs/ oracle@192.168.1.16's password: orapwtestdb 100% 1536 1.5KB/s 00:00</pre><p><strong>Step 5:</strong> Start database in <strong>nomount</strong> stage.</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.4.0 Production on Thu Apr 19 12:43:32 2018 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to an idle instance. SQL&gt; startup nomount ORACLE instance started. 
Total System Global Area 663908352 bytes Fixed Size 2256192 bytes Variable Size 461374144 bytes Database Buffers 197132288 bytes Redo Buffers 3145728 bytes</pre><p><strong>Step 6:</strong> Connect to <strong>RMAN</strong> with a target and <strong>auxiliary</strong> database :</p><pre class="crayon-plain-tag">[oracle@localhost testdb]$ rman target sys/oracle@testdb auxiliary / Recovery Manager: Release 11.2.0.4.0 - Production on Thu Apr 19 12:43:43 2018 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. connected to target database: TESTDB (DBID=2756866105) connected to auxiliary database: TESTDB (not mounted) RMAN&gt; duplicate target database for standby nofilenamecheck; Starting Duplicate Db at 19-APR-18 using target database control file instead of recovery catalog allocated channel: ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: SID=1 device type=DISK contents of Memory Script: { restore clone standby controlfile; } executing Memory Script Starting restore at 19-APR-18 using channel ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: restoring control file channel ORA_AUX_DISK_1: copied control file copy input file name=/u01/std_testdb.ctl output file name=/u01/oracle/oradata/testdb/control01.ctl output file name=/u01/oracle/oradata/testdb/control02.ctl Finished restore at 19-APR-18 contents of Memory Script: { sql clone 'alter database mount standby database'; } executing Memory Script sql statement: alter database mount standby database contents of Memory Script: { set newname for tempfile 1 to "/u01/oracle/oradata/testdb/temp01.dbf"; switch clone tempfile all; set newname for datafile 1 to "/u01/oracle/oradata/testdb/system01.dbf"; set newname for datafile 2 to "/u01/oracle/oradata/testdb/sysaux01.dbf"; set newname for datafile 3 to "/u01/oracle/oradata/testdb/undotbs01.dbf"; set newname for datafile 4 to "/u01/oracle/oradata/testdb/users01.dbf"; restore clone database ; } executing Memory Script executing command: SET NEWNAME renamed tempfile 1 to 
/u01/oracle/oradata/testdb/temp01.dbf in control file executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting restore at 19-APR-18 using channel ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: starting datafile backup set restore channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/oracle/oradata/testdb/system01.dbf channel ORA_AUX_DISK_1: restoring datafile 00002 to /u01/oracle/oradata/testdb/sysaux01.dbf channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/oracle/oradata/testdb/undotbs01.dbf channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/oracle/oradata/testdb/users01.dbf channel ORA_AUX_DISK_1: reading from backup piece /u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp channel ORA_AUX_DISK_1: piece handle=/u01/oracle/fast_recovery_area/TESTDB/backupset/2018_04_18/o1_mf_nnndf_TAG20180418T175449_ffgg8kob_.bkp tag=TAG20180418T175449 channel ORA_AUX_DISK_1: restored backup piece 1 channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:55 Finished restore at 19-APR-18 contents of Memory Script: { switch clone datafile all; } executing Memory Script datafile 1 switched to datafile copy input datafile copy RECID=1 STAMP=973860316 file name=/u01/oracle/oradata/testdb/system01.dbf datafile 2 switched to datafile copy input datafile copy RECID=2 STAMP=973860317 file name=/u01/oracle/oradata/testdb/sysaux01.dbf datafile 3 switched to datafile copy input datafile copy RECID=3 STAMP=973860317 file name=/u01/oracle/oradata/testdb/undotbs01.dbf datafile 4 switched to datafile copy input datafile copy RECID=4 STAMP=973860317 file name=/u01/oracle/oradata/testdb/users01.dbf Finished Duplicate Db at 19-APR-18 RMAN&gt;</pre><p><strong>Check Physical Standby Database Synchronization :</strong></p> <p><strong>Step 7:</strong> Check role of <strong>primary</strong> 
database :</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- TESTDB READ WRITE PRIMARY SQL&gt;</pre><p><strong>Step 8:</strong> Check role of <strong>Standby</strong> database :</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode ,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- TESTDB MOUNTED PHYSICAL STANDBY</pre><p><strong>Step 9:</strong> Check max archive log sequence from the Primary database :</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 93</pre><p><strong>Step 10:</strong> Check max archive log sequence from Standby database :</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 93</pre><p><strong>Step 11:</strong> Switch logfile in Primary database :</p><pre class="crayon-plain-tag">SQL&gt; alter system switch logfile; System altered.</pre><p><strong>Step 12:</strong> Check sequence from Standby database :</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 94</pre><p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a 
href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/create-standby-database-using-duplicate-database-for-standby-rman-command/">Create Standby Database using duplicate database for standby RMAN command</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4195 Sun May 06 2018 15:40:26 GMT-0400 (EDT) Create Standby Database DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE RMAN command. http://oracle-help.com/dataguard/create-standby-database-duplicate-target-database-for-standby-from-active-database-rman-command/ <p>We have already seen preparing the <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/dataguard/setting-the-parameter-on-primary-database-for-physical-standby-database/"><span style="text-decoration: underline;"><strong>Primary Database for Data Guard</strong></span></a></span> and creating the <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/oracle-12c/oracle-12cr2/oracle-net-configuration-for-data-guard/"><strong>Oracle network service</strong></a></span> on both sides.</p> <p>In this article, we will use the RMAN <strong>duplicate target database for standby from active database</strong> command to prepare the auxiliary database.</p> <p><strong>Note:</strong> For this process, we need to copy the password file from the primary side to the standby side under <strong>$ORACLE_HOME/dbs</strong>.</p> <p>Database Details:</p> <p><a 
href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg"><img data-attachment-id="4400" data-permalink="http://oracle-help.com/dataguard/create-standby-database-duplicate-target-database-for-standby-from-active-database-rman-command/attachment/det1/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?fit=692%2C105" data-orig-size="692,105" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1525690169&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="det1" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?fit=300%2C46" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?fit=692%2C105" class="alignnone size-full wp-image-4400" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?resize=692%2C105" alt="" width="692" height="105" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?w=692 692w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?resize=300%2C46 300w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?resize=60%2C9 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/05/det1.jpg?resize=150%2C23 150w" sizes="(max-width: 692px) 100vw, 692px" data-recalc-dims="1" /></a></p> <p><strong>Step 1 :</strong> Connect to <strong>RMAN</strong> using netservice we have created at primary side :</p><pre class="crayon-plain-tag">[oracle@localhost ~]$ rman target sys/oracle@mgr auxiliary sys/oracle@std_mgr Recovery Manager: Release 11.2.0.4.0 - Production on Tue Apr 17 
17:43:32 2018 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. connected to target database: MGR (DBID=1905869882) connected to auxiliary database: MGR (not mounted) RMAN&gt;</pre><p><strong>Step 2 :</strong> Prepare a script to perform <strong>RMAN</strong> duplicate command :</p><pre class="crayon-plain-tag">run { allocate channel c1 type disk; allocate channel c2 type disk; allocate channel c3 type disk; allocate channel c4 type disk; allocate auxiliary channel aux type disk; duplicate target database for standby from active database nofilenamecheck spfile set log_archive_max_processes='8' set db_unique_name='std_mgr' set standby_file_management='AUTO' set log_archive_config='dg_config=(mgr,std_mgr)' set log_archive_dest_1='LOCATION=/u01/arc/mgr VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)' set log_archive_dest_4='LOCATION=/u01/arc/mgr/stdby/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES) DB_UNIQUE_NAME=std_mgr' set DG_BROKER_START='TRUE' set fal_client='std_mgr' set fal_server='mgr' set log_Archive_dest_2='service=mgr lgwr sync affirm valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=mgr'; }</pre><p>We can see in above script, we have set all parameters which will be needed for <strong>Physical Standby database.</strong></p> <p><strong>Step 3 :</strong> run above <strong>script</strong> to create standby database .</p><pre class="crayon-plain-tag">RMAN&gt; run 2&gt; { allocate channel c1 type disk; allocate channel c2 type disk; allocate channel c3 type disk; allocate channel c4 type disk; allocate auxiliary channel aux type disk; duplicate target database for standby from active database nofilenamecheck spfile set log_archive_max_processes='8' set db_unique_name='std_mgr' set standby_file_management='AUTO' set log_archive_config='dg_config=(mgr,std_mgr)' set log_archive_dest_1='LOCATION=/u01/arc/mgr VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)' set log_archive_dest_4='LOCATION=/u01/arc/mgr/stdby/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES) 
DB_UNIQUE_NAME=std_mgr' set DG_BROKER_START='TRUE' set fal_client='std_mgr' set fal_server='mgr' set log_Archive_dest_2='service=mgr lgwr sync affirm valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=mgr'; }3&gt; 4&gt; 5&gt; 6&gt; 7&gt; 8&gt; 9&gt; 10&gt; 11&gt; 12&gt; 13&gt; 14&gt; 15&gt; 16&gt; 17&gt; 18&gt; 19&gt; using target database control file instead of recovery catalog allocated channel: c1 channel c1: SID=103 device type=DISK allocated channel: c2 channel c2: SID=89 device type=DISK allocated channel: c3 channel c3: SID=95 device type=DISK allocated channel: c4 channel c4: SID=97 device type=DISK allocated channel: aux channel aux: SID=20 device type=DISK Starting Duplicate Db at 17-APR-18 contents of Memory Script: { backup as copy reuse targetfile '/u01/oracle/product/11.2.0/db_1/dbs/orapwmgr' auxiliary format '/u01/oracle/product/11.2.0/db_1/dbs/orapwmgr' targetfile '/u01/oracle/product/11.2.0/db_1/dbs/spfilemgr.ora' auxiliary format '/u01/oracle/product/11.2.0/db_1/dbs/spfilemgr.ora' ; sql clone "alter system set spfile= ''/u01/oracle/product/11.2.0/db_1/dbs/spfilemgr.ora''"; } executing Memory Script Starting backup at 17-APR-18 Finished backup at 17-APR-18 sql statement: alter system set spfile= ''/u01/oracle/product/11.2.0/db_1/dbs/spfilemgr.ora'' contents of Memory Script: { sql clone "alter system set log_archive_max_processes = 8 comment= '''' scope=spfile"; sql clone "alter system set db_unique_name = ''std_mgr'' comment= '''' scope=spfile"; sql clone "alter system set standby_file_management = ''AUTO'' comment= '''' scope=spfile"; sql clone "alter system set log_archive_config = ''dg_config=(mgr,std_mgr)'' comment= '''' scope=spfile"; sql clone "alter system set log_archive_dest_1 = ''LOCATION=/u01/arc/mgr VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)'' comment= '''' scope=spfile"; sql clone "alter system set log_archive_dest_4 = ''LOCATION=/u01/arc/mgr/stdby/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES) DB_UNIQUE_NAME=std_mgr'' comment= '''' 
scope=spfile"; sql clone "alter system set DG_BROKER_START = TRUE comment= '''' scope=spfile"; sql clone "alter system set fal_client = ''std_mgr'' comment= '''' scope=spfile"; sql clone "alter system set fal_server = ''mgr'' comment= '''' scope=spfile"; sql clone "alter system set log_Archive_dest_2 = ''service=mgr lgwr sync affirm valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=mgr'' comment= '''' scope=spfile"; shutdown clone immediate; startup clone nomount; } executing Memory Script sql statement: alter system set log_archive_max_processes = 8 comment= '''' scope=spfile sql statement: alter system set db_unique_name = ''std_mgr'' comment= '''' scope=spfile sql statement: alter system set standby_file_management = ''AUTO'' comment= '''' scope=spfile sql statement: alter system set log_archive_config = ''dg_config=(mgr,std_mgr)'' comment= '''' scope=spfile sql statement: alter system set log_archive_dest_1 = ''LOCATION=/u01/arc/mgr VALID_FOR=(ONLINE_LOGFILES, ALL_ROLES)'' comment= '''' scope=spfile sql statement: alter system set log_archive_dest_4 = ''LOCATION=/u01/arc/mgr/stdby/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLES) DB_UNIQUE_NAME=std_mgr'' comment= '''' scope=spfile sql statement: alter system set DG_BROKER_START = TRUE comment= '''' scope=spfile sql statement: alter system set fal_client = ''std_mgr'' comment= '''' scope=spfile sql statement: alter system set fal_server = ''mgr'' comment= '''' scope=spfile sql statement: alter system set log_Archive_dest_2 = ''service=mgr lgwr sync affirm valid_for= (ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=mgr'' comment= '''' scope=spfile Oracle instance shut down connected to auxiliary database (not started) Oracle instance started Total System Global Area 392495104 bytes Fixed Size 2253584 bytes Variable Size 176164080 bytes Database Buffers 209715200 bytes Redo Buffers 4362240 bytes allocated channel: aux channel aux: SID=19 device type=DISK contents of Memory Script: { backup as copy current controlfile 
for standby auxiliary format '/u01/oracle/oradata/mgr/control01.ctl'; restore clone controlfile to '/u01/oracle/oradata/mgr/control02.ctl' from '/u01/oracle/oradata/mgr/control01.ctl'; } executing Memory Script Starting backup at 17-APR-18 channel c1: starting datafile copy copying standby control file output file name=/u01/oracle/product/11.2.0/db_1/dbs/snapcf_mgr.f tag=TAG20180417T174406 RECID=2 STAMP=973705448 channel c1: datafile copy complete, elapsed time: 00:00:04 Finished backup at 17-APR-18 Starting restore at 17-APR-18 channel aux: copied control file copy Finished restore at 17-APR-18 contents of Memory Script: { sql clone 'alter database mount standby database'; } executing Memory Script sql statement: alter database mount standby database RMAN-05538: WARNING: implicitly using DB_FILE_NAME_CONVERT contents of Memory Script: { set newname for tempfile 1 to "/u01/oracle/oradata/mgr/temp01.dbf"; switch clone tempfile all; set newname for datafile 1 to "/u01/oracle/oradata/mgr/system01.dbf"; set newname for datafile 2 to "/u01/oracle/oradata/mgr/sysaux01.dbf"; set newname for datafile 3 to "/u01/oracle/oradata/mgr/undotbs01.dbf"; set newname for datafile 4 to "/u01/oracle/oradata/mgr/users01.dbf"; backup as copy reuse datafile 1 auxiliary format "/u01/oracle/oradata/mgr/system01.dbf" datafile 2 auxiliary format "/u01/oracle/oradata/mgr/sysaux01.dbf" datafile 3 auxiliary format "/u01/oracle/oradata/mgr/undotbs01.dbf" datafile 4 auxiliary format "/u01/oracle/oradata/mgr/users01.dbf" ; sql 'alter system archive log current'; } executing Memory Script executing command: SET NEWNAME renamed tempfile 1 to /u01/oracle/oradata/mgr/temp01.dbf in control file executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting backup at 17-APR-18 channel c1: starting datafile copy input datafile file number=00001 name=/u01/oracle/oradata/mgr/system01.dbf channel c2: starting datafile copy input datafile 
file number=00002 name=/u01/oracle/oradata/mgr/sysaux01.dbf channel c3: starting datafile copy input datafile file number=00003 name=/u01/oracle/oradata/mgr/undotbs01.dbf channel c4: starting datafile copy input datafile file number=00004 name=/u01/oracle/oradata/mgr/users01.dbf output file name=/u01/oracle/oradata/mgr/users01.dbf tag=TAG20180417T174420 channel c4: datafile copy complete, elapsed time: 00:00:03 output file name=/u01/oracle/oradata/mgr/undotbs01.dbf tag=TAG20180417T174420 channel c3: datafile copy complete, elapsed time: 00:00:07 output file name=/u01/oracle/oradata/mgr/sysaux01.dbf tag=TAG20180417T174420 channel c2: datafile copy complete, elapsed time: 00:01:15 output file name=/u01/oracle/oradata/mgr/system01.dbf tag=TAG20180417T174420 channel c1: datafile copy complete, elapsed time: 00:01:35 Finished backup at 17-APR-18 sql statement: alter system archive log current contents of Memory Script: { switch clone datafile all; } executing Memory Script datafile 1 switched to datafile copy input datafile copy RECID=2 STAMP=973705557 file name=/u01/oracle/oradata/mgr/system01.dbf datafile 2 switched to datafile copy input datafile copy RECID=3 STAMP=973705557 file name=/u01/oracle/oradata/mgr/sysaux01.dbf datafile 3 switched to datafile copy input datafile copy RECID=4 STAMP=973705557 file name=/u01/oracle/oradata/mgr/undotbs01.dbf datafile 4 switched to datafile copy input datafile copy RECID=5 STAMP=973705557 file name=/u01/oracle/oradata/mgr/users01.dbf Finished Duplicate Db at 17-APR-18 released channel: c1 released channel: c2 released channel: c3 released channel: c4 released channel: aux RMAN&gt;</pre><p><strong>Now check physical standby database :</strong></p> <p><strong>Step 4 :</strong> Check database role in Primary :</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR READ WRITE PRIMARY SQL&gt;</pre><p><strong>Step 5 
:</strong> Check the database role on the standby:</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,database_role from v$database; NAME OPEN_MODE DATABASE_ROLE --------- -------------------- ---------------- MGR MOUNTED PHYSICAL STANDBY SQL&gt;</pre><p><strong>Step 6:</strong> Check the max archive log sequence on the primary:</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 35 SQL&gt;</pre><p><strong>Step 7:</strong> Check the max archive log sequence on the standby:</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 35 SQL&gt;</pre><p><strong>Step 8:</strong> Switch the logfile on the primary database:</p><pre class="crayon-plain-tag">SQL&gt; alter system switch logfile; System altered.</pre><p><strong>Step 9:</strong> Check the max archive log sequence on the standby database again:</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$thread; MAX(SEQUENCE#) -------------- 36 SQL&gt;</pre><p>We can see that the archive log has been transferred to the standby database.</p> <p>In the next article, we will <strong>create a standby database using RMAN restore</strong>.</p> <p>Stay tuned for <strong>more articles on Oracle Data Guard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to stay updated with all our articles, s</span>end us an invitation or follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: 
<em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/create-standby-database-duplicate-target-database-for-standby-from-active-database-rman-command/">Create Standby Database DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE RMAN command.</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4115 Sun May 06 2018 15:35:17 GMT-0400 (EDT) Oracle Net Configuration for Data Guard http://oracle-help.com/dataguard/oracle-net-configuration-for-data-guard/ <p>In the previous article, we saw <strong>Primary Database Preparation for Data Guard Configuration</strong>.</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="6jkPx5XhAw"><p><a href="http://oracle-help.com/dataguard/setting-the-parameter-on-primary-database-for-physical-standby-database/">Setting the parameter on primary database for physical standby database.</a></p></blockquote> </div> <p>In this article, we will see <strong>Oracle Net Configuration on the Primary and Standby.</strong></p> <p><strong>Step 1:</strong> Configure the tnsnames.ora file on the <strong>primary</strong>.</p><pre class="crayon-plain-tag">[oracle@test1 ~]$ cat 
$ORACLE_HOME/network/admin/tnsnames.ora # tnsnames.ora Network Configuration File: /u01/oracle/product/11.2.0/db_1/network/admin/tnsnames.ora # Generated by Oracle configuration tools. TESTDB = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =localhost)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = testdb)(UR=A) ) ) STD_TESTDB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = testdb)(UR=A) ) ) [oracle@test1 ~]$</pre><p>Here we can see two net services. <strong>TESTDB</strong> for <strong>Primary</strong> Database Connection and <strong>STD_TESTDB</strong> for <strong>standby</strong> database configuration.</p> <p><strong>Step 2 :</strong> We need to add <strong>static</strong> entry for <strong>standby</strong> database in listener.ora file at <strong>standby</strong> side.</p><pre class="crayon-plain-tag">[oracle@localhost admin]$ cat listener.ora # listener.ora Network Configuration File: /u01/oracle/product/11.2.0/db_1/network/admin/listener.ora # Generated by Oracle configuration tools. 
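# Illustrative note (added comment, not part of the generated file): the
# static SID_LIST_LISTENER entry below is what lets tools connect to the
# standby instance even while the database is down; statically registered
# services show status UNKNOWN in lsnrctl output because the listener does
# not verify the instance state for them.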
LISTENER = (DESCRIPTION_LIST = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521)) ) ) SID_LIST_LISTENER= (SID_LIST= (SID_DESC= (ORACLE_HOME=/u01/oracle/product/11.2.0/db_1) (SID_NAME=testdb) ) ) ADR_BASE_LISTENER = /u01/oracle</pre><p><strong>Step 3 :</strong> Add net service entry in <strong>tnsnames.ora</strong> at standby side for <strong>Primary</strong> and <strong>Standby</strong> Database.</p><pre class="crayon-plain-tag">[oracle@localhost admin]$ cat tnsnames.ora testdb= (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.16)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = testdb)(UR=A) ) ) STD_TESTDB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = testdb)(UR=A) ) ) [oracle@localhost admin]$</pre><p><strong>Step 4 :</strong> Start <strong>listener</strong> at standby side.</p><pre class="crayon-plain-tag">[oracle@localhost admin]$ lsnrctl start LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 21-APR-2018 23:14:29 Copyright (c) 1991, 2013, Oracle. All rights reserved. Starting /u01/oracle/product/11.2.0/db_1/bin/tnslsnr: please wait... TNSLSNR for Linux: Version 11.2.0.4.0 - Production System parameter file is /u01/oracle/product/11.2.0/db_1/network/admin/listener.ora Log messages written to /u01/oracle/diag/tnslsnr/localhost/listener/alert/log.xml Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))) Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521))) Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production Start Date 21-APR-2018 23:14:30 Uptime 0 days 0 hr. 0 min. 
0 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /u01/oracle/product/11.2.0/db_1/network/admin/listener.ora Listener Log File /u01/oracle/diag/tnslsnr/localhost/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521))) Services Summary... Service "testdb" has 1 instance(s). Instance "testdb", status UNKNOWN, has 1 handler(s) for this service... The command completed successfully [oracle@localhost admin]$</pre><p><strong>Step 5:</strong> Check <strong>tnsping</strong> for standby and primary from both side.</p> <p><strong>Primary Database :</strong></p><pre class="crayon-plain-tag">[oracle@test1 admin]$ tnsping testdb TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 21-APR-2018 23:18:38 Copyright (c) 1997, 2013, Oracle. All rights reserved. Used parameter files: Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =localhost)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = testdb)(UR=A))) OK (0 msec) [oracle@test1 admin]$ tnsping std_testdb TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 21-APR-2018 23:18:45 Copyright (c) 1997, 2013, Oracle. All rights reserved. Used parameter files: Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = testdb)(UR=A))) OK (10 msec) [oracle@test1 admin]$</pre><p><strong>Standby Database.</strong></p><pre class="crayon-plain-tag">[oracle@localhost admin]$ tnsping std_testdb TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 21-APR-2018 23:19:22 Copyright (c) 1997, 2013, Oracle. All rights reserved. 
Used parameter files: Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.10)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = testdb)(UR=A))) OK (0 msec) [oracle@localhost admin]$ tnsping testdb TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 21-APR-2018 23:20:20 Copyright (c) 1997, 2013, Oracle. All rights reserved. Used parameter files: Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.16)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = testdb)(UR=A))) OK (10 msec) [oracle@localhost admin]$</pre><p>In next articles we will see creating standby database using different methods.</p> <p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/oracle-net-configuration-for-data-guard/">Oracle Net 
Configuration for Data Guard</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4190 Sun May 06 2018 15:24:55 GMT-0400 (EDT) Setting the parameter on primary database for physical standby database. http://oracle-help.com/dataguard/setting-the-parameter-on-primary-database-for-physical-standby-database/ <p>In the previous article, we covered the basics of the <strong>Architecture of Oracle Dataguard.</strong></p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="crxt9lw8W6"><p><a href="http://oracle-help.com/dataguard/oracle-dataguard-architecture/">Oracle Dataguard Architecture</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/dataguard/oracle-dataguard-architecture/embed/#?secret=crxt9lw8W6" data-secret="crxt9lw8W6" width="600" height="338" title="&#8220;Oracle Dataguard Architecture&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>We can configure Data Guard in three main stages:</p> <ol> <li>Preparing the primary database.</li> <li>Setting up net services between the primary and standby databases.</li> <li>Creating the standby database.</li> </ol> <p>In this article, we will set the parameters on the primary database.
We will see parameter setting of Primary database necessary for standby database configuration.</p> <table style="height: 40px;" width="745"> <tbody> <tr> <td style="width: 179px;">Primary DB</td> <td style="width: 179px;">Standby DB</td> <td style="width: 179px;">Primary DB Server IP</td> <td style="width: 180px;">Standby DB Server IP</td> </tr> <tr> <td style="width: 179px;">testdb</td> <td style="width: 179px;">std_testdb</td> <td style="width: 179px;">192.168.1.16</td> <td style="width: 180px;">192.168.1.10</td> </tr> </tbody> </table> <p><strong>Step 1:</strong> Your <strong>Database</strong> must be in <strong>archive</strong> log mode.</p><pre class="crayon-plain-tag">[oracle@test1 ~]$ export ORACLE_SID=testdb [oracle@test1 ~]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.4.0 Production on Thu Apr 19 23:47:21 2018 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL&gt; select name,open_mode ,log_mode from v$database; NAME OPEN_MODE LOG_MODE --------- -------------------- ------------ TESTDB READ WRITE ARCHIVELOG SQL&gt;</pre><p><strong>Step 2:</strong> Check if a database is enabled for <strong>force logging</strong> or not. If not enabled, <strong>enable</strong> it.</p><pre class="crayon-plain-tag">SQL&gt; SELECT FORCE_LOGGING FROM V$DATABASE; FOR --- NO SQL&gt; ALTER DATABASE FORCE LOGGING; Database altered. 
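SQL&gt; -- Illustrative re-check (added for clarity): after ALTER DATABASE FORCE LOGGING, the flag reads YES SQL&gt; SELECT FORCE_LOGGING FROM V$DATABASE; FOR --- YES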
SQL&gt;</pre><p><strong>Step 3:</strong> Set the initialization parameters.</p><pre class="crayon-plain-tag">SQL&gt; SHOW PARAMETER DB_NAME NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ db_name string testdb SQL&gt; SHOW PARAMETER DB_UNIQUE_NAME NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ db_unique_name string testdb SQL&gt;</pre><p>For this configuration, I have kept db_name and db_unique_name the same; you can change them according to your requirements.</p> <p>Set the archive log format, and make sure the initialization parameter <strong>remote_login_passwordfile</strong> is set to <strong>EXCLUSIVE</strong>.</p><pre class="crayon-plain-tag">SQL&gt; ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='%t_%s_%r.arc' SCOPE=SPFILE; System altered. SQL&gt; SHOW PARAMETER REMOTE_LOGIN_PASSWORDFILE NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ remote_login_passwordfile string EXCLUSIVE</pre><p><strong>Step 4:</strong> Set the following parameters on <strong>Production</strong> to configure Data Guard:</p><pre class="crayon-plain-tag">SQL&gt; ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(TESTDB,STD_TESTDB)'; System altered. SQL&gt; alter system set log_archive_dest_1='LOCATION=/u01/arc/testdb VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES)'; System altered. SQL&gt; alter system set log_archive_dest_2='service=std_testdb lgwr sync AFFIRM valid_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) db_unique_name=std_testdb'; System altered. SQL&gt; alter system set log_archive_dest_4='LOCATION=/u01/arc/testdb/stdby/ valid_for=(STANDBY_LOGFILES, STANDBY_ROLES) db_unique_name=testdb'; System altered. SQL&gt; ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE; System altered. SQL&gt; ALTER SYSTEM SET FAL_SERVER=STD_TESTDB; System altered. SQL&gt; ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO; System altered.
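SQL&gt; -- Illustrative summary (added comments) of the destinations set above:
SQL&gt; --   log_archive_dest_1: local archiving of online redo in every role
SQL&gt; --   log_archive_dest_2: ships redo to service std_testdb while this database is primary
SQL&gt; --   log_archive_dest_4: local landing area for standby redo when this database runs as a standby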
SQL&gt; ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30; System altered.</pre><p>The primary database is now configured for <strong>DataGuard</strong> creation.</p> <p>Note that I have set <strong>log_archive_dest_2</strong> as valid for the primary role only; that will help us in a <strong>switchover</strong> scenario.</p> <p>In the next article, we will see <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/oracle-12c/oracle-12cr2/oracle-net-configuration-for-data-guard/"><strong>Oracle Net Configuration for Data Guard.</strong></a></span></p> <p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/setting-the-parameter-on-primary-database-for-physical-standby-database/">Setting the parameter on primary database for physical standby database.</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya
http://oracle-help.com/?p=4100 Sun May 06 2018 15:21:51 GMT-0400 (EDT) Automatic Gap Detection and Resolution http://oracle-help.com/dataguard/automatic-gap-detection-and-resolution/ <p><strong>Archive Gap:</strong> An archive gap occurs when a set of archive logs is not transmitted from the primary database to the standby database for some reason, most commonly a problem with <strong>network connectivity</strong>. When network connectivity resumes, Data Guard resumes redo transmission from the primary to the standby site.</p> <p><strong>Causes of Archive Gaps :</strong></p> <ol> <li>Network disconnection.</li> <li>Standby database outage.</li> <li>I/O issues at the standby database.</li> <li>Insufficient network bandwidth between the primary and standby sites.</li> </ol> <p>Oracle Data Guard has two <strong>mechanisms</strong> for <strong>Gap Detection and Resolution.</strong></p> <ol> <li>Automatic Gap Resolution</li> <li>FAL configuration</li> </ol> <p><strong>Automatic Gap Resolution :</strong></p> <p>We don&#8217;t need any extra configuration for this.
Let us understand this with an example.</p> <p>To simulate this situation, I have stopped the listener at the standby database.</p> <p><strong>Step 1 :</strong> Stop the listener at the standby database.</p> <p><strong>Step 2:</strong> Check the archive log sequence on both sides.</p> <p><strong>Primary :</strong></p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode from v$database; NAME OPEN_MODE --------- -------------------- TESTDB READ WRITE SQL&gt; archive log list Database log mode Archive Mode Automatic archival Enabled Archive destination /u01/arc/testdb/stdby/ Oldest online log sequence 46 Next log sequence to archive 48 Current log sequence 48</pre><p><strong>Standby :</strong></p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$archived_log; MAX(SEQUENCE#) -------------- 46</pre><p><strong>Step 3:</strong> Generate archive logs using the &#8220;<strong>ALTER SYSTEM SWITCH LOGFILE</strong>&#8221; command.</p><pre class="crayon-plain-tag">SQL&gt; alter system switch logfile; System altered. SQL&gt; alter system switch logfile; System altered. SQL&gt; alter system switch logfile; System altered.</pre><p><strong>Step 4:</strong> In the <strong>alert log</strong> we can see errors showing that the <strong>listener</strong> is <strong>not</strong> up on the <strong>standby</strong> side, so there is trouble shipping archive logs to the standby site.</p><pre class="crayon-plain-tag">*********************************************************************** Fatal NI connect error 12541, connecting to: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.10)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb)(UR=A)(CID=(PROGRAM=oracle)(HOST=test1.localdomain)(USER=oracle)))) VERSION INFORMATION: TNS for Linux: Version 11.2.0.4.0 - Production TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production Time: 23-APR-2018 16:33:05 Tracing not turned on.
Tns error struct: ns main err code: 12541 TNS-12541: TNS:no listener ns secondary err code: 12560 nt main err code: 511 TNS-00511: No listener nt secondary err code: 111 nt OS err code: 0 Error 12541 received logging on to the standby Check whether the listener is up and running. PING[ARC2]: Heartbeat failed to connect to standby 'std_testdb'. Error is 12541. Mon Apr 23 16:33:40 2018 Thread 1 cannot allocate new log, sequence 51 Checkpoint not complete Current log# 2 seq# 50 mem# 0: /u01/oracle/oradata/testdb/redo02.log Thread 1 advanced to log sequence 51 (LGWR switch) Current log# 3 seq# 51 mem# 0: /u01/oracle/oradata/testdb/redo03.log Mon Apr 23 16:33:43 2018 Archived Log entry 66 added for thread 1 sequence 50 ID 0xa4536939 dest 1: Mon Apr 23 16:34:05 2018 *********************************************************************** Fatal NI connect error 12541, connecting to: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.10)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb)(UR=A)(CID=(PROGRAM=oracle)(HOST=test1.localdomain)(USER=oracle)))) VERSION INFORMATION: TNS for Linux: Version 11.2.0.4.0 - Production TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production Time: 23-APR-2018 16:34:05 Tracing not turned on. Tns error struct: ns main err code: 12541 TNS-12541: TNS:no listener ns secondary err code: 12560 nt main err code: 511 TNS-00511: No listener nt secondary err code: 111 *********************************************************************** Fatal NI connect error 12541, connecting to: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.10)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb)(UR=A)(CID=(PROGRAM=oracle)(HOST=test1.localdomain)(USER=oracle)))) VERSION INFORMATION: TNS for Linux: Version 11.2.0.4.0 - Production TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production Time: 23-APR-2018 16:34:05 Tracing not turned on. 
Tns error struct: ns main err code: 12541 TNS-12541: TNS:no listener ns secondary err code: 12560 nt main err code: 511 TNS-00511: No listener nt secondary err code: 111 nt OS err code: 0 *********************************************************************** Fatal NI connect error 12541, connecting to: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.10)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb)(UR=A)(CID=(PROGRAM=oracle)(HOST=test1.localdomain)(USER=oracle)))) VERSION INFORMATION: TNS for Linux: Version 11.2.0.4.0 - Production TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production Time: 23-APR-2018 16:34:05 Tracing not turned on. Tns error struct: ns main err code: 12541 TNS-12541: TNS:no listener ns secondary err code: 12560 nt main err code: 511 TNS-00511: No listener nt secondary err code: 111 nt OS err code: 0 Error 12541 received logging on to the standby Check whether the listener is up and running. PING[ARC2]: Heartbeat failed to connect to standby 'std_testdb'. 
Error is 12541.</pre><p>We can check error: <strong>&#8220;Heartbeat failed to connect to standby &#8216;std_testdb&#8221;.</strong></p> <p><strong>Step 5:</strong> Start <strong>listener</strong> at Standby :</p> <p><strong>Step 6:</strong> Check <strong>alert log</strong> of standby database  :</p><pre class="crayon-plain-tag">RFS[1]: Assigned to RFS process 6622 RFS[1]: Opened log for thread 1 sequence 48 dbid -1538101191 branch 973537980 Mon Apr 23 16:35:07 2018 RFS[2]: Assigned to RFS process 6624 RFS[2]: Opened log for thread 1 sequence 49 dbid -1538101191 branch 973537980 Mon Apr 23 16:35:07 2018 RFS[3]: Assigned to RFS process 6626 RFS[3]: Selected log 4 for thread 1 sequence 47 dbid -1538101191 branch 973537980 Mon Apr 23 16:35:07 2018 RFS[4]: Assigned to RFS process 6620 RFS[4]: Opened log for thread 1 sequence 50 dbid -1538101191 branch 973537980 Archived Log entry 18 added for thread 1 sequence 48 rlc 973537980 ID 0xa4536939 dest 2: Archived Log entry 19 added for thread 1 sequence 50 rlc 973537980 ID 0xa4536939 dest 2: Archived Log entry 20 added for thread 1 sequence 49 rlc 973537980 ID 0xa4536939 dest 2: Mon Apr 23 16:35:10 2018 Archived Log entry 21 added for thread 1 sequence 47 ID 0xa4536939 dest 4: Mon Apr 23 16:35:10 2018 Primary database is in MAXIMUM PERFORMANCE mode RFS[5]: Assigned to RFS process 6628 RFS[5]: Selected log 4 for thread 1 sequence 52 dbid -1538101191 branch 973537980 Mon Apr 23 16:35:10 2018 RFS[6]: Assigned to RFS process 6632 RFS[6]: Selected log 5 for thread 1 sequence 51 dbid -1538101191 branch 973537980 Mon Apr 23 16:35:10 2018 Archived Log entry 22 added for thread 1 sequence 51 ID 0xa4536939 dest 4:</pre><p>We can see here Old archive logs are <strong>automatically</strong> transmitted to Standby Database.</p> <p><strong>FAL Configuration :</strong></p> <p><strong>Step 1:</strong> Configure <strong>FAL_SERVER</strong>:</p><pre class="crayon-plain-tag">SQL&gt; show parameter FAL NAME TYPE VALUE 
------------------------------------ ----------- ------------------------------ fal_client string std_testdb fal_server string TESTDB SQL&gt;</pre><p><strong>Note:</strong> <strong>FAL_SERVER</strong> takes the net service name defined in the <strong>tnsnames.ora</strong> entry.</p> <p><strong>Step 2:</strong> Generate archive logs on the primary using <strong>alter system switch logfile.</strong></p> <p><strong>Step 3:</strong> Check the maximum applied log sequence at the standby database.</p><pre class="crayon-plain-tag">SQL&gt; select max(sequence#) from v$archived_log where applied='YES'; MAX(SEQUENCE#) -------------- 73 SQL&gt; recover managed standby database cancel; Media recovery complete. SQL&gt;</pre><p><strong>Step 4:</strong> To simulate a gap, manually delete archive logs from the standby.</p><pre class="crayon-plain-tag">[oracle@localhost stdby]$ rm -rfv 1_76_973537980.arc removed `1_76_973537980.arc' [oracle@localhost stdby]$ rm -rfv 1_77_973537980.arc removed `1_77_973537980.arc' [oracle@localhost stdby]$</pre><p><strong>Step 5:</strong> Start recovery.</p><pre class="crayon-plain-tag">SQL&gt; recover managed standby database disconnect from session; Media recovery complete. SQL&gt;</pre><p><strong>Step 6:</strong> Check the alert log file.</p><pre class="crayon-plain-tag">MRP0: Background Managed Standby Recovery process started (testdb) Serial Media Recovery started Managed Standby Recovery not using Real Time Apply Waiting for all non-current ORLs to be archived... All non-current ORLs have been archived.
Media Recovery Log /u01/arc/testdb/stdby/1_74_973537980.arc Media Recovery Log /u01/arc/testdb/stdby/1_75_973537980.arc Media Recovery Log /u01/arc/testdb/stdby/1_76_973537980.arc Error opening /u01/arc/testdb/stdby/1_76_973537980.arc Attempting refetch Media Recovery Waiting for thread 1 sequence 76 Fetching gap sequence in thread 1, gap sequence 76-76 Completed: ALTER DATABASE RECOVER managed standby database disconnect from session Mon Apr 23 22:25:48 2018 RFS[5]: Assigned to RFS process 4299 RFS[5]: Allowing overwrite of partial archivelog for thread 1 sequence 76 RFS[5]: Opened log for thread 1 sequence 76 dbid -1538101191 branch 973537980 Archived Log entry 57 added for thread 1 sequence 76 rlc 973537980 ID 0xa4536939 dest 2: Mon Apr 23 22:25:57 2018 Media Recovery Log /u01/arc/testdb/stdby/1_76_973537980.arc Media Recovery Log /u01/arc/testdb/stdby/1_77_973537980.arc Error opening /u01/arc/testdb/stdby/1_77_973537980.arc Attempting refetch Media Recovery Waiting for thread 1 sequence 77 Fetching gap sequence in thread 1, gap sequence 77-77 Mon Apr 23 22:25:57 2018 RFS[6]: Assigned to RFS process 4301 RFS[6]: Allowing overwrite of partial archivelog for thread 1 sequence 77 RFS[6]: Opened log for thread 1 sequence 77 dbid -1538101191 branch 973537980 Archived Log entry 58 added for thread 1 sequence 77 rlc 973537980 ID 0xa4536939 dest 2: Mon Apr 23 22:26:07 2018 Media Recovery Log /u01/arc/testdb/stdby/1_77_973537980.arc Media Recovery Log /u01/arc/testdb/stdby/1_78_973537980.arc Media Recovery Log /u01/arc/testdb/stdby/1_79_973537980.arc Media Recovery Log /u01/arc/testdb/stdby/1_80_973537980.arc</pre><p>We can see here Error of deleted file: <strong>Error opening /u01/arc/testdb/stdby/1_76_973537980.arc </strong>and then it is fetched by <strong>FAL</strong> server &#8211; <strong>Fetching gap sequence in thread 1, gap sequence 76-76. 
</strong>So it is automatically detected and resolved by <strong>FAL</strong>.</p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/automatic-gap-detection-and-resolution/">Automatic Gap Detection and Resolution</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4232 Sun May 06 2018 15:13:03 GMT-0400 (EDT) ORA-01144: File size (4194304 blocks) exceeds maximum of 4194303 blocks http://oracle-help.com/ora-errors/ora-01144-file-size-4194304-blocks-exceeds-maximum-of-4194303-blocks/ <p>You may get this error while creating a tablespace in your environment.</p><pre class="crayon-plain-tag">SQL&gt; create tablespace test extent management local datafile size 40 G uniform size 256 K; create tablespace test extent management local datafile size 40 G uniform size 256 K * ERROR at line 1: ORA-01144: File size (5242880 blocks) exceeds maximum of 4194303 blocks</pre><p>Oracle has some 
limits on the maximum size of a datafile based on the <strong>db_block_size</strong> parameter.</p> <table style="height: 172px; width: 543px;"> <tbody> <tr> <td style="width: 282px;">db_block_size</td> <td style="width: 251px;">Maximum allowed datafile size</td> </tr> <tr> <td style="width: 282px;">2K</td> <td style="width: 251px;">8GB</td> </tr> <tr> <td style="width: 282px;">4K</td> <td style="width: 251px;">16GB</td> </tr> <tr> <td style="width: 282px;">8K</td> <td style="width: 251px;">32GB</td> </tr> <tr> <td style="width: 282px;">16K</td> <td style="width: 251px;">64GB</td> </tr> <tr> <td style="width: 282px;">32K</td> <td style="width: 251px;">128GB</td> </tr> </tbody> </table> <p><strong>The formula to calculate the max size is db_block_size * 4194303.</strong></p> <p>If you need a tablespace larger than this limit, you can:</p> <ol> <li>Either create multiple datafiles.</li> <li>Use a bigfile tablespace.<br /> <pre class="crayon-plain-tag">SQL&gt; create tablespace test extent management local datafile size 30 G uniform size 256k; Tablespace created.
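SQL&gt; -- Illustrative note (added comments): with an 8K db_block_size the smallfile
SQL&gt; -- limit is 8192 * 4194303 bytes (about 32 GB), so 30 GB fits but 40 GB does not.
SQL&gt; -- For a single 40 GB file, a BIGFILE tablespace is an option (example name):
SQL&gt; -- create bigfile tablespace test_big datafile size 40G;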
SQL&gt;</pre> </li> </ol> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/ora-errors/ora-01144-file-size-4194304-blocks-exceeds-maximum-of-4194303-blocks/">ORA-01144: File size (4194304 blocks) exceeds maximum of 4194303 blocks</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4110 Sun May 06 2018 15:03:22 GMT-0400 (EDT) Logical Standby Database : SQL Apply Architecture http://oracle-help.com/dataguard/logical-standby-database-sql-apply-architecture/ <p><strong>Oracle DataGuard</strong> supports logical and physical standby databases.</p> <p><strong>Logical Standby</strong>: When the primary database generates redo entries, they are transferred to the standby database, where the redo data is converted into SQL statements that are then applied to the standby database.</p> <p>To read more about <strong><a
href="http://oracle-help.com/dataguard/physical-standby-database-redo-apply-architecture/">Physical Standby Database</a></strong></p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="fJinfZdqRD"><p><a href="http://oracle-help.com/dataguard/physical-standby-database-redo-apply-architecture/">Physical Standby Database: Redo Apply Architecture</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/dataguard/physical-standby-database-redo-apply-architecture/embed/#?secret=fJinfZdqRD" data-secret="fJinfZdqRD" width="600" height="338" title="&#8220;Physical Standby Database: Redo Apply Architecture&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>One thing that should be clear by now is that a physical standby works directly on the redo, while a logical standby has an extra layer of conversion: it converts redo into SQL statements, which are then applied to the logical standby database.
That is called <strong>log mining process</strong>.</p> <p>Now, what happens when Redo to SQL conversion takes place.</p> <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg"><img data-attachment-id="4136" data-permalink="http://oracle-help.com/dataguard/logical-standby-database-sql-apply-architecture/attachment/sql-apply-2/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?fit=621%2C394" data-orig-size="621,394" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="sql apply" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?fit=300%2C190" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?fit=621%2C394" class="aligncenter wp-image-4136 size-full" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?resize=621%2C394" alt="" width="621" height="394" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?w=621 621w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?resize=300%2C190 300w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?resize=60%2C38 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/sql-apply.jpg?resize=150%2C95 150w" sizes="(max-width: 621px) 100vw, 621px" data-recalc-dims="1" /></a></p> <p><a href="https://docs.oracle.com/cd/B19306_01/server.102/b14239/manage_ls.htm#CHDFCDGB">Image Source</a></p> <p>We can see Six processes for Logical Standby in Above 
image. They are categorized into two parts: the log mining processes and the SQL apply processes.</p> <p><strong>1. Log Mining Processes</strong></p> <ul> <li><strong>Reader Process</strong>: It reads redo records from the archived redo log files or the standby redo log files.</li> <li><strong>Preparer Process:</strong> This converts the block changes contained in redo records into logical change records (LCRs). Multiple <strong>PREPARER</strong> processes can be active for a given redo log file. The LCRs are kept in the system global area (SGA), known as the <span class="italic"><strong>LCR</strong> cache.</span></li> <li><strong>Builder Process:</strong> This process groups LCRs into transactions and performs other tasks, such as memory management in the LCR cache, checkpointing related to SQL Apply restart, and filtering out uninteresting changes.</li> </ul> <p><strong>2. Apply Processes</strong></p> <ul> <li><strong>Analyzer Process:</strong> This process identifies dependencies between different transactions.</li> <li><strong>Coordinator Process:</strong> This process assigns transactions to different appliers and coordinates among them to ensure that dependencies between transactions are honored.
It is also known as the <strong>LSP process</strong>; it is the main process in the logical standby database.</li> <li><strong>Applier Process:</strong> This process applies transactions to the logical standby database under the supervision of the coordinator process.</li> </ul> <p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, s</span>end us an invitation or follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/logical-standby-database-sql-apply-architecture/">Logical Standby Database : SQL Apply Architecture</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4130 Sun May 06 2018 15:01:22 GMT-0400 (EDT) Benefits of using Dataguard http://oracle-help.com/dataguard/benefits-of-using-dataguard/ <p>In the previous article, we have seen the basics of the architecture of <strong>Oracle Dataguard.</strong></p> <div class="entry-content-asset"> <blockquote 
class="wp-embedded-content" data-secret="SGKkxeDjAq"><p><a href="http://oracle-help.com/dataguard/oracle-dataguard-architecture/">Oracle Dataguard Architecture</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/dataguard/oracle-dataguard-architecture/embed/#?secret=SGKkxeDjAq" data-secret="SGKkxeDjAq" width="600" height="338" title="&#8220;Oracle Dataguard Architecture&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>The main benefit of any DR server is that it protects the Production Server&#8217;s critical data from natural and man-made disasters. We keep our Production Server in one place, while our DR server can be set up even on a different continent.</p> <p>Oracle Data Guard provides this core protection, and we gain many other benefits from it as well:</p> <ol> <li>Oracle Data Guard provides <strong>high availability of data</strong>. Most businesses want to be able to recover from any disaster without losing a single byte of data; Oracle Data Guard has the Maximum Protection mode for that.</li> <li>We can treat our <strong>Data Guard server as a reporting server</strong> and share the Production Server&#8217;s load. The standby database can be opened in read-only mode with real-time apply active, so we can give our reporting team access to the standby database without affecting OLTP transactions.</li> <li>We can <strong>offload RMAN backups to the Standby Database. </strong>We can run backups against the Standby Database, so that resource-intensive task is performed there.</li> <li>We can utilize the Standby Database <strong>to test any new or existing functionality. </strong>Sometimes our developers want a replica of the Production Server to test the functionality of a new release of an application. And yes, our Standby Database is always ready for that.
We don&#8217;t need to take a backup and restore it on another server, which saves time. If the testing requires write operations, we cannot simply open the standby database in read-write mode, but Data Guard has the <strong>snapshot standby</strong> feature for that. The snapshot standby feature allows you to open the database in read-write mode and later roll back the changes, resuming standby synchronization from the point where it was stopped. While a standby database is in snapshot standby mode, redo from production is still shipped to the standby location; however, it is not applied to the standby database. When the snapshot standby is converted back to a standby database after the testing, Oracle resumes recovery from the point where it was stopped, and all temporary changes are removed.</li> <li>We <strong>don&#8217;t need any additional software setup</strong> for Data Guard, as it is integrated with the Oracle Database.</li> <li>Oracle Data Guard <strong>provides automatic role transitions</strong> between the Primary and Standby roles.</li> <li>If a network outage occurs, Data Guard provides <strong>automatic archive log gap detection and resolution</strong>.</li> </ol> <p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, s</span>end us an invitation or follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a 
href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/benefits-of-using-dataguard/">Benefits of using Dataguard</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4099 Sun May 06 2018 14:55:27 GMT-0400 (EDT) Physical Standby Database: Redo Apply Architecture http://oracle-help.com/dataguard/physical-standby-database-redo-apply-architecture/ <p>In the previous article, we have seen the basics of the architecture of Oracle Dataguard.</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="CQdht9IOfl"><p><a href="http://oracle-help.com/dataguard/oracle-dataguard-architecture/">Oracle Dataguard Architecture</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/dataguard/oracle-dataguard-architecture/embed/#?secret=CQdht9IOfl" data-secret="CQdht9IOfl" width="600" height="338" title="&#8220;Oracle Dataguard Architecture&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>Oracle Data Guard works with physical standby and logical standby sites.</p> <ul> <li><strong>Physical Standby</strong>: When the primary database generates redo entries, the redo is transferred to the standby database and applied there.</li> <li><strong>Logical Standby:</strong> When the primary database generates redo entries, they are transferred to the standby database, where the redo data is converted into SQL statements, and those SQL 
statements are applied to standby database.</li> </ul> <p>In this article, we are going to see Redo Apply Architecture of <strong>Oracle Dataguard.</strong></p> <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png"><img data-attachment-id="4081" data-permalink="http://oracle-help.com/dataguard/physical-standby-database-redo-apply-architecture/attachment/oracle-dataguard-physical-standby-architecture/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?fit=704%2C513" data-orig-size="704,513" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="oracle dataguard physical standby architecture" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?fit=300%2C219" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?fit=704%2C513" class="wp-image-4081 aligncenter" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?resize=621%2C453" alt="" width="621" height="453" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?resize=300%2C219 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?resize=60%2C44 60w, 
https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?resize=150%2C109 150w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/oracle-dataguard-physical-standby-architecture.png?w=704 704w" sizes="(max-width: 621px) 100vw, 621px" data-recalc-dims="1" /></a></p> <p>By default, apply services wait for the <span class="italic">full</span> archived redo log file to arrive on the standby database before applying it to the standby database.</p> <p>But we have the option to enable real-time apply. For that, we need to configure standby redo log files, which allow Data Guard to recover redo data from the current standby redo log file as it is being filled.</p> <p>In the above diagram, we can see that the <strong>LNS</strong> process is responsible for transferring redo data from the primary database&#8217;s redo buffer or online redo log files to the <strong>RFS</strong> process on the standby database.</p> <p>The <strong>RFS</strong> process then writes that redo to the standby redo log files.</p> <p>The <strong>MRP</strong> process then reads that redo and applies it directly to the standby database.</p> <p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, s</span>end us an invitation or follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a 
class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/physical-standby-database-redo-apply-architecture/">Physical Standby Database: Redo Apply Architecture</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4080 Sun May 06 2018 14:50:36 GMT-0400 (EDT) Basic of Data Pump in Oracle http://oracle-help.com/oracle-database/basic-of-data-pump-in-oracle/ <p><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png"><img data-attachment-id="4119" data-permalink="http://oracle-help.com/oracle-database/basic-of-data-pump-in-oracle/attachment/download-1-3/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?fit=309%2C163" data-orig-size="309,163" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="download (1)" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?fit=300%2C158" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?fit=309%2C163" class="wp-image-4119 alignleft" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?resize=184%2C97" alt="" width="184" height="97" 
srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?resize=300%2C158 300w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?resize=60%2C32 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?resize=150%2C79 150w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-1.png?w=309 309w" sizes="(max-width: 184px) 100vw, 184px" data-recalc-dims="1" /></a>This post explains <strong>Oracle Data Pump</strong>. We can find any number of articles about <strong>Oracle Data Pump</strong> on <strong>Google</strong>; let&#8217;s add one more. Being <strong>Oracle DBAs</strong>, we must explore the Oracle RDBMS as much as possible to become veterans in our field. Just as plain water becomes lemonade once you add lemon, the <strong>RDBMS</strong> becomes more useful and valuable through its additional features.</p> <p>As we know the purpose of Data Pump, let&#8217;s have a quick review of the technical definition.</p> <p><strong>Oracle Data Pump</strong> technology enables very high-speed movement of data and metadata from one database to another.
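</p> <p>As an aside (a minimal sketch; the dictionary view shown exists in Oracle Database 10g and later), any running Data Pump job such as the exports and imports below can be monitored from another session:</p><pre class="crayon-plain-tag">SQL&gt; SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;</pre><p>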
Oracle Data Pump is available only on Oracle Database 10g release 1 (10.1) and later.</p><pre class="crayon-plain-tag">SQL&gt; CREATE DIRECTORY dpump AS 'D:\Dpump';
SQL&gt; SELECT directory_path FROM all_directories WHERE directory_name = 'DPUMP';
SQL&gt; CREATE USER Oraclehelp IDENTIFIED BY iloveoraclehelp DEFAULT TABLESPACE Users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users;
SQL&gt; GRANT CREATE SESSION, CREATE TABLE TO Oraclehelp;
SQL&gt; GRANT READ, WRITE ON DIRECTORY dpump TO scott;
SQL&gt; GRANT READ, WRITE ON DIRECTORY dpump TO Oraclehelp;</pre><p>Table Export/Import:</p><pre class="crayon-plain-tag">J:\&gt;EXPDP scott/tiger DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP LOGFILE=EMP_DEPT.LOG TABLES=emp,dept</pre><p>Import with Remap Schema:</p><pre class="crayon-plain-tag">J:\&gt;IMPDP Oraclehelp/iloveoraclehelp DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP LOGFILE=IMP_EMP_DEPT.LOG TABLES=emp REMAP_SCHEMA=scott:Oraclehelp</pre><p>Export Metadata Only:</p><pre class="crayon-plain-tag">J:\&gt;EXPDP scott/tiger DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP LOGFILE=EMP_DEPT.LOG TABLES=emp,dept CONTENT=METADATA_ONLY
J:\&gt;IMPDP Oraclehelp/iloveoraclehelp DIRECTORY=dpump DUMPFILE=EMP_DEPT.DMP LOGFILE=EMP_DEPT.LOG TABLES=emp,dept CONTENT=METADATA_ONLY REMAP_SCHEMA=scott:Oraclehelp</pre><p>CONTENT={ALL | DATA_ONLY | METADATA_ONLY}</p> <ul> <li>ALL loads all data and metadata contained in the source.
This is the <strong>default</strong>.</li> <li>DATA_ONLY loads only table row data into existing tables; no database objects are created.</li> <li>METADATA_ONLY loads only database object definitions; no table row data is loaded.</li> </ul> <p>Schema Exports/Imports:</p><pre class="crayon-plain-tag">J:\&gt;EXPDP scott/tiger DIRECTORY=dpump DUMPFILE=SCOTT.DMP LOGFILE=SCOTT.LOG SCHEMAS=scott</pre><p>Remap Schema:</p><pre class="crayon-plain-tag">J:\&gt;IMPDP Oraclehelp/iloveoraclehelp DIRECTORY=dpump DUMPFILE=SCOTT.DMP LOGFILE=IMPSCOTT.LOG SCHEMAS=scott REMAP_SCHEMA=scott:Oraclehelp</pre><p>Remap Tablespace:</p><pre class="crayon-plain-tag">J:\&gt;IMPDP Oraclehelp/iloveoraclehelp DIRECTORY=dpump DUMPFILE=SCOTT.DMP LOGFILE=IMPSCOTT.LOG SCHEMAS=scott REMAP_TABLESPACE=USER1:USER3 REMAP_TABLESPACE=USER2:USER4</pre><p>Database Exports/Imports:</p><pre class="crayon-plain-tag">J:\&gt;EXPDP system/manager DIRECTORY=dpump DUMPFILE=DATABASE.DMP LOGFILE=DATABASE.LOG FULL=Y
J:\&gt;IMPDP system/sys DIRECTORY=dpump DUMPFILE=DATABASE.DMP LOGFILE=DATABASE.LOG FULL=Y</pre><p>Tablespace Exports/Imports:</p><pre class="crayon-plain-tag">J:\&gt;EXPDP 'sys/sys as sysdba' DIRECTORY=dpump DUMPFILE=TBSUSERS.DMP LOGFILE=TBSUSER.LOG TABLESPACES=USERS
J:\&gt;IMPDP 'Oraclehelp/iloveoraclehelp' DIRECTORY=dpump DUMPFILE=TBSUSERS.DMP LOGFILE=TBSUSER.LOG TABLESPACES=USERS TABLE_EXISTS_ACTION=REPLACE REMAP_SCHEMA=scott:Oraclehelp</pre><p>The TABLE_EXISTS_ACTION parameter for Data Pump impdp provides four options:</p> <ul> <li>SKIP is the default: a table is skipped if it already exists.</li> <li>APPEND appends rows if the target table’s geometry is compatible. This is the default when the user specifies CONTENT=DATA_ONLY.</li> <li>TRUNCATE truncates the table, and then loads rows from the source if the geometries are compatible and truncation is possible.
For example, it is not possible to truncate a table if it is the target of referential constraints.</li> <li>REPLACE drops the existing table, then creates and loads it from the source.</li> </ul> <p>Thank you for giving your valuable time to read the above information, and for adding new gems to Oracle&#8217;s treasure.</p> <p><span class="s1">If you want to be updated with all our articles, s</span>end us an invitation or follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle">https://t.me/helporacle</a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/basic-of-data-pump-in-oracle/">Basic of Data Pump in Oracle</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4118 Sun May 06 2018 14:46:41 GMT-0400 (EDT) Offline Relocation of a PDB using RMAN https://hemantoracledba.blogspot.com/2018/05/offline-relocation-of-pdb-using-rman.html <div dir="ltr" style="text-align: left;" trbidi="on">I've published a new video on <a href="https://youtu.be/wuDAvDQs6r8" target="_blank">Offline Relocation of a PDB</a> using RMAN in 12.2<br />.<br />.<br />.<br /><br /></div> Hemant K Chitale 
tag:blogger.com,1999:blog-1931548025515710472.post-3861449740250723373 Sun May 06 2018 12:17:00 GMT-0400 (EDT) Oracle home management using Ansible https://ilmarkerm.eu/blog/2018/05/oracle-home-management-using-ansible/#utm_source=rss&utm_medium=rss <p>Managing Oracle Database homes and patching them on a large scale is a challenge. Patching is a must today due to all the security threats out there, plus all the bugs that you will hit during normal database operations.<br /> You can read about the challenges and solutions in <a href="http://www.ludovicocaldara.net/dba/oh-mgmt-1/?utm_source=rss&utm_medium=rss">Ludovico Caldara&#8217;s blog series</a></p> <p>Here I&#8217;d like to share my solution. The main idea of this solution is simple:<br /> <strong>Never patch an existing Oracle home, even when you just need to apply a tiny one-off patch. Always install a new home and eventually remove the old one.</strong></p> <p>It is not possible to execute this strategy in a large environment without automation, and here I&#8217;m sharing my automation solution using Ansible.</p> <p>Features of this solution:</p> <ul> <li>Oracle home configurations become code</li> <li>Runs over any number of clusters or single hosts, with the same configuration, in parallel</li> <li>Maintains a list of homes, or flavours of homes, that each cluster/single host should have installed, and of what needs to be removed</li> <li>An Oracle Grid Infrastructure or Oracle Restart installation is required</li> <li>Fully automated, up to the point that you have a job in Jenkins that is triggered by a push to the version control system (git)</li> <li>The home description in the Ansible variable file also serves as documentation</li> <li>All tasks are idempotent, so you can execute the playbook multiple times. 
If the servers already have the desired state, nothing will be changed</li> </ul> <p>The ideal workflow to install a new home:</p> <ul> <li>Describe, in an Ansible variable file, the home name, base installer location and list of patches needed</li> <li>Attach the home name to clusters/hosts in Ansible files</li> <li>Commit and push to git</li> <li>Go through your typical git workflow to push the change into the release branch: create pull requests, have them reviewed by peers, merge the pull requests into the release branch</li> <li>A job in Jenkins triggers on the push to the release branch in git and then executes the Ansible playbook on the target/all hosts</li> </ul> <p><a href="https://github.com/ilmarkerm/ansible-oracle-home-mgmt?utm_source=rss&utm_medium=rss">Read more about it and get the code from github</a></p> ilmarkerm https://ilmarkerm.eu/blog/?p=435 Sun May 06 2018 05:45:54 GMT-0400 (EDT) LEAP#387 GPS Modules https://blog.tardate.com/2018/05/leap387-gps-modules.html <p>This is the first time I’ve played around with a GPS module, so it was an interesting dive into NMEA standards. But at the end of the day, the <a href="https://github.com/mikalhart/TinyGPSPlus">TinyGPSPlus</a> library makes it a piece of cake to get GPS readings. I log these to serial and display the main facts on an LCD. 
As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/GpsBasics">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/GpsBasics"><img src="https://leap.tardate.com/playground/GpsBasics/assets/GpsBasics_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/05/leap387-gps-modules.html Sun May 06 2018 01:34:08 GMT-0400 (EDT) Protection modes in Oracle Dataguard http://oracle-help.com/dataguard/protection-modes-in-oracle-dataguard/ <p>In previous articles we have gone through the <span style="color: #0000ff;"><a style="color: #0000ff;" href="http://oracle-help.com/dataguard/oracle-dataguard-architecture/"><strong>architecture of Oracle Dataguard</strong></a></span> and the redo apply and SQL apply architectures. The main role of any DR server is to protect the Production Server&#8217;s critical data from natural and man-made disasters. Oracle Data Guard has different flavors of protection for the Production Server&#8217;s critical data; these are the protection modes in Data Guard.</p> <p>There are three protection modes in Data Guard:</p> <ol> <li>Maximum Availability</li> <li>Maximum Performance</li> <li>Maximum Protection</li> </ol> <p><strong>Maximum Availability:</strong></p> <p>It provides the highest level of data protection that is possible without compromising the availability of the database.
In this protection mode, a transaction on the primary database does not commit until its redo has been transferred to at least one synchronized standby database and the standby has acknowledged the arrival of the redo data.</p> <p>Once the transaction&#8217;s redo data has been transferred and written to the standby redo log files, the RFS process sends an acknowledgement to the primary database, and the transaction then commits on the primary.</p> <p>If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability, until it is again able to write its redo stream to a synchronized standby database.</p> <p><strong>Maximum Performance:</strong></p> <p>As its name says, it provides the maximum level of data protection that a database can have without compromising the performance of the database.</p> <p>This is the <strong>default protection mode</strong> in Data Guard.</p> <p>Unlike Maximum Availability mode, this mode allows transactions to commit as soon as all redo data generated by those transactions is written to the online redo log files.
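</p> <p>As a quick illustration (a sketch; run these on the primary database), the current protection mode can be inspected, and changed to one of the three modes above, like this:</p><pre class="crayon-plain-tag">SQL&gt; SELECT protection_mode, protection_level FROM v$database;
SQL&gt; ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;</pre><p>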
Here, redo data is transferred to the standby database in an asynchronous way, so it does not affect the primary database&#8217;s performance.</p> <p>This mode offers less data protection than Maximum Availability mode, as there is no acknowledgement that the redo data has been written to a standby redo log file on the standby database.</p> <p>But as the primary database is unaffected by this delay, it offers maximum performance.</p> <p><strong>Maximum Protection:</strong></p> <p>This protection mode ensures zero data loss if the primary database fails.</p> <p>We have seen that in Maximum Availability mode, if the primary cannot write to any synchronized standby database, it operates as if it were in Maximum Performance mode. In Maximum Protection mode, however, if the primary cannot write to at least one synchronized standby database, the primary database will shut down, as this mode prioritizes protection of the data over the availability of the primary database.</p> <p><strong>Note:</strong> Oracle recommends that when you use this protection mode, you keep at least two standby databases running in maximum protection mode, to prevent a single standby database failure from causing the primary database to shut down.</p> <p>Stay tuned for <strong>More articles on Oracle DataGuard<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, s</span>end us an invitation or follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle">https://t.me/helporacle</a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn 
Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/dataguard/protection-modes-in-oracle-dataguard/">Protection modes in Oracle Dataguard</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=4090 Sat May 05 2018 20:35:39 GMT-0400 (EDT) Redolog in Oracle http://oracle-help.com/oracle-database/redolog-in-oracle/ <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg"><img data-attachment-id="4104" data-permalink="http://oracle-help.com/oracle-database/redolog-in-oracle/attachment/images-9-2/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg?fit=241%2C209" data-orig-size="241,209" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="images (9)" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg?fit=241%2C209" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg?fit=241%2C209" class="wp-image-4104 alignleft" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg?resize=76%2C66" alt="" width="76" height="66" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg?w=241 241w, 
https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg?resize=60%2C52 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-9.jpg?resize=150%2C130 150w" sizes="(max-width: 76px) 100vw, 76px" data-recalc-dims="1" /></a>As DBAs we are aware that Oracle is a mixture of CRD (control, redo and data) files. Now we are going to have a look at the redo log.</p> <p>Let&#8217;s have a technical definition of the redo log.</p> <p><strong>The most crucial structure for recovery operations is the <span class="bold">redo log</span>, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure.</strong></p> <p>Check the redo log file status with the following query:</p><pre class="crayon-plain-tag">SQL&gt; select group#,status from v$log;

GROUP# STATUS
---------- ----------------
1 CURRENT
2 INACTIVE
3 INACTIVE</pre><p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png"><img data-attachment-id="4105" data-permalink="http://oracle-help.com/oracle-database/redolog-in-oracle/attachment/capture-17/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?fit=397%2C324" data-orig-size="397,324" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Capture" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?fit=300%2C245" 
data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?fit=397%2C324" class="wp-image-4105 aligncenter" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?resize=186%2C152" alt="" width="186" height="152" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?resize=300%2C245 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?resize=60%2C49 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?resize=150%2C122 150w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/Capture-2.png?w=397 397w" sizes="(max-width: 186px) 100vw, 186px" data-recalc-dims="1" /></a></p> <p>The log files have the following status values:</p> <ul style="list-style-type: circle;"> <li><b><span lang="EN-US">UNUSED         </span></b><span lang="EN-US">Indicates that the log has never been written to, such as one that was just added.</span></li> <li><strong>CURRENT  </strong>Indicates the valid log that redo is currently being written to.</li> <li><strong>ACTIVE      </strong>Indicates a valid log that is no longer current but is still needed for crash recovery.</li> <li><strong>CLEARING </strong>Indicates a log that is being re-created as an empty log due to DBA action.</li> <li><strong>CLEARING CURRENT</strong> Means that a current log is being cleared of a closed thread.
If a log remains in this status, it may indicate a failure during the log switch.</li> <li><strong>INACTIVE  </strong>Means that the log is no longer needed for instance recovery but may be needed for media recovery.</li> </ul> <p>The v$logfile view has a STATUS column with these additional values:</p> <ul style="list-style-type: disc;"> <li><strong>INVALID </strong>The file is inaccessible.</li> <li><strong>STALE     </strong>The file contents are incomplete (such as when an instance is shut down with <strong>SHUTDOWN ABORT</strong> or due to a system crash).</li> <li><strong>DELETED </strong>The file is no longer used.</li> <li><strong>Null  </strong> The file is in use.</li> </ul> <p><strong>Adding Redo Log Groups:</strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER DATABASE ADD LOGFILE GROUP 4 'D:\oracle\product\11.2.0.4\oradata\orcl\REDO04.LOG' SIZE 10M;</pre><p><strong>Adding Redo Log Members:</strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER DATABASE ADD LOGFILE MEMBER 'D:\oracle\product\11.2.0.4\oradata\orcl\REDO04b.LOG' TO GROUP 4;</pre><p><strong>Checking the location of the redo log files:</strong></p><pre class="crayon-plain-tag">SQL&gt; select group#,member from v$logfile;

GROUP# MEMBER
------ --------------------------------------------------------
     3 D:\oracle\product\11.2.0.4\oradata\orcl\REDO03.LOG
     2 D:\oracle\product\11.2.0.4\oradata\orcl\REDO02.LOG
     1 D:\oracle\product\11.2.0.4\oradata\orcl\REDO01.LOG
     4 D:\oracle\product\11.2.0.4\oradata\orcl\REDO04.LOG
     4 D:\oracle\product\11.2.0.4\oradata\orcl\REDO04B.LOG</pre><p><strong>Dropping an Online Redo Log Member:</strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER DATABASE DROP LOGFILE MEMBER 'D:\oracle\product\11.2.0.4\oradata\orcl\REDO04B.LOG';</pre><p><strong>Dropping an Online Redo Log Group:</strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER DATABASE DROP LOGFILE GROUP 4;</pre><p><strong>Moving Redo Log File Destinations:</strong></p><pre class="crayon-plain-tag">SQL&gt;SHUTDOWN; Copy the redo
log file in new location. SQL&gt; STARTUP MOUNT; SQL&gt; ALTER DATABASE RENAME FILE 'D:\oracle\product\11.2.0.4\oradata\orcl\REDO01.LOG' TO 'D:\oracle\product\11.2.0.4\oradata\orcl\redologfile\REDO01.LOG'; SQL&gt; alter database open;</pre><p><strong>Forcing Log Switch:</strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER SYSTEM SWITCH LOGFILE;</pre><p><strong>Forcing Checkpoint:</strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER SYSTEM CHECKPOINT;</pre><p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel : </strong><a href="https://t.me/helporacle">https://t.me/helporacle</a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/redolog-in-oracle/">Redolog in Oracle</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4103 Sat May 05 2018 15:21:57 GMT-0400 (EDT) Controlfile in Oracle Database http://oracle-help.com/oracle-database/controlfile-in-oracle-database/ <p><span style="font-family: georgia, palatino, serif;"><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png"><img 
data-attachment-id="4095" data-permalink="http://oracle-help.com/oracle-database/controlfile-in-oracle-database/attachment/download-16/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png?fit=239%2C211" data-orig-size="239,211" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="download (16)" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png?fit=239%2C211" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png?fit=239%2C211" class="size-full wp-image-4095 alignleft" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png?resize=239%2C211" alt="" width="239" height="211" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png?w=239 239w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png?resize=60%2C53 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/04/download-16.png?resize=150%2C132 150w" sizes="(max-width: 239px) 100vw, 239px" data-recalc-dims="1" /></a> </span><span style="font-family: georgia, palatino, serif;">Today we are going to look at one of the most important elements of an Oracle database: the control file. As Oracle DBAs we must know the control file well. It plays a critical role, and if something goes wrong with it we cannot start the database. It is also a favorite interview topic: whenever we face an Oracle interview, more than one question comes from the control file section.</span></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><strong><span style="font-family: georgia, palatino, serif;">The control file contains the following types of information:</span></strong></p> <ol> <li><span style="font-family: georgia, palatino, serif;">Archive log history</span></li> <li><span style="font-family: georgia, palatino, serif;">Tablespace and datafile records (filenames, datafile checkpoints, read/write status, offline or not)</span></li> <li><span style="font-family: georgia, palatino, serif;">Current redo log file sequence number</span></li> <li><span style="font-family: georgia, palatino, serif;">Database&#8217;s creation date</span></li> <li><span style="font-family: georgia, palatino, serif;">Database name</span></li> <li><span style="font-family: georgia, palatino, serif;">Current archive log mode</span></li> <li><span style="font-family: georgia, palatino, serif;">Backup information</span></li> <li><span style="font-family: georgia, palatino, serif;">Database block corruption information</span></li> <li><span style="font-family: georgia, palatino, serif;">Database ID, which is unique to each database</span></li> </ol> <p>As we know, the control file plays an important role in the Oracle database, so multiplexing it is equally important.
Here are the steps to multiplex the control file using an SPFILE.</p> <p><strong><u><b>Multiplexing the control file using SPFILE:</b></u></strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER SYSTEM SET control_files='D:\oracle\product\11.2.0\oradata\orcl\CONTROL01.CTL','D:\oracle\product\11.2.0\oradata\orcl\CONTROL02.CTL','D:\oracle\product\11.2.0\oradata\orcl\CONTROL03.CTL','D:\oracle\product\11.2.0\oradata\orcl\CONTROL04.CTL' SCOPE=spfile;
SQL&gt; shutdown;
D:\&gt; copy D:\oracle\product\11.2.0\oradata\orcl\CONTROL01.CTL D:\oracle\product\11.2.0\oradata\orcl\CONTROL04.CTL
SQL&gt; startup;</pre><p>Thank you for giving your valuable time to read the above information.</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/controlfile-in-oracle-database/">Controlfile in Oracle Database</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal
http://oracle-help.com/?p=4094 Sat May 05 2018 15:11:42 GMT-0400 (EDT) Schedule rman backup in windows http://oracle-help.com/scripts/schedule-rman-backup-in-windows/ <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg"><img data-attachment-id="4049" data-permalink="http://oracle-help.com/scripts/schedule-rman-backup-in-windows/attachment/images-3-8/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg?fit=276%2C183" data-orig-size="276,183" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="images (3)" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg?fit=276%2C183" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg?fit=276%2C183" class="wp-image-4049 alignleft" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg?resize=137%2C91" alt="" width="137" height="91" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg?w=276 276w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg?resize=60%2C40 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/04/images-3.jpg?resize=150%2C99 150w" sizes="(max-width: 137px) 100vw, 137px" data-recalc-dims="1" /></a>We are going to learn the steps used to schedule an RMAN backup on Windows. Most organisations use Linux or Unix as per their requirements, but sometimes, as per a company&#8217;s requirements, we have to use Windows. Here is the process to schedule an RMAN backup on Windows.</p> <p>&nbsp;</p> <p>Let&#8217;s have a look at it.</p> <p>–rman.cmd – schedule only this script; it calls the backup script below.</p><pre class="crayon-plain-tag">set oracle_sid=orcl
rman target sys/passwd nocatalog cmdfile='d:\oracle\rman\dbrman.cmd' log='d:\oracle\rman\bk_log_1.log'</pre><p>–dbrman.cmd</p><pre class="crayon-plain-tag">run {
allocate channel d1 type disk;
backup tag whole_database_open format 'd:\oracle\archive\db_%t_%s_p%p' database;
sql 'alter system switch logfile';
sql 'alter system switch logfile';
backup archivelog all format 'd:\oracle\archive\al_%t_%s_p%p' delete all input;
delete noprompt obsolete;
backup current controlfile tag=cf1 format 'd:\oracle\archive\cf_%t_%s_p%p';
}</pre><p>Thank you for giving your valuable time to read the above information.</p> <p>The post <a rel="nofollow"
href="http://oracle-help.com/scripts/schedule-rman-backup-in-windows/">Schedule rman backup in windows</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4048 Sat May 05 2018 15:04:00 GMT-0400 (EDT) Undo Segment is in Needs Recovery http://oracle-help.com/ora-errors/undo-segment-is-in-needs-recovery/ <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png"><img data-attachment-id="4046" data-permalink="http://oracle-help.com/ora-errors/undo-segment-is-in-needs-recovery/attachment/download-9/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png?fit=272%2C185" data-orig-size="272,185" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="download" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png?fit=272%2C185" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png?fit=272%2C185" class="wp-image-4046 alignleft" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png?resize=148%2C101" alt="" width="148" height="101" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png?w=272 272w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png?resize=60%2C41 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/04/download.png?resize=150%2C102 150w" sizes="(max-width: 148px) 100vw, 148px" data-recalc-dims="1" /></a>Whenever we face a DBA interview, we have to go through some interesting topics interviewers ask, and Undo management is one of the most important parts of a DBA&#8217;s job responsibilities.</p> <p>This article will help you solve the issue when <strong>&#8220;Undo Segment is in Needs Recovery or Undo Segments are corrupted&#8221;</strong>.</p> <p>&nbsp;</p> <p>The database is shut down and cannot be started, asking for Undo recovery, or dropping the Undo tablespace fails.</p> <p>Dropping an Undo tablespace gives a message such as:</p><pre class="crayon-plain-tag">ORA-01548: active rollback segment
Or the Undo segment shows status as NEEDS RECOVERY</pre><p><strong>Cause</strong></p> <p>The issue can happen if the datafile on which the undo segments reside is offline and the transaction cannot be rolled back since the file is offline.</p> <p>Or</p> <p>It can also happen if there is an issue in the Undo segment itself.</p> <p><strong>Solution</strong></p> <p>First, check the Undo segment status<br /> —————————————-</p><pre class="crayon-plain-tag">SQL&gt; select segment_name,status,tablespace_name from dba_rollback_segs where status not in ('ONLINE', 'OFFLINE') ;

SEGMENT_NAME      STATUS          TABLESPACE_NAME
----------------- --------------- ----------------
_SYSSMX9$         NEEDS RECOVERY  UNDO02</pre><p>In the above example, Undo segment _SYSSMX9$ is in NEEDS RECOVERY status.<br /> This segment belongs to Undo tablespace UNDO02.</p> <p>Check the status of the datafiles in tablespace UNDO02:</p><pre class="crayon-plain-tag">SQL&gt; select status, name, file# from v$datafile where ts# in (select ts# from v$tablespace where name='UNDO02' );

STATUS  NAME                   FILE#
------- -------------------- -------
ONLINE  /u01/UNDO02_01.dbf        56
RECOVER /u02/UNDO02_03.dbf        77</pre><p>So clearly one file is in RECOVER status.</p> <p>Option A</p> <p>=======</p> <p>If the database is in archive log mode and you have all the required archive logs, you can do the following:</p> <p>Find whether you have all the required archive logs on disk or, if using RMAN, ensure they exist in the backup.</p> <p>Query 1<br /> ———</p>
<p>SQL&gt; Select checkpoint_change# from v$datafile_header where file#=&lt;file# in Recover status from previous query&gt; ;</p> <p>Now find which archive log contains this change.<br /> Query 2<br /> ———</p> <p>SQL&gt; select sequence#,thread#,name from v$archived_log</p> <p>where &lt;checkpoint_change# from query 1&gt; between first_change# and next_change# ;</p> <p>Ensure you have all the archive logs from this sequence# up to the current sequence# in your database.</p> <p>For example<br /> ==========</p><pre class="crayon-plain-tag">SQL&gt; select checkpoint_change#,file#,status from v$datafile_header where file#=77;

CHECKPOINT_CHANGE#      FILE# STATUS
------------------ ---------- -------
           2103113         77 OFFLINE

SQL&gt; Select sequence#,thread#,name from v$archived_log where 2103113 between first_change# and next_change# ;

 SEQUENCE#    THREAD# NAME
---------- ---------- --------------------------------
        96          1 /u01/arch/O1_MF_1_96_6OKHP.Arc</pre><p>If using RMAN</p> <p>Check that the archive logs from this sequence to the current sequence are available:</p><pre class="crayon-plain-tag">RMAN&gt; list backup of archivelog from sequence &lt;No listed in query2&gt;
RMAN&gt; recover datafile &lt;fileno&gt; ;
RMAN&gt; sql 'alter database datafile &lt;fileno&gt; online' ;</pre><p>If using sqlplus</p> <p>Ensure the archive logs are present on disk:</p><pre class="crayon-plain-tag">SQL&gt; recover datafile &lt;fileno&gt; ;
Type AUTO and hit enter
Once recovery is done
SQL&gt; alter database datafile &lt;fileno&gt; online ;</pre><p>If the archive logs have been restored to a location other than the default archive log destination your database is using, specify it with the SET LOGSOURCE command in sqlplus:</p><pre class="crayon-plain-tag">SQL&gt; set logsource "/u01/arch/newlocation" ;
SQL&gt; recover datafile &lt;fileno&gt; ;
Type AUTO and hit enter
Once recovery is done
SQL&gt; alter database datafile &lt;fileno&gt; online ;
SQL&gt; drop rollback segment "_SYSSMX9$";
Rollback segment dropped.
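-- A hedged extra check (not in the original post): before dropping the
-- tablespace, re-run the dba_rollback_segs query from the start of this
-- article to confirm no segment is still in NEEDS RECOVERY status:
SQL&gt; select segment_name,status from dba_rollback_segs where status not in ('ONLINE','OFFLINE') ;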
SQL&gt; drop tablespace undotbs1;
Tablespace dropped.

SQL&gt; select name from v$tablespace;

NAME
------------------------------
SYSTEM
UNDOTBS2
SYSAUX
USERS
EXAMPLE
TEMP

6 rows selected.

Now the corrupted UNDO tablespace is dropped.

SQL&gt; select * from v$rollname;</pre><p>Thank you for giving your valuable time to read the above information.</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/ora-errors/undo-segment-is-in-needs-recovery/">Undo Segment is in Needs Recovery</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4045 Sat May 05 2018 14:33:15 GMT-0400 (EDT) Control file recovery http://oracle-help.com/backup-and-recovery/control-file-recovery/ <p>Today we are going to look at the backbone of an Oracle database: the control file. As DBAs, we all need to know how to recover it. It is the most important element of our database.
Let&#8217;s have a look at the steps required to recover the control file.<b></b></p> <p>The instance terminates on startup with:</p><pre class="crayon-plain-tag">ORA-00600: internal error code, arguments: [3716], [], [], [], [], [], [], []
ORA-600 signalled during: ALTER DATABASE OPEN…
Wed Mar 18 21:56:19 2009
Trace dumping is performing id=[cdmp_20090318215619]
Wed Mar 18 21:56:22 2009
Shutting down instance (abort)
License high water mark = 10
Instance terminated by USER, pid = 1716436</pre><p><strong><b>Solution</b></strong></p> <p>First take a backup of the current state of the control file:</p> <p>SQL&gt; Alter database backup controlfile to '&lt;Name&gt;' ;</p> <p>SQL&gt; Alter database backup controlfile to trace ;</p> <p>Get the list of control file backups:</p> <p>Rman&gt; List backup of controlfile ;</p> <p>Rman&gt; Shutdown immediate ;</p> <p>Rman&gt; connect target / catalog username/pwd@connectstring</p> <p>Rman&gt; Startup nomount</p> <p>Rman&gt; Restore controlfile from '&lt;piece handle&gt;' ;</p> <p>Rman&gt; recover database ;</p> <p>SQL&gt; Alter database open resetlogs;</p> <p>&nbsp;</p> <p>Now we have recovered the controlfile.</p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: <a href="https://t.me/helporacle">https://t.me/helporacle</a></strong></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong></span></span><span class="s1"><span class="s2"> LinkedIn: </span></span><span class="s1"><span class="s2"><a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong></span><span class="s1"> LinkedIn: </span><span class="s1"><strong><a href="http://www.linkedin.com/in/SirDBaaSJoelPerez">Joel Perez’s
Profile</a></strong></span></p> <p>LinkedIn Group: <strong><em><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></em></strong></p> <p>Facebook Page: <strong><em><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></em></strong></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/backup-and-recovery/control-file-recovery/">Control file recovery</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Anuradha Mudgal http://oracle-help.com/?p=4042 Sat May 05 2018 14:26:58 GMT-0400 (EDT) Broadening Your Audience http://dbakevlar.com/2018/05/broadening-your-audience/ <p>I spent this week speaking at two conferences that may not be familiar to my usual crowd:<br /> • <a href="https://stareast.techwell.com/">Techwell StarEast Testing Conference </a>in Orlando, FL<br /> •<a href="https://www.interop.com/"> Interop ITX Data Conference</a> in Las Vegas, NV</p> <p><a href="http://dbakevlar.com/2018/05/broadening-your-audience/d04dd55f-f841-4d49-870b-e7c1b55ed04c/" rel="attachment wp-att-7932"><img class="alignnone size-full wp-image-7932" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/05/D04DD55F-F841-4D49-870B-E7C1B55ED04C.gif?resize=400%2C225" alt="" width="400" height="225" data-recalc-dims="1" /></a></p> <h3>StarEast Testing Conference</h3> <p>Techwell’s event is attended by testers and had over 2000 attendees at the Hyatt Regency Orlando’s Convention Center. This is a huge convention center and I won’t lie- I did try to first register at the Mazda event and then the KPMG event. One of the attendees in my session admitted she was crashing my talk from the KPMG conference and I was pretty psyched that I’d stolen an attendee from another event, (is that terrible of me to admit? )</p> <p>The attendance for this session was packed and I was really thrilled with the interaction of these testers during my presentation. 
They asked questions and nodded enthusiastically when I discussed which tools would best suit an end-to-end DevOps solution that included the testing team. I had numerous conversations with attendees after my presentation, some to ask further questions and some just to say what a valuable presentation it was for them to attend.</p> <p>I was also able to speak with some existing Delphix customers on a testing team who were experiencing some challenges, and to get them support assistance. They had previously been one step removed, so Delphix wasn’t aware that their network challenge hadn’t been addressed by their infrastructure team and can now follow up.</p> <h3>Interop ITX Conference</h3> <p>Interop ITX was at the Mirage in Las Vegas. It was a long, 4 ½ hour flight from Orlando to the destination, but well worth it. I’d been asked to submit an abstract by Karen Lopez and I was really happy that I had.</p> <p>As I’d just been in Las Vegas the week before to speak at Collaborate and am not a gambler, the 24 hrs I was in Vegas was more than enough for me, but the Mirage convention center gave me a great break from the noise and annoyance of the Vegas strip.</p> <p>I also got to visit with Gwen Shapira, a fellow Oakie who works at Kafka, whom I hadn’t seen in over a year. It was awesome to visit with her, and then I got to have Kitty, (Karen Lopez) ambassador my session as the Data and Analytics track lead this morning.</p> <p>I greatly admire both these women in technology and visiting with them reminds me just how awesome the women are in my field. My session at Interop ITX was also well attended, especially considering it was the last day of the event, and I was happy to get a few questions assuring me that people were still able to absorb knowledge after days of technical content being pushed at them.</p> <p>I presented on similar topics at both these events even though the audiences were quite different.
We’re finding that with all the talk of DevOps, data is rarely included in the equation. Companies begin on an agile and DevOps methodology and then find they can’t meet the development cycle requirements because the data is holding them back. This is due to two factors:</p> <p><b>1</b>. <b>Data Gravity</b>&#8211; everyone needs the data, which pulls them in, and the sheer weight of the data creates its own gravitational pull, making it difficult to escape.<br /> <b>2. Data Friction</b>&#8211; everyone wants the data now, in their own way and in their own environment. The DBA, Data Scientist, Developer, Tester and Analyst are all fighting over how to get the data where it needs to be, on their schedule.</p> <p><b>No matter the focus audience of my talk or the tools we discussed, the overall goal was for people to embrace:</b><br /> 1. Virtualization of data sources- relational, flat files and big data.<br /> 2. Masking of non-production data to eliminate 80% of the data scope for the General Data Protection Regulation (GDPR) and other critical data regulations.</p> <p><b>So what should you take away from this post:</b><br /> 1. Consider submitting talks to or attending these events. There is so much these audiences could learn from data experts.<br /> 2. No matter the technology focus, it all comes back to the data, and everyone, and I do mean everyone, is having the same challenges.</p> <p><b><i>Provide them with some options and solutions.
I know you’re up to it.</i></b></p> <br><br>Tags:&nbsp;&nbsp;<a href="http://dbakevlar.com/tag/devops/" rel="tag">DevOps</a>, <a href="http://dbakevlar.com/tag/test-data-management/" rel="tag">Test Data Management</a><br><br><small>Copyright © <a href="http://dbakevlar.com">DBA Kevlar</a> [<a href="http://dbakevlar.com/2018/05/broadening-your-audience/">Broadening Your Audience</a>], All Rights Reserved. 2018.</small><br> dbakevlar http://dbakevlar.com/?p=7931 Sat May 05 2018 07:18:22 GMT-0400 (EDT) LEAP#386 The Blue Pill https://blog.tardate.com/2018/05/leap386-the-blue-pill.html <p>Popularly known as the <a href="http://wiki.stm32duino.com/index.php?title=Blue_Pill">Blue Pill</a>, the STM32F103C8T6 Minimum System Development Board seems like an excellent gateway drug for getting into ARM Cortex-M3 development.</p> <p>This is particularly true as it is possible to program it with the familiar Arduino IDE.</p> <p>It is my first look at one of these boards, so I have simple expectations - follow along some of the tutorials on the web/youtube and at least get a simple program running on the board.
In the process I’ll learn a bit more about the board’s capabilities and quirks.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/STM32/BluePill">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/STM32/BluePill"><img src="https://leap.tardate.com/STM32/BluePill/assets/BluePill_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/05/leap386-the-blue-pill.html Sat May 05 2018 01:33:41 GMT-0400 (EDT) Build a Distribution Path using JSON https://dbasolved.com/2018/05/04/build-a-distribution-path-using-json/ <p dir="auto">In a previous post, I showed you how to build an Integrated Extract (IE) using JSON and a simple cURL command. In this post, let’s take a look at how to build a Distribution Service Path.</p> <p dir="auto">The first thing to understand is that the Distribution Service is the replacement for the Extract Data Pump in the traditional Oracle GoldenGate architecture. The Distribution Service does the same thing as the Extract Data Pump, with the exception of transformations. If you need to do transformations with Oracle GoldenGate Microservices, they have to be pushed to either the extract or the replicat.</p> <p dir="auto">The purpose of the Distribution Service is to ensure that the local trail files are shipped across the network and reach the Receiver Service, which will create the remote trail files.</p> <p dir="auto">Note: The Receiver Service, on the target side, will be started automatically when the Distribution Service connects to its port number.</p> <p dir="auto">Within the Distribution Service, you create Distribution Paths between the source and target hosts. The way you do this with JSON is quite simple. There are four main items the JSON should contain:</p> <p dir="auto">1.
Name &#8211; This is what the Distribution Path will be named<br />2. Status &#8211; Whether the Distribution Path should be running or stopped<br />3. Source &#8211; This specifies the local trail file that should be read for transactions<br />4. Target &#8211; This specifies the login and URL used to write to the remote trail files.</p> <p dir="auto">Note: For the Target setting, there are four protocols that can be used:<br /> Secure WebSockets (wss) &#8211; default<br /> WebSockets (ws)<br /> UDP-based Data Transfer Protocol (udt)<br /> Oracle GoldenGate (ogg)</p> <p dir="auto">An example of a JSON document that would be used to build a Distribution Path is as follows:</p> <p dir="auto">{<br /> "name": "TSTPATH",<br /> "status": "stopped",<br /> "source": {<br /> "uri": "trail://localhost:16002/services/v2/sources?trail=bb"<br /> },<br /> "target": {<br /> "uri": "ws://OracleGoldenGate+WSTARGET@localhost:17003/services/v2/targets?trail=bc"<br /> }<br />}</p> <p dir="ltr">To build this Distribution Path (TSTPATH), a cURL command such as the following can be used:</p> <p dir="ltr">curl -X POST \<br /> <a href="http://localhost:16002/services/v2/sources/TSTPATH" rel="nofollow">http://localhost:16002/services/v2/sources/TSTPATH</a> \<br /> -H 'Cache-Control: no-cache' \<br /> -d '{<br /> "name": "TSTPATH",<br /> "status": "stopped",<br /> "source": {<br /> "uri": "trail://localhost:16002/services/v2/sources?trail=bb"<br /> },<br /> "target": {<br /> "uri": "ws://OracleGoldenGate+WSTARGET@localhost:17003/services/v2/targets?trail=bc"<br /> }<br />}'</p> <p dir="ltr">Once the Distribution Path is created, you can start it. Upon starting the path, you can check the Receiver Service on the target side.
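As an aside, if you are scripting this against the Distribution Server REST endpoint, the payload can be assembled programmatically instead of hand-editing JSON. A minimal Python sketch, with the caveat that the helper name `build_dist_path` is my own; the hosts, ports, trail names and URI formats are simply taken from the example above:

```python
import json

def build_dist_path(name, source_host, source_port, source_trail,
                    target_user, target_host, target_port, target_trail,
                    protocol="ws", status="stopped"):
    """Assemble the four-part Distribution Path document described above."""
    return {
        "name": name,
        "status": status,
        "source": {
            "uri": f"trail://{source_host}:{source_port}/services/v2/sources?trail={source_trail}"
        },
        "target": {
            "uri": f"{protocol}://{target_user}@{target_host}:{target_port}/services/v2/targets?trail={target_trail}"
        },
    }

# Values from the TSTPATH example above.
path = build_dist_path("TSTPATH", "localhost", 16002, "bb",
                       "OracleGoldenGate+WSTARGET", "localhost", 17003, "bc")
endpoint = "http://localhost:16002/services/v2/sources/" + path["name"]

print(json.dumps(path, indent=2))
# POSTing `path` to `endpoint` (e.g. with requests.post(endpoint, json=path))
# reproduces the cURL call shown above.
```

After starting the path, check the Receiver Service on the target side.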
It should have been started as well.</p> <p dir="ltr">Enjoy!!!</p> Bobby Curtis http://dbasolved.com/?p=1880 Fri May 04 2018 14:15:00 GMT-0400 (EDT) FBI Limitation https://jonathanlewis.wordpress.com/2018/05/04/fbi-limitation/ <p>A recent <a href="https://community.oracle.com/thread/4142600"><strong><em>question on the ODC (OTN) database forum</em></strong></a> prompted me to point out that the optimizer doesn&#8217;t consider <em><strong>function-based indexes</strong></em> on remote tables in distributed joins. I then spent 20 minutes trying to find the blog note where I had demonstrated this effect, or an entry in the manuals reporting the limitation &#8211; but I couldn&#8217;t find anything, so I&#8217;ve written a quick demo which I&#8217;ve run on 12.2.0.1 to show the effect. First, the SQL to create a couple of tables and a couple of indexes:</p> <pre class="brush: plain; title: ; notranslate">
rem
rem     Script:  fbi_limitation.sql
rem     Author:  Jonathan Lewis
rem     Dated:   May 2018
rem

-- create public database link orcl@loopback using 'orcl';

define m_target = orcl@loopback

create table t1
segment creation immediate
nologging
as
with generator as (
        select rownum id
        from   dual
        connect by level &lt;= 1e4 -- &gt; comment to avoid WordPress format issue
)
select
        rownum               id,
        rownum               n1,
        lpad(rownum,10,'0')  v1,
        lpad('x',100,'x')    padding
from
        generator v1,
        generator v2
where
        rownum &lt;= 1e6 -- &gt; comment to avoid WordPress format issue
;

create table t2 nologging as select * from t1;

alter table t1 add constraint t1_pk primary key(id);
alter table t2 add constraint t2_pk primary key(id);

create unique index t2_f1 on t2(id+1);

begin
        dbms_stats.gather_table_stats(
                ownname     =&gt; user,
                tabname     =&gt; 'T1',
                cascade     =&gt; true,
                method_opt  =&gt; 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname     =&gt; user,
                tabname     =&gt; 'T2',
                cascade     =&gt; true,
                method_opt  =&gt; 'for all columns size 1'
        );
end;
/
</pre> <p>The code is very simple: it creates a couple of
identical tables with an id column that will produce an index with a very good clustering_factor. You&#8217;ll notice that I&#8217;ve (previously) created a <em><strong>public database link</strong></em> that is (in my case) a <em><strong>loopback</strong></em> to the current database and the code defines a variable that I can use as a substitution variable later on. If you want to do further tests with this model you&#8217;ll need to make some changes in these two lines.</p> <p>So now I&#8217;m going to execute a query that should result in the optimizer choosing a nested loop between the tables &#8211; but I have two versions of the query, one which treats <em><strong>t2</strong></em> as the local table it really is, and one that pretends (through the loopback) that <em><strong>t2</strong></em> is remote.</p> <pre class="brush: plain; title: ; notranslate">
set serveroutput off

select
        t1.v1, t2.v1
from
        t1,
        t2
--      t2@orcl@loopback
where
        t2.id+1 = t1.id
and     t1.n1 between 101 and 110
;

select * from table(dbms_xplan.display_cursor);

select
        t1.v1, t2.v1
from
        t1,
--      t2
        t2@orcl@loopback
where
        t2.id+1 = t1.id
and     t1.n1 between 101 and 110
;

select * from table(dbms_xplan.display_cursor);
</pre> <p>Here are the two execution plans, pulled from memory &#8211; including the <em>&#8220;remote&#8221;</em> section in the distributed case:</p> <pre class="brush: plain; title: ; notranslate">
SQL_ID  fthq1tqthq8js, child number 0
-------------------------------------
select t1.v1, t2.v1 from t1, t2 -- t2@orcl@loopback
where t2.id+1 = t1.id and t1.n1 between 101 and 110

Plan hash value: 1798294492

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |  2347 (100)|          |
|   1 |  NESTED LOOPS                |       |    11 |   407 |  2347   (3)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL          | T1    |    11 |   231 |  2325   (4)| 00:00:01 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T2    |     1 |    16 |     2   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | T2_F1 |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((&quot;T1&quot;.&quot;N1&quot;&lt;=110 AND &quot;T1&quot;.&quot;N1&quot;&gt;=101))
   4 - access(&quot;T2&quot;.&quot;SYS_NC00005$&quot;=&quot;T1&quot;.&quot;ID&quot;)

Note
-----
   - this is an adaptive plan


SQL_ID  ftnmywddff1bb, child number 0
-------------------------------------
select t1.v1, t2.v1 from t1, -- t2  t2@orcl@loopback
where t2.id+1 = t1.id and t1.n1 between 101 and 110

Plan hash value: 1770389500

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |  4663 (100)|          |        |      |
|*  1 |  HASH JOIN         |      |    11 |   616 |  4663   (4)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T1   |    11 |   231 |  2325   (4)| 00:00:01 |        |      |
|   3 |   REMOTE           | T2   |  1000K|    33M|  2319   (3)| 00:00:01 | ORCL@~ | R-&gt;S |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access(&quot;T1&quot;.&quot;ID&quot;=&quot;T2&quot;.&quot;ID&quot;+1)
   2 - filter((&quot;T1&quot;.&quot;N1&quot;&lt;=110 AND &quot;T1&quot;.&quot;N1&quot;&gt;=101))

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT &quot;ID&quot;,&quot;V1&quot; FROM &quot;T2&quot; &quot;T2&quot; (accessing 'ORCL@LOOPBACK' )
</pre> <p>Both plans show that the optimizer has estimated the number of rows that would be retrieved from <em><strong>t1</strong></em> correctly (very nearly); but while
the fully local query does a nested loop join using the high-precision, very efficient function-based index (reporting the internal supporting column referenced in the predicate section), the distributed query seems to have no idea about the remote function-based index: it selects all the required rows from the remote table and does a hash join.</p> <h3>Footnote:</h3> <p>Another reason for changes in execution plan when you test fully local and then run distributed is that the optimizer ignores remote histograms, as demonstrated in <a href="https://jonathanlewis.wordpress.com/2013/08/19/distributed-queries-3/"><em><strong>a much older blog note</strong></em></a> (though still true in 12.2.0.1).</p> <h3>Addendum</h3> <p>After finishing this note, I discovered that I had written a similar note about reverse key indexes <a href="https://jonathanlewis.wordpress.com/2013/11/11/reverse-key/"><em><strong>nearly five years ago</strong></em></a>. Arguably a reverse key is just a special case of a function-based index &#8211; except it&#8217;s not labelled as such in <em><strong>user_tab_cols</strong></em>, and doesn&#8217;t depend on a system-generated hidden column.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18235 Fri May 04 2018 04:19:08 GMT-0400 (EDT) Fun with Oracle Data Guard Broker https://blog.pythian.com/fun-oracle-data-guard-broker/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><h1>Issues during the switchover to standby database</h1> <h2>The beginning</h2> <p>So, there we were attending to our jobs and tending to our customers when an emergency alert came in: a production server had gone down. All bells and whistles were sounded and an emergency bridge phone was opened. A hardware failure had brought the server down and the repair time was unknown, so we had to fail over to the standby database to restore the service.
This is a simple physical standby database with Data Guard Broker in place to take care of the basic things.</p> <p>Simply said and simply done. For reasons out of the scope of this post, we disabled the Broker and manually opened the database, and all went well, except that the Broker had at some point changed the <em>log_archive_dest_1</em> initialisation parameter to be &#8220;<em>location=&#8221;USE_DB_RECOVERY_FILE_DEST&#8221;, valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)</em>&#8220;. This was the first problem, but one known from our previous tests, so we simply changed it to be &#8220;<em>location=&#8221;USE_DB_RECOVERY_FILE_DEST&#8221;, valid_for=(ALL_LOGFILES,ALL_ROLES)</em>&#8221; and everything worked as expected. Kids, <strong>test your DR scenarios while you still can :).</strong></p> <h2>The rebuilding</h2> <p>For reasons out of the scope of this blog post, flashback database was not enabled on the primary side, so after the crash and a few hours of production data being added to the new primary database, we decided that the best way to continue was to rebuild the old primary as a standby and execute a switchover to get everything back to where it was before the outage.</p> <p>The main caveat here is that the servers are located on opposite sides of the Atlantic ocean, and this is quite a database: some 2 TiB of data that does not get compressed much during the backup, plus a few GiB of archived logs generated daily. This means that using the standard procedure to rebuild a standby database would incur a severe delay due to the time it takes to take a new full backup of the new-primary, copy it over to the new-standby site, and then do the subsequent log shipping to get both sides in sync.</p> <p><strong>Enter Oracle documentation</strong>.
I find that Oracle documentation is some of the best I&#8217;ve had the opportunity to work with, although it&#8217;s getting a bit cluttered and sometimes incoherent due to the many different features, options, new versions and partial updates that are only visible in MOS. Well, it may not be the best anymore, but I still use it a lot as a trusted source of information.</p> <p>So there is a procedure for rebuilding a primary database as a standby from its own backup and then syncing it up with the primary. You can see the documentation in the following <a href="https://docs.oracle.com/cd/E11882_01/server.112/e41134/scenarios.htm#SBYDB4898">link</a>.</p> <p>And off we go. Get the restore scripts ready, kick off the restore and wait for it to complete.</p> <p>In case someone is curious, here are the scripts we used. Simple, sloppy scripts that do the job.</p> <p>First the <em>bash</em> script that wraps the RMAN call and allows us to execute it in <em>nohup</em> mode. You know, the network connection always drops at the most interesting moment.</p> <pre lang="bash" line="1">export ORACLE_SID=PRIMARY
RMAN_COMMAND=restore_PRIMARY
EMAIL_RECIPIENT=no-reply@pythian.com
export NLS_DATE_FORMAT='YYYY-MM-DD:hh24:mi:ss'

if [ ! -d "log" ]; then
  mkdir log
fi

. oraenv &lt;&lt;&lt; $ORACLE_SID

TS=`date +'%Y%m%d-%H.%M'`
WORK_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &amp;&amp; pwd )"
RMAN_LOG=${WORK_DIR}/log/RMAN-${RMAN_COMMAND}_${TS}.log

date
echo "Begin RMAN ${RMAN_COMMAND} on $SID"
rman cmdfile=${WORK_DIR}/${RMAN_COMMAND}.rcv log=${RMAN_LOG}
echo "#"
date
echo "End RMAN ${RMAN_COMMAND} on $SID"

mail -s "RMAN script ${RMAN_COMMAND} completed on ${SID}" ${EMAIL_RECIPIENT} &lt; ${RMAN_LOG}
exit 0
</pre> <p>And this is the very basic RMAN script.</p> <pre lang="sql" line="1" highlight="0">connect target /
connect catalog rman/XXXXXX@RMANCAT

RUN {
SET UNTIL SCN 123456789;
ALLOCATE CHANNEL D1 TYPE DISK;
ALLOCATE CHANNEL D2 TYPE DISK;
ALLOCATE CHANNEL D3 TYPE DISK;
ALLOCATE CHANNEL D4 TYPE DISK;
ALLOCATE CHANNEL D5 TYPE DISK;
ALLOCATE CHANNEL D6 TYPE DISK;
ALLOCATE CHANNEL D7 TYPE DISK;
ALLOCATE CHANNEL D8 TYPE DISK;
ALLOCATE CHANNEL D9 TYPE DISK;
ALLOCATE CHANNEL D10 TYPE DISK;
ALLOCATE CHANNEL D11 TYPE DISK;
ALLOCATE CHANNEL D12 TYPE DISK;
ALLOCATE CHANNEL D13 TYPE DISK;
ALLOCATE CHANNEL D14 TYPE DISK;
ALLOCATE CHANNEL D15 TYPE DISK;
RESTORE DATABASE;
RECOVER DATABASE;
}
</pre> <p>After the restore completed it was time for the recover in which, yes you guessed right, there was a missing archived log, and there is more fun to be had.</p> <p>Of course, there was a backup of the missing archived log on the site we were trying to rebuild, so we recovered it and applied it to the standby but then, surprise!
Note the ORA-17503 error, with the inline comment, in the snippet below:</p> <pre lang="sql" line="1" highlight="1,19">SQL&gt; recover standby database;
ORA-00279: change 14943816301 generated at 04/09/2018 00:02:23 needed for thread 1
ORA-00289: suggestion : +FRA/conpro_uk2/archivelog/2018_04_11/thread_1_seq_15211.2442.973138947
ORA-00280: change 14943816301 for thread 1 is in sequence #15211

Specify log: {&lt;RET&gt;=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 14943827295 generated at 04/09/2018 00:07:18 needed for thread 1
ORA-00289: suggestion : +FRA/conpro_uk2/archivelog/2018_04_11/thread_1_seq_15212.458.973138947
ORA-00280: change 14943827295 for thread 1 is in sequence #15212
ORA-00278: log file '+FRA/conpro_uk2/archivelog/2018_04_11/thread_1_seq_15211.2442.973138947' no longer needed for this recovery

ORA-00279: change 14943841236 generated at 04/09/2018 00:13:56 needed for thread 1
ORA-00289: suggestion : +FRA
ORA-00280: change 14943841236 for thread 1 is in sequence #15213
ORA-00278: log file '+FRA/conpro_uk2/archivelog/2018_04_11/thread_1_seq_15212.458.973138947' no longer needed for this recovery

ORA-00308: cannot open archived log '+FRA'
ORA-17503: ksfdopn:2 Failed to open file +FRA   &lt;-- at this point logs from 15213 are gone from FRA
ORA-15045: ASM file name '+FRA' is not in reference form

ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '+DATA/conpro_uk2/datafile/system.298.973129969'
</pre> <p><strong>Enter Oracle bugs</strong>. Yes, ladies and gentlemen, upon restoring the missing archived log from the backup, the database started some FRA clean-up process that deleted the newest archived logs! For these, of course, there was no backup, as they had been created between the last backup and the crash.
For those interested in the details, here is the bug we believe we may have swallowed while riding our bikes: <strong>&#8220;Bug 17370174 : NEW ARCHIVELOG FILES ARE DELETED FIRST FROM THE FRA&#8221;</strong>.</p> <p>So, here we are: the old-primary, now an almost new-standby, is missing some archived logs needed to sync up with the current primary.<br /> Fortunately, these archived logs had been shipped and applied to the old-standby, now the primary, before the crash, so we were able to restore and ship them to the other side of the Atlantic and everything was up and running again, including Data Guard Broker &#8230; after a few hours.</p> <p>And, after applying some eighty archived logs, here we go again: <strong>ORA-00365</strong></p> <pre lang="sql" line="1" highlight="5">Media Recovery Log +FRA/PRIMARY/archivelog/2018_04_11/thread_1_seq_15291.2126.973169799
Errors with log +FRA/PRIMARY/archivelog/2018_04_11/thread_1_seq_15291.2126.973169799
MRP0: Background Media Recovery terminated with error 365
Errors in file /apps/oracle/diag/rdbms/PRIMARY/PRIMARY/trace/PRIMARY_pr00_13538.trc:
ORA-00365: the specified log is not the correct next log
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
MRP0: Background Media Recovery process shutdown (PRIMARY)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL SWITCHOVER DISCONNECT USING CURRENT LOGFILE
</pre> <p>This issue was more complicated. Another bug entered the game, &#8220;<em>Bug 19181583 &#8211; Allow Recover Until Time for a Redo Log that has the activation bit set (Doc ID 19181583.8)</em>&#8221;, and a severity 1 SR was opened with Oracle.
Also, we escalated internally to our IPC for him to take a look.</p> <p>Working around this one required creating a fresh standby controlfile from the current primary, shipping it over to the standby server and replacing the current controlfiles with it.</p> <p>Also, as ASM is involved, we had to do some RMAN magic to have the controlfile pointing to the right datafiles.</p> <p>Not the cleanest process; it may have been better to start a bit down in the ASM directory tree to avoid trying to catalog files like the ASM password file but, again, it did the job.</p> <pre lang="sql" line="1" highlight="0">RMAN&gt; catalog start with '+DATA';

Starting implicit crosscheck backup at 11-APR-18
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1037 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=3 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=118 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=234 device type=DISK
Crosschecked 1111 objects
Finished implicit crosscheck backup at 11-APR-18

Starting implicit crosscheck copy at 11-APR-18
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
Finished implicit crosscheck copy at 11-APR-18

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +FRA/PRIMARY/ARCHIVELOG/2018_04_11/thread_1_seq_15211.2442.973138947
File Name: +FRA/PRIMARY/ARCHIVELOG/2018_04_11/thread_1_seq_15212.458.973138947
File Name: +FRA/PRIMARY/ARCHIVELOG/2018_04_11/thread_1_seq_15291.2126.973169799
File Name: +FRA/PRIMARY/CONTROLFILE/current.256.922462525

searching for all files that match the pattern +DATA

List of Files Unknown to the Database
=====================================
File Name: +DATA/orapwasm
File Name: +DATA/PRIMARY/spfileCONPRO.ora
File Name: +DATA/PRIMARY/TEMPFILE/LMTEAMTEMP.275.973170913
(...)
File Name: +DATA/TEST/CONTROLFILE/Current.257.922321133
File Name: +DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.922315255

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATA/PRIMARY/TEMPFILE/LMTEAMTEMP.275.973170913
File Name: +DATA/PRIMARY/TEMPFILE/MGMTTEMP.274.973170913
File Name: +DATA/PRIMARY/TEMPFILE/LICMANTEMP.273.973170913
File Name: +DATA/PRIMARY/TEMPFILE/TEMP2.272.973170913
File Name: +DATA/PRIMARY/TEMPFILE/TEMP2.271.973170913
(...)

List of Files Which Where Not Cataloged
=======================================
File Name: +DATA/orapwasm
  RMAN-07517: Reason: The file header is corrupted
File Name: +DATA/PRIMARY/spfileCONPRO.ora
  RMAN-07518: Reason: Foreign database file DBID: 0 Database Name:
File Name: +DATA/PRIMARY/ONLINELOG/group_20.381.948561469
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: +DATA/PRIMARY/ONLINELOG/group_21.380.948561487
(...)

RMAN&gt; list copy of database;

List of Datafile Copies
=======================

Key     File S Completion Time Ckp SCN     Ckp Time
------- ---- - --------------- ----------- ---------------
248     1    A 11-APR-18       14947795005 09-APR-18
        Name: +DATA/PRIMARY/datafile/system.298.973129969
231     2    A 11-APR-18       14947795005 09-APR-18
        Name: +DATA/PRIMARY/datafile/sysaux.315.973127749
251     3    A 11-APR-18       14947795005 09-APR-18
(...)
        Name: +DATA/PRIMARY/datafile/sfsdata.302.973129281
246     93   A 11-APR-18       14947795005 09-APR-18
        Name: +DATA/PRIMARY/datafile/sfsdata.300.973129815

RMAN&gt; switch database to copy;

datafile 1 switched to datafile copy "+DATA/PRIMARY/datafile/system.298.973129969"
datafile 2 switched to datafile copy "+DATA/PRIMARY/datafile/sysaux.315.973127749"
datafile 3 switched to datafile copy "+DATA/PRIMARY/datafile/bodata_new.295.973130253"
datafile 4 switched to datafile copy "+DATA/PRIMARY/datafile/users.358.973111005"
datafile 5 switched to datafile copy "+DATA/PRIMARY/datafile/wsidx.285.973130519"
datafile 6 switched to datafile copy "+DATA/PRIMARY/datafile/wsdata.284.973130527"
datafile 7 switched to datafile copy "+DATA/PRIMARY/datafile/gt_data.283.973130533"
(...)

RMAN&gt; report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name PRIMARY

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    9000     SYSTEM               ***     +DATA/PRIMARY/datafile/system.298.973129969
2    27610    SYSAUX               ***     +DATA/PRIMARY/datafile/sysaux.315.973127749
3    1876     BODATA_NEW           ***     +DATA/PRIMARY/datafile/bodata_new.295.973130253
4    32715    USERS                ***     +DATA/PRIMARY/datafile/users.358.973111005
5    100      WSIDX                ***     +DATA/PRIMARY/datafile/wsidx.285.973130519
(...)
92   10240    SFSDATA              ***     +DATA/PRIMARY/datafile/sfsdata.302.973129281
93   10240    SFSDATA              ***     +DATA/PRIMARY/datafile/sfsdata.300.973129815

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    8000     TEMP2                8000        +DATA/STANDBY/tempfile/temp2.360.967677721
2    1000     LMTEAMTEMP           30000       +DATA/STANDBY/tempfile/lmteamtemp.318.967677721
3    500      MGMTTEMP             30000       +DATA/STANDBY/tempfile/mgmttemp.309.967677721
4    500      LICMANTEMP           30000       +DATA/STANDBY/tempfile/licmantemp.345.967 (...)
</pre> <p>After this, a reboot of the database got it back up, working and applying archived logs again, although with a little noise in the alert log.</p> <h2>The Data Guard switchover</h2> <p>So, we finally got to the point where we were in the normal DR situation; that is, the original standby is now our primary and <em>vice versa</em>.</p> <p>After getting a maintenance window, the applications were brought down, the switchover was about to be initiated and&#8230; here we go again.</p> <pre lang="sql" line="1" highlight="7">DGMGRL&gt; show database old-primary

Database - old-primary

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   11 minutes 3 seconds   &lt;-- This means that something is not right, but let's hope that DG Broker takes care of it.
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    PRIMARY

Database Status:
SUCCESS

DGMGRL&gt; switchover to old-primary;
Performing switchover NOW, please wait...
Error: ORA-16775: target standby database in broker operation has potential data loss   &lt;-- No, it doesn't
Failed.
Unable to switchover, primary database is still "new-primary"
</pre> <p>It was the Broker changing <em>log_archive_dest_1</em> again. We fixed it, the missing archived logs were shipped and now, yes, this is the one&#8230; no.<br /> The old-primary failed to shut down.
We believe this was due to something like &#8220;Bug 20139391 &#8211; Dataguard Broker calls CRS to restart for Databases not being managed by CRS (Doc ID 20139391.8)&#8221;, although this is not exactly our case. We were running out of time and the maintenance window had been stretched way too far already.<br /> Our IPC got involved here again and helped complete a manual switchover after getting rid of the Broker, and all was fine.</p> <h2>The end, well&#8230; not really</h2> <p>During all this hassle we were unable to actually troubleshoot what was wrong with Data Guard Broker. Was it a series of bugs? Or was there something wrong in our, quite old, configuration?</p> <p>So we set up a dummy Data Guard Broker configuration with an empty database and started our testing. Another IPC was involved here to help us figure out what might be wrong, and the first hit landed right on the forehead: we were using a bequeath connection for <em>dgmgrl</em>, which is the source of many issues during a switchover.<br /> The Broker requires a static listener service registered on both ends of the replica and uses it to connect to the remote database to perform shutdowns and startups during the change of roles.<br /> The funny thing is that it is able to connect to the remote database to shut it down, but it won&#8217;t be able to connect to start it up again, hence the role switch fails.<br /> During our tests, the Broker was able to simply continue the process after we manually started/mounted the remote database, but we didn&#8217;t get a chance to test this during the production outage.</p> <p>So: <strong>try to avoid bequeath connections with <em>dgmgrl</em></strong>; it will do you no harm and will save you a few headaches.</p> <p>The second important finding is related to how the Data Guard Broker collects database configuration parameters and sets them upon startup.</p> <p>In the alert log of the databases we can see the following commands executed by the Broker as part of its startup process.
Note the highlighted line number 8.</p> <pre lang="sql" line="1" highlight="8">ALTER SYSTEM SET log_archive_trace=0 SCOPE=BOTH SID='PRIMARY';
ALTER SYSTEM SET log_archive_format='PRIMARY_%t_%r_%s.dbf' SCOPE=SPFILE SID='PRIMARY';
ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=BOTH SID='*';
ALTER SYSTEM SET archive_lag_target=1800 SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_max_processes=4 SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_min_succeed_dest=1 SCOPE=BOTH SID='*';
ALTER SYSTEM SET fal_server='STANDBY' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_1='location="USE_DB_RECOVERY_FILE_DEST"',' valid_for=(STANDBY_LOGFILE,STANDBY_ROLE)' SCOPE=BOTH SID='PRIMARY';
</pre> <p>So, it is the Broker that changes the parameters related to a standby configuration. But why these?<br /> According to <a href="https://docs.oracle.com/cd/E11882_01/server.112/e40771/concepts.htm#DGBKR065" target="_blank" rel="noopener">Oracle documentation</a>:</p> <blockquote><p>Associated with each database are various properties that the DMON process uses to control the database&#8217;s behavior. The properties are recorded in the configuration file as a part of the database&#8217;s object profile that is stored there. Many database properties are used to control database initialization parameters related to the Data Guard environment.</p> <p>To ensure that the broker can update the values of parameters in both the database itself and in the configuration file, you must use a server parameter file to control static and dynamic initialization parameters. 
The use of a server parameter file gives the broker a mechanism that allows it to reconcile property values selected by the database administrator (DBA) when using the broker with any related initialization parameter values recorded in the server parameter file.</p> <p>When you set values for database properties in the broker configuration, the broker records the change in the configuration file and propagates the change to all of the databases in the Data Guard configuration.</p></blockquote> <p>So, we&#8217;ve learnt the hard way that once the Broker is in charge, we must use the Broker to change standby-related parameters, or it will revert the changes at the next startup. In this case, we were unlucky to have a setting in place that added even more noise to the overall situation.</p> <h2>Final thoughts</h2> <p>At the end of the day, even the best make assumptions and mistakes. The first assumption we made is that everything was properly set up and ready in case of a DR scenario. We didn&#8217;t have much opportunity to test it, so we had to work with this assumption. It meant that the setup was quite good, but not good enough.<br /> The main mistake here was not a mistake <em>per se</em>, in the sense of something done the wrong way; the mistake was not questioning what we were doing and, more importantly, how we were doing it. We simply assumed, and here it is again, that a bequeath connection that allows us to change the configuration and even shut down the remote database would be valid for a switchover operation, which it was not.</p> <p>So, lessons learned here: <strong>test your DR scenarios, never make assumptions and question what you are doing</strong>, even more so if you are not doing it frequently or are following instructions given by others, and that includes Oracle support. 
There is always room to learn something new and the bigger the failure, the better the experience.</p> </div></div> Jose Rodriguez https://blog.pythian.com/?p=103941 Thu May 03 2018 15:29:54 GMT-0400 (EDT) Log Buffer #545: A Carnival of the Vanities for DBAs https://blog.pythian.com/log-buffer-545-carnival-vanities-dbas/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>From Oracle to SQL Server, and from Cloud to PostgreSQL; things are changing lightning fast. Innovation in these areas is being captured promptly by the bloggers and this Log Buffer Edition covers a few of the most pertinent ones.<br /> <span id="more-104001"></span></p> <p><strong>Cloud:</strong></p> <p>Every application has to store and access operational data, usually in a database. <a href="https://cloudplatform.googleblog.com/2018/04/Accelerating-innovation-for-cloud-native-managed-databases.html">Managed</a> database services can help you ship apps faster and reduce operational toil so you can focus on what makes your business successful and unique.</p> <p>Introducing <a href="https://cloudplatform.googleblog.com/2018/04/Introducing-Kubernetes-Catalog-and-GCP-Open-Service-Broker.html">Kubernetes</a> Service Catalog and Google Cloud Platform Service Broker: find and connect services to your cloud-native apps</p> <p>Exploring container security: <a href="https://cloudplatform.googleblog.com/2018/04/Exploring-container-security-Running-a-tight-ship-with-Kubernetes-Engine-1-10.html">Running</a> a tight ship with Kubernetes Engine 1.10</p> <p>Introducing <a href="https://cloudplatform.googleblog.com/2018/04/introducing-Partner-Interconnect-a-fast-economical-onramp-to-GCP.html">Partner</a> Interconnect, a fast, economical onramp to GCP</p> <p>Kubernetes best practices: How and why to build <a href="https://cloudplatform.googleblog.com/2018/04/Kubernetes-best-practices-how-and-why-to-build-small-container-images.html">small</a> container images</p> <p>Good news for <a 
href="https://aws.amazon.com/blogs/security/announcing-the-new-aws-certified-security-specialty-exam/">cloud</a> security experts: the AWS Certified Security — Specialty exam is here. This new exam allows experienced cloud security professionals to demonstrate and validate their knowledge of how to secure the AWS platform.</p> <p><strong>Oracle:</strong></p> <p>There was an incident where statistics were being gathered during prime operating hours causing performance issues.<br /> One DBA already verified <a href="https://mdinh.wordpress.com/2018/04/27/whos-gathering-db-stats/">GATHER_STATS_JOB</a> has already been configured to not run during critical hours.<br /> Speculation is stats are being gathered manually and how to prove this?</p> <p>An Interesting Problem with ODI: <a href="http://gokhanatil.com/2018/04/an-interesting-problem-with-odi-unable-to-retrieve-user-guid.html">Unable</a> to retrieve user GUID</p> <p>Modifications to <a href="http://www.ora-solutions.net/web/2018/04/26/modifications-to-hidden-parameters-between-12-1-0-2-and-12-2-0-1/">hidden</a> parameters between 12.1.0.2 and 12.2.0.1</p> <p>logic apps inserting to an ‘<a href="http://dbaharrison.blogspot.com/2018/04/logic-apps-inserting-to-on-premises.html">on premises</a>’ database</p> <p>Maven <a href="http://barrymcgillin.blogspot.com/2018/04/maven-duplicated-versions-consolidated.html">Duplicated</a> Versions consolidated with flatten</p> <p><strong>SQL Server:</strong></p> <p>Detecting Problems on <a href="http://www.sqlservercentral.com/redirect/articles/171094/">Databases</a> that use Snapshot-based Transaction Isolation</p> <p>Packaged-Application Database <a href="http://www.sqlservercentral.com/articles/packaged+applications/134244/">Nightmares</a> &#8211; A Horror Story</p> <p>Concurrency Week: An Odd Case of <a href="https://www.brentozar.com/archive/2018/04/an-odd-case-of-blocking/">Blocking</a></p> <p>Introducing <a 
href="https://azure.microsoft.com/en-us/blog/introducing-sql-information-protection-for-azure-sql-database-and-on-premises-sql-server/">SQL</a> Information Protection for Azure SQL Database and on-premises SQL Server!</p> <p>Accelerate <a href="https://azure.microsoft.com/en-us/blog/accelerate-data-analytics-with-spark-connector-for-azure-sql-database-sql-server/">real-time</a> big data analytics with Spark connector for Microsoft SQL Databases</p> <p><strong>PostgreSQL:</strong></p> <p>Thanks to Jeremy Schneider for the PostgreSQL section.</p> <p>The Seattle Times is forecasting a balmy high of almost 80 degrees Fahrenheit today. That&#8217;s unseasonably warm for this part of the year, but nobody is complaining! And the PostgreSQL world continues to heat up too. As usual, lots of exciting new content online to keep up with. :)</p> <p>Starting off with headlines &#8211; I think the top prize goes to the new PostgreSQL website! At PostgresConf last week, Scott Yara from Pivotal had a keynote where he talked about how the PostgreSQL website hadn&#8217;t changed in about 15 years. Within a matter of hours, the new website rolled out. No kidding! In true PostgreSQL style, I couldn&#8217;t find any official announcement from the web team beyond this one little <a href="https://twitter.com/postgresql/status/986635393696174081">tweet</a>. (Though there&#8217;s plenty of commentary online!)</p> <p>Three other headlines deserve mention. First, <a href="https://content.pivotal.io/announcements/pivotal-software-lists-on-nyse-as-pvtl">Pivotal Software</a> is now officially a public company. They forked PostgreSQL years ago to create their product &#8220;greenplum&#8221; and now they have open sourced the entire code base, hired key PostgreSQL developers, and are aggressively merging the community code to catch up to the latest PostgreSQL release. 
Their rapid pivot is a strong endorsement of the broader PostgreSQL community and codebase, and the IPO is a great indicator of its growing commercial importance.</p> <p>Second &#8211; on the tails of Microsoft&#8217;s announcement last month &#8211; <a href="https://cloudplatform.googleblog.com/2018/04/Cloud-SQL-for-PostgreSQL-now-generally-available-and-ready-for-your-production-workloads.html">Google Cloud SQL</a> for PostgreSQL is now officially GA.</p> <p>Third, <a href="https://www.percona.com/blog/2018/04/24/percona-expands-services-offerings-with-postgresql-support/">Percona</a> &#8211; perhaps one of the best-known open source database companies &#8211; announced this week that they are officially adding PostgreSQL support to their services offerings. I also noticed a number of PostgreSQL sessions at Percona Live this week!</p> <p>Moving beyond headlines, what articles were people talking about the most this week? A few stand out to me.</p> <p>First, on March 28 Craig Ringer sent an email to the postgresql hackers list reporting on a user whose PostgreSQL database became corrupted after a storage error. After a lot of digging, Craig realized that the root problem was incorrect assumptions about how the Linux kernel implements fsync(). In fact it seems that every OS platform behaves a little differently in how it handles errors when flushing filesystem cache to disk &#8211; which means that it&#8217;s even harder than we thought to write cross-platform code that reliably gets data to persistent storage in a default OS config with filesystem caching. I have to wonder how many other databases are also affected by this; it can&#8217;t just be PostgreSQL.</p> <p>The best write-up I&#8217;ve seen so far is on <a href="https://lwn.net/Articles/752063/">LWN</a> and it just went public today!</p> <p>Outside of the fsync tale, I&#8217;ll mention two other articles that I thought had a fair bit of attention. 
On April 16, <a href="https://blog.anayrat.info/en/2018/04/16/postgresqls-developments-for-high-volumes-processing/">Adrien Nayrat</a> wrote a really nice high-level overview listing out many specific technical enhancements that PostgreSQL has shipped in the past few years. It&#8217;s a testament to the surprising velocity of development happening in this already-quite-advanced database.</p> <p>And after spending a day at PostgresConf last week, <a href="https://www.zdnet.com/article/has-the-time-finally-come-for-postgresql/">Tony Baer</a> published a nice article in ZDNet along the familiar theme that something is happening with PostgreSQL these days.</p> <p>Is something happening with PostgreSQL?</p> <p>Of course, companies using the PostgreSQL code to build products is nothing new. IBM&#8217;s <a href="https://wiki.postgresql.org/wiki/PostgreSQL_derived_databases">Netezza? Pivotal</a>&#8216;s Greenplum? Amazon&#8217;s Redshift? Check out the related documents on the PostgreSQL wiki for a list of many products which you didn&#8217;t know were based on this database:</p> <p>But I think there might be something happening. Maybe the PostgreSQL community itself is getting some new currency and importance. I also notice that many recent startups building on PostgreSQL are moving toward building as extensions instead of forks.</p> <p>At PostgresConf last week, the closing panel included a VC-backed startup doing timeseries in PostgreSQL (timescale). I had dinner with the co-founder of a startup doing AI in PostgreSQL (ziff). I saw articles on twitter over the past two weeks about startups doing stream processing in PostgreSQL (pipelinedb) and building a graph database on PostgreSQL (edgedb). Even Cockroach chose PostgreSQL for its wire protocol and client libraries. Three of these companies are aiming to ship as extensions.</p> <p>I&#8217;m leaving these links off the newsletter because I want to focus on core PostgreSQL here, but feel free to google them. 
My point here is simply that PostgreSQL might be more important than you thought, if you didn&#8217;t realize just how big the ecosystem is!</p> <p>Anyway, moving on&#8230; let&#8217;s dive into a few technical articles.</p> <p>First up, a quick article on every DBA&#8217;s job #1: recovery (and backups). Making sure data is safe and durable. On April 16, <a href="https://severalnines.com/blog/top-backup-tools-postgresql">Viorel Tabara</a> published a nice list of &#8220;Top Backup Tools for PostgreSQL&#8221; on the severalnines blog.</p> <p>Quick example of how powerful regular expressions are in PostgreSQL? Take a look at this short blog on <a href="https://cybersandwich.com/programming/adding-custom-delimiters-to-a-string-with-regex-postgresql-and-plpgsql/">cybersandwich.com</a> that was published April 24.</p> <p>Two articles about bind variables showed up recently on the jOOQ blog &#8211; a reliable source of great articles digging into database optimizers. While the examples were focused on Oracle, both articles touch on PostgreSQL too. First, Why <a href="https://blog.jooq.org/2018/04/12/why-sql-bind-variables-are-important-for-performance/">SQL Bind Variables</a> are Important for Performance &#8230; and second, When Using Bind Variables is not Enough: <a href="https://blog.jooq.org/2018/04/13/when-using-bind-variables-is-not-enough-dynamic-in-lists/">Dynamic IN Lists</a>.</p> <p>Finally, my past newsletters have been following <a href="https://tapoueh.org/blog/2018/04/postgresql-data-types-date-and-time-processing/">Dimitri Fontaine</a>&#8216;s series on PostgreSQL data types. I&#8217;m getting behind. He&#8217;s got five new articles out already!!! 
Date and time processing, <a href="https://tapoueh.org/blog/2018/04/postgresql-data-types-network-addresses/">network addresses</a>, <a href="https://tapoueh.org/blog/2018/04/postgresql-data-types-ranges/">ranges</a>, <a href="https://tapoueh.org/blog/2018/04/postgresql-data-types-arrays/">arrays</a> and <a href="https://tapoueh.org/blog/2018/04/postgresql-data-types-xml/">XML</a>!</p> <p>Finally, would you be interested in a digest of PostgreSQL articles in video format instead of email? Last week I stumbled across <a href="https://www.scalingpostgres.com/episodes/9-high-volume-processing-pg_wal-issues-restore-pg_receivewal/">Creston</a> Jamison&#8217;s blog where he does exactly that! His most recent digest covers 10 articles (many of which I also have covered in writing). Check it out!</p> <p>That&#8217;s a wrap for today. Have a great week and keep learning!</p> </div></div> Fahd Mirza https://blog.pythian.com/?p=104001 Thu May 03 2018 14:58:50 GMT-0400 (EDT) Build an Integrated Extract using JSON https://dbasolved.com/2018/05/03/build-an-integrated-extract-using-json/ <p dir="auto">Now that Oracle GoldenGate 12.3 Microservices have been out for about 9 months, there seem to be more and more discussions around how microservices can be used. The microservices architecture provides a faster way for users to build extracts, replicats, distribution paths and many other items by using a JSON document and simply calling a REST end-point. </p> <p dir="auto">In this post, I&#8217;ll show you how to build an integrated extract using JSON and REST APIs. The first thing you need to understand is the steps it currently takes to build an extract in GGSCI/AdminClient.</p> <p dir="auto">Note: AdminClient can be used, with debug on, to see how these commands translate back into JSON and REST calls.</p> <p dir="auto">To build an Integrated Extract via GGSCI/AdminClient:</p> <p dir="auto">1. add extract exttst, integrated, begin now<br />2. 
register extract exttst, database container pdb1<br />3. add exttrail aa, extract exttst, megabytes 250<br />4. start extract exttst</p> <p dir="auto">As you can tell, it takes 4 steps to add and start an extract in an Oracle GoldenGate configuration. </p> <p dir="auto">If you&#8217;re a command-line geek or a developer who wants to do more with Oracle GoldenGate, the microservices architecture provides you a way to build an extract via JSON files. A simple JSON file for building an integrated extract looks as follows:</p> <p dir="auto">{<br /> "description":"Integrated Extract",<br /> "config":[<br /> "Extract EXTTST",<br /> "ExtTrail bb",<br /> "UseridAlias SGGATE",<br /> "Table SOE.*;"<br /> ],<br /> "source":{<br /> "tranlogs":"integrated"<br /> },<br /> "credentials":{<br /> "alias":"SGGATE"<br /> },<br /> "registration":{<br /> "containers": [ "pdb1" ],<br /> "optimized":false<br /> },<br /> "begin":"now",<br /> "targets":[<br /> {<br /> "name":"bb",<br /> "sizeMB":250<br /> }<br /> ],<br /> "status":"stopped"<br />}</p> <p dir="auto">This JSON example describes all the attributes needed to build an integrated extract. 
The main items in this JSON are:</p> <p dir="auto">Description &#8211; Provides a description for the parameter file<br />Config &#8211; Details for the associated parameter file<br />Source &#8211; Where the extract should read transactions from<br />Credentials &#8211; Which credentials in the credential store should be used<br />Registration &#8211; Registers the extract with the database and against associated PDBs<br />Begin &#8211; From what point in time the extract should start<br />Targets &#8211; Which trail files the extract should write to<br />Status &#8211; Whether the extract should be started or not</p> <p dir="auto">These 8 categories cover what we traditionally did in the classic architecture. With all these items in the JSON file, you can now quickly build the extract by calling a simple curl command. </p> <p dir="auto">To build the extract, you need to know the required REST API end-point. All extracts are built against the Administration Server (AdminService) within the microservices architecture. In my configuration, the AdminService is running on port 16000, so the REST API end-point would be:</p> <p dir="auto">{{Source_AdminServer}}/services/v2/extracts/{{extract_name}}</p> <p dir="auto"><a href="http://localhost:16000/services/v2/extracts/EXTTST" rel="nofollow">http://localhost:16000/services/v2/extracts/EXTTST</a></p> <p dir="auto">The REST API end-point requires you to specify the extract name in the URL. Now with the URL and associated JSON, you can create an extract with a simple cURL command or embed the call into an application. 
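</p>
<p dir="auto">Before looking at cURL, here is a minimal Python sketch of the same call. It only assembles the end-point URL and JSON payload for this demo (the host, port, extract name, trail name, credential alias and PDB are the values assumed in this post); actually sending the request with an HTTP client and AdminService credentials is left to the caller:</p>

```python
import json

def build_extract_request(admin_server, extract_name, trail, alias, pdb):
    """Assemble the AdminService REST call that creates an Integrated Extract.

    Returns (url, body); nothing is sent over the network here.
    """
    url = "{0}/services/v2/extracts/{1}".format(admin_server, extract_name)
    payload = {
        "description": "Integrated Extract",
        "config": [
            "Extract {0}".format(extract_name),
            "ExtTrail {0}".format(trail),
            "UseridAlias {0}".format(alias),
            "Table SOE.*;",
        ],
        "source": {"tranlogs": "integrated"},      # read from integrated tranlogs
        "credentials": {"alias": alias},           # alias from the credential store
        "registration": {"containers": [pdb], "optimized": False},
        "begin": "now",
        "targets": [{"name": trail, "sizeMB": 250}],
        "status": "stopped",                       # create the extract stopped, as above
    }
    return url, json.dumps(payload)

url, body = build_extract_request("http://localhost:16000", "EXTTST", "bb", "SGGATE", "pdb1")
print(url)  # http://localhost:16000/services/v2/extracts/EXTTST
```

<p dir="auto">Posting that body to the URL, authenticated as an AdminService user (for example with the requests library), is equivalent to the cURL command shown next.</p>
<p dir="auto">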
An example of a cURL command that would be used is:</p> <p dir="auto">curl -X POST \<br /> <a href="http://localhost:16000/services/v2/extracts/EXTTST" target="_blank" rel="nofollow">http://localhost:16000/services/v2/extracts/EXTTST</a> \<br /> -H 'Cache-Control: no-cache' \<br /> -d '{<br /> "description":"",<br /> "config":[<br /> "Extract EXTTST",<br /> "ExtTrail bb",<br /> "UseridAlias SGGATE",<br /> "Table SOE.*;"<br /> ],<br /> "source":{<br /> "tranlogs":"integrated"<br /> },<br /> "credentials":{<br /> "alias":"SGGATE"<br /> },<br /> "registration":{<br /> "containers": [ "pdb1" ],<br /> "optimized":false<br /> },<br /> "begin":"now",<br /> "targets":[<br /> {<br /> "name":"bb",<br /> "sizeMB":250<br /> }<br /> ],<br /> "status":"stopped"<br />}'</p> <p dir="auto">Once the extract is created, you will notice that the extract is stopped. This is due to the “status” that was fed through the JSON document. You should be able to start the extract and see transactions being extracted.</p> <p dir="auto">Enjoy!!!</p> Bobby Curtis http://dbasolved.com/?p=1877 Thu May 03 2018 14:15:00 GMT-0400 (EDT) StarEast, InteropITX and GDPR http://dbakevlar.com/2018/05/stareast-interopitx-and-gdpr/ <p>I’m getting ready to get on a plane between two events today and have been so busy that there’s been a break in blogging.  
That’s right folks, Kellyn has let a few things slide&#8230;.</p> <p style="text-align: center;"><a href="http://dbakevlar.com/2018/05/stareast-interopitx-and-gdpr/62753167-26e2-4b98-a8f1-9cd364185b5d/" rel="attachment wp-att-7921"><img class="alignnone size-full wp-image-7921" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/05/62753167-26E2-4B98-A8F1-9CD364185B5D.gif?resize=200%2C150" alt="" width="200" height="150" data-recalc-dims="1" /></a></p> <p>For those people on top of all the happenings in Kevlar’s life, I’ve been busy removing 15 years of possessions from my home so we can sell it in the next month, along with the purchase, upgrade and consolidation into a 42Ft. travel trailer.  It’s quite an undertaking, so a few things have had to be put lower on the priority list to complete the rest.  I am finishing up a technical review of an Oracle Cloud book from Apress and signed on to write a Women in Technology book.</p> <p>I’m happiest when I’m busy, but this is a bit too much for anyone, needless to say.  I’m going to add a short blog while I have a minute on what is going on this week.  I’m representing <a href="https://www.delphix.com/">Delphix</a> and just finished speaking at <a href="https://stareast.techwell.com/">Techwell’s StarEast</a> Software Testing conference in Orlando.  This session was incredibly well attended, with great interaction from those there, (I love questions and comments, what can I say?)  I met with the audience that requested my “Genius Bar” meeting and was able to update my slides to complement the audience, which was a majority of testers looking to embrace DevOps in their business.</p> <p>I will then get on a plane in a few hours for Las Vegas for the <a href="https://www.interop.com/">Interop ITX</a> conference.  Thanks to the referral from Karen Lopez, I’m going to speak on data and DevOps at this event, (if my plane will stop with the delays, already&#8230;. 
:))  I’m looking forward to getting back to Las Vegas, (wasn’t I just there last week for Collaborate??) and hopefully get to see Karen Lopez and maybe even Gwen Shapira for a minute or two!</p> <p>Keep an eye on this space-  lots of great stuff in the next couple weeks with Great Lakes Oracle Conference, (GLOC)  Data Summit and SQL Saturday Dallas!</p> <p>Tags:&nbsp;&nbsp;<a href="http://dbakevlar.com/tag/conferences/" rel="tag">Conferences</a>, <a href="http://dbakevlar.com/tag/data/" rel="tag">Data</a>, <a href="http://dbakevlar.com/tag/dataops/" rel="tag">DataOps</a>, <a href="http://dbakevlar.com/tag/delphix/" rel="tag">Delphix</a>, <a href="http://dbakevlar.com/tag/devops/" rel="tag">DevOps</a>, <a href="http://dbakevlar.com/tag/oracle/" rel="tag">oracle</a>, <a href="http://dbakevlar.com/tag/testing/" rel="tag">Testing</a></p><hr style="color:#EBEBEB" /><small>Copyright © <a href="http://dbakevlar.com">DBA Kevlar</a> [<a href="http://dbakevlar.com/2018/05/stareast-interopitx-and-gdpr/">StarEast, InteropITX and GDPR</a>], All Right Reserved. 
2018.</small><br> dbakevlar http://dbakevlar.com/?p=7920 Thu May 03 2018 13:58:10 GMT-0400 (EDT) Cloudscape Podcast Episode 2 in review – A deeper dive into the latest in Amazon Web Services https://blog.pythian.com/blog-post-cloudscape-podcast-episode-2-amazon-web-services/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">I recently joined Chris Presley for episode two of his new podcast, </span><a href="https://blog.pythian.com/?s=cloudscape"><span style="font-weight: 400;">Cloudscape</span></a><span style="font-weight: 400;">, to talk about what’s happening in the world of cloud-related matters. My focus was to share the most recent events surrounding Amazon Web Services (AWS).</span></p> <p><span style="font-weight: 400;">Topics of discussion included at-rest encryption in DynamoDB, AWS network bandwidth increases, M5 instances on AWS, and the latest Serverless announcement for AWS.</span></p> <p><b>The implementation of at-rest Encryption</b></p> <p><span style="font-weight: 400;">We spoke a little about how large enterprises such as </span><a href="https://aws.amazon.com/"><span style="font-weight: 400;">Amazon</span></a><span style="font-weight: 400;"> have been adding encryption to many of their services, the most recent being DynamoDB, a common backend for mobile app providers. </span></p> <p><span style="font-weight: 400;">With DynamoDB, when a new table is added, you can choose to enable encryption on local secondary indexes and global secondary indexes, all using the Advanced Encryption Standard (AES) 256. It also uses a service default key stored in KMS, has very low overhead and is transparent, so you don’t have to manage any of it from the application layer because it’s all going to be done for you on the back end. </span></p> <p><span style="font-weight: 400;">A huge plus for DynamoDB is that there’s no charge. 
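</span></p>
<p><span style="font-weight: 400;">Enabling it amounts to a single flag at table-creation time. The boto3 sketch below only assembles the create_table arguments and does not call AWS; the table name and key schema are made up for illustration:</span></p>

```python
def encrypted_table_spec(table_name):
    """Build the create_table arguments for a DynamoDB table with
    server-side encryption at rest enabled.

    Pass the result to boto3.client("dynamodb").create_table(**spec)
    when AWS credentials are available; nothing is sent from here.
    """
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        # One extra entry buys AES-256 encryption at rest for the table
        # and its indexes, using the service default key in KMS.
        "SSESpecification": {"Enabled": True},
    }

spec = encrypted_table_spec("mobile_sessions")
print(spec["SSESpecification"])  # {'Enabled': True}
```

<p><span style="font-weight: 400;">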
You pay for the calls to the Key Management Service (KMS) but you get the encryption at no extra charge.</span></p> <p><b>Increased EC2 Network Bandwidth</b></p> <p><span style="font-weight: 400;">Amazon increased the network bandwidth for EC2 instances using the Elastic Network Adapter (ENA), in most cases up to 25Gb/s.</span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">Instances need to be running the latest ENA-enabled AMI and they need to be modern instances to take advantage of this enhanced networking. Enhanced networking includes other benefits besides increased bandwidth, including lower CPU utilization and lower latency connections.  These new instances and connectivity pair well with activities like machine learning, where you’ll need that bandwidth as you begin to scale your activities to multiple nodes.</span></p> <p><span style="font-weight: 400;">You may have also heard that new hypervisor changes are in the pipeline in the form of Nitro, which promises to further reduce virtualization overhead and improve performance.  Now is a great time to begin migrating to the latest instances and AMIs to take advantage of these great new features.</span></p> <p><b>M5 instances on AWS</b></p> <p><span style="font-weight: 400;">M5 instances are the rollout of a new type of virtualization hypervisor that Amazon is calling “Nitro.” I admit I had to dig into this, and what I found was amazing. </span></p> <p><span style="font-weight: 400;">Brendan Gregg, a performance engineer at Netflix, wrote an </span><a href="http://www.brendangregg.com/blog/2017-11-29/aws-ec2-virtualization-2017.html"><span style="font-weight: 400;">amazing blog post</span></a><span style="font-weight: 400;"> that details the history of virtualization and explains exactly why Nitro is a big deal. The gist is that Nitro is a departure from the Xen hypervisor and a step towards near-metal performance.  
</span></p> <p><span style="font-weight: 400;">Nitro improves the performance of network and storage I/O via SR-IOV and also introduces hardware virtualization support for interrupts. These enhancements over Nitro’s predecessor result in performance measurements that are oftentimes within 1% of the performance of bare-metal servers. Nice!</span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">I’m excited to watch Nitro develop and become more broadly available. Nitro is currently available on C5 and M5 instances.</span></p> <p><b>Serverless Announcement for AWS</b></p> <p><span style="font-weight: 400;">Amazon introduced a new serverless application repository to help people discover new serverless solutions that are quick to implement.</span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;"><br /> </span><a href="https://blog.pythian.com/aws-serverless-application-repository-now-generally-available/"><span style="font-weight: 400;">I’ve talked about this previously</span></a><span style="font-weight: 400;">, so be sure to check that post out for more details on my initial thoughts.</span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">I’m generally excited for Serverless, but think the new application repository needs time to mature.  
Eventually, I think it will be a great place to look for out-of-the-box solutions to common problems.</span></p> <p>&nbsp;</p> <p><span style="font-weight: 400;">***</span></p> <p><span style="font-weight: 400;">This was a summary of the AWS topics we discussed during the podcast. Chris also welcomed </span><a href="https://pythian.com/experts/john-laham/"><span style="font-weight: 400;">John Laham</span></a><span style="font-weight: 400;"> (Google Cloud Platform) and <a href="https://pythian.com/experts/warner-chaves/">Warner Chaves</a> (Microsoft Azure), who discussed topics related to their areas of expertise. </span></p> <p><span style="font-weight: 400;">To hear the full conversation, click </span><a href="https://blog.pythian.com/cloudscape-podcast-episode-2-february-2018-roundup-key-aws-gcp-azure-updates/"><span style="font-weight: 400;">here</span></a><span style="font-weight: 400;"> and be sure to subscribe to the podcast to be notified when a new episode has been released.</span></p> </div></div> Greg Baker https://blog.pythian.com/?p=104041 Thu May 03 2018 11:24:20 GMT-0400 (EDT) Cosmos DB Consistency Models – SQL On The Edge Episode 16 https://blog.pythian.com/cosmos-db-consistency-models-sql-edge-episode-16/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>We&#8217;re back with a new video blog to discuss yet another interesting facet of Cosmos DB! For those who are not familiar with the service, Cosmos DB is <a href="https://azure.microsoft.com/en-us/services/cosmos-db/">Microsoft Azure&#8217;s NoSQL Database as a Service offering</a>. Cosmos DB has many interesting properties including native geo-replication, multi-model API support and multiple consistency models. 
This time we are going to focus on this last property.</p> <h2><strong>The Consistency Dilemma</strong></h2> <p>NoSQL databases popularized the term &#8216;eventual consistency&#8217; as opposed to the usual &#8216;strong consistency&#8217; that is more commonly used on relational database systems. Eventual consistency was seen as a reasonable trade-off on distributed databases in exchange for easier, linear scalability. The main trade-off is summed up by this question: do I want the most truthful data I can get or do I want the data as fast as possible even if it&#8217;s not the latest?</p> <p>In a single data center or close geographic proximity this is usually not a large trade-off, since modern connections can have multiple nodes sync and converge on the same value in a matter of milliseconds. However, as we move towards building global &#8220;planet-scale&#8221; databases, it becomes more interesting to consider consistency models that live &#8220;in-between&#8221; the full strong guarantee and the very loose &#8220;eventual consistency&#8221;.</p> <h2><strong>Cosmos DB Consistency Models</strong></h2> <p>Because Cosmos DB was built with geographical replication as a core feature, the Microsoft team decided to implement multiple consistency models that could provide more controlled behaviors and trade-offs for developers than simply &#8220;eventual consistency&#8221;. 
Let&#8217;s see all the consistency options provided by Cosmos DB:</p> <ol> <li><strong>Strong</strong>: Cosmos will provide the latest value of the record.</li> <li><strong>Bounded Staleness</strong>: Cosmos will provide a value up to a certain specified lag of time or versions.</li> <li><strong>Session</strong>: Cosmos will allow a session to always read its own writes; for other sessions it provides consistent prefix.</li> <li><strong>Consistent Prefix</strong>: Cosmos will provide monotonic reads and always respect ordering.</li> <li><strong>Eventual</strong>: no guarantee other than if a value stops being updated, eventually all copies will be the same.</li> </ol> <p>According to Microsoft, the majority of their Cosmos databases are set to Session consistency by default. The cool thing is that you don&#8217;t have to stick to the default; you can override this setting at the client level and even at the individual query level. There is one caveat, though: you can only override towards a &#8220;looser&#8221; consistency model, not to a stronger one. For example, if you select Session as your default then you can override it to Consistent Prefix or Eventual but not to Bounded Staleness.</p> <h3><strong>Impact on Performance</strong></h3> <p>The goal of manipulating these consistency models is to get the best performance for your application by having the lowest tolerable latency for a particular operation. Cosmos DB also introduces another incentive: some consistency models consume more Request Units than others. Request Units are Cosmos DB throughput reservations, and doing Strong or Bounded Staleness operations consumes double the amount of the other consistency levels. 
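As a toy model of the override caveat and the RU pricing just described: the level ordering and the "loosen only" rule come from the text above, while the helper functions and the flat 2x multiplier are simplifications of my own, not the actual Cosmos DB SDK API.

```python
# Consistency levels ordered strongest (index 0) to loosest (index 4),
# matching the list above.
LEVELS = ["Strong", "Bounded Staleness", "Session",
          "Consistent Prefix", "Eventual"]

def can_override(default_level, requested_level):
    """A request may only loosen consistency, never strengthen it."""
    return LEVELS.index(requested_level) >= LEVELS.index(default_level)

def relative_ru_cost(level):
    """Strong and Bounded Staleness reads consume roughly double the
    Request Units of the looser levels (illustrative multiplier only)."""
    return 2 if level in ("Strong", "Bounded Staleness") else 1

# A Session-default account can drop to Eventual...
print(can_override("Session", "Eventual"))           # True
# ...but cannot escalate to Bounded Staleness.
print(can_override("Session", "Bounded Staleness"))  # False
```

In the real service the default is chosen on the Cosmos DB account, and the per-request override is passed through the client SDK; the point of the sketch is just that the allowed overrides form a one-way ladder from strong to loose.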
This means that if you can live with the guarantees provided by Session, Consistent Prefix or Eventual, then you will be able to push twice the amount of operations per second through your Cosmos DB container.</p> <h3><strong>Future development</strong></h3> <p>At this point in time there are a couple of limitations that I&#8217;m hopeful the Microsoft team will resolve in the near future. For example, strong consistency is not allowed on a multi-region database. As well, I hope that Microsoft will develop global consistency on a full multi-master model to enable true global read-write anywhere databases.</p> <h2><strong>Demo Time!</strong></h2> <p>Alright, let&#8217;s jump into the demo now, and let&#8217;s check out the impact of the different consistency models on the Request Unit consumption of a Cosmos DB workload. Enjoy!</p> <p><iframe src="https://www.youtube.com/embed/WkQ8cemKwI4?rel=0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p> </div></div> Warner Chaves https://blog.pythian.com/?p=104036 Thu May 03 2018 10:33:39 GMT-0400 (EDT) ODTUG Kscope18 Conference: Change of Presentation Times https://richardfoote.wordpress.com/2018/05/03/odtug-kscope18-conference-change-of-presentation-times/ For those of you that were planning to attend my presentations at the upcoming ODTUG Kscope18 Conference in Orlando (yes, I&#8217;m specifically talking to the two of you), both presentations have been allotted new times and locations. 
They are now confirmed as: New Indexing Features Introduced in Oracle 12c Release 2 (and 18c): When:  June [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5616 Wed May 02 2018 23:13:29 GMT-0400 (EDT) How to Compare a Specific Subset of DB Objects https://www.thatjeffsmith.com/archive/2018/05/how-to-compare-a-specific-subset-of-db-objects/ <p>A question from the innerwebs &#8211;</p> <blockquote><p>Hello,</p> <p>please, is it possible to define and save a (sub)set of objects for comparison? In Databasediff wizard we have an option to define objects, but it&#8217;s a bit cumbersome to do the same thing over and over again, so it would be highly useful to save options and selected items for comparison for future use.</p> <p>Thanks a lot,</p></blockquote> <p>The answer is, Yes!</p> <p><em><a href="http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/12c_sqldev/cart_sql_dev/12cCart.html" rel="noopener" target="_blank">We have a nice Oracle By Example on this topic available here. </a></em></p> <h3>Use the Cart!</h3> <p>Define the objects to be used for the compare using Cart. Find them in your target database. 
Drag them to a cart.</p> <div id="attachment_6457" style="width: 834px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/01/17cart1.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/01/17cart1.png" alt="" width="824" height="594" class="size-full wp-image-6457" /></a><p class="wp-caption-text">Open the Cart.</p></div> <p>I&#8217;m going to add 3 TABLES from HR.</p> <p>Now I&#8217;m going to invoke the DIFF using the DIFF button on the Cart toolbar.</p> <div id="attachment_6618" style="width: 801px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-cart1.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-cart1.png" alt="" width="791" height="537" class="size-full wp-image-6618" /></a><p class="wp-caption-text">I need to pick my HR_COPY connection for the Destination Connection.</p></div> <p>I need to change my CONNECTION to the Target Instance (HR_COPY) and click &#8216;Apply.&#8217;</p> <p>And now we&#8217;re only going to compare those 3 tables. 
</p> <div id="attachment_6619" style="width: 734px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-cart3.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-cart3.png" alt="" width="724" height="289" class="size-full wp-image-6619" /></a><p class="wp-caption-text">Sweet.</p></div> <p>And then we&#8217;ll get our results, as you normally would using the Database Diff item on the Tools menu.</p> <div id="attachment_6620" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-cart2.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-cart2.png" alt="" width="1024" height="745" class="size-full wp-image-6620" /></a><p class="wp-caption-text">I&#8217;m missing some important stuff it appears.</p></div> <h3>More on the Cart</h3> <p>The Cart can do a LOT.</p> <p><a href="https://www.thatjeffsmith.com/archive/2018/01/exporting-multiple-tables-to-a-single-excel-file-using-sql-developers-cart/">Exporting Multiple Tables to a Single Excel file using the Cart</a></p> <p><a href="https://www.thatjeffsmith.com/archive/2014/09/30-sql-developer-tips-in-30-days-day-8-use-the-cart-to-build-deployment-scripts/" rel="noopener" target="_blank">Use the Cart to Build Deployment Scripts</a></p> <p><a href="https://www.thatjeffsmith.com/archive/2011/11/introducing-the-sql-developer-shopping-cart/" rel="noopener" target="_blank">Introducing the &#8216;Shopping&#8217; Cart</a></p> <p><a href="https://www.thatjeffsmith.com/archive/2015/03/batch-loading-your-oracle-database-cloud-services-via-the-sql-developer-command-line-interface/" rel="noopener" target="_blank">Batch Load Your Cloud via the Cart and our command line interface</a></p> <h3>More on Database Diff</h3> <p><a href="https://www.thatjeffsmith.com/archive/2012/09/sql-developer-database-diff-compare-objects-from-multiple-schemas/" rel="noopener" 
target="_blank">Compare Objects from Multiple Schemas</a></p> <p><a href="https://www.thatjeffsmith.com/archive/2012/09/using-database-diff-to-compare-schemas-when-you-dont-have-the-destination-user-password/" rel="noopener" target="_blank">Database Diff Enhancements</a></p> <h3>PS What about PL/SQL?</h3> <p>Yeah, you can drop that into a CART and do a DB DIFF to compare your PLSQL objects as well. </p> <div id="attachment_6621" style="width: 1130px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-plsql.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/05/diff-plsql.png" alt="" width="1120" height="705" class="size-full wp-image-6621" /></a><p class="wp-caption-text">It&#8217;s just a text-compare&#8230;</p></div> thatjeffsmith https://www.thatjeffsmith.com/?p=6617 Wed May 02 2018 14:47:14 GMT-0400 (EDT) Planning Track Highlights – Leo Gonzalez https://www.odtug.com/p/bl/et/blogaid=791&source=1 Leo Gonzalez, Planning track lead for ODTUG Kscope18, shares his top five Planning sessions with reasons why they are his "don't-miss sessions" at ODTUG Kscope18. ODTUG https://www.odtug.com/p/bl/et/blogaid=791&source=1 Wed May 02 2018 12:35:25 GMT-0400 (EDT) EPM Infrastructure Track Highlights – Richard Philipson https://www.odtug.com/p/bl/et/blogaid=788&source=1 Richard Philipson, EPM Infrastructure track lead for ODTUG Kscope18, shares his top EPM Infrastructure Track Sessions with reasons why they are his "don't-miss sessions" at ODTUG Kscope18. ODTUG https://www.odtug.com/p/bl/et/blogaid=788&source=1 Wed May 02 2018 12:35:20 GMT-0400 (EDT) Essbase Track Highlights – Matias Panario https://www.odtug.com/p/bl/et/blogaid=789&source=1 Matias Panario, Essbase track lead, shares his top four sessions with reasons why they are his “don’t-miss sessions” at ODTUG Kscope18. 
ODTUG https://www.odtug.com/p/bl/et/blogaid=789&source=1 Wed May 02 2018 12:35:14 GMT-0400 (EDT)