ODTUG Aggregator ODTUG Blogs http://localhost:8080 Tue, 14 Aug 2018 02:52:29 +0000 http://aggrssgator.com/ No communication skills? Tech is not for you! http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/Re-T_Iy2vxo/ <p><img class="alignleft wp-image-7110" src="https://oracle-base.com/blog/wp-content/uploads/2017/04/loud-2028623_640.png" alt="" width="200" height="237" />Sometimes the tech world drives me to despair. A quick look around Stack Exchange and the forums shows that most people have terrible written communication skills. I have a long history of trying to encourage people to improve their communication skills because it really matters.</p> <p>This is something I have had to work on myself, and still do. If you don&#8217;t put some effort into developing your communication skills you will always remain a second-class member of staff.</p> <p>I&#8217;ve got to the point now where I&#8217;m becoming hard-nosed about it. If you&#8217;ve not already recognised this in yourself and started to try and do something about it, why should an employer waste their time with you?</p> <p>If you really don&#8217;t know where to start, you might want to look through this series of posts I wrote a while ago.</p> <ul> <li><a href="/blog/2015/05/29/writing-tips-summary/">Writing Tips</a></li> <li><a href="/blog/2014/01/13/public-speaking-tips/">Public Speaking Tips</a></li> <li><a href="/blog/2017/07/31/what-employers-want-a-series-of-posts/">What Employers Want</a></li> </ul> <p>You might think it&#8217;s all about silent geniuses, but the tech industry is really about communication. If you can&#8217;t communicate efficiently with colleagues and the users in the business area you are working in, there is no point in you being there.</p> <p>Please, please, please make the effort. Once you do you will never look back!</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/08/12/no-communication-skills-tech-is-not-for-you/">No communication skills? Tech is not for you!</a> was first posted on August 12, 2018 at 9:06 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/Re-T_Iy2vxo" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8273 Sun Aug 12 2018 04:06:35 GMT-0400 (EDT) LEAP#409 Boldport Bugs: The Conehead https://blog.tardate.com/2018/08/leap409-boldport-bugs-conehead.html <p>Conocephalus is a genus of bush-crickets, known as coneheads. Now I have one beautifully rendered in a 3D kit from Boldport. It even has a pretty convincing FM-synthesised chirp that varies in response to lighting conditions.</p> <p>Boldport projects never fail to inspire some new learning. In this case it introduced me to <a href="https://puredata.info/">Pure Data</a> - a nifty open source visual programming language for multimedia.
I reproduced the Pure Data chirp model and added a few more controls to make it easier to play around with.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/conehead">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a></p> <p><a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/conehead"><img src="https://leap.tardate.com/BoldportClub/conehead/assets/conehead_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/08/leap409-boldport-bugs-conehead.html Sat Aug 11 2018 14:33:16 GMT-0400 (EDT) OPS$Oracle user after Wallet Creation in Oracle 12c http://www.fahdmirza.com/2018/08/opsoracle-user-after-wallet-creation-in.html <div dir="ltr" style="text-align: left;" trbidi="on"><br /><b>----- In Oracle 12.1.0.2, created the wallet by using below commands:</b><br /><br />TEST$ orapki wallet create -wallet "/u01/app/oracle/admin/TEST/wallet" -pwd ****&nbsp; -auto_login_local<br />Oracle PKI Tool : Version 12.1.0.2<br />Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br /><br /><a name='more'></a><br />TEST$ mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys ********<br />Oracle Secret Store Tool : Version 12.1.0.2<br />Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.<br /><br />Enter wallet password:<br />Create credential oracle.security.client.connect_string1<br /><br /><b>----- But when I logged into the database with sys user, the show user showed OPS$ORACLE user instead of sys:</b><br /><br />TEST$ sqlplus /@TEST2<br /><br />SQL*Plus: Release 12.1.0.2.0 Production on Thu Aug 9 13:09:38 2018<br /><br />Copyright (c) 1982, 2014, Oracle.&nbsp; All rights reserved.<br /><br />Last Successful login time: Thu Aug 09 2018 03:18:20 -04:00<br /><br />Connected to:<br />Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production<br />With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,<br />Advanced Analytics and Real Application Testing options<br /><br />SQL&gt; sho user<br />USER is "OPS$ORACLE"<br />SQL&gt;<br /><br /><b>----- So made following changes and it worked fine:</b><br /><br />Put following entry in sqlnet.ora file:<br /><br />SQLNET.WALLET_OVERRIDE = TRUE<br />The SQLNET.WALLET_OVERRIDE entry allows this method to override any existing OS authentication configuration.<br /><br />and used mkstore to create the wallet:<br /><br />TEST$&nbsp; mkstore -wrl "/u01/app/oracle/admin/TEST/wallet" -createCredential TEST2 sys<br /><div><br /></div></div> Fahd Mirza tag:blogger.com,1999:blog-3496259157130184660.post-376112356164360134 Fri Aug 10 2018 01:37:00 GMT-0400 (EDT) Log Buffer #552: A Carnival of the Vanities for DBAs https://blog.pythian.com/log-buffer-552-a-carnival-of-the-vanities-for-dbas/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>This Log Buffer Edition covers Cloud, Oracle, and MySQL.</p> <p><strong>Cloud:</strong></p> <p>AWS <a href="https://aws.amazon.com/blogs/mt/aws-config-best-practices/">Config</a> is a service that maintains a configuration history of your AWS resources and evaluates the configuration against best practices and your internal policies.</p> <p><a href="https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/">AWS</a> released a new IAM feature that makes it easier for you to delegate permissions management to 
trusted employees.</p> <p><a href="https://aws.amazon.com/blogs/security/amazon-elasticache-redis-now-pci-dss-compliant-payment-card-data-in-memory/">Amazon</a> ElastiCache for Redis has achieved the Payment Card Industry Data Security Standard (PCI DSS). This means that you can now use ElastiCache for Redis for low-latency and high-throughput in-memory processing of sensitive payment card data, such as Customer Cardholder Data (CHD).</p> <p><a href="https://istio.io/blog/2018/announcing-1.0/">Many</a> companies use Istio to connect, manage and secure their services from the ground up.</p> <p><a href="https://azure.microsoft.com/en-gb/blog/current-use-cases-for-machine-learning-in-healthcare/">Machine Learning</a> (ML) is causing quite the buzz at the moment, and it’s having a huge impact on healthcare. Payers, providers, and pharmaceutical companies are all seeing applicability in their spaces and are taking advantage of ML today.</p> <p>&nbsp;</p> <h6>Highlights of the <a href="https://www.confluent.io/blog/highlights-of-the-kafka-summit-san-francisco-2018-agenda/">Kafka</a> Summit San Francisco 2018 Agenda:</h6> <p><strong>Oracle:</strong></p> <p><a href="https://jonathanlewis.wordpress.com/2018/07/31/extended-histograms/">Today</a>’s little puzzle comes courtesy of the Oracle-L mailing list as listed by Jonathan.</p> <p><a href="http://andrejusb.blogspot.com/2018/07/text-classification-with-deep-neural.html">Text</a> classification implementation with TensorFlow can be simple. One of the areas where text classification can be applied: chatbot text processing and intent resolution.</p> <p>Automating the <a href="https://technology.amis.nl/2018/07/27/automate-the-installation-of-oracle-jdk-8-and-10-on-rhel-and-debian-derivatives/">Oracle</a> JDK installation on RHEL derivatives (such as CentOS, Oracle Linux) and Debian derivatives (such as Mint, Ubuntu) differs.</p> <p>Oracle <a href="https://www.mahir-quluzade.com/2018/07/oracle-database-18c-install-on-premises.html">Database</a> 18c was available on Oracle Cloud. Now everybody can download Oracle Database 18c On-Premises for Linux (LINUX.X64_180000_db_home.zip)</p> <p>Christian <a href="https://antognini.ch/2018/07/observations-about-the-scalability-of-data-loads-in-adwc/">Antognini</a> is posting some interesting observations about scalability of data loads in ADWC.</p> <p><strong>MySQL:</strong></p> <p><a href="https://elephantdolphin.blogspot.com/2018/07/a-kind-introduction-to-mysql-windowing.html">Windows</a> over data can be framed and this is where things can get wild and woolly. Table x has a column named x (me being overly creative again) that has the values one through 10. If we sum the values of x we can get different values depending on how the frame is constructed.</p> <p>MariaDB 5.5.61, <a href="https://mariadb.org/mariadb-5-5-61-mariadb-connector-node-js-0-7-0-and-mariadb-connector-j-2-2-6-now-available/">MariaDB</a> Connector/Node.js 0.7.0 and MariaDB Connector/J 2.2.6 now available</p> <p>With package installations of MySQL using YUM or APT, it’s quick and easy to manage your server’s state by executing systemctl commands to stop, start, restart, and status. 
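</p> <p>For example, with a package-based install you can check or bounce the service like this (the unit name is an assumption; it is typically <code>mysqld</code> on RHEL-style systems and <code>mysql</code> on Debian-style ones):</p> <pre>sudo systemctl status mysqld
sudo systemctl restart mysqld</pre> <p>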
But what do you do when you want to install MySQL using the binary installation with a single or with multiple <a href="https://blog.pythian.com/manage-multiple-mysql-binary-installations-with-systemd/">MySQL</a> instances?</p> <p><a href="https://www.thoughts-on-java.org/hibernate-5-3/">Hibernate</a> 5.3 is available for a little more than three months now, and last week the team released the third maintenance release.</p> <p>There are many nice <a href="http://mysql.wisborg.dk/2018/07/29/mysql-8-0-12-instant-alter-table/">changes</a> included in the MySQL 8.0.12 release that were published a couple of days ago. One of the most exciting is the ability to make instant schema changes to tables.</p> </div></div> Fahd Mirza https://blog.pythian.com/?p=104905 Thu Aug 09 2018 15:32:59 GMT-0400 (EDT) Cloudscape Podcast Episode 7: August 2018 https://blog.pythian.com/cloudscape-podcast-episode-7-august-2018/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>In this episode of The Cloudscape Podcast, we are joined by AWS expert Greg Baker and Microsoft Azure expert Warner Chaves. We will be doing the usual round-up of the latest developments from the cloud vendors and filling you in on everything you need to know. This month we are missing a representative from GCP so will just be switching between news from Azure and AWS.</p> <p>We start off the discussion with Greg talking about Lifecycle Management on AWS. From there, Warner explains Azure’s new instant data movement before we break down AWS’s Snowball Edge and Storage Gateway. We also hear about Microsoft’s Data Lake and Firewall and Warner fills us in on the newest things around IoT Edge. We round up chatting about Amazon CloudTrail and the Free Tier Widget that is now available. For all this and much more be sure to tune and get it all!</p> <p>Key Points From This Episode:</p> <p>• Some information on the upcoming AWS re:Invent 2018 event.<br /> • The new Lifecycle Management of snapshots on AWS.<br /> • Microsoft’s new instant data movement from data warehouses.<br /> • Amazon’s Snowball Edge device and how it can help people out in the field.<br /> • Azure’s Data Lake and how it relates to Blob Storage.<br /> • The recent quality of life improvements Amazon has made Storage Gateway.<br /> • A preview of AWS Bring Your Own IP rollout.<br /> • Information on Azure’s new Firewall.<br /> • Azure&#8217;s latest with regards to DNS SLA.<br /> • A new feature from AWS in the Elastic Load Balancers.<br /> • Azure’s latest with regards to IoT Edge.<br /> • Microsoft Azure new container updates.<br /> • The brand new AWS’ Free Tier Widget.<br /> • Cloud monetization and Azure’s Virtual Win in preview.<br /> • The convenient feature that Azure is offering through File Sync.<br /> • A new way to use Amazon CloudTrail and MQ.<br /> • And much more!</p> <p>Links Mentioned in Today’s Episode:</p> <p><a href="https://nextconf.eu/">NEXT</a><br /> <a href="https://www.linkedin.com/in/gregbaker2/">Greg Baker</a><br /> <a href="https://mvp.microsoft.com/en-us/PublicProfile/5001385?fullName=Warner%20Chaves">Warner Chaves</a><br /> <a href="https://reinvent.awsevents.com/">AWS re:Invent 2018</a><br /> <a href="https://aws.amazon.com/developer/community/evangelists/jeff-barr/">Jeff Barr</a><br /> <a href="https://aws.amazon.com/snowball-edge/">AWS Snowball Edge</a><br /> <a href="https://www.ubuntu.com/">Ubuntu</a><br /> <a href="https://azure.microsoft.com/en-us/solutions/data-lake/">Azure Data Lake</a><br /> <a 
href="https://azure.microsoft.com/en-us/services/storage/blobs/">Blob Storage</a><br /> <a href="https://aws.amazon.com/storagegateway/">AWS Storage Gateway</a><br /> <a href="https://aws.amazon.com/about-aws/whats-new/2018/07/announcing-bring-your-own-ip-for-amazon-virtual-private-cloud-preview/">AWS Bring Your Own IP</a><br /> <a href="https://azure.microsoft.com/en-us/services/azure-firewall/">Azure Firewall</a><br /> <a href="https://aws.amazon.com/elasticloadbalancing/">AWS Elastic Load Balancer</a><br /> <a href="https://azure.microsoft.com/en-us/services/iot-edge/">IoT Edge</a><br /> <a href="https://aws.amazon.com/eks/">Amazon EKS</a><br /> <a href="https://kubernetes.io/">Kubernetes</a><br /> <a href="https://aws.amazon.com/about-aws/whats-new/2018/07/aws-billing-dashboard-free-tier-widget/">AWS Free Tier Widget</a><br /> <a href="https://azure.microsoft.com/en-us/roadmap/azure-file-sync/">File Sync</a><br /> <a href="https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-overview">Azure Site Recovery</a><br /> <a href="https://aws.amazon.com/amazon-mq/">Amazon MQ</a><br /> <a href="https://aws.amazon.com/cloudtrail/">CloudTrail</a><br /> <a href="https://visualstudio.microsoft.com/xamarin/">Xamarin </a></p> <p><iframe src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/482851797&amp;color=%23ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true" width="100%" height="166" frameborder="no" scrolling="no"></iframe></p> </div></div> Chris Presley https://blog.pythian.com/?p=104961 Thu Aug 09 2018 09:38:25 GMT-0400 (EDT) Partitioning -- 2 : Simple Range Partitioning -- by DATE https://hemantoracledba.blogspot.com/2018/08/partitioning-2-simple-range.html <div dir="ltr" style="text-align: left;" trbidi="on">Range Partitioning allows you to separate a logical table into a number of distinct physical segments, each segment holding data that maps to a range of values.<br />(I encourage you to read <a href="https://hemantoracledba.blogspot.com/2018/08/partitioning-1-introduction.html" target="_blank">the Introduction</a> in the first post in this series)<br /><br />The simplest and most common implementation is Range Partitioning by a DATE column.<br /><br /><pre>SQL&gt; l<br /> 1 create table sales_data<br /> 2 (sale_id number primary key,<br /> 3 sale_date date,<br /> 4 invoice_number varchar2(21),<br /> 5 customer_id number,<br /> 6 product_id number,<br /> 7 sale_value number)<br /> 8 partition by range (sale_date)<br /> 9 (partition P_2015 values less than (to_date('01-JAN-2016','DD-MON-YYYY'))<br /> 10 tablespace TBS_YEAR_2015,<br /> 11 partition P_2016 values less than (to_date('01-JAN-2017','DD-MON-YYYY'))<br /> 12 tablespace TBS_YEAR_2016,<br /> 13 partition P_2017 values less than (to_date('01-JAN-2018','DD-MON-YYYY'))<br /> 14 tablespace TBS_YEAR_2017,<br /> 15 partition P_2018 values less than (to_date('01-JAN-2019','DD-MON-YYYY'))<br /> 16 tablespace TBS_YEAR_2018,<br /> 17 partition P_2019 values less than (to_date('01-JAN-2020','DD-MON-YYYY'))<br /> 18 tablespace TBS_YEAR_2019,<br /> 19 partition P_MAXVALUE values less than (MAXVALUE)<br /> 20 tablespace USERS<br /> 21* )<br />SQL&gt; /<br /><br />Table created.<br /><br />SQL&gt; <br /></pre><br /><br />Here, I have created each Partition in a separate tablespace.&nbsp; Note that the Partition Key (SALE_DATE) does not have to be the same as the Primary Key (SALE_ID)<br /><br />I have also created a MAXVALUE 
Partition&nbsp; (Some DBAs/Developers may mistakenly assume this to be a *default* partition.&nbsp; Range Partitioning, unlike List Partitioning, does not have the concept of a "default" partition.&nbsp; This simply is the Partition for incoming rows that have Partition Key value that is higher than the last (highest) defined Partition Key Upper Bound (31-Dec-2019 23:59:59 in this case)).<br /><br />I can look up the data dictionary for these partitions in this manner :<br /><br /><pre>SQL&gt; select partition_name, tablespace_name <br /> 2 from user_tab_partitions<br /> 3 where table_name = 'SALES_DATA'<br /> 4 order by partition_position<br /> 5 /<br /><br />PARTITION_NAME TABLESPACE_NAME<br />------------------------------ ------------------------------<br />P_2015 TBS_YEAR_2015<br />P_2016 TBS_YEAR_2016<br />P_2017 TBS_YEAR_2017<br />P_2018 TBS_YEAR_2018<br />P_2019 TBS_YEAR_2019<br />P_MAXVALUE USERS<br /><br />6 rows selected.<br /><br />SQL&gt; <br /></pre><br /><br />Partitions are ordered by Partition *Position*&nbsp; not Name.<br /><br />How do I add a new partition for data for the year 2020 ?&nbsp; By "splitting" the MAXVALUE partition.<br /><br /><pre>SQL&gt; alter table sales_data <br /> 2 split partition P_MAXVALUE<br /> 3 at (to_date('01-JAN-2021','DD-MON-YYYY'))<br /> 4 into<br /> 5 (partition P_2020 tablespace TBS_YEAR_2020, partition P_MAXVALUE)<br /> 6 /<br /><br />Table altered.<br /><br />SQL&gt; <br />SQL&gt; select partition_name, high_value<br /> 2 from user_tab_partitions<br /> 3 where table_name = 'SALES_DATA'<br /> 4 order by partition_position<br /> 5 /<br /><br />PARTITION_NAME HIGH_VALUE<br />------------------------------ ---------------------------------------------<br />P_2015 TO_DATE(' 2016-01-01 00:00:00', 'SYYYY-MM-DD<br />P_2016 TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD<br />P_2017 TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD<br />P_2018 TO_DATE(' 2019-01-01 00:00:00', 'SYYYY-MM-DD<br />P_2019 TO_DATE(' 2020-01-01 00:00:00', 'SYYYY-MM-DD<br />P_2020 TO_DATE(' 2021-01-01 00:00:00', 'SYYYY-MM-DD<br />P_MAXVALUE MAXVALUE<br /><br />7 rows selected.<br /><br />SQL&gt; <br />SQL&gt; l<br /> 1 select partition_name, tablespace_name<br /> 2 from user_tab_partitions<br /> 3 where table_name = 'SALES_DATA'<br /> 4* order by partition_position<br />SQL&gt; /<br /><br />PARTITION_NAME TABLESPACE_NAME<br />------------------------------ ------------------------------<br />P_2015 TBS_YEAR_2015<br />P_2016 TBS_YEAR_2016<br />P_2017 TBS_YEAR_2017<br />P_2018 TBS_YEAR_2018<br />P_2019 TBS_YEAR_2019<br />P_2020 TBS_YEAR_2020<br />P_MAXVALUE USERS<br /><br />7 rows selected.<br /><br />SQL&gt; <br /></pre><br /><br />Note that, irrespective of the data format I specify in the CREATE or SPLIT commands, Oracle presents the Upper Bound Date (HIGH_VALUE) in it's own format, using a Gregorian Calendar.<br /><br />How do I remove an older partition ?<br /><br /><pre>SQL&gt; alter table sales_data<br /> 2 drop partition P_2015<br /> 3 /<br /><br />Table altered.<br /><br />SQL&gt; <br /></pre><br /><br />A DROP command is very simple.<br /><br />In my next post, I will add Indexes to this table.<br /><br /><br /><br /></div> Hemant K Chitale tag:blogger.com,1999:blog-1931548025515710472.post-5041363658730964039 Thu Aug 09 2018 04:35:00 GMT-0400 (EDT) Partitioning -- 1 : Introduction https://hemantoracledba.blogspot.com/2018/08/partitioning-1-introduction.html <div dir="ltr" style="text-align: left;" trbidi="on">I am beginning a new series of Blog Posts on Partitioning in 
Oracle.&nbsp; I plan to cover 11g and 12c.&nbsp; &nbsp;I might add posts on changes in 18c&nbsp; (which is really 12.2.0.2 currently)<br /><br />First, this is <a href="https://drive.google.com/file/d/0B0NkYQj3ZlwKMld6d3h0bFctdWc/view?usp=drive_web" target="_blank">my presentation at AIOUG Sangam 11</a><br />and this <a href="https://drive.google.com/file/d/0B0NkYQj3ZlwKVk5HVmVQb2Q5aDg/view?usp=drive_web" target="_blank">the corresponding article</a><br /><br />This series of posts will have new examples, from the simple to the complex, not present in the above presentation / article.</div> Hemant K Chitale tag:blogger.com,1999:blog-1931548025515710472.post-1291353969579147317 Thu Aug 09 2018 03:54:00 GMT-0400 (EDT) Facebook : My Recent Experience http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/GR9Aaq_Jj3g/ <p><img class="alignleft wp-image-7978" src="https://oracle-base.com/blog/wp-content/uploads/2018/04/disgusting-1300592_640.png" alt="" width="200" height="234" />Here&#8217;s a little story of what has happened to me recently on Facebook.</p> <p>First a little history lesson. For a long time I had an extremely small list of friends on Facebook. I would only accept friend requests from people I really knew, like IRL friends and a few work colleagues. That was it. No Oracle people were allowed&#8230; The <a href="http://debrasoracle.blogspot.com/">wife</a> has a rule that only people she would let stay in her house are friends on Facebook. Nobody is allowed in my house, so my definition had to be a little different than that.</p> <p>Some time ago I changed my stance on Facebook friends and started to accept other people, mostly assigning them to the &#8220;Restricted&#8221; list, and so it went on for some time.</p> <p>Recently I tweeted that I was getting a lot of friend requests and wondered what was going on. I figured I have a lot of readers, so it&#8217;s natural people would reach out, and I didn&#8217;t think to much about it. After a while I started to get some really odd things happen, so I did a little digging and found some rather &#8220;interesting&#8221; people in my friend list. I don&#8217;t really want to say more about it than that.</p> <p>The long and short of it was I decided to remove several thousand friends and I&#8217;ve returned to something close to my original policy. I&#8217;m sorry if you are a decent person and feel offended that I have unfriended you, but if I don&#8217;t really know you, that&#8217;s the way it is.</p> <p>By the way, Facebook used to let you mass delete friends, but that is no longer possible. What&#8217;s more, if you delete a lot of them at once they lock certain features of your account. I had to write to Facebook to explain what I was doing and why before they would let me unfriend people again. I know it&#8217;s an automatic check for suspicious behaviour, but it would be nice if they spent more effort checking what people are saying and doing on their platform&#8230;</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/08/09/facebook-my-recent-experience/">Facebook : My Recent Experience</a> was first posted on August 9, 2018 at 8:25 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. 
If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/GR9Aaq_Jj3g" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8280 Thu Aug 09 2018 03:25:22 GMT-0400 (EDT) Video: Explain Plan, Execution Plan, AutoTrace, and Real Time SQL Monitoring https://www.thatjeffsmith.com/archive/2018/08/video-explain-plan-execution-plan-autotrace-and-real-time-sql-monitoring/ <p>15 minutes overview of how to use these features in Oracle SQL Developer:</p> <ul> <li>Get an Explain Plan</li> <li>Customize the display of the plan</li> <li>Get a cached plan</li> <li>Use DBMS_XPLAN</li> <li>Compare Plans</li> <li>Use Real Time SQL Monitoring</li> <li>Use AutoTrace.</li> <p><iframe width="560" height="315" src="https://www.youtube.com/embed/-IWxdI9-Z-U" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p> <p>And if you want an awesome video, MUCH more in depth about how to read a plan, then I suggest this from @<a href="https://www.youtube.com/watch?v=WhuufIeGefE" rel="noopener" target="_blank">sqlmaria</a>. </p> thatjeffsmith https://www.thatjeffsmith.com/?p=6909 Wed Aug 08 2018 17:15:55 GMT-0400 (EDT) Oracle DBSAT Discoverer Feature https://blog.pythian.com/oracle-dbsat-discoverer-feature/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Oracle’s Database Security Assessment Tool (DBSAT) is a nice and powerful free tool that performs Database and OS Security Audits and provides recommendations based on the findings. The tool and documentation can be downloaded from the following Oracle link (although an account with CSI will be required for the download):</p> <p><a href="http://www.oracle.com/technetwork/database/security/dbsat/downloads/index.html">http://www.oracle.com/technetwork/database/security/dbsat/downloads/index.html</a></p> <p>&#8220;<span id="kmPgTpl:r1:0:ol22" class="xq"><label>Oracle Database Security Assessment Tool (DBSAT) (<strong>Doc ID 2138254.1</strong>)</label>&#8220;</span></p> <p>DBSAT has three components:</p> <ol> <li> <div><strong>The Collector:</strong> Executes SQL queries and runs operating system commands to collect data from the system to be assessed.</div> </li> <li><strong>The Reporter:</strong> Analyzes the collected data and generates a Database Security Assessment Report in HTML, Excel, JSON, and text formats. The Reporter can run on any machine: PC, laptop, or server (same or different than where the Collector ran).</li> <li><strong>The Discoverer:</strong> Executes SQL queries and collects data from the system to be assessed, based on the settings specified in configuration files. The collected data is then used to generate a Database Sensitive Data Assessment Report in HTML and CSV formats. 
The Discoverer can run on any machine: PC, laptop, or server.</li> </ol> <p>The scope of this blog is mainly to look at the third component, the Discoverer.</p> <p><strong>Discoverer Prerequisites:</strong></p> <p>The Discoverer is a Java program and requires the Java Runtime Environment (JRE) 1.8 (jdk8-u172) or later to run.</p> <p>Also, JAVA_HOME needs to be set in the environment variables, otherwise you may face an error like: <code>Error: Java version 1.6 or later is required</code></p> <p>During testing on my VM I had some issues even the Java version was correct:</p> <p><code>oracle@localhost:~/DBSAT$ java -fullversion<br /> java full version "1.7.0_03-b04"<br /> oracle@localhost:~/DBSAT$<br /> oracle@localhost:~/DBSAT$ echo $JAVA_HOME<br /> /usr/bin/java<br /> oracle@localhost:~/DBSAT$</code></p> <p>Setting JAVA_HOME without the &#8220;java&#8221; folder resolved the problem:<br /> <code>oracle@localhost:~/DBSAT$ export JAVA_HOME=/usr</code></p> <div>Later on, while working in a Windows environment, I had to set the JAVA_HOME with the full path:</div> <div><code>set JAVA_HOME=C:\Program Files\Java\jre1.8.0_141</code></div> <div></div> <div>The Discoverer executes SQL queries and collects data from the system to be assessed, based on the settings specified in the configuration and pattern files:</div> <div></div> <div>&#8230;/DBSAT/Discover/conf/sample_dbsat.config</div> <div>&#8230;/DBSAT/Discover/conf/sensitive_en.ini</div> <p>The .config file needs to be edited to specify the connectivity. For example the Database Listener Port and Hostname, Schemas to be Audited or All, and any tables or even specific columns to exclude from the scan. Example config file:</p> <p><code>DB_HOSTNAME = localhost<br /> DB_PORT = ####<br /> DB_SERVICE_NAME =<br /> SCHEMAS_SCOPE = ALL<br /> EXCLUSION_LIST_FILE =</code></p> <p>You will see some default Sensitive Categories are provided. Your own customized can also be added:</p> <p><code>[Sensitive Categories] PII = High Risk<br /> PII - Address = High Risk<br /> PII - IDs = High Risk<br /> PII - IT Data = High Risk<br /> PII-Linked = Medium Risk<br /> PII-Linked - Birth Details = Medium Risk<br /> Job Data = Medium Risk<br /> Financial Data - PCI = High Risk<br /> Financial Data - Banking = Medium Risk<br /> Health Data = Medium Risk</code></p> <p>The second file &#8220;sensitive_en.ini&#8221; has a number of predetermined patterns to search for. This way, the DBSAT Discoverer helps you to verify your sensitive data (especially for GDPR compliance), assuming of course your columns are properly named or at least properly documented. If your columns do not have human-readable names, but they have appropriate comments in the metadata, DBSAT Discoverer still will report them if they contain sensitive data. Also, you can add your own custom patterns for it to scan for if required. Example:</p> <p><code>[LAST_NAME] COL_NAME_PATTERN = (^LNAME$)|((LAST|FAMILY|SUR|PATERNAL).*NAME$)<br /> COL_COMMENT_PATTERN = (Last|Family|Sur|Paternal).*Name<br /> SENSITIVE_CATEGORY = PII</code></p> <p><code>[STREET] COL_NAME_PATTERN = STREET|^ST$|AVENUE|ROAD|ALLEY|BOULEVARD|PARKWAY|PLAZA|POINT|VALLEY<br /> COL_COMMENT_PATTERN = Street.*(Address)?<br /> SENSITIVE_CATEGORY = PII - Address</code></p> <p>For those looking to audit and verify compliance with GDPR, DBSAT Discoverer can help. DBSAT reports are helpful for identifying tables containing personal and sensitive data which need to be protected, audited, masked, encrypted, etc.</p> <p>A sample for running usage? 
See below:</p> <p><code>./dbsat discover -c ./Discover/conf/Test12c_dbsat.config Test12c_dbsat_SensitiveData</code></p> <p>The -c option is to point to our configuration file (sample_dbsat.config =&gt; Test12c_dbsat.config)</p> <p>Also, by not including the -n parameter, the output report will be encrypted (which is recommended as this is all about sensitive data!).</p> <p>The resulting Output Report is named &#8220;Test12c_dbsat_SensitiveData&#8221;, and the output file will be &#8220;dbsat_test12c_report.zip&#8221; which will contain .html and .csv versions of the report.</p> <p>A couple of samples of findings on my test:</p> <p><img class="alignnone size-full wp-image-104745" src="https://blog.pythian.com/wp-content/uploads/Financial_SensitiveData.png" alt="" width="500" height="121" srcset="https://blog.pythian.com/wp-content/uploads/Financial_SensitiveData.png 500w, https://blog.pythian.com/wp-content/uploads/Financial_SensitiveData-465x113.png 465w, https://blog.pythian.com/wp-content/uploads/Financial_SensitiveData-350x85.png 350w" sizes="(max-width: 500px) 100vw, 500px" /></p> <p><img class="alignnone size-full wp-image-104748" src="https://blog.pythian.com/wp-content/uploads/SensitiveColumnDetails-1.jpg" alt="" width="750" height="135" srcset="https://blog.pythian.com/wp-content/uploads/SensitiveColumnDetails-1.jpg 750w, https://blog.pythian.com/wp-content/uploads/SensitiveColumnDetails-1-465x84.jpg 465w, https://blog.pythian.com/wp-content/uploads/SensitiveColumnDetails-1-350x63.jpg 350w" sizes="(max-width: 750px) 100vw, 750px" /></p> <p>Note the default categories captured columns in my table named &#8220;Credit_Card&#8221;. I also added a customized category named &#8220;Custom2&#8221; to test. It captures specifically my column &#8220;Card_Number&#8221; (mostly to test that the additional categories and pattern matching added to the&#8221;Test12c_dbsat.config&#8221; file worked as expected).</p> <p>Learn more about the DBSAT Tool <a href="https://blog.pythian.com/oracles-database-security-assessment-tool-dbsat-version-2-2-0-1/">here</a>.</p> <p>See the <a href="https://docs.oracle.com/cd/E93129_01/SATUG/SATUG.pdf">full DBSAT guide.</a></p> <p>&nbsp;</p> </div></div> Roy Salazar https://blog.pythian.com/?p=104713 Wed Aug 08 2018 15:30:40 GMT-0400 (EDT) MobaXTerm 10.9 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/rDKGbVFpswU/ <p><img class="alignleft wp-image-4858" src="https://oracle-base.com/blog/wp-content/uploads/2015/05/command-prompt.png" alt="" width="150" height="150" />Once again I&#8217;m late to the party. About a week ago <a href="http://mobaxterm.mobatek.net/">MobaXTerm 10.9</a> was released.</p> <p>The <a href="https://mobaxterm.mobatek.net/download-home-edition.html">downloads</a> and <a href="https://mobaxterm.mobatek.net/download-home-edition.html">changelog</a> are in the usual places.</p> <p>This is a great tool!</p> <p>Cheers</p> <p>Tim…</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/08/08/mobaxterm-10-9/">MobaXTerm 10.9</a> was first posted on August 8, 2018 at 1:06 pm.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/rDKGbVFpswU" height="1" width="1" alt=""/> Tim... 
https://oracle-base.com/blog/?p=8278 Wed Aug 08 2018 08:06:43 GMT-0400 (EDT) ORA-00600 [KUTYXTTE_TRANSFORM:FAILED] http://oracle-help.com/ora-errors/ora-00600-kutyxtte_transformfailed/ <p>Recently I started seeing this error on a few Exadata environments:</p> <p>ORA-00600: internal error code, arguments: [KUTYXTTE_TRANSFORM:FAILED]</p> <p>As per Metalink this is a known bug, <strong>28194173</strong>:</p> <p><strong>Patch 28194173: SQL ON 12.2 QUARANTINED AND CELL THROWING ORA-00600 [KUTYXTTE_TRANSFORM:FAILED]</strong></p> <p>I’ve seen this in 12.2.</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/ora-errors/ora-00600-kutyxtte_transformfailed/">ORA-00600 [KUTYXTTE_TRANSFORM:FAILED]</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5367 Wed Aug 08 2018 07:14:30 GMT-0400 (EDT) ORA-04030:QERGH hash-agg,kllcqas:kllsltba http://oracle-help.com/oracle-12c/oracle-12cr2/ora-04030qergh-hash-aggkllcqaskllsltba/ <p>If you hit ORA-04030: QERGH hash-agg,kllcqas:kllsltba in Oracle 12.2, do the following.</p> <p>Check the current limit:</p> <p>[oracle@local03 run]$ grep CRS_LIMIT_OPENFILE /u01/app/12.2.0.1/grid/crs/install/s_crsconfig_local03_env.txt<br /> Old value CRS_LIMIT_OPENFILE=65536</p> <p>Change the value and restart the CRS:</p> <p>New value CRS_LIMIT_OPENFILE=400000</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-12c/oracle-12cr2/ora-04030qergh-hash-aggkllcqaskllsltba/">ORA-04030:QERGH hash-agg,kllcqas:kllsltba</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5365 Wed Aug 08 2018 07:10:18 GMT-0400 (EDT) ORA-600[KCLANTILOCK_17] http://oracle-help.com/ora-errors/ora-600kclantilock_17/ <p>Recently I started seeing this error on a few RAC environments:</p> <p>ORA-00600: internal error code, arguments: [KCLANTILOCK_17]</p> <p>After an upgrade to 12.2, the instance crashed because of this error.</p> <p>As per Metalink this is a known bug, <strong>27162390</strong>:</p> <p><strong>Patch 27162390: LNX64-181-RAC: LMS HIT ORA-600[KCLANTILOCK_17], INST CRASH</strong></p> <p>I’ve seen this in 12.2. As per Oracle it is fixed in version 18.1.</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/ora-errors/ora-600kclantilock_17/">ORA-600[KCLANTILOCK_17]</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5363 Wed Aug 08 2018 06:49:28 GMT-0400 (EDT) ORA-600 [KDZA_CHECK_EQ NOT IN DICTIONARY] http://oracle-help.com/ora-errors/ora-600-kdza_check_eq-not-in-dictionary/ <p>Recently I started seeing this error on a few RAC environments:</p> <p>ORA-00600: internal error code, arguments: [KDZA_CHECK_EQ NOT IN DICTIONARY]</p> <p>As per Metalink this is a known bug, 24818566:</p> <p><strong>Patch 24818566: SQLLOAD FAILED WITH ORA-600 [KDZA_CHECK_EQ NOT IN DICTIONARY]</strong></p> <p>I’ve seen this in 11.2.0.4, 12.1.0.2 and 12.2. As per Oracle it is fixed in version 18.1.</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/ora-errors/ora-600-kdza_check_eq-not-in-dictionary/">ORA-600 [KDZA_CHECK_EQ NOT IN DICTIONARY]</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5361 Wed Aug 08 2018 06:45:16 GMT-0400 (EDT) ADF Performance Tuning: A Field Report https://technology.amis.nl/2018/08/08/adf-performance-tuning-a-field-report/ <p>Last week I was doing an extensive performance analysis / health check on a large ADF project, with the <a href="https://www.adfpm.com/adf-performance-monitor-major-new-version-7-0/" target="_blank" rel="noopener">newest version</a> of our ADF Performance Monitor product. In this performance assessment I focused, at a high level, on the most important performance bottlenecks. We could see in the ADF Performance Monitor that end-users experience very slow page load times; they were waiting much longer than needed. This ADF application needed attention; it could run more efficiently, like nearly all ADF applications can.
In this blog I describe some of my findings, maybe interesting for other ADF projects as well.</p> <h3>Complete overview</h3> <p>The first thing I always do is configuring the ADF Performance Monitor on all WebLogic managed servers (in this case 4) to have a complete overview of the performance:</p> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?ssl=1"><img data-attachment-id="49497" data-permalink="https://technology.amis.nl/2018/08/08/adf-performance-tuning-a-field-report/dashboard_25072018_high_brower_load_time/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?fit=1899%2C948&amp;ssl=1" data-orig-size="1899,948" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="dashboard_25072018_high_brower_load_time" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?fit=300%2C150&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?fit=702%2C350&amp;ssl=1" class="aligncenter size-large wp-image-49497" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?resize=702%2C350&#038;ssl=1" alt="" width="702" height="350" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?resize=1024%2C511&amp;ssl=1 1024w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?resize=300%2C150&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?resize=768%2C383&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/dashboard_25072018_high_brower_load_time.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></a></p> <p>&nbsp;</p> <p>In this case a typical daily performance summary was (top left section):</p> <ul> <li><strong>296435</strong> HTTP requests (7% very slow, 28% slow, and 65 normal)</li> <li><strong>1134 </strong>Errors (0,4%)</li> <li><strong>0,25 </strong>Seconds Average Server Process Time</li> <li><strong>0,57 </strong>Seconds Average Total Time an end-user needs to wait before an HTTP request is processed</li> </ul> <p>What already is strange here is that the AVG total time end-users needs to wait (0,57 Sec) is more than double the time the AVG process time by the application server (0,25 Sec)!</p> <h3>Problem 1: Very Slow Browser Load Time</h3> <p>On the chart at the right bottom we can see the explanation for this. 
In this chart we see in a glance in which layer processing time has been spent; database (yellow), webservice (pink), application server (blue), network (purple), and browser load time (grey).</p> <p>Read more on <a href="https://www.adfpm.com/adf-performance-tuning-a-field-report/" target="_blank" rel="noopener">adfpm.com</a> – our new website on the ADF Performance Monitor.</p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/08/08/adf-performance-tuning-a-field-report/">ADF Performance Tuning: A Field Report</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Frank Houweling https://technology.amis.nl/?p=49493 Wed Aug 08 2018 03:08:10 GMT-0400 (EDT) Introduction to Azure SQL Managed Instance – SQL On The Edge Episode 17 https://blog.pythian.com/introduction-to-azure-sql-managed-instance-sql-on-the-edge-episode-17/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>SQL Server is one of the most adopted and widely-used database engines in the world. Almost every company either builds on it or uses some enterprise third-party software that uses it as the back end. For this reason, it makes sense that Microsoft has invested in their SQL database offering in Azure from the initial years of the cloud platform.</p> <p>Looking at the timeline of SQL-related Azure releases, we can also see that in the last few years, Microsoft has aggressively increased these investments to provide customers with more choice and more power. Let&#8217;s see:</p> <ul> <li>Azure SQL Database was released in 2010.</li> <li>Performance-based tiers released in 2014.</li> <li>Elastic Pools released in 2015.</li> <li>Azure SQL Data Warehouse released in 2016.</li> <li>Azure SQL Managed Instance in preview now, H1 2018.</li> </ul> <p>At this point, Managed Instance does not have a set date to be generally available, but it is very likely that the team is trying to make this happen during the second half of this year. So with a new offering in place, let&#8217;s look at what makes it different, and why it might be the best fit for your SQL Server workload going forward.</p> <p><strong>Why Managed Instance?</strong><br /> From its inception, Azure SQL Database has focused on the database as the unit of deployment, security, and performance. While this fits well with new developments or SaaS providers, it wasn&#8217;t always a good fit for on-premises customers. Many existing SQL Server clients are used to having multiple databases per instance and being able to work with them on a low isolation environment. This means moving data back and forth, using three-part names, cross-database security, etc. For these clients, there is too much friction to migrate to the existing database-centric offering.</p> <p>This is the gap that Managed Instance is coming to fill. It will provide the instance as the fundamental unit and allow this close relationship between the databases that live inside of it. It will be a better fit for many existing SQL Server clients but it will also open up new scenarios that are not available on the single database Azure SQL Database offering. For example, you will be able to place a managed instance directly into a VNET, backup and restore from binary SQL Server backups and use features not allowed on the single database model like SQLCLR, linked servers, etc.</p> <p><strong>Compatibility</strong><br /> As I mentioned, backups can be restored into Managed Instance and they can be as old as SQL Server 2005 backups. 
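</p> <p>As a rough illustration of that restore path (the storage account, container, SAS token and database names below are placeholders, not taken from the original post), a native backup sitting in Azure Blob Storage can be restored into a Managed Instance with T-SQL along these lines:</p> <pre>-- Credential whose name matches the blob container holding the .bak file
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '&lt;SAS token&gt;';

-- Restore the native backup directly from its URL
RESTORE DATABASE MyLegacyDb
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyLegacyDb.bak';</pre> <p>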
However, once you are on Managed Instance, if you back up your database, you will get a backup that is for the latest SQL Server version; you won&#8217;t be able to simply back up back to an old version. The actual database compatibility models are the same as what Azure SQL Database offers, which is from 100 (SQL Server 2008) to 140 (SQL Server 2017).</p> <p>In regards to the other features, there are some limits based on the fact that this is a managed service. For example, linked servers will only be to SQL Server or Azure SQL databases since you can&#8217;t install third-party drivers on the underlying system. Similarly, SQL Agent currently only allows T-SQL jobs. Something like a command line job would provide direct access to the underlying operating system and thus could interfere with the operation of the service.</p> <p><strong>Pricing Model</strong><br /> The pricing model for Managed Instance is split into four components:</p> <ol> <li>Cores: the amount of cores will also dictate how much RAM you get.</li> <li>Data Storage: the first 32GB are included, then you pay in 32GB increments.</li> <li>Backup Storage: the amount in GB of your compressed backups.</li> <li>IO Rate: flat rate per 1 million IOPS.</li> </ol> <p>For clients that have an existing SQL Server investment with Software Assurance, Microsoft is providing the Azure Hybrid Benefit. This means you can migrate your SQL Server licenses into Managed Instance and get a large discount off the sticker price (about 40%).</p> <p>There are also some opt-in features that have to be paid for separately, like the ability to have active geo-replication (for DR) and security monitoring with Azure Threat Detection. During the preview, the DR options are not live yet but we can expect those when the service becomes generally available.</p> <p><strong>Hardware Specifications</strong><br /> In terms of technical specifications, Managed Instance is offered on two hardware generations: Gen4 and Gen5.</p> <p>Gen 4 CPUs are based on Intel E5-2673 v3 (Haswell) 2.4 GHz processors and 1 vCore maps to 1 physical core.</p> <p>Gen 5 CPUs are based on Intel E5-2673 v4 (Broadwell) 2.3 GHz processors and 1 vCore maps to 1 logical core (HT thread).</p> <p>The higher core counts (from 32 all the way to 80) are only available on Gen5 and also Gen5 has NVMe SSD storage. There is a drawback that Gen5 maps 5.5 GB of RAM per core versus 7GB of RAM per core for Gen4.</p> <p><strong>Service Tiers</strong><br /> The service is offered in two different tiers: General Purpose and Business Critical. Both of them are okay for production use and choosing one over the other depends on performance, RTO, and RPO requirements. For Tier 3 &#8211; Tier 2 workloads, General Purpose could be enough, for Tier 1 mission-critical workloads, definitely go with Business Critical.</p> <p>Business Critical supports higher IOPS with less latency by using local NVMe storage instead of remote Premium Storage. Business Critical also offers faster failover capabilities built on top of Availability Groups, with three replicas including one for read-only workloads. The final big distinction is that Business Critical is required if you use the In-Memory OLTP feature.</p> <p><strong>Demo</strong><br /> Okay, now that we have a good understanding of Managed Instance, let&#8217;s run a demo of creating your Managed Instance, connecting to it and extracting cores and memory information. 
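</p> <p>For reference, one way to pull core and memory counts once you are connected is a query like the following; this is a generic sketch and not necessarily the exact query used in the video demo:</p> <pre>-- Logical CPUs and physical memory visible to the instance
SELECT cpu_count,
       physical_memory_kb / 1024 AS physical_memory_mb
FROM sys.dm_os_sys_info;</pre> <p>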
Check it out on the video below, cheers!</p> <p>Azure SQL Managed Instance – SQL On The Edge Episode 17</p> <p><iframe src="https://www.youtube.com/embed/ZYm4mgcCaE4" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p> </div></div> Warner Chaves https://blog.pythian.com/?p=104635 Tue Aug 07 2018 12:59:46 GMT-0400 (EDT) SLOB http://oracledba.blogspot.com/2018/08/slob.html SLOB is an Oracle I/O workload generation tool kit, supports testing extreme REDO logging I/O (minimal amount of CPU overhead)<br />The SLOB package can be downloaded from&nbsp;<a href="https://kevinclosson.net/slob/">Kevin Closson's Blog: SLOB Resources</a><br />SLOB 2.4.2 requires Linux with Oracle Client<br /><h2>Installation</h2>Unzip the SLOB tar.gz file in the desired directory ($SLOB).<br />Create tablespace for SLOB data (the script creates IOPS)<br /><blockquote>$ cd $SLOB<br />$ sqlplus / as sysdba @ misc/ts</blockquote>Load SLOB data by running setup.sh script located in the $SLOB directory. Using two mandatory parameters:<br /><ul><li>Tablespace into which SLOB will create and load the test schemas</li><li>The number of schemas to create and load</li></ul><blockquote>$ ./setup.sh IOPS 8</blockquote><h2>configuration</h2>One time compilation<br /><blockquote>$ cd $SLOB/wait_kit<br />$ make</blockquote>Edit runtime parameter in slob.conf configuration file<br /><blockquote>$ cd $SLOB<br />$ vi slob.conf</blockquote>Execute slob using number of SLOB schemas<br /><blockquote>$ cd $SLOB<br />$ ./runit.sh 8</blockquote>We can edit the slob.conf file to modify some parameters.<br /><ul><li>UPDATE_PCT - Percentage of SLOB update operations</li><li>SCAN_PCT - percentage of short scan SELECT operations on a short scan table</li><li>RUN_TIME - Wall-­clock duration of a SLOB test in seconds</li><li>WORK_LOOP - SLOB test duration based on iterations</li><li>SCALE - Number of database blocks / Size</li><li>SCAN_TABLE_SZ - size of the short scan table(s)</li><li>WORK_UNIT - scope of blocks being manipulated by each operation</li><li>REDO_STRESS - HEAVY -&gt; generate significant amounts of redo logging</li><li>LOAD_PARALLEL_DEGREE - number of Oracle Database sessions concurrently inserting data</li><li>THREADS_PER_SCHEMA – sessions against each schema</li><li>DATABASE_STATISTICS_TYPE - “awr” or “statspack.”</li><li>ADMIN_SQLNET_SERVICE – separate service for admin operations</li><li>SQLNET_SERVICE_BASE – service for load balance and round robin connection</li><li>SQLNET_SERVICE_MAX - appended to SQLNET_SERVICE_BASE in a RAC testing scenario</li><li>DBA_PRIV_USER – sys / system or any DBA user</li><li>SYSDBA_PASSWD – password of the above user</li></ul><h2>Results &amp; tools</h2><ul><li>iostat.out - input/output statistics: Read Thoughput (MB/sec), Write Throughput (MB/sec) and information about queue lengths.</li><li>mpstat.out - CPU utilization (processors related statistics)</li><li>vmstat.out - virtual memory statistics</li><li>misc/awr_info.sh - extracts interesting information from the awr.txt file</li></ul><blockquote>$ misc/awr_info.sh awr_pdb.txt<br />FILE|SESSIONS|ELAPSED|DB CPU|DB Tm|EXECUTES|LIO|PREADS|READ_MBS|PWRITES|WRITE_MBS|REDO_MBS|DFSR_LAT|DPR_LAT|DFPR_LAT|DFPW_LAT|LFPW_LAT|TOP WAIT|<br />awr_pdb.txt||61|0.6|1.0|695|52106|1997| 16.6|0| 53|34.8| 250|0|0|0| 192|DB CPU 33.3 54.7|</blockquote> Yossi Nixon tag:blogger.com,1999:blog-6061714.post-4321192045897033292 Tue Aug 07 2018 10:30:00 GMT-0400 (EDT) Installing SQL Server 2008 R2 on Windows 2012 Cluster 
https://blog.pythian.com/installing-sql-server-2008-r2-windows-2012-cluster/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Despite being an older version, some applications may require you to install a SQL Server 2008 R2 clustered instance on a Windows 2012 environment. You will quickly find out that there are a number of compatibility issues that can make this task tricky. See below for a summary of the issues you can encounter and how to work around them in order to successfully install SQL Server.</p> <h2>Cluster Service Verification Rule Failed</h2> <p>The first issue you may run into is during the Setup Support Rules step of the SQL installation wizard. The <em>Cluster Service Verification Rule</em> fails, saying that the SQL Server failover cluster service is not online or cannot be accessed:</p> <p><img class="aligncenter wp-image-104876" src="https://blog.pythian.com/wp-content/uploads/clusterservicerule.png" alt="Cluster Service Verification Rule Failed" width="462" height="197" srcset="https://blog.pythian.com/wp-content/uploads/clusterservicerule.png 819w, https://blog.pythian.com/wp-content/uploads/clusterservicerule-465x198.png 465w, https://blog.pythian.com/wp-content/uploads/clusterservicerule-350x149.png 350w" sizes="(max-width: 462px) 100vw, 462px" /></p> <p>&nbsp;</p> <p><strong>Solution</strong>: Install the <em>Failover Cluster Automation Server</em> feature on all cluster nodes.</p> <ol> <li>Open Server Manager</li> <li>Click on Manage -&gt; Add Roles and Features</li> <li>In the “Features” page of the wizard, select Failover Cluster automation Server under Remote Server Administration Tools -&gt; Feature Administration Tools -&gt; Failover Clustering Tools<br /> <img class="aligncenter wp-image-104877" src="https://blog.pythian.com/wp-content/uploads/installfailoverclusterserver.png" alt="Install cluster feature" width="463" height="423" srcset="https://blog.pythian.com/wp-content/uploads/installfailoverclusterserver.png 673w, https://blog.pythian.com/wp-content/uploads/installfailoverclusterserver-465x425.png 465w, https://blog.pythian.com/wp-content/uploads/installfailoverclusterserver-350x320.png 350w" sizes="(max-width: 463px) 100vw, 463px" /></li> <li>Click Next and Install</li> <li>Repeat for every node in the cluster</li> <li>Rerun the SQL Installation Wizard</li> </ol> <p>&nbsp;</p> <h2>Missing Option in Cluster Security Policy</h2> <p>The next issue is on the <em>Cluster Security Policy</em> page of the SQL Installation Wizard. When running the wizard on Windows 2008, you will have an option to either “<em>Use Service SIDs</em>” or “<em>Use Domain Group</em>”. The recommended option for Windows 2008+ is the service SIDs. However, when running on Windows 2012, it does not give you this option and just asks you to enter a domain group for the Database Engine and SQL Server Agent.</p> <p><img class="aligncenter wp-image-104878" src="https://blog.pythian.com/wp-content/uploads/clustersecpol.png" alt="Cluster Security Policy " width="627" height="462" srcset="https://blog.pythian.com/wp-content/uploads/clustersecpol.png 802w, https://blog.pythian.com/wp-content/uploads/clustersecpol-465x343.png 465w, https://blog.pythian.com/wp-content/uploads/clustersecpol-350x258.png 350w" sizes="(max-width: 627px) 100vw, 627px" /></p> <p><strong>Solution</strong>: This is a bug. The simple fix is to leave it blank and just click <em>Next</em>. 
When the Domain Group button is left empty, it will use Service SIDs by default.</p> <p>&nbsp;</p> <h2>Windows Server 2003 FILESTREAM Hotfix Check Rule Failed</h2> <p>You reach the end of the setup wizard and get to the <em>Cluster Installation Rules</em> page, but the &#8220;<em>Windows Server 2003 FILESTREAM Hotfix Check</em>&#8221; rule fails even if you do not wish to install Filestream. The error says: “<em>Windows Server 2003 hotfix KB937444 is not installed. This hotfix is required for FILESTREAM to work on a Windows Server 2003-based cluster</em>”; however, the mentioned hotfix is not valid for Windows 2012.</p> <p><img class="aligncenter wp-image-104879" src="https://blog.pythian.com/wp-content/uploads/filestreamcheck.png" alt="Filestream hotfix rule" width="478" height="211" srcset="https://blog.pythian.com/wp-content/uploads/filestreamcheck.png 778w, https://blog.pythian.com/wp-content/uploads/filestreamcheck-465x205.png 465w, https://blog.pythian.com/wp-content/uploads/filestreamcheck-350x154.png 350w" sizes="(max-width: 478px) 100vw, 478px" /></p> <p><strong>Solution</strong>: This is also a bug. The solution is to slipstream service pack 1:</p> <ol> <li>Download all 3 SQL Server 2008 R2 SP1 packages from <a href="https://www.microsoft.com/en-us/download/details.aspx?displaylang=en&amp;id=26727">https://www.microsoft.com/en-us/download/details.aspx?displaylang=en&amp;id=26727</a>.<br /> <blockquote><p><img class="aligncenter wp-image-104880" src="https://blog.pythian.com/wp-content/uploads/download.png" alt="SP1 Download" width="581" height="288" srcset="https://blog.pythian.com/wp-content/uploads/download.png 752w, https://blog.pythian.com/wp-content/uploads/download-465x231.png 465w, https://blog.pythian.com/wp-content/uploads/download-350x174.png 350w" sizes="(max-width: 581px) 100vw, 581px" /></p></blockquote> </li> <li>Copy the original SQL Server 2008 R2 installer files to a new folder. In this example, we will use C:\SQL2008R2_SP1.<br /> <img class="aligncenter wp-image-104881" src="https://blog.pythian.com/wp-content/uploads/files1.png" alt="Filestream root folder" width="562" height="322" srcset="https://blog.pythian.com/wp-content/uploads/files1.png 827w, https://blog.pythian.com/wp-content/uploads/files1-465x267.png 465w, https://blog.pythian.com/wp-content/uploads/files1-350x201.png 350w" sizes="(max-width: 562px) 100vw, 562px" /></li> <li>Open a CMD window and run the below commands to extract the SP1 packages to C:\SQL2008R2_SP1\SP:<br /> <blockquote><p><em>SQLServer2008R2SP1-KB2528583-IA64-ENU.exe /x:C:\SQL2008R2_SP1\SP</em><br /> <em>SQLServer2008R2SP1-KB2528583-x64-ENU.exe /x:C:\SQL2008R2_SP1\SP</em><br /> <em>SQLServer2008R2SP1-KB2528583-x86-ENU.exe /x:C:\SQL2008R2_SP1\SP</em></p></blockquote> </li> <li>Copy the Setup.exe from C:\SQL2008R2_SP1\SP to C:\SQL2008R2_SP1, replacing the original file.<br /> <img class="aligncenter wp-image-104882" src="https://blog.pythian.com/wp-content/uploads/files2.png" alt="Filestream - Copy setup.exe" width="577" height="265" srcset="https://blog.pythian.com/wp-content/uploads/files2.png 807w, https://blog.pythian.com/wp-content/uploads/files2-465x213.png 465w, https://blog.pythian.com/wp-content/uploads/files2-350x160.png 350w" sizes="(max-width: 577px) 100vw, 577px" /></li> <li>For each architecture type (IA64, x64 and x86), copy all files (excluding the folders and Microsoft.SQL.Chainer.PackageData.dll) from C:\SQL2008R2_SP1\SP\ to C:\SQL2008R2_SP1\. 
You can use the below robocopy commands:<br /> <blockquote> <pre>robocopy C:\SQL2008R2_SP1\SP\x86 C:\SQL2008R2_SP1\x86 /XF Microsoft.SQL.Chainer.PackageData.dll</pre> <p>&nbsp;</p> <pre>robocopy C:\SQL2008R2_SP1\SP\x64 C:\SQL2008R2_SP1\x64 /XF Microsoft.SQL.Chainer.PackageData.dll</pre> <p>&nbsp;</p> <pre>robocopy C:\SQL2008R2_SP1\SP\ia64 C:\SQL2008R2_SP1\ia64 /XF Microsoft.SQL.Chainer.PackageData.dll</pre> <p>&nbsp;</p></blockquote> </li> <li>See if you have a DefaultSetup.INI file under these three locations:<br /> <blockquote><p>C:\SQL2008R2_SP1\x86<br /> C:\SQL2008R2_SP1\x64<br /> C:\SQL2008R2_SP1\ia64</p></blockquote> <p>If the file exists, edit each file and add the following line at the end:</p> <blockquote> <pre>PCUSOURCE=".\SP"</pre> </blockquote> <p>If it doesn&#8217;t exist, create it under the three locations with the following content:</p> <blockquote> <pre>;SQLSERVER2008 R2 Configuration File [SQLSERVER2008] PCUSOURCE=".\SP"</pre> </blockquote> </li> <li>Run setup.exe from C:\SQL2008R2_SP1. The wizard should open as usual. To confirm that you are slipstreaming, in the installation rules you should notice &#8220;Update Setup Media Language Rule&#8221;. Proceed with the install as normal.</li> </ol> <h2>SQL Instance does not failover properly</h2> <p>This next issue only occurs if you have another SQL Server 2012 or later failover cluster instance (or availability groups) on the same server as the SQL 2008 R2 cluster instance. You finally managed to install the SQL 2008 R2 instance and added the other cluster nodes. You test failing over the instance to one of the passive nodes, but you may notice issues such as the SQL instance does not failover to the second node or SQL Agent resource does not come online and the SQL Server resource eventually goes in a failed state. In addition, the following messages can be found in the cluster logs:</p> <blockquote> <pre>Res SQL Server Agent : WaitingToComeOnline -&gt; OfflineDueToProvider( StateUnknown ) 000026d4.00002244::2018/03/15-15:55:39.419 INFO [RCM] TransitionToState(SQL Server Agent ) WaitingToComeOnline--&gt;OfflineDueToProvider 0000273c.00002cf8::2018/03/15-15:55:39.419 ERR [RHS] RhsCall::DeadlockMonitor: Call ONLINERESOURCE timed out by 16 milliseconds for resource 'SQL Server'. 0000273c.00002cf8::2018/03/15-15:55:39.419 ERR [RHS] Resource SQL Server handling deadlock. Cleaning current operation. 0000273c.0000439c::2018/03/15-15:55:25.934 INFO [RES] SQL Server : [sqsrvres] Run 'EXEC sp_server_diagnostics 20' returns following information 0000273c.0000439c::2018/03/15-15:55:25.934 ERR [RES] SQL Server : [sqsrvres] ODBC Error: [42000] [Microsoft][SQL Server Native Client 11.0][SQL Server]Could not find stored procedure 'sp_server_diagnostics'. (2812) 0000273c.0000439c::2018/03/15-15:55:25.934 ERR [RES] SQL Server : [sqsrvres] Failed to run diagnostics command. See previous log for error message 0000273c.0000439c::2018/03/15-15:55:25.934 INFO [RES] SQL Server : [sqsrvres] Disconnect from SQL Server</pre> </blockquote> <p><strong>Solution</strong>: Either stop and restart the cluster service on each node in the cluster or restart the nodes. After the restart, failover should behave normally. 
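</p>
<p>A minimal sketch of that restart, assuming the default Cluster service name (ClusSvc) and an elevated command prompt, run on each node one at a time after its roles have been moved off:</p>
<pre>:: restart the Cluster service on the current node; repeat on every node, one node at a time
net stop clussvc
net start clussvc</pre>
<p>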
Follow this link for more details on the cause of the issue: <a href="https://support.microsoft.com/en-ca/help/2938136/could-not-find-stored-procedure-sp-server-diagnostics-error-and-the-in">https://support.microsoft.com/en-ca/help/2938136/could-not-find-stored-procedure-sp-server-diagnostics-error-and-the-in</a></p> <p>&nbsp;</p> </div></div> Alexandre Hamel https://blog.pythian.com/?p=104875 Tue Aug 07 2018 08:56:37 GMT-0400 (EDT) Running Spring Tool Suite and other GUI applications from a Docker container https://technology.amis.nl/2018/08/06/running-spring-tool-suite-and-other-gui-applications-from-a-docker-container/ <p>Running an application within a Docker container helps in isolating the application from the host OS. Running GUI applications like for example an IDE from a Docker container, can be challenging. I&#8217;ll explain several of the issues you might encounter and how to solve them. For this I will use <a href="https://spring.io/tools">Spring Tool Suite</a> as an example. The code (Dockerfile and docker-compose.yml) can also be found <a href="https://github.com/MaartenSmeets/provisioning/tree/master/docker/STS">here</a>. Due to (several) security concerns, this is not recommended in a production environment.</p> <p><span id="more-49476"></span></p> <h1>Running a GUI from a Docker container</h1> <p>In order to run a GUI application from a Docker container and display its GUI on the host OS, several steps are needed;</p> <h2>Which display to use?</h2> <p>The container needs to be aware of the display to use. In order to make the display available, you can pass the DISPLAY environment variable to the container. docker-compose describes the environment/volume mappings/port mappings and other things of docker containers. This makes it easier to run containers in a quick and reproducible way and avoids long command lines.</p> <h3>docker-compose</h3> <p>You can do this by providing it in a docker-compose.yml file. See for example below. 
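</p>
<p>In case the screenshot is hard to read in this feed, the relevant fragment of the service definition in docker-compose.yml is just the following (taken from the full file shown further down):</p>
<pre class="brush: plain; title: ; notranslate">environment:
  - DISPLAY</pre>
<p>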
The environment indicates the host DISPLAY variable is passed as DISPLAY variable to the container.</p> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-env.png?ssl=1"><img data-attachment-id="49478" data-permalink="https://technology.amis.nl/2018/08/06/running-spring-tool-suite-and-other-gui-applications-from-a-docker-container/docker-compose-env/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-env.png?fit=628%2C288&amp;ssl=1" data-orig-size="628,288" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="docker-compose-env" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-env.png?fit=300%2C138&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-env.png?fit=628%2C288&amp;ssl=1" class="aligncenter size-medium wp-image-49478" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-env-300x138.png?resize=300%2C138&#038;ssl=1" alt="" width="300" height="138" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-env.png?resize=300%2C138&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-env.png?w=628&amp;ssl=1 628w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <h3>Docker</h3> <p>In a Docker command (when not using docker-compose), you would do this with the -e flag or with &#8211;env. For example;</p> <pre class="brush: plain; title: ; notranslate">docker run --env DISPLAY=$DISPLAY containername</pre> <h2>Allow access to the display</h2> <p>The Docker container needs to be allowed to present its screen on the Docker host. This can be done by executing the following command:</p> <pre class="brush: plain; title: ; notranslate">xhost local:root</pre> <p>After execution, during the session, root is allowed to use the current users display. Since the Docker daemon runs as root, Docker containers (in general!) now can use the current users display. If you want to persist this, you should add it to a start-up script.</p> <h2>Sharing the X socket</h2> <p>The last thing to do is sharing the X socket (don&#8217;t ask me details but this is required&#8230;). This can be done by defining a volume mapping in your Docker command line or docker-compose.yml file. 
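</p>
<p>For the plain docker command line, combining the DISPLAY variable with the X socket mapping might look like the sketch below; containername is again just a placeholder:</p>
<pre class="brush: plain; title: ; notranslate">docker run --env DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix containername</pre>
<p>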
For Ubuntu this looks like you can see in the image below.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-volume.png?ssl=1"><img data-attachment-id="49479" data-permalink="https://technology.amis.nl/2018/08/06/running-spring-tool-suite-and-other-gui-applications-from-a-docker-container/docker-compose-volume/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-volume.png?fit=628%2C288&amp;ssl=1" data-orig-size="628,288" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="docker-compose-volume" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-volume.png?fit=300%2C138&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-volume.png?fit=628%2C288&amp;ssl=1" class="aligncenter size-medium wp-image-49479" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-volume.png?resize=300%2C138&#038;ssl=1" alt="" width="300" height="138" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-volume.png?resize=300%2C138&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-volume.png?w=628&amp;ssl=1 628w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <h1>Spring Tool Suite from a Docker container</h1> <p>In order to give a complete working example, I&#8217;ll show how to run Spring Tool Suite from a Docker container. In this example I&#8217;m using the Docker host JVM instead of installing a JVM inside the container. If you want to have the JVM also inside the container (instead of using the host JVM), look at <a href="https://technology.amis.nl/2018/07/27/automate-the-installation-of-oracle-jdk-8-and-10-on-rhel-and-debian-derivatives/">the following</a> and add that to the Dockerfile. 
As a base image I&#8217;m using an official Ubuntu image.</p> <p>I&#8217;ve used the following Dockerfile:</p> <pre class="brush: plain; title: ; notranslate">FROM ubuntu:18.04 MAINTAINER Maarten Smeets &lt;maarten.smeets@amis.nl&gt; ARG uid LABEL nl.amis.smeetsm.ide.name=&quot;Spring Tool Suite&quot; nl.amis.smeetsm.ide.version=&quot;3.9.5&quot; ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz RUN adduser --uid ${uid} --disabled-password --gecos '' develop RUN mkdir -p /opt/ide &amp;&amp; \ tar zxvf /tmp/ide.tar.gz --strip-components=1 -C /opt/ide &amp;&amp; \ ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre &amp;&amp; \ chown -R develop:develop /opt/ide &amp;&amp; \ mkdir /home/develop/ws &amp;&amp; \ chown develop:develop /home/develop/ws &amp;&amp; \ mkdir /home/develop/.m2 &amp;&amp; \ chown develop:develop /home/develop/.m2 &amp;&amp; \ rm /tmp/ide.tar.gz &amp;&amp; \ apt-get update &amp;&amp; \ apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java &amp;&amp; \ apt-get autoremove -y &amp;&amp; \ apt-get clean &amp;&amp; \ rm -rf /var/lib/apt/lists/* &amp;&amp; \ rm -rf /tmp/* USER develop:develop WORKDIR /home/develop ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws</pre> <p>The specified packages are required to be able to run STS inside the container and create the GUI to display on the host.</p> <p>I&#8217;ve used the following docker-compose.yml file:</p> <pre class="brush: plain; title: ; notranslate">version: '3' services: sts: build: context: . dockerfile: Dockerfile args: uid: ${UID} container_name: &quot;sts&quot; volumes: - /tmp/.X11-unix:/tmp/.X11-unix - /home/develop/ws:/home/develop/ws - /home/develop/.m2:/home/develop/.m2 - /usr/lib/jvm/java-10-oracle:/usr/lib/jvm/java-10-oracle - /etc/java-10-oracle:/etc/java-10-oracle environment: - DISPLAY user: develop ports: &quot;8080:8080&quot;</pre> <p>Notice this docker-compose file has some dependencies on the host OS. It expects a JDK 10 to be installed in /usr/lib/jvm/java-10-oracle with configuration in /etc/java-10-oracle. Also it expects to find /home/develop/ws and /home/develop/.m2 to be present on the host to be mapped to the container. The .X11-unix mapping was already mentioned as needed to allow a GUI screen to be displayed. There are also some other things which are important to notice in this file.</p> <h2>User id</h2> <p>First the way a non-privileged user is created inside the container. This user is created with a user id (uid) which is supplied as a parameter. Why did I do that? Files in mapped volumes which are created by the container user will be created with the uid which the user inside the container has. This will cause issues if inside the container the user has a different uid as outside of the container. Suppose I run the container onder a user develop. This user on the host has a uid of 1002. Inside the container there is also a user develop with a uid of 1000. Files on a mapped volume are created with uid 1000; the uid of the user in the container. On the host however, uid 1000 is a different user. These files created by the container cannot be accessed by the develop user on the host (with uid 1002). 
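</p>
<p>A quick way to check the uid of the current host user (1002 in this example) is:</p>
<pre class="brush: plain; title: ; notranslate"># prints the numeric user id of the user running the command
id -u</pre>
<p>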
In order to avoid this, I&#8217;m creating a develop user inside the VM with the same uid as the user used outside of the VM (the user in the docker group which gave the command to start the container).</p> <h2>Workspace folder and Maven repository</h2> <p>When working with Docker containers, it is a common practice to avoid storing state inside the container. State can be various things. I consider the STS application work-space folder and the Maven repository among them. This is why I&#8217;ve created the folders inside the container and mapped them in the docker-compose file to the host. They will use folders with the same name (/home/develop/.m2 and /home/develop/ws) on the host.</p> <h2>Java</h2> <p>My Docker container with only Spring Tool Suite was big enough already without having a more than 300Mb JVM inside of it (on Linux Java 10 is almost double the size of Java 8). I&#8217;m using the host JVM instead. I installed the host JVM on my Ubuntu development VM as described <a href="https://technology.amis.nl/2018/07/27/automate-the-installation-of-oracle-jdk-8-and-10-on-rhel-and-debian-derivatives/">here</a>.</p> <p>In order to use the host JVM inside the Docker container, I needed to do 2 things:</p> <p>Map 2 folders to the container:</p> <p><a href="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-java.png?ssl=1"><img data-attachment-id="49480" data-permalink="https://technology.amis.nl/2018/08/06/running-spring-tool-suite-and-other-gui-applications-from-a-docker-container/docker-compose-java/" data-orig-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-java.png?fit=628%2C340&amp;ssl=1" data-orig-size="628,340" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="docker-compose-java" data-image-description="" data-medium-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-java.png?fit=300%2C162&amp;ssl=1" data-large-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-java.png?fit=628%2C340&amp;ssl=1" class="aligncenter size-medium wp-image-49480" src="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-java.png?resize=300%2C162&#038;ssl=1" alt="" width="300" height="162" srcset="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-java.png?resize=300%2C162&amp;ssl=1 300w, https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/08/docker-compose-java.png?w=628&amp;ssl=1 628w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <p>And map the JVM path to the JRE folder onder STS: ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre.</p> <h1>Seeing it work</h1> <p>First as mentioned, allow access to the display:</p> <pre class="brush: plain; title: ; notranslate">xhost local:root</pre> <p>Since the build uses the variable UID, you should do:</p> <pre class="brush: plain; title: ; notranslate">export UID=$UID</pre> <p>Next build:</p> <pre class="brush: plain; title: ; notranslate">docker-compose build Building sts Step 1/10 : FROM ubuntu:18.04 ---&gt; 735f80812f90 Step 2/10 : 
MAINTAINER Maarten Smeets &lt;maarten.smeets@amis.nl&gt; ---&gt; Using cache ---&gt; 69177270763e Step 3/10 : ARG uid ---&gt; Using cache ---&gt; 85c9899e5210 Step 4/10 : LABEL nl.amis.smeetsm.ide.name=&quot;Spring Tool Suite&quot; nl.amis.smeetsm.ide.version=&quot;3.9.5&quot; ---&gt; Using cache ---&gt; 82f56ab07a28 Step 5/10 : ADD https://download.springsource.com/release/STS/3.9.5.RELEASE/dist/e4.8/spring-tool-suite-3.9.5.RELEASE-e4.8.0-linux-gtk-x86_64.tar.gz /tmp/ide.tar.gz ---&gt; Using cache ---&gt; 61ab67d82b0e Step 6/10 : RUN adduser --uid ${uid} --disabled-password --gecos '' develop ---&gt; Using cache ---&gt; 679f934d3ccd Step 7/10 : RUN mkdir -p /opt/ide &amp;&amp; tar zxvf /tmp/ide.tar.gz --strip-components=1 -C /opt/ide &amp;&amp; ln -s /usr/lib/jvm/java-10-oracle /opt/ide/sts-3.9.5.RELEASE/jre &amp;&amp; chown -R develop:develop /opt/ide &amp;&amp; mkdir /home/develop/ws &amp;&amp; chown develop:develop /home/develop/ws &amp;&amp; rm /tmp/ide.tar.gz &amp;&amp; apt-get update &amp;&amp; apt-get install -y libxslt1.1 libswt-gtk-3-jni libswt-gtk-3-java &amp;&amp; apt-get autoremove -y &amp;&amp; apt-get clean &amp;&amp; rm -rf /var/lib/apt/lists/* &amp;&amp; rm -rf /tmp/* ---&gt; Using cache ---&gt; 5e486a4d6dd0 Step 8/10 : USER develop:develop ---&gt; Using cache ---&gt; c3c2b332d932 Step 9/10 : WORKDIR /home/develop ---&gt; Using cache ---&gt; d8e45440ce31 Step 10/10 : ENTRYPOINT /opt/ide/sts-3.9.5.RELEASE/STS -data /home/develop/ws ---&gt; Using cache ---&gt; 2d95751237d7 Successfully built 2d95751237d7 Successfully tagged t_sts:latest</pre> <p>Next run:</p> <pre class="brush: plain; title: ; notranslate">docker-compose up</pre> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?ssl=1"><img data-attachment-id="49481" data-permalink="https://technology.amis.nl/2018/08/06/running-spring-tool-suite-and-other-gui-applications-from-a-docker-container/sts-from-docker/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?fit=1308%2C984&amp;ssl=1" data-orig-size="1308,984" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="sts from docker" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?fit=300%2C226&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?fit=702%2C528&amp;ssl=1" class="aligncenter size-medium wp-image-49481" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?resize=300%2C226&#038;ssl=1" alt="" width="300" height="226" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?resize=300%2C226&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?resize=768%2C578&amp;ssl=1 768w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?resize=1024%2C770&amp;ssl=1 1024w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/08/sts-from-docker.png?w=1308&amp;ssl=1 1308w" sizes="(max-width: 300px) 
100vw, 300px" data-recalc-dims="1" /></a></p> <p>When you run a Spring Boot application on port 8080 inside the container, you can access it on the host on port 8080 with for example Firefox.</p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/08/06/running-spring-tool-suite-and-other-gui-applications-from-a-docker-container/">Running Spring Tool Suite and other GUI applications from a Docker container</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Maarten Smeets https://technology.amis.nl/?p=49476 Mon Aug 06 2018 15:13:43 GMT-0400 (EDT) Upgrade and Patch Application in Application Container http://oracle-help.com/oracle-database/multitenant/upgrade-and-patch-application-in-application-container/ <p>In previous article we have seen Installing Application and Synchronizing it with Application PDB.</p> <p><strong><a class="row-title" href="http://oracle-help.com/oracle-database/multitenant/introduction-to-application-container/">Introduction to Application Container</a></strong></p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="lTbXBI2dEK"><p><a href="http://oracle-help.com/oracle-database/multitenant/application-container-creation-and-installation-of-application/">Application Container Creation and Installation of Application</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/oracle-database/multitenant/application-container-creation-and-installation-of-application/embed/#?secret=lTbXBI2dEK" data-secret="lTbXBI2dEK" width="600" height="338" title="&#8220;Application Container Creation and Installation of Application&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>In this article, we will see upgrading Application and Patching Application.</p> <p><strong>1. 
Upgrade an Application in the Application Container</strong></p> <p>Before starting an application upgrade, you need to know the current version of the application.</p> <p><strong>Step 1:</strong> Set the current container to the application root (APP_PDB)</p><pre class="crayon-plain-tag">ALTER SESSION SET CONTAINER=APP_PDB;</pre><p><strong>Step 2:</strong> Check the current version of the application</p><pre class="crayon-plain-tag">SQL&gt; SELECT APP_NAME,APP_VERSION FROM DBA_APPLICATIONS WHERE APP_IMPLICIT='N';

APP_NAME   APP_VERSION
---------- -----------
APP_PDB    1.0</pre><p><strong>Step 3:</strong> Start the application upgrade</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB BEGIN UPGRADE '1.0' TO '2.0';

Pluggable database altered.</pre><p><strong>Step 4:</strong> As a result of the above statement, one PDB &#8220;F348281081_21_1&#8221; is created as a clone of APP_PDB</p><pre class="crayon-plain-tag">SQL&gt; show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 TESTPDB1                       MOUNTED
         4 APP_PDB                        READ WRITE NO
         5 APP_SAL                        MOUNTED
         6 F348281081_21_1                READ ONLY  NO</pre><p><strong>Step 5:</strong> Run the upgrade script</p><pre class="crayon-plain-tag">SQL&gt; @/u01/upg_2.0.sql</pre><p><strong>Step 6:</strong> End the upgrade</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB END UPGRADE TO '2.0';

Pluggable database altered.</pre><p><strong>Step 7:</strong> Connect to the application PDB</p><pre class="crayon-plain-tag">[oracle@localhost admin]$ sqlplus sys/oracle@app_sal as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon Jul 30 02:17:44 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL&gt;</pre><p><strong>Step 8:</strong> Synchronize the application PDB with the application root</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB SYNC;

Pluggable database altered.</pre><p><strong>2. Patching the application root and application PDBs</strong></p> <p>Minor changes to an application constitute an application patch.
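</p>
<p>Before walking through the patch steps, note that patches already recorded for an application can be listed from the application root. A quick check is sketched below; it assumes the DBA_APP_PATCHES dictionary view and its APP_NAME, PATCH_NUMBER and PATCH_STATUS columns from the 12.2 data dictionary, which are not used in the steps that follow:</p>
<pre class="crayon-plain-tag">SQL&gt; SELECT APP_NAME, PATCH_NUMBER, PATCH_STATUS FROM DBA_APP_PATCHES WHERE APP_NAME='APP_PDB';</pre>
<p>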
Let us see step by step procedure to apply a patch and synchronizing it with application pdb.</p> <p>The current user must have the ALTER PLUGGABLE DATABASE  system privilege, and the privilege must be commonly granted in the application root.</p> <p><strong>Step 1 :</strong> Connect to the application root</p> <p><strong>Step 2 :</strong> applying patch statement needs patch number and the minimum version of an application on which this patch can be applied :</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB BEGIN PATCH 1783 MINIMUM VERSION '2.0'; Pluggable database altered.</pre><p>This patch can be applied only on an application running on 2.0 version.</p> <p><strong>Step 3 :</strong> Apply patch script</p><pre class="crayon-plain-tag">SQL&gt;@/u01/patches/pat_1783.sql</pre><p><strong>Step 4 :</strong> End patching</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB END PATCH 1783; Pluggable database altered.</pre><p>You can sync patching to application pdbs using step 8 of the Upgrade procedure.</p> <p>Stay tuned for <strong>More articles on Oracle Multitenant<br /> </strong><br /> Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/multitenant/upgrade-and-patch-application-in-application-container/">Upgrade and Patch Application in Application Container</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=5241 Sun Aug 05 2018 15:13:55 GMT-0400 (EDT) Creating Application PDB and Syncing with Application Root http://oracle-help.com/oracle-database/multitenant/creating-application-pdb-and-syncing-with-application-root/ <p>In a previous post, we have seen creation steps of Application Container and Installation of Application in the Application container.</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="vltoXTg7ut"><p><a href="http://oracle-help.com/oracle-database/multitenant/application-container-creation-and-installation-of-application/">Application Container Creation and Installation of Application</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/oracle-database/multitenant/application-container-creation-and-installation-of-application/embed/#?secret=vltoXTg7ut" data-secret="vltoXTg7ut" width="600" height="338" title="&#8220;Application Container Creation and Installation of Application&#8221; &#8212; 
ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>In this article, we will see Creation Steps of Application PDB and synchronization of Application PDB with Application Container.</p> <p><strong>Create application PDB:</strong></p> <p><strong>Step1:</strong> Connect to application root :</p><pre class="crayon-plain-tag">[oracle@localhost oradata]$ sqlplus sys/oracle@app_pdb as sysdba SQL*Plus: Release 12.2.0.1.0 Production on Sun Jul 29 01:32:29 2018 Copyright (c) 1982, 2016, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production SQL&gt;</pre><p><strong>Step 2:</strong> Create application pdb</p><pre class="crayon-plain-tag">SQL&gt; CREATE PLUGGABLE DATABASE app_sal admin user adm_sal identified by oracle; Pluggable database created.</pre><p><strong>Step 3:</strong> Open pluggable database</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APP_SAL OPEN; Pluggable database altered.</pre><p><strong>Step 4:</strong> Change current container as application pdb</p><pre class="crayon-plain-tag">SQL&gt; ALTER SESSION SET CONTAINER=APP_SAL; Session altered.</pre><p><strong>Step 5:</strong> Sync application pdb with application root :</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB SYNC; Pluggable database altered.</pre><p><strong>Step 6:</strong> Check table we have created in the application root</p><pre class="crayon-plain-tag">SQL&gt; COL NAME FORMAT A20 SQL&gt; COL DESCRIPTION FORMAT A50 SQL&gt; SELECT * FROM SAL_USR.SAL_MST; NAME DESCRIPTION -------------------- -------------------------------------------------- ID001 THIS IS TEST ENTRY</pre><p>Stay tuned for <strong>More articles on Oracle Multitenant<br /> </strong><br /> Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/multitenant/creating-application-pdb-and-syncing-with-application-root/">Creating Application PDB and Syncing with Application Root</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=5219 Sun Aug 05 2018 15:10:16 GMT-0400 (EDT) Application Container Creation and Installation of Application http://oracle-help.com/oracle-database/multitenant/application-container-creation-and-installation-of-application/ <p>In the previous article, we have seen Introduction of Application Container</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" 
data-secret="o7VKuiQ7io"><p><a href="http://oracle-help.com/oracle-database/multitenant/introduction-to-application-container/">Introduction to Application Container</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/oracle-database/multitenant/introduction-to-application-container/embed/#?secret=o7VKuiQ7io" data-secret="o7VKuiQ7io" width="600" height="338" title="&#8220;Introduction to Application Container&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>&nbsp;</p> <p>In this article we will see creating Application Container, Installing Application in Container, Creating Application <strong>PDBs</strong> and syncing application <span style="background-color: #f6d5d9;">PDB </span>with application root.</p> <p><strong>Step 1:</strong> Create an application root database</p><pre class="crayon-plain-tag">SQL&gt; CREATE PLUGGABLE DATABASE APP_PDB AS APPLICATION CONTAINER ADMIN USER APP_ADMIN IDENTIFIED BY oracle; Pluggable database created.</pre><p><strong>Step 2:</strong> Open application root database</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APP_PDB OPEN; Pluggable database altered.</pre><p><strong>Step 3:</strong> Save open state for this database</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APP_PDB SAVE STATE; Pluggable database altered. SQL&gt;</pre><p><strong>Step 4:</strong> Check from <strong>v$pdbs</strong> view</p><pre class="crayon-plain-tag">SQL&gt; SELECT CON_ID,NAME,OPEN_MODE FROM V$PDBS WHERE APPLICATION_ROOT='YES'; CON_ID NAME OPEN_MODE ---------- --------------------------------------------------------- 4 APP_PDB READ WRITE SQL&gt;</pre><p><strong>Step 5:</strong> View datafile and tablespace of application container :</p><pre class="crayon-plain-tag">SQL&gt; SELECT FILE_NAME,TABLESPACE_NAME FROM CDB_DATA_FILES WHERE CON_ID=4; FILE_NAME TABLESPACE_NAME --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------ /u02/oradata/TESTCDB/71DA02D769265585E055000000000001/datafile/o1_mf_system_fokrl9lc_.dbf SYSTEM /u02/oradata/TESTCDB/71DA02D769265585E055000000000001/datafile/o1_mf_sysaux_fokrl9lr_.dbf SYSAUX /u02/oradata/TESTCDB/71DA02D769265585E055000000000001/datafile/o1_mf_undotbs1_fokrl9ls_.dbf UNDOTBS1</pre><p>We can see here SYSTEM, SYSAUX and UNDO tablespaces are created in application root.</p> <p>Set container to application container.</p><pre class="crayon-plain-tag">SQL&gt; alter session set container=app_pdb; Session altered.</pre><p><strong>Step 6:</strong> Check granted privilege to application container admin user.</p><pre class="crayon-plain-tag">SQL&gt; SELECT GRANTEE,GRANTED_ROLE,COMMON FROM DBA_ROLE_PRIVS WHERE GRANTEE='APP_ADMIN'; GRANTEE GRANTED_ROLE COM -------------------------------------------------------------------------------------------------------------------------------- 
-------------------------------------------------------------------------------------------------------------------------------- --- APP_ADMIN PDB_DBA NO</pre><p><strong>Step 7:</strong> Check privilege of PDB_DBA role.</p><pre class="crayon-plain-tag">SQL&gt; SELECT ROLE,PRIVILEGE,ADMIN_OPTION,INHERITED FROM ROLE_SYS_PRIVS WHERE ROLE='PDB_DBA'; ROLE PRIVILEGE ADM INH ---------------------------------------------------------------------------------------------- ---------------------------------------- --- --- PDB_DBA CREATE SESSION NO NO PDB_DBA SET CONTAINER NO NO PDB_DBA CREATE PLUGGABLE DATABASE NO NO</pre><p><strong>Step 8:</strong> Create tns entry for application pdb :</p><pre class="crayon-plain-tag">app_pdb = (description = (address = (protocol=TCP) ( HOST=localhost)(port=1521)) (connect_data = (server =DEDICATED) (SERVICE_NAME=app_pdb) ) )</pre><p><strong>Step 9:</strong> Connect to application container and install the application using begin install and version</p><pre class="crayon-plain-tag">SQL&gt; alter session set container=app_pdb; Session altered. SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB BEGIN INSTALL '1.0'; Pluggable database altered.</pre><p><strong>Step 10:</strong> We will create one user and create some tables and data for that user.</p><pre class="crayon-plain-tag">SQL&gt; CREATE TABLESPACE SAL_TBS DATAFILE SIZE 100M; Tablespace created. SQL&gt; CREATE USER SAL_USR IDENTIFIED BY oracle DEFAULT TABLESPACE SAL_TBS QUOTA UNLIMITED ON SAL_TBS; User created. SQL&gt; GRANT CONNECT,CREATE TABLE TO SAL_USR;</pre><p><strong>Step 11:</strong> Connect to that schema</p><pre class="crayon-plain-tag">SQL&gt; ALTER SESSION SET CURRENT_SCHEMA=SAL_USR; Session altered.</pre><p><strong>Step 12:</strong> Create a table in <strong>SAL_USR</strong> schema :</p><pre class="crayon-plain-tag">SQL&gt; CREATE TABLE SAL_MST(NAME VARCHAR2(100),DESCRIPTION VARCHAR2(200)); Table created. SQL&gt; INSERT INTO SAL_MST VALUES ('ID001','THIS IS TEST ENTRY'); 1 row created. 
SQL&gt; COMMIT; Commit complete.</pre><p><strong>Step 13:</strong> End application installation using END INSTALL :</p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE APPLICATION APP_PDB END INSTALL '1.0'; Pluggable database altered.</pre><p>Step 14: Check the application version from DBA_APPLICATIONS :</p><pre class="crayon-plain-tag">SQL&gt; SELECT APP_NAME,APP_VERSION,APP_STATUS,APP_IMPLICIT FROM DBA_APPLICATIONS WHERE APP_IMPLICIT='N'; APP_NAME APP_VERSION APP_STATUS A -------------------------------------------------------------------------------------------------------------------------------- ------------------------------ ------------ - APP_PDB 1.0 NORMAL N</pre><p>Stay tuned for <strong>More articles on Oracle Multitenant<br /> </strong><br /> Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/multitenant/application-container-creation-and-installation-of-application/">Application Container Creation and Installation of Application</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=5191 Sun Aug 05 2018 15:09:08 GMT-0400 (EDT) Introduction to Application Container http://oracle-help.com/oracle-database/multitenant/introduction-to-application-container/ <p>Application container is a new feature introduced in <strong>Oracle 12c Release 2</strong>. The main benefit of an application container in your environment is it makes application level patching and upgrade easy.</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="K3uAF6TiQn"><p><a href="http://oracle-help.com/oracle-database/multitenant/introduction-of-multitenant-architecture/">Introduction of Multitenant Architecture</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/oracle-database/multitenant/introduction-of-multitenant-architecture/embed/#?secret=K3uAF6TiQn" data-secret="K3uAF6TiQn" width="600" height="338" title="&#8220;Introduction of Multitenant Architecture&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>In a product based company where you have multiple customers which are having same application running in your PDBs. 
We can take benefits of application container where we providing SaaS [Software as a Service] service.</p> <p>Now let us understand each term :</p> <ul> <li><strong>Application Container:</strong> Application container is <span style="background-color: #f6d5d9;">PDB </span>which we create with AS APPLICATION CONTAINER clause. It stores data and metadata for one or more application task. A master application definition is a maintainer in the application container.</li> <li><strong>Application Root:</strong> Application root is a container inside cdb$root container. For each PDB created with AS APPLICATION CONTAINER, Oracle creates an application root. Application root contains descriptions of Oracle-supplied common objects which are shared by Application PDBs.</li> <li><strong>Application Seed: </strong>Application seed is a PDB created inside Application Root. That helps us to create application PDBs quickly.</li> <li><strong>Application PDB:</strong> It is a pluggable database which we create after setting an application container as a current container.</li> </ul> <p>Each application pdb can be a part of only one application container. Changes made to the Application Container requires synchronization with all application pdbs.</p> <p>Below image gives a brief idea of how application container works :</p> <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png"><img data-attachment-id="5213" data-permalink="http://oracle-help.com/oracle-database/multitenant/introduction-to-application-container/attachment/application_container/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?fit=570%2C528" data-orig-size="570,528" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="application_container" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?fit=300%2C278" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?fit=570%2C528" class="wp-image-5213 size-full aligncenter" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?resize=570%2C528" alt="" width="570" height="528" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?w=570 570w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?resize=300%2C278 300w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?resize=60%2C56 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/application_container.png?resize=150%2C139 150w" sizes="(max-width: 570px) 100vw, 570px" data-recalc-dims="1" /></a></p> <p>In the above diagram, we can see under <strong>CDB$ROOT</strong> container 4 <strong>PDB</strong> exists. pdb seed,pdb1 and two application container.app_order application container has two pdbs ord_mob and ord_com. ord_mob and ord_com stores the same application data.</p> <p>app_sales application container has two pdbs <strong>sal_mob</strong> and <strong>sal_com</strong>. 
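</p>
<p>As a rough sketch, the app_sales branch of that diagram could be created as follows; the admin user names here are purely illustrative, and the syntax is the same as in the related posts of this series:</p>
<pre class="crayon-plain-tag">-- create the application root for the sales application
CREATE PLUGGABLE DATABASE app_sales AS APPLICATION CONTAINER ADMIN USER app_sales_adm IDENTIFIED BY oracle;
ALTER PLUGGABLE DATABASE app_sales OPEN;

-- create the two application PDBs inside it
ALTER SESSION SET CONTAINER=app_sales;
CREATE PLUGGABLE DATABASE sal_mob ADMIN USER adm_mob IDENTIFIED BY oracle;
CREATE PLUGGABLE DATABASE sal_com ADMIN USER adm_com IDENTIFIED BY oracle;</pre>
<p>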
These two containers store the same application data.</p> <p><strong>Characteristics of Application Container :</strong></p> <p>Application common objects exist only in Application Root. Metadata or data stored in Application root are only accessible in Application PDBs associated with that application container.</p> <p>One can instantly create application pdb under application container by creating a clone of application seed pdb or by unplugging and plugging, cloning remote and local pdbs to join application container.</p> <p>Stay tuned for <strong>More articles on Oracle Multitenant<br /> </strong><br /> Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/multitenant/introduction-to-application-container/">Introduction to Application Container</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=5209 Sun Aug 05 2018 14:59:13 GMT-0400 (EDT) Manually Create and configure CDB database and PDB database http://oracle-help.com/oracle-database/multitenant/manually-create-and-configure-cdb-database-and-pdb-database/ <p>In the previous article, we can see that the architecture of multitenant. 
In this post, we will configure manually CDB and PDB.</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="xbPS5r6Mfn"><p><a href="http://oracle-help.com/oracle-database/multitenant/introduction-of-multitenant-architecture/">Introduction of Multitenant Architecture</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/oracle-database/multitenant/introduction-of-multitenant-architecture/embed/#?secret=xbPS5r6Mfn" data-secret="xbPS5r6Mfn" width="600" height="338" title="&#8220;Introduction of Multitenant Architecture&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p><strong>Step 1:</strong> create initmycdb.ora</p><pre class="crayon-plain-tag">[oracle@localhost ~]$ cd $ORACLE_HOME/dbs [oracle@localhost dbs]$ cat initmycdb.ora db_name=mycdb CONTROL_FILES='/u02/oradata/mycdb/control01.ctl','/u02/oradata/mycdb/control02.ctl' ENABLE_PLUGGABLE_DATABASE=TRUE DB_CREATE_FILE_DEST='/u02/oradata/' DB_CREATE_ONLINE_LOG_DEST_1='/u02/oradata/' DB_BLOCK_SIZE=8192 UNDO_MANAGEMENT=AUTO UNDO_TABLESPACE=undotbs USER_DUMP_DEST='/u01/oracle/admin/mycdb/adump/'</pre><p><strong>Step 2:</strong> Create necessary directories.</p><pre class="crayon-plain-tag">[oracle@localhost dbs]$ mkdir -p /u01/oracle/admin/mycdb/adump [oracle@localhost dbs]$ mkdir -p /u02/oradata/mycdb</pre><p><strong>Step 3:</strong> export <strong>ORACLE_SID</strong> and start the database in the nomount stage</p><pre class="crayon-plain-tag">[oracle@localhost ~]$ export ORACLE_SID=mycdb [oracle@localhost dbs]$ sqlplus / as sysdba SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 24 09:07:35 2018 Copyright (c) 1982, 2014, Oracle. All rights reserved. Connected to an idle instance. SQL&gt; startup nomount ORACLE instance started. Total System Global Area 272629760 bytes Fixed Size 2923336 bytes Variable Size 213910712 bytes Database Buffers 50331648 bytes Redo Buffers 5464064 bytes SQL&gt;</pre><p><strong>Step 4:</strong> Create a script to create a manual database using create database command with <strong>ENABLE PLUGGABLE DATABASE</strong> clause.</p><pre class="crayon-plain-tag">create database mycdb user sys identified by oracle user system identified by manager maxlogfiles 3 maxlogmembers 3 logfile group 1 size 50m,group 2 size 50 m default temporary tablespace temp tempfile size 50m undo tablespace undotbs datafile size 200m enable pluggable database;</pre><p><strong>Step 5:</strong> Run this script on sql prompt :</p><pre class="crayon-plain-tag">SQL&gt; create database mycdb user sys identified by oracle user system identified by manager maxlogfiles 3 maxlogmembers 3 logfile group 1 size 50m,group 2 size 50 m default temporary tablespace temp tempfile size 50m undo tablespace undotbs datafile size 200m enable pluggable database; Database created.</pre><p>you will get Database created output after successful completion of this script.</p><pre class="crayon-plain-tag">SQL&gt; select name,open_mode,CDB from v$database; NAME OPEN_MODE CDB --------- -------------------- --- MYCDB READ WRITE YES</pre><p>Now run <strong>catcdb.sql</strong></p><pre class="crayon-plain-tag">SQL&gt; @?/rdbms/admin/catcdb.sql Session altered. Enter new password for SYS: Enter new password for SYSTEM: Enter temporary tablespace name: temp Session altered. 
Connected.</pre><p>Creating pdb as I have used OMF, I don&#8217;t need to use <strong>file_name_convert</strong> parameter</p><pre class="crayon-plain-tag">SQL&gt; CREATE PLUGGABLE DATABASE mypdb1 ADMIN USER mypdb1_adm IDENTIFIED BY oracle; Pluggable database created.</pre><p>Check using show pdbs command</p><pre class="crayon-plain-tag">SQL&gt; show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 MYPDB1 MOUNTED</pre><p><strong>Open pluggable database</strong></p><pre class="crayon-plain-tag">SQL&gt; ALTER PLUGGABLE DATABASE MYPDB1 OPEN; Pluggable database altered. SQL&gt; SHOW PDBS CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 MYPDB1 READ WRITE NO SQL&gt;</pre><p>Stay tuned for <strong>More articles on Oracle Multitenant<br /> </strong><br /> Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/multitenant/manually-create-and-configure-cdb-database-and-pdb-database/">Manually Create and configure CDB database and PDB database</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=5169 Sun Aug 05 2018 14:50:26 GMT-0400 (EDT) Describe the CDB root and pluggable database containers http://oracle-help.com/oracle-database/multitenant/describe-the-cdb-root-and-pluggable-database-containers/ <p>In the previous article, we have explored Multitenant architecture.</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="E8rip0zz6V"><p><a href="http://oracle-help.com/oracle-database/multitenant/introduction-of-multitenant-architecture/">Introduction of Multitenant Architecture</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/oracle-database/multitenant/introduction-of-multitenant-architecture/embed/#?secret=E8rip0zz6V" data-secret="E8rip0zz6V" width="600" height="338" title="&#8220;Introduction of Multitenant Architecture&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>We have heard this word container database and pluggable database many times after 12c came into the market. 
Today we will see what does it mean by a pluggable database and container database.</p> <p><strong>Container Database and Pluggable Database :</strong> In a multi-tenant architecture, there will always be one container root database. Which will work as base database and all other database having application data will work as a pluggable database , as tenant of this container database.</p> <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png"><img data-attachment-id="5166" data-permalink="http://oracle-help.com/oracle-database/multitenant/describe-the-cdb-root-and-pluggable-database-containers/attachment/container-database1/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?fit=790%2C544" data-orig-size="790,544" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="container database1" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?fit=300%2C207" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?fit=790%2C544" class="wp-image-5166 size-full aligncenter" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?resize=790%2C544" alt="" width="790" height="544" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?w=790 790w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?resize=300%2C207 300w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?resize=768%2C529 768w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?resize=60%2C41 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/07/container-database1.png?resize=150%2C103 150w" sizes="(max-width: 790px) 100vw, 790px" data-recalc-dims="1" /></a></p> <p>We can see in above diagram in single container database <strong>CDB$ROOT</strong> I have plugged three pluggable database :</p> <ul> <li>PDB$SEED</li> <li>APPL1</li> <li>APPL2</li> </ul> <p>With Above diagram we can see the following things :</p> <ol> <li>Redo log files are common for all PDBs.All PDB in the CDB share archivelog mode of CDB.</li> <li>Control Files are common for all PDBs. Common control files are responsible for all structural changed for any pluggable database or container database.</li> <li>Each container has its own data dictionary stored in its proper SYSTEM tablespace, containing its own metadata, and a SYSAUX tablespace.</li> <li>The PDBs can create tablespaces within the PDB according to application needs.</li> <li>Each datafile is associated with a specific container, named CON_ID.</li> </ol> <p>Now let us understand each container shown in Diagram.</p> <ul> <li><strong>CDB$ROOT</strong></li> </ul> <p>CDB$ROOT is the first container database created when we create CDB in 12c Multitenant architecture. The CDB root is a system-supplied container that stores common users, which are users that can connect to multiple containers, and system-supplied metadata and data. 
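</p>
<p>Connected to the CDB root you can list every container and check which container each datafile belongs to (the CON_ID mentioned in point 5 above). A quick example is sketched below; V$CONTAINERS also lists the root itself, and CDB_DATA_FILES is the same view used elsewhere in this series:</p>
<pre class="crayon-plain-tag">SQL&gt; SELECT CON_ID, NAME, OPEN_MODE FROM V$CONTAINERS;

SQL&gt; SELECT CON_ID, FILE_NAME, TABLESPACE_NAME FROM CDB_DATA_FILES ORDER BY CON_ID;</pre>
<p>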
<p>The source code for system-supplied PL/SQL packages is stored in the CDB root.</p> <ul> <li><strong>PDB$SEED</strong></li> </ul> <p>It is a system-supplied template that is used to create new PDBs. Creating another pluggable database using PDB$SEED is a very fast operation.</p> <ul> <li><strong>Pluggable Database</strong></li> </ul> <p>A pluggable database contains the application data. It has its own permanent and temporary tablespaces and its own local users and roles. Local users created in a specific pluggable database are not visible to any other pluggable database of that container. We can perform various operations on a pluggable database: we can create, clone, plug, unplug or proxy it.</p> <p>Stay tuned for <strong>More articles on Oracle Multitenant<br /> </strong><br /> Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle"><strong>https://t.me/helporacle</strong></a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/multitenant/describe-the-cdb-root-and-pluggable-database-containers/">Describe the CDB root and pluggable database containers</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Jagruti Jasleniya http://oracle-help.com/?p=5155 Sun Aug 05 2018 14:44:58 GMT-0400 (EDT) Oracle Cloud Infrastructure (1Z0-932) http://oracle-help.com/articles/oracle-cloud-infrastructure-1z0-932/ <p>Hi Readers</p> <p>Finally, I&#8217;ve successfully passed <strong>1Z0-932</strong>, <a href="https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&amp;get_params=p_exam_id:1Z0-932"><strong>Oracle Cloud Infrastructure 2018 Architect Associate</strong></a>. I had been preparing for it for almost 8-10 months.</p> <p>The exam contains 70 questions related to <strong>OCI</strong>, covering storage, VCN, Terraform, Knife, DNS, etc. 
To earn this certification you need to get 68% marks.</p> <p>This was the result of the journey: <strong>Oracle Cloud Infrastructure 2018 Certified Architect Associate</strong></p> <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg"><img data-attachment-id="5289" data-permalink="http://oracle-help.com/articles/oracle-cloud-infrastructure-1z0-932/attachment/1-66/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?fit=1103%2C843" data-orig-size="1103,843" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1533427511&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="1" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?fit=300%2C229" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?fit=980%2C749" class="size-full wp-image-5289 aligncenter" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?resize=980%2C749" alt="" width="980" height="749" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?w=1103 1103w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?resize=300%2C229 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?resize=768%2C587 768w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?resize=1024%2C783 1024w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?resize=60%2C46 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/08/1.jpg?resize=150%2C115 150w" sizes="(max-width: 980px) 100vw, 980px" data-recalc-dims="1" /></a></p> <p>In this post you will find:</p> <ul> <li>A path to achieve it</li> <li>Study Materials</li> <li>Exam Detail &amp; Score</li> </ul> <p><strong>How to Achieve this Certification?</strong></p> <p>Before appearing to this exam you must have the vast knowledge of <strong>OCI &amp; it’s features and </strong>components.</p> <p><strong>List of exam topics</strong></p> <ul> <li>Getting Started with Oracle Cloud Infrastructure (OCI)</li> <li>Working with the Identity and Access Management (IAM) Service</li> <li>Creating a Virtual Cloud Network (VCN)</li> <li>Launching Bare Metal and Virtual Compute Instances</li> <li>Creating and Managing Block Storage Volumes</li> <li>Creating and Managing Object Storage</li> <li>Instantiating a Load Balancer</li> <li>Setting Up a Domain Name System (DNS)</li> <li>Launching a Database Instance</li> <li>Advanced Database</li> <li>Advanced Networking Concepts</li> <li>DevOps</li> <li>Advanced Identity and Access Management (IAM)</li> <li>Architecting Best Practices</li> </ul> <p>For more detail, you can follow the below link.</p> <p><a href="https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&amp;get_params=p_exam_id:1Z0-932">https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&amp;get_params=p_exam_id:1Z0-932</a></p> <p><strong>How to prepare for the exam?</strong></p> <p>There is only one way to study for this exam</p> <ol> <li><strong><a href="https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=1030">Oracle Cloud Infrastructure 
Learning Subscription</a></strong></li> <li><strong><a href="https://apexapps.oracle.com/pls/apex/f?p=44785:141:105384035100474::::P141_PAGE_ID,P141_SECTION_ID:521,3649">Service Introduction eLearning Series</a></strong></li> <li><strong><a href="https://education.oracle.com/education/pdf/Oracle_Cloud_Infrastructure_study_guide.pdf">Study Guide</a></strong></li> <li><a href="http://oukc.oracle.com/static12/opn/login/?t=checkusercookies|r=-1|c=2164389233"><strong>Practice Exam</strong></a></li> </ol> <p><strong>How to register for the exam?</strong></p> <p>If you want to enroll exam, please click on below link.</p> <p><strong>Exam Number:</strong> 1Z0-932</p> <p><strong>Exam Title: </strong><span class="examTitleH2">Oracle Cloud Infrastructure 2018 Architect Associate</span></p> <p><a href="https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=654&amp;get_params=p_id:538">https://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=654&amp;get_params=p_id:538</a></p> <p>Below are the Exam Details related to scoring.</p> <p><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg"><img data-attachment-id="5290" data-permalink="http://oracle-help.com/articles/oracle-cloud-infrastructure-1z0-932/attachment/2-58/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?fit=563%2C430" data-orig-size="563,430" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1533427479&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="2" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?fit=300%2C229" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?fit=563%2C430" class="size-full wp-image-5290 aligncenter" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?resize=563%2C430" alt="" width="563" height="430" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?w=563 563w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?resize=300%2C229 300w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?resize=60%2C46 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/08/2.jpg?resize=150%2C115 150w" sizes="(max-width: 563px) 100vw, 563px" data-recalc-dims="1" /></a></p> <p><strong>Special Thanks</strong></p> <p><a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez</strong></a> is an <strong>Oracle OCM</strong> and <strong>ACED</strong> who gave an opportunity to write him and motivating me to learn new things.  
I’m very thankful to him as this was the kick off for my studies.</p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/oracle-cloud-infrastructure-1z0-932/">Oracle Cloud Infrastructure (1Z0-932)</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5288 Sat Aug 04 2018 14:36:55 GMT-0400 (EDT) LEAP#407 AC-powered DPS3005 Bench Supply https://blog.tardate.com/2018/08/leap407-ac-powered-dps3005-bench-supply.html <p>The DPS3005 is one of a range of popular DC power supply modules; this one is designed for up to 32V/5A DC output. For this project, I am mounting the module in a project case, and adding a rectified step-down transformer so the unit is powered from mains AC.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Equipment/DPS3005BenchPowerSupply">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Equipment/DPS3005BenchPowerSupply"><img src="https://leap.tardate.com/Equipment/DPS3005BenchPowerSupply/assets/DPS3005BenchPowerSupply_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/08/leap407-ac-powered-dps3005-bench-supply.html Sat Aug 04 2018 07:21:56 GMT-0400 (EDT) Visual Studio- All the Warm and Fuzzy https://dbakevlar.com/2018/08/visual-studio-all-the-warm-and-fuzzy/ <p>So I haven&#8217;t opened Visual Studio in&#8230;.oh&#8230;.let&#8217;s just say it&#8217;s been a few years&#8230;:)</p> <p><a href="https://dbakevlar.com/2018/08/visual-studio-all-the-warm-and-fuzzy/baby/" rel="attachment wp-att-8089"><img class="alignnone size-full wp-image-8089" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/08/baby.gif?resize=383%2C284&#038;ssl=1" alt="" width="383" height="284" data-recalc-dims="1" /></a></p> <p>I had a project that I needed to run and was surprised when the Solution Explorer was missing from SSMS 2017.  
Its only fair to say, there was also fair warning from Microsoft.</p> <p><a href="https://dbakevlar.com/2018/08/visual-studio-all-the-warm-and-fuzzy/sol_ex/" rel="attachment wp-att-8087"><img class="alignnone size-large wp-image-8087" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/08/sol_ex.png?resize=650%2C113&#038;ssl=1" alt="" width="650" height="113" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/08/sol_ex.png?resize=1024%2C178&amp;ssl=1 1024w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/08/sol_ex.png?resize=300%2C52&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/08/sol_ex.png?resize=768%2C133&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/08/sol_ex.png?w=1568&amp;ssl=1 1568w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/08/sol_ex.png?w=1300&amp;ssl=1 1300w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>Due to this, I opened up Visual Studio to use its Solution Explorer and integration for SSIS and other features required for a large project.  I was both happy with the sheer amount of features and have some constructive feedback to make it more user friendly.</p> <p>I love that I can navigate servers, log into SQL Server databases and manage and verify what&#8217;s occurred in my releases.  The properties pane comes in handy to offer valuable information when I&#8217;m building out connection strings or looking for data that may have not compiled correctly in a build.</p> <p>Although the rough instructions were for Solution Explorer for SSMS, I was able, even with as rusty as I was, figure out how to do everything- projects, SSIS and database SQL, in Visual Studio.</p> <p>The interface is familiar as a Windows user-  right click for options, left click to execute the option.  The interface has links on the left for shortcuts to SSMS Object Explorer, which allows me to log into my database environments, along with browsing servers that I may also be deploying application code to.</p> <p>Projects make it easy to build out a full, multi-tier deployment and debug it all from one application, too.  Needless to say, I&#8217;m happy to report that even with some missing instructions, I was able to do what needed to be done and do it with some grace.</p> <p><a href="https://dbakevlar.com/2018/08/visual-studio-all-the-warm-and-fuzzy/vss_dw_deploy/" rel="attachment wp-att-8088"><img class="alignnone size-large wp-image-8088" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/08/vss_dw_deploy.png?resize=650%2C314&#038;ssl=1" alt="" width="650" height="314" srcset="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/08/vss_dw_deploy.png?resize=1024%2C494&amp;ssl=1 1024w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/08/vss_dw_deploy.png?resize=300%2C145&amp;ssl=1 300w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/08/vss_dw_deploy.png?resize=768%2C371&amp;ssl=1 768w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/08/vss_dw_deploy.png?w=1300&amp;ssl=1 1300w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/08/vss_dw_deploy.png?w=1950&amp;ssl=1 1950w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>So what do I see that can be improved?</p> <ol> <li>When you copy and paste to update a path, don&#8217;t automatically remove the remainder of the file name, etc., that&#8217;s been left on purpose.  
This can lead to extra human intervention, which then leads to more chance of human error.</li> <li>The hints when hovering over a button can become a nuisance instead of a help.  Have the hints auto-hide after 5 seconds.  There&#8217;s no reason to leave them up when we&#8217;re trying to guide our cursor to a small button.</li> <li>Make debug display all steps, the errors and then shut down automatically when complete.</li> <li>Make it easier to keep panes open for the SQL Server Object Explorer, Toolbox, etc. vs. auto-hiding.  The information on the pane may be needed for reference as one works on another configuration panel.</li> <li>The Control Flow on SSIS package execution shouldn&#8217;t be blown up so large that you can&#8217;t decipher what a package is doing.  Keep it legible.</li> </ol>
<hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBA Kevlar</a> [<a href="https://dbakevlar.com/2018/08/visual-studio-all-the-warm-and-fuzzy/">Visual Studio- All the Warm and Fuzzy</a>], All Right Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8086 Fri Aug 03 2018 18:46:04 GMT-0400 (EDT) Getting Started with the Oracle Database Backup Service in 2018 https://blog.pythian.com/getting-started-with-the-oracle-database-backup-service-in-2018/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>A few years ago I wrote an article on getting started with the Oracle Database Backup Service which can be found <a href="https://blog.pythian.com/oracle-cloud-backups-service/">here</a>.</p> <p>It&#8217;s no surprise that in the last few years things have changed significantly.  Trying to use this service in the current Oracle Cloud Infrastructure (OCI) with a new Oracle account created recently posed some issues and complexities.  
(For older or &#8220;traditional&#8221; cloud accounts, things may work a little differently).</p> <p><b>This article helps explain how to get started with the Oracle Database Backup Service in July 2018</b>.  Cloud services are changing rapidly and I&#8217;ve found that both the online documentation and My Oracle Support (MOS) documents are out of date or no longer accurate &#8211; hence I&#8217;m trying to clarify the steps.</p> <p>Specifically, the problems I encountered when trying to re-implement with a new Oracle Cloud account are:</p> <ul> <li style="font-weight: 400;">How to determine the correct REST endpoint URL to use with the RMAN backup module?</li> <li style="font-weight: 400;">Navigating to the Classic storage service when signing up with OCI.</li> <li style="font-weight: 400;">How to create storage containers for the cloud backup programmatically using cURL?</li> </ul> <p>&nbsp;</p> <h2>OCI But With Classic Storage</h2> <p>The first thing that isn&#8217;t overly clear is the fact that for modern (recently created) Oracle Cloud accounts, the Backup Service can be added to the OCI Dashboard but interestingly, it does not appear in the service list.</p> <p>To add, choose <strong>Customize Dashboad</strong> (rightmost box) and then choose <strong>Database Backup</strong> from the <b>Data Management</b> section:</p> <p><img class="aligncenter wp-image-104859" src="https://blog.pythian.com/wp-content/uploads/My-Services-Dashboard.jpg" alt="" width="800" height="206" srcset="https://blog.pythian.com/wp-content/uploads/My-Services-Dashboard.jpg 1323w, https://blog.pythian.com/wp-content/uploads/My-Services-Dashboard-465x120.jpg 465w, https://blog.pythian.com/wp-content/uploads/My-Services-Dashboard-350x90.jpg 350w" sizes="(max-width: 800px) 100vw, 800px" /></p> <p>&nbsp;</p> <p>It will then appear on the dashboard and can be selected:</p> <p><img class="aligncenter wp-image-104860" src="https://blog.pythian.com/wp-content/uploads/Backup-Service-in-Dashboard.jpg" alt="" width="400" height="138" srcset="https://blog.pythian.com/wp-content/uploads/Backup-Service-in-Dashboard.jpg 430w, https://blog.pythian.com/wp-content/uploads/Backup-Service-in-Dashboard-350x120.jpg 350w" sizes="(max-width: 400px) 100vw, 400px" /></p> <p>&nbsp;</p> <p>Selecting that service provides some details but not critical information about how it&#8217;s being used or the REST endpoint URL:</p> <p><img class="aligncenter wp-image-104861" src="https://blog.pythian.com/wp-content/uploads/Service-Overview.jpg" alt="" width="600" height="459" srcset="https://blog.pythian.com/wp-content/uploads/Service-Overview.jpg 756w, https://blog.pythian.com/wp-content/uploads/Service-Overview-465x356.jpg 465w, https://blog.pythian.com/wp-content/uploads/Service-Overview-350x268.jpg 350w" sizes="(max-width: 600px) 100vw, 600px" /></p> <p>&nbsp;</p> <p>There&#8217;s other account information not shown in the screenshot, but the critical information needed to start using the service is not displayed.</p> <p>Similarly, most MOS documents and online documentation currently suggest that the REST endpoint is in the format of:</p> <pre style="padding-left: 30px;">https://<span style="color: #ff0000;"><strong>myDomain</strong></span>.storage.oraclecloud.com/v1/myService-myDomain</pre> <p>However, if your Oracle Cloud account was created recently, it likely will not be in formatted that way and instead, will use a URL that starts with, or includes, a region-specific sub-domain.</p> <p>The trick to finding the actual REST endpoint URL is 
to recognize that the Database Backup Service still uses Oracle Cloud Classic object storage.</p> <p>&nbsp;</p> <h2>The Quick Solution: Add the region to the REST URL</h2> <p>The region is probably pretty easy to guess (i.e. &#8220;us&#8221; or &#8220;em&#8221;) however to determine the proper URL with accuracy, the easiest thing to do is to determine it from the Classic Storage service dashboard.</p> <p>Again, choose to customize the dashboard and this time, choose to add Storage Classic:</p> <p><img class="aligncenter wp-image-104862" src="https://blog.pythian.com/wp-content/uploads/Add-Storage-Classic.jpg" alt="" width="400" height="81" srcset="https://blog.pythian.com/wp-content/uploads/Add-Storage-Classic.jpg 620w, https://blog.pythian.com/wp-content/uploads/Add-Storage-Classic-465x94.jpg 465w, https://blog.pythian.com/wp-content/uploads/Add-Storage-Classic-350x71.jpg 350w" sizes="(max-width: 400px) 100vw, 400px" /></p> <p>&nbsp;</p> <p>When added to the dashboard, click on the service to open up the &#8220;<strong>Service: Oracle Cloud Infrastructure Object Storage Classic</strong>&#8221; service page.  Near the bottom under <b>Additional Information</b> will be the proper <b>REST Endpoint</b> and <b>Auth V1 Endpoint</b> to use.</p> <p>From the top right of the service page, you can also click on the <strong>Open Service Console</strong> link to get to the details of the storage usage.</p> <p>Hence, the URL is probably something simple like:</p> <pre style="padding-left: 30px;">https://myDomain<strong><span style="color: #0000ff;">.us.</span></strong>storage.oraclecloud.com/v1/Storage-myDomain</pre> <p>&nbsp;</p> <h2>An Alternate URL Also Exists on the Storage Service Console</h2> <p>To make matters a little more confusing, an alternate but still usable URL can be obtained from the storage service page.</p> <p>First, navigate to the <b>Storage Service Console</b> page.  Either by choosing the <strong>Open Service Console</strong> button from the top right (in the blue bar) from the &#8220;<span id="pt1:pt2:ot1" class="p_AFHoverTarget x22w x32y" title="Service Type: Storage Classic">Service: Oracle Cloud Infrastructure Object Storage Classic&#8221; page or by choosing it in the black navigation menu on the left.</span></p> <p>Under the <b>Account </b>tab, we can see the REST endpoint URL for the Storage Service which also can be used:</p> <p><img class="aligncenter wp-image-104863" src="https://blog.pythian.com/wp-content/uploads/Account_Details.jpg" alt="" width="600" height="342" srcset="https://blog.pythian.com/wp-content/uploads/Account_Details.jpg 734w, https://blog.pythian.com/wp-content/uploads/Account_Details-465x265.jpg 465w, https://blog.pythian.com/wp-content/uploads/Account_Details-350x199.jpg 350w" sizes="(max-width: 600px) 100vw, 600px" /></p> <p>&nbsp;</p> <p>This URL is also one that can be used when installing the <a href="http://www.oracle.com/technetwork/database/availability/oracle-cloud-backup-2162729.html">Oracle Database Cloud Backup Module</a> so that RMAN backups can write to the Backup Service.</p> <p>&nbsp;</p> <h2>Creating Storage Containers</h2> <p>The next challenge is how to create your own storage containers within the service.  You may want to create storage containers to logically divide the cloud backups by department, business unit, server, or even database.</p> <p>Creating a container is rather straightforward from the <b>Storage Classic</b> service web page as shown previously &#8211; just use the <b>Containers </b>tab.  
But it&#8217;s more challenging if you want to use scripted or manual cURL commands and REST, which may be necessary if you have a large deployment.</p> <p>Oracle provides the MOS document &#8220;<a href="https://support.oracle.com/epmos/faces/DocContentDisplay?id=2225766.1">Step-by-Step procedure to place On-Premise Database backup on Oracle Cloud (Doc ID 2225766.1)</a>&#8221; to complement the online documentation.  But again the URL format provided does not work for modern Oracle Cloud accounts.</p> <p>To create the required authorization token, the cURL command required is actually:</p> <pre>curl -i -X GET https://<b>&lt;REST endpoint base URL&gt;</b>/auth/v1.0 -H 'X-Storage-User: Storage-<b>&lt;identityDomain&gt;</b>:<b>&lt;username&gt;</b>' -H 'X-Storage-Pass: <b>&lt;password&gt;</b>'</pre> <p>&nbsp;</p> <p>The catch here is to use the base URL from your REST endpoint (as determined previously).  For example: <a href="https://uscom-central-1b.us.storage.oraclecloud.com/auth/v1.0">https://uscom-central-1b.us.storage.oraclecloud.com/auth/v1.0</a></p> <p>Example output:</p> <pre lang="bash">$ curl -i -X GET https://uscom-central-1b.us.storage.oraclecloud.com/auth/v1.0 -H  'X-Storage-User: Storage-*********:simon_pane@********' -H 'X-Storage-Pass: *********' HTTP/1.1 200 OK date: 1532712571246 X-Auth-Token: AUTH_tkabf61211e7cfe0a3****************** X-Storage-Token: AUTH_tkabf61211e7cfe0a3****************** X-Storage-Url: https://uscom-central-1b.storage.oraclecloud.com/v1/Storage-********* Content-Length: 0 Server: Oracle-Storage-Cloud-Service </pre> <p>&nbsp;</p> <p>You can then use the <b>X-Auth-Token </b>and <b>X-Storage-Url </b>values returned from the above command to create the desired storage container:</p> <pre lang="bash">$ curl -v -s -X PUT -H "X-Auth-Token: AUTH_tkabf61211e7cfe0a3******************" https://uscom-central-1b.storage.oraclecloud.com/v1/Storage-*********/<b>DB_`hostname -s`_${ORACLE_SID}</b> * About to connect() to uscom-central-1b.storage.oraclecloud.com port 443 (#0) * Trying 129.150.7.1... 
* Connected to uscom-central-1b.storage.oraclecloud.com (129.150.7.1) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: * subject: CN=*.storage.oraclecloud.com,OU=Oracle CSEC CHICAGO,O=Oracle Corporation,L=Redwood City,ST=California,C=US * start date: Sep 11 00:00:00 2017 GMT * expire date: Dec 11 23:59:59 2018 GMT * common name: *.storage.oraclecloud.com * issuer: CN=Symantec Class 3 Secure Server CA - G4,OU=Symantec Trust Network,O=Symantec Corporation,C=US &gt; PUT /v1/Storage-*********/DB_oci-12201-vm1_ORCL HTTP/1.1 &gt; User-Agent: curl/7.29.0 &gt; Host: uscom-central-1b.storage.oraclecloud.com &gt; Accept: */* &gt; X-Auth-Token: AUTH_tkabf61211e7cfe0a3****************** &gt; &lt; <b>HTTP/1.1 201 Created</b> &lt; X-Last-Modified-Timestamp: 1532712939.95273 &lt; X-Trans-Id: txfbe0422b4a5e4fe08f411-*************** &lt; Content-Length: 0 &lt; Date: Fri, 27 Jul 2018 17:35:40 GMT &lt; Server: Oracle-Storage-Cloud-Service &lt; * Connection #0 to host uscom-central-1b.storage.oraclecloud.com left intact </pre> <p>&nbsp;</p> <p>And the new container can be confirmed from the Web UI, if required:</p> <p><img class="aligncenter wp-image-104864" src="https://blog.pythian.com/wp-content/uploads/Container-List.jpg" alt="" width="600" height="324" srcset="https://blog.pythian.com/wp-content/uploads/Container-List.jpg 853w, https://blog.pythian.com/wp-content/uploads/Container-List-465x251.jpg 465w, https://blog.pythian.com/wp-content/uploads/Container-List-350x189.jpg 350w" sizes="(max-width: 600px) 100vw, 600px" /></p> <p>&nbsp;</p> <h2>Installing and backing up using the Oracle Database Cloud Backup Module</h2> <p>Now that the REST endpoint URL has been determined and the storage container created, the rest of the deployment is straightforward and follows the documented procedures.</p> <p>Install using the REST endpoint URL and use the <b>-container</b> option to specify the container to use:</p> <pre lang="bash">$ java -jar opc_install.jar \ &gt; -serviceName Storage \ &gt; -identityDomain ******** \ &gt; -host https://uscom-central-1b.storage.oraclecloud.com/v1/Storage-******** \ &gt; -opcId 'simon_pane@********' \ &gt; -opcPass '********' \ &gt; -walletDir $ORACLE_HOME/dbs/opc${ORACLE_SID} \ &gt; -libDir ${ORACLE_HOME}/lib \ &gt; -container DB_`hostname -s`_${ORACLE_SID} Oracle Database Cloud Backup Module Install Tool, build 12.2.0.1.0DBBKPCSBP_2018-06-12 Oracle Database Cloud Backup Module credentials are valid. Backups would be sent to container DB_oci-12201-vm1_ORCL. Oracle Database Cloud Backup Module wallet created in directory /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/opcORCL. Oracle Database Cloud Backup Module initialization file /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/opcORCL.ora created. Downloading Oracle Database Cloud Backup Module Software Library from file opc_linux64.zip. Download complete. $ </pre> <p>&nbsp;</p> <p>Adjust the RMAN settings:</p> <pre lang="bash">$ echo "configure channel device type sbt parms='SBT_LIBRARY=libopc.so,SBT_PARMS=(OPC_PFILE=${ORACLE_HOME}/dbs/opc${ORACLE_SID}.ora)';" | rman target=/ Recovery Manager: Release 12.2.0.1.0 - Production on Fri Jul 27 19:33:58 2018 Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved. 
connected to target database: ORCL (DBID=1509082029) RMAN&gt; using target database control file instead of recovery catalog new RMAN configuration parameters: CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=libopc.so,SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/12.2.0.1/dbhome_1/dbs/opcORCL.ora)'; new RMAN configuration parameters are successfully stored RMAN&gt; Recovery Manager complete. $ echo "configure default device type to sbt;" | rman target=/ Recovery Manager: Release 12.2.0.1.0 - Production on Fri Jul 27 19:33:59 2018 Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved. connected to target database: ORCL (DBID=1509082029) RMAN&gt; using target database control file instead of recovery catalog new RMAN configuration parameters: CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE'; new RMAN configuration parameters are successfully stored RMAN&gt; Recovery Manager complete. $ echo "configure controlfile autobackup on;" | rman target=/ Recovery Manager: Release 12.2.0.1.0 - Production on Fri Jul 27 19:34:00 2018 Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved. connected to target database: ORCL (DBID=1509082029) RMAN&gt; using target database control file instead of recovery catalog new RMAN configuration parameters: CONFIGURE CONTROLFILE AUTOBACKUP ON; new RMAN configuration parameters are successfully stored RMAN&gt; Recovery Manager complete. $ </pre> <p>&nbsp;</p> <p>And finally, run a backup remembering to include backup encryption:</p> <pre lang="bash">$ read -s BUpassword $ echo -e "set encryption on identified by \"$BUpassword\" only;\nbackup database plus archivelog;" | rman target=/ Recovery Manager: Release 12.2.0.1.0 - Production on Fri Jul 27 19:40:42 2018 Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved. connected to target database: ORCL (DBID=1509082029) RMAN&gt; executing command: SET encryption using target database control file instead of recovery catalog RMAN&gt; Starting backup at 27-JUL-18 current log archived allocated channel: ORA_SBT_TAPE_1 channel ORA_SBT_TAPE_1: SID=37 device type=SBT_TAPE channel ORA_SBT_TAPE_1: Oracle Database Backup Service Library VER=12.2.0.2 channel ORA_SBT_TAPE_1: starting archived log backup set channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set input archived log thread=1 sequence=4372 RECID=4350 STAMP=982611643 channel ORA_SBT_TAPE_1: starting piece 1 at 27-JUL-18 channel ORA_SBT_TAPE_1: finished piece 1 at 27-JUL-18 piece handle=07t92tls_1_1 tag=TAG20180727T194044 comment=API Version 2.0,MMS Version 12.2.0.2 channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:15 Finished backup at 27-JUL-18 Starting backup at 27-JUL-18 using channel ORA_SBT_TAPE_1 channel ORA_SBT_TAPE_1: starting full datafile backup set channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set input datafile file number=00002 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_soe32_tb_fnl8cmbp_.dbf input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_fngxkvdo_.dbf input datafile file number=00004 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_fngxnfkq_.dbf input datafile file number=00003 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_fngxmnfx_.dbf input datafile file number=00007 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_fngxngn8_.dbf channel ORA_SBT_TAPE_1: starting piece 1 at 27-JUL-18 ... 
</pre> <p>&nbsp;</p> <p>When verifying the backup through RMAN, we can see the storage URL and container used in the <b>media </b>field:</p> <pre lang="bash">$ echo "list backup of database;" | rman target=/ Recovery Manager: Release 12.2.0.1.0 - Production on Fri Jul 27 20:22:02 2018 Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved. connected to target database: ORCL (DBID=1509082029) RMAN&gt; using target database control file instead of recovery catalog List of Backup Sets =================== BS Key Type LV Size Device Type Elapsed Time Completion Time ------- ---- -- ---------- ----------- ------------ --------------- 7 Full 45.88G SBT_TAPE 00:27:05 27-JUL-18 BP Key: 7 Status: AVAILABLE Compressed: NO Tag: TAG20180727T194059 Handle: 08t92tmb_1_1 <strong>Media: uscom-central-1b.sto..orage-********/DB_oci-12201-vm1_ORCL</strong> List of Datafiles in backup set 7 File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name ---- -- ---- ---------- --------- ----------- ------ ---- 1 Full 1763447 27-JUL-18 NO /u01/app/oracle/oradata/ORCL/datafile/o1_mf_system_fngxkvdo_.dbf 2 Full 1763447 27-JUL-18 NO /u01/app/oracle/oradata/ORCL/datafile/o1_mf_soe32_tb_fnl8cmbp_.dbf 3 Full 1763447 27-JUL-18 NO /u01/app/oracle/oradata/ORCL/datafile/o1_mf_sysaux_fngxmnfx_.dbf 4 Full 1763447 27-JUL-18 NO /u01/app/oracle/oradata/ORCL/datafile/o1_mf_undotbs1_fngxnfkq_.dbf 7 Full 1763447 27-JUL-18 NO /u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_fngxngn8_.dbf RMAN&gt; Recovery Manager complete. $ </pre> <p>&nbsp;</p> <p>And if desired, the storage object chunks can also be viewed from the Web UI:</p> <p><img class="aligncenter wp-image-104865" src="https://blog.pythian.com/wp-content/uploads/Backup-Pieces.jpg" alt="" width="600" height="285" srcset="https://blog.pythian.com/wp-content/uploads/Backup-Pieces.jpg 999w, https://blog.pythian.com/wp-content/uploads/Backup-Pieces-465x221.jpg 465w, https://blog.pythian.com/wp-content/uploads/Backup-Pieces-350x166.jpg 350w" sizes="(max-width: 600px) 100vw, 600px" /></p> <p>&nbsp;</p> <h2>Summary of Common Problems</h2> <p>As a reference, some of the problems I encountered when trying to set this up include:</p> <pre><strong><span style="color: #800000;">Could not resolve host: MyDomain.storage.oraclecloud.com; Unknown error</span></strong> </pre> <pre lang="bash">$ curl -v -s -X GET -H "X-Storage-User: Storage-*******:simon_pane@*******" -H "X-Storage-Pass: **************" https://*******.storage.oraclecloud.com/auth/v1.0 * Could not resolve host: *******.storage.oraclecloud.com; Unknown error * Closing connection 0 </pre> <p>This is due to using the old URL format with a new Oracle Cloud account. The base URLs no longer include the identity domain. Instead, include the geographic region from the REST endpoint URL.</p> <p>&nbsp;</p> <pre><span style="color: #800000;"><strong>Error: Could not authenticate to Oracle Database Cloud Backup Module …. testConnection: 401 Unauthorized</strong></span></pre> <pre lang="bash">$ java -jar opc_install.jar \ &gt; -serviceName Storage \ &gt; -identityDomain ******* \ &gt; -host https://uscom-central-1b.storage.oraclecloud.com/v1/Storage-******* \ &gt; -opcId 'simon_pane@*******' \ &gt; -opcPass '*******.' \ &gt; -walletDir ${ORACLE_HOME}/dbs/opc_wallet \ &gt; -libDir ${ORACLE_HOME}/lib Oracle Database Cloud Backup Module Install Tool, build 12.2.0.1.0DBBKPCSBP_2018-06-12 Error: Could not authenticate to Oracle Database Cloud Backup Module Please verify the values of -host, -opcId, and -opcPass options. 
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: testConnection: 401 Unauthorized. at oracle.backup.opc.install.OpcConfig.testConnection(OpcConfig.java:296) at oracle.backup.opc.install.OpcConfig.doOpcConfig(OpcConfig.java:185) at oracle.backup.opc.install.OpcConfig.main(OpcConfig.java:177) Caused by: java.io.IOException: testConnection: 401 Unauthorized. at oracle.backup.opc.install.OpcConfig.testConnection(OpcConfig.java:252) ... 2 more $ </pre> <p>This can be due to the cloud account password having a special character (# @ ! $) in it. Resolve by changing the cloud account password to not include any special characters.</p> <p>&nbsp;</p> <pre><span style="color: #800000;"><strong>java.net.UnknownHostException: ********.storage.oraclecloud.com</strong></span></pre> <pre lang="bash">$ java -jar opc_install.jar \ &gt; -serviceName Storage \ &gt; -identityDomain ******** \ &gt; -host https://********.storage.oraclecloud.com/v1/Storage-******** \ &gt; -opcId 'simon_pane@********' \ &gt; -opcPass 'back1tup0line4ME' \ &gt; -walletDir $ORACLE_HOME/dbs/opc$ORACLE_SID \ &gt; -libDir $ORACLE_HOME/lib Oracle Database Cloud Backup Module Install Tool, build 12.2.0.1.0DBBKPCSBP_2018-06-12 Error: Could not authenticate to Oracle Database Cloud Backup Module Please verify the values of -host, -opcId, and -opcPass options. Exception in thread "main" java.lang.RuntimeException: java.net.UnknownHostException: ********.storage.oraclecloud.com at oracle.backup.opc.install.OpcConfig.testConnection(OpcConfig.java:296) at oracle.backup.opc.install.OpcConfig.doOpcConfig(OpcConfig.java:185) at oracle.backup.opc.install.OpcConfig.main(OpcConfig.java:177) ... </pre> <p>This error is due to using the wrong URL (i.e. the old URL format for a new Cloud identity domain). Resolve by using the REST endpoint URL as described previously.</p> <p>&nbsp;</p> <pre><span style="color: #800000;"><strong>HTTP/1.1 401 Unauthorized</strong></span></pre> <pre lang="bash">$ curl -v -s -X GET -H "X-Storage-User: Storage-********:simon_pane@********" -H "X-Storage-Pass: **************" https://uscom-central-1b.storage.oraclecloud.com/auth/v1.0 * About to connect() to uscom-central-1b.storage.oraclecloud.com port 443 (#0) * Trying 129.150.7.1... * Connected to uscom-central-1b.storage.oraclecloud.com (129.150.7.1) port 443 (#0) * Initializing NSS with certpath: sql:/etc/pki/nssdb * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: * subject: CN=*.storage.oraclecloud.com,OU=Oracle CSEC CHICAGO,O=Oracle Corporation,L=Redwood City,ST=California,C=US * start date: Sep 11 00:00:00 2017 GMT * expire date: Dec 11 23:59:59 2018 GMT * common name: *.storage.oraclecloud.com * issuer: CN=Symantec Class 3 Secure Server CA - G4,OU=Symantec Trust Network,O=Symantec Corporation,C=US &gt; GET /auth/v1.0 HTTP/1.1 &gt; User-Agent: curl/7.29.0 &gt; Host: uscom-central-1b.storage.oraclecloud.com &gt; Accept: */* &gt; X-Storage-User: Storage-********:simon_pane@******** &gt; X-Storage-Pass: ************** &gt; &lt; HTTP/1.1 401 Unauthorized &lt; X-Trans-Id: tx30595fda265c48e68bd28-************ &lt; WWW-Authenticate: Token &lt; Content-Type: text/plain;charset=UTF-8 &lt; Content-Length: 27 &lt; Date: Thu, 26 Jul 2018 13:00:37 GMT &lt; Server: Oracle-Storage-Cloud-Service &lt; * Connection #0 to host uscom-central-1b.storage.oraclecloud.com left intact Invalid user id or password$ </pre> <p>This is due to the format of the cURL command. 
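For reference, a working form of the call (the same pattern shown earlier in this post, with the site-specific values replaced by placeholders) is:</p> <pre lang="bash">curl -i -X GET https://&lt;region endpoint&gt;.storage.oraclecloud.com/auth/v1.0 -H 'X-Storage-User: Storage-&lt;identityDomain&gt;:&lt;username&gt;' -H 'X-Storage-Pass: &lt;password&gt;'</pre> <p>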
The actual URL is correct but the cURL command arguments need to be adjusted. Use the format as described previously, not the format documented in the online documentation or MOS notes.</p> </div></div> Simon Pane https://blog.pythian.com/?p=104858 Fri Aug 03 2018 10:00:31 GMT-0400 (EDT) Oracle Parallel Query Hints Reference – Part 4: GBY_PUSHDOWN https://blog.pythian.com/oracle-parallel-query-hints-reference-part-4-gby_pushdown/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Welcome to Part 4 of the series.</p> <p>The GBY_PUSHDOWN is a very interesting hint, introduced in 10.2 &#8211; it doesn&#8217;t have the PQ prefix, but nevertheless plays a <strong>crucial role in Parallel query plans.</strong></p> <p>Purpose: When GBY_PUSHDOWN is activated, an extra &#8220;aggregation&#8221; step is introduced in the plan, before the PQ workers that read the data send the data to the PQ receivers after a re-shuffle.</p> <p>Example NO PUSHDOWN:</p> <pre class="brush: plain; title: ; notranslate"> select /*+PARALLEL(4) NO_GBY_PUSHDOWN */ mod5_id, count(*) from tlarge group by mod5_id; -------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time | TQ |IN-OUT| PQ Distrib | -------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 5 | 15 | 640 (1)| 00:00:01 | | | | | 1 | PX COORDINATOR | | | | | | | | | | 2 | PX SEND QC (RANDOM) | :TQ10001 | 5 | 15 | 640 (1)| 00:00:01 | Q1,01 | P-&gt;S | QC (RAND) | | 3 | HASH GROUP BY | | 5 | 15 | 640 (1)| 00:00:01 | Q1,01 | PCWP | | | 4 | PX RECEIVE | | 100K| 292K| 638 (0)| 00:00:01 | Q1,01 | PCWP | | | 5 | PX SEND HASH | :TQ10000 | 100K| 292K| 638 (0)| 00:00:01 | Q1,00 | P-&gt;P | HASH | | 6 | PX BLOCK ITERATOR | | 100K| 292K| 638 (0)| 00:00:01 | Q1,00 | PCWC | | | 7 | TABLE ACCESS STORAGE FULL| TLARGE | 100K| 292K| 638 (0)| 00:00:01 | Q1,00 | PCWP | | -------------------------------------------------------------------------------------------------------------------------- </pre> <p>Example PUSHDOWN</p> <pre class="brush: plain; title: ; notranslate"> select /*+PARALLEL(4) GBY_PUSHDOWN */ mod5_id, count(*) from tlarge group by mod5_id; --------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time | TQ |IN-OUT| PQ Distrib | --------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 5 | 15 | 640 (1)| 00:00:01 | | | | | 1 | PX COORDINATOR | | | | | | | | | | 2 | PX SEND QC (RANDOM) | :TQ10001 | 5 | 15 | 640 (1)| 00:00:01 | Q1,01 | P-&gt;S | QC (RAND) | | 3 | HASH GROUP BY | | 5 | 15 | 640 (1)| 00:00:01 | Q1,01 | PCWP | | | 4 | PX RECEIVE | | 5 | 15 | 640 (1)| 00:00:01 | Q1,01 | PCWP | | | 5 | PX SEND HASH | :TQ10000 | 5 | 15 | 640 (1)| 00:00:01 | Q1,00 | P-&gt;P | HASH | | 6 | HASH GROUP BY | | 5 | 15 | 640 (1)| 00:00:01 | Q1,00 | PCWP | | | 7 | PX BLOCK ITERATOR | | 100K| 292K| 638 (0)| 00:00:01 | Q1,00 | PCWC | | | 8 | TABLE ACCESS STORAGE FULL| TLARGE | 100K| 292K| 638 (0)| 00:00:01 | Q1,00 | PCWP | | --------------------------------------------------------------------------------------------------------------------------- </pre> <p>See the extra step #6 HASH GROUP BY. 
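If you want to check whether this transformation kicked in for one of your own queries, one option (a quick sketch) is to pull the plan outline of the last statement from the cursor cache:</p> <pre class="brush: plain; title: ; notranslate"> select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL +OUTLINE')); </pre> <p>When the push-down has been applied, the outline section should include the GBY_PUSHDOWN hint for the relevant query block.</p> <p>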
Each PQ reader will read a set of BLOCKs and aggregate the data in its local memory space by mod5_id. As a reminder, there are only 5 values for mod5_id, so the memory for aggregation will be very small. When there is no more work assigned to the PQ reader process, each PQ reader will distribute its aggregated data based on the hash key for final aggregation.</p> <p>This is best illustrated with the communication report for each query (V$PQ_TQSTAT report). Note that this can only be extracted from the session that ran the SQL.</p> <pre class="brush: plain; title: ; notranslate"> v$PQ_TQSTAT query: select dfo_number &quot;d&quot;, tq_id as &quot;t&quot;, server_type, num_rows,rpad('x',round(num_rows*10/nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type),0)),'x') as &quot;pr&quot;, round(bytes/1024/1024) mb, process, instance i,round(ratio_to_report (num_rows) over (partition by dfo_number, tq_id, server_type)*100) as &quot;%&quot;, open_time, avg_latency, waits, timeouts,round(bytes/nullif(num_rows,0)) as &quot;b/r&quot; from v$pq_tqstat order by dfo_number, tq_id, server_type desc, process; </pre> <pre class="brush: plain; title: ; notranslate"> select /*+PARALLEL(2) NO_GBY_PUSHDOWN */ mod5_id, count(*) from tlarge t group by mod5_id; d, t, SERVER_TYPE, NUM_ROWS, pr, MB, PROCESS, I, %, OPEN_TIME, AVG_LATENCY, WAITS, TIMEOUTS, b/r 1 0 Producer 53608 xxxxxxxxxx 0 P002 1 54 0 0 37 18 4 1 0 Producer 46392 xxxxxxxxx 0 P003 1 46 0 0 36 18 4 1 0 Consumer 60000 xxxxxxxxxx 0 P000 1 60 0 0 188 185 4 1 0 Consumer 40000 xxxxxxx 0 P001 1 40 0 0 188 185 4 1 1 Producer 3 xxxxxxxxxx 0 P000 1 60 0 0 24 12 16 1 1 Producer 2 xxxxxxx 0 P001 1 40 0 0 16 8 20 1 1 Consumer 5 xxxxxxxxxx 0 QC 1 100 0 0 3 0 17 </pre> <pre class="brush: plain; title: ; notranslate"> select /*+PARALLEL(2) GBY_PUSHDOWN */ mod5_id, count(*) from tlarge t group by mod5_id; d, t, SERVER_TYPE, NUM_ROWS, pr, MB, PROCESS, I, %, OPEN_TIME, AVG_LATENCY, WAITS, TIMEOUTS, b/r 1 0 Producer 5 xxxxxxxxxx 0 P002 1 50 0 0 0 0 25 1 0 Producer 5 xxxxxxxxxx 0 P003 1 50 0 0 0 0 25 1 0 Consumer 6 xxxxxxxxxx 0 P000 1 60 0 0 84 81 24 1 0 Consumer 4 xxxxxxx 0 P001 1 40 0 0 84 81 28 1 1 Producer 3 xxxxxxxxxx 0 P000 1 60 0 0 2 1 16 1 1 Producer 2 xxxxxxx 0 P001 1 40 0 0 2 1 20 1 1 Consumer 5 xxxxxxxxxx 0 QC 1 100 0 0 3 0 17 </pre> <p>Notice how, for the query WITHOUT the PUSHDOWN, the two producers sent 53,608 and 46,392 rows respectively to the consumers. Given that I have five values and two consumers, one of the consumers had 20,000 extra values, so the consumers took 60,000 and 40,000 rows each.</p> <p>While in the case WITH the PUSHDOWN, the producers only produced five rows each, significantly reducing the amount of data exchanged between the Parallel Query workers.</p> <p>The drawback of this optimization is that if the number of unique values in the groups identified by the columns in the &#8220;group by&#8221; clause is very high (i.e. each group has very few members), then this optimization will do more work. But in most cases, the savings are quite significant.</p> <p>NOTE: This is a cost-based optimization, so it&#8217;s very possible for the Optimiser to choose NOT to use it &#8211; when in fact it should.</p> </div></div> Christo Kutrovsky https://blog.pythian.com/?p=104855 Thu Aug 02 2018 14:40:41 GMT-0400 (EDT) 2019 Leadership Program - Now Accepting Applications https://www.odtug.com/p/bl/et/blogaid=811&source=1 Are you looking to invest in your professional development? Do you enjoy the ODTUG community and are you looking to become more involved? 
The ODTUG leadership program is a great way to accomplish both goals and broaden your network. ODTUG https://www.odtug.com/p/bl/et/blogaid=811&source=1 Thu Aug 02 2018 11:57:06 GMT-0400 (EDT) News and Updates from Google Cloud Platform https://blog.pythian.com/news-and-updates-from-google-cloud-platform/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><em><span style="font-weight: 400;">I recently joined Chris Presley for Episode 5 of his podcast</span><a href="https://blog.pythian.com/?s=cloudscape"> <span style="font-weight: 400;">Cloudscape</span></a><span style="font-weight: 400;"> to talk about what’s happening in the world of cloud. My focus was the most recent events surrounding Google Cloud Platform (GCP).</span></em></p> <p><em><span style="font-weight: 400;">Some of the highlights of our discussion included:</span></em></p> <ul> <li><span style="font-weight: 400;">         </span><b>BigQuery</b></li> <li><span style="font-weight: 400;">         </span><b>BigTable</b></li> <li><span style="font-weight: 400;">         </span><b>Kubernetes</b></li> <li><span style="font-weight: 400;">         </span><b>Stackdriver</b></li> <li><span style="font-weight: 400;">         </span><b>New Google regions</b></li> </ul> <p>&nbsp;</p> <p><b>BigQuery</b></p> <p><span style="font-weight: 400;">BigQuery had a big development recently with the availability of numeric data types.  </span></p> <p><span style="font-weight: 400;">It’s important for financial institutions to have proper numeric data types so they can aggregate calculations accurately, and that was missing in BigQuery. It is now available in beta. This is a big deal for financial institutions who do a lot of analytics in the aggregation of money types on BigQuery. It will drive a lot of customers to adopt BigQuery, especially customers who previously had convoluted ways of dealing with numeric data types for accurate currency reporting mechanisms as now they will have a direct way to handle the numeric data.</span></p> <p><b>BigTable</b></p> <p><span style="font-weight: 400;">BigTable had an important announcement with instance level IAM-obtained access management to general availability. This was part of Google’s long-term plan, it was in beta for months. A lot of organizations wanted to have this because they would be able to have more granular security control of instances within BigTable and manage it accordingly. </span></p> <p><b>Kubernetes</b></p> <p><span style="font-weight: 400;">Kubernetes had several announcements recently related to the release of Kubernetes 1.10 and the Google Kubernetes Engine.</span></p> <p><span style="font-weight: 400;">One of the most notable features of this release is the availability of shared VPCs. Previously, Kubernetes clusters would be running in their own clusters. To connect all these networks together, you would have to connect every cluster to each other which was very difficult, not very efficient, and needed a lot of resource overhead. Now, there is the availability of shared VPCs. Although there are some performance issues, this makes it very easy and very convenient for large organizations deploying production for Kubernetes workloads with several teams and multiple tenants. They can share this physical resource while still maintaining logical separation.</span></p> <p><span style="font-weight: 400;">It used to be difficult to communicate between Kubernetes clusters and between one projects. 
Now, you can compartmentalize the Kubernetes engine and the Kubernetes clusters into separate projects, sharing off those common resources across multiple teams.</span></p> <p><span style="font-weight: 400;">Security was also a big concern because everything was done at the organizational level and organizational administrators did not have a lot of control over the individual flag of origins specific to projects. Now they can separate access to the projects and data in a much more granular fashion, providing more reliability and an audit trail in terms of which security permissions are granted to which users and services.</span></p> <p><span style="font-weight: 400;">A key aspect of this is billing. With separate projects and separate Kubernetes engine clusters, you can isolate their resource usage and understand what the billing is going to be for each individual project team and each individual tenant workload. You can then budget accordingly.</span></p> <p><span style="font-weight: 400;">Several companies have different applications which are in specific and isolated workloads for their customers, and they need to have the multi-tenant isolation for sensitive data. Rather than going through a very convoluted network isolation architecture, they can use the shared VPC concept and isolate workloads while at the same time sharing network resources. </span></p> <p><span style="font-weight: 400;">This release also provides the availability for regional persistent disks and regional clusters for higher availability. They provide a durable network-attached storage for synchronous replication of data between two zones in the same region which was not available earlier. You had to do some hacking to do it effectively earlier, but it is available now, and you can configure it.</span></p> <p><span style="font-weight: 400;">One of the other important things that I thought would be very important to mention is the availability of custom boot disks. So, earlier you could only use a standard persistent disk for your Kubernetes clusters. But now you can also choose to use SSD &amp; custom boot disks. This would increase performance, especially for high performance and high throughput workloads. I know a lot of use cases out there that want to use a very high throughput Kafka and similar environments on Kubernetes clusters. This is ideal for running high throughput workloads on Kubernetes.</span></p> <p><b>Stackdriver</b></p> <p><span style="font-weight: 400;">Stackdriver also had a couple of important updates.  Stackdriver is Google’s monitoring mechanism and one of the things they have been trying to do is improve their usability. Just recently, they launched a couple of usability enhancements.</span></p> <p><span style="font-weight: 400;">One of the key things was the ability to manage alerting policies in the monitoring API. This enables users to create a custom alerting condition for resources that are being monitored by Stackdriver based on metadata which has been assigned by conditions. This improves the flexibility and the metric guides that you can derive and report on.</span></p> <p><span style="font-weight: 400;">Stackdriver earlier had predefined custom templates which could report and capture the monitoring metrics.  
But now, the availability of a custom monitoring template makes it much easier to create your own alerting policies and better understand the behavior of your application.</span></p> <p><span style="font-weight: 400;">Another update is a beta version of a new alerting condition configuration in the Stackdriver UI. This is more of a UI feature rather than an API feature that can be very regulated and consumed. It allows you to find different alerting conditions more precisely and look into the metadata of all the log and all the reporting, enabling you to define a broader set of conditions. It really helps because it is a more powerful, complete method of identifying time series and specific aggregations, especially on log data. It enables you to be much more efficient, accurately do alerting on aggregation of custom metrics, and log these metrics. It gives you the ability to filter metadata to alert on very specific Kubernetes resources, for example.</span></p> <p><span style="font-weight: 400;">The final thing that I wanted to mention is that this also allows you to edit on new metric threshold conditions that were created by the API.</span></p> <p><span style="font-weight: 400;">These updates give users of Stackdriver a lot more flexibility and they are conducive to those specific application workloads and the application behavior you have on GCP.</span></p> <p><b>New Google regions</b></p> <p><span style="font-weight: 400;">Lastly, I want to mention that the Singapore region opened recently.  This is significant because Google always wanted more presence in the Asia-Pacific region. There are a lot of Google customers who are very excited to have the region open in their own backyard. A new region has also been announced in Zurich, Switzerland which is scheduled to open in 2019.</span></p> <p><i><span style="font-weight: 400;">This was a summary of the Google Cloud Platform topics we discussed during the podcast. Chris also welcomed Greg Baker (Amazon Web Services) and</span></i><a href="https://pythian.com/experts/warner-chaves/"> <i><span style="font-weight: 400;">Warner Chaves</span></i></a><i><span style="font-weight: 400;"> (Microsoft Azure) who also discussed topics related to their expertise.</span></i></p> <p><i><span style="font-weight: 400;">Click</span></i><a href="https://blog.pythian.com/cloudscape-podcast-episode-5-cloud-vendor-news-updates-may-2018/"> <i><span style="font-weight: 400;">here</span></i></a><i><span style="font-weight: 400;"> to hear the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.</span></i></p> </div></div> Kartick Sekar https://blog.pythian.com/?p=104907 Thu Aug 02 2018 10:46:02 GMT-0400 (EDT) Extended Histograms – 2 https://jonathanlewis.wordpress.com/2018/08/02/extended-histograms-2/ <p>Following on from <a href="https://jonathanlewis.wordpress.com/2018/07/31/extended-histograms/"><em><strong>the previous posting</strong></em></a> which raised the idea of faking a frequency histogram for a column group (extended stats), this is just a brief demonstration of how you can do this. 
It&#8217;s really only a minor variation of something I&#8217;ve published before, but it shows how you can use a query to generate a set of values for the histogram and it pulls in a detail about how Oracle generates and stores column group values.</p> <p>We&#8217;ll start with the same table as we had before &#8211; two columns which hold only the combinations (&#8216;Y&#8217;, &#8216;N&#8217;) or (&#8216;N&#8217;, &#8216;Y&#8217;) in a very skewed way, with a requirement to ensure that the optimizer provides an estimate of 1 if a user queries for (&#8216;N&#8217;,&#8217;N&#8217;) &#8230; and I&#8217;m going to go the extra mile and create a histogram that does the same when the query is for the final possible combination of (&#8216;Y&#8217;,&#8217;Y&#8217;).</p> <p>Here&#8217;s the starting code that generates the data, and creates histograms on all the columns (I&#8217;ve run this against 12.1.0.2 and 12.2.0.1 so far):</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: histogram_hack_2a.sql rem Author: Jonathan Lewis rem Dated: Jul 2018 rem rem Last tested rem 12.2.0.1 rem 12.1.0.2 rem 11.2.0.4 rem create table t1 as select 'Y' c2, 'N' c3 from all_objects where rownum &lt;= 71482 -- &gt; comment to deal with wordpress format issue. union all select 'N' c2, 'Y' c3 from all_objects where rownum &lt;= 1994 -- &gt; comment to deal with wordpress format issue. ; variable v1 varchar2(128) begin :v1 := dbms_stats.create_extended_stats(null,'t1','(c2,c3)'); dbms_output.put_line(:v1); end; / execute dbms_stats.gather_table_stats(null, 't1', method_opt=&gt;'for all columns size 10'); </pre> <p>In a variation from the previous version of the code I&#8217;ve used the <em>&#8220;create_extended_stats()&#8221;</em> function so that I can return the resulting virtual column name (also known as an <em>&#8220;extension&#8221;</em> name) into a variable that I can use later in an anonymous PL/SQL block.</p> <p>Let&#8217;s now compare the values stored in the histogram for that column with the values generated by a function call that I <a href="https://jonathanlewis.wordpress.com/2014/05/04/extended-stats-3/"><em><strong>first referenced a couple of years ago</strong></em></a>:</p> <pre class="brush: plain; title: ; notranslate"> select endpoint_value from user_tab_histograms where table_name = 'T1' and column_name = :v1 ; select distinct c2, c3, mod(sys_op_combined_hash(c2,c3),9999999999) endpoint_value from t1 ; ENDPOINT_VALUE -------------- 4794513072 6030031083 2 rows selected. C C ENDPOINT_VALUE - - -------------- N Y 4794513072 Y N 6030031083 2 rows selected. 
</pre> <p>So we have a method of generating the values that Oracle should store in the histogram; now we need to generate 4 values and supply them to a call to <em><strong>dbms_stats.set_column_stats()</strong></em> in the right order with the frequencies we want to see:</p> <pre class="brush: plain; title: ; notranslate"> declare l_distcnt number; l_density number; l_nullcnt number; l_avgclen number; l_srec dbms_stats.statrec; n_array dbms_stats.numarray; begin dbms_stats.get_column_stats ( ownname =&gt;null, tabname =&gt;'t1', colname =&gt;:v1, distcnt =&gt;l_distcnt, density =&gt;l_density, nullcnt =&gt;l_nullcnt, avgclen =&gt;l_avgclen, srec =&gt;l_srec ); l_srec.novals := dbms_stats.numarray(); l_srec.bkvals := dbms_stats.numarray(); for r in ( select mod(sys_op_combined_hash(c2,c3),9999999999) hash_value, bucket_size from ( select 'Y' c2, 'Y' c3, 1 bucket_size from dual union all select 'N' c2, 'N' c3, 1 from dual union all select 'Y' c2, 'N' c3, 71482 from dual union all select 'N' c2, 'Y' c3, 1994 from dual ) order by hash_value ) loop l_srec.novals.extend; l_srec.novals(l_srec.novals.count) := r.hash_value; l_srec.bkvals.extend; l_srec.bkvals(l_srec.bkvals.count) := r.bucket_size; end loop; n_array := l_srec.novals; l_distcnt := 4; l_srec.epc := 4; -- -- For 11g rpcnts must not be mentioned -- For 12c is must be set to null or you -- will (probably) raise error: -- ORA-06533: Subscript beyond count -- l_srec.rpcnts := null; dbms_stats.prepare_column_values(l_srec, n_array); dbms_stats.set_column_stats( ownname =&gt;null, tabname =&gt;'t1', colname =&gt;:v1, distcnt =&gt;l_distcnt, density =&gt;l_density, nullcnt =&gt;l_nullcnt, avgclen =&gt;l_avgclen, srec =&gt;l_srec ); end; </pre> <p>The outline of the code is simply: get_column_stats, set up a couple of arrays and simple variables, prepare_column_values, set_column_stats. The special detail that I&#8217;ve included here is that I&#8217;ve used a <em>&#8220;union all&#8221;</em> query to generate an ordered list of hash values (with the desired frequencies), then grown the arrays one element at a time to copy them in place. (That&#8217;s not the only option at this point, and it&#8217;s probably not the most efficient option &#8211; but it&#8217;s good enough). In the past I&#8217;ve used this type of approach but used an analytic query against the table data to produce the equivalent of 12c Top-frequency histogram in much older versions of Oracle.</p> <p>A couple of important points &#8211; I&#8217;ve set the <em>&#8220;end point count&#8221;</em> (<em><strong>l_srec.epc</strong></em>) to match the size of the arrays, and I&#8217;ve also changed the number of distinct values to match. For 12c to tell the code that this is a frequency histogram (and not a hybrid) I&#8217;ve had to null out the <em>&#8220;repeat counts&#8221;</em> array (<em><strong>l_srec.rpcnts</strong></em>). 
If you run this on 11g the reference to <strong><em>rpcnts</em></strong> is illegal so has to be commented out.</p> <p>After running this procedure, here&#8217;s what I get in <em><strong>user_tab_histograms</strong></em> for the column:</p> <pre class="brush: plain; title: ; notranslate"> select endpoint_value column_value, endpoint_number endpoint_number, endpoint_number - nvl(prev_endpoint,0) frequency from ( select endpoint_number, lag(endpoint_number,1) over( order by endpoint_number ) prev_endpoint, endpoint_value from user_tab_histograms where table_name = 'T1' and column_name = :v1 ) order by endpoint_number ; COLUMN_VALUE ENDPOINT_NUMBER FREQUENCY ------------ --------------- ---------- 167789251 1 1 4794513072 1995 1994 6030031083 73477 71482 8288761534 73478 1 4 rows selected. </pre> <p>It&#8217;s left as an exercise to the reader to check that the estimated cardinality for the predicate <em>&#8220;c2 = &#8216;N&#8217; and c3 = &#8216;N'&#8221;</em> is 1 with this histogram in place.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18689 Thu Aug 02 2018 09:13:58 GMT-0400 (EDT) How to Open your PL/SQL Objects…With the Keyboard in SQL Developer https://www.thatjeffsmith.com/archive/2018/08/how-to-open-your-pl-sql-objects-with-the-keyboard-in-sql-developer/ <p>I have some code&#8230;</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="plsql"><pre class="de1"><span class="kw1">BEGIN</span> give_raises<span class="br0">&#40;</span><span class="br0">&#41;</span><span class="sy0">;</span> <span class="kw1">END</span><span class="sy0">;</span> <span class="sy0">/</span></pre></div></div></div></div></div></div></div> <p>I can guess that GIVE_RAISES is a procedure of some kind. 
But now I want to open it, or &#8216;go to it&#8217; or &#8216;step into it.&#8217; </p> <p>And I want to do so without having to pick up my mouse.</p> <p>So I could tell you about the <a href="https://www.thatjeffsmith.com/archive/2012/11/sql-developer-describe-versus-ctrlclick-to-open-database-objects/" rel="noopener" target="_blank">Ctrl+Click trick</a>, but that&#8217;s all mouse.</p> <p>So what&#8217;s a child of the 80&#8217;s to do?</p> <p>Define a keyboard shortcut for &#8216;Open Declaration&#8217; and then use that.</p> <div id="attachment_6902" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/08/open-declaration.gif"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/08/open-declaration.gif" alt="" width="1024" height="768" class="size-full wp-image-6902" /></a><p class="wp-caption-text">Ctrl+Mouse Click versus using the keyboard to open my procedure.</p></div> <h3>It&#8217;s not JUST for PL/SQL</h3> <div id="attachment_6903" style="width: 785px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/08/open-dec-object.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/08/open-dec-object.png" alt="" width="775" height="340" class="size-full wp-image-6903" /></a><p class="wp-caption-text">Yup, it works for TABLES too!</p></div> <p>Note that I have two <a href="https://www.thatjeffsmith.com/archive/2014/05/how-can-i-see-tables-from-two-different-connections-in-oracle-sql-developer/" rel="noopener" target="_blank">Document Tab Groups</a> going, that&#8217;s why you can see my SQL Worksheet and the EMPLOYEES table side-by-side.</p> <h3>Keyboard Shortcuts</h3> <p>We don&#8217;t have enough keys to assign EVERYTHING a keyboard shortcut. Open Declaration was one of them. So, decide what makes sense for you. For me, it was ALT+I (I = Inspect.)</p> <p><a href="https://www.thatjeffsmith.com/archive/2012/11/keyboard-shortcuts-in-oracle-sql-developer/" rel="noopener" target="_blank">Everything You Need to Know about Keyboard Shortcuts PLUS A Cheat Sheet!</a></p> thatjeffsmith https://www.thatjeffsmith.com/?p=6901 Wed Aug 01 2018 11:15:58 GMT-0400 (EDT) Extended Histograms https://jonathanlewis.wordpress.com/2018/07/31/extended-histograms/ <p>Today&#8217;s little puzzle comes courtesy of <a href="https://www.freelists.org/post/oracle-l/extended-statistics-and-nonexistent-combined-values"><em><strong>the Oracle-L mailing list</strong></em></a>. A table has two columns (<em><strong>c2</strong> </em>and <em><strong>c3</strong></em>), which contain only the values <em>&#8216;Y&#8217;</em> and <em>&#8216;N&#8217;</em>, with the following distribution:</p> <pre class="brush: plain; title: ; notranslate"> select c2, c3, count(*) from t1 group by c2, c3 ; C C COUNT(*) - - ---------- N Y 1994 Y N 71482 2 rows selected. 
</pre> <p>The puzzle is this &#8211; how do you get the optimizer to predict a cardinality of zero (or, using its best approximation, 1) if you execute a query where the predicate is:</p> <pre class="brush: plain; title: ; notranslate"> where c2 = 'N' and c3 = 'N' </pre> <p>Here are 4 tests you might try:</p> <ul> <li>Create simple stats (no histograms) on c2 and c3.</li> <li>Create frequency histograms on c2 and c3</li> <li>Create a column group (extended stats) on (c2,c3) but no histograms</li> <li>Create a column group (extended stats) on (c2,c3) with a histogram on (c2, c3)</li> </ul> <p>If you do these tests you&#8217;ll find the estimated cardinalities are (from 12.1.0.2):</p> <ul> <li>18,369 &#8211; derived as 73,476 / 4  &#8230; total rows over total possible combinations</li> <li>1,940   &#8211; derived as 73,476 * (1,994/73,476) * (71,482/73,476) &#8230; total rows * fraction where c2 = &#8216;N&#8217; * fraction where c3 = &#8216;N&#8217;</li> <li>36,738 &#8211; derived as 73,476 / 2 &#8230; total rows / number of distinct combinations of <em>(c2, c3)</em></li> <li>997      &#8211; derived as 1,994 / 2 &#8230; half the frequency of the least frequently occurring value in the histogram</li> </ul> <p>The last <a href="https://jonathanlewis.wordpress.com/2009/04/23/histogram-change/"><em><strong>algorithm appeared in 10.2.0.4</strong></em></a>; prior to that a <em>&#8220;value not in frequency histogram&#8221;</em> would have been given an estimated cardinality of 1 (which is what the person on Oracle-L wanted to see).</p> <p>In fact the optimizer&#8217;s behaviour can be reverted to the 10.2.0.3 mechanism by setting fix-control 5483301 to zero (or off), either with an <em>&#8220;alter session&#8221;</em> call or inside the <em><strong>/*+ opt_param() */</strong></em> hint. There is, however, another option &#8211; if you get the column stats, then immediately set them (<em><strong>dbms_stats.get_column_stats()</strong></em>, <em><strong>dbms_stats.set_column_stats()</strong></em>) the optimizer defines the stats as <em>&#8220;user defined&#8221;</em> and (for reasons I don&#8217;t know &#8211; perhaps it&#8217;s an oversight) reverts to the 10.2.0.3 behaviour. 
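(For reference, the fix-control route mentioned a couple of sentences earlier looks something like the sketch below. It isn&#8217;t taken from the original demonstration, it&#8217;s just the commonly published syntax, so test it on your own version and, as noted towards the end of this post, get approval from Oracle support before relying on it.)</p> <pre class="brush: plain; title: ; notranslate">
rem
rem     Sketch only (not from the original post):
rem     revert to the 10.2.0.3 estimate (cardinality 1) for a value
rem     that does not appear in a frequency histogram.
rem

alter session set &quot;_fix_control&quot; = '5483301:0';

select
        /*+ opt_param('_fix_control' '5483301:0') */
        *
from    t1
where   c2 = 'N'
and     c3 = 'N'
;
</pre> <p>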
Here&#8217;s some code to demonstrate the point; as the srcipt header says, I&#8217;ve tested it on versions up to 18.1</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: histogram_hack_2.sql rem Author: Jonathan Lewis rem Dated: Jul 2018 rem rem Last tested rem 18.1.0.0 via LiveSQL (with some edits) rem 12.2.0.1 rem 12.1.0.2 rem create table t1 as select 'Y' c2, 'N' c3 from all_objects where rownum &lt;= 71482 -- &gt; comment to avoid format issue union all select 'N' c2, 'Y' c3 from all_objects where rownum &lt;= 1994 -- &gt; comment to avoid format issue ; execute dbms_stats.gather_table_stats(user,'t1',method_opt=&gt;'for all columns size 10 for columns (c2,c3) size 10'); column column_name format a128 new_value m_colname select column_name from user_tab_cols where table_name = 'T1' and column_name not in ('C2','C3') ; set autotrace traceonly explain select /* pre-hack */ * from t1 where c2 = 'N' and c3 = 'N'; set autotrace off declare l_distcnt number default null; l_density number default null; l_nullcnt number default null; l_srec dbms_stats.statrec; l_avgclen number default null; begin dbms_stats.get_column_stats ( ownname =&gt;user, tabname =&gt;'t1', colname =&gt;'&amp;m_colname', distcnt =&gt;l_distcnt, density =&gt;l_density, nullcnt =&gt;l_nullcnt, srec =&gt;l_srec, avgclen =&gt;l_avgclen ); dbms_stats.set_column_stats( ownname =&gt;user, tabname =&gt;'t1', colname =&gt;'&amp;m_colname', distcnt =&gt;l_distcnt, density =&gt;l_density, nullcnt =&gt;l_nullcnt, srec =&gt;l_srec, avgclen =&gt;l_avgclen ); end; / set autotrace traceonly explain select /* post-hack */ * from t1 where c2 = 'N' and c3 = 'N'; set autotrace off </pre> <p>I&#8217;ve created a simple table for the data and collected stats including histograms on the two columns and on the column group. 
I&#8217;ve taken a simple strategy to find the name of the column group (I could have used the function <em><strong>dbms_stats.create_extended_stats()</strong></em> to set an SQL variable to the name of the column group, of course), and then run a little bit of PL/SQL that literally does nothing more than copy the column group&#8217;s stats into memory then write them back to the data dictionary.</p> <p>Here are the &#8220;before&#8221; and &#8220;after&#8221; execution plans that we get from autotrace:</p> <pre class="brush: plain; title: ; notranslate"> BEFORE -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 997 | 3988 | 23 (27)| 00:00:01 | |* 1 | TABLE ACCESS FULL| T1 | 997 | 3988 | 23 (27)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(&quot;C2&quot;='N' AND &quot;C3&quot;='N') AFTER -------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | 4 | 23 (27)| 00:00:01 | |* 1 | TABLE ACCESS FULL| T1 | 1 | 4 | 23 (27)| 00:00:01 | -------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(&quot;C2&quot;='N' AND &quot;C3&quot;='N') </pre> <p>As required &#8211; the estimate for the (&#8216;N&#8217;,&#8217;N&#8217;) rows drops down to (the optimizer&#8217;s best approximation to ) zero.</p> <h3>Footnote:</h3> <p>An alternative strategy (and, I&#8217;d say, a better strategic approach) would have been to create a &#8220;fake&#8221; frequency histogram that included the value (&#8216;N&#8217;,&#8217;N&#8217;) giving it a frequency of 1 &#8211; a method <a href="https://jonathanlewis.wordpress.com/2009/05/28/frequency-histograms/"><em><strong>I&#8217;ve suggested in the past</strong></em></a>  but with the little problem that you need to be able to work out the value to use in the array passed to <em><strong>dbms_stats.set_column_stats()</strong></em> to represent the value for the (&#8216;N&#8217;,&#8217;N&#8217;) combination &#8211; and I&#8217;ve written about that topic <a href="https://jonathanlewis.wordpress.com/2014/05/04/extended-stats-3/"><em><strong>in the past as well</strong></em></a>.</p> <p>You might wonder why the optimizer is programmed to use <em>&#8220;half the least popular&#8221;</em> for predicates references values not in the index. Prior to 12c it&#8217;s easy to make an argument for the algorithm. 
Frequency histograms used to be sampled with a very small sample size, so if you were unlucky a &#8220;slightly less popular&#8221; value could be missed completely in the sample; if you were requesting a value that didn&#8217;t appear in the histogram then (presumably) you knew it should exist in the data, so guessing a cardinality somewhat less than the least popular must have seemed like a good idea.</p> <p>In 12c, of course, you ought to be taking advantage of the <a href="https://jonathanlewis.wordpress.com/2013/07/14/12c-histograms/"><em><strong>&#8220;approximate NDV&#8221; implementation</strong></em></a> for using a 100% sample to generate frequency (and Top-N / Top-Frequency histograms). If you&#8217;ve got a 12c frequency histogram then the absence of a value in the histogram means the data really wasn&#8217;t there so a cardinality estimate of 1 makes more sense. (Of course, you might have allowed Oracle to gather the histogram at the wrong time &#8211; but that&#8217;s a different issue). If you&#8217;ve got a Top-N histogram then the optimizer will behave as if a &#8220;missing&#8221; value is one of those nominally allowed for in the &#8220;low frequency&#8221; bucket and use neither the 1 nor the &#8220;half the least popular&#8221;.</p> <p>So, for 12c and columns with frequency histograms it seems perfectly reasonably to set the fix control to zero &#8211; after getting approval from Oracle support, of course.</p> <p>&nbsp;</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18682 Tue Jul 31 2018 18:05:10 GMT-0400 (EDT) DevSecOps: What All DevOps Should Be https://blog.pythian.com/devsecops-what-all-devops-should-be/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Today the value of DevOps is well understood, at least by IT professionals. In the IT community, DevOps is almost universally accepted as being superior to traditional software development. By breaking down departmental silos, DevOps unifies development and operations teams in the goal of delivering better software products faster and more frequently.</p> <p>DevOps has been a dominant approach for almost ten years. More recently though, we’ve seen a growing interest in DevSecOps, and rightly so. By explicitly adding “security” to the DevOps philosophy and name, DevSecOps incorporates the necessary emphasis on defending against increasingly frequent and dangerous attacks on data security. How important is that emphasis? Important enough for us to suggest that all DevOps should involve at least some of the elements of DevSecOps.</p> <p>To be clear, improved security has always been part of the intention of DevOps. The methodology allows building, testing, and deployment to happen faster and more often, and that alone should be a boon to security. Any approach that supports frequent iteration makes it possible to spot and fix vulnerabilities quickly. And the faster vulnerabilities are caught and corrected, the fewer points of exposure there will be for ransomware, intrusions and data leakage — at least in theory.</p> <p>In practice, though, DevOps’ reliance on velocity and automation can create its own Achilles’ heel when security teams are unable to keep up with the number of releases being shipped. Massive breaches still happen with frightening regularity, and they still happen at organizations large enough to have the wherewithal to defend themselves. This stark reality points to a need to prioritize security from the earliest stages of software development. 
Put another way, this reality is pointing to a need for the principles underlying DevSecOps.</p> <p>Think of DevSecOps as the logical and necessary evolution of DevOps. Like DevOps, DevSecOps combines the functions of an organization’s development and operations teams. On top of that approach, though, DevSecOps makes everyone responsible for security, and it incorporates security right from the start of the coding process (rather than attempting to apply it later, or worse yet, retroactively). Under DevSecOps, security has its own metrics and goals, and they become part of the same development pipeline used by both the development and operation teams. So now, Dev and Ops have to think about more than velocity and uptime. Under DevSecOps, every member of the team is graded on everyone else’s goals, including vulnerabilities detected within systems and code.  This integration of security into the earlier stages of software development is what allows security to scale and keep up with the increased velocity achieved through traditional DevOps practices.</p> <p>DevSecOps is about improved tools, certainly, but it is more noticeably about building a culture that prioritizes security throughout the enterprise. Rather than centralizing control over security matters, DevSecOps empowers business operators with tools and processes that help with security decisions. And, in its purest form, DevSecOps asks security teams to “eat your own dog food” — in other words, to abide by the same security requirements that apply to teams across the organization. When, for example, a security practitioner has to ensure that their security controls are automated and capable of scaling with more frequent releases, he or she will quickly understand the pain points traditionally faced by their peers with regards to security.</p> <p>The DevSecOps culture also steps up the vigilance around exploitable vulnerabilities. While traditional security methods test hypothetical threats as suggested by (usually) a central authority, the DevSecOps practitioner prefers to think like the enemy. Under DevSecOps, systems are routinely exposed to mock attacks that mimic today’s real-world threats. Moreover, the DevSecOps philosophy accepts the reality that some security breaches will inevitably succeed. Thus, it thinks in terms of creating infrastructure and code that can be re-stacked quickly while still providing data security and availability.</p> <p>Much of DevOps’ success has stemmed from treating infrastructure as code. That revolutionary mindset allowed functional testing to “shift left,” or earlier in the development lifecycle, which in turn allowed bugs to be detected and fixed before they could make their way into production. DevSecOps improves on DevOps by ensuring that security gets shifted left, as well. By treating security as code, organizations can improve the defense of their data through automated testing.</p> <p>DevOps demonstrated an unprecedented strength in delivering better quality products faster. 
Under the DevSecOps philosophy, baked-in security is part of quality, and it will be tested just as rigorously as functionality.</p> <p><a href="https://pythian.com/devops-consulting/">Learn how</a> Pythian can help make DevSecOps part of your organization.</p> </div></div> Greg Baker https://blog.pythian.com/?p=104871 Tue Jul 31 2018 13:18:29 GMT-0400 (EDT) Loading data from OSS to Oracle Autonomous Cloud Services with SQL Developer https://www.thatjeffsmith.com/archive/2018/07/loading-data-from-oss-to-oracle-autonomous-cloud-services-with-sql-developer/ <p>Ok that title has a TON of buzz and marketing words in it.</p> <p>But, we have the Oracle Cloud. And available there is a service where we take care of your database for you &#8211; Autonomous. We make sure it&#8217;s up, and that it&#8217;s fast. </p> <p>We have <a href="https://www.oracle.com/database/autonomous-database/feature.html" rel="noopener" target="_blank">1 autonomous cloud service today and will have and second one coming SOON</a>. </p> <p>These services come with an <a href="https://cloud.oracle.com/storage/object-storage/features" rel="noopener" target="_blank">S3 compatible Object Store OSS (Oracle Object Storage, complete with S3 API support)</a> , so you can move data/files around.</p> <p><em>For the new Autonomous Transaction Processing (ATP) Service, this feature in SQL Developer will be available in version 18.3 of SQL Developer</em></p> <p>In SQL Developer version 18.1 and higher, we make it pretty easy to take data files you may have uploaded to your OSS and load that data to new or existing tables in your Autonomous DB Service. </p> <p>We basically make the files available to the database, map the columns just right, and then make calls to DBMS_CLOUD (<a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/dbmscloud-reference.html#GUID-52C9974C-D95E-4E41-AFBD-0FC4065C029A" rel="noopener" target="_blank">Docs</a>) for getting the data in there. DBMS_CLOUD is an Oracle Cloud ONLY package that makes it easy to load and copy data from OSS. </p> <p>We also make it very easy to TEST your scenarios &#8211; find bad records, fix them, and run again if necessary.</p> <p>This is all done with a very familiar interface, the Import Data dialog.</p> <p>If your existing database connection is of type &#8216;Cloud PDB&#8217;, then when you get to the first page, you&#8217;ll see this available in the source data drop down.</p> <div id="attachment_6888" style="width: 955px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load1.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load1.png" alt="" width="945" height="789" class="size-full wp-image-6888" /></a><p class="wp-caption-text">Pick THIS one.</p></div> <p>Then you need to select your proper credentials and tell us which file you want (we do NOT have a OSS file browser TODAY, but we do want one.) 
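The URL in question is the Object Storage object URL for your file, typically something of this general shape (region, namespace, bucket and file name here are placeholders):</p> <pre><code>https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/employees.csv</code></pre> <p>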
So you need to have the URL handy.</p> <p>Paste it in, and hit the preview button.</p> <div id="attachment_6900" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load2-1.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load2-1.png" alt="" width="1024" height="482" class="size-full wp-image-6900" /></a><p class="wp-caption-text">THAT’S RIGHT, WE’RE PULLING THE DATA DOWN FROM OSS TO LET YOU PREVIEW AND SETUP THE FILE LOAD SCENARIO.</p></div> <p>This will work for NEW or Existing tables. For this scenario I&#8217;m going with an Existing table.</p> <p>The next page is almost the same as you&#8217;re used to, but a few important differences:</p> <div id="attachment_6890" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load3.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load3.png" alt="" width="1024" height="789" class="size-full wp-image-6890" /></a><p class="wp-caption-text">If we were building a NEW table, we can tell it to JUST create the external table, or to also load the data over from the external to the new table.</p></div> <p>Once you have your options in a happy place, the rest of the wizard is pretty much the same..until you get to the Test dialog.</p> <h3>This is where it gets FUN</h3> <!-- Easy AdSense V7.43 --> <!-- [midtext: 1 urCount: 1 urMax: 0] --> <div class="ezAdsense adsense adsense-midtext" style="float:left;margin:12px;"><script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script> <!-- 336-rectangle --> <ins class="adsbygoogle" style="display:inline-block;width:336px;height:280px" data-ad-client="ca-pub-1495560488595385" data-ad-slot="5904412551"></ins> <script> (adsbygoogle = window.adsbygoogle || []).push({}); </script></div> <!-- Easy AdSense V7.43 --> <p>Let&#8217;s imagine you are a genius who never makes mistakes. You&#8217;ll get to witness yourself in all your glory when you run the test and see SUCCESS, a populated External Table Data panel and an EMPTY bad file contents panel.</p> <p>So what we&#8217;re trying to achieve here is saving you a LOT of wasted time. We want to make sure the scenario works for say the first 1,000 records before we go to move the ENTIRE file over and process it. If there IS a problem, you can fix it now.</p> <p>The test will simply create the External table and show the results of trying to query it via your load parameters as defined in the previous screens. </p> <div id="attachment_6891" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test2.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test2.png" alt="" width="1024" height="682" class="size-full wp-image-6891" /></a><p class="wp-caption-text">Yes, we&#8217;re basically just making calls to DBMS_CLOUD for you.</p></div> <p>So it looks like it&#8217;s worked. 
Let&#8217;s go preview the data.</p> <div id="attachment_6892" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test3.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test3.png" alt="" width="1024" height="682" class="size-full wp-image-6892" /></a><p class="wp-caption-text">That looks like employees to me.</p></div> <p>And just to make sure, let&#8217;s go peak at the rejected (Bad File Contents) panel.</p> <div id="attachment_6893" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test5.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test5.png" alt="" width="1024" height="682" class="size-full wp-image-6893" /></a><p class="wp-caption-text">Sweet!</p></div> <h3>But Jeff, I&#8217;m not perfect, I made a boo-boo.</h3> <p>No worries, let&#8217;s see what it looks like when there is problem with the definition of the external table or with the data, or both.</p> <div id="attachment_6894" style="width: 1210px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test-bad1.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test-bad1.png" alt="" width="1200" height="800" class="size-full wp-image-6894" /></a><p class="wp-caption-text">Oh, it&#8217;s not liking my date format, or something?</p></div> <p>Man, it sure would be nice to SEE what that rejected row looks like.</p> <p>Just click on the Bad File Contents!</p> <div id="attachment_6895" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test-bad2.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-test-bad2.png" alt="" width="1024" height="682" class="size-full wp-image-6895" /></a><p class="wp-caption-text">Looks like I have my column mapping reversed for employee_id and hire_date, oops.</p></div> <p>So instead of starting over, just go back in the wizard, re-map the columns, test again.</p> <p>And THEN click &#8216;Finish&#8217; to actually run the full scenario. And then when were done, we&#8217;ll have a log of the complete scenario and we can browse the table.</p> <h3>The Table!</h3> <p>We open the log of the scenario for you, and then you can manually browse the table like you&#8217;re used to. Or get to doing your cool reports, graphs, and SQL stuff. 
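</p> <p>If you&#8217;d rather script the same load yourself, or just want a feel for what the wizard is doing on your behalf, the calls are plain DBMS_CLOUD. The block below is only a sketch: the credential name, object URL, table name and format options are placeholders rather than values captured from this walkthrough, so adjust them for your own tenancy.</p> <pre><code>-- Sketch only: credential, URL, table and format values are placeholders.
-- 1. Store an OSS credential for the database to use.
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name =&gt; 'MY_OSS_CRED',
    username        =&gt; 'oss_user@example.com',
    password        =&gt; 'my_auth_token'
  );
END;
/

-- 2. Copy a CSV file from OSS into an existing table.
BEGIN
  DBMS_CLOUD.COPY_DATA(
    table_name      =&gt; 'EMPLOYEES',
    credential_name =&gt; 'MY_OSS_CRED',
    file_uri_list   =&gt; 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mynamespace/b/mybucket/o/employees.csv',
    format          =&gt; json_object('type' value 'csv', 'skipheaders' value '1')
  );
END;
/</code></pre> <p>The COPY_DATA run is also what populates the log and bad-file contents you&#8217;ll see mentioned in the next section.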
</p> <div id="attachment_6896" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-success.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/cloud-load-success.png" alt="" width="1024" height="553" class="size-full wp-image-6896" /></a><p class="wp-caption-text">When the wizard is DONE, you&#8217;ll have the log of the entire operation, and you can then go browse your table.</p></div> <h3>What are these tables?</h3> <div id="attachment_6897" style="width: 470px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/copy-bad-log.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/copy-bad-log.png" alt="" width="460" height="720" class="size-full wp-image-6897" /></a><p class="wp-caption-text">These seem to keep popping up&#8230;</p></div> <p>You can nuke/drop these as needed, but they&#8217;re basically just a collection of CLOBs that show the contents of the logs and bad file from your SQL Dev DBMS_CLOUD runs.</p> <p><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/copy-bad-log2.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/copy-bad-log2.png" alt="" width="649" height="688" class="aligncenter size-full wp-image-6898" /></a></p> <h3>What&#8217;s Coming Next?</h3> <p>We&#8217;re enhancing the Cart so you create deployments of multiples files and tables in a single scenario. And then run those as often as necessary.</p> <p>We&#8217;re also working on SQL Developer Web so that there is a data loading facility there so you can get rocking and rolling right away without even having to pull up the desktop tool. </p> <p>More news here later this year.</p> thatjeffsmith https://www.thatjeffsmith.com/?p=6887 Tue Jul 31 2018 11:48:44 GMT-0400 (EDT) Choosing Best Index for your MongoDB query https://blog.pythian.com/choosing-best-index-mongodb-query/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Indexing plays a quintessential role in database query performance and MongoDB is no exception. Choosing the best index for a query will improve its performance, while a bad one could lead to huge execution times and high memory utilization.</p> <p dir="ltr">In this article, I will walk you through the process of finding the right index for a common tuning scenario. Consider an example collection with 81K documents similar to the one below:</p> <div style="width: 100%; line-height: 1em; border: 1px solid #DEBB07; line: 1;"> <pre><code><sup>MongoDB Enterprise repl:PRIMARY&gt; db.city_inspections.findOne() { "_id" : ObjectId("56d61033a378eccde8a83550"), "id" : "10057-2015-ENFO", "certificate_number" : 6007104, "business_name" : "LD BUSINESS SOLUTIONS", "date" : "Feb 25 2015", "result" : "Violation Issued", "sector" : "Tax Preparers - 891", "address" : { "city" : "NEW YORK", "zip" : 10030, "street" : "FREDERICK DOUGLASS BLVD", "number" : 2655 } } </sup></code></pre> </div> <p>And the following query:<code></code></p> <p><code><sup>db.city_inspections.find({certificate_number:{$gt:6000000,$lte:9400000},"address.zip":10030}).sort({date:-1})</sup><br /> </code></p> <p>The query is looking for all documents where certificate number is between 6M to 9.4M and the zip code is equal to 10030. The result will be sorted by date, in descending order. 
Let’s now observe how it’s handled by MongoDB:</p> <div style="width: 100%; height: 500px; line-height: 1em; overflow: scroll; border: 1px solid #DEBB07;"> <pre><code>MongoDB Enterprise &gt; explain.find({certificate_number:{$gt:6000000,$lte:9400000},"address.zip":10030}).sort({date:-1}) { "queryPlanner" : { "plannerVersion" : 1, "namespace" : "blog.city_inspections", "indexFilterSet" : false, "parsedQuery" : { "$and" : [ { "address.zip" : { "$eq" : 10030 } }, { "certificate_number" : { "$lte" : 9400000 } }, { "certificate_number" : { "$gt" : 6007104 } } ] }, <span style="color: #2969b0;"> "winningPlan" : { "stage" : "SORT", "sortPattern" : { "date" : -1 }, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "inputStage" : { "stage" : "COLLSCAN", "filter" : { "$and" : [ { "address.zip" : { "$eq" : 10030 } }, { "certificate_number" : { "$lte" : 9400000 } }, { "certificate_number" : { "$gt" : 6007104 } } ] }, "direction" : "forward"</span> } } }, "rejectedPlans" : [ ] }, "executionStats" : { "executionSuccess" : true, <span style="color: #2969b0;">"nReturned" : 82, "executionTimeMillis" : 149, "totalKeysExamined" : 0, "totalDocsExamined" : 81047,</span> "executionStages" : { "stage" : "SORT", "nReturned" : 82, "executionTimeMillisEstimate" : 20, "works" : 81133, "advanced" : 82, "needTime" : 81050, "needYield" : 0, "saveState" : 633, "restoreState" : 633, "isEOF" : 1, "invalidates" : 0, "sortPattern" : { "date" : -1 }, "memUsage" : 24329, "memLimit" : 33554432, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "nReturned" : 82, "executionTimeMillisEstimate" : 20, "works" : 81050, "advanced" : 82, "needTime" : 80967, "needYield" : 0, "saveState" : 633, "restoreState" : 633, "isEOF" : 1, "invalidates" : 0, "inputStage" : { "stage" : "COLLSCAN", "filter" : { "$and" : [ { "address.zip" : { "$eq" : 10030 } }, { "certificate_number" : { "$lte" : 9400000 } }, { "certificate_number" : { "$gt" : 6007104 } } ] }, "nReturned" : 82, "executionTimeMillisEstimate" : 120, "works" : 81049, "advanced" : 82, "needTime" : 80966, "needYield" : 0, "saveState" : 633, "restoreState" : 633, "isEOF" : 1, "invalidates" : 0, "direction" : "forward", "docsExamined" : 81047 } } } }, "serverInfo" : { "host" : "m103", "port" : 27000, "version" : "3.6.5-rc0", "gitVersion" : "a20ecd3e3a174162052ff99913bc2ca9a839d618" }, "ok" : 1 } MongoDB Enterprise &gt; </code></pre> </div> <p dir="ltr">From the execution stats, we can observe that MongoDB has performed a collection scan (totalDocsExamined:81047 and totalKeysScanned:0) to return 82 documents (nReturned) in 149 ms. The time taken might look small here but actually, the ratio between totalDocsExamined:totalKeysExamine:nReturned is what we are interested in. 
We might decide to create an index as per the keys’ order in the query:</p> <pre> {certificate_number:1,"address.zip":1,date:-1}</pre> <div style="width: 100%; line-height: 1em; border: 1px solid #DEBB07;"> <pre><code>MongoDB Enterprise &gt; db.city_inspections.createIndex({certificate_number:1,"address.zip":1,</code> <code>date:-1}) { "createdCollectionAutomatically" : false, "numIndexesBefore" : 1, "numIndexesAfter" : 2, "ok" : 1 } </code></pre> </div> <p>After creating the index and running the query again, we get the following execution plan:</p> <div style="width: 100%; height: 500px; line-height: 1em; overflow: scroll; border: 1px solid #DEBB07;"> <pre><code>MongoDB Enterprise &gt; explain.find({certificate_number:{$gt:6007104,$lte:9400000},</code> <code>"address.zip":10030}).sort({date:-1}) { "queryPlanner" : { "plannerVersion" : 1, "namespace" : "blog.city_inspections", "indexFilterSet" : false, "parsedQuery" : { "$and" : [ { "address.zip" : { "$eq" : 10030 } }, { "certificate_number" : { "$lte" : 9400000 } }, { "certificate_number" : { "$gt" : 6007104 } } ] }, <span style="color: #2969b0;">"winningPlan" : { "stage" : "SORT", "sortPattern" : { "date" : -1 }, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "inputStage" : { "stage" : "FETCH", "inputStage" : { "stage" : "IXSCAN", "keyPattern" : { "certificate_number" : 1, "address.zip" : 1, "date" : -1 }, "indexName" : "certificate_number_1_address.zip_1_date_-1", "isMultiKey" : false, "multiKeyPaths" : { "certificate_number" : [ ], "address.zip" : [ ], "date" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "certificate_number" : [ "(6007104.0, 9400000.0]" ], "address.zip" : [ "[10030.0, 10030.0]" ], "date" : [ "[MaxKey, MinKey]"</span> ] } } } } }, "rejectedPlans" : [ ] }, "executionStats" : { "executionSuccess" : true, <span style="color: #2969b0;">"nReturned" : 82, "executionTimeMillis" : 174, "totalKeysExamined" : 40270, "totalDocsExamined" : 82,</span> "executionStages" : { "stage" : "SORT", "nReturned" : 82, "executionTimeMillisEstimate" : 171, "works" : 40354, "advanced" : 82, "needTime" : 40271, "needYield" : 0, "saveState" : 316, "restoreState" : 316, "isEOF" : 1, "invalidates" : 0, "sortPattern" : { "date" : -1 }, "memUsage" : 24329, "memLimit" : 33554432, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "nReturned" : 82, "executionTimeMillisEstimate" : 171, "works" : 40271, "advanced" : 82, "needTime" : 40188, "needYield" : 0, "saveState" : 316, "restoreState" : 316, "isEOF" : 1, "invalidates" : 0, "inputStage" : { "stage" : "FETCH", "nReturned" : 82, "executionTimeMillisEstimate" : 171, "works" : 40270, "advanced" : 82, "needTime" : 40187, "needYield" : 0, "saveState" : 316, "restoreState" : 316, "isEOF" : 1, "invalidates" : 0, "docsExamined" : 82, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 82, "executionTimeMillisEstimate" : 171, "works" : 40270, "advanced" : 82, "needTime" : 40187, "needYield" : 0, "saveState" : 316, "restoreState" : 316, "isEOF" : 1, "invalidates" : 0, "keyPattern" : { "certificate_number" : 1, "address.zip" : 1, "date" : -1 }, "indexName" : "certificate_number_1_address.zip_1_date_-1", "isMultiKey" : false, "multiKeyPaths" : { "certificate_number" : [ ], "address.zip" : [ ], "date" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "certificate_number" : [ "(6007104.0, 9400000.0]" ], "address.zip" : [ 
"[10030.0, 10030.0]" ], "date" : [ "[MaxKey, MinKey]" ] }, "keysExamined" : 40270, "seeks" : 40188, "dupsTested" : 0, "dupsDropped" : 0, "seenInvalidated" : 0 } } } } }, "serverInfo" : { "host" : "m103", "port" : 27000, "version" : "3.6.5-rc0", "gitVersion" : "a20ecd3e3a174162052ff99913bc2ca9a839d618" }, "ok" : 1 }</code></pre> </div> <p>Indexing should reduce execution time, but in this case, execution time has increased by 10%! totalDocsExamined:nReturned ratio is now 1:1 which is optimal, although 40k+ index keys were examined this time.</p> <p>This is because the engine is first fetching the documents by certificate number to then find the ones matching the zip code. There are 40K records with a certificate number between 6M to 9.4M but only 82 of them have a zip code equal to 10030. Sorting is done at the last stage.</p> <p>Even though it’s a covering index, it is not very efficient, and we still didn’t achieve the best ratio between totalKeysExamined:totalDocsExamined:nReturn possible (1:1:1).</p> <p>Let&#8217;s try making the equality key the first index key, and certificate_number (range), the second: {&#8220;address.zip&#8221;:1,certificate_number:1,date:-1}</p> <p>Let’s create the above index and dive into the plan:</p> <div style="width: 100%; line-height: 1em; border: 1px solid #DEBB07;"> <pre><code>MongoDB Enterprise &gt; db.city_inspections.createIndex({"address.zip":1,certificate_number:1,</code> <code>date:-1}) { "createdCollectionAutomatically" : false, "numIndexesBefore" : 2, "numIndexesAfter" : 3, "ok" : 1 } </code></pre> </div> <p dir="ltr">After creating the index and running the query again, we get the following execution plan:</p> <div style="width: 100%; height: 500px; line-height: 1em; overflow: scroll; padding: 5px; border: 1px solid #DEBB07;"> <pre><code>MongoDB Enterprise &gt; explain.find({certificate_number:{$gt:6007104,$lte:9400000},"address.zip":10030}).sort({date:-1}) { "queryPlanner" : { "plannerVersion" : 1, "namespace" : "blog.city_inspections", "indexFilterSet" : false, "parsedQuery" : { "$and" : [ { "address.zip" : { "$eq" : 10030 } }, { "certificate_number" : { "$lte" : 9400000 } }, { "certificate_number" : { "$gt" : 6007104 } } ] }, <span style="color: #2969b0;"> "winningPlan" : { "stage" : "SORT", "sortPattern" : { "date" : -1 }, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "inputStage" : { "stage" : "FETCH", "inputStage" : { "stage" : "IXSCAN", "keyPattern" : { "address.zip" : 1, "certificate_number" : 1, "date" : -1 }, "indexName" : "address.zip_1_certificate_number_1_date_-1", "isMultiKey" : false, "multiKeyPaths" : { "address.zip" : [ ], "certificate_number" : [ ], "date" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "address.zip" : [ "[10030.0, 10030.0]" ], "certificate_number" : [ "(6007104.0, 9400000.0]" ], "date" : [ "[MaxKey, MinKey]"</span> ] } } } } }, "rejectedPlans" : [ { "stage" : "SORT", "sortPattern" : { "date" : -1 }, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "inputStage" : { "stage" : "FETCH", "inputStage" : { "stage" : "IXSCAN", "keyPattern" : { "certificate_number" : 1, "address.zip" : 1, "date" : -1 }, "indexName" : "certificate_number_1_address.zip_1_date_-1", "isMultiKey" : false, "multiKeyPaths" : { "certificate_number" : [ ], "address.zip" : [ ], "date" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "certificate_number" : [ "(6007104.0, 
9400000.0]" ], "address.zip" : [ "[10030.0, 10030.0]" ], "date" : [ "[MaxKey, MinKey]" ] } } } } } ] }, "executionStats" : { "executionSuccess" : true, <span style="color: #2969b0;">"nReturned" : 82, "executionTimeMillis" : 2, "totalKeysExamined" : 82, "totalDocsExamined" : 82,</span> "executionStages" : { "stage" : "SORT", "nReturned" : 82, "executionTimeMillisEstimate" : 0, "works" : 168, "advanced" : 82, "needTime" : 84, "needYield" : 0, "saveState" : 3, "restoreState" : 3, "isEOF" : 1, "invalidates" : 0, "sortPattern" : { "date" : -1 }, "memUsage" : 24329, "memLimit" : 33554432, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "nReturned" : 82, "executionTimeMillisEstimate" : 0, "works" : 84, "advanced" : 82, "needTime" : 1, "needYield" : 0, "saveState" : 3, "restoreState" : 3, "isEOF" : 1, "invalidates" : 0, "inputStage" : { "stage" : "FETCH", "nReturned" : 82, "executionTimeMillisEstimate" : 0, "works" : 83, "advanced" : 82, "needTime" : 0, "needYield" : 0, "saveState" : 3, "restoreState" : 3, "isEOF" : 1, "invalidates" : 0, "docsExamined" : 82, "alreadyHasObj" : 0, "inputStage" : { "stage" : "IXSCAN", "nReturned" : 82, "executionTimeMillisEstimate" : 0, "works" : 83, "advanced" : 82, "needTime" : 0, "needYield" : 0, "saveState" : 3, "restoreState" : 3, "isEOF" : 1, "invalidates" : 0, "keyPattern" : { "address.zip" : 1, "certificate_number" : 1, "date" : -1 }, "indexName" : "address.zip_1_certificate_number_1_date_-1", "isMultiKey" : false, "multiKeyPaths" : { "address.zip" : [ ], "certificate_number" : [ ], "date" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "address.zip" : [ "[10030.0, 10030.0]" ], "certificate_number" : [ "(6007104.0, 9400000.0]" ], "date" : [ "[MaxKey, MinKey]" ] }, "keysExamined" : 82, "seeks" : 1, "dupsTested" : 0, "dupsDropped" : 0, "seenInvalidated" : 0 } } } } }, "serverInfo" : { "host" : "m103", "port" : 27000, "version" : "3.6.5-rc0", "gitVersion" : "a20ecd3e3a174162052ff99913bc2ca9a839d618" }, "ok" : 1 } </code></pre> </div> <p>The above Index seems to work better! Now we can see that totalKeysScanned: totalDocsExamined:nRetuned is 1:1:1 as we expected. We might conclude that this is the best index for this query, but let&#8217;s see if we can do something to reduce the amount of sorting required. 
Consider the following index:</p> <pre> {"address.zip":1,date:-1,certificate_number:1}</pre> <p dir="ltr">And the execution plan with the above index would be:</p> <div style="width: 100%; line-height: 1em; border: 1px solid #DEBB07;"> <pre><code>MongoDB Enterprise &gt; db.city_inspections.createIndex({"address.zip":1,date:-1,certificate_number:1}) { "createdCollectionAutomatically" : false, "numIndexesBefore" : 3, "numIndexesAfter" : 4, "ok" : 1 } </code></pre> </div> <p dir="ltr">After creating the index, let&#8217;s see the execution plan:</p> <div style="width: 100%; height: 500px; line-height: 1em; overflow: scroll; padding: 5px; border: 1px solid #DEBB07;"> <pre><code> MongoDB Enterprise &gt; explain.find({certificate_number:{$gt:6007104,$lte:9400000},"address.zip":10030}).sort({date:-1}) { "queryPlanner" : { "plannerVersion" : 1, "namespace" : "blog.city_inspections", "indexFilterSet" : false, "parsedQuery" : { "$and" : [ { "address.zip" : { "$eq" : 10030 } }, { "certificate_number" : { "$lte" : 9400000 } }, { "certificate_number" : { "$gt" : 6007104 } } ] }, <span style="color: #2969b0;"> "winningPlan" : { "stage" : "FETCH", "inputStage" : { "stage" : "IXSCAN", "keyPattern" : { "address.zip" : 1, "date" : -1, "certificate_number" : 1 }, "indexName" : "address.zip_1_date_-1_certificate_number_1", "isMultiKey" : false, "multiKeyPaths" : { "address.zip" : [ ], "date" : [ ], "certificate_number" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "address.zip" : [ "[10030.0, 10030.0]" ], "date" : [ "[MaxKey, MinKey]" ], "certificate_number" : [ "(6007104.0, 9400000.0]" ] } } }, "rejectedPlans" : [ { "stage" : "SORT", "sortPattern" : { "date" : -1 }, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "inputStage" : { "stage" : "FETCH", "inputStage" : { "stage" : "IXSCAN", "keyPattern" : { "certificate_number" : 1, "address.zip" : 1, "date" : -1 }, "indexName" : "certificate_number_1_address.zip_1_date_-1", "isMultiKey" : false, "multiKeyPaths" : { "certificate_number" : [ ], "address.zip" : [ ], "date" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "certificate_number" : [ "(6007104.0, 9400000.0]" ], "address.zip" : [ "[10030.0, 10030.0]" ], "date" : [ "[MaxKey, MinKey]"</span> ] } } } } }, { "stage" : "SORT", "sortPattern" : { "date" : -1 }, "inputStage" : { "stage" : "SORT_KEY_GENERATOR", "inputStage" : { "stage" : "FETCH", "inputStage" : { "stage" : "IXSCAN", "keyPattern" : { "address.zip" : 1, "certificate_number" : 1, "date" : -1 }, "indexName" : "address.zip_1_certificate_number_1_date_-1", "isMultiKey" : false, "multiKeyPaths" : { "address.zip" : [ ], "certificate_number" : [ ], "date" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "address.zip" : [ "[10030.0, 10030.0]" ], "certificate_number" : [ "(6007104.0, 9400000.0]" ], "date" : [ "[MaxKey, MinKey]" ] } } } } } ] }, "executionStats" : { "executionSuccess" : true, <span style="color: #2969b0;"> "nReturned" : 82, "executionTimeMillis" : 3, "totalKeysExamined" : 129, "totalDocsExamined" : 82,</span> "executionStages" : { "stage" : "FETCH", "nReturned" : 82, "executionTimeMillisEstimate" : 0, "works" : 130, "advanced" : 82, "needTime" : 46, "needYield" : 0, "saveState" : 3, "restoreState" : 3, "isEOF" : 1, "invalidates" : 0, "docsExamined" : 82, "alreadyHasObj" : 0, "inputStage" : { 
"stage" : "IXSCAN", "nReturned" : 82, "executionTimeMillisEstimate" : 0, "works" : 129, "advanced" : 82, "needTime" : 46, "needYield" : 0, "saveState" : 3, "restoreState" : 3, "isEOF" : 1, "invalidates" : 0, "keyPattern" : { "address.zip" : 1, "date" : -1, "certificate_number" : 1 }, "indexName" : "address.zip_1_date_-1_certificate_number_1", "isMultiKey" : false, "multiKeyPaths" : { "address.zip" : [ ], "date" : [ ], "certificate_number" : [ ] }, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "forward", "indexBounds" : { "address.zip" : [ "[10030.0, 10030.0]" ], "date" : [ "[MaxKey, MinKey]" ], "certificate_number" : [ "(6007104.0, 9400000.0]" ] }, "keysExamined" : 129, "seeks" : 47, "dupsTested" : 0, "dupsDropped" : 0, "seenInvalidated" : 0 } } }, "serverInfo" : { "host" : "m103", "port" : 27000, "version" : "3.6.5-rc0", "gitVersion" : "a20ecd3e3a174162052ff99913bc2ca9a839d618" }, "ok" : 1 } MongoDB Enterprise &gt; </code></pre> </div> <p>&nbsp;</p> <p dir="ltr">From the execution stats, we can see that totalDocsExamined:totalKeysExamined:nReturned is not exactly 1:1:1 but, as per the winning plan section, this index eliminated the sorting stage completely and the execution time went from 149 to 3 milliseconds.</p> <h2 dir="ltr">Conclusion</h2> <p dir="ltr">This query tuning approach follows the Equality-Sort-Range rule, where you first specify the key affected by an equality condition, then the one by which the result is sorted, and finally the key for which a range of values is requested. Equality-Sort-Range is a rule of thumb and works well on scenarios similar to the one described above.</p> <p dir="ltr">As with any rule of thumb, there are scenarios where it won&#8217;t apply. Specifically, when there is a small number of values matching the range condition for each filtered key. In this case, it is better to filter by range first, then sort a smaller number of results.</p> <p>&nbsp;</p> <p>&nbsp;</p> </div></div> Darshan Jayarama https://blog.pythian.com/?p=104584 Tue Jul 31 2018 09:41:42 GMT-0400 (EDT) LEAP#406 CH340G USB to Serial on a Breadboard https://blog.tardate.com/2018/07/leap406-ch340g-usb-to-serial-on-a-breadboard.html <p>The CH340G is a USB to UART Interface chip. It is often used as a cheap alternative to more established brands. I have some CH340G chips, and with only a few extra components one can build a perfectly serviceable USB to TTL-level serial on a breadboard. 
I test it out by using it to program an ATmega328 on a breadboard over the serial link.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Electronics101/SerialInterface/UsbUartCH340G">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Electronics101/SerialInterface/UsbUartCH340G"><img src="https://leap.tardate.com/Electronics101/SerialInterface/UsbUartCH340G/assets/UsbUartCH340G_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/07/leap406-ch340g-usb-to-serial-on-a-breadboard.html Mon Jul 30 2018 15:05:11 GMT-0400 (EDT) Text index usage within MongoDB https://blog.pythian.com/mongodb-text-search/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><div style="width: 610px" class="wp-caption aligncenter"><img class="size-full" src="https://cms-assets.tutsplus.com/uploads/users/48/posts/24835/image/mongodb-text.png" alt="MongoDB text" width="600" height="222" /><p class="wp-caption-text">MongoDB full text</p></div> <p>Recently a client came to me asking “How do we verify if a full text search index is being used on MongoDB?” The <em>db.showIndexes()</em> command shows an index on a text field, but <em>explain()</em> shows COLLSCAN and the query is really slow (More about <em>explain()</em> <a href="https://docs.mongodb.com/manual/reference/explain-results/" target="_blank" rel="noopener">here</a>).</p> <p>Since it was an interesting case, I decided to write this blog post, describing the use of text indexes within MongoDB.</p> <p>First, let’s see how to create a text index. The command below will create one for the <em>data.entry_text</em> key in the <em>entries</em> collection:</p> <pre lang="javascript">db.entries.createIndex( { "data.entry_text": "text" } ) </pre> <p>If we check the index definition, the output would look like this:</p> <pre lang="javascript">db.entries.getIndexes() . . . . . . [ { "v": 1, "key": { "_fts": "text", "_ftsx": 1 }, "name": "data.entry_text_text", "ns": "database.entries", "background": false, "weights": { "data.entry_text": 1 }, "default_language": "english", "language_override": "language", "textIndexVersion": 3 } ] . . . . . . </pre> <p>We can see the index is of type “text”, created on the namespace <em>database.entries</em> and only for the field <em>data.entry_text</em>. The index version is &#8220;textIndexVersion&#8221;: 3 and that is the default version since MongoDB 3.2. 
More information about the changes introduced in text index version 3 can be found <a href="https://docs.mongodb.com/manual/core/index-text/#create-text-index" target="_blank" rel="noopener">here</a>.</p> <p>Going back to my customer’s issue, they were testing with a query using a search pattern similar to the one below:</p> <pre lang="javascript">db.entries.find({"data.entry_text": /cats/}) </pre> <p>If we look at the explain plan for this query, we can see that no index is being used and the query is doing a full collection scan.</p> <pre lang="javascript">{ "queryPlanner": { "plannerVersion": 1, "namespace": "database.entries", "indexFilterSet": false, "parsedQuery": { "data.entry_text": { "$regex": "cats" } }, "winningPlan": { "stage": "COLLSCAN", "filter": { "data.entry_text": { "$regex": "cats" } }, "direction": "forward" }, "rejectedPlans": [ ] }, "executionStats": { "executionSuccess": true, "nReturned": 20, "executionTimeMillis": 1795, "totalKeysExamined": 0, "totalDocsExamined": 133414, "executionStages": { "stage": "COLLSCAN", "filter": { "data.entry_text": { "$regex": "cats" } }, "nReturned": 20, "executionTimeMillisEstimate": 1799, "works": 133416, "advanced": 20, "needTime": 133395, "needYield": 0, "saveState": 1098, "restoreState": 1098, "isEOF": 1, "invalidates": 0, "direction": "forward", "docsExamined": 133414 }, "allPlansExecution": [ ] }, "serverInfo": { "host": "sanitized", "port": 27017, "version": "3.4.10", "gitVersion": "078f28920cb24de0dd479b5ea6c66c644f6326e9" }, "ok": 1 } </pre> <p>So, what is wrong with the index, and why is it not picked up by the optimizer? If you have worked with text search in MongoDB before, you have probably spotted the problem already. First, a text search requires the $text operator to tell the server that we want to perform this type of query. Furthermore, the query uses regex syntax (/ /), which is not treated as a full text search.</p> <pre lang="javascript">db.entries.find({"data.entry_text": /cats/}) // Regex search </pre> <p>Here is what the text search should look like:</p> <pre lang="javascript">db.entries.find({$text : {$search: "cats"}}) // Text search </pre> <p>Now, if we run explain on the query and check the plan, the winningPlan key shows that the FTS index is being used for the query. indexName is our index &#8220;data.entry_text_text&#8221; as expected.</p> <pre lang="javascript">db.entries.find({$text : {$search: "cats"}}).explain() . . . . . .
{ "queryPlanner" : { "plannerVersion" : 1, "namespace" : "mtkiller.original-mt-entries", "indexFilterSet" : false, "parsedQuery" : { "$text" : { "$search" : "cats", "$language" : "english", "$caseSensitive" : false, "$diacriticSensitive" : false } }, "winningPlan" : { "stage" : "TEXT", "indexPrefix" : { }, "indexName" : "data.entry_text_text", "parsedTextQuery" : { "terms" : [ "cat" ], "negatedTerms" : [ ], "phrases" : [ ], "negatedPhrases" : [ ] }, "textIndexVersion" : 3, "inputStage" : { "stage" : "TEXT_MATCH", "inputStage" : { "stage" : "TEXT_OR", "inputStage" : { "stage" : "IXSCAN", "keyPattern" : { "_fts" : "text", "_ftsx" : 1 }, "indexName" : "data.entry_text_text", "isMultiKey" : true, "isUnique" : false, "isSparse" : false, "isPartial" : false, "indexVersion" : 2, "direction" : "backward", "indexBounds" : { } } } } }, "rejectedPlans" : [ ] }, "serverInfo" : { "host" : "sanitized", "port" : 37017, "version" : "3.4.15", "gitVersion" : "52e5b5fbaa3a2a5b1a217f5e647b5061817475f9" }, "ok" : 1 } </pre> <h3>Conclusion</h3> <p>Regular expressions utilize B+ tree indexes and work well for search patterns that match the regular expressions against the values in the index. Further optimization can occur if the regular expression is a “prefix expression,” which means that all potential matches start with the same string. However, text search on any field whose value is a string or an array of string elements requires text index. Both regex and text search have their own operators and syntax, so the right ones should be used in each case for the optimizer to choose the expected index.</p> </div></div> Igor Donchovski https://blog.pythian.com/?p=104829 Mon Jul 30 2018 13:24:28 GMT-0400 (EDT) How to Manage Multiple MySQL Binary Installations with SYSTEMD https://blog.pythian.com/manage-multiple-mysql-binary-installations-with-systemd/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>This blog will go into how to manage multiple MySQL binary installations with SYSTEMD using the systemctl command.  With package installations of MySQL using YUM or APT, it&#8217;s quick and easy to manage your server&#8217;s state by executing systemctl commands to stop, start, restart, and status.  But what do you do when you want to install MySQL using the binary installation with a single or with multiple MySQL instances? You can still use SYSTEMD to easily manage the MySQL instances. All commands and testing have been done on Debian, and some details may change in other distro&#8217;s.</p> <h1>MySQL preparation</h1> <p>These are the steps to set up MySQL with multiple instances. If you currently have a MySQL server package installation using YUM or APT, it will need to be removed first. Make sure you keep your client. I also had to install some base packages for MySQL on Debian</p> <pre lang="bash">apt install libaio1 libaio-dev numactl </pre> <h3>Download MySQL binary installation</h3> <p>Download the compressed tar file binary installation and extract to /usr/local, and create a soft link for mysql to the extracted binaries.</p> <p>Example :</p> <pre lang="bash">wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz tar zxvf mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz -C /usr/local ln -s /usr/local/mysql-5.7.22-linux-glibc2.12-x86_64/ /usr/local/mysql Example result root@binary:/usr/local# ls -al total 44 drwxrwsr-x 11 root staff 4096 Jun 19 17:53 . drwxr-xr-x 10 root root 4096 Apr 17 18:09 .. 
drwxrwsr-x 2 root staff 4096 Apr 17 18:09 bin drwxrwsr-x 2 root staff 4096 Apr 17 18:09 etc drwxrwsr-x 2 root staff 4096 Apr 17 18:09 games drwxrwsr-x 2 root staff 4096 Apr 17 18:09 include drwxrwsr-x 4 root staff 4096 Apr 17 18:22 lib lrwxrwxrwx 1 root staff 9 Apr 17 18:09 man -&gt; share/man lrwxrwxrwx 1 root staff 47 Jun 19 17:53 mysql -&gt; /usr/local/mysql-5.7.22-linux-glibc2.12-x86_64/ drwxr-sr-x 9 root staff 4096 Jun 19 17:52 mysql-5.7.22-linux-glibc2.12-x86_64 drwxrwsr-x 2 root staff 4096 Apr 17 18:09 sbin drwxrwsr-x 7 root staff 4096 Apr 17 18:22 share drwxrwsr-x 2 root staff 4096 Apr 17 18:09 src </pre> <h3>Export path and aliases</h3> <p>Create an export of the MySQL path and aliases to log in to the MySQL instances using pre-made client config files. The password doesn&#8217;t matter right now as it will get updated in a couple of steps. Update the socket for each config file so they are unique because this needs to be different for each MySQL instance. Reboot your server to ensure that the configuration is loaded during boot time correctly. Run &#8220;echo $PATH&#8221; after reboot and validate that the new path is configured to include /usr/local/mysql:/usr/local/mysql/bin.</p> <p>Example :</p> <pre lang="bash">echo "export PATH=$PATH:/usr/local/mysql:/usr/local/mysql/bin" &gt;&gt; /etc/profile.d/mysql.sh echo "alias mysql1='mysql --defaults-file=/etc/instance1_client.cnf'" &gt;&gt; /etc/profile.d/mysql.sh echo "alias mysql2='mysql --defaults-file=/etc/instance2_client.cnf'" &gt;&gt; /etc/profile.d/mysql.sh </pre> <p>Example client config : /etc/instance1_client.cnf</p> <pre lang="bash">[client] user=root password='mysqlpass' socket=/var/run/mysql/mysqld_instance1.sock </pre> <p>Example path :</p> <pre lang="bash">root@binary:~# echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/mysql:/usr/local/mysql/bin </pre> <h3>Create user/group, paths, and MySQL permissions</h3> <p>Next, create the user and group that will be used by the MySQL services. Then create the paths and set the proper permissions.</p> <p>Example :</p> <pre lang="bash">groupadd mysql useradd -r -g mysql -s /bin/false mysql mkdir -p /mysql/data/instance1 mkdir -p /mysql/data/instance2 mkdir -p /mysql/logs/instance1 mkdir -p /mysql/logs/instance2 mkdir /var/run/mysql/ chown mysql:mysql /var/run/mysql chown -R mysql:mysql /mysql </pre> <h3>Create MySQL configuration for each instance</h3> <p>Below is an example of the first instance I placed in /etc/my.instance1.cnf. My naming convention is instanceX. As an example, my first instance is instance1, and my second instance is instance2. I then place that naming convention in the configuration filename my.instance1.cnf. I could have done my.cnf.instance1 or instance1.my.cnf.</p> <p>Having the naming convention in the configuration files is very important as it will come into effect with the configuration of SYSTEMD. I also set my naming convention in the PID file because this will also be used by configuration of SYSTEMD. 
Make sure the socket you have configured in your configuration files matches what was in your client configuration files in the previous step.</p> <p>Example :</p> <pre lang="bash">[mysqld] ## Server basedir = /usr/local/mysql datadir = /mysql/data/instance1 binlog_format = MIXED log_slave_updates = 1 log-bin = /mysql/logs/instance1/mysql-bin relay-log = /mysql/logs/instance1/relay-bin log_error = /mysql/logs/instance1/mysql_error.log slow_query_log_file = /mysql/logs/instance1/slow_query.log socket = /var/run/mysql/mysqld_instance1.sock pid-file = /var/run/mysql/mysqld_instance1.pid port = 3306 user = mysql server-id = 1 </pre> <h3>Initialize MySQL</h3> <p>Initialize your database and get the temporary password for the database from the error log file so you can log in and update the passwords after the MySQL instances are started. Next, update the MySQL client configuration files (/etc/instance1_client.cnf and /etc/instance2_client.cnf in my example) with the temporary password. This will make it simpler to log in and change the initial password. Repeat this for each instance.</p> <p>Example :</p> <pre lang="bash">root@binary:/usr/local# /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --initialize Database files are present in the data directory root@binary:/usr/local# ls -al /mysql/data/instance1 total 110628 drwxr-xr-x 5 mysql mysql 4096 Jun 22 13:19 . drwxr-xr-x 4 mysql mysql 4096 Jun 19 18:04 .. -rw-r----- 1 mysql mysql 56 Jun 22 13:18 auto.cnf -rw-r----- 1 mysql mysql 417 Jun 22 13:19 ib_buffer_pool -rw-r----- 1 mysql mysql 12582912 Jun 22 13:19 ibdata1 -rw-r----- 1 mysql mysql 50331648 Jun 22 13:19 ib_logfile0 -rw-r----- 1 mysql mysql 50331648 Jun 22 13:18 ib_logfile1 drwxr-x--- 2 mysql mysql 4096 Jun 22 13:18 mysql drwxr-x--- 2 mysql mysql 4096 Jun 22 13:18 performance_schema drwxr-x--- 2 mysql mysql 12288 Jun 22 13:19 sys Capture the temporary root password root@binary:/usr/local# cat /mysql/logs/instance1/mysql_error.log 2018-06-22T17:18:50.464555Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). 2018-06-22T17:18:50.978714Z 0 [Warning] InnoDB: New log files created, LSN=45790 2018-06-22T17:18:51.040350Z 0 [Warning] InnoDB: Creating foreign key constraint system tables. 2018-06-22T17:18:51.129954Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 5506e36e-7640-11e8-9b0f-0800276bf3cb. 2018-06-22T17:18:51.132700Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened. 2018-06-22T17:18:51.315917Z 1 [Note] A temporary password is generated for root@localhost: ptraRbBy&lt;6Wm </pre> <h1>SYSTEMD configuration</h1> <p>Now that the MySQL instances are prepared and ready to be started. We will now configure SYSTEMD so that systemctl can manage the MySQL instances.</p> <h3>SYSTEMD MySQL service</h3> <p>Create the SYSTEMD base configuration at /etc/systemd/system/mysql@.service and place the following contents inside. This is where the naming convention of the MySQL instances comes into effect. In the SYSTEMD configuration file, %I will be replaced with the naming convention that you use. You want to make sure that the PIDfile and the MySQL configuration file in the ExecStart will match up with your previous configurations. You only need to create one SYSTEMD configuration file. 
As you enable each service in the next step, SYSTEMD will make copies of the configuration for you and replace the %I accordingly with your naming convention.</p> <p>Example /etc/systemd/system/mysql@.service :</p> <pre lang="bash">[Unit] Description=Oracle MySQL After=network.target [Service] Type=forking User=mysql Group=mysql PIDFile=/var/run/mysql/mysqld_%I.pid ExecStart= ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.%I.cnf --daemonize Restart=on-failure RestartPreventExitStatus=1 [Install] WantedBy=multi-user.target </pre> <h3>Enable and start the MySQL instances</h3> <p>Enable the service, placing the naming convention after the @ symbol using the systemctl command. SYSTEMD will make a copy of the configuration file from the previous step and replace the %I with the text after the @. When viewing the status of the service, you will see that the process is using the correct configuration file based upon the naming convention. Repeat for each instance.</p> <p>Example :</p> <pre lang="bash">systemctl enable mysql@instance1 systemctl start mysql@instance1 root@binary:~# systemctl status mysql@instance1 ● mysql@instance1.service - Oracle MySQL Loaded: loaded (/etc/systemd/system/mysql@.service; enabled; vendor preset: enabled) Active: active (running) since Fri 2018-06-22 14:51:48 EDT; 10min ago Process: 11372 ExecStart=/usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --daemonize (code=exited, status=0/SUCCESS) Main PID: 11374 (mysqld) Tasks: 28 (limit: 4915) CGroup: /system.slice/system-mysql.slice/mysql@instance1.service └─11374 /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.instance1.cnf --daemonize Jun 22 14:51:48 binary systemd[1]: Starting Oracle MySQL... Jun 22 14:51:48 binary systemd[1]: Started Oracle MySQL. </pre> <p>Example PID and Socket files :</p> <pre lang="bash">root@binary:/var/log# ls -al /var/run/mysql total 16 drwxr-xr-x 2 mysql mysql 160 Jul 20 10:33 . drwxr-xr-x 19 root root 640 Jul 20 10:33 .. -rw-r----- 1 mysql mysql 6 Jul 20 10:33 mysqld_instance1.pid srwxrwxrwx 1 mysql mysql 0 Jul 20 10:33 mysqld_instance1.sock -rw------- 1 mysql mysql 6 Jul 20 10:33 mysqld_instance1.sock.lock -rw-r----- 1 mysql mysql 6 Jul 20 10:33 mysqld_instance2.pid srwxrwxrwx 1 mysql mysql 0 Jul 20 10:33 mysqld_instance2.sock -rw------- 1 mysql mysql 6 Jul 20 10:33 mysqld_instance2.sock.lock </pre> <h1>Managing MySQL</h1> <p>Now that we have started the two MySQL instances, we can log in to them using the aliases we created, which point to the client configuration files holding the temporary root passwords. Next, we change the initial root password for each instance and then update the client configuration files with the new credentials.</p> <h3>Change root password</h3> <p>Log in to MySQL using the aliases mysql1 and mysql2 which we configured previously and change the root password. Repeat for each instance.</p> <p>Example :</p> <pre lang="bash">mysql1 mysql&gt; ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPass'; mysql&gt; exit </pre> <h3>Update MySQL client configuration</h3> <p>Update the MySQL client configuration files (/etc/instance1_client.cnf and /etc/instance2_client.cnf in my example) with the new passwords. Repeat for each instance.</p> <p>Example client config /etc/instance1_client.cnf :</p> <pre lang="bash">[client] user=root password='MyNewPass' socket=/var/run/mysql/mysqld_instance1.sock </pre> <h1>Conclusion</h1> <p>Configuring MySQL to be controlled by systemctl makes it much easier to manage your MySQL instances.
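</p> <p>For example, with the instance1/instance2 naming used above, day-to-day management of both instances comes down to standard systemctl calls (a quick sketch):</p> <pre lang="bash"># restart one instance without touching the other
systemctl restart mysql@instance2

# check both instances at once
systemctl status mysql@instance1 mysql@instance2

# stop an instance and keep it from starting at boot
systemctl stop mysql@instance2
systemctl disable mysql@instance2
</pre> <p>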
This process also allows for easy configuration of multiple instances, even beyond two. But keep in mind that when configuring multiple MySQL instances on a single server, you should size the memory allocation for each instance accordingly and leave room for overhead.</p> </div></div> Kevin Markwardt https://blog.pythian.com/?p=104722 Mon Jul 30 2018 09:23:30 GMT-0400 (EDT) FAQ: Webinars for “Oracle Indexing Internals and Best Practices” https://richardfoote.wordpress.com/2018/07/30/faq-webinars-for-oracle-indexing-internals-and-best-practices/ I&#8217;ve been somewhat inundated with questions regarding the &#8220;Oracle Indexing Internals and Best Practices&#8221; webinar series I&#8217;ll be running in October and November since I announced both webinar series last week. So I&#8217;ve compiled the following list of frequently asked questions which I&#8217;m hoping will address most of those asked. If you have any additional [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5659 Mon Jul 30 2018 06:01:00 GMT-0400 (EDT) Network Slowness Caused Database Contention That Caused Goldengate Lag http://www.fahdmirza.com/2018/07/network-slowness-caused-database.html <div dir="ltr" style="text-align: left;" trbidi="on">I got paged for a GoldenGate extract lagging behind. Checked the extract configuration: it was a normal extract, and it seemed stuck without giving any error in the ggserr.log or anywhere else. It wasn't abended either and was in a running state.<br /><br /><br /><a name='more'></a>Tried stopping and restarting it, but it still remained in a running state while doing nothing, and the lag kept increasing. So the issue was clearly outside of GoldenGate. Checked the database, starting with the alert log, and didn't see any errors there either.<br /><br />Jumped into the database and ran some queries to see which sessions were active and what they were running. After going through various active sessions, it turned out that a few of them were running long transactions over a dblink; these sessions were several hours old and seemed stuck. These sessions were also inducing widespread delay on the temp tablespace and were blocking other sessions. Due to the undersized temp tablespace plus these stuck long-running transactions, database performance was also slower than usual.<br /><br />Ran a select statement over that dblink and it was very slow. Used tnsping to ping that database remotely and it returned with a delay.
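<br /><br />The follow-up checks were along these lines (a rough sketch; the TNS alias and remote host name are placeholders, and on Windows the traceroute equivalent is tracert):<br /><pre lang="bash"># time a few TNS round trips to the remote database
tnsping REMOTEDB 3

# basic reachability and latency to the remote host
ping -c 5 remote-db-host

# look for slow or lossy hops along the path
traceroute remote-db-host
</pre>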
Network commands like ping and tracert told the same story: everything pointed to a delay in the network.<br /><br />Killed the long-running transaction as it was going nowhere, and that eased the pressure on the temp tablespace, which in turn allowed the extract to clear the lag.</div> Fahd Mirza tag:blogger.com,1999:blog-3496259157130184660.post-3808767576234023032 Mon Jul 30 2018 01:55:00 GMT-0400 (EDT) Oracle 18c Grid Infrastructure Upgrade https://gavinsoorma.com/2018/07/oracle-18c-grid-infrastructure-upgrade/ <h3><span style="color: #ff0000;">Upgrade Oracle 12.1.0.2 Grid Infrastructure to 18c </span></h3> <p><strong>Download the 18c Grid Infrastructure software (18.3)</strong></p> <p><a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html">https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html</a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png"><img class="aligncenter size-full wp-image-8220" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png" alt="" width="774" height="415" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png 774w, https://gavinsoorma.com/wp-content/uploads/2018/07/18-300x161.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18-768x412.png 768w" sizes="(max-width: 774px) 100vw, 774px" /></a></p> <p>&nbsp;</p> <p><strong>Prerequisites</strong></p> <ul> <li>Apply the patch <strong>21255373</strong> to the 12.1.0.2 Grid Infrastructure software home</li> <li>Edit the /etc/security/limits.conf file and add the lines:</li> </ul> <p>oracle soft stack 10240<br /> grid   soft stack 10240</p> <p>&nbsp;</p> <p><strong>Notes</strong></p> <ul> <li>Need to have at least 10 GB of free space in the $ORACLE_BASE directory</li> <li>The unzipped 18c Grid Infrastructure software occupies around 11 GB of disk space &#8211; a big increase on the earlier versions</li> <li>The Grid Infrastructure upgrade can be performed in a rolling fashion &#8211; configure batches for this</li> <li>We can see the difference in the software version between the RAC nodes while the GI upgrade is in progress &#8230;</li> </ul> <p>During Upgrade:</p> <p>[root@rac01 trace]# cd /u02/app/18.0.0/grid/bin</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs softwareversion</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs softwareversion -all</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>Oracle Clusterware version on node [rac02] is [12.1.0.2.0]</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs activeversion</strong></p> <p>Oracle Clusterware active version on the cluster is [12.1.0.2.0]</p> <p>[root@rac01 bin]#</p> <p>&nbsp;</p> <p>After Upgrade:</p> <p>[root@rac01 bin]# <strong>./crsctl query crs activeversion</strong></p> <p>Oracle Clusterware active version on the cluster is [18.0.0.0.0]</p> <p>[root@rac01 bin]# <strong>./crsctl query crs softwareversion -all</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>Oracle Clusterware version on node [rac02] is [18.0.0.0.0]</p> <p>&nbsp;</p> <ul> <li>The minimum memory requirement is 8 GB &#8211; same as 12c Release 2 (see the quick pre-checks just after this list)</li> <li>Got an error PRVF-5600 related to /etc/resolv.conf stating the file cannot be parsed as some lines are in an improper format &#8211; ignored the error because the format of the file is correct.</li> </ul>
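<p>A quick way to sanity-check the free space, memory and stack limit prerequisites above before launching the upgrade (a rough sketch; run as the grid owner and adjust paths to your environment):</p> <pre lang="bash">df -h $ORACLE_BASE      # want at least 10 GB free under ORACLE_BASE
free -g                 # want at least 8 GB of memory
ulimit -s               # soft stack limit for the current user
grep "soft stack" /etc/security/limits.conf
</pre>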
<p>[grid@rac01 grid]$ cat /etc/resolv.conf<br /> # Generated by NetworkManager<br /> search localdomain  rac.localdomain</p> <p>nameserver 192.168.56.102</p> <p>options timeout:3<br /> options retries:1</p> <p>&nbsp;</p> <p><strong>Create the directory structure on both RAC nodes</strong></p> <p>[root@rac01 app]# su &#8211; grid</p> <p>[grid@rac01 ~]$ cd /u02/app/18.1.0/</p> <p>[grid@rac01 ~]$ cd /u02/app</p> <p>[grid@rac01 app]$ mkdir 18.1.0</p> <p>[grid@rac01 app]$ cd 18.1.0/</p> <p>[grid@rac01 18.0.0]$ mkdir grid</p> <p>[grid@rac01 18.0.0]$ cd grid</p> <p>[grid@rac01 grid]$ ssh grid@rac02</p> <p>Last login: Sun Jul 29 11:22:38 2018 from rac01.localdomain</p> <p>[grid@rac02 ~]$ cd /u02/app</p> <p>[grid@rac02 app]$ mkdir 18.1.0</p> <p>[grid@rac02 app]$ cd 18.1.0/</p> <p>[grid@rac02 18.0.0]$ mkdir grid</p> <p>&nbsp;</p> <p><strong>Unzip the 18c GI Software</strong></p> <p>[grid@rac01 ~]$ cd /u02/app/18.1.0/grid</p> <p>[grid@rac01 grid]$ unzip -q /media/sf_software/LINUX.X64_180000_grid_home.zip</p> <p>&nbsp;</p> <p><strong>Execute gridSetup.sh</strong></p> <p>[grid@rac01 18.0.0]$ export DISPLAY=:0.0</p> <p>[grid@rac01 18.0.0]$ <strong>./gridSetup.sh</strong></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png"><img class="aligncenter size-full wp-image-8196" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png" alt="" width="612" height="382" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png 612w, https://gavinsoorma.com/wp-content/uploads/2018/07/18a-300x187.png 300w" sizes="(max-width: 612px) 100vw, 612px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png"><img class="aligncenter size-full wp-image-8197" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18b-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18b-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png"><img class="aligncenter size-full wp-image-8198" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png" alt="" width="794" height="595" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png"><img class="aligncenter size-full wp-image-8199" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png" alt="" width="800" height="599" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png 800w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-768x575.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-627x470.png 627w" sizes="(max-width: 800px) 100vw, 800px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png"><img class="aligncenter size-full wp-image-8200" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png" alt="" width="794" height="596" 
srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png"><img class="aligncenter size-full wp-image-8201" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png" alt="" width="801" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png 801w, https://gavinsoorma.com/wp-content/uploads/2018/07/18f-300x223.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18f-768x571.png 768w" sizes="(max-width: 801px) 100vw, 801px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png"><img class="aligncenter size-full wp-image-8202" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png" alt="" width="794" height="595" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png"><img class="aligncenter size-full wp-image-8203" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png" alt="" width="802" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png 802w, https://gavinsoorma.com/wp-content/uploads/2018/07/18h-300x223.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18h-768x571.png 768w" sizes="(max-width: 802px) 100vw, 802px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png"><img class="aligncenter size-full wp-image-8204" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png" alt="" width="797" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png 797w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-768x574.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-627x470.png 627w" sizes="(max-width: 797px) 100vw, 797px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png"><img class="aligncenter size-full wp-image-8205" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18j-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18j-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png"><img class="aligncenter size-full wp-image-8206" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png" alt="" width="802" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png 802w, https://gavinsoorma.com/wp-content/uploads/2018/07/18k-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18k-768x573.png 768w" sizes="(max-width: 
802px) 100vw, 802px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png"><img class="aligncenter size-full wp-image-8207" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png" alt="" width="798" height="600" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-768x577.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png"><img class="aligncenter size-full wp-image-8208" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png" alt="" width="797" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png 797w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-627x470.png 627w" sizes="(max-width: 797px) 100vw, 797px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png"><img class="aligncenter size-full wp-image-8209" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png" alt="" width="798" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png"><img class="aligncenter size-full wp-image-8210" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png" alt="" width="798" height="599" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png"><img class="aligncenter size-full wp-image-8211" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png" alt="" width="799" height="601" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18p-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18p-768x578.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png"><img class="aligncenter size-full wp-image-8212" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png" alt="" width="793" height="601" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png 793w, https://gavinsoorma.com/wp-content/uploads/2018/07/18q-300x227.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18q-768x582.png 768w" sizes="(max-width: 793px) 100vw, 793px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png"><img class="aligncenter size-full 
wp-image-8213" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png" alt="" width="794" height="600" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18r-300x227.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18r-768x580.png 768w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png"><img class="aligncenter size-full wp-image-8214" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18s-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18s-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png"><img class="aligncenter size-full wp-image-8215" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png" alt="" width="793" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png 793w, https://gavinsoorma.com/wp-content/uploads/2018/07/18t-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18t-768x579.png 768w" sizes="(max-width: 793px) 100vw, 793px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png"><img class="aligncenter size-full wp-image-8216" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png" alt="" width="799" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18u-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18u-768x573.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p><strong>ASM Configuration Assistant 18c</strong></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png"><img class="aligncenter size-full wp-image-8217" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png" alt="" width="949" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png 949w, https://gavinsoorma.com/wp-content/uploads/2018/07/18v-300x188.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18v-768x482.png 768w" sizes="(max-width: 949px) 100vw, 949px" /></a></p> <p>&nbsp;</p> <p><strong>GIMR pluggable database upgraded to 18c</strong></p> <p>&nbsp;</p> <pre>[grid@rac01 bin]$ export ORACLE_SID=-MGMTDB [grid@rac01 bin]$ pwd /u02/app/18.0.0/grid/bin [grid@rac01 bin]$ ./sqlplus sys as sysdba SQL*Plus: Release 18.0.0.0.0 - Production on Sun Jul 29 22:09:17 2018 Version 18.3.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. Enter password: Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 SQL&gt; select name,open_mode from v$pdbs; NAME -------------------------------------------------------------------------------- OPEN_MODE ---------- PDB$SEED READ ONLY <strong>GIMR_DSCREP_10</strong> READ WRITE SQL&gt; alter session set container=GIMR_DSCREP_10; Session altered. SQL&gt; select tablespace_name from dba_tablespaces; TABLESPACE_NAME ------------------------------ SYSTEM SYSAUX UNDOTBS1 TEMP USERS SYSGRIDHOMEDATA SYSCALOGDATA SYSMGMTDATA SYSMGMTDATADB SYSMGMTDATACHAFIX SYSMGMTDATAQ 11 rows selected. 
SQL&gt; select file_name from dba_data_files where tablespace_name='SYSMGMTDATA'; FILE_NAME -------------------------------------------------------------------------------- +OCR/_MGMTDB/7224A7DF6CB92239E0536438A8C03F3A/DATAFILE/sysmgmtdata.281.982792479 SQL&gt; </pre> Gavin Soorma https://gavinsoorma.com/?p=8218 Mon Jul 30 2018 01:05:29 GMT-0400 (EDT) mV Meter Battery and Protection Mod https://blog.tardate.com/2018/07/mv-meter-battery-and-protection-mod.html <p>The ATmega328-based millivolt meter based on a design by Scullcom Hobby Electronics has been serving well on my bench. But time for a couple of mods:</p> <ul> <li>adding a 9V internal battery that can be used when not connected to external supply - great for when the bench is crowded</li> <li>simple reverse-polarity protection (inline rectifier), particularly to avoid any confusion over centre-negative/centre-positive power connectors</li> </ul> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Equipment/MilliVoltmeterDIY/CustomBoardAndEnclosure">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Equipment/MilliVoltmeterDIY/CustomBoardAndEnclosure"><img src="https://leap.tardate.com/Equipment/MilliVoltmeterDIY/CustomBoardAndEnclosure/assets/Battery_mod_installed.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/07/mv-meter-battery-and-protection-mod.html Sun Jul 29 2018 14:27:43 GMT-0400 (EDT) LEAP#405 Bootloaders and Arduino Serial Programming https://blog.tardate.com/2018/07/leap405-bootloaders-and-arduino-serial-programming.html <p>The Arduino IDE makes programming AVR-based microcontrollers so easy that many quite simple concepts get lost in the fog. I confess to having been hazy for the longest time concerning the role of the bootloader and what exactly was going on when you click the “Upload Sketch” button. There are actually some great resources around the net and on YouTube, but they can also mislead a little because they might focus on just one aspect, so I decided to try and pull together a comprehensive soup-to-nuts story. It covers:</p> <ul> <li>How to check what bootloader (if any) is on a chip</li> <li>What bootloaders are available?</li> <li>How to burn a bootloader with the Arduino IDE</li> <li>How to burn a bootloader with Nick Gammon’s incredibly useful Arduino utility sketches</li> <li>Breadboard Setup for Programming over USB-Serial (FTDI and CH340 veriants)</li> <li>Programming over USB-Serial with the Arduino IDE</li> <li>Programming over USB-Serial with avrdude and gcc toolchain</li> </ul> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/ATmegaSerialProgrammer">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/ATmegaSerialProgrammer"><img src="https://leap.tardate.com/playground/ATmegaSerialProgrammer/assets/ATmegaSerialProgrammer_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/07/leap405-bootloaders-and-arduino-serial-programming.html Sun Jul 29 2018 02:33:34 GMT-0400 (EDT) LEAP#404 The Arza-matron https://blog.tardate.com/2018/07/leap404-the-arza-matron.html <p>I’ve had the other half of the guitar I used for <a href="https://fretboard.tardate.com/">The Fretboard (LEAP#018)</a> sitting on a shelf ever since. 
It’s almost been thrown out a number of times, but luckily I didn’t as it proved to be inspiration for this last-minute idea for a party decoration.</p> <p>The basic idea, using as many on-hand parts as possible:</p> <ul> <li>sound input</li> <li>2 LED strip circuits independently controlled and powered from 12V</li> <li>simple Arduino sketch to sample the sound and drive the LED strips with PWM</li> </ul> <p>It worked just fine, although in the process I discovered the sound module I used did not output continuous reading but rather a threshold trigger (so the effect was not as subtle as I planned). Something to fix next time I want to fire this up…</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Kraft/arzamatron">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Kraft/arzamatron"><img src="https://leap.tardate.com/Kraft/arzamatron/assets/arzamatron_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/07/leap404-the-arza-matron.html Sat Jul 28 2018 03:14:10 GMT-0400 (EDT) Accessing the Oracle 18c Database in Vagrant VM http://oracle-help.com/oracle-18c/accessing-the-oracle-18c-database-in-vagrant-vm/ <p>In the previous post, we learned the installation of <strong>Vagrant</strong> and provisioning the <strong>Oracle 18c Database</strong> in Vagrant VM.</p> <div class="entry-content-asset"> <blockquote class="wp-embedded-content" data-secret="EcVZdpsdV6"><p><a href="http://oracle-help.com/oracle-18c/oracle-database-18c-installation-with-vagrant/">Oracle Database 18c installation with Vagrant</a></p></blockquote> <p><iframe class="wp-embedded-content" sandbox="allow-scripts" security="restricted" src="http://oracle-help.com/oracle-18c/oracle-database-18c-installation-with-vagrant/embed/#?secret=EcVZdpsdV6" data-secret="EcVZdpsdV6" width="600" height="338" title="&#8220;Oracle Database 18c installation with Vagrant&#8221; &#8212; ORACLE-HELP" frameborder="0" marginwidth="0" marginheight="0" scrolling="no"></iframe></div> <p>Now, in this post, we will show how to access <strong>Oracle 18c Database</strong> in <strong>Vagrant VM</strong>. 
First of all, we can check the <strong>status</strong> of VM.</p><pre class="crayon-plain-tag">D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant global-status id name provider state directory ------------------------------------------------------------------------------------------------------------------------- c09dbcd oracle-18c-vagrant virtualbox running D:/shared/vagrant-boxes-master/vagrant-boxes-master/OracleDatabase/18.3.0</pre><p>Next step uses <strong><span style="font-family: 'courier new', courier, monospace;">vagrant ssh</span></strong> command to log in to VM and then <strong><span style="font-family: 'courier new', courier, monospace;">sudo </span></strong>to <strong>oracle</strong> user.</p><pre class="crayon-plain-tag">D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant ssh Welcome to Oracle Linux Server release 7.5 (GNU/Linux 4.14.35-1818.0.9.el7uek.x86_64) The Oracle Linux End-User License Agreement can be viewed here: * /usr/share/eula/eula.en_US For additional packages, updates, documentation and community help, see: * http://yum.oracle.com/ [vagrant@oracle-18c-vagrant ~]$ sudo su - oracle Last login: Fri Jul 27 14:51:44 UTC 2018 [oracle@oracle-18c-vagrant ~]$</pre><p>Now check the <span style="font-family: 'courier new', courier, monospace;"><strong>pmon</strong> </span>status and use <strong><span style="font-family: 'courier new', courier, monospace;">sqlplus</span> </strong>to connect to the <strong>Oracle Database</strong>.</p><pre class="crayon-plain-tag">[oracle@oracle-18c-vagrant ~]$ ps -ef|grep pmon oracle 27894 1 0 15:52 ? 00:00:00 ora_pmon_ORCLCDB oracle 28858 28813 0 16:02 pts/0 00:00:00 grep --color=auto pmon [oracle@oracle-18c-vagrant ~]$ ps -ef|grep tns root 21 2 0 15:24 ? 00:00:00 [netns] oracle 24009 1 0 15:35 ? 00:00:00 /opt/oracle/product/18c/dbhome_1/bin/tnslsnr LISTENER -inherit oracle 28862 28813 0 16:02 pts/0 00:00:00 grep --color=auto tns [oracle@oracle-18c-vagrant ~]$ sqlplus / as sysdba SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jul 27 16:02:09 2018 Version 18.3.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 SQL&gt; select name from v$database; NAME --------- ORCLCDB SQL&gt; show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 ORCLPDB1 READ WRITE NO SQL&gt;exit Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 [oracle@oracle-18c-vagrant ~]$</pre><p>In this step, we can change the default passwords and connect via an <strong>EZConnect</strong> method.</p><pre class="crayon-plain-tag">[oracle@oracle-18c-vagrant ~]$ ls setPassword.sh [oracle@oracle-18c-vagrant ~]$ ./setPassword.sh oraclevagrant #New_Password The Oracle base remains unchanged with value /opt/oracle SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jul 27 15:03:05 2018 Version 18.3.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 SQL&gt; User altered. SQL&gt; User altered. SQL&gt; Session altered. SQL&gt; User altered. 
SQL&gt; Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 [oracle@oracle-18c-vagrant ~]$ [oracle@oracle-18c-vagrant ~]$ sql system@//localhost:1521/ORCLCDB SQLcl: Release 17.3.0 Production on Fri Jul 27 15:02:11 2018 Copyright (c) 1982, 2018, Oracle. All rights reserved. Password? (**********?) ************* Last Successful login time: Fri Jul 27 2018 15:02:21 +00:00 Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 SQL&gt; exit Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0</pre><p>After testing will be completed, now we will destroy <strong>Vagrant VM</strong>.</p><pre class="crayon-plain-tag">D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant port The forwarded ports for the machine are listed below. Please note that these values may differ from values configured in the Vagrantfile if the provider supports automatic port collision detection and resolution. 22 (guest) =&gt; 2222 (host) 1521 (guest) =&gt; 1521 (host) 5500 (guest) =&gt; 5500 (host) D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant ssh-config Host oracle-18c-vagrant HostName 127.0.0.1 User vagrant Port 2222 UserKnownHostsFile /dev/null StrictHostKeyChecking no PasswordAuthentication no IdentityFile D:/shared/vagrant-boxes-master/vagrant-boxes-master/OracleDatabase/18.3.0/.vagrant/machines/oracle-18c-vagrant/virtualbox/private_key IdentitiesOnly yes LogLevel FATAL D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant halt ==&gt; oracle-18c-vagrant: Attempting graceful shutdown of VM... D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant destroy oracle-18c-vagrant: Are you sure you want to destroy the 'oracle-18c-vagrant' VM? [y/N] y ==&gt; oracle-18c-vagrant: Destroying VM and associated drives... 
D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;</pre><p>In the next article, we will see how to<strong> install a Vagrant VM on Linux</strong>.</p> <p>Stay tuned for <strong>more articles on Vagrant related to Oracle.<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles, send us an invitation or follow us:</span></p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle">https://t.me/helporacle</a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-18c/accessing-the-oracle-18c-database-in-vagrant-vm/">Accessing the Oracle 18c Database in Vagrant VM</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5235 Fri Jul 27 2018 16:51:09 GMT-0400 (EDT) Oracle Database 18c installation with Vagrant http://oracle-help.com/oracle-18c/oracle-database-18c-installation-with-vagrant/ <p>Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the &#8220;works on my machine&#8221; excuse a relic of the past. It provides easy-to-configure, reproducible, and portable work environments built on top of industry-standard technology, controlled by a single consistent workflow to help maximize productivity and flexibility.</p> <h3 id="introduction-to-vagrant"><span style="color: #0000ff;"><a style="color: #0000ff;" href="https://www.vagrantup.com/intro/getting-started/">Introduction to Vagrant</a></span></h3> <p><strong>Oracle</strong> has recently launched <strong>Oracle Database 18c</strong>, which is now available via <strong>Vagrant</strong> boxes in Oracle&#8217;s <strong>GitHub</strong> repository. With the help of <strong>Vagrant</strong>, we can get a ready-made <strong>VirtualBox</strong> VM with the OS and Oracle Database installed, suitable for sandbox testing and QA environments.
In a couple of minutes, we can have such a VM up and running. For this, we need the following things:</p> <ol> <li><span style="color: #0000ff;"><a style="color: #0000ff;" href="http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html">Oracle VirtualBox</a></span></li> <li><span style="color: #0000ff;"><a style="color: #0000ff;" href="https://www.vagrantup.com/downloads.html">Vagrant installer</a></span></li> <li><span style="color: #0000ff;"><a style="color: #0000ff;" href="http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html">Oracle Database 18c setup</a></span></li> <li><span style="color: #0000ff;"><a style="color: #0000ff;" href="https://github.com/oracle/vagrant-boxes">Vagrant configuration files from GitHub</a></span></li> </ol> <p>After successful installation of <strong>VirtualBox</strong> &amp; <strong>Vagrant</strong>, we can create an Oracle Database Vagrant box. Then, download the configuration files from <strong>GitHub</strong> by clicking on <em><strong>Clone or Download</strong></em> and saving them to your local machine. There is also an option to get the configuration using the git command, but I will cover that in another post. You will need to download the <strong>Linux x86-64</strong> zip file of <strong>Oracle 18c</strong>.</p> <p>Now, I will unzip the Vagrant files on my local machine.</p> <p><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/07/1-1.jpg"><img class="size-full wp-image-5227 aligncenter" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/07/1-1.jpg?resize=953%2C451" alt="" width="953" height="451" /></a></p> <p>The next step is to open <span style="font-family: 'courier new', courier, monospace;"><strong>CMD</strong></span> with &#8220;<span style="font-family: 'courier new', courier, monospace;"><strong>run as administrator</strong></span>&#8221;. Check that Vagrant is installed and verify its version.
In my case, I installed Vagrant in <strong>&#8220;<span style="font-family: 'courier new', courier, monospace;">C:\HashiCorp\Vagrant</span>&#8220;</strong></p><pre class="crayon-plain-tag">C:\HashiCorp\Vagrant&gt;vagrant -v Vagrant 2.1.2 C:\HashiCorp\Vagrant&gt;vagrant -h Usage: vagrant [options] <command></command> [] -v, --version Print the version and exit. -h, --help Print this help. Common commands: box manages boxes: installation, removal, etc. destroy stops and deletes all traces of the vagrant machine global-status outputs status Vagrant environments for this user halt stops the vagrant machine help shows the help for a subcommand init initializes a new Vagrant environment by creating a Vagrantfile login log in to HashiCorp's Vagrant Cloud package packages a running vagrant environment into a box plugin manages plugins: install, uninstall, update, etc. port displays information about guest port mappings powershell connects to machine via powershell remoting provision provisions the vagrant machine push deploys code in this environment to a configured destination rdp connects to machine via RDP reload restarts vagrant machine, loads new Vagrantfile configuration resume resume a suspended vagrant machine snapshot manages snapshots: saving, restoring, etc. ssh connects to machine via SSH ssh-config outputs OpenSSH valid configuration to connect to the machine status outputs status of the vagrant machine suspend suspends the machine up starts and provisions the vagrant environment validate validates the Vagrantfile version prints current and latest Vagrant version For help on any individual command run `vagrant COMMAND -h` Additional subcommands are available, but are either more advanced or not commonly used. To see all subcommands, run the command `vagrant list-commands`.</pre><p>Next, we will have to move the <strong><span style="font-family: 'courier new', courier, monospace;">vagrant-boxes/OracleDatabase/&lt;version&gt;</span></strong> folder of the version we would like to build and then copy the Oracle Database installer zip file into the folder. 
In my case, I will install Oracle 18c, So we will move <strong><span style="font-family: 'courier new', courier, monospace;">D:\shared\vagrant-boxes-master\OracleDatabase\18.3.0</span></strong> and copy installer here.</p> <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg"><img data-attachment-id="5228" data-permalink="http://oracle-help.com/oracle-18c/oracle-database-18c-installation-with-vagrant/attachment/2-57/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?fit=856%2C507" data-orig-size="856,507" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1532736795&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="2" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?fit=300%2C178" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?fit=856%2C507" class="size-full wp-image-5228 aligncenter" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?resize=856%2C507" alt="" width="856" height="507" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?w=856 856w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?resize=300%2C178 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?resize=768%2C455 768w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?resize=60%2C36 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/2-1.jpg?resize=150%2C89 150w" sizes="(max-width: 856px) 100vw, 856px" data-recalc-dims="1" /></a></p> <p>Now, we are ready to fire <span style="font-family: 'courier new', courier, monospace;"><strong>Vagrant UP</strong></span> command to install <strong>VirtualBox</strong> with <strong>Oracle Database 18c</strong>.</p><pre class="crayon-plain-tag">D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant up Bringing machine 'oracle-18c-vagrant' up with 'virtualbox' provider... ==&gt; oracle-18c-vagrant: Importing base box 'ol7-latest'... ==&gt; oracle-18c-vagrant: Matching MAC address for NAT networking... ==&gt; oracle-18c-vagrant: Setting the name of the VM: oracle-18c-vagrant ==&gt; oracle-18c-vagrant: Clearing any previously set network interfaces... ==&gt; oracle-18c-vagrant: Preparing network interfaces based on configuration... oracle-18c-vagrant: Adapter 1: nat ==&gt; oracle-18c-vagrant: Forwarding ports... oracle-18c-vagrant: 1521 (guest) =&gt; 1521 (host) (adapter 1) oracle-18c-vagrant: 5500 (guest) =&gt; 5500 (host) (adapter 1) oracle-18c-vagrant: 22 (guest) =&gt; 2222 (host) (adapter 1) ==&gt; oracle-18c-vagrant: Running 'pre-boot' VM customizations... ==&gt; oracle-18c-vagrant: Booting VM... ==&gt; oracle-18c-vagrant: Waiting for machine to boot. This may take a few minutes... oracle-18c-vagrant: SSH address: 127.0.0.1:2222 oracle-18c-vagrant: SSH username: vagrant oracle-18c-vagrant: SSH auth method: private key oracle-18c-vagrant: oracle-18c-vagrant: Vagrant insecure key detected. Vagrant will automatically replace oracle-18c-vagrant: this with a newly generated keypair for better security. 
oracle-18c-vagrant: oracle-18c-vagrant: Inserting generated public key within guest... oracle-18c-vagrant: Removing insecure key from the guest if it's present... oracle-18c-vagrant: Key inserted! Disconnecting and reconnecting using new SSH key... ==&gt; oracle-18c-vagrant: Machine booted and ready! ==&gt; oracle-18c-vagrant: Checking for guest additions in VM... ==&gt; oracle-18c-vagrant: Setting hostname... ==&gt; oracle-18c-vagrant: Mounting shared folders... oracle-18c-vagrant: /vagrant =&gt; D:/shared/vagrant-boxes-master/vagrant-boxes-master/OracleDatabase/18.3.0 ==&gt; oracle-18c-vagrant: Running provisioner: shell... oracle-18c-vagrant: Running: C:/Users/skagupta/AppData/Local/Temp/vagrant-shell20180727-16572-tusxvb.sh oracle-18c-vagrant: INSTALLER: Started up oracle-18c-vagrant: Resolving Dependencies oracle-18c-vagrant: --&gt; Running transaction check oracle-18c-vagrant: ---&gt; Package btrfs-progs.x86_64 0:4.9.1-1.0.2.el7 will be updated oracle-18c-vagrant: ---&gt; Package btrfs-progs.x86_64 0:4.15.1-1.el7 will be an update oracle-18c-vagrant: --&gt; Processing Dependency: libzstd.so.1()(64bit) for package: btrfs-progs-4.15.1-1.el7.x86_64 oracle-18c-vagrant: ---&gt; Package gnupg2.x86_64 0:2.0.22-4.el7 will be updated oracle-18c-vagrant: ---&gt; Package gnupg2.x86_64 0:2.0.22-5.el7_5 will be an update oracle-18c-vagrant: ---&gt; Package iproute.x86_64 0:4.11.0-14.el7 will be updated oracle-18c-vagrant: ---&gt; Package iproute.x86_64 0:4.14.1-5.0.2.el7 will be an update oracle-18c-vagrant: ---&gt; Package kernel-tools.x86_64 0:3.10.0-862.6.3.0.1.el7 will be updated oracle-18c-vagrant: ---&gt; Package kernel-tools.x86_64 0:3.10.0-862.9.1.el7 will be an update oracle-18c-vagrant: ---&gt; Package kernel-tools-libs.x86_64 0:3.10.0-862.6.3.0.1.el7 will be updated oracle-18c-vagrant: ---&gt; Package kernel-tools-libs.x86_64 0:3.10.0-862.9.1.el7 will be an update oracle-18c-vagrant: ---&gt; Package kmod-vboxguest-uek5.x86_64 0:5.2.14-1.el7 will be updated oracle-18c-vagrant: ---&gt; Package kmod-vboxguest-uek5.x86_64 0:5.2.16-1.el7 will be an update oracle-18c-vagrant: ---&gt; Package python-perf.x86_64 0:3.10.0-862.6.3.0.1.el7 will be updated oracle-18c-vagrant: ---&gt; Package python-perf.x86_64 0:3.10.0-862.9.1.el7 will be an update oracle-18c-vagrant: ---&gt; Package xfsprogs.x86_64 0:4.5.0-15.0.1.el7 will be updated oracle-18c-vagrant: ---&gt; Package xfsprogs.x86_64 0:4.15-1.el7 will be an update oracle-18c-vagrant: ---&gt; Package yum.noarch 0:3.4.3-158.0.1.el7 will be updated oracle-18c-vagrant: ---&gt; Package yum.noarch 0:3.4.3-158.0.2.el7 will be an update oracle-18c-vagrant: --&gt; Running transaction check oracle-18c-vagrant: ---&gt; Package libzstd.x86_64 0:1.3.4-1.el7 will be installed oracle-18c-vagrant: --&gt; Finished Dependency Resolution oracle-18c-vagrant: oracle-18c-vagrant: Dependencies Resolved oracle-18c-vagrant: oracle-18c-vagrant: ================================================================================ oracle-18c-vagrant: Package Arch Version Repository Size oracle-18c-vagrant: ================================================================================ oracle-18c-vagrant: Updating: oracle-18c-vagrant: btrfs-progs x86_64 4.15.1-1.el7 ol7_UEKR5 765 k oracle-18c-vagrant: gnupg2 x86_64 2.0.22-5.el7_5 ol7_latest 1.5 M oracle-18c-vagrant: iproute x86_64 4.14.1-5.0.2.el7 ol7_UEKR5 512 k oracle-18c-vagrant: kernel-tools x86_64 3.10.0-862.9.1.el7 ol7_latest 6.3 M oracle-18c-vagrant: kernel-tools-libs x86_64 3.10.0-862.9.1.el7 ol7_latest 6.2 M 
oracle-18c-vagrant: kmod-vboxguest-uek5 x86_64 5.2.16-1.el7 ol7_developer 152 k oracle-18c-vagrant: python-perf x86_64 3.10.0-862.9.1.el7 ol7_latest 6.3 M oracle-18c-vagrant: xfsprogs x86_64 4.15-1.el7 ol7_UEKR5 1.0 M oracle-18c-vagrant: yum noarch 3.4.3-158.0.2.el7 ol7_latest 1.2 M oracle-18c-vagrant: Installing for dependencies: oracle-18c-vagrant: libzstd x86_64 1.3.4-1.el7 ol7_UEKR5 212 k oracle-18c-vagrant: oracle-18c-vagrant: Transaction Summary oracle-18c-vagrant: ================================================================================ oracle-18c-vagrant: Install ( 1 Dependent package) oracle-18c-vagrant: Upgrade 9 Packages oracle-18c-vagrant: Total download size: 24 M oracle-18c-vagrant: Downloading packages: oracle-18c-vagrant: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. oracle-18c-vagrant: -------------------------------------------------------------------------------- oracle-18c-vagrant: Total 3.5 MB/s | 24 MB 00:06 oracle-18c-vagrant: Running transaction check oracle-18c-vagrant: Running transaction test oracle-18c-vagrant: Transaction test succeeded oracle-18c-vagrant: Running transaction oracle-18c-vagrant: Installing : libzstd-1.3.4-1.el7.x86_64 oracle-18c-vagrant: inflating: /opt/oracle/product/18c/dbhome_1/md/gdal/bin/gdallocationinfo oracle-18c-vagrant: inflating: /opt/oracle/product/18c/dbhome_1/md/gdal/bin/ogrinfo oracle-18c-vagrant: creating: /opt/oracle/product/18c/dbhome_1/md/lib/ oracle-18c-vagrant: inflating: /opt/oracle/product/18c/dbhome_1/md/lib/libsdogdal.so oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/bin/lbuilder oracle-18c-vagrant: -&gt; ../nls/lbuilder/lbuilder oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libocci.so oracle-18c-vagrant: -&gt; libocci.so.18.1 oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libodm18.so oracle-18c-vagrant: -&gt; libodmd18.so oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libagtsh.so oracle-18c-vagrant: -&gt; libagtsh.so.1.0 oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libclntsh.so oracle-18c-vagrant: -&gt; libclntsh.so.18.1 oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libjavavm18.a oracle-18c-vagrant: -&gt; ../javavm/jdk/jdk8/lib/libjavavm18.a oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/lib/jce.jar oracle-18c-vagrant: -&gt; ../../javavm/jdk/jdk8/lib/jce.jar oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/admin/cbp.jar oracle-18c-vagrant: -&gt; ../../javavm/jdk/jdk8/admin/cbp.jar oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libclntshcore.so oracle-18c-vagrant: -&gt; libclntshcore.so.18.1 oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libclntsh.so.11.1 oracle-18c-vagrant: -&gt; libclntsh.so oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/lib/libclntsh.so.10.1 oracle-18c-vagrant: -&gt; libclntsh.so oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/precomp/public/ORACA.H oracle-18c-vagrant: -&gt; oraca.h oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/precomp/public/SQLCA.H oracle-18c-vagrant: -&gt; sqlca.h oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/precomp/public/SQLDA.H oracle-18c-vagrant: -&gt; sqlda.h oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/precomp/public/ORACA.COB oracle-18c-vagrant: -&gt; oraca.cob oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/precomp/public/SQLCA.COB oracle-18c-vagrant: -&gt; 
sqlca.cob oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/admin/classes.bin oracle-18c-vagrant: -&gt; ../../javavm/jdk/jdk8/admin/classes.bin oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/admin/libjtcjt.so oracle-18c-vagrant: -&gt; ../../javavm/jdk/jdk8/admin/libjtcjt.so oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/admin/lfclasses.bin oracle-18c-vagrant: -&gt; ../../javavm/jdk/jdk8/admin/lfclasses.bin oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/cacerts oracle-18c-vagrant: -&gt; ../../../javavm/jdk/jdk8/lib/security/cacerts oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/lib/sunjce_provider.jar oracle-18c-vagrant: -&gt; ../../javavm/jdk/jdk8/lib/sunjce_provider.jar oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/java.security oracle-18c-vagrant: -&gt; ../../../javavm/jdk/jdk8/lib/security/java.security oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/local_policy.jar oracle-18c-vagrant: -&gt; ../../../javavm/jdk/jdk8/lib/security/local_policy.jar oracle-18c-vagrant: linking: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/US_export_policy.jar oracle-18c-vagrant: -&gt; ../../../javavm/jdk/jdk8/lib/security/US_export_policy.jar oracle-18c-vagrant: extracting: /opt/oracle/product/18c/dbhome_1/install/.img.bin oracle-18c-vagrant: finishing deferred symbolic links: oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/bin/lbuilder -&gt; ../nls/lbuilder/lbuilder oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libocci.so -&gt; libocci.so.18.1 oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libodm18.so -&gt; libodmd18.so oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libagtsh.so -&gt; libagtsh.so.1.0 oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libclntsh.so -&gt; libclntsh.so.18.1 oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libjavavm18.a -&gt; ../javavm/jdk/jdk8/lib/libjavavm18.a oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/lib/jce.jar -&gt; ../../javavm/jdk/jdk8/lib/jce.jar oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/admin/cbp.jar -&gt; ../../javavm/jdk/jdk8/admin/cbp.jar oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libclntshcore.so -&gt; libclntshcore.so.18.1 oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libclntsh.so.11.1 -&gt; libclntsh.so oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/lib/libclntsh.so.10.1 -&gt; libclntsh.so oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/precomp/public/ORACA.H -&gt; oraca.h oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/precomp/public/SQLCA.H -&gt; sqlca.h oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/precomp/public/SQLDA.H -&gt; sqlda.h oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/precomp/public/ORACA.COB -&gt; oraca.cob oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/precomp/public/SQLCA.COB -&gt; sqlca.cob oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/admin/classes.bin -&gt; ../../javavm/jdk/jdk8/admin/classes.bin oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/admin/libjtcjt.so -&gt; ../../javavm/jdk/jdk8/admin/libjtcjt.so oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/admin/lfclasses.bin -&gt; ../../javavm/jdk/jdk8/admin/lfclasses.bin oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/cacerts -&gt; ../../../javavm/jdk/jdk8/lib/security/cacerts 
oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/lib/sunjce_provider.jar -&gt; ../../javavm/jdk/jdk8/lib/sunjce_provider.jar oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/java.security -&gt; ../../../javavm/jdk/jdk8/lib/security/java.security oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/local_policy.jar -&gt; ../../../javavm/jdk/jdk8/lib/security/local_policy.jar oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/javavm/lib/security/US_export_policy.jar -&gt; ../../../javavm/jdk/jdk8/lib/security/US_export_policy.jar oracle-18c-vagrant: Launching Oracle Database Setup Wizard... oracle-18c-vagrant: [WARNING] [INS-32055] The Central Inventory is located in the Oracle base. oracle-18c-vagrant: ACTION: Oracle recommends placing this Central Inventory in a location outside the Oracle base directory. oracle-18c-vagrant: [WARNING] [INS-13013] Target environment does not meet some mandatory requirements. oracle-18c-vagrant: CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /tmp/InstallActions2018-07-27_02-33-04PM/installActions2018-07-27_02-33-04PM.log oracle-18c-vagrant: ACTION: Identify the list of failed prerequisite checks from the log: /tmp/InstallActions2018-07-27_02-33-04PM/installActions2018-07-27_02-33-04PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually. oracle-18c-vagrant: The response file for this session can be found at: oracle-18c-vagrant: /opt/oracle/product/18c/dbhome_1/install/response/db_2018-07-27_02-33-04PM.rsp oracle-18c-vagrant: oracle-18c-vagrant: You can find the log of this install session at: oracle-18c-vagrant: /tmp/InstallActions2018-07-27_02-33-04PM/installActions2018-07-27_02-33-04PM.log oracle-18c-vagrant: oracle-18c-vagrant: As a root user, execute the following script(s): oracle-18c-vagrant: 1. /opt/oracle/oraInventory/orainstRoot.sh oracle-18c-vagrant: 2. /opt/oracle/product/18c/dbhome_1/root.sh oracle-18c-vagrant: oracle-18c-vagrant: Execute /opt/oracle/oraInventory/orainstRoot.sh on the following nodes: oracle-18c-vagrant: [oracle-18c-vagrant] oracle-18c-vagrant: Execute /opt/oracle/product/18c/dbhome_1/root.sh on the following nodes: oracle-18c-vagrant: [oracle-18c-vagrant] oracle-18c-vagrant: oracle-18c-vagrant: oracle-18c-vagrant: Successfully Setup Software with warning(s). oracle-18c-vagrant: Moved the install session logs to: oracle-18c-vagrant: oracle-18c-vagrant: /opt/oracle/oraInventory/logs/InstallActions2018-07-27_02-33-04PM oracle-18c-vagrant: Changing permissions of /opt/oracle/oraInventory. oracle-18c-vagrant: Adding read,write permissions for group. oracle-18c-vagrant: Removing read,write,execute permissions for world. oracle-18c-vagrant: Changing groupname of /opt/oracle/oraInventory to dba. oracle-18c-vagrant: The execution of the script is complete. oracle-18c-vagrant: Check /opt/oracle/product/18c/dbhome_1/install/root_oracle-18c-vagrant_2018-07-27_14-35-09-200482162.log for the output of root script oracle-18c-vagrant: INSTALLER: Oracle software installed oracle-18c-vagrant: oracle-18c-vagrant: LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 27-JUL-2018 14:35:09 oracle-18c-vagrant: oracle-18c-vagrant: Copyright (c) 1991, 2018, Oracle. All rights reserved. oracle-18c-vagrant: Starting /opt/oracle/product/18c/dbhome_1/bin/tnslsnr: please wait... 
oracle-18c-vagrant: TNSLSNR for Linux: Version 18.0.0.0.0 - Production oracle-18c-vagrant: System parameter file is /opt/oracle/product/18c/dbhome_1/network/admin/listener.ora oracle-18c-vagrant: Log messages written to /opt/oracle/diag/tnslsnr/oracle-18c-vagrant/listener/alert/log.xml oracle-18c-vagrant: Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1))) oracle-18c-vagrant: Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521))) oracle-18c-vagrant: oracle-18c-vagrant: Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1))) oracle-18c-vagrant: STATUS of the LISTENER oracle-18c-vagrant: ------------------------ oracle-18c-vagrant: Alias LISTENER oracle-18c-vagrant: Version TNSLSNR for Linux: Version 18.0.0.0.0 - Production oracle-18c-vagrant: Start Date 27-JUL-2018 14:35:09 oracle-18c-vagrant: Uptime 0 days 0 hr. 0 min. 0 sec oracle-18c-vagrant: Trace Level off oracle-18c-vagrant: Security ON: Local OS Authentication oracle-18c-vagrant: SNMP OFF oracle-18c-vagrant: Listener Parameter File /opt/oracle/product/18c/dbhome_1/network/admin/listener.ora oracle-18c-vagrant: Listener Log File /opt/oracle/diag/tnslsnr/oracle-18c-vagrant/listener/alert/log.xml oracle-18c-vagrant: Listening Endpoints Summary... oracle-18c-vagrant: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1))) oracle-18c-vagrant: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521))) oracle-18c-vagrant: The listener supports no services oracle-18c-vagrant: The command completed successfully oracle-18c-vagrant: INSTALLER: Listener created oracle-18c-vagrant: [WARNING] [DBT-11209] Current available memory is less than the required available memory (1,536MB) for creating the database. oracle-18c-vagrant: CAUSE: Following nodes do not have required available memory : oracle-18c-vagrant: Node:oracle-18c-vagrant Available memory:1.3867GB (1454060.0KB) oracle-18c-vagrant: Prepare for db operation oracle-18c-vagrant: 8% complete oracle-18c-vagrant: Copying database files oracle-18c-vagrant: 31% complete oracle-18c-vagrant: Creating and starting Oracle instance oracle-18c-vagrant: 32% complete oracle-18c-vagrant: 36% complete oracle-18c-vagrant: 40% complete oracle-18c-vagrant: 43% complete oracle-18c-vagrant: 46% complete oracle-18c-vagrant: Completing Database Creation oracle-18c-vagrant: 51% complete oracle-18c-vagrant: 54% complete oracle-18c-vagrant: Creating Pluggable Databases oracle-18c-vagrant: 58% complete oracle-18c-vagrant: 77% complete oracle-18c-vagrant: Executing Post Configuration Actions oracle-18c-vagrant: 100% complete oracle-18c-vagrant: Database creation complete. For details check the logfiles at: oracle-18c-vagrant: /opt/oracle/cfgtoollogs/dbca/ORCLCDB. oracle-18c-vagrant: Database Information: oracle-18c-vagrant: Global Database Name:ORCLCDB oracle-18c-vagrant: System Identifier(SID):ORCLCDB oracle-18c-vagrant: Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details. oracle-18c-vagrant: oracle-18c-vagrant: SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jul 27 14:51:44 2018 oracle-18c-vagrant: Version 18.3.0.0.0 oracle-18c-vagrant: oracle-18c-vagrant: Copyright (c) 1982, 2018, Oracle. All rights reserved. oracle-18c-vagrant: oracle-18c-vagrant: Connected to: oracle-18c-vagrant: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production oracle-18c-vagrant: Version 18.3.0.0.0 oracle-18c-vagrant: SQL&gt; oracle-18c-vagrant: Pluggable database altered. 
oracle-18c-vagrant: oracle-18c-vagrant: SQL&gt; oracle-18c-vagrant: Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production oracle-18c-vagrant: Version 18.3.0.0.0 oracle-18c-vagrant: INSTALLER: Database created oracle-18c-vagrant: INSTALLER: Oratab configured oracle-18c-vagrant: Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-rdbms.service to /etc/systemd/system/oracle-rdbms.service. oracle-18c-vagrant: INSTALLER: Created and enabled oracle-rdbms systemd's service oracle-18c-vagrant: INSTALLER: setPassword.sh file setup oracle-18c-vagrant: ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: Uo1KltTN24E=1 oracle-18c-vagrant: INSTALLER: Installation complete, database ready to use! D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;</pre><p>Now, we can check whether Vagrant is started or not.</p><pre class="crayon-plain-tag">D:\shared\vagrant-boxes-master\vagrant-boxes-master\OracleDatabase\18.3.0&gt;vagrant global-status id name provider state directory ------------------------------------------------------------------------------------------------------------------------- c09dbcd oracle-18c-vagrant virtualbox running D:/shared/vagrant-boxes-master/vagrant-boxes-master/OracleDatabase/18.3.0</pre><p>In the next Article, we will see how to <a href="http://oracle-help.com/oracle-18c/accessing-the-oracle-18c-database-in-vagrant-vm/"><strong>access the database in Vagrant VM and stop &amp; destroy Vagrant VM.</strong></a></p> <p>Stay tuned for <strong>More articles on Vagrant related to Oracle<br /> </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: </strong><a href="https://t.me/helporacle">https://t.me/helporacle</a></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-18c/oracle-database-18c-installation-with-vagrant/">Oracle Database 18c installation with Vagrant</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5225 Fri Jul 27 2018 14:53:29 GMT-0400 (EDT) How to efficiently back up and restore crontab https://blog.pythian.com/how-to-efficiently-backup-and-restore-crontab/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>When a database environment is being patched, entries from crontab are commented and then removed when patching is completed.</p> <p>The process works well when there are only a few entries from crontab, but it does not scale when crontab has dozens of entries and when some entries are commented and some are not.</p> <p>I believe a more efficient method is to backup and restore the existing crontab.</p> <p>Here&#8217;s a 
demo:</p> <p>The current crontab entries:</p> <pre>[oracle@racnode-dc1-1 ~]$ crontab -l
*/05 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/15 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
#*/25 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/35 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/45 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/55 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
</pre> <p>Back up the crontab and display the contents, as an extra precaution in case crontab.save.dinh is ever removed.</p> <pre>[oracle@racnode-dc1-1 ~]$ crontab -l &gt; crontab.save.dinh; cat crontab.save.dinh
*/05 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/15 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
#*/25 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/35 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/45 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/55 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
</pre> <p>Remove the crontab.</p> <pre>[oracle@racnode-dc1-1 ~]$ crontab -r; crontab -l
no crontab for oracle
</pre> <p>Restore the crontab.</p> <pre>[oracle@racnode-dc1-1 ~]$ crontab crontab.save.dinh; crontab -l
*/05 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/15 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
#*/25 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/35 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/45 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/55 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
</pre> <p>If you are lazy like me, then the backup and removal can be done in one command.</p> <pre>[oracle@racnode-dc1-1 ~]$ crontab -l &gt; crontab.save.dinh; cat crontab.save.dinh; crontab -r; crontab -l
*/05 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/15 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
#*/25 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/35 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/45 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
*/55 * * * * /bin/date &gt; /tmp/date.out 2&gt;&amp;1
no crontab for oracle
[oracle@racnode-dc1-1 ~]$
</pre> </div></div> Michael Dinh https://blog.pythian.com/?p=104813 Fri Jul 27 2018 11:10:54 GMT-0400 (EDT) Automate the installation of Oracle JDK 8 and 10 on RHEL and Debian derivatives https://technology.amis.nl/2018/07/27/automate-the-installation-of-oracle-jdk-8-and-10-on-rhel-and-debian-derivatives/ <p>Automating the Oracle JDK installation on RHEL derivatives (such as CentOS, Oracle Linux) and Debian derivatives (such as Mint, Ubuntu) differs, due to the different package managers and repositories. In this blog I&#8217;ll provide quick instructions on how to automate the installation of Oracle JDK 8 and 10 on different Linux distributions. I chose JDK 8 and 10 since they are currently the only Oracle JDK versions which receive public updates (see <a href="http://www.oracle.com/technetwork/java/javase/eol-135779.html">here</a>).</p> <p><span id="more-49448"></span></p>
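<p>Whichever of the routes below you use, it can be handy to verify afterwards which JDK is actually active. A quick generic check could for example be (this is just a sketch; the exact output differs per distribution):</p> <pre class="brush: plain; title: ; notranslate"># show the active Java version and the registered java alternative
java -version
update-alternatives --display java</pre>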
<h1>Debian derivatives</h1> <p>A benefit of using the repositories below is that you will often get the latest version and can easily update an existing installation to the latest version if you want.</p> <h2>Oracle JDK 8</h2> <pre class="brush: plain; title: ; notranslate">sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
sudo echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get -y install oracle-java8-installer
sudo apt-get -y install oracle-java8-set-default</pre> <h2>Oracle JDK 10</h2> <pre class="brush: plain; title: ; notranslate">sudo add-apt-repository ppa:linuxuprising/java
sudo apt-get update
sudo echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections
sudo echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections
sudo apt-get -y install oracle-java10-installer
sudo apt-get -y install oracle-java10-set-default</pre> <h1>RHEL derivatives</h1> <p>Since RHEL derivatives are often provided by commercial software vendors such as RedHat and Oracle, their repositories usually work on a subscription basis, since people pay for using them. Configuration of the specific repositories and subscriptions of course differs per vendor and product. For Oracle Linux you can look <a href="https://blogs.oracle.com/linux/how-to-install-java-se-on-oracle-linux-from-uln">here</a>. For RedHat you can look <a href="https://access.redhat.com/solutions/732883">here</a>.</p> <p>The procedure described below makes you independent of vendor-specific subscriptions. However, you will not get automatic updates, and if you want the latest version you have to manually update the download URL from <a href="http://www.oracle.com/technetwork/java/javase/downloads">here</a> and update the Java installation path in the alternatives commands. You also might encounter issues with the validity of the cookie used for the download, which might require you to update the URL.</p>
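<p>One way to keep those manual updates manageable is to keep the release-specific values in shell variables and reuse them in every command. A minimal sketch, using the JDK 8 values from the example below (the variable names are my own, and the URL and installation path are values that change with every JDK release):</p> <pre class="brush: plain; title: ; notranslate"># release-specific values: update these two lines when a new JDK build is published
JDK_RPM_URL=&quot;http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm&quot;
JDK_HOME=&quot;/usr/java/jdk1.8.0_181-amd64&quot;
# download, install and register the java binary via alternatives
sudo wget -O ~/jdk.rpm -N --no-check-certificate --no-cookies --header &quot;Cookie: oraclelicense=accept-securebackup-cookie&quot; &quot;$JDK_RPM_URL&quot;
sudo yum -y localinstall ~/jdk.rpm
sudo update-alternatives --install /usr/bin/java java &quot;$JDK_HOME/jre/bin/java&quot; 1</pre>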
<h2>Oracle JDK 8</h2> <pre class="brush: plain; title: ; notranslate">sudo wget -O ~/jdk8.rpm -N --no-check-certificate --no-cookies --header &quot;Cookie: oraclelicense=accept-securebackup-cookie&quot; http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.rpm
sudo yum -y localinstall ~/jdk8.rpm
sudo update-alternatives --install /usr/bin/java java /usr/java/jdk1.8.0_181-amd64/jre/bin/java 1
sudo update-alternatives --install /usr/bin/jar jar /usr/java/jdk1.8.0_181-amd64/bin/jar 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/jdk1.8.0_181-amd64/bin/javac 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/java/jdk1.8.0_181-amd64/jre/bin/javaws 1</pre> <h2>Oracle JDK 10</h2> <pre class="brush: plain; title: ; notranslate">sudo wget -O ~/jdk10.rpm -N --no-check-certificate --no-cookies --header &quot;Cookie: oraclelicense=accept-securebackup-cookie&quot; http://download.oracle.com/otn-pub/java/jdk/10.0.2+13/19aef61b38124481863b1413dce1855f/jdk-10.0.2_linux-x64_bin.rpm
sudo yum -y localinstall ~/jdk10.rpm
sudo update-alternatives --install /usr/bin/java java /usr/java/jdk-10.0.2/bin/java 1
sudo update-alternatives --install /usr/bin/jar jar /usr/java/jdk-10.0.2/bin/jar 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/jdk-10.0.2/bin/javac 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/java/jdk-10.0.2/bin/javaws 1</pre> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/07/27/automate-the-installation-of-oracle-jdk-8-and-10-on-rhel-and-debian-derivatives/">Automate the installation of Oracle JDK 8 and 10 on RHEL and Debian derivatives</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Maarten Smeets https://technology.amis.nl/?p=49448 Fri Jul 27 2018 09:21:08 GMT-0400 (EDT) Installing Oracle 18c using command line https://blog.pythian.com/installing-oracle-18c-using-command-line/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Just three days ago Oracle released Oracle 18c for general installation on-premises. We&#8217;ve had a taste of the new Oracle release on Oracle Cloud for quite some time, but now we can download and install it for in-house testing. I installed it for testing and decided to share my experience. Even though I didn&#8217;t experience any surprises and the process itself went smoothly, I still hope that a few people will benefit from some useful information about the process.</p> <p>I was hoping to check how an &#8220;rpm&#8221; based installation worked, but alas that type of distribution was not yet available. I tested the general unzip and install way using the file from the <a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html" rel="noopener" target="_blank">Oracle site</a>. The process was quite simple and straightforward. </p> <p>I was starting from a freshly installed Oracle Linux 7 and the first step was to prepare the system.
It was simple enough and I used a prepared file ora_preinst.lst with a list of packages to install.</p> <pre lang="bash" escaped="true"> [root@vm130-179 ~]# vi ora_preinst.lst [root@vm130-179 ~]# cat ora_preinst.lst oracle-database-preinstall-18c lvm2 unzip gcc [root@vm130-179 ~]# [root@vm130-179 ~]# yum -y install $(cat ora_preinst.lst) Loaded plugins: ulninfo Resolving Dependencies --> Running transaction check ---> Package gcc.x86_64 0:4.8.5-28.0.1.el7_5.1 will be installed --> Processing Dependency: libgomp = 4.8.5-28.0.1.el7_5.1 for package: gcc-4.8.5-28.0.1.el7_5.1.x86_64 --> Processing Dependency: cpp = 4.8.5-28.0.1.el7_5.1 for package: gcc-4.8.5-28.0.1.el7_5.1.x86_64 …. Complete! [root@vm130-179 ~]# </pre> </p> <p>I had a disk &#8220;/dev/xvdb&#8221; attached to the system to be used for software and a test database. To speed the process, I used a simple script to create an LVM volume and mount it to the system.</p> <pre lang="bash" escaped="true"> [root@vm130-179 ~]# cat add_disk.sh #!/bin/bash pvcreate $1 vgcreate vgsoft $1 lvcreate -l 100%FREE -n orasoft01 vgsoft mkfs.ext4 /dev/mapper/vgsoft-orasoft01 cp /etc/fstab /etc/fstab.orig sed -i '/^\s*$/d' /etc/fstab echo -e '\n'`blkid /dev/mapper/vgsoft-orasoft01 | cut -d " " -f 2 | sed 's/"//g'`"\t\t/u01\text4\tdefaults\t1 2" >>/etc/fstab mkdir /u01 mount -a [root@vm130-179 ~]# sh add_disk.sh /dev/xvdb   Physical volume "/dev/xvdb" successfully created.   Volume group "vgsoft" successfully created   Logical volume "orasoft01" created. … mkdir: cannot create directory ‘/u01’: File exists [root@vm130-179 ~]# df -h /u01 Filesystem                    Size  Used Avail Use% Mounted on /dev/mapper/vgsoft-orasoft01   50G   53M   47G   1% /u01 [root@vm130-179 ~]# </pre> </p> <p>The system was ready and the rest of the steps were about installing the software and creating a test database.<br /> We needed to create the necessary directories and a response file for the installation:</p> <pre lang="bash" escaped="true"> [oracle@vm130-179 ~]$ mkdir -p /u01/app/oracle/product/18.0.0/dbhome_1 [oracle@vm130-179 ~]$ unzip -q  /u01/app/oracle/distr/LINUX.X64_180000_db_home.zip -d /u01/app/oracle/product/18.0.0/dbhome_1/ [oracle@vm130-179 ~]$ cd /u01/app/oracle/product/18.0.0/dbhome_1/ [oracle@vm130-179 dbhome_1]$ sed -e '/\s*#.*$/d' -e '/^\s*$/d' install/response/db_install.rsp > install/response/soft_only.rsp [oracle@vm130-179 dbhome_1]$ vi install/response/soft_only.rsp [oracle@vm130-179 dbhome_1]$ cat install/response/soft_only.rsp oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0 oracle.install.option=INSTALL_DB_SWONLY UNIX_GROUP_NAME=oinstall INVENTORY_LOCATION=/u01/app/oraInventory ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1 ORACLE_BASE=/u01/app/oracle oracle.install.db.InstallEdition=EE oracle.install.db.OSDBA_GROUP=dba oracle.install.db.OSOPER_GROUP=dba oracle.install.db.OSBACKUPDBA_GROUP=dba oracle.install.db.OSDGDBA_GROUP=dba oracle.install.db.OSKMDBA_GROUP=dba oracle.install.db.OSRACDBA_GROUP=dba [oracle@vm130-179 dbhome_1]$ </pre> </p> <p>Having everything ready I just ran the installer in silent mode using the prepared response file:</p> <pre lang="bash" escaped="true"> [oracle@vm130-179 dbhome_1]$ ./runInstaller -silent -responseFile install/response/soft_only.rsp Launching Oracle Database Setup Wizard... [WARNING] [INS-13014] Target environment does not meet some optional requirements.    CAUSE: Some of the optional prerequisites are not met. See logs for details. 
installActions2018-07-25_01-26-33PM.log    ACTION: Identify the list of failed prerequisite checks from the log: installActions2018-07-25_01-26-33PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually. The response file for this session can be found at:  /u01/app/oracle/product/18.0.0/dbhome_1/install/response/db_2018-07-25_01-26-33PM.rsp You can find the log of this install session at:  /tmp/InstallActions2018-07-25_01-26-33PM/installActions2018-07-25_01-26-33PM.log As a root user, execute the following script(s):         1. /u01/app/oraInventory/orainstRoot.sh         2. /u01/app/oracle/product/18.0.0/dbhome_1/root.sh Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: [vm130-179] Execute /u01/app/oracle/product/18.0.0/dbhome_1/root.sh on the following nodes: [vm130-179] Successfully Setup Software with warning(s). Moved the install session logs to:  /u01/app/oraInventory/logs/InstallActions2018-07-25_01-26-33PM [oracle@vm130-179 dbhome_1]$ </pre> <p>The new Oracle release comes as an already zipped Oracle home prepared for a clone. Because of that, the installation part was quick since you only needed to relink binaries and register the home in the global inventory.<br /> The last step was to execute the root.sh script as root user:</p> <pre lang="bash" escaped="true"> [root@vm130-179 ~]# /u01/app/oracle/product/18.0.0/dbhome_1/root.sh Check /u01/app/oracle/product/18.0.0/dbhome_1/install/root_vm130-179.dlab.pythian.com_2018-07-25_13-28-58-244286954.log for the output of root script [root@vm130-179 ~]# </pre> </p> <p>It was quick and easy. One part of the process I thought was bit outdated was the usage of the response file for the installer. I thought it could be slightly improved by using an interactive command line based installer but maybe I was asking too much. Anyway, the response file worked.<br /> Creating a database was even easier. You just needed to allocate storage and run the dbca utility.</p> <pre lang="bash" escaped="true"> [oracle@vm130-179 dbhome_1]$ mkdir /u01/app/oracle/oradata [oracle@vm130-179 dbhome_1]$ export PATH=/u01/app/oracle/product/18.0.0/dbhome_1/bin:$PATH [oracle@vm130-179 dbhome_1]$ dbca -createDatabase -silent -createAsContainerDatabase true -pdbName pdb1 -templateName General_Purpose.dbc -gdbName orcl -sysPassword welcome1 -systemPassword welcome1 -pdbAdminPassword welcome1 -dbsnmpPassword welcome1 -datafileDestination /u01/app/oracle/oradata -storageType FS -sampleSchema true [WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.    CAUSE: ….. 100% complete Database creation complete. For details check the logfiles at:  /u01/app/oracle/cfgtoollogs/dbca/orcl. Database Information: Global Database Name:orcl System Identifier(SID):orcl Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details. [oracle@vm130-179 dbhome_1]$ </pre> <p>I got a couple of warnings because passwords I used for sys, system and pdbadmin users were not up to security standards. Keep in mind that if you don&#8217;t want to show the passwords in the command line, you can skip the parameters and put the passwords interactively when the dbca asks you to do that.<br /> Everything is ready for the tests: </p> <pre lang="bash" escaped="true"> [oracle@vm130-179 dbhome_1]$ . oraenv ORACLE_SID = [oracle] ? 
orcl The Oracle base has been set to /u01/app/oracle [oracle@vm130-179 dbhome_1]$ sqlplus / as sysdba SQL*Plus: Release 18.0.0.0.0 - Production on Wed Jul 25 13:57:05 2018 Version 18.3.0.0.0 Copyright (c) 1982, 2018, Oracle.  All rights reserved. Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 SQL> </pre> <p>The entire process didn&#8217;t take too much time and could be easily automated to be executed either by ansible scripts or used in a docker container. Overall the install is pretty similar to the older releases. The one main difference is that the binaries are now extracted from the zip file directly into the Oracle home, rather than into a staging location only to be copied by the OUI. One more thing I noticed was that after unzipping the Oracle home we got 1.4 Gb &#8220;.patch_storage&#8221; directory inside. I am not sure why we need it, but hope somebody from Oracle will tell us soon. Happy testing.</p> </div></div> Gleb Otochkin https://blog.pythian.com/?p=104839 Fri Jul 27 2018 09:07:36 GMT-0400 (EDT) VirtualBox networking explained https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/ <p>VirtualBox networking is extremely flexible. With this flexibility comes the challenge of making the correct choices. In this blog, the different options are explained and some example use cases are elaborated. Access between guests, host and other members of the network is explained and the required configuration is shown. This information is also available in the <a href="https://www.slideshare.net/MaartenSmeets1/virtualbox-networking-explained">following presentation</a>.</p> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?ssl=1"><img data-attachment-id="49431" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-networking-overview/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?fit=1874%2C379&amp;ssl=1" data-orig-size="1874,379" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox networking overview" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?fit=300%2C61&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?fit=702%2C142&amp;ssl=1" class="aligncenter wp-image-49431 size-large" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?resize=702%2C142&#038;ssl=1" alt="" width="702" height="142" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?resize=1024%2C207&amp;ssl=1 1024w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?resize=300%2C61&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?resize=768%2C155&amp;ssl=1 768w, 
https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-networking-overview.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></a></p> <p><span id="more-49430"></span></p> <h1>Networking options</h1> <h2>Internal network</h2> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?ssl=1"><img data-attachment-id="49437" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-internal-network/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?fit=1881%2C741&amp;ssl=1" data-orig-size="1881,741" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox internal network" data-image-description="" data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?fit=300%2C118&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?fit=702%2C276&amp;ssl=1" class="aligncenter wp-image-49437 size-large" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?resize=702%2C276&#038;ssl=1" alt="" width="702" height="276" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?resize=1024%2C403&amp;ssl=1 1024w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?resize=300%2C118&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?resize=768%2C303&amp;ssl=1 768w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></a></p> <h3>Overview</h3> <p>VirtualBox makes available a network interface inside a guest. If multiple guests share the same interface name, they are connected like a switch and can access each other.</p> <h3>Benefits</h3> <ul class="postList"> <li class="graf graf--li">Easy to use. 
Little configuration required</li> <li class="graf graf--li">No VirtualBox virtual host network interface (device + driver) required</li> <li class="graf graf--li">Guests can access each other</li> <li class="graf graf--li">Secure (access from outside the host is not possible)</li> </ul> <h3>Drawbacks</h3> <ul class="postList"> <li class="graf graf--li">The host can’t access the guests</li> <li class="graf graf--li">Guests can’t access the host</li> <li class="graf graf--li">Guests can’t access the internet</li> <li class="graf graf--li">The VirtualBox internal DHCP server has no GUI support, only a CLI</li> </ul> <h3>Configuration</h3> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?ssl=1"><img data-attachment-id="49436" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-internal-network-configuration/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?fit=914%2C765&amp;ssl=1" data-orig-size="914,765" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox internal network configuration" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?fit=300%2C251&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?fit=702%2C588&amp;ssl=1" class="aligncenter wp-image-49436 size-medium" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?resize=300%2C251&#038;ssl=1" alt="" width="300" height="251" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?resize=300%2C251&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?resize=768%2C643&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-internal-network-configuration.png?w=914&amp;ssl=1 914w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <h2>NAT</h2> <p><a href="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?ssl=1"><img data-attachment-id="49441" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-nat/" data-orig-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?fit=1876%2C721&amp;ssl=1" data-orig-size="1876,721" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox nat" data-image-description="" 
data-medium-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?fit=300%2C115&amp;ssl=1" data-large-file="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?fit=702%2C270&amp;ssl=1" class="aligncenter wp-image-49441 size-large" src="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?resize=702%2C270&#038;ssl=1" alt="" width="702" height="270" srcset="https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?resize=1024%2C394&amp;ssl=1 1024w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?resize=300%2C115&amp;ssl=1 300w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?resize=768%2C295&amp;ssl=1 768w, https://i0.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></a></p> <h3>Overview</h3> <p>VirtualBox makes available a single virtual isolated NAT router on a network interface inside a guest. Every guest gets his own virtual router and can&#8217;t access other guests. DHCP (Dynamic Host Configuration Protocol) requests on the interface are answered with an IP for the guest and address of the NAT router as gateway. The DHCP server can be configured using a CLI (no GUI support). The NAT router uses the hosts network interface. No specific VirtualBox network interface needs to be created. External parties only see a single host interface. The NAT router opens a port on the hosts interface. The internal address is translated to the hosts IP. The request to the destination IP is done. The response is forwarded back towards the guest (a table of external port to internal IP is kept by the router). Port mappings can be made to allow requests to the host on a specific port to be forwarded to the guest.</p> <h3>Benefits</h3> <ul class="postList"> <li class="graf graf--li">Easy to use. Little configuration required</li> <li class="graf graf--li">Isolated. 
Every guest their own virtual router</li> <li class="graf graf--li">No VirtualBox virtual host network interface (device + driver) required</li> <li class="graf graf--li">Internet access</li> <li class="graf graf--li">Fixed IP possible</li> </ul> <h3>Drawbacks</h3> <ul class="postList"> <li class="graf graf--li">Guests can’t access each other or the host</li> <li class="graf graf--li">The virtual NAT router DHCP server can be configured using a CLI only</li> <li class="graf graf--li">To access the guest from the host requires port forwarding configuration and might require an entry in the hosts hosts file for specific web interfaces</li> </ul> <h3>Configuration</h3> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?ssl=1"><img data-attachment-id="49438" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-nat-configuration/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?fit=943%2C805&amp;ssl=1" data-orig-size="943,805" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox nat configuration" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?fit=300%2C256&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?fit=702%2C599&amp;ssl=1" class="aligncenter size-medium wp-image-49438" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?resize=300%2C256&#038;ssl=1" alt="" width="300" height="256" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?resize=300%2C256&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?resize=768%2C656&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-configuration.png?w=943&amp;ssl=1 943w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <h2>NAT network</h2> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?ssl=1"><img data-attachment-id="49440" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-nat-network/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?fit=1874%2C728&amp;ssl=1" data-orig-size="1874,728" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox nat network" data-image-description="" 
data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?fit=300%2C117&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?fit=702%2C273&amp;ssl=1" class="aligncenter wp-image-49440 size-large" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?resize=702%2C273&#038;ssl=1" alt="" width="702" height="273" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?resize=1024%2C398&amp;ssl=1 1024w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?resize=300%2C117&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?resize=768%2C298&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></a></p> <h3>Overview</h3> <p>VirtualBox makes available a virtual NAT router on a network interface for all guests using the NAT network. Guests can access each other. The NAT network needs to be created. DHCP (Dynamic Host Configuration Protocol) requests on the interface are answered with an IP for the guest and address of the NAT router as gateway. The DHCP server can be configured. The NAT router uses the hosts network interface. No specific VirtualBox network interface needs to be created. External parties only see a single host interface. The NAT router opens a port on the hosts interface. The internal address is translated to the hosts IP to a specific port per host. The request to the destination IP is done. The response is forwarded back towards the guest (a table of external port to internal IP is kept by the router). 
Port mappings can be made to allow requests to the host on a specific port to be forwarded to a guest.</p> <h3>Benefits</h3> <ul class="postList"> <li class="graf graf--li">Guests can access each other</li> <li class="graf graf--li">No VirtualBox virtual host network interface (device + driver) required</li> <li class="graf graf--li">DHCP server can be configured using the GUI</li> <li class="graf graf--li">Internet access</li> <li class="graf graf--li">Fixed IP possible</li> </ul> <h3>Drawbacks</h3> <ul class="postList"> <li class="graf graf--li">To access the guest from the host requires port forwarding configuration and might require an entry in the hosts hosts file for specific webinterfaces</li> <li class="graf graf--li">Requires additional VirtualBox configuration to define the network / DHCP server</li> </ul> <h3>Configuration</h3> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?ssl=1"><img data-attachment-id="49439" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-nat-network-configuration/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?fit=1833%2C861&amp;ssl=1" data-orig-size="1833,861" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox nat network configuration" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?fit=300%2C141&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?fit=702%2C330&amp;ssl=1" class="aligncenter size-medium wp-image-49439" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?resize=300%2C141&#038;ssl=1" alt="" width="300" height="141" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?resize=300%2C141&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?resize=768%2C361&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?resize=1024%2C481&amp;ssl=1 1024w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-nat-network-configuration.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <h2>Host only</h2> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?ssl=1"><img data-attachment-id="49435" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-host-only/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?fit=1880%2C729&amp;ssl=1" data-orig-size="1880,729" data-comments-opened="1" 
<h2>Host only</h2> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?ssl=1"><img data-attachment-id="49435" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-host-only/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?fit=1880%2C729&amp;ssl=1" data-orig-size="1880,729" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox host only" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?fit=300%2C116&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?fit=702%2C272&amp;ssl=1" class="aligncenter wp-image-49435 size-large" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?resize=702%2C272&#038;ssl=1" alt="" width="702" height="272" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?resize=1024%2C397&amp;ssl=1 1024w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?resize=300%2C116&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?resize=768%2C298&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></a></p> <h3>Overview</h3> <p class="graf graf--p">VirtualBox creates a host interface (a virtual device visible on the host). This interface can be shared amongst guests, and guests can access each other. DHCP (Dynamic Host Configuration Protocol) requests on the interface are answered with an IP for the guest and the address of the Host only adapter. The DHCP server can be configured using the VirtualBox GUI. The virtual host interface is not visible outside of the host, and the guest cannot access the internet via this interface. The host can access the guests by IP. 
Port mappings are not needed.</p> <h3>Benefits</h3> <ul class="postList"> <li class="graf graf--li">Guests can access each other</li> <li class="graf graf--li">You can create separate guest networks</li> <li class="graf graf--li">DHCP server can be configured using the GUI</li> <li class="graf graf--li">Fixed IP possible</li> </ul> <h3>Drawbacks</h3> <ul class="postList"> <li class="graf graf--li">Accessing the guest from the host requires port forwarding configuration and might require an entry in the host’s hosts file for specific web interfaces</li> <li class="graf graf--li">Requires additional VirtualBox configuration to define the network / DHCP server</li> <li class="graf graf--li">VirtualBox virtual host network interface (device + driver) required</li> <li class="graf graf--li">No internet access</li> </ul> <h3>Configuration</h3> <p><a href="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?ssl=1"><img data-attachment-id="49434" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-host-only-configuration/" data-orig-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?fit=1810%2C646&amp;ssl=1" data-orig-size="1810,646" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox host only configuration" data-image-description="" data-medium-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?fit=300%2C107&amp;ssl=1" data-large-file="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?fit=702%2C250&amp;ssl=1" class="aligncenter size-medium wp-image-49434" src="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?resize=300%2C107&#038;ssl=1" alt="" width="300" height="107" srcset="https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?resize=300%2C107&amp;ssl=1 300w, https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?resize=768%2C274&amp;ssl=1 768w, https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?resize=1024%2C365&amp;ssl=1 1024w, https://i2.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-host-only-configuration.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p>
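<p>The same host only setup can also be scripted with VBoxManage. A minimal sketch (the vboxnet0 name is Linux-style naming, the VM name and IP ranges are placeholders; 192.168.56.x is the default host only range):</p> <pre>
# create the host-only interface and give it an IP on the host
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0

# optional: a DHCP server for guests on this interface
VBoxManage dhcpserver add --ifname vboxnet0 --ip 192.168.56.2 --netmask 255.255.255.0 --lowerip 192.168.56.100 --upperip 192.168.56.200 --enable

# attach a VM to the host-only network
VBoxManage modifyvm "myvm" --nic1 hostonly --hostonlyadapter1 vboxnet0
</pre>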
<h2>Bridged</h2> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?ssl=1"><img data-attachment-id="49433" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-bridged/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?fit=1874%2C733&amp;ssl=1" data-orig-size="1874,733" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox bridged" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?fit=300%2C117&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?fit=702%2C275&amp;ssl=1" class="aligncenter wp-image-49433 size-large" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?resize=702%2C275&#038;ssl=1" alt="" width="702" height="275" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?resize=1024%2C401&amp;ssl=1 1024w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?resize=300%2C117&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?resize=768%2C300&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged.png?w=1404&amp;ssl=1 1404w" sizes="(max-width: 702px) 100vw, 702px" data-recalc-dims="1" /></a></p> <h3>Overview</h3> <p>The guest uses a host interface. On the host interface a net filter driver is applied to allow VirtualBox to send data to the guest. This requires the adapter to operate in so-called promiscuous mode, which lets it handle traffic for multiple MAC addresses. Most wireless adapters do not support this; in that case VirtualBox replaces the MAC address of packets that are visible to the adapter. An external DHCP server is used, the same one that gives the host its IP and gateway, so no additional configuration is required. It might not work if the DHCP server only allows registered MAC addresses (as on some company networks). Access is easy: the guest is directly available from the network the host is connected to, from every host on it. Port mappings are not required. The host can access the guests by IP, and guests can access the host by IP.</p> <h3>Benefits</h3> <ul class="postList"> <li class="graf graf--li">Guests can access each other</li> <li class="graf graf--li">Host can access guests and guests can access the host. Anyone on the host network can access the guests</li> <li class="graf graf--li">No virtual DHCP server needed</li> <li class="graf graf--li">Easy to configure / use</li> <li class="graf graf--li">Same access to internet as the host has</li> </ul> <h3>Drawbacks</h3> <ul class="postList"> <li class="graf graf--li">Guests can’t be split into separate networks (not isolated)</li> <li class="graf graf--li">Sometimes doesn’t work; depends on the external DHCP server and on the ability to filter packets on a host network interface. Company networks might block your interface</li> <li class="graf graf--li">No easy option for a fixed IP since the host network is variable</li> <li class="graf graf--li">Not secure: the guest is exposed on the host’s network</li> </ul>
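<p>Bridged mode needs no VirtualBox-side network definition, so there is little to set up; for scripted environments the adapter can be attached from the command line as well. A minimal sketch (VM and adapter names are placeholders):</p> <pre>
# attach the first NIC in bridged mode to a physical host adapter
VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 eth0
# on Windows, reference the adapter by the full display name shown in the VirtualBox GUI
</pre>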
<h3>Configuration</h3> <p><a href="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?ssl=1"><img data-attachment-id="49432" data-permalink="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/virtualbox-bridged-configuration/" data-orig-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?fit=880%2C738&amp;ssl=1" data-orig-size="880,738" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="virtualbox bridged configuration" data-image-description="" data-medium-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?fit=300%2C252&amp;ssl=1" data-large-file="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?fit=702%2C589&amp;ssl=1" class="aligncenter size-medium wp-image-49432" src="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?resize=300%2C252&#038;ssl=1" alt="" width="300" height="252" srcset="https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?resize=300%2C252&amp;ssl=1 300w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?resize=768%2C644&amp;ssl=1 768w, https://i1.wp.com/technology.amis.nl/wp-content/uploads/2018/07/virtualbox-bridged-configuration.png?w=880&amp;ssl=1 880w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p> <h2>Use cases</h2> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Case 1: ELK stack</strong></p> <p class="graf graf--p">I’m trying out the new version of the ELK stack (Elasticsearch, Logstash, Kibana).</p> <p class="graf graf--p">Requirements:</p> <ul class="postList"> <li class="graf graf--li">I do not require internet access inside the guest</li> <li class="graf graf--li">I want to access my guest from my host</li> <li class="graf graf--li">I do not want my guest to be accessible outside of my host</li> <li class="graf graf--li">I do not want to manually configure port mappings</li> </ul> <p class="graf graf--p">Solution: Host only adapter</p> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Case 2: SOA Suite for security workshop</strong></p> <p class="graf graf--p">I’m using Oracle SOA Suite for a security workshop. 
SOA Suite consists of 3 separate VMs, DB, Admin Server, Managed Server</p> <p class="graf graf--p">Requirements:</p> <ul class="postList"> <li class="graf graf--li">The VMs require fixed (internal) IPs</li> <li class="graf graf--li">The VMs need to be able to access each other</li> <li class="graf graf--li">Course participants need to call my services from the same network</li> <li class="graf graf--li">I only want to expose specific ports</li> </ul> <p class="graf graf--p">Solution: NAT + Host only (possibly NAT network)</p> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Case 3: VM for distribution during course</strong></p> <p class="graf graf--p">I’ve created an Ubuntu / Spring Tool Suite VM for a course. The VM will be distributed to participants.</p> <p class="graf graf--p">Requirements:</p> <ul class="postList"> <li class="graf graf--li">The VM to distribute requires internet access. During the course several things will need to be downloaded</li> <li class="graf graf--li">I am unaware of the VirtualBox created interfaces present on the host machines and don’t want the participants to manually have to select an adapter</li> <li class="graf graf--li">I want the participants to do as little networking configuration as possible. VirtualBox networking is not the purpose of this course.</li> </ul> <p class="graf graf--p">Solution: NAT</p> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Case 4: Server hosting application</strong></p> <p class="graf graf--p">I’ve created a server inside a VM which hosts an application.</p> <p class="graf graf--p">Requirements:</p> <ul class="postList"> <li class="graf graf--li">The MAC of the VM is configured inside the routers DHCP server so it will always get the same IP. Use the external DHCP server to obtain an IP</li> <li class="graf graf--li">The application will be used by (and thus needs to be accessible for) different people on the network.</li> <li class="graf graf--li">The application uses many different ports for different features. These ports change regularly. Some features use random ports. 
Manual port mappings are not an option</li> <li class="graf graf--li">The application accesses different resources (such as a print server) on the hosts network</li> </ul> <p class="graf graf--p">Solution: Bridged</p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/07/27/virtualbox-networking-explained/">VirtualBox networking explained</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Maarten Smeets https://technology.amis.nl/?p=49430 Fri Jul 27 2018 05:02:26 GMT-0400 (EDT) 18c (18.3) Installation On Premises https://hemantoracledba.blogspot.com/2018/07/18c-183-installation-on-premises.html <div dir="ltr" style="text-align: left;" trbidi="on">Documentation by @oraclebase&nbsp; (Tim Hall) on installing 18c (18.3)&nbsp;On Premises on OEL :<br /><br /><a href="https://oracle-base.com/articles/18c/oracle-db-18c-installation-on-oracle-linux-6-and-7">Oracle Database 18c Installation On Oracle Linux 6 (OL6) and 7 (OL7)</a><br /><br />To understand Patch Numbering in 18c, see Oracle Support Document IDs 2337415.1 and 2369376.1<br /><br />.<br />.<br />.<br /><br /><br /><br /><br />&nbsp;</div> Hemant K Chitale tag:blogger.com,1999:blog-1931548025515710472.post-8700811473284550849 Thu Jul 26 2018 22:42:00 GMT-0400 (EDT) Power BI 101- Logging and Tracing, Part III https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-iii/ <p>Power BI, like many Microsoft products, is multi-threaded.  This can be seen from the logs and even the Task Manager.  I know, I know&#8230;you&#8217;ve probably heard this part all before&#8230;</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-iii/seriously/" rel="attachment wp-att-8072"><img class="wp-image-8072 aligncenter" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/seriously.gif?resize=348%2C270&#038;ssl=1" alt="" width="348" height="270" data-recalc-dims="1" /></a></p> <p>The importance of this information, is that the logs will display Process IDs, (PID) that are separate from the main Power BI Desktop executable, including the secondary processes..  
Moving from the Power BI logs that reside in the Performance folder, (see Part I here) we can view and connect the PIDs and TID, (Transaction IDs) to information from the Task Manager and the data displayed:</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-iii/powerbi_tm1/" rel="attachment wp-att-8069"><img class="alignnone size-large wp-image-8069" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/Powerbi_tm1.png?resize=650%2C292&#038;ssl=1" alt="" width="650" height="292" srcset="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/Powerbi_tm1.png?resize=1024%2C460&amp;ssl=1 1024w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/Powerbi_tm1.png?resize=300%2C135&amp;ssl=1 300w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/Powerbi_tm1.png?resize=768%2C345&amp;ssl=1 768w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/Powerbi_tm1.png?w=1300&amp;ssl=1 1300w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>Note that I&#8217;ve highlighted the thread count in the image above and we can see the total resource usage, but if we want to see it broken down, we can simply expand the left hand arrow to the application name:</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-iii/powerbi_tm2/" rel="attachment wp-att-8070"><img class="alignnone size-large wp-image-8070" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/powerbi_tm2.png?resize=650%2C308&#038;ssl=1" alt="" width="650" height="308" srcset="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/powerbi_tm2.png?resize=1024%2C486&amp;ssl=1 1024w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/powerbi_tm2.png?resize=300%2C142&amp;ssl=1 300w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/powerbi_tm2.png?resize=768%2C365&amp;ssl=1 768w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/powerbi_tm2.png?w=1300&amp;ssl=1 1300w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>We can see that there are numerous threads, with a few taking considerable memory over others-  The CefSharp.BrowserSubprocess can be a bit misleading-  It&#8217;s Power BI using Chromium to render the visuals that are part of the Power BI Desktop that&#8217;s part of the current run.  Chromium (CefSharp.BrowserSubprocess) subprocesses will always come in pairs, one for rendering and one for messaging.</p> <p>In the Task Manager Details, we can see each of the PIDs that correspond with the processes IDs listed in the logs.  
By updating our viewable columns, (right click, choose &#8220;threads&#8221; and click OK) you can now view how many threads are associated with a given PID.</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-iii/tm1-2/" rel="attachment wp-att-8077"><img class="alignnone size-large wp-image-8077" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM1.png?resize=650%2C118&#038;ssl=1" alt="" width="650" height="118" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM1.png?resize=1024%2C186&amp;ssl=1 1024w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM1.png?resize=300%2C54&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM1.png?resize=768%2C139&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM1.png?w=1300&amp;ssl=1 1300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM1.png?w=1950&amp;ssl=1 1950w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>In the main view of the Task Manager, you can do something similar, right clicking in the top tabs and choosing to display the PID, the process type, (to verify what is what) and the executable to pair up entries in the log with the Task Manager.</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-iii/tm2/" rel="attachment wp-att-8078"><img class="alignnone size-large wp-image-8078" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM2.png?resize=650%2C246&#038;ssl=1" alt="" width="650" height="246" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM2.png?resize=1024%2C387&amp;ssl=1 1024w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM2.png?resize=300%2C113&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM2.png?resize=768%2C290&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM2.png?w=1300&amp;ssl=1 1300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/TM2.png?w=1950&amp;ssl=1 1950w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>The SQL Server Analysis Service, the Windows console host and Power BI application subprocess are visible in the list as well.  Different types of Power BI data models, depending on the data sources, features and functions, will affect which subprocesses are required to satisfy the demand.  Viewing them in the Task Manager helps identify which processing requires heavier resources.  
This is just another step, another view into what&#8217;s going on behind the scenes with Power BI.</p>
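<p>If you only need the PIDs, the same processes can also be listed from a command prompt with tasklist.  A quick sketch: the CefSharp.BrowserSubprocess name comes from the discussion above, while PBIDesktop.exe and msmdsrv.exe are the usual executable names for Power BI Desktop and its embedded Analysis Services engine on a default install and may differ on yours.</p> <pre>
rem Power BI Desktop itself
tasklist /FI "IMAGENAME eq PBIDesktop.exe"

rem the embedded Analysis Services engine
tasklist /FI "IMAGENAME eq msmdsrv.exe"

rem the Chromium rendering/messaging subprocesses
tasklist /FI "IMAGENAME eq CefSharp.BrowserSubprocess.exe"
</pre>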
<hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBA Kevlar</a> [<a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-iii/">Power BI 101- Logging and Tracing, Part III</a>], All Right Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8068 Thu Jul 26 2018 19:59:08 GMT-0400 (EDT) Inefficient Queries to ALL_SYNONYMS https://blog.pythian.com/inefficient-queries-all_synonyms/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Short summary: Queries to <a href="https://docs.oracle.com/database/121/REFRN/GUID-DCDB52FF-8339-4EDE-B36A-2E12AFE25D33.htm#REFRN20273"><tt>ALL_SYNONYMS</tt></a> cause FTS of <tt>SYS.OBJ$</tt> which can&#8217;t be avoided.</p> <p>Let&#8217;s have a look at a simple query and its execution plan in my test 12.1.0.2 instance. The plan is pretty big, adaptive, and uses dynamic sampling. It is composed of two UNION ALL branches. The first branch starts at plan step ID 4, and is not a big deal &#8211; just 4 buffer gets and no rows returned right in the start of the branch execution, lines 12-16. </p> <p>The second part is more interesting. It is composed by the <tt>_ALL_SYNONYMS_TREE</tt> view, and, as the name suggests, it&#8217;s a <tt>CONNECT BY</tt> on top of a multi-table join. And where are the top query conditions applied? On step 27, after <tt>_ALL_SYNONYMS_TREE</tt> is fully instantiated. 
This is the only way to execute such a query, since there&#8217;s no good <tt>START WITH</tt> condition in the <tt>CONNECT BY</tt>, and the top query conditions logically can&#8217;t be pushed into <tt>START WITH</tt>.</p> <pre class="brush: sql; collapse: true; gutter: false; light: false; title: ; toolbar: true; notranslate"> SELECT TABLE_NAME, TABLE_OWNER, DB_LINK FROM ALL_SYNONYMS WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME = 'X'; Plan hash value: 4035506875 ---------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | ---------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.04 | 398 | | 1 | VIEW | ALL_SYNONYMS | 1 | 3 | 0 |00:00:00.04 | 398 | | 2 | SORT UNIQUE | | 1 | 3 | 0 |00:00:00.04 | 398 | | 3 | UNION-ALL | | 1 | | 0 |00:00:00.04 | 398 | | * 4 | FILTER | | 1 | | 0 |00:00:00.01 | 4 | |- * 5 | HASH JOIN | | 1 | 1 | 0 |00:00:00.01 | 4 | | 6 | NESTED LOOPS | | 1 | 1 | 0 |00:00:00.01 | 4 | | 7 | NESTED LOOPS | | 1 | 1 | 0 |00:00:00.01 | 4 | |- 8 | STATISTICS COLLECTOR | | 1 | | 0 |00:00:00.01 | 4 | |- * 9 | HASH JOIN | | 1 | 1 | 0 |00:00:00.01 | 4 | | 10 | NESTED LOOPS | | 1 | 1 | 0 |00:00:00.01 | 4 | |- 11 | STATISTICS COLLECTOR | | 1 | | 0 |00:00:00.01 | 4 | | 12 | NESTED LOOPS | | 1 | 1 | 0 |00:00:00.01 | 4 | | 13 | TABLE ACCESS BY INDEX ROWID | USER$ | 1 | 1 | 1 |00:00:00.01 | 2 | | * 14 | INDEX UNIQUE SCAN | I_USER1 | 1 | 1 | 1 |00:00:00.01 | 1 | | 15 | TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$ | 1 | 1 | 0 |00:00:00.01 | 2 | | * 16 | INDEX RANGE SCAN | I_OBJ5 | 1 | 1 | 0 |00:00:00.01 | 2 | | * 17 | INDEX RANGE SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | |- 18 | INDEX FULL SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 19 | INDEX UNIQUE SCAN | I_SYN1 | 0 | 1 | 0 |00:00:00.01 | 0 | | 20 | TABLE ACCESS BY INDEX ROWID | SYN$ | 0 | 1 | 0 |00:00:00.01 | 0 | |- 21 | TABLE ACCESS FULL | SYN$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 22 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | 23 | NESTED LOOPS SEMI | | 0 | 1 | 0 |00:00:00.01 | 0 | | * 24 | INDEX SKIP SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 25 | INDEX RANGE SCAN | I_OBJ4 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 26 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 27 | VIEW | _ALL_SYNONYMS_TREE | 1 | 2 | 0 |00:00:00.04 | 394 | | * 28 | CONNECT BY NO FILTERING WITH START-WITH | | 1 | | 0 |00:00:00.04 | 394 | | * 29 | FILTER | | 1 | | 0 |00:00:00.04 | 394 | | * 30 | HASH JOIN | | 1 | 89 | 0 |00:00:00.04 | 394 | | 31 | TABLE ACCESS FULL | USER$ | 1 | 76 | 76 |00:00:00.01 | 6 | | * 32 | HASH JOIN | | 1 | 89 | 0 |00:00:00.04 | 388 | | 33 | INDEX FULL SCAN | I_USER2 | 1 | 76 | 76 |00:00:00.01 | 1 | |- * 34 | HASH JOIN | | 1 | 89 | 0 |00:00:00.03 | 387 | | 35 | NESTED LOOPS | | 1 | 89 | 0 |00:00:00.03 | 387 | | 36 | NESTED LOOPS | | 1 | | 0 |00:00:00.03 | 387 | |- 37 | STATISTICS COLLECTOR | | 1 | | 0 |00:00:00.03 | 387 | | * 38 | HASH JOIN | | 1 | 89 | 0 |00:00:00.03 | 387 | | 39 | INDEX FULL SCAN | I_USER2 | 1 | 76 | 76 |00:00:00.01 | 1 | | * 40 | HASH JOIN | | 1 | 89 | 0 |00:00:00.03 | 386 | | * 41 | TABLE ACCESS FULL | OBJ$ | 1 | 5182 | 5182 |00:00:00.01 | 349 | | * 42 | HASH JOIN | | 1 | 5102 | 5178 |00:00:00.01 | 37 | | 43 | TABLE ACCESS FULL | USER$ | 1 | 76 | 76 |00:00:00.01 | 6 | | 44 | TABLE ACCESS FULL | SYN$ | 1 | 5102 | 5182 
|00:00:00.01 | 31 | | * 45 | INDEX RANGE SCAN | I_OBJ1 | 0 | | 0 |00:00:00.01 | 0 | | 46 | TABLE ACCESS BY INDEX ROWID | OBJ$ | 0 | 1 | 0 |00:00:00.01 | 0 | |- * 47 | TABLE ACCESS FULL | OBJ$ | 0 | 5182 | 0 |00:00:00.01 | 0 | | * 48 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | 49 | NESTED LOOPS SEMI | | 0 | 1 | 0 |00:00:00.01 | 0 | | * 50 | INDEX SKIP SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 51 | INDEX RANGE SCAN | I_OBJ4 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 52 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 53 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | 54 | NESTED LOOPS SEMI | | 0 | 1 | 0 |00:00:00.01 | 0 | | * 55 | INDEX SKIP SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 56 | INDEX RANGE SCAN | I_OBJ4 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 57 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 58 | FILTER | | 0 | | 0 |00:00:00.01 | 0 | | * 59 | FILTER | | 0 | | 0 |00:00:00.01 | 0 | | 60 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | | 61 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | | 62 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | | 63 | TABLE ACCESS BY INDEX ROWID | SYN$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 64 | INDEX UNIQUE SCAN | I_SYN1 | 0 | 1 | 0 |00:00:00.01 | 0 | | 65 | TABLE ACCESS BY INDEX ROWID BATCHED | OBJ$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 66 | INDEX RANGE SCAN | I_OBJ1 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 67 | INDEX RANGE SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 68 | INDEX RANGE SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 69 | FILTER | | 0 | | 0 |00:00:00.01 | 0 | | * 70 | FILTER | | 0 | | 0 |00:00:00.01 | 0 | | 71 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | | 72 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | | 73 | NESTED LOOPS | | 0 | 1 | 0 |00:00:00.01 | 0 | | 74 | TABLE ACCESS BY INDEX ROWID | USER$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 75 | INDEX UNIQUE SCAN | I_USER1 | 0 | 1 | 0 |00:00:00.01 | 0 | | 76 | TABLE ACCESS BY INDEX ROWID BATCHED | OBJ$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 77 | INDEX RANGE SCAN | I_OBJ5 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 78 | INDEX RANGE SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 79 | INDEX RANGE SCAN | I_OBJAUTH1 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 80 | FIXED TABLE FULL | X$KZSRO | 0 | 1 | 0 |00:00:00.01 | 0 | | * 81 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | 82 | NESTED LOOPS SEMI | | 0 | 1 | 0 |00:00:00.01 | 0 | | * 83 | INDEX SKIP SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 84 | INDEX RANGE SCAN | I_OBJ4 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 85 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | * 86 | FIXED TABLE FULL | X$KZSPR | 0 | 23 | 0 |00:00:00.01 | 0 | | * 87 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | | 88 | NESTED LOOPS SEMI | | 0 | 1 | 0 |00:00:00.01 | 0 | | * 89 | INDEX SKIP SCAN | I_USER2 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 90 | INDEX RANGE SCAN | I_OBJ4 | 0 | 1 | 0 |00:00:00.01 | 0 | | * 91 | TABLE ACCESS FULL | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | ---------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - filter((( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR 
(((SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND &quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2) OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL) AND IS NOT NULL))) 5 - access(&quot;O&quot;.&quot;OBJ#&quot;=&quot;S&quot;.&quot;OBJ#&quot;) 9 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 14 - access(&quot;U&quot;.&quot;NAME&quot;='PUBLIC') 16 - access(&quot;O&quot;.&quot;SPARE3&quot;=&quot;U&quot;.&quot;USER#&quot; AND &quot;O&quot;.&quot;NAME&quot;='X' AND &quot;O&quot;.&quot;TYPE#&quot;=5) filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 17 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 19 - access(&quot;O&quot;.&quot;OBJ#&quot;=&quot;S&quot;.&quot;OBJ#&quot;) 22 - filter((&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 24 - access(&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter((&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))) 25 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88 AND &quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot;) 26 - filter((&quot;UE&quot;.&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 27 - filter((&quot;ST&quot;.&quot;SYN_OWNER&quot;='PUBLIC' AND &quot;ST&quot;.&quot;SYN_SYNONYM_NAME&quot;='X')) 28 - access(&quot;S&quot;.&quot;BASE_SYN_ID&quot;=PRIOR NULL AND &quot;S&quot;.&quot;ORIGIN_CON_ID&quot;=PRIOR NULL) filter( IS NOT NULL) 29 - filter(((( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR (((SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND &quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2) OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL) AND IS NOT NULL)) AND (( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR (((SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND &quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2) OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL) AND IS NOT NULL)))) 30 - access(&quot;O&quot;.&quot;SPARE3&quot;=&quot;U&quot;.&quot;USER#&quot;) 32 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 34 - access(&quot;O&quot;.&quot;OBJ#&quot;=&quot;S&quot;.&quot;OBJ#&quot;) 38 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 40 - access(&quot;BU&quot;.&quot;USER#&quot;=&quot;O&quot;.&quot;SPARE3&quot; AND &quot;S&quot;.&quot;NAME&quot;=&quot;O&quot;.&quot;NAME&quot;) 41 - filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 42 - access(&quot;S&quot;.&quot;OWNER&quot;=&quot;BU&quot;.&quot;NAME&quot;) 45 - access(&quot;O&quot;.&quot;OBJ#&quot;=&quot;S&quot;.&quot;OBJ#&quot; AND &quot;O&quot;.&quot;TYPE#&quot;=5) filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 47 - filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 48 - filter((&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 50 - access(&quot;U2&quot;.&quot;TYPE#&quot;=2 AND 
&quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter((&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))) 51 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88 AND &quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot;) 52 - filter((&quot;UE&quot;.&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 53 - filter((&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 55 - access(&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter((&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))) 56 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88 AND &quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot;) 57 - filter((&quot;UE&quot;.&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 58 - filter((( IS NOT NULL OR (&quot;S&quot;.&quot;NODE&quot; IS NULL AND IS NOT NULL)) AND (( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR (((SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND &quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2) OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL) AND IS NOT NULL)))) 59 - filter(TO_NUMBER(SYS_CONTEXT('USERENV','CON_ID')) IS NOT NULL) 64 - access(&quot;S&quot;.&quot;OBJ#&quot;=:B1) 66 - access(&quot;O&quot;.&quot;OBJ#&quot;=:B1 AND &quot;O&quot;.&quot;TYPE#&quot;=5) filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 67 - access(&quot;O&quot;.&quot;SPARE3&quot;=&quot;U&quot;.&quot;USER#&quot;) 68 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 69 - filter((( IS NOT NULL OR &quot;BA&quot;.&quot;GRANTOR#&quot;=USERENV('SCHEMAID')) AND (( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR (((SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE' AND &quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2) OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL) AND IS NOT NULL)))) 70 - filter(:B1 IS NULL) 75 - access(&quot;BU&quot;.&quot;NAME&quot;=:B1) 77 - access(&quot;BU&quot;.&quot;USER#&quot;=&quot;O&quot;.&quot;SPARE3&quot; AND &quot;O&quot;.&quot;NAME&quot;=:B1) 78 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 79 - access(&quot;BA&quot;.&quot;OBJ#&quot;=&quot;O&quot;.&quot;OBJ#&quot;) 80 - filter(&quot;KZSROROL&quot;=:B1) 81 - filter((&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 83 - access(&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter((&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))) 84 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88 AND &quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot;) 85 - filter((&quot;UE&quot;.&quot;TYPE#&quot;=:B1 AND 
&quot;UE&quot;.&quot;USER#&quot;=:B2)) 86 - filter((((-&quot;KZSPRPRV&quot;)=(-45) OR (-&quot;KZSPRPRV&quot;)=(-47) OR (-&quot;KZSPRPRV&quot;)=(-397) OR (-&quot;KZSPRPRV&quot;)=(-48) OR (-&quot;KZSPRPRV&quot;)=(-49) OR (-&quot;KZSPRPRV&quot;)=(-50)) AND &quot;INST_ID&quot;=USERENV('INSTANCE'))) 87 - filter((&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) 89 - access(&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter((&quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id')))) 90 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88 AND &quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot;) 91 - filter((&quot;UE&quot;.&quot;TYPE#&quot;=:B1 AND &quot;UE&quot;.&quot;USER#&quot;=:B2)) Note ----- - dynamic statistics used: dynamic sampling (level=2) - this is an adaptive plan (rows marked '-' are inactive) </pre> <p>Because my test database is very small &#8211; only 22K objects created &#8211; such a query to <tt>ALL_SYNONYMS</tt> is pretty fast, and it takes just under 400 buffer gets to execute. Let&#8217;s look at the same query execution statistics in an EBS 12.2 database. Although the plan is slightly different, its shape is still the same, the hierarchy is still there, and SYS.OBJ$ full scan is present. You can see that <tt>OBJ$</tt> is much bigger, more than 600MB. FTS of <tt>OBJ$</tt> was pretty fast because of the buffered reads, and all <tt>OBJ$</tt>&#8216;s blocks were cached already. Occasionally though, Oracle switches to direct path reads while running this query, and it means reading 600MB+ data off disk. In such a case, the execution time can go up; I&#8217;ve seen a few cases where it was as high as 30s.</p> <p>On top of the issue with FTS, there are multiple subqueries which drive buffer gets up to 720K. 
This is a side effect of filtering <tt>_ALL_SYNONYMS_TREE</tt> data too late on step 23.</p> <pre class="brush: sql; collapse: true; gutter: false; light: false; title: ; toolbar: true; notranslate"> ---------------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | ---------------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:04.78 | 720K| 2387 | | 1 | VIEW | ALL_SYNONYMS | 1 | 3 | 1 |00:00:04.78 | 720K| 2387 | | 2 | SORT UNIQUE | | 1 | 3 | 1 |00:00:04.78 | 720K| 2387 | | 3 | UNION-ALL | | 1 | | 1 |00:00:04.78 | 720K| 2387 | |* 4 | FILTER | | 1 | | 1 |00:00:00.01 | 15 | 2 | | 5 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 12 | 2 | | 6 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 11 | 2 | | 7 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 9 | 1 | | 8 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 7 | 1 | | 9 | TABLE ACCESS BY INDEX ROWID | USER$ | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | |* 10 | INDEX UNIQUE SCAN | I_USER1 | 1 | 1 | 1 |00:00:00.01 | 2 | 0 | | 11 | TABLE ACCESS BY INDEX ROWID BATCHED | OBJ$ | 1 | 1 | 1 |00:00:00.01 | 4 | 1 | |* 12 | INDEX RANGE SCAN | I_OBJ5 | 1 | 1 | 1 |00:00:00.01 | 3 | 1 | |* 13 | INDEX RANGE SCAN | I_USER2 | 1 | 1 | 1 |00:00:00.01 | 2 | 0 | |* 14 | INDEX UNIQUE SCAN | I_SYN1 | 1 | 1 | 1 |00:00:00.01 | 2 | 1 | | 15 | TABLE ACCESS BY INDEX ROWID | SYN$ | 1 | 1 | 1 |00:00:00.01 | 1 | 0 | |* 16 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | |* 17 | INDEX RANGE SCAN | I_USER_EDITIONING | 1 | 12 | 2 |00:00:00.01 | 2 | 0 | |* 18 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 0 | 1 | 0 |00:00:00.01 | 0 | 0 | |* 19 | INDEX RANGE SCAN | I_USER_EDITIONING | 0 | 12 | 0 |00:00:00.01 | 0 | 0 | | 20 | NESTED LOOPS SEMI | | 0 | 5 | 0 |00:00:00.01 | 0 | 0 | |* 21 | INDEX RANGE SCAN | I_OBJ4 | 0 | 5 | 0 |00:00:00.01 | 0 | 0 | |* 22 | INDEX RANGE SCAN | I_USER2 | 0 | 266 | 0 |00:00:00.01 | 0 | 0 | |* 23 | VIEW | _ALL_SYNONYMS_TREE | 1 | 2 | 0 |00:00:04.78 | 720K| 2385 | |* 24 | CONNECT BY NO FILTERING WITH START-WITH | | 1 | | 10075 |00:00:04.78 | 720K| 2385 | |* 25 | FILTER | | 1 | | 10058 |00:00:01.41 | 333K| 0 | |* 26 | HASH JOIN | | 1 | 70 | 10058 |00:00:00.63 | 112K| 0 | |* 27 | HASH JOIN | | 1 | 70 | 10058 |00:00:00.61 | 112K| 0 | | 28 | NESTED LOOPS | | 1 | 70 | 10058 |00:00:00.60 | 112K| 0 | | 29 | NESTED LOOPS | | 1 | 70 | 10058 |00:00:00.59 | 105K| 0 | |* 30 | HASH JOIN | | 1 | 70 | 10058 |00:00:00.46 | 87654 | 0 | |* 31 | HASH JOIN | | 1 | 70 | 10058 |00:00:00.45 | 87634 | 0 | | 32 | TABLE ACCESS FULL | USER$ | 1 | 3293 | 3293 |00:00:00.01 | 173 | 0 | |* 33 | HASH JOIN | | 1 | 193K| 198K|00:00:00.42 | 87461 | 0 | | 34 | TABLE ACCESS FULL | SYN$ | 1 | 174K| 174K|00:00:00.04 | 1086 | 0 | |* 35 | TABLE ACCESS FULL | OBJ$ | 1 | 174K| 174K|00:00:00.28 | 86375 | 0 | | 36 | INDEX FULL SCAN | I_USER2 | 1 | 3293 | 3293 |00:00:00.01 | 20 | 0 | |* 37 | INDEX RANGE SCAN | I_OBJ1 | 10058 | 1 | 10058 |00:00:00.13 | 17611 | 0 | | 38 | TABLE ACCESS BY INDEX ROWID | OBJ$ | 10058 | 1 | 10058 |00:00:00.01 | 6803 | 0 | | 39 | INDEX FULL SCAN | I_USER2 | 1 | 3293 | 3293 |00:00:00.01 | 20 | 0 | | 40 | TABLE ACCESS FULL | USER$ | 1 | 3293 | 3293 |00:00:00.01 | 173 | 0 | |* 41 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 4 | 1 | 3 |00:00:00.01 | 11 | 0 | |* 42 | INDEX 
RANGE SCAN | I_USER_EDITIONING | 4 | 12 | 9 |00:00:00.01 | 8 | 0 | |* 43 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 3 | 1 | 3 |00:00:00.01 | 9 | 0 | |* 44 | INDEX RANGE SCAN | I_USER_EDITIONING | 3 | 12 | 9 |00:00:00.01 | 6 | 0 | | 45 | NESTED LOOPS SEMI | | 9419 | 5 | 9419 |00:00:00.39 | 113K| 0 | |* 46 | INDEX RANGE SCAN | I_OBJ4 | 9419 | 5 | 81193 |00:00:00.26 | 22421 | 0 | |* 47 | INDEX RANGE SCAN | I_USER2 | 81193 | 266 | 9419 |00:00:00.11 | 90613 | 0 | |* 48 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 42 | 1 | 42 |00:00:00.01 | 121 | 0 | |* 49 | INDEX RANGE SCAN | I_USER_EDITIONING | 42 | 12 | 122 |00:00:00.01 | 79 | 0 | |* 50 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 41 | 1 | 41 |00:00:00.01 | 118 | 0 | |* 51 | INDEX RANGE SCAN | I_USER_EDITIONING | 41 | 12 | 120 |00:00:00.01 | 77 | 0 | | 52 | NESTED LOOPS SEMI | | 10053 | 5 | 10053 |00:00:00.35 | 107K| 0 | |* 53 | INDEX RANGE SCAN | I_OBJ4 | 10053 | 5 | 69782 |00:00:00.24 | 27630 | 0 | |* 54 | INDEX RANGE SCAN | I_USER2 | 69782 | 266 | 10053 |00:00:00.09 | 79952 | 0 | |* 55 | FILTER | | 9429 | | 9424 |00:00:03.29 | 387K| 2385 | |* 56 | FILTER | | 9429 | | 9424 |00:00:00.37 | 68334 | 211 | | 57 | NESTED LOOPS | | 9429 | 1 | 9424 |00:00:00.37 | 68334 | 211 | | 58 | NESTED LOOPS | | 9429 | 1 | 9424 |00:00:00.35 | 58613 | 211 | | 59 | NESTED LOOPS | | 9429 | 1 | 9424 |00:00:00.33 | 49184 | 211 | | 60 | TABLE ACCESS BY INDEX ROWID | SYN$ | 9429 | 1 | 9424 |00:00:00.24 | 21904 | 211 | |* 61 | INDEX UNIQUE SCAN | I_SYN1 | 9429 | 1 | 9424 |00:00:00.22 | 12480 | 211 | | 62 | TABLE ACCESS BY INDEX ROWID BATCHED | OBJ$ | 9424 | 1 | 9424 |00:00:00.08 | 27280 | 0 | |* 63 | INDEX RANGE SCAN | I_OBJ1 | 9424 | 1 | 9424 |00:00:00.07 | 17811 | 0 | |* 64 | INDEX RANGE SCAN | I_USER2 | 9424 | 1 | 9424 |00:00:00.01 | 9429 | 0 | |* 65 | INDEX RANGE SCAN | I_USER2 | 9424 | 1 | 9424 |00:00:00.01 | 9721 | 0 | |* 66 | FILTER | | 9390 | | 9288 |00:00:02.70 | 206K| 2174 | |* 67 | FILTER | | 9390 | | 9561 |00:00:02.46 | 98219 | 2174 | | 68 | NESTED LOOPS | | 9390 | 6 | 9561 |00:00:02.45 | 98219 | 2174 | | 69 | NESTED LOOPS | | 9390 | 1 | 9703 |00:00:00.79 | 70185 | 202 | | 70 | NESTED LOOPS | | 9390 | 1 | 9703 |00:00:00.76 | 59765 | 202 | | 71 | TABLE ACCESS BY INDEX ROWID | USER$ | 9390 | 1 | 9390 |00:00:00.03 | 22222 | 0 | |* 72 | INDEX UNIQUE SCAN | I_USER1 | 9390 | 1 | 9390 |00:00:00.02 | 11776 | 0 | | 73 | TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$ | 9390 | 1 | 9703 |00:00:00.73 | 37543 | 202 | |* 74 | INDEX RANGE SCAN | I_OBJ5 | 9390 | 1 | 9703 |00:00:00.71 | 27840 | 202 | |* 75 | INDEX RANGE SCAN | I_USER2 | 9703 | 1 | 9703 |00:00:00.02 | 10420 | 0 | |* 76 | INDEX RANGE SCAN | I_OBJAUTH1 | 9703 | 7 | 9561 |00:00:01.66 | 28034 | 1972 | |* 77 | FIXED TABLE FULL | X$KZSRO | 161 | 1 | 2 |00:00:00.01 | 0 | 0 | |* 78 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 206 | 1 | 153 |00:00:00.01 | 588 | 0 | |* 79 | INDEX RANGE SCAN | I_USER_EDITIONING | 206 | 12 | 948 |00:00:00.01 | 381 | 0 | |* 80 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 142 | 1 | 142 |00:00:00.01 | 403 | 0 | |* 81 | INDEX RANGE SCAN | I_USER_EDITIONING | 142 | 12 | 295 |00:00:00.01 | 261 | 0 | | 82 | NESTED LOOPS SEMI | | 8531 | 5 | 8531 |00:00:00.20 | 107K| 0 | |* 83 | INDEX RANGE SCAN | I_OBJ4 | 8531 | 5 | 73472 |00:00:00.06 | 24386 | 0 | |* 84 | INDEX RANGE SCAN | I_USER2 | 73472 | 266 | 8531 |00:00:00.12 | 82632 | 0 | |* 85 | FIXED TABLE FULL | X$KZSPR | 1 | 5 | 1 |00:00:00.01 | 0 | 0 | |* 86 | TABLE ACCESS BY INDEX 
ROWID BATCHED | USER_EDITIONING$ | 3 | 1 | 3 |00:00:00.01 | 9 | 0 | |* 87 | INDEX RANGE SCAN | I_USER_EDITIONING | 3 | 12 | 9 |00:00:00.01 | 6 | 0 | |* 88 | TABLE ACCESS BY INDEX ROWID BATCHED | USER_EDITIONING$ | 3 | 1 | 3 |00:00:00.01 | 9 | 0 | |* 89 | INDEX RANGE SCAN | I_USER_EDITIONING | 3 | 12 | 9 |00:00:00.01 | 6 | 0 | | 90 | NESTED LOOPS SEMI | | 9419 | 5 | 9419 |00:00:00.17 | 113K| 0 | |* 91 | INDEX RANGE SCAN | I_OBJ4 | 9419 | 5 | 81193 |00:00:00.04 | 22421 | 0 | |* 92 | INDEX RANGE SCAN | I_USER2 | 81193 | 266 | 9419 |00:00:00.11 | 90613 | 0 | ---------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 4 - filter((( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR ( IS NOT NULL AND ((&quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2 AND SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE') OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL)))) 10 - access(&quot;U&quot;.&quot;NAME&quot;='PUBLIC') 12 - access(&quot;O&quot;.&quot;SPARE3&quot;=&quot;U&quot;.&quot;USER#&quot; AND &quot;O&quot;.&quot;NAME&quot;='X' AND &quot;O&quot;.&quot;TYPE#&quot;=5) filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 13 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 14 - access(&quot;O&quot;.&quot;OBJ#&quot;=&quot;S&quot;.&quot;OBJ#&quot;) 16 - filter(&quot;TYPE#&quot;=:B1) 17 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 18 - filter(&quot;UE&quot;.&quot;TYPE#&quot;=:B1) 19 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 21 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88) 22 - access(&quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot; AND &quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter(&quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) 23 - filter((&quot;ST&quot;.&quot;SYN_OWNER&quot;='PUBLIC' AND &quot;ST&quot;.&quot;SYN_SYNONYM_NAME&quot;='X')) 24 - access(&quot;S&quot;.&quot;BASE_SYN_ID&quot;=PRIOR NULL AND &quot;S&quot;.&quot;ORIGIN_CON_ID&quot;=PRIOR NULL) filter( IS NOT NULL) 25 - filter(((( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR ( IS NOT NULL AND ((&quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2 AND SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE') OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL))) AND (( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR ( IS NOT NULL AND ((&quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2 AND SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE') OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL))))) 26 - access(&quot;O&quot;.&quot;SPARE3&quot;=&quot;U&quot;.&quot;USER#&quot;) 27 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 
30 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 31 - access(&quot;S&quot;.&quot;OWNER&quot;=&quot;BU&quot;.&quot;NAME&quot; AND &quot;BU&quot;.&quot;USER#&quot;=&quot;O&quot;.&quot;SPARE3&quot;) 33 - access(&quot;S&quot;.&quot;NAME&quot;=&quot;O&quot;.&quot;NAME&quot;) 35 - filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 37 - access(&quot;O&quot;.&quot;OBJ#&quot;=&quot;S&quot;.&quot;OBJ#&quot; AND &quot;O&quot;.&quot;TYPE#&quot;=5) filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 41 - filter(&quot;TYPE#&quot;=:B1) 42 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 43 - filter(&quot;UE&quot;.&quot;TYPE#&quot;=:B1) 44 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 46 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88) 47 - access(&quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot; AND &quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter(&quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) 48 - filter(&quot;TYPE#&quot;=:B1) 49 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 50 - filter(&quot;UE&quot;.&quot;TYPE#&quot;=:B1) 51 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 53 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88) 54 - access(&quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot; AND &quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter(&quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) 55 - filter((( IS NOT NULL OR (&quot;S&quot;.&quot;NODE&quot; IS NULL AND IS NOT NULL)) AND (( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR ( IS NOT NULL AND ((&quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2 AND SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE') OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL))))) 56 - filter(TO_NUMBER(SYS_CONTEXT('USERENV','CON_ID')) IS NOT NULL) 61 - access(&quot;S&quot;.&quot;OBJ#&quot;=:B1) 63 - access(&quot;O&quot;.&quot;OBJ#&quot;=:B1 AND &quot;O&quot;.&quot;TYPE#&quot;=5) filter(&quot;O&quot;.&quot;TYPE#&quot;=5) 64 - access(&quot;O&quot;.&quot;SPARE3&quot;=&quot;U&quot;.&quot;USER#&quot;) 65 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 66 - filter((( IS NOT NULL OR &quot;BA&quot;.&quot;GRANTOR#&quot;=USERENV('SCHEMAID')) AND (( IS NULL AND &quot;O&quot;.&quot;TYPE#&quot;&lt;&gt;88) OR BITAND(&quot;O&quot;.&quot;FLAGS&quot;,1048576)=1048576 OR BITAND(&quot;U&quot;.&quot;SPARE1&quot;,16)=0 OR ( IS NOT NULL AND ((&quot;U&quot;.&quot;TYPE#&quot;&lt;&gt;2 AND SYS_CONTEXT('userenv','current_edition_name')='ORA$BASE') OR (&quot;U&quot;.&quot;TYPE#&quot;=2 AND &quot;U&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) OR IS NOT NULL))))) 67 - filter(:B1 IS NULL) 72 - access(&quot;BU&quot;.&quot;NAME&quot;=:B1) 74 - access(&quot;BU&quot;.&quot;USER#&quot;=&quot;O&quot;.&quot;SPARE3&quot; AND &quot;O&quot;.&quot;NAME&quot;=:B1) 75 - access(&quot;O&quot;.&quot;OWNER#&quot;=&quot;U&quot;.&quot;USER#&quot;) 76 - access(&quot;BA&quot;.&quot;OBJ#&quot;=&quot;O&quot;.&quot;OBJ#&quot;) 77 - filter(&quot;KZSROROL&quot;=:B1) 78 - 
filter(&quot;TYPE#&quot;=:B1) 79 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 80 - filter(&quot;UE&quot;.&quot;TYPE#&quot;=:B1) 81 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 83 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88) 84 - access(&quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot; AND &quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter(&quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) 85 - filter((((-&quot;KZSPRPRV&quot;)=(-45) OR (-&quot;KZSPRPRV&quot;)=(-47) OR (-&quot;KZSPRPRV&quot;)=(-397) OR (-&quot;KZSPRPRV&quot;)=(-48) OR (-&quot;KZSPRPRV&quot;)=(-49) OR (-&quot;KZSPRPRV&quot;)=(-50)) AND &quot;INST_ID&quot;=USERENV('INSTANCE'))) 86 - filter(&quot;TYPE#&quot;=:B1) 87 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 88 - filter(&quot;UE&quot;.&quot;TYPE#&quot;=:B1) 89 - access(&quot;UE&quot;.&quot;USER#&quot;=:B1) 91 - access(&quot;O2&quot;.&quot;DATAOBJ#&quot;=:B1 AND &quot;O2&quot;.&quot;TYPE#&quot;=88) 92 - access(&quot;O2&quot;.&quot;OWNER#&quot;=&quot;U2&quot;.&quot;USER#&quot; AND &quot;U2&quot;.&quot;TYPE#&quot;=2 AND &quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) filter(&quot;U2&quot;.&quot;SPARE2&quot;=TO_NUMBER(SYS_CONTEXT('userenv','current_edition_id'))) </pre> <p>Why <tt>CONNECT BY</tt> is in this view? It wasn&#8217;t always this way. Some time ago, there was no hierarchy inside. It appeared as a fix for bug 3369744 in 10.2.0.1. The fix was supposed to help in situations where <tt>ALL_SYNONYMS</tt> didn&#8217;t report synonyms for synonyms. Hence, Oracle decided to add a hierarchy inside this view. In my opinion, that was a wrong decision. The number of cases where you have multiple layers of synonyms is relatively small. So it would be much better to close this issue as not a bug, and suggest using existing <tt>ALL_SYNONYMS</tt> to build a tree in the client code. This way a query would run instantly with no FTS because of a perfect <tt>START WITH</tt> condition.</p> <p>Oracle JDBC driver as part of its implementation runs a few queries to <tt>ALL_SYNONYMS</tt>. This is how I found it. The driver uses hierarchical queries on top of the view because it was implemented a long time ago when there was no tree inside <tt>ALL_SYNONYMS</tt>. I think that was the proper way to deal with the &#8220;synonyms for synonyms&#8221; case. From what I understand, JDBC driver can run such queries when user code tries to utilize Abstract Data Types/Collections. It caches results per Connection so that the number of times such queries are run is relatively small. If you reuse your Connections and do not re-initialize them too often, of course.</p> </div></div> Timur Akhmadeev https://blog.pythian.com/?p=104767 Thu Jul 26 2018 10:40:48 GMT-0400 (EDT) Google NEXT 2018: Google Cloud Platform and the Pythian Partnership https://blog.pythian.com/google-next-2018-google-cloud-platform-pythian-partnership/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">On day one of the Google NEXT 2018 conference which has a sold-out attendance at San Francisco’s Moscone center, Google’s drive and mission for cloud dominance and on-boarding of enterprise clients couldn’t be more poignant. 
At this conference, Google Cloud is sending a strong message to the technology space that it is enterprise-ready and gaining momentum in bringing enterprise companies to the cloud. As an example, Unity, a game engine development company made a migration from AWS to Google Cloud. This move stands out as an example of the tidal shift Google Cloud is making in the cloud product and infrastructure space for Enterprises. Moreso, as a reflection of the giant strides made by the Google Cloud team, they are now listed as a Leader in Gartner’s 2018 Magic Quadrant for Cloud Infrastructure as a Service (IaaS), moving up from last year, where they were listed as a Visionary.</span></p> <div class="imgcap"> <p><img class="alignnone size-full wp-image-104832" src="https://blog.pythian.com/wp-content/uploads/pythian_googlecloud.png" alt="" width="1252" height="592" srcset="https://blog.pythian.com/wp-content/uploads/pythian_googlecloud.png 1252w, https://blog.pythian.com/wp-content/uploads/pythian_googlecloud-465x220.png 465w, https://blog.pythian.com/wp-content/uploads/pythian_googlecloud-350x165.png 350w" sizes="(max-width: 1252px) 100vw, 1252px" /></p> <div class="thecap">Pythian is established as a key partner with Google Cloud Platform on the Cloud vision and mission. Google CEO Diane Greene was quick to point out some of the initiatives and the key principles that drive the Cloud Vision for Google and the reason behind some of the “supercharged-scale” gains that have been observed over the past year. The core initiatives are Google&#8217;s commitment to open-source and its constant interaction and embrace of the open-source community. Two key examples of Google&#8217;s partnership and openness with the open-source space is TensorFlow for Machine Learning and Kubernetes for Containerization, which has seen these technologies grow to become most widely adopted within the community together with community best practices. Another initiative is Google’s push for Security and Artificial Intelligence within the cloud perimeter. Diane Greene identified Security as the #1 worry for clients and Artificial Intelligence as the #1 opportunity.</div> </div> <p><span style="font-weight: 400;">Security has been big for Google and is ingrained at every level of Cloud infrastructure/ product implementation. For Google, Security starts with the Titan chips that mitigate firmware threats at the hardware level which is the base level of Computer Organization as the operating system, the file system, and other software application/ utilities depend on the hardware processor. The Titan chip serves as a “hardware root of trust” to ensure the integrity of the drivers loaded on the machine and mitigates rootkits threats to the data/ information stored on the computing machine. Google’s investment in security from the hardware level ensures that it is the cornerstone of the Cloud Platform products and infrastructure.</span></p> <p><span style="font-weight: 400;">Google is making big bets and virtually leading from the front in Artificial Intelligence, the great disruptor of the 21st century. As a consequence, AI capabilities are a big part of the Cloud Service and Product offerings. Beginning with Datalab a cloud-based notebook for Machine Learning modeling and experimentation, to the Google Cloud Machine Learning Engine (MLE) for large-scale model training and deployment and the slew of APIs for AI tasks such as Vision, NLP, translation, and DialogFlow. 
Announced in this years conference, Google Cloud AutoML has been introduced to further create the capabilities to train high-quality custom machine learning models out of the box. In Beta stage are capabilities for Vision, Natural Language, and Translation. Google Cloud’s investment in AI for a key contributor to its dump up Gartner’s Quadrant as a Leader in the IaaS space.</span></p> <h3 id="the-pythian-partnership">The Pythian Partnership</h3> <p><span style="font-weight: 400;">Pythian&#8217;s relationship with Google Cloud as a partner has never been better.</span><span style="font-weight: 400;"> As Diane Greene put it, “customers really win when the tech companies partner”. Pythian is firmly stationed as a key partner in the cloud ecosystem with Google, working hand-in-hand to bring enterprise data work-loads and products to the cloud and to enable companies to optimize their cloud strategy and develop their cloud competencies.</span> Moreso, <a href="https://pythian.com/analytics-as-a-service/" target="_blank" rel="noopener">Pythian’s KICKASS (Analytics as a Service)</a> <span style="font-weight: 400;">platform provides excellent enterprise tooling for very-large-scale and complex data migrations from on-site database clusters to the Google cloud data warehouse and database offerings.</span></p> <div class="imgcap"> <p><img class="alignnone size-full wp-image-104833" src="https://blog.pythian.com/wp-content/uploads/pythian_booth.png" alt="" width="3264" height="1836" srcset="https://blog.pythian.com/wp-content/uploads/pythian_booth.png 3264w, https://blog.pythian.com/wp-content/uploads/pythian_booth-465x262.png 465w, https://blog.pythian.com/wp-content/uploads/pythian_booth-350x197.png 350w" sizes="(max-width: 3264px) 100vw, 3264px" /></p> <div class="thecap">Pythian&#8217;s booth at Cloud NEXT 2018 Partner Summit</div> </div> <p><span style="font-weight: 400;">Google Cloud’s commitment to Security dovetails nicely with Pythian’s own key values and key belief in Security as it drives it&#8217;s “Love Your Data” mandate. This is the key push behind its development of </span><a href="https://tehama.io/"><span style="font-weight: 400;">Tehama</span></a><span style="font-weight: 400;"> a super-secure and agile platform for providing IT services globally. Moreso Pythian is well-aligned with Google Cloud strategy for AI with its own Big Data and Enterprise Data Science teams consisting of highly-skilled and experienced experts in AI, Advanced Analytics, Machine Learning and Deep Learning science, programming and engineering to enable clients to win in the market by enhancing their decision support and corporate strategy through harnessing intelligence from data.</span></p> <p><b>Given the energy and momentum at Google NEXT this week, there has never been a better time to consider Google Cloud &#8211; and <a href="https://pythian.com/google-cloud-platform/">we&#8217;re happy to have that conversation with you</a>.</b></p> </div></div> Ekaba Bisong https://blog.pythian.com/?p=104831 Thu Jul 26 2018 09:18:32 GMT-0400 (EDT) Oracle Database 18.3.0 and Docker http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/Pvspl85exVM/ <p><img class="alignleft wp-image-7820" src="https://oracle-base.com/blog/wp-content/uploads/2018/01/docker-db.png" alt="" width="200" height="171" />Just a quick heads-up to let you know I&#8217;ve updated my Docker builds to use the new 18c on-prem software.</p> <p>If you like to play around with Docker, here is some stuff you might want to check out. 
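</p> <p>If you just want to see them running, the rough shape of it is something like the sketch below. Treat it as a minimal, illustrative sketch only: it assumes Docker and Docker Compose are already installed, and that you&#8217;ve downloaded the Oracle software into whatever locations the README in the repo (linked below) expects, so check the repo and compose file links for the real steps.</p> <pre># Minimal sketch only. Clone the repo and switch to the 18c + ORDS compose directory.
git clone https://github.com/oraclebase/dockerfiles.git
cd dockerfiles/compose/ol7_183_ords

# Build the images and start the DB and Tomcat/ORDS containers in the background.
docker-compose up --build -d

# Follow the logs to watch the database being created on the first run.
docker-compose logs -f</pre> <p>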
Remember, I&#8217;m not saying this is production ready. It&#8217;s just stuff I use for learning and demos&#8230;</p> <ul> <li>My Docker GitHub Repo <a href="https://github.com/oraclebase/dockerfiles">here</a>.</li> <li>The new 18c container build <a href="https://github.com/oraclebase/dockerfiles/tree/master/database/ol7_183">here</a>.</li> <li>The new Docker compose file <a href="https://github.com/oraclebase/dockerfiles/tree/master/compose/ol7_183_ords">here</a> to fire up an 18c DB container and a Tomcat 9 + ORDS 18.2 container to front APEX 18.1, and allow you to play with ORDS.</li> </ul> <p>Remember, if Docker is not your thing, you can always my Vagrant build <a href="https://github.com/oraclebase/vagrant/tree/master/ol7_183">here</a> to fire up the same thing, but in a single VirtualBox VM.</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/07/26/oracle-database-18-3-0-and-docker/">Oracle Database 18.3.0 and Docker</a> was first posted on July 26, 2018 at 10:36 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/Pvspl85exVM" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8271 Thu Jul 26 2018 05:36:18 GMT-0400 (EDT) The latest news from Google Cloud Platform https://blog.pythian.com/latest-news-google-cloud-platform/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><i><span style="font-weight: 400;">I joined</span></i><a href="https://pythian.com/experts/chris-presley/"> <i><span style="font-weight: 400;">Chris Presley</span></i></a><i><span style="font-weight: 400;"> in an episode of one of his new podcasts, the</span></i><a href="https://blog.pythian.com/?s=cloudscape"> <i><span style="font-weight: 400;">Cloudscape Podcast</span></i></a><i><span style="font-weight: 400;">, to share the latest news taking place around Google Cloud Platform (GCP).</span></i></p> <p><i><span style="font-weight: 400;">Some of the highlights of our discussion were:</span></i></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Google managed services &#8211; SAP HANA</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">New Google Storage Services</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Google’s Partner Interconnect</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Kubernetes updates</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Sole-Tenant Nodes on Google Compute Engine</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Dataflow’s new Streaming Engine</span></li> </ul> <p>&nbsp;</p> <p><b>Google Managed Services &#8211; SAP HANA</b></p> <p><span style="font-weight: 400;">Google announced a big partnership with SAP at Google Next 2017, and have since put in a great effort to make Google Cloud more friendly for SAP-type workloads.</span></p> <p><span style="font-weight: 400;">We’ve seen some things come out during the past year, but now I feel like there is really a lot of investment in that space. 
There have been rumored plans for a  managed SAP service that they’re planning—</span><i><span style="font-weight: 400;">a few articles in the news</span></i><span style="font-weight: 400;">.</span></p> <p><span style="font-weight: 400;">The interesting thing is that it’s not just about the managed services. When you read more about it, you will see that Google will be certifying specific VMs for SAP-type workloads for SAP HANA-type workloads. You’re going to be getting the new integrations with G Suite. Google has even rolled out the new ultramem-160 machine, which has approximately four terabytes of ram and 160 virtual CPU cores.</span></p> <p><span style="font-weight: 400;">In the future, I see them really strengthening their Google SAP alliance. This is just one example of how Google has been investing in strong technology partnerships.</span></p> <p><b>New Google Storage Services</b></p> <p><span style="font-weight: 400;">Google is rolling out a shared file system storage service, similar to Amazon’s EFS, called Cloud Filestore. So we’re going to have a managed service on GCP now that’s not based on running our own GCE VMs.</span></p> <p><span style="font-weight: 400;">Right now the service is still in beta, and it hasn’t officially been launched. They will be rolling it out the beta over the next few weeks.</span></p> <p><span style="font-weight: 400;">I’ve already signed up for it, so hopefully I’ll get to play with it soon. It does have some limitations, such as a maximum of 64 terabytes. I imagine they’ll probably charge you on a gigabytes-per-month rate, but it is a very high-performing NFS. They’re claiming 700 megabytes per second at 30,000 IOPS for the premium tier, which is impressive.</span></p> <p><span style="font-weight: 400;">It’s only going to be in a few regions at first, and it is only coming in an NFSv3 “flavor” for now. So it’s not quite primed for the Windows world yet, but knowing Google, they will roll that out sometime shortly after.</span></p> <p><span style="font-weight: 400;">On another note, Google has finally moved their transfer appliance in GA, but only in the US. If you’ve heard of this before, which you probably have if you’re an enterprise with a lot of data, it’s pretty much an appliance depth, it’s shipped your datacenter, you can fit it in a 19-inch rack, and it comes in one U or four U, I believe. Capacities are 100 terabytes or 480 terabytes.</span></p> <p><span style="font-weight: 400;">Apparently, Google is launching a new region in Los Angeles where they’re really trying to target the media and film entertainment industry. I believe this is the first cloud region from any service provider that is being rolled out in LA. I understand they want to get closer to that market and being within the city can really give them that edge. We’ll get much lower latency and really high performance massive storage.</span></p> <p><b>Google’s Partner Interconnect</b></p> <p><span style="font-weight: 400;">Partner interconnect was also announced a few months ago. Again, this is Google working with their tech partners, and a lot of the service providers, actually the top suppliers in North America, Europe, and some in Asia, as well. 
A lot of times you’d speak to a customer and the customer wouldn’t have the ability to connect directly to GCP, mainly because they’re too far from a point of presence or it’s just not technically feasible for them.</span></p> <p><span style="font-weight: 400;">But they may have something similar to an MPLS-based WAN that’s connecting all their different branch offices together, in a hub and spoke-type topology, which is where the Partner Interconnects come in. Instead of having to rearrange your network topologies, especially your WAN network topology, and if you have all your contracts already in place with a top-tier service provider such as Verizon, you can simply sign up for the service and tell them, “I want to extend my network into GCP” and into a specific region.</span></p> <p><span style="font-weight: 400;">You give them the required information and they set it up for you. Before you know it, you have a direct high bandwidth with low-latency connection into the cloud. The nice thing about it is that you&#8217;re relying on a service provider’s network. You get all the bells and whistles of HA, the liability and resilience of an MPLS network, and you’re getting direct connectivity to your cloud estate.</span></p> <p><span style="font-weight: 400;">There’s an initial setup fee, and there’s a recurring monthly fee that you have to pay to Google and, depending on your service provider and how they have things set up, there may be another fee in there.</span></p> <p><span style="font-weight: 400;">Then there’s the per-gigabyte fees. It’s not the most cost-efficient. I think Direct Interconnect is a little cheaper. Again, it really depends which POP you&#8217;re going through, but it’s definitely easier for you if you already have that infrastructure in place.</span></p> <p><b>Kubernetes updates from Google</b></p> <p><span style="font-weight: 400;">The latest from Google and Kubernetes is regional clusters going into GA. So you can now roll out multi-master clusters and in a single region. And so your master nodes will be living in every single zone within that region. You just tell GKE, “I want a regional cluster in US-east4,” for example. The service will spread the control-plane as well as your worker nodes across the whole region.</span></p> <p><span style="font-weight: 400;">You get the resiliency of single zone failures, and you don’t get any downtime during master upgrades, which is amazing because you have that federation across multiple zones. One zone’s master can go down fully or the actual pair can go down fully, and you still have the other two zones running so you get rolling upgrades, with continuous uptime.</span></p> <p><span style="font-weight: 400;">It’s a great combination that goes with last month’s announcement of regional disks. Now if you have persistent volumes that you want to have attached to your cluster, these will be available on a regional scale. I think that Google has definitely hit it out of the park with this one in terms of HA and Kubernetes HA.</span></p> <p><b>Sole-Tenant Nodes on Google Compute Engine</b></p> <p><span style="font-weight: 400;">This one is probably something that you have seen around other cloud providers before. One thing I have always liked about Google is that their infrastructure is unique. They pretty much build their own machines and their own racks and now customers can rent one of those physical servers, which is amazing. 
That’s pretty much what sole-tenant nodes are &#8211; you rent a node which is basically a server in one of Google’s data centers.</span></p> <p><span style="font-weight: 400;">You pay only for what you use, so there is pay-per-second, one-minute minimum charge &#8211; use it for an hour and then decommission it as you please. Once you have it, or once you can get a group of nodes, you can then start running your own VMs on them.</span></p> <p><span style="font-weight: 400;">The one thing that I do like about this is that there has been some licensing issues around Google and some providers. Some vendors still don’t support Google Cloud as a platform. But if you say I’m running on an actual machine, on physical hardware, does that license limitation still apply?</span></p> <p><span style="font-weight: 400;">This has been an interesting topic of conversation. It might actually be the workaround for some of those workloads.</span></p> <p><b>Dataflow’s new Streaming Engine</b></p> <p><span style="font-weight: 400;">Cloud Dataflow is pretty unique to Google. I don’t think anyone else is really running it as a data processing engine based on the Apache Beam API  other than Google . The Apache Beam API is open-source but I don’t see any other service providers adopting it today. So Google has been the the champion in getting it out there and getting it architected for really massive workloads.</span></p> <p><span style="font-weight: 400;">One of the most recent enhancements they are working on now is this streaming engine. Dataflow was originally developed to consolidate your batch and streaming workloads and now, Google is re-architecting it under the hood to make it a lot more efficient. The way it ran before is they would have an internal schedule that would spin up VMs in the background. And these VMs would have persistent disks connected to them and would have to sync certain data subsets, and have a certain understanding of what the current state of the running job was. If you were running a streaming job, you needed to know the state of your window, what data had been processed, etc. so you got that distributed computing framework going. Now that was definitely slow.</span></p> <p><span style="font-weight: 400;">What they have done now is they’ve moved that storage part out of the VMs into a back-end service which is pretty much invisible to your machines.  So now your machines literally just have to spin up, they have access to that back-end service, they do all their processing and they scale up and scale down based on that. So they have become a lot more ephemeral, their auto scaling is a lot more reactive which is great and you can see it in the published benchmarks.</span></p> <p><span style="font-weight: 400;">If you go to the Google Cloud Big Data blog, you can see some metrics they’ve run and the benchmarks and how well it actually associates with incoming flows of data as it needs to process them.</span></p> <p><span style="font-weight: 400;">The other nice thing is that you don’t need VMs that are as big anymore. So, you can run more VMs and smaller VMs which is just that much more agile for your workloads.</span></p> <p><span style="font-weight: 400;">This is currently experimentally available. You can enable it as an experimental pipeline parameter when you are deploying a job and they also say that you don’t need to redeploy your pipelines when you’re applying service updates. 
I still have to play with this to understand exactly how that would work.</span></p> <p><span style="font-weight: 400;">The one gotcha though, is you do get billed for the amount of streaming data that you process. Before, it was based on the number of VMs that you were running over time. Now, because there is a back-end service, Google needs to charge you something. So, you have that added cost but then again, you’re probably going to be using fewer VMs and smaller VMs with much smaller disks. So, I think cost-wise it will probably balance out.</span></p> <p><span style="font-weight: 400;">It is a great architectural shift in the operating model of Dataflow and it is nice to see Google creating the very agile and optimal workflows toward them.</span></p> <p><i><span style="font-weight: 400;">This was a summary of the Google Cloud Platform topics we discussed during the podcast, Chris also welcomed</span></i><a href="https://www.linkedin.com/in/gregbaker2/"> <i><span style="font-weight: 400;">Greg Baker</span></i></a><i><span style="font-weight: 400;"> (Amazon Web Services), and</span></i><a href="https://pythian.com/experts/john-laham/"> <i><span style="font-weight: 400;">Warner Chaves</span></i></a><i><span style="font-weight: 400;"> (Microsoft Azure) who also discussed topics related to their expertise.</span></i></p> <p><i><span style="font-weight: 400;">Click</span></i><a href="https://blog.pythian.com/cloudscape-podcast-july-2018/"> <i><span style="font-weight: 400;">here</span></i></a><i><span style="font-weight: 400;"> to hear the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.</span></i></p> </div></div> John Laham https://blog.pythian.com/?p=104827 Wed Jul 25 2018 12:46:03 GMT-0400 (EDT) Three Reasons You Need a Customer Data Platform Right Now https://blog.pythian.com/three-reasons-need-customer-data-platform-right-now/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">A Twitter user recently turned to the platform to issue an appeal to her bank: “Please don’t send me emails asking if I’m ready to buy a house ten minutes after emailing me an overdraft notice.” </span></p> <p><span style="font-weight: 400;">That one tweet neatly sums up everything that’s wrong with data in 2018. Simply stated, the left-hand doesn’t know what the right hand is doing. </span></p> <p><span style="font-weight: 400;">Consumers understand how much of their information businesses now possess, and they expect their needs to be anticipated and met seamlessly across all channels. It’s a reasonable expectation, but the reality currently falls well short of the mark. Big data still isn’t living up to its potential to build customer relationships and increase sales. According to a </span><a href="http://info.forbes.com/rs/790-SNV-353/images/FI_Treasure%20Data_Data%20Versus%20Goliath.pdf"><span style="font-weight: 400;">recent survey</span></a><span style="font-weight: 400;"> from Forbes Insights, only one in five executives surveyed consider their organizations to be leaders in customer data management, and only one in four say that they’re able to make full use of the data they control.</span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">Fortunately, now there’s a solution. 
The Customer Data Platform (CDP) offers a smart and elegant solution to the problem of wasted data, and it’s a solution every business needs to have in place. Here are three reasons why:</span></p> <ol> <li><b> It Actually Keeps the Promise of Unifying Data</b></li> </ol> <p><span style="font-weight: 400;">Business has long recognized the potential of big data. But the systems for managing that data haven’t kept pace with the explosion of devices, apps, tools and platforms in operation today. As a result, structured and unstructured data has stayed fragmented in outdated silos. Gleaning insights takes significant time and resources — if it can happen at all.</span></p> <p><span style="font-weight: 400;">A CDP solves that problem. It takes data from all your sources — apps, websites, call centers, transactional systems and so on — and unites it in a single, cloud-based platform where it can be analyzed, understood and acted on.</span></p> <p><span style="font-weight: 400;">The promise of unified, actionable data isn’t new. But prior technologies were limited in their ability to keep that promise. For example, Data Management Platforms were designed to target and retarget anonymous visits to ad networks, and the third-party data they offer is thus highly suspect in quality. Customer Relationship Management systems provide better quality data from more sources, but they can’t import or make sense of information from external databases. CDP solves those problems. With CDP, you get a meaningful understanding of data that comes directly from the customer through each and every touchpoint of the customer journey.</span></p> <ol start="2"> <li><b> It’s Actually Designed For The Technical Layperson </b></li> </ol> <p><span style="font-weight: 400;">Despite the name, most marketing databases aren’t particularly useful for the purposes of marketing. That’s because data lakes and warehouses typically can’t be accessed or interpreted without the help of an IT professional. This reality results in delays and frustrations for the non-tech person, and it increases the risk of a broken-telephone interpretation of data.</span></p> <p><span style="font-weight: 400;">By contrast, a CDP is designed specifically for the needs of a business, both in the data it collects and how it serves up that data. A CDP simplifies database creation, and it allows non-technical personnel greater control over its contents. No business today can afford a delay in understanding its data, and a CDP eliminates that delay.</span></p> <ol start="3"> <li><b> It Offers An Ever-Growing Understanding Of Your Customer</b></li> </ol> <p><span style="font-weight: 400;">The data collected in a CDP isn’t anonymous, and it’s not a snapshot of an isolated action. It is an ever-growing trove of understanding about individual users as they act (or decline to act) across all your touchpoints. With that understanding comes the power to engage with your customer in countless ways. For example, your CDP can be used to suggest offers based on interests that are spotted through the user’s engagement with your content. At-risk customers and abandoned shopping carts can be rescued with personalized messages and incentives. And highly granular audience segments can be targeted through Facebook, Google or other platforms. The list of opportunities is long, and its potential to grow is infinite.</span></p> <p><span style="font-weight: 400;">Recognizing the potential of a CDP is just the beginning because not all CDPs are alike. 
In an age of exponentially increasing volumes of data, you need to consider issues of scalability, security, data variety and privacy. </span></p> <p><span style="font-weight: 400;">That’s where Pythian’s Kick Analytics as a Service (Kick AaaS) comes in. This fully managed, cloud-based service offers a single platform for accessing data in all formats and from all your sources. Kick AaaS offers advanced analytics and visualizations to help you understand and act on the potential inherent in every customer interaction. The offering here isn’t just technical. It is deeply informed by the priorities of your business.</span></p> <p><span style="font-weight: 400;">With Kick AaaS, customer data is no longer a sidebar to your organization. At last, your data can fulfill its potential as the foundation of your success. Instead of simply reacting to past events, you’re now able to predict changes in customer thinking and behavior, and plan accordingly. In fact, the volume and precision of Kick AaaS analytics can even help you design new products in response to the insights that emerge.</span></p> <p><span style="font-weight: 400;">The business world has been promised actionable customer data for decades. With the arrival of CDPs like Pythian’s Kick AaaS, the promise is now, finally, a reality.</span></p> <p><i><span style="font-weight: 400;">To learn more about Pythian’s Kick AaaS, click </span></i><a href="https://pythian.com/analytics-as-a-service/"><i><span style="font-weight: 400;">here</span></i></a><i><span style="font-weight: 400;">.</span></i></p> </div></div> Lynda Partner, VP Marketing and Analytics as a Service https://blog.pythian.com/?p=104799 Wed Jul 25 2018 08:00:03 GMT-0400 (EDT) Installation of Oracle Database 18.3.0 On-Prem for Linux http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/Y0P0OGUpvAA/ <p><img class="alignleft wp-image-8262" src="https://oracle-base.com/blog/wp-content/uploads/2018/07/18c-installation.png" alt="" width="200" height="275" />Hot on the release of Oracle Database 18.3.0 On-Prem for Linux, I got on the case with doing some installations. The first of which can be found here.</p> <ul> <li><a href="/articles/18c/oracle-db-18c-installation-on-oracle-linux-6-and-7">Oracle Database 18c Installation On Oracle Linux 6 (OL6) and 7 (OL7)</a></li> <li><a href="/articles/18c/oracle-db-18c-installation-on-fedora-28">Oracle Database 18c Installation On Fedora 28 (F28)</a></li> </ul> <p>I few things to point out about these&#8230;</p> <p>First, I&#8217;ve gone with a read-write Oracle home. I like the idea of the read-only home, but I&#8217;ve not played around with it enough at this point to commit.</p> <p>The other thing is the Oracle home path itself. Currently I&#8217;m using &#8220;18.0.0&#8221;, rather than &#8220;18.3.0&#8221;. This feels a little strange to me, but I&#8217;m not sure how the Release Updates (RUs) will work out for this. I&#8217;m guessing what I&#8217;ll end up doing is creating a new Oracle home when a RU drops, then switch across to it, so it would be more appropriate to use 18.3.0, with a switch to 18.4.0 later. I&#8217;m still trying to decide how I want to play this. If you look at the SQL*Plus banner you will see this.</p> <pre>Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0</pre> <p>So neither of these choices feel bad. 
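</p> <p>For anyone who hasn&#8217;t tried the new image-based installation yet, the basic shape of it is roughly as follows. This is a cut-down sketch rather than the exact commands from the articles linked above, and the zip name, paths and response file settings are just ones I&#8217;ve assumed here, so check those articles for the real detail.</p> <pre># Rough sketch only: 18c ships as a pre-built home image, so the install is
# essentially "unzip into the Oracle home, then run the installer from there".
export ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1   # the "18.0.0" path discussed above
mkdir -p $ORACLE_HOME
cd $ORACLE_HOME
unzip -oq /tmp/LINUX.X64_180000_db_home.zip

# Silent install driven by a response file (see the linked articles for the
# parameters you actually need to set in it).
./runInstaller -silent -responseFile $ORACLE_HOME/install/response/db_install.rsp</pre> <p>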
<img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I usually post pictures of the installer, but I think this is sending the wrong message. IMHO you shouldn&#8217;t be installing this way, so this time I&#8217;ve made the break and only posted the silent installation.</p> <p>In addition to the articles I&#8217;ve got some Vagrant builds for it (<a href="https://github.com/oraclebase/vagrant/tree/master/ol7_183">OL7</a>, <a href="https://github.com/oraclebase/vagrant/tree/master/f28_183">F28</a>). The OL7 one also includes APEX and ORDS etc.</p> <p>I&#8217;ve got a couple more things in the pipeline, which will probably come out tonight. We shall see.</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/07/25/installation-of-oracle-database-18-3-0-on-prem-for-linux/">Installation of Oracle Database 18.3.0 On-Prem for Linux</a> was first posted on July 25, 2018 at 7:55 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/Y0P0OGUpvAA" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8261 Wed Jul 25 2018 02:55:43 GMT-0400 (EDT) Oracle 12c Release 2 New Feature DGMGRL Scripting https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-dgmgrl-scripting/ <p>New in Oracle 12c Release 2 is the ability for scripts to be executed through the Data Guard broker DGMGRL command-line interface very similar to like say in SQL*Plus. </p> <p>DGMGRL commands, SQL commands using the DGMGRL SQL command, and OS commands using the new HOST (or !) capability can be </p><div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-dgmgrl-scripting/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8188 Wed Jul 25 2018 00:02:38 GMT-0400 (EDT) Oracle 12c Release 2 New Feature – SQL HISTORY https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-sql-history/ <p>Oracle 12c Release 2 now provides the ability to reissue the previously executed SQL*Plus commands.</p> <p>This functionality is similar to the shell history command available on the UNIX platform.</p> <p>This feature enables us to run, edit, or delete previously executed SQL*Plus, SQL, or PL/SQL commands from the <strong>history list in </strong></p><div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. 
Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-sql-history/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8183 Tue Jul 24 2018 23:45:00 GMT-0400 (EDT) MongoDB Profiler http://oracle-help.com/articles/mongodb/mongodb-profiler/ <p>Profiler collects fine grained data about MongoDB write operations, cursors, database commands on a running mongod instance.<br /> You can enable profiling on a per-database or per-instance basis.<br /> The profiler is off by default.</p> <p>Profiler writes all the data it collects to the <strong>system.profile</strong> collection, <strong>which is a capped collection.</strong></p> <p><strong>*.Profiling Levels</strong><br /> 0 &#8211; the profiler is off, does not collect any data<br /> 1 &#8211; collects profiling data for slow operations only. By default slow operations are those slower than 100 milliseconds.<br /> 2 &#8211; collects profiling data for all database operations.</p> <p><strong>Enable Profiling :  </strong><span style="color: #0000ff">db.setProfilingLevel(1, { slowms: 20 })</span></p> <p><strong>Disable Profiling :  </strong><span style="color: #0000ff">db.setProfilingLevel(0)</span></p> <p><strong>Get Profiling Info: </strong><span style="color: #0000ff">db.getProfilingStatus()</span></p> <p>&nbsp;</p> <p>All details regarding performance issues related written in mongod.log file by default located in <span style="color: #0000ff">/var/log/messages/mongd/mongd.og</span></p> <p>&nbsp;</p> <p>For more detail about profiling : <a href="https://docs.mongodb.com/manual/administration/analyzing-mongodb-performance/#database-profiling">Click Here</a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-profiler/">MongoDB Profiler</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5205 Tue Jul 24 2018 20:48:30 GMT-0400 (EDT) MongoDB Statistics http://oracle-help.com/articles/mongodb/mongodb-statistics/ <p>If you want to objects statistics like (No. of object, collections, datasize etc. 
Available) follow below the commands.</p> <p><span style="color: #0000ff">use database</span><br /> <span style="color: #0000ff">db.stats();                             &lt;&#8212;&#8212;&#8212;&#8212;&#8212;- Get the database level statistics</span><br /> <span style="color: #0000ff">db.technology.stats();          &lt;&#8212;&#8212;&#8212;&#8212;&#8212;- Get the  collection level statistics</span></p> <p><span style="color: #0000ff">db.runCommand( { collStats : &#8220;restaurant&#8221;, scale: 1024 } )</span></p> <p>&nbsp;</p> <p><strong>For more Detail about Stats: </strong></p> <p>Database Stats: <a href="https://docs.mongodb.com/manual/reference/method/db.stats/">Click Here</a></p> <p>Collection Stats: <a href="https://docs.mongodb.com/manual/reference/method/db.collection.stats/">Click Here</a></p> <p>Index Stats: <a href="https://docs.mongodb.com/manual/reference/operator/aggregation/indexStats/">Click Here</a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-statistics/">MongoDB Statistics</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5203 Tue Jul 24 2018 20:40:27 GMT-0400 (EDT) MongoDB Clone Collection http://oracle-help.com/articles/mongodb/mongodb-clone-collection/ <div>Mongos does not support db.cloneCollection().</div> <div></div> <div><strong>1st Method</strong></div> <div><span style="color: #0000ff">db.cloneCollection(from, collection, query);</span></div> <div><span style="color: #0000ff">db.cloneCollection(&#8216;mongodb.example.net:27017&#8217;, &#8216;profiles&#8217;,{ &#8216;active&#8217; : true } )</span></div> <div></div> <div><strong>2nd Method depricated since 3.0 version</strong></div> <div><span style="color: #0000ff">db.collection.copyTo(&#8220;new collection name&#8221;); # But its depricated in 3.4 but it works.</span></div> <div></div> <div><strong>3st Method With Mongodump</strong></div> <div><span style="color: #0000ff">mongodump &#8211;db db-name &#8211;collection collection-name &#8211;archive=collection-name.archive</span></div> <div></div> <div></div> <div><strong>For more detail about Clone Collection:</strong> Click Here</div> <div></div> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-clone-collection/">MongoDB Clone Collection</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5201 Tue Jul 24 2018 20:32:54 GMT-0400 (EDT) MongoDB Host and Port http://oracle-help.com/articles/mongodb/mongodb-host-and-port/ <p>Get the detail of mongodb Host and mongodb runing on which port fo you get all these type of info from mongo shell.</p> <p><span style="color: #0000ff">getHostName(); # Hostname info</span></p> <p><span style="color: #0000ff">db.serverCmdLineOpts(); # Port info</span><br /> <span style="color: #0000ff">db.serverCmdLineOpts().parsed.net.port</span></p> <p><span style="color: #0000ff">db.runCommand({whatsmyuri : 1}) # Its is showing both Hostname and Port</span></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-host-and-port/">MongoDB Host and Port</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5199 Tue Jul 24 2018 20:29:32 GMT-0400 (EDT) MongoDB Rename Collection http://oracle-help.com/articles/mongodb/mongodb-rename-collection/ <p>Yes you can rename the 
collection if required but it is not supported on sharded collections.</p> <p><span style="color: #0000ff">db.collection.renameCollection() </span><br /> <span style="color: #0000ff">db.collection.renameCollection();</span></p> <p><span style="color: #0000ff">db.test_my.renamecollection(&#8220;test&#8221;);</span></p> <p><span style="color: #0000ff">use admin</span><br /> <span style="color: #0000ff">db.adminCommand( { renameCollection: &#8220;mydb.test_my&#8221;, to: &#8220;mydb.test&#8221; } )</span></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-rename-collection/">MongoDB Rename Collection</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5197 Tue Jul 24 2018 20:27:42 GMT-0400 (EDT) MongoDB User Creation Details http://oracle-help.com/articles/mongodb/mongodb-user-creation-details/ <p>Good Morning All,<br /> Now today give you a small demo for creating user, gives permission and getting user info.</p> <p><strong># Create user</strong><br /> <span style="color: #000000"><strong>Syntax:</strong></span><br /> <span style="color: #0000ff">db.createuser(</span><br /> <span style="color: #0000ff">{user : &#8220;mylogin&#8221; , pwd : &#8220;mylogin&#8221;,</span><br /> <span style="color: #0000ff">roles : []</span><br /> <span style="color: #0000ff">});</span></p> <p><span style="color: #0000ff">db.createuser(</span><br /> <span style="color: #0000ff">{user : &#8220;mylogin&#8221; , pwd : &#8220;mylogin&#8221;,</span><br /> <span style="color: #0000ff">roles : [{role : &#8220;userAdminAnyDatabase&#8221;, db : &#8220;admin&#8221;}]});     &lt;&#8212;&#8211; Full admin level Permission on admin user</span></p> <p><span style="color: #0000ff">db.createuser(</span><br /> <span style="color: #0000ff">{user : &#8220;mylogin&#8221; , pwd : &#8220;mylogin&#8221;,</span><br /> <span style="color: #0000ff">roles : [</span><br /> <span style="color: #0000ff">{ role : &#8220;read&#8221;, db : &#8220;admin&#8221;},                   &lt;&#8212;&#8211; Read Permission on Local admin DB</span><br /> <span style="color: #0000ff">{role : &#8220;readWrite&#8221; , db: &#8220;local&#8221;}               &lt;&#8212;&#8211; Read/Write Permission on Local DB</span><br /> <span style="color: #0000ff">] }</span><br /> <span style="color: #0000ff">);</span></p> <p><strong># Change User Password</strong><br /> <span style="color: #0000ff">use mylogin</span><br /> <span style="color: #0000ff">db.changeUserPassword(&#8220;accountUser&#8221;, &#8220;test#123&#8221;)</span></p> <p><strong># Roles</strong><br /> <span style="color: #0000ff">db.grantRolesToUser({ &#8220;&lt;username&gt;&#8221;,&#8221;roles&#8221; : [ { &#8220;role&#8221; : &#8220;assetsReader&#8221;, &#8220;db&#8221; : &#8220;assets&#8221; }]});  &lt;&#8212;- Grant role to User</span></p> <p><span style="color: #0000ff">db.revokeRoleFromUser({});                    &lt;&#8212;&#8212;&#8212;&#8211; Revoke role from User</span></p> <p><span style="color: #0000ff">Few RolesuserAadminAnydatabase,</span><span style="color: #0000ff">read, </span><span style="color: #0000ff">readWrite</span></p> <p>&nbsp;</p> <p><strong># Userinfo getting</strong><br /> <span style="color: #0000ff">use admin</span><br /> <span style="color: #0000ff">db.system.users.find();</span></p> <p><span style="color: #0000ff">db.getUsers();             &lt;&#8212;&#8212;&#8212;- It&#8217;s also showing the details of all databases user</span></p> <p><span style="color: 
#0000ff">db.runCommand( { usersInfo: 1 } );                 &lt;&#8212;&#8212;&#8212;&#8212; view all users of databases</span></p> <p><span style="color: #0000ff">db.runCommand(</span><br /> <span style="color: #0000ff">{</span><br /> <span style="color: #0000ff">usersInfo: { user: &#8220;Aman&#8221;, db: &#8220;home&#8221; },</span><br /> <span style="color: #0000ff">showPrivileges: true,</span><br /> <span style="color: #0000ff">showCredentials : true })</span></p> <p><span style="color: #0000ff">db.runCommand( { usersInfo: [ { user: &#8220;Aman&#8221;, db: &#8220;home&#8221; }, { user: &#8220;Tom&#8221;, db: &#8220;myApp&#8221; } ],</span><br /> <span style="color: #0000ff">showPrivileges: true</span><br /> <span style="color: #0000ff">} )</span></p> <p>&nbsp;</p> <p><strong>For More Detail about </strong></p> <p>User Creation: <a href="https://docs.mongodb.com/manual/reference/method/db.createUser/">Click Here</a></p> <p>User Roles: <a href="https://docs.mongodb.com/manual/reference/built-in-roles/">Click Here</a></p> <p>User Privileges: <a href="https://docs.mongodb.com/manual/reference/security/">Click Here</a></p> <p>Manage User and Roles: <a href="https://docs.mongodb.com/manual/tutorial/manage-users-and-roles/">Click Here</a></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-user-creation-details/">MongoDB User Creation Details</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5195 Tue Jul 24 2018 20:25:03 GMT-0400 (EDT) HugePages for Oracle database in Oracle Cloud https://blog.pythian.com/hugepages-for-oracle-database-in-oracle-cloud/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>I recently published a <a href="https://blog.pythian.com/hugepages-support-oracle-rds/" target="_blank" rel="noopener">blog post</a> about large pages support in AWS RDS for Oracle and someone asked me if and how it works on Oracle cloud. I didn&#8217;t actually know the answer and decided to look it up. Documentation for Oracle Cloud didn&#8217;t say much about HugePages or large pages support for Oracle Cloud Database Service. I only found directions for Oracle Database Exadata Cloud Service in the <a href="https://docs.oracle.com/en/cloud/paas/exadata-cloud/csexa/manage-huge-pages.html" target="_blank" rel="noopener">documentation</a>. The most reliable way, of course, was to go and check and that&#8217;s what I did. Please keep in mind that the information in this post may become obsolete since Oracle Cloud is constantly undergoing changes.</p> <p>Oracle has two types of Oracle Cloud platforms where one is called &#8220;Classic&#8221; and another is called &#8220;Oracle Cloud Infrastructure&#8221; or &#8220;OCI&#8221;. I started my tests with the latter. In the OCI interface I created a DB system (VM based) with “VM.Standard1.1” shape which comes with 1 OCPU and 7 GB of memory.</p> <p>As soon as the VM was successfully deployed I started to check the system. To my surprise I found that the HugePages were not configured. 
On the OS level we saw 0 defined large pages in the memory:</p> <pre lang="bash" escaped="true">[opc@gl ~]$ cat /proc/meminfo | grep Hu AnonHugePages: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB [opc@gl ~]$ </pre> <p>And the database instance parameter &#8220;use_large_pages&#8221; was explicitly setup to false:</p> <pre lang="sql" escaped="true">SQL&gt; show parameter large_pages NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ use_large_pages string false SQL&gt; </pre> <p>The large pages were not set up out of the box with the provisioned service as I was expecting. On the Oracle Cloud, we possessed full control on OS level for our VM. What if we tried to enable the HugePages for the database? The database was not using AMM and was configured for ASMM instead which was fully aligned with using large pages for SGA.</p> <pre lang="sql" escaped="true">SQL&gt; show parameter memory_target NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ memory_target big integer 0 SQL&gt; show parameter memory_max_target NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ memory_max_target big integer 0 SQL&gt; show parameter sga_target NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ sga_target big integer 1792M SQL&gt; show parameter sga_max NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ sga_max_size big integer 1792M SQL&gt; </pre> <p>Also I noticed that the users limits for Oracle were already configured to support up to almost 5Tb of locked memory.</p> <pre lang="bash" escaped="true">[oracle@gl ~]$ ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 26559 max locked memory (kbytes, -l) 5158329 max memory size (kbytes, -m) unlimited open files (-n) 131072 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 131072 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited [oracle@gl ~]$ </pre> <p>It seemed like the system was almost ready and we should just enable HugePages on OS level, switch the use_large_pages parameter to &#8220;TRUE&#8221; or &#8220;ONLY&#8221; and bounce the instance. It was exactly what I did.<br /> I changed the database instance parameter:</p> <pre lang="sql" escaped="true">SQL&gt; alter system set use_large_pages=true sid='*' scope=spfile; System altered. 
SQL&gt; </pre> <p>Stopped the instance using srvctl:</p> <pre lang="bash" escaped="true">[oracle@gl ~]$ srvctl stop database -db orcl_iad1zp </pre> <p>Modified the hugepages.</p> <pre lang="bash" escaped="true">[root@gl ~]# vi /etc/sysctl.conf [root@gl ~]# cat /etc/sysctl.conf | grep nr_hugepages vm.nr_hugepages = 900 [root@gl ~]# sysctl -p … [root@gl ~]# cat /proc/meminfo | grep Hu AnonHugePages: 0 kB HugePages_Total: 900 HugePages_Free: 900 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB [root@gl ~]# </pre> <p>And started the database back:</p> <pre lang="bash" escaped="true">[oracle@gl ~]$ srvctl start database -db orcl_iad1zp </pre> <p>After that I could see that the memory in large pages had been used and the instance alert log clearly stated that 897 large pages out of 900 were allocated by the database.</p> <pre lang="bash" escaped="true">[oracle@gl ~]$ cat /proc/meminfo | grep Hu AnonHugePages: 0 kB HugePages_Total: 900 HugePages_Free: 6 HugePages_Rsvd: 3 HugePages_Surp: 0 Hugepagesize: 2048 kB [oracle@gl ~]$ ********************************************************************** 2018-07-06T19:47:45.508519+00:00 Dump of system resources acquired for SHARED GLOBAL AREA (SGA) 2018-07-06T19:47:45.508680+00:00 Per process system memlock (soft) limit = UNLIMITED 2018-07-06T19:47:45.508773+00:00 Expected per process system memlock (soft) limit to lock SHARED GLOBAL AREA (SGA) into memory: 1794M 2018-07-06T19:47:45.508907+00:00 Available system pagesizes: 4K, 2048K 2018-07-06T19:47:45.509119+00:00 Supported system pagesize(s): 2018-07-06T19:47:45.509173+00:00 PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s) 2018-07-06T19:47:45.509220+00:00 4K Configured 3 3 NONE 2018-07-06T19:47:45.509400+00:00 2048K 900 897 897 NONE 2018-07-06T19:47:45.509481+00:00 ********************************************************************** </pre> <p>Everything worked out fine and I got the database with SGA running in HugePages memory. I didn&#8217;t see any errors or problems while doing some small tests on the database. Everything worked correctly. I tried with a couple of different shapes for the VM and got the similar results. It appeared that on the OCI platform the HugePages were not used for database systems based on VM by default. I didn&#8217;t try all types of VM but the configuration and behaviour looked the same for the other shapes only SGA and PGA parameters were growing along with new size of the instances. For example, on the &#8220;VM.Standard1.2&#8221; we were getting more memory on the box, bigger SGA but the large pages were not used anyway.</p> <p>The next step was to check the Oracle &#8220;Classic&#8221; cloud database service. It had different definition for the standard configurations and I started from &#8220;OC4&#8221; with 2 OCPU and 15 GB memory. The instance was created and I could see that it was using HugePages out of box. 
It was exactly what I expected from the Oracle Cloud.<br /> The database instance parameter &#8220;use_large_pages&#8221; was setup to &#8220;TRUE&#8221;</p> <pre lang="sql" escaped="true">SQL&gt; show parameter use_large_pages NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ use_large_pages string TRUE SQL&gt; show parameter sga_target NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ sga_target big integer 5296M SQL&gt; </pre> <p>And the instance alert log showed clearly that 2649 large pages out of configured 2916 were used allocated.</p> <pre lang="text" escaped="true"> Starting ORACLE instance (normal) (OS id: 13119) 2018-07-20T16:52:59.994496+00:00 ********************************************************************** 2018-07-20T16:52:59.994556+00:00 Dump of system resources acquired for SHARED GLOBAL AREA (SGA) 2018-07-20T16:52:59.994659+00:00 Per process system memlock (soft) limit = 128G 2018-07-20T16:52:59.994710+00:00 Expected per process system memlock (soft) limit to lock SHARED GLOBAL AREA (SGA) into memory: 5298M 2018-07-20T16:52:59.994809+00:00 Available system pagesizes: 4K, 2048K 2018-07-20T16:52:59.994901+00:00 Supported system pagesize(s): 2018-07-20T16:52:59.994948+00:00 PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s) 2018-07-20T16:52:59.994998+00:00 4K Configured 7 7 NONE 2018-07-20T16:52:59.995102+00:00 2048K 2916 2649 2649 NONE 2018-07-20T16:52:59.995150+00:00 ********************************************************************** </pre> <p>Here is a short summary. Oracle database service on OCI gives you VM and database instance without large pages configuration when the database service on &#8220;Classic&#8221; cloud platform provides you a VM and an instance with large pages setup. I was not able to find any recommendation or notes in the documentation for Oracle Cloud regarding large page usage for database service on &#8220;OCI&#8221; cloud. I was surprised that it was not configured out of the box there. From my point of view, it definitely makes sense to use large pages if a system supports it. So, if you still don&#8217;t use it, you may try to test it on OCI and maybe double verify with Oracle support before going to production.</p> </div></div> Gleb Otochkin https://blog.pythian.com/?p=104804 Tue Jul 24 2018 14:03:53 GMT-0400 (EDT) LEAP#402 Rolling with the BoldportClub Pips https://blog.tardate.com/2018/07/leap402-rolling-with-the-boldportclub-pips.html <p>The BoldportClub Pips circuit is based on “Dicing with LEDs” by Elektor (December 2006), but with a new PCB designed as only Boldport can (and a flashy red baggie).</p> <p>The ripple counter toggles through all die states at around <a href="https://www.wolframalpha.com/input/?i=1+%2F+(2.2+*+(470k%CE%A9*470k%CE%A9)%2F(470k%CE%A9%2B470k%CE%A9)+*+220pF)">8.8kHz</a>. Diode steering is used to light the appropriate LEDs for each state and reset the count when it gets to “7”. This runs fast enough that it appears all LEDs are on at the same time. 
<p>When the button is pressed, the counter stops - this is a “roll”.</p> <p>This is a similar concept (but quite a different implementation) to the <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Electronics101/555Timer/Dice">LEAP#229 Dice</a> project, which uses a 555 and CD4017 to also achieve a slow-down effect.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/pips">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/pips"><img src="https://leap.tardate.com/BoldportClub/pips/assets/pips_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/07/leap402-rolling-with-the-boldportclub-pips.html Tue Jul 24 2018 09:07:00 GMT-0400 (EDT) LEAP#397 I²C Scanner https://blog.tardate.com/2018/07/leap397-i2c-scanner.html <p>This is a simple sketch, inspired by <a href="http://playground.arduino.cc/Main/I2cScanner">i2c_scanner</a>, which scans for the presence of addresses in the full 7-bit address space.</p> <p>This can be very helpful when trying to use I²C modules where the default address is not documented. As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/I2CScanner">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/I2CScanner"><img src="https://leap.tardate.com/playground/I2CScanner/assets/I2CScanner_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/07/leap397-i2c-scanner.html Tue Jul 24 2018 08:39:06 GMT-0400 (EDT) Oracle Database 18.3.0 On-Prem for Linux http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/rETnfKC5pho/ <p><img class="alignleft wp-image-8258" src="https://oracle-base.com/blog/wp-content/uploads/2018/07/18c.png" alt="" width="200" height="275" />I was just about to go to bed when I saw <a href="https://mikedietrichde.com/2018/07/23/oracle-database-18-3-0-on-premises-available-for-download-on-linux/">this post by Mike Dietrich</a>. Yay!</p> <p>I&#8217;ve had access to 18c on the Oracle Cloud for some time, so I&#8217;ve already been able to write a bunch of stuff about it (<a href="/articles/18c/articles-18c">see here</a>), but it always feels geekier when it&#8217;s running on your own kit. It also makes demos a little less dangerous if you can fall back to your own machine. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Of course I&#8217;m starting the downloads now, so maybe I&#8217;ll get to have a play tomorrow? <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> If you want it, you can grab it from here.</p> <ul> <li><a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html">Oracle Database Software Downloads</a></li> </ul> <p>Happy upgrading&#8230;</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/07/23/oracle-database-18-3-0-on-prem-for-linux/">Oracle Database 18.3.0 On-Prem for Linux</a> was first posted on July 23, 2018 at 10:54 pm.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. 
If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/rETnfKC5pho" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8257 Mon Jul 23 2018 17:54:36 GMT-0400 (EDT) Power BI 101- Logging and Tracing, Part II https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-ii/ <p>So we went over locations and the basics of logging and tracing in Power BI.  I now want to know how to make more sense from the data.  In Oracle, we use a utility called TKProf, (along with others and a number of third party tools) to make sense of what comes from the logs.  SQL Server has Log Analytics and the profiler, but what can I do with Power BI?</p> <p>First, let&#8217;s discuss what happens when we have actual activity.  In my first post, the system was pretty static.  This time I chose to open up a file with larger data refreshes from multiple sources, added tables, calculated columns and measures.  The one Access DB has over 10 million rows that is refreshed when I first open the PBIX file:</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-ii/logging/" rel="attachment wp-att-8063"><img class="alignnone wp-image-8063" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging.png?resize=628%2C375&#038;ssl=1" alt="" width="628" height="375" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging.png?resize=1024%2C612&amp;ssl=1 1024w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging.png?resize=300%2C179&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging.png?resize=768%2C459&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging.png?w=1300&amp;ssl=1 1300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging.png?w=1950&amp;ssl=1 1950w" sizes="(max-width: 628px) 100vw, 628px" data-recalc-dims="1" /></a></p> <p>Post loading, there&#8217;s a significant increase in number of MS Mashup Container, (calculations and measures) and msmdsrv, (data loading) logging:</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-ii/logging1/" rel="attachment wp-att-8062"><img class="alignnone size-large wp-image-8062" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging1.png?resize=650%2C358&#038;ssl=1" alt="" width="650" height="358" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging1.png?resize=1024%2C564&amp;ssl=1 1024w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging1.png?resize=300%2C165&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging1.png?resize=768%2C423&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging1.png?w=1300&amp;ssl=1 1300w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>Do I really want to go through all this data by hand?  BI is a reporting tool, so what if I bring them into Power BI?  Let&#8217;s start with the first MS Mashup Container log-</p> <p>In Power BI, click on Get Data &#8211;&gt; Text and change the file type to &#8220;All Files&#8221; in the explorer and go to the directory that contains the trace files:</p> <pre>C:\Users\&lt;user&gt;\AppData\Local\Microsoft\Power BI Desktop\Traces\Performance</pre> <p>Remember that you will need to have &#8220;hidden items&#8221; set to be displayed to browse down to this folder.  
Choose the files you wish to load in the directory, and in Power BI choose a Custom delimiter of a quote (&#8220;) to separate the file. This will load a file with a few columns you&#8217;ll need to remove, as they contain data like colons, nulls and other syntax from the file. Once you&#8217;ve completed this, you&#8217;ll most likely have a table with 15 columns of valuable data:</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-ii/columns_logging-2/" rel="attachment wp-att-8065"><img class="alignnone wp-image-8065" src="https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/columns_logging-1.png?resize=175%2C361&#038;ssl=1" alt="" width="175" height="361" srcset="https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/columns_logging-1.png?w=440&amp;ssl=1 440w, https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/columns_logging-1.png?resize=145%2C300&amp;ssl=1 145w" sizes="(max-width: 175px) 100vw, 175px" data-recalc-dims="1" /></a></p> <p>I&#8217;ve renamed the columns to something more descriptive, and I can now apply these changes and pull some value from the data.</p> <p>Using the provided data, I can then produce a report that tells me what types of processes are the largest users of resources and time.  I can provide reports that give a visual of what&#8217;s going on in a Power BI environment.  The report is pretty straightforward:  Wait events against percentage of waits, Memory allocation over time, Time Waited and Wait Count.  These reports may seem really foreign to most data scientists, but for a DBA they should resonate and suggest ways they can offer assistance to the Power BI group in optimization.</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-ii/logging_bi/" rel="attachment wp-att-8066"><img class="alignnone size-large wp-image-8066" src="https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging_bi.png?resize=650%2C378&#038;ssl=1" alt="" width="650" height="378" srcset="https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging_bi.png?resize=1024%2C596&amp;ssl=1 1024w, https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging_bi.png?resize=300%2C175&amp;ssl=1 300w, https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging_bi.png?resize=768%2C447&amp;ssl=1 768w, https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging_bi.png?w=1300&amp;ssl=1 1300w, https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/logging_bi.png?w=1950&amp;ssl=1 1950w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /></a></p> <p>Going forward, I can add a hierarchy to drill down into interesting areas of waits and add more files, identifying each table by the unique identifier and date of the file it came from.  I expect my reports and my direction to look different from the direction many have taken with Power BI performance, but I wanted to demonstrate that optimization is always about time.  I admit fully that I&#8217;m still learning, but I am also approaching this from a database optimization perspective.  
Please let me know your thoughts?</p> <p>Happy hunting, folks!</p> <hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBA Kevlar</a> [<a href="https://dbakevlar.com/2018/07/power-bi-101-logging-and-tracing-part-ii/">Power BI 101- Logging and Tracing, Part II</a>], All Right Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8061 Mon Jul 23 2018 17:09:45 GMT-0400 (EDT) Finally, the wait is over!! Now Oracle 18c is Available for on-Premise http://oracle-help.com/oracle-18c/finally-the-wait-is-over-now-oracle-18c-is-available-for-on-premise/ <p><strong>Oracle Database 18c </strong>was released for <strong>on-premise</strong> as well. Oracle Database 18c, the latest generation of the world’s most popular database, is now available in Oracle Cloud and on Oracle Exadata. 
It provides businesses of all sizes with access to the world’s fastest, most scalable and reliable database technology for secure and cost-effective deployment of transactional and analytical workloads in the Cloud.</p> <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg"><img data-attachment-id="5184" data-permalink="http://oracle-help.com/oracle-18c/finally-the-wait-is-over-now-oracle-18c-is-available-for-on-premise/attachment/18-3-2/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?fit=1221%2C789" data-orig-size="1221,789" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;skagupta&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1532395989&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="18.3" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?fit=300%2C194" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?fit=980%2C634" class="size-full wp-image-5184 aligncenter" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?resize=980%2C633" alt="" width="980" height="633" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?w=1221 1221w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?resize=300%2C194 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?resize=768%2C496 768w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?resize=1024%2C662 1024w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?resize=60%2C39 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/18.3.jpg?resize=150%2C97 150w" sizes="(max-width: 980px) 100vw, 980px" data-recalc-dims="1" /></a></p> <p>To Download:</p> <blockquote> <h4><code><span style="font-family: arial, helvetica, sans-serif; font-size: 10pt;"><a href="http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html"><strong><span style="color: #0000ff;">http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html</span></strong></a></span></code></h4> </blockquote> <p>Stay tuned for <strong>more articles on Oracle 18c </strong></p> <p>Thank you for giving your valuable time to read the above information.</p> <p><span class="s1">If you want to be updated with all our articles s</span>end us the Invitation or Follow us:</p> <p><strong>Telegram Channel: <a href="https://t.me/helporacle">https://t.me/helporacle</a></strong></p> <p><span class="s1"><span class="s2"><strong>Skant Gupta’s</strong> LinkedIn: <a class="jive-link-external-small" href="http://www.linkedin.com/in/skantali/" rel="nofollow">www.linkedin.com/in/skantali/</a></span></span></p> <p class="p4"><span class="s1"><strong>Joel Perez’s</strong> LinkedIn: <a href="http://www.linkedin.com/in/SirDBaaSJoelPerez"><strong>Joel Perez’s Profile</strong></a></span></p> <p>LinkedIn Group: <em><strong><a class="js-entity-name entity-name" href="https://www.linkedin.com/groups/12065270" data-app-link="">Oracle Cloud DBAAS</a></strong></em></p> <p>Facebook Page: <em><strong><a 
href="https://www.facebook.com/oraclehelp">OracleHelp</a></strong></em></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-18c/finally-the-wait-is-over-now-oracle-18c-is-available-for-on-premise/">Finally, the wait is over!! Now Oracle 18c is Available for on-Premise</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Skant Gupta http://oracle-help.com/?p=5183 Mon Jul 23 2018 16:05:53 GMT-0400 (EDT) Announcements from Microsoft Build 2018 https://blog.pythian.com/announcements-microsoft-build-2018/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><i><span style="font-weight: 400;">I recently joined Chris Presley for Episode 5 of his podcast, </span></i><a href="https://blog.pythian.com/?s=cloudscape"><i><span style="font-weight: 400;">Cloudscape</span></i></a><i><span style="font-weight: 400;">, to talk about what’s happening in the world of cloud-related matters. My focus was to share the most recent events surrounding Microsoft Azure.</span></i></p> <p><i><span style="font-weight: 400;">Topics of discussion included:</span></i></p> <ul> <li><span style="font-weight: 400;">         </span><b>Always Secure &#8211; Azure DDos Protection</b></li> <li><span style="font-weight: 400;">         </span><b>Always Secure &#8211; Confidential Compute</b></li> <li><span style="font-weight: 400;">         </span><b>Azure Standard Load Balancer</b></li> <li><span style="font-weight: 400;">         </span><b>Azure Event Hubs Kafka</b></li> <li><span style="font-weight: 400;">         </span><b>Azure Functions Improvements</b></li> <li><span style="font-weight: 400;">         </span><b>Azure Blockchain Workbench</b></li> </ul> <p><b>Always Secure &#8211; Azure DDos Protection</b></p> <p><span style="font-weight: 400;">Microsoft has a new initiative which puts security upfront and center. This is especially timely now with GDPR and not just the risks from attacking, the risks from social engineering and everything else, but there are also some really complicated compliance environments. This is the Always Secure initiative.</span></p> <p><span style="font-weight: 400;">One of the first components coming from this initiative is that Azure is going to have a distributed denial of service attack protection service (DDOS). We know this is a very common service attack that happens on the internet, and there are companies like Cloudflare which provide this exact DDOS protection service. Microsoft has now decided to make this a part of their service. It makes a lot of sense. If you sign up for it, they will do the DDOS protection against your public-facing endpoints.</span></p> <p><span style="font-weight: 400;">This is Microsoft’s initiative to demonstrate that Azure is secure by default and they have put as much as they can into their investment and R&amp;D of Azure’s security.  </span></p> <p><span style="font-weight: 400;">It’s really cool that they are bringing this on board instead of relying on third-party providers. They had a lot of good security-related announcements like this at Microsoft Build 2018.</span></p> <p><b>Always Secure &#8211; Confidential Compute</b></p> <p><span style="font-weight: 400;">The other announcement from Microsoft is the confidential compute platform. This is really interesting because it’s built on the latest improvements from Intel. They have new extensions in the chips, called SGX extensions, which can do secure enclave computations. 
Basically, the encryption or decryption happens at the CPU chip level.</span></p> <p><span style="font-weight: 400;">Usually, what happens is, if you have a lot of encrypted data that you want to work on, you decrypt it in RAM so you don’t pay for big performance hits. The other thing that could happen is that you have data that’s encrypted end to end, but once you reach its last leg of its destination, it is decrypted to be computed on.</span></p> <p><span style="font-weight: 400;">With this new extension from Intel, the computer running the computation is not even aware of the contents of the data. For example, SQL Server has this capability called Always Encrypted and the whole goal of Always Encrypted is that SQL Server shouldn’t know what the contents of the encrypted data are. Today, the way it is usually implemented is the client holds the decryption keys. The problem with that is if you want to do really heavy computing and you have data that’s always encrypted in SQL Server, then it has to move to the client and the client is the one that has to decrypt. That is an issue because most clients are not as beefy as the database servers themselves.</span></p> <p><span style="font-weight: 400;">But now, using the secure enclaves inside the chip, they have a way to guarantee that they can send the decryption key to the SQL Server itself and the SQL Server process base does not know the actual key. It doesn’t need to know it because the Intel chip is the one that’s going to be a receiving the key and it’s going to perform the computation and then send the results back to the clients.</span></p> <p><span style="font-weight: 400;">What they are doing with these new Intel extensions is very interesting because this will allow for large, scalable queries on encrypted data that remains encrypted at all points.</span></p> <p><b>Azure Standard Load Balancer</b></p> <p><span style="font-weight: 400;">Azure Standard Load Balancer has a new feature. Now you can move them completely out of public endpoints and you can just have them inside a virtual network.  There is a pattern here of moving a lot of diverse types of Azure resources that were architected years ago to all have public endpoints. But with the new security environment that has evolved in the last few years, this is just the showstopper for many people. As a response to this, there is a general movement away from public endpoints and into Virtual Network protected endpoints. </span></p> <p><span style="font-weight: 400;">There is also an improvement in scale for the Load Balancer. For example, now you can put up to a thousand VM’s behind one of these higher end load balancers that you could get in Azure.</span></p> <p><b>Azure Event Hubs Kafka</b></p> <p><span style="font-weight: 400;">Another big announcement that I thought was smart is that Azure has developed a Kafka endpoint for its own message processing system called Event Hubs. You can take a Kafka application or a Kafka tool and you can connect it to an Azure endpoint that under the covers is their own event hub service but is compatible. The Kafka application would not know that it is not actually talking to a Kafka cluster.</span></p> <p><span style="font-weight: 400;">This approach mirrors the strategy they are using to migrate people from MongoDb and Cassandra to Cosmos Db. Instead of saying, “Oh, here’s a migration tool to do migration from MongoDb or Cassandra or Kafka to an Azure service,” they are developing these compatibility endpoints under the covers. 
The applications think they are talking to their native platform, be it MongoDb or Cassandra or Kafka.</span></p> <p><b>Azure Functions Improvements</b></p> <p><span style="font-weight: 400;">Microsoft is also working hard to make their serverless story stronger. They have improved the monitoring, they&#8217;re improving the diagnostics, and they are also improving what you can do with Azure functions in terms of stateful operations or long-running operations. They are trying to make more use cases fit into the Azure functions serverless platform.</span></p> <p><span style="font-weight: 400;">We will probably see a lot more work trying to make serverless computing more attractive inside Azure, as well.</span></p> <p><b>Azure Blockchain Workbench</b></p> <p><span style="font-weight: 400;">Azure Blockchain Workbench is Microsoft&#8217;s first step into providing the blockchain-as-a-service offering to the rest of the developer community.</span></p> <p><span style="font-weight: 400;">Right now what we have are templates. Amazon Web Services also released a &#8220;blockchain-as-a-service&#8221; offering but in the end, they are just templates. So we have yet to see a cloud provider really develop the first blockchain-as-a-service offering, but Microsoft is quickly getting there.</span></p> <p><span style="font-weight: 400;">With the workbench, not only do you get the templates at this point, but you are also going to get other components built around the blockchain that will make it a lot easier to develop. So, for example, bundled up will be not only the blockchain component, but also an API that will go on top of it. </span></p> <p><span style="font-weight: 400;">There will be an actual SQL database that will go on top of it, as well. It comes with a SQL database so you can do quicker reporting and querying off of the blockchain. Blockchains don&#8217;t really lend themselves to fast querying and index seeking. What usually happens is that you take the data from the blockchain and you put it in a query-friendly structure like a SQL database. </span></p> <p><span style="font-weight: 400;">Microsoft is trying to make it very easy for people to develop blockchain solutions without knowing much about the blockchain infrastructure. You just have to know that you have an Ethereum blockchain under the covers. You won&#8217;t have to learn how to run an Ethereum node and you won&#8217;t have to learn how to provide or configure your consensus algorithm in the Ethereum config files or anything like that. The service will be taken care of for you. You would only need to interface with the API exposed by the service, and then you potentially would also even be able to swap out different blockchains from the backend.</span></p> <p><span style="font-weight: 400;">If you build a solution and then you go to a client that doesn&#8217;t want to run Ethereum but wants to run something else such as Hyperledger, then because you are only working with the front face and API of the service, you should be able to swap out the blockchain in the backend and your application should continue to work.</span></p> <p><span style="font-weight: 400;">So these are just the very first steps toward having a real, generally available blockchain-as-a-service offering. But we know it&#8217;s coming now. 
They’ve made it public preview, and I always say once these things come into public preview, they’re like public promises to see them to the end.</span></p> <p><i><span style="font-weight: 400;">This was a summary of some of the Microsoft Azure topics we discussed during the podcast. Chris also welcomed Greg Baker (Amazon Web Services) and Kartick Sekar (Google Cloud Platform) who discussed topics related to their expertise.</span></i></p> <p><i><span style="font-weight: 400;">Click</span></i><a href="https://blog.pythian.com/cloudscape-podcast-episode-5-cloud-vendor-news-updates-may-2018/"><i><span style="font-weight: 400;"> here</span></i></a><i><span style="font-weight: 400;"> to hear the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.</span></i></p> </div></div> Warner Chaves https://blog.pythian.com/?p=104801 Mon Jul 23 2018 16:03:13 GMT-0400 (EDT) Pythian Achieves Cloud Migration Partner Specialization in the Google Cloud Partner Specialization Program https://blog.pythian.com/pythian-achieves-cloud-migration-partner-specialization-google-cloud-partner-specialization-program/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Today, Pythian is pleased to share the news <span style="font-weight: 400;">that it has achieved the Cloud Migration Partner Specialization in the Google Cloud Partner Specialization Program. By earning this </span><a href="https://www.blog.google/products/google-cloud/building-a-better-cloud-with-our-partners-at-next-18/"><span style="font-weight: 400;">Partner Specialization</span></a><span style="font-weight: 400;">,</span> <span style="font-weight: 400;">Pythian </span><span style="font-weight: 400;">has proven its expertise and success in building customer solutions in the Cloud Migration field using Google Cloud Platform technology.</span></p> <p><span style="font-weight: 400;">The Google Cloud Partner Specialization Program is designed to provide Google Cloud customers with qualified partners that have demonstrated technical proficiency and proven success in specialized solution and service areas.</span></p> <p><span style="font-weight: 400;">Pythian was awarded the Google Cloud Platform Partner Specialization in Cloud Migration due to </span><span style="font-weight: 400;">demonstrated success in building foundational architectures and then migrating significant numbers of customer workloads from either on-premise or other cloud pr</span><span style="font-weight: 400;">oviders to Google Cloud Platform. </span></p> <p><span style="font-weight: 400;">“The Google Cloud Platform is changing how the world does business,” said Vanessa Simmons, VP of Business Development at Pythian. “As a Google Cloud Migration Specialized Partner, we’re able to help clients everywhere get the maximum benefit from GCP’s full range of emerging technologies. 
Pythian has over twenty years’ experience in data and deep expertise in GCP, and for our clients, that means a quick, easy and cost-efficient move to the cloud.”</span></p> <p><span style="font-weight: 400;">Recently, BBM </span><span style="font-weight: 400;">selected Pythian to help them move their mission-critical IT infrastructure for Android, iOS and Windows consumer BBM from BlackBerry on-premise data centers in Canada to Google Cloud Platform (GCP) in Asia.</span><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;"></p> <p>To learn more about Pythian’s full range of </span><a href="https://pythian.com/google-cloud-platform/."><span style="font-weight: 400;">expert services</span></a><span style="font-weight: 400;"> for Google Cloud Platform, including real use cases and specializations, visit us at Google NEXT, July 24-26th, 2018 at the Moscone Center in San Francisco, booth #S1324. Vanessa Simmons will also deliver a session titled </span><i><span style="font-weight: 400;">Solution Selling Mindset – Moving from Product to Solution Selling </span></i><span style="font-weight: 400;">on Monday, July 23, 2018 at 11:00 am PDT (room number 310-314).  </span></p> <p><b>Vanessa Simmons will be available for interviews at Google NEXT. To book an on-site interview email media@pythian.com. </b></p> <p>Learn more about <a href="https://pythian.com/google-cloud-platform/">Pythian&#8217;s Services &amp; Solutions for Google Cloud Platform</a>.</p> </div></div> Pythian News https://blog.pythian.com/?p=104796 Mon Jul 23 2018 13:30:27 GMT-0400 (EDT) Swingbench Short cookbook commands http://oracledba.blogspot.com/2018/07/swingbench-short-cookbook-commands.html <div class="separator" style="clear: both; text-align: center;"><a href="https://draft.blogger.com/null" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="873" data-original-width="1600" height="174" src="https://3.bp.blogspot.com/-LVd3-gAi8h8/W0yrGM8-yvI/AAAAAAAFy-s/43vDMuauLDYJl9U0851nqlQcCu2K59Y9QCLcBGAs/s320/swing.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div style="margin-top: 10px; padding: 0px;">In this example we are using Swingbench 2.6 with <strong>Order Entry</strong> stress test - using a configuration file (SOE_Server_Side_V2.xml)<br /><br /><strong>Connection String:</strong><br />The connection will always be defined to pluggable database (not CDB) - if installed.</div><ul><li><u>//hostname/service_name</u> valid for driver type: <strong>Oracle oci Driver</strong> &amp; <strong>Oracle jdbc Driver</strong> (<strong>oewizard</strong>,<strong>sbutil</strong> use only this option) .<br />Make sure the service name is recognized by the listener (<em>lsnrctl services | grep Service</em>) </li><li><u>Tnsnames alias</u> valid <strong>only</strong> for driver type: <strong>Oracle oci Driver</strong> – requires Oracle 12.2 client </li></ul><strong>Creating the schema:</strong><br /><blockquote class="tr_bq">cd swingbench/bin<br />$ ./oewizard -dbap <span style="color: red;">change_on_install</span> -ts SOE -nopart -u soe -p soe -cl -df +DATA -create <span style="color: red;">-scale 1 -cs //db_server/db_service</span></blockquote><strong>Running:</strong><br />If you need to use <span style="text-decoration-line: underline;">Connection Pooling</span>:<br /><ul style="margin: 10px 0px 0px;"><li>Change the driver type to "<em>Oracle oci Driver"</em></li><li>Change the format of the connection string to 
tns alias</li><li>For further needed changes, look at: <a class="external-link" href="http://dominicgiles.com/blog/files/8278e1395e583ab4ba63429cfcf609bb-138.html" rel="nofollow" style="text-decoration-line: none;">Application Continuity in Oracle Database 12c (12.1.0.2)</a></li></ul><div style="margin-top: 10px; padding: 0px;">charbench - character mode benchmark<br />minibench - graphical minimal mode benchmark<br />swingbench - graphical full mode benchmark (need Xming)</div><div style="margin-top: 10px; padding: 0px;"><br class="atl-forced-newline" /> <strong><span style="text-decoration-line: underline;">Swingbench</span></strong><br />Main parameters:</div><div style="margin-left: 30px; margin-top: 10px; padding: 0px;">-uc: user count<br />-r: results file<br />-rt: run time for the benchmark<br />-cs:connection string<br />-a: run automatically<br />-cpuuser: os username of the user used to monitor cpu<br />-cpupass: os password of the user used to monitor cpu<br />-cpuloc: location/hostname of the cpu monitor<br /></div><div style="margin-top: 10px; padding: 0px;">When running swingbench or minibench with "<strong>-a</strong>" parameter it starts automatically, you may want to omit this parameter, to look or change some parameters.<br />See more <a href="https://draft.blogger.com/blogger.g?blogID=6061714#Swingbench-Swingbenchparameters" style="text-decoration-line: none;">Swingbench parameters</a> at the end of this document.<br /><br />Run XLaunch (Xming) or equivalent application</div><blockquote class="tr_bq">$ export DISPLAY=your windows ip<you ip="" windows="">:0.0</you><br />$ cd swingbench/bin<br />$ <strong>./swingbench</strong> -u soe -p soe -min 10 -max 200 -stats full -c ../configs/SOE_Server_Side_V2.xml -ld 500 -f -dbap <span style="color: red;">change_on_install</span> -dbau "sys as sysdba" <span style="background: yellow;">-r scale1_100user.xml -uc 100 -rt 0:05 -cs //db_server/db_service -cpuuser db_os_owner -cpupass db_os_owner_password -cpuloc db_server<b> -a</b></span></blockquote><div style="margin-top: 10px; padding: 0px;"><div><strong><span style="text-decoration-line: underline;">Minibench</span></strong></div><blockquote class="tr_bq">$ <strong>./minibench</strong> -u soe -p soe -min 10 -max 200 -stats full -c ../configs/SOE_Server_Side_V2.xml -ld 500 -f -dbap <span style="color: red;">change_on_install</span> -dbau "sys as sysdba" <span style="background: yellow;">-r scale1_100user.xml -uc 100 -rt 0:05 -cs //db_server/db_service -cpuuser db_os_owner -cpupass db_os_owner_password -cpuloc db_server<b> -a</b></span></blockquote><div style="margin-top: 10px; padding: 0px;"><div><strong><span style="text-decoration-line: underline;">Charbench</span></strong></div><blockquote class="tr_bq">$ <strong>./charbench</strong> -u soe -p soe -min 10 -max 200 -stats full -c ../configs/SOE_Server_Side_V2.xml -ld 500 -f -dbap <span style="color: red;">change_on_install</span> -dbau "sys as sysdba" <span style="background: yellow;">-r scale1_100user.xml -uc 100 -rt 0:05 -cs //db_server/db_service -cpuuser db_os_owner -cpupass db_os_owner_password -cpuloc db_server</span></blockquote><div><strong><span style="text-decoration-line: underline;">See swingbench database tables</span></strong></div><blockquote class="tr_bq">$ ./sbutil -soe -u soe -p soe -cs <span style="color: red;">//db_server/db_service</span> -tables</blockquote><div><strong>Compress tables:</strong></div><blockquote class="tr_bq">$ ./sbutil -soe -u soe -p soe -cs <span style="color: 
red;">//db_server/db_service</span> -dup 1 -ac -sort</blockquote><div><strong>Format result output to pdf</strong></div><blockquote class="tr_bq">$ ./results2pdf -c results.xml</blockquote><div>The formatting may not work properly in Linux. The solution will be to use the windows version:</div><blockquote class="tr_bq">C:\swingbench\winbin\results2pdf.bat -c results.xml</blockquote><div style="margin-top: 10px; padding: 0px;"><div><strong>Combine</strong> <span style="color: red;"><strong>main</strong></span> <strong>data of several loads to one csv</strong></div><blockquote class="tr_bq">$ ./utils/parse_results.py -r scale1_400user.xml scale1_300user.xml scale1_200user.xml scale1_100user.xml -o results.csv</blockquote><div><strong>Compare several runs:</strong></div><blockquote class="tr_bq">$ ./bmcompare -r scale1_100user.xml,scale1_200user.xml</blockquote><div><a class="external-link" href="https://www.youtube.com/watch?v=b5E5579-ITo&amp;feature=youtu.be" rel="nofollow" style="text-decoration-line: none;">Swingbench 2.6 Walkthrough</a> - This screencast shows a complete walk through of Swingbench 2.6 from running a wizard to selecting a benchmark to run.</div><div><br /></div><div><b>Running swingbench simultanically from several hosts</b></div></div><div><span style="background-color: white;"><a href="http://dominicgiles.com/clusteroverviewwalkthough23.html">Clusteroverview Walkthrough</a></span></div><div style="background-color: white; margin-top: 10px; padding: 0px;"><strong><span style="text-decoration-line: underline;">On coordinator_server</span></strong></div><blockquote class="tr_bq">$ ./coordinator -g &amp;</blockquote><div style="background-color: white; color: black; margin-top: 10px; padding: 0px;"><strong><span style="text-decoration-line: underline;">On other machines</span></strong></div><blockquote class="tr_bq">$ ./charbench -c ../configs/SOE_Server_Side_V2.xml -dt thin -dbap <span style="color: red;">change_on_install </span>-uc 100 -r scale1_100user.xml -rt 0:05 -cs <span style="color: red;">//db_server/db_service</span> -g group1 -co <span style="color: red;"><i>coordinator_server</i></span> -bg &amp;<br /><br />$ ./charbench -c ../configs/SOE_Server_Side_V2.xml -dt thin -dbap <span style="color: red;">change_on_install</span>-uc 100 -r scale2_100user.xml -rt 0:05 -cs <span style="color: red;">//db_server/db_service</span> -g group2 -co <span style="color: red;"><i>coordinator_server</i></span> -bg &amp;</blockquote><div style="background-color: white; color: black; margin-top: 10px; padding: 0px;"><strong><span style="text-decoration-line: underline;">On coordinator_server / PC</span></strong></div><blockquote class="tr_bq">$ ./clusteroverview -rt 00:02 -co <span style="color: red;"><i>coordinator_server</i></span> -c ../configs/clusteroverview.xml</blockquote><div><strong>Swingbench parameters:</strong></div><div style="margin-top: 10px; padding: 0px;">$ ./swingbench -h<br />Application : Swingbench<br />Author : Dominic Giles<br />Version : 2.6.0.1082</div><div style="margin-top: 10px; padding: 0px;">usage: parameters:</div><div class="table-wrap" style="margin: 10px 0px 0px; overflow-x: auto; padding: 0px;"><br /><table resolved="" style="border-collapse: collapse; margin: 0px; overflow-x: auto;"><tbody><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-D <variable value=""></variable></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; 
padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">use value for given environment variable</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-a</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">run automatically</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-be <stopafter></stopafter></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">end recording statistics after. Value is in the form hh:mm.sec</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-bs <startafter></startafter></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">start recording statistics after. Value is in the form hh:mm.sec</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-c <filename></filename></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify config file</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-cf <username></username></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">the location of a crendentials file for Oracle Exadata Express</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-cm</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">maximise the charts of swingbench/minibench at startup</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-co <hostname></hostname></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify/override coordinator in configuration file.</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-com <comment></comment></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify comment for this benchmark run (in double quotes)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-cpuloc <hostname></hostname></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" 
width="452"><div style="padding: 0px;">specify/overide location/hostname of the cpu monitor.</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-cpupass</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify/overide os password of the user used to monitor cpu.</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-cpuuser</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify/overide os username of the user used to monitor cpu.</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-cs <connectstring></connectstring></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">override connect string in configuration file</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-dbap <password></password></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">the password of admin user (used for collecting DB Stats)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-dbau <username></username></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">the username of admin user (used for collecting DB stats)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-debug</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">turn on debugging. Written to standard out</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-debugf</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">turn on debugging. 
Written to debug.log.</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-debugfine</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">turn on finest level of debugging</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-debugg</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">turn on debugging via graphical window.</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-di <shortname s=""></shortname></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">disable transaction(s) by short name, comma separated</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-dim <dimension></dimension></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">dimensions of swingbench/minibench <width height=""></width></div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-dt <drivertype></drivertype></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">override driver type in configuration file (thin, oci, ttdirect, ttclient)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-dumptx</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">output transaction response times to file</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-dumptxdir <directory name=""></directory></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">directory for transaction response times files</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-en <shortname s=""></shortname></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">enable transaction(s) by short name, comma separated</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-env</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">display environment configuration</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-f</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">force data collection and run termination regardless of state</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-g <groupid></groupid></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">distributed group identifier</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-h,--help</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">print this message</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-i</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">run interactively (default)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-intermax <milliseconds></milliseconds></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">override maximum inter transaction sleep time (default = 0)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-intermin <milliseconds></milliseconds></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">override minimum inter transaction sleep time (default = 0)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-ld <milliseconds></milliseconds></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify/override the logon delay (milliseconds)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-max <milliseconds></milliseconds></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">override maximum intra transaction think time in configuration file</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-min <milliseconds></milliseconds></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">override minimum intra transaction think time in configuration file</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-p <password></password></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">override password in configuration file</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-pos <startpos></startpos></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">starting position of swingbench/minibench <topleftx toplefty=""></topleftx></div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-r <filename></filename></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify results file</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-rr</div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify/override refresh rate for charts in secs</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-rt <runtime></runtime></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify/override run time for the benchmark.
Value is in the form hh:mm.sec</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-stats <stats level=""></stats></div></td><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="452"><div style="padding: 0px;">specify level result stats detail (full or simple)</div></td></tr><tr><td style="border: 1px solid rgb(221, 221, 221); min-width: 8px; padding: 7px 10px; vertical-align: top;" width="245"><div style="padding: 0px;">-t <title></p></td><td width="452" style="border: 1px solid rgb(221, 221, 221); padding: 7px 10px; vertical-align: top; min-width: 8px;"><p style="padding: 0px;">set the window title for swingbench and minibench</p></td></tr><tr><td width="245" style="border: 1px solid rgb(221, 221, 221); padding: 7px 10px; vertical-align: top; min-width: 8px;"><p style="padding: 0px;"> -u <username></p></td><td width="452" style="border: 1px solid rgb(221, 221, 221); padding: 7px 10px; vertical-align: top; min-width: 8px;"><p style="padding: 0px;">override username in configuration file</p></td></tr><tr><td width="245" style="border: 1px solid rgb(221, 221, 221); padding: 7px 10px; vertical-align: top; min-width: 8px;"><p style="padding: 0px;"> -uc <user count></p></td><td width="452" style="border: 1px solid rgb(221, 221, 221); padding: 7px 10px; vertical-align: top; min-width: 8px;"><p style="padding: 0px;">override user count in configuration file.</p></td></tr><tr><td width="245" style="border: 1px solid rgb(221, 221, 221); padding: 7px 10px; vertical-align: top; min-width: 8px;"><p style="padding: 0px;"> -wc</p></td><td width="452" style="border: 1px solid rgb(221, 221, 221); padding: 7px 10px; vertical-align: top; min-width: 8px;"><p style="padding: 0px;">wait until all session have disconnected from the database</p></td></tr></tbody></table></div><p style="margin-top: 10px; padding: 0px; color: rgb(9, 30, 66); background-color: rgb(255, 255, 255);"><br /></p><p style="margin-top: 10px; padding: 0px; color: rgb(9, 30, 66); background-color: rgb(255, 255, 255);">Resource: <a href="http://www.dominicgiles.com/swingbench.html">http://www.dominicgiles.com/swingbench.html</a></p><p style="margin-top: 10px; padding: 0px; color: rgb(9, 30, 66); background-color: rgb(255, 255, 255);"><br /></p></title></div></td></tr><br /><br /></tbody></table></div></div></div> Yossi Nixon tag:blogger.com,1999:blog-6061714.post-2612951972654274146 Mon Jul 23 2018 10:00:00 GMT-0400 (EDT) Setting up MySQL Encrypted Replication on MySQL 5.7 with GTID https://blog.pythian.com/setting-up-mysql-encrypted-replication-on-mysql-5-7-with-gtid/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>In this blog post, I&#8217;ll walk you through setting up encrypted replication on MySQL 5.7 with GTID enabled. I will walk you through how to create sample certificates and keys, and then configure MySQL to only use replication via an encrypted SSL tunnel.</p> <p>For simplicity, the credentials and certificates I used in this tutorial are very basic. 
I would suggest, of course, you use stronger passwords and accounts.</p> <p>Let&#8217;s get started.</p> <p><strong>Create a folder where you will keep the certificates and keys</strong></p> <pre lang="bash" escaped="true">mkdir /etc/newcerts/ cd /etc/newcerts/ </pre> <p><strong>Create CA certificate</strong></p> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl genrsa 2048 &gt; ca-key.pem Generating RSA private key, 2048 bit long modulus .............+++ ..................+++ e is 65537 (0x10001) </pre> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl req -new -x509 -nodes -days 3600 -key ca-key.pem -out ca.pem You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [XX]: State or Province Name (full name) []: Locality Name (eg, city) [Default City]: Organization Name (eg, company) [Default Company Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, your name or your server's hostname) []: Email Address []: </pre> <p><strong>Create server certificate</strong></p> <p>server-cert.pem = public key, server-key.pem = private key</p> <p>NOTE: The Common Name value used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate otherwise the certificate and key files will not work for servers compiled using OpenSSL.</p> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-key.pem -out server-req.pem Generating a 2048 bit RSA private key ....................................................................+++ .+++ writing new private key to 'server-key.pem' ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [XX]: State or Province Name (full name) []: Locality Name (eg, city) [Default City]: Organization Name (eg, company) [Default Company Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, your name or your server's hostname) []:server Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: </pre> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl rsa -in server-key.pem -out server-key.pem writing RSA key </pre> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl x509 -req -in server-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem Signature ok subject=/C=XX/L=Default City/O=Default Company Ltd/CN=server Getting CA Private Key </pre> <p><strong>Create client certificate</strong></p> <p>client-cert.pem = public key, client-key.pem = private key</p> <p>NOTE: The Common Name value used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate otherwise the certificate and key files will not work for servers compiled using OpenSSL.</p> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl req -newkey rsa:2048 -days 3600 -nodes -keyout client-key.pem -out client-req.pem Generating a 2048 bit RSA private key .....................+++ ....................................................................................+++ writing new private key to 'client-key.pem' ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [XX]: State or Province Name (full name) []: Locality Name (eg, city) [Default City]: Organization Name (eg, company) [Default Company Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, your name or your server's hostname) []:client Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: </pre> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl rsa -in client-key.pem -out client-key.pem writing RSA key </pre> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl x509 -req -in client-req.pem -days 3600 -CA ca.pem -CAkey ca-key.pem -set_serial 01 -out client-cert.pem Signature ok subject=/C=XX/L=Default City/O=Default Company Ltd/CN=client Getting CA Private Key </pre> <p><strong>Verify both client and server certificates</strong></p> <pre lang="bash" escaped="true">[root@po-mysql2 newcerts]# openssl verify -CAfile ca.pem server-cert.pem client-cert.pem server-cert.pem: OK client-cert.pem: OK </pre> <p><strong> Copy certificates, adjust permissions and restart MySQL </strong></p> <p>Add the server cert files and key to all hosts.<br /> Add the entry below to my.cnf on all hosts.<br /> Make sure the folder and files are owned by MySQL user and group.<br /> Restart MySQL.</p> <pre lang="bash" escaped="true">scp *.pem master:/etc/newcerts/ scp *.pem slave:/etc/newcerts/ chown -R mysql:mysql /etc/newcerts/ [mysqld] ssl-ca=/etc/newcerts/ca.pem ssl-cert=/etc/newcerts/server-cert.pem ssl-key=/etc/newcerts/server-key.pem service mysql restart </pre> <p><strong>Verify SSL is enabled and key and certs are shown (check both master and slave)</strong></p> <pre lang="bash" escaped="true">(root@localhost) [(none)]&gt;SHOW VARIABLES LIKE '%ssl%'; +---------------+-------------------------------+ | Variable_name | Value | +---------------+-------------------------------+ | have_openssl | YES | | have_ssl | YES | | ssl_ca | /etc/newcerts/ca.pem | | ssl_capath | | | ssl_cert | /etc/newcerts/server-cert.pem | | ssl_cipher | | | ssl_crl | | | ssl_crlpath | | | ssl_key | /etc/newcerts/server-key.pem | +---------------+-------------------------------+ 9 rows in set (0.01 sec) </pre> <p><strong>Verify you are able to connect from slave to master </strong></p> <p>From command line, issue the following commands and look for this output:<br /> &#8220;SSL: Cipher in use is ECDHE-RSA-AES128-GCM-SHA256&#8221;</p> <pre lang="bash" escaped="true">[root@po-mysql2 ~]# mysql -urepluser -p -P53306 --host po-mysql1 --ssl-cert=/etc/newcerts/client-cert.pem --ssl-key=/etc/newcerts/client-key.pem -e '\s' Enter password: -------------- mysql Ver 14.14 Distrib 5.7.21-20, for Linux (x86_64) using 6.2 Connection id: 421 Current database: Current user: repluser@192.168.56.101 SSL: Cipher in use is ECDHE-RSA-AES128-GCM-SHA256 Current pager: stdout Using outfile: '' Using delimiter: ; Server version: 5.7.21-21-log Percona Server (GPL), Release 21, Revision 2a37e4e Protocol version: 10 Connection: po-mysql1 via TCP/IP Server characterset: latin1 Db characterset: latin1 Client characterset: utf8 Conn. characterset: utf8 TCP port: 53306 Uptime: 13 min 38 sec Threads: 6 Questions: 6138 Slow queries: 4 Opens: 112 Flush tables: 1 Open tables: 106 Queries per second avg: 7.503 -------------- </pre> <p><strong> Enable encrypted replication. 
</strong></p> <p>We are using GTID in this example, so adjust the command below if you are not using GTID based replication.<br /> Go to the slave host and run the following: (details below)<br /> stop slave<br /> change master<br /> start slave<br /> verify replication is working and using an encrypted connection</p> <pre lang="bash" escaped="true">(root@localhost) [(none)]&gt;select @@hostname; +------------+ | @@hostname | +------------+ | po-mysql2 | +------------+ 1 row in set (0.00 sec) (root@localhost) [(none)]&gt;STOP SLAVE; Query OK, 0 rows affected, 1 warning (0.00 sec) (root@localhost) [(none)]&gt;CHANGE MASTER TO MASTER_HOST="po-mysql1", MASTER_PORT=53306, MASTER_USER="repluser", MASTER_AUTO_POSITION = 1, MASTER_PASSWORD='replpassword', -&gt; MASTER_SSL=1, MASTER_SSL_CA = '/etc/newcerts/ca.pem', MASTER_SSL_CERT = '/etc/newcerts/client-cert.pem', MASTER_SSL_KEY = '/etc/newcerts/client-key.pem'; Query OK, 0 rows affected, 2 warnings (0.16 sec) (root@localhost) [(none)]&gt;START SLAVE; Query OK, 0 rows affected (0.01 sec) (root@localhost) [(none)]&gt;SHOW SLAVE STATUS\G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: po-mysql1 Master_User: repluser Master_Port: 53306 Connect_Retry: 60 Master_Log_File: mysql-bin.000008 Read_Master_Log_Pos: 491351 Relay_Log_File: relay.000002 Relay_Log_Pos: 208950 Relay_Master_Log_File: mysql-bin.000008 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 257004 Relay_Log_Space: 443534 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: Yes Master_SSL_CA_File: /etc/newcerts/ca.pem Master_SSL_CA_Path: Master_SSL_Cert: /etc/newcerts/client-cert.pem Master_SSL_Cipher: Master_SSL_Key: /etc/newcerts/client-key.pem Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 1 Master_UUID: 7f0b0f43-d45c-11e7-80f7-0800275ae9e7 Master_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Reading event from the relay log Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: 7f0b0f43-d45c-11e7-80f7-0800275ae9e7:82150-83149 Executed_Gtid_Set: 3a19f03e-5f76-11e8-b99e-0800275ae9e7:1-2842, 7f0b0f43-d45c-11e7-80f7-0800275ae9e7:1-82620, 85209bfc-d45c-11e7-80f7-0800275ae9e7:1-3, cc1d9186-5f6b-11e8-b061-0800275ae9e7:1-3 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: Master_TLS_Version: 1 row in set (0.00 sec) </pre> <p><strong> Congratulations, you have configured encrypted replication </strong></p> <p>This process was only to enable SSL replication; however, if you wish to limit replication to only use SSL connections, you&#8217;ll need to alter the replication account accordingly, as shown below.</p> <p>Go to the master and alter the replication user.</p> <p>NOTE: For some reason, the SHOW GRANTS command does not show REQUIRE SSL as part of the output, even after changing the account</p> <pre lang="bash" escaped="true">(root@localhost) [(none)]&gt;SHOW GRANTS FOR 'repluser'@'%'; +----------------------------------------------+ | Grants for repluser@% | +----------------------------------------------+ | GRANT REPLICATION SLAVE ON 
*.* TO 'repluser'@'%' | +----------------------------------------------+ 1 row in set (0.00 sec) (root@localhost) [(none)]&gt;ALTER USER 'repluser'@'%' REQUIRE SSL; Query OK, 0 rows affected (0.04 sec) (root@localhost) [(none)]&gt;SHOW GRANTS FOR 'repluser'@'%'; +----------------------------------------------+ | Grants for repl@% | +----------------------------------------------+ | GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%' | +----------------------------------------------+ 1 row in set (0.00 sec) </pre> <p><strong>Test from a slave which has not yet been configured to use encrypted replication.</strong></p> <p>Notice the error below from this slave, so we know for sure, we can only connect via SSL and replication will not work until we make the required changes:</p> <p>Last_IO_Error: error connecting to master &#8216;repluser@po-mysql1:53306&#8217; &#8211; retry-time: 60 retries: 1</p> <pre lang="bash" escaped="true">(root@localhost) [(none)]&gt;select @@hostname; +------------+ | @@hostname | +------------+ | po-mysql3 | +------------+ 1 row in set (0.00 sec) (root@localhost) [(none)]&gt;stop slave; Query OK, 0 rows affected (0.00 sec) (root@localhost) [(none)]&gt;start slave; Query OK, 0 rows affected (0.01 sec) (root@localhost) [(none)]&gt;show slave status\G *************************** 1. row *************************** Slave_IO_State: Connecting to master Master_Host: po-mysql1 Master_User: repluser Master_Port: 53306 Connect_Retry: 60 Master_Log_File: mysql-bin.000008 Read_Master_Log_Pos: 730732 Relay_Log_File: relay.000003 Relay_Log_Pos: 730825 Relay_Master_Log_File: mysql-bin.000008 Slave_IO_Running: Connecting Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 730732 Relay_Log_Space: 7465275 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: No Master_SSL_CA_File: Master_SSL_CA_Path: Master_SSL_Cert: Master_SSL_Cipher: Master_SSL_Key: Seconds_Behind_Master: NULL Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 1045 Last_IO_Error: error connecting to master 'repluser@po-mysql1:53306' - retry-time: 60 retries: 1 Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 1 Master_UUID: 7f0b0f43-d45c-11e7-80f7-0800275ae9e7 Master_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: 180719 23:29:07 Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: 7f0b0f43-d45c-11e7-80f7-0800275ae9e7:66868-83690 Executed_Gtid_Set: 3a19f03e-5f76-11e8-b99e-0800275ae9e7:1-2842, 7f0b0f43-d45c-11e7-80f7-0800275ae9e7:1-83690, 85209bfc-d45c-11e7-80f7-0800275ae9e7:1-3, cc1d9186-5f6b-11e8-b061-0800275ae9e7:1-134 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: Master_TLS_Version: 1 row in set (0.00 sec) </pre> <p><strong> Setup encrypted replication on another slave </strong></p> <p>Now we just need to follow the same steps as documented above to copy the certs and keys. 
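As a quick recap of those steps for this additional slave (po-mysql3 in this example), the commands might look like the sketch below; it assumes the same /etc/newcerts/ location and my.cnf entries shown earlier, so adjust hostnames and paths to your environment.</p> <pre lang="bash" escaped="true"># copy the certificates and keys generated earlier to the new slave
scp *.pem po-mysql3:/etc/newcerts/

# on po-mysql3: make sure the MySQL user owns the files
chown -R mysql:mysql /etc/newcerts/

# add the SSL settings to my.cnf on po-mysql3
[mysqld]
ssl-ca=/etc/newcerts/ca.pem
ssl-cert=/etc/newcerts/server-cert.pem
ssl-key=/etc/newcerts/server-key.pem

# restart MySQL so the settings take effect
service mysql restart
</pre> <p>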
We restart MySQL, stop slave and reset replication, and then replication will work again, this time using SSL.</p> <pre lang="bash" escaped="true">(root@localhost) [(none)]&gt;SELECT @@hostname; +------------+ | @@hostname | +------------+ | po-mysql3 | +------------+ 1 row in set (0.00 sec) (root@localhost) [(none)]&gt;STOP SLAVE; Query OK, 0 rows affected (0.02 sec) (root@localhost) [(none)]&gt;CHANGE MASTER TO MASTER_HOST="po-mysql1", MASTER_PORT=53306, MASTER_USER="repluser", MASTER_AUTO_POSITION = 1, MASTER_PASSWORD='r3pl', -&gt; MASTER_SSL=1, MASTER_SSL_CA = '/etc/newcerts/ca.pem', MASTER_SSL_CERT = '/etc/newcerts/client-cert.pem', MASTER_SSL_KEY = '/etc/newcerts/client-key.pem'; Query OK, 0 rows affected, 2 warnings (0.01 sec) (root@localhost) [(none)]&gt;START SLAVE; Query OK, 0 rows affected (0.04 sec) (root@localhost) [(none)]&gt;SHOW SLAVE STATUS\G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: po-mysql1 Master_User: repluser Master_Port: 53306 Connect_Retry: 60 Master_Log_File: mysql-bin.000008 Read_Master_Log_Pos: 1128836 Relay_Log_File: relay.000002 Relay_Log_Pos: 398518 Relay_Master_Log_File: mysql-bin.000008 Slave_IO_Running: Yes Slave_SQL_Running: Yes Replicate_Do_DB: Replicate_Ignore_DB: Replicate_Do_Table: Replicate_Ignore_Table: Replicate_Wild_Do_Table: Replicate_Wild_Ignore_Table: Last_Errno: 0 Last_Error: Skip_Counter: 0 Exec_Master_Log_Pos: 1128836 Relay_Log_Space: 398755 Until_Condition: None Until_Log_File: Until_Log_Pos: 0 Master_SSL_Allowed: Yes Master_SSL_CA_File: /etc/newcerts/ca.pem Master_SSL_CA_Path: Master_SSL_Cert: /etc/newcerts/client-cert.pem Master_SSL_Cipher: Master_SSL_Key: /etc/newcerts/client-key.pem Seconds_Behind_Master: 0 Master_SSL_Verify_Server_Cert: No Last_IO_Errno: 0 Last_IO_Error: Last_SQL_Errno: 0 Last_SQL_Error: Replicate_Ignore_Server_Ids: Master_Server_Id: 1 Master_UUID: 7f0b0f43-d45c-11e7-80f7-0800275ae9e7 Master_Info_File: mysql.slave_master_info SQL_Delay: 0 SQL_Remaining_Delay: NULL Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates Master_Retry_Count: 86400 Master_Bind: Last_IO_Error_Timestamp: Last_SQL_Error_Timestamp: Master_SSL_Crl: Master_SSL_Crlpath: Retrieved_Gtid_Set: 7f0b0f43-d45c-11e7-80f7-0800275ae9e7:83691-84588 Executed_Gtid_Set: 3a19f03e-5f76-11e8-b99e-0800275ae9e7:1-2842, 7f0b0f43-d45c-11e7-80f7-0800275ae9e7:1-84588, 85209bfc-d45c-11e7-80f7-0800275ae9e7:1-3, cc1d9186-5f6b-11e8-b061-0800275ae9e7:1-134 Auto_Position: 1 Replicate_Rewrite_DB: Channel_Name: Master_TLS_Version: 1 row in set (0.00 sec) </pre> <p>Congratulations, you now have SSL replication enabled. MySQL replication will now only work with encryption.</p> </div></div> Daniel Almeida https://blog.pythian.com/?p=104784 Mon Jul 23 2018 09:09:22 GMT-0400 (EDT) Announcement: Webinars for “Oracle Indexing Internals and Best Practices” Now Confirmed !! https://richardfoote.wordpress.com/2018/07/23/announcement-webinars-for-oracle-indexing-internals-and-best-practices-now-confirmed/ Exciting News !! I can now confirm the dates for my first webinars of my fully updated and highly acclaimed “Oracle Indexing Internals and Best Practice” seminar. For details of all the extensive content covered in the webinars, please visit my Indexing Seminar page. 
The webinars will run for 4 hours each day, spanning a full week period [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5653 Mon Jul 23 2018 04:28:36 GMT-0400 (EDT) Windows Laptop : Update… Again… http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/sZDmTzF2daE/ <p><img class="alignleft wp-image-8055" src="https://oracle-base.com/blog/wp-content/uploads/2018/05/dell-xps-15.png" alt="" width="200" height="119" />Just another quick update about how things are going with the new laptop.</p> <p>I read with interest <a href="http://dsavenko.me/lenovo-thinkpad-x1-carbon-personal-impression-of-laptop/">this post</a> by <a href="http://dsavenko.me">Denis Savenko</a> about his choice of a Lenovo ThinkPad X1 Carbon (6th gen), which looks like a nice bit of kit. The ThinkPad seems to have almost as much loyalty as MacBook Pro. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>The recent announcement about the revamped MacBook Pro range caught my eye in a, &#8220;Did I make a mistake?&#8221;, kind-of way. A quick comparison tells me I didn&#8217;t based on UK pricing. In both cases the Dell has a 3840 x 2160 resolution touch screen. There are cheaper options available, which makes the discrepancy even greater.</p> <ul> <li>Dell XPS 15&#8243; : Core i9, 32G RAM, 1TB SSD = £2,599</li> <li>MBP 15&#8243; : Core i9, 32G RAM, 1TB SSD = £3,689</li> </ul> <ul> <li>Dell XPS 15&#8243; : Core i7, 32G RAM, 1TB SSD = £2,048</li> <li>MBP 15&#8243; : Core i7, 32G RAM, 1TB SSD = £3,419</li> </ul> <p>That price differential is crazy&#8230;</p> <p>You may have seen the YouTube video by <a href="https://www.youtube.com/watch?v=Dx8J125s4cg">Dave Lee</a> talking about the thermal throttling of the i9 in the new MBP, and that is really what I want to talk about here.</p> <p>The XPS 15&#8243; i9 runs hot! Like burn your hand hot. I had one incident when playing Fortnite where the machine shutdown as the internal temperature was so hot. Under normal workload, like a few VMs, it doesn&#8217;t get quite so hot, but it is still noticeable. I got a <a href="https://www.amazon.co.uk/gp/product/B0164R64Y8/">cooler pad</a>, which helped a lot, but doesn&#8217;t do much if it&#8217;s under really high load. It seems all these laptops that try to look small and cute don&#8217;t have a cooling solution that can cope with an i9. On reflection an i7 would probably have been a better, and cheaper, choice.</p> <p>I&#8217;m still happy with the purchase, and with Windows 10. If you are out in the market for a new laptop, I would seriously consider the i7 over the i9 unless you buy a big laptop with a great cooling solution. You will save yourself a bunch of cash, and I really don&#8217;t think you will notice the difference.</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/07/21/windows-laptop-update-again/">Windows Laptop : Update&#8230; Again&#8230;</a> was first posted on July 21, 2018 at 10:42 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/sZDmTzF2daE" height="1" width="1" alt=""/> Tim... 
https://oracle-base.com/blog/?p=8255 Sat Jul 21 2018 05:42:05 GMT-0400 (EDT) MongoDB GridFS http://oracle-help.com/articles/mongodb/mongodb-gridfs/ <p>GridFS is a versatile storage system that is suited to handling large files, such as those exceeding the 16 MB document size limit.</p> <p>&nbsp;</p> <p>For more details about GridFS: <a href="https://docs.mongodb.com/manual/core/gridfs/">Click Here</a></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-gridfs/">MongoDB GridFS</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5149 Fri Jul 20 2018 20:19:06 GMT-0400 (EDT) MongoDB Checkpoint http://oracle-help.com/articles/mongodb/mongodb-checkpoint/ <p>Yes, checkpoints are also available in MongoDB. Read my previous post on Journaling, which covers all the details of how the data is written to the physical files. <span style="color: #0000ff">A checkpoint occurs every 60 seconds or when the journal file reaches 2 GB, whichever comes first.</span></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-checkpoint/">MongoDB Checkpoint</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5147 Fri Jul 20 2018 20:16:45 GMT-0400 (EDT) MongoDB Journaling http://oracle-help.com/articles/mongodb/mongodb-journaling/ <p>It&#8217;s also part of storage: the journal is a log that helps the database recover in the event of a hard shutdown. There are several configurable options that allow the journal to strike a balance between performance and reliability that works for your particular use case.</p> <p><span style="text-decoration: underline"><strong>In simple words:</strong></span></p> <p>With journaling, MongoDB&#8217;s storage layer has two internal views of the data set:<br /> <strong>Private view</strong>, used to write to the journal files, and<br /> <strong>Shared view</strong>, used to write to the data files:<span style="color: #0000ff"> MongoDB first applies write operations to the private view.</span></p> <p>In this process, a write operation occurs in mongod, which then creates changes in the private view.<br /> The first block is memory and the second block is the disk. After a specified interval, which is called the ‘journal commit interval’, the private view writes those operations to the journal directory (residing on the disk).</p> <p>Once the journal commit happens, mongod pushes data into the shared view. As part of the process, it gets written to the actual data directory from the shared view (as this process happens in the background). The basic advantage is that we have a reduced cycle from 60 seconds to 200 milliseconds.</p> <p><strong>If you want to change the value of the journal commit interval and flush the shared view, follow the link below</strong></p> <p>For more details about Journaling: <a href="https://docs.mongodb.com/manual/core/journaling/">Click Here</a></p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-journaling/">MongoDB Journaling</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5145 Fri Jul 20 2018 20:13:43 GMT-0400 (EDT) MongoDB Storage/Storage Engine http://oracle-help.com/articles/mongodb/mongodb-storage-storage-engine/ <p>The storage engine is the primary component of MongoDB responsible for managing data.
MongoDB provides a variety of storage engines, allowing you to choose one most suited to your application.</p> <p>WiredTiger Storage Engine (<em>Default</em>)</p> <p>MMAPv1 Storage Engine (<em>Deprecated as of MongoDB 4.0</em>)</p> <p>In-Memory Storage Engine <em>(Rarely Used)</em></p> <p>Pluggable Storage Engine. <em>(Customized according to requirement)</em></p> <p><a href="https://docs.mongodb.com/manual/core/storage-engines/">Click Here</a> for more details about the above-mentioned storage engines.</p> <p><strong>*.Check the default storage engine.</strong><br /> <span style="color: #0000ff">db.serverStatus().storageEngine;</span></p> <p><strong>*.Differences:</strong><br /> <em>WiredTiger has better write performance.</em><br /> <em>WiredTiger supports compression.</em><br /> <em>WiredTiger uses a snapshot and checkpoint system.</em><br /> <em>WiredTiger uses document level concurrency, whereas MMAPv1 uses collection level locking (i.e. table level locking).</em><br /> <em>WiredTiger is not available on the Solaris platform whereas MMAPv1 is.</em><br /> <em>MMAPv1 is used for data warehousing.</em></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-storage-storage-engine/">MongoDB Storage/Storage Engine</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5143 Fri Jul 20 2018 20:08:50 GMT-0400 (EDT) MongoDB Security http://oracle-help.com/articles/mongodb/mongodb-security/ <p>MongoDB provides various features, such as authentication, access control, and encryption, to secure your MongoDB deployments. Some key security features include:</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg"><img data-attachment-id="5141" data-permalink="http://oracle-help.com/articles/mongodb/mongodb-security/attachment/security/" data-orig-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?fit=742%2C180" data-orig-size="742,180" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;Vinay Kumar&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1532112176&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Security" data-image-description="" data-medium-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?fit=300%2C73" data-large-file="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?fit=742%2C180" class="alignnone wp-image-5141" src="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?resize=584%2C142" alt="" width="584" height="142" srcset="https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?resize=300%2C73 300w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?resize=60%2C15 60w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?resize=150%2C36 150w, https://i0.wp.com/oracle-help.com/wp-content/uploads/2018/07/Security.jpg?w=742 742w" sizes="(max-width: 584px) 100vw, 584px" data-recalc-dims="1" /></a></p> <p>By default, no authentication is enabled in a MongoDB environment.</p> <p>1.)
Basic authentication, i.e. role-based: grant the required privileges/roles to the user.</p> <p>2.) Database authentication is enabled with the help of the parameter below, set in the configuration file.</p> <p><span style="color: #0000ff">create a user with the admin role</span></p> <p><span style="color: #0000ff">security.authorization</span>: after enabling this, when you try to log in you will not be permitted to execute any command, so switch to the admin user, log in as an authenticated user, and then try.</p> <p><strong>Log in as an authenticated user.</strong></p> <p><span style="color: #0000ff">&gt; use admin</span></p> <p><span style="color: #0000ff">&gt; db.auth("TEST","tesT123");</span></p> <p><span style="color: #0000ff">1</span></p> <p><span style="color: #0000ff">&gt; show dbs</span></p> <p>MongoDB also provides other authentication mechanisms such as SSL/TLS; I&#8217;m giving you some idea of how to configure them.</p> <p>For SSL: generate the certificate &gt; verify the signature &gt; allocate the .cert/.pem file to the server and pass the path through Ops Manager (the CLI can also do this) &gt; configure with LDAP &gt; then try to log in like this:</p> <p><strong>Connection string using LDAP:</strong><br /> mongo --ssl --sslCAFile /var/lib/mongod/cert/TESTca.pem --host $(hostname).$(dnsdomainname) --port 27022 -u "TEST" -p "" --authenticationMechanism 'PLAIN' --authenticationDatabase '$external' admin</p> <p>So enabling security is not only a DBA task; it involves more teams: Application/LDAP/Network/Security/DBA.</p> <p>&nbsp;</p> <p>For more details about Security: <a href="https://docs.mongodb.com/manual/administration/security-checklist/">Click Here</a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-security/">MongoDB Security</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5140 Fri Jul 20 2018 20:00:29 GMT-0400 (EDT) GCP Features Wish List for Google Next 2018 https://blog.pythian.com/gcp-features-wish-list-google-next-2018/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>As I&#8217;ve just posted about Pythian at Google Next, I thought it would be cool to share the new GCP features I&#8217;d love to see introduced next week. Since I&#8217;m focusing on building AI products these days, my wish list is focused there.</p> <p>First of all, I want to see more beta products and features transition to GA just to be confident using them in production for the clients. Specifically, <a href="https://cloud.google.com/functons/">Google Cloud Functions</a> have been in beta way too long and while I&#8217;m on that topic&#8230; Google, would you please announce &#8220;planned&#8221; <a href="https://googlecloudplatform.uservoice.com/forums/299943--google-cloud-platform/suggestions/18981547-support-python-in-cloud-functions">Python 3 support</a>? Otherwise, it&#8217;s kinda useless to us&#8230;</p> <p>In the area of AI itself&#8230; <a href="https://cloud.google.com/automl/">Google Cloud AutoML</a> is at the top of my wish list, especially now that we’ve gotten to work with its alpha release. AutoML has several neural network architectures pre-created for specific classes of ML problems and tunes the network architecture (using ML to train ML as Google says).
This means that you can feed in arbitrary pre-labeled objects like images and the service does the rest including converting the objects into model features. <a href="https://cloud.google.com/blog/big-data/2018/03/automl-vision-in-action-from-ramen-to-branded-goods">Here</a> is how it works and <a href="https://ai.googleblog.com/2017/11/automl-for-large-scale-image.html">here</a> are more brainy details and <a href="https://www.youtube.com/watch?v=GbLQE2C181U">here</a> is a demo of the AutoML Vision user experience. If Google opens up AutoML at the show next week then hopefully we could show what we did with it analyzing Cricket match recordings for one of our clients.</p> <p>One area where Google has been creating lots of new products is <a href="https://cloud.google.com/products/machine-learning/">Cloud AI APIs</a> for specific use cases with pre-trained models such as <a href="https://cloud.google.com/vision/">Vision API</a>, <a href="https://cloud.google.com/speech-to-text/">Speech-to-Text API</a> and <a href="https://cloud.google.com/video-intelligence/">Video Intelligence API</a>. I suspect we may see new APIs as well as enhancements and new features to the existing APIs. Dialogflow Enterprise Edition seems to be in very high demand these days so conversational UI is a hot area for innovations!</p> <p>I&#8217;ve mentioned Python 3 for Cloud Functions and it would also be good to see its support for App Engine Standard. Of course, we can use flexible environments already but orchestrating our custom docker images on Kubernetes Engine seems to be as good or better. Maybe Python 3 will never make it to App Engine Standard as Kubernetes is the new direction&#8230;</p> <p>Something I would like to see in Stackdriver is evolving it into a permanent platform to store logs and metrics instead of needing to export them into long term / archive storage. The export itself is easy, but it requires building some services on top of that for long term access. I&#8217;d rather have a single user experience and set of data availability interfaces. One can wish&#8230;</p> <p>Finally, while Google Data Studio is pretty cool, its capabilities are rather simplistic and it&#8217;s not very polished yet compared to something like Tableau. I&#8217;m really missing a visualization product as a GCP service that&#8217;s similar in capabilities and ease of use to Tableau.</p> <p>Turning to the biblical &#8220;Ask and it will be given to you&#8221;. Well, I hope for at least some! :)</p> <p>Do you have a wish list of GCP features?</p> </div></div> Alex Gorbachev https://blog.pythian.com/?p=104789 Fri Jul 20 2018 19:50:00 GMT-0400 (EDT) MongoDB BUG SERVER-31101 http://oracle-help.com/articles/mongodb/mongodb-bug-server-31101/ <p>A few months back, my team and I worked on a 3-stage MongoDB production upgrade with a replica set, following the path below.</p> <p>1.) DB and Backup DB upgraded from v3.0.4 to v3.0.15<br /> 2.) DB and Backup DB upgraded from v3.0.15 to 3.2.16<br /> 3.) DB and Backup DB upgraded from v3.2.16 to 3.4.7</p> <p>&nbsp;</p> <p>We are currently running version 3.4.7 and facing an issue in this version; we only found out after the upgrade that it is a known bug.</p> <p>1.) The bug: when an object (table/index) is created on the primary node, the same object is resynced on all the nodes.</p> <p>2.) After the resync, the same object is dropped again from the primary node, so the dependent datafile is also dropped from the filesystem.
But the resync process doesn&#8217;t work correctly, i.e. the object is also deleted from the other nodes, but the physical datafile is not deleted from the secondary node and consumes lots of space.</p> <p>&nbsp;</p> <p><strong>Resolution available in version 3.4.11.</strong></p> <p>So, due to downtime constraints on the application side, we fixed this issue on a temporary basis.</p> <p>For more details about this bug: <a href="https://jira.mongodb.org/browse/SERVER-31101">Click Here</a></p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/articles/mongodb/mongodb-bug-server-31101/">MongoDB BUG SERVER-31101</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Vinay Magrania http://oracle-help.com/?p=5138 Fri Jul 20 2018 19:23:18 GMT-0400 (EDT) Log Buffer #551: A Carnival of the Vanities for DBAs https://blog.pythian.com/log-buffer-551-carnival-vanities-dbas/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>This Log Buffer Edition covers Cloud, Oracle, and MySQL.</p> <p><span id="more-104786"></span></p> <p><strong>Cloud:</strong></p> <p>Artificial intelligence (<a href="https://aws.amazon.com/blogs/machine-learning/create-a-model-for-predicting-orthopedic-pathology-using-amazon-sagemaker/">AI</a>) and machine learning (ML) are gaining momentum in the healthcare industry, especially in healthcare imaging. The Amazon SageMaker approach to ML presents promising potential in the healthcare field. ML is considered a horizontal enabling layer applicable across industries.</p> <p>For years, <a href="https://aws.amazon.com/blogs/database/migrating-an-application-from-an-on-premises-oracle-database-to-amazon-rds-for-postgresql/">companies</a> have had to set up their own local databases and maintain the hardware themselves. However, as the cloud infrastructure continues to improve, there’s far less need to own and manage your own hardware.</p> <p>One of the <a href="https://aws.amazon.com/blogs/compute/introducing-amazon-api-gateway-private-endpoints/">biggest</a> trends in application development today is the use of APIs to power the backend technologies supporting a product. Increasingly, the way mobile, IoT, web applications, or internal services talk to each other and to application frontends is using some API interface.</p> <p>It’s <a href="https://azure.microsoft.com/en-gb/blog/announcing-new-options-for-sql-server-2008-and-windows-server-2008-end-of-support/">incredible</a> how much and how rapidly technology evolves. Microsoft’s server technology is no exception. We entered the 2008 release cycle with a shift from 32-bit to 64-bit computing, the early days of server virtualization and advanced analytics.</p> <p>Many of our customers with hybrid <a href="https://cloudplatform.googleblog.com/2018/07/vmware-and-google-cloud-building-the-hybrid-cloud-together-with-vrealize-orchestrator.html">cloud</a> environments rely on VMware software on-premises. They want to simplify provisioning and enable end-user self service.
At the same time, they also want to make sure they’re complying with IT policies and following IT best practices.</p> <p><strong>Oracle:</strong></p> <p>This post is about how to modify <a href="https://ilmarkerm.eu/blog/2018/07/modify-sqldeveloper-connections-using-ansible/#utm_source=rss&amp;utm_medium=rss">SQLDeveloper</a> connections using Ansible.</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/">Knowing</a> where log files are and how to turn on debugging is an essential part of any technical job and this goes for Power BI, too.</p> <p><a href="http://jastraub.blogspot.com/2018/07/removing-prior-versions-of-apex-in-pdb.html">Jason Straub</a> writes about removing prior versions of APEX in a PDB.</p> <p>Uwe Hesse shares an easy way to create large <a href="https://uhesse.com/2018/07/18/easy-way-to-create-large-demo-tables-in-exasol-and-oracle/">demo-tables</a> in Exasol and Oracle.</p> <p>In case you have a small application where development, test and maybe also production environment are on the same database and your applications in this <a href="https://www.apex-at-work.com/2018/07/set-apex-application-name-for-dev-test.html">environment</a> distinguish only by the application IDs.</p> <p><strong>MySQL:</strong></p> <p>While <a href="https://mariadb.com/resources/blog/data-streaming-mariadb">Big</a> Data is being used across the globe by companies to solve their analytical problems, sometimes it becomes a hassle to extract data from a bunch of data sources, do the necessary transformation and then eventually load it into an analytical platform such as Hadoop or something else.</p> <p>The X <a href="https://blog.gabriela.io/2018/07/18/a-small-dive-into-the-mysql-8-0-x-devapi/">DevAPI</a> is the common client-side API used by all connectors to abstract the details of the X Protocol. It specifies the common set of CRUD-style functions/methods used by all the official connectors to work with both document store collections and relational tables.</p> <p>If you’re reading consumer <a href="https://www.percona.com/blog/2018/07/18/why-consumer-ssd-reviews-are-useless-for-database-performance-use-case/">SSD</a> reviews and using them to estimate SSD performance under database workloads, you’d better stop. Databases are not your typical consumer applications and they do not use IO in the same way.</p> <p>There are multiple ways to configure <a href="https://www.continuent.com/mastering-continuent-clustering-series-connection-handling-in-the-tungsten-connector/">session</a> handling in the Connector. 
The three main modes are Bridge, Proxy/Direct and Proxy/SmartScale.</p> <p>ClusterControl 1.6.2 introduces new exciting Backup Management as well as Security &amp; Compliance features for MySQL &amp; PostgreSQL, support for <a href="https://severalnines.com/blog/clustercontrol-release-162-new-backup-management-and-security-features-mysql-postgresql">MongoDB</a> v 3.6 … and more!</p> </div></div> Fahd Mirza https://blog.pythian.com/?p=104786 Fri Jul 20 2018 09:58:08 GMT-0400 (EDT) Pythian at Google Next 2018: Who and Why https://blog.pythian.com/pythian-google-next-2018/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">Pythian has a longstanding reputation for invading major conferences in the technology and business areas of our services focus and Google NEXT in San Francisco next week is no exception.</span></p> <p><img class="alignnone size-full wp-image-104773" src="https://blog.pythian.com/wp-content/uploads/next-hero-3.max-1100x1100.png" alt="Google Next" width="1100" height="458" srcset="https://blog.pythian.com/wp-content/uploads/next-hero-3.max-1100x1100.png 1100w, https://blog.pythian.com/wp-content/uploads/next-hero-3.max-1100x1100-465x194.png 465w, https://blog.pythian.com/wp-content/uploads/next-hero-3.max-1100x1100-350x146.png 350w" sizes="(max-width: 1100px) 100vw, 1100px" /></p> <p><span style="font-weight: 400;">I thought it would be cool to share a bit of history behind our tradition and why we have a small army of Pythianites showing up, to show the love (like love your data) of our Google partnership &#8211; from our CEO to our many technology experts.</span></p> <p><span style="font-weight: 400;">First of all, the amount of </span><a href="https://pythian.com/clients/"><span style="font-weight: 400;">work</span></a><span style="font-weight: 400;"> we&#8217;ve been doing on </span><a href="https://pythian.com/google-cloud-platform/"><span style="font-weight: 400;">Google Cloud Platform</span></a><span style="font-weight: 400;"> (GCP) has been growing exponentially in the last couple of years and it&#8217;s incredible how much interest the platform generates among our customers. </span></p> <p><span style="font-weight: 400;">For example, today we </span><strong><a href="https://blog.pythian.com/pythian-achieves-google-cloud-platform-partner-specialization-in-infrastructure/">announced</a></strong><span style="font-weight: 400;"> that Pythian has achieved its Google Cloud Platform Specialization in Infrastructure. We’re proud of this achievement as it speaks to our proven expertise in helping our clients.</span></p> <p><span style="font-weight: 400;">Google NEXT is where we get to meet a lot of our existing (and future) clients as well as many of the awesome members of Google sales and supporting teams that we normally see during virtual meetings. </span></p> <p><span style="font-weight: 400;">It&#8217;s always exciting to learn about new things as they get announced. It&#8217;s especially cool when it happens with alpha products that we often get a chance to work on &#8212; seeing it announced live is something special. 
Meeting product managers and key members of Google engineering face to face is invaluable &#8212; again, normally we only see their faces on the screen, and it’s another plus on the partnership side that these folks are so accessible to us and that we’re recognized as having enough tech chops to be invited to participate in these valuable interactions.</span></p> <p><span style="font-weight: 400;">Google is also a leader in the open-source community and this is near and dear to Pythian&#8217;s heart and, with our strong ties to technical communities &#8211; we’ve always held engineering opinions high up. A significant part of this community will be at Google NEXT! Oh&#8230; and Google&#8217;s commitment to open-source is yet another reason why we are so aligned with Google&#8217;s cloud strategy.</span></p> <p><span style="font-weight: 400;">Conferences are an excellent place to learn and hone technical chops for senior engineers who often see less value in training courses and other traditional education. Google NEXT has awesome technical content not only generally about cloud but also AI and machine learning, which is the focus of my Enterprise Data Science team. As a result, much of my team will be there too.</span></p> <p><span style="font-weight: 400;">GCP is often called the leading cloud when it comes to building AI applications. Customers continually share that Google&#8217;s AI and Big Data capabilities are often the reasons they choose GCP. It’s fun, challenging and exciting work to build AI products on GCP and we enjoy showing what we’re working on so we are going to demo our cool data platform product as well as show off some cool demos like our gameplay video toxicity detection demo. You will find us in<strong> booth #S1324</strong> of the exhibition area of Moscone South &#8212; we are keen to chat about the ins and outs and the details of what we do and why/how we do it.</span></p> <p><span style="font-weight: 400;">I must emphasize that partnering with Google has been a blast over the last four years and a very rewarding experience &#8212; we get to apply cool technology to solve interesting business problems while our customers see how cloud transforms their business and how data insights change the way they work. We can’t imagine being anywhere else to shout from the rooftops about how much we love Google. Gotta show some respect, eh?</span></p> <p><span style="font-weight: 400;">Finally, conferences are always a great place to meet like-minded people in your profession. This applies internally and externally. In fact, because my data science team is so global and spread out, it&#8217;s the best opportunity for many of us to get together in one place &#8212; it doesn&#8217;t happen too often, and we’re going to make it a ‘must’ at Google NEXT for years to come.</span></p> <p><span style="font-weight: 400;">I couldn’t begin to list all the other great reasons why attending Google NEXT is such a great idea. If you are at Google NEXT,  then you must have your own very good reasons to be there (care to share in the comments?) and there is a very good chance you’ll bump into one of us &#8211; in the presentations, social events, as well as in the exhibition area (make sure to find us there!). 
Watch for the “Love Your Data” t-shirts roaming around and stay tuned to our <a href="http://blog.pythian.com">blog</a>, our </span><a href="https://twitter.com/Pythian"><span style="font-weight: 400;">Twitter</span></a><span style="font-weight: 400;"> feed and our </span><a href="https://www.linkedin.com/company/pythian/"><span style="font-weight: 400;">LinkedIn</span></a><span style="font-weight: 400;"> feed as we will be covering things we see and like and find cool at Google NEXT, live throughout the conference (as well as announcing our own big news).</span></p> <p><span style="font-weight: 400;">And our very own VP of Business Development will </span><span style="font-weight: 400;">deliver a session titled </span><i><span style="font-weight: 400;">Solution Selling Mindset – Moving from Product to Solution Selling </span></i><span style="font-weight: 400;">on Monday, July 23, 2018 at 11:00 am PDT (room number 310-314).</span></p> <p><span style="font-weight: 400;">Oh&#8230; and if you want to meet with anyone of us here, here are some names to put with the faces you’ll find &#8211; reach out to Vanessa at </span><a href="mailto:simmons@pythian.com"><span style="font-weight: 400;">simmons@pythian.com</span></a><span style="font-weight: 400;"> to get connected:</span></p> <p><span style="font-weight: 400;">Our business and technology leadership:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Paul Vallée &#8211; our fearless CEO. Yep. The man.</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Vanessa Simmons &#8211; leads Business Development. Our success with Google is in big part her fault (not that she complains!)</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Keith Millar &#8211; owns the whole Pythian service business. 
Not a small feat.</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Adam Muise &#8211; our Cloud CTO, and new face on the scene, wholeheartedly supporting the partnership</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Alex Gorbachev &#8211; that&#8217;s me and today I lead Enterprise Data Science delivery and teams at Pythian.</span></li> </ul> <p><span style="font-weight: 400;">Our customer leadership:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Jezriel Zapata &#8211; west coast sales</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Ivana Pirochtová &#8211; partner management in EMEA</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Elliot Zissman &#8211; regional lead EMEA</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Ted Maslach &#8211; regional lead North America</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Jared Leuschen &#8211; west coast sales</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Nathan Simmons &#8211; west coast sales</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Ben Abbey &#8211; east coast/central/canada sales</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Henry Tang &#8211; east coast/central/canada sales</span></li> </ul> <p><span style="font-weight: 400;">Our engineering leadership in cloud architecture, machine learning, big data and software engineering:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">John Laham &#8211; GCP solutions architect</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Kartick Sekar &#8211; GCP solutions architect</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Paul Spiegelhalter &#8211; data scientist</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Carlos Timoteo &#8211; data scientist</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Ekaba Bisong &#8211; data scientist</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Devin Sit &#8211; data science software engineer</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Danil Zburivsky &#8211; big data/analytics/platform product lead</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">Nelson Calero &#8211; lead database consultant</span></li> </ul> <p><strong>Or leave a comment below and we can set up a time to meet at our booth #S1324 (or just show up). 
See you there starting Sunday!</strong></p> <p>&nbsp;</p> </div></div> Alex Gorbachev https://blog.pythian.com/?p=104772 Thu Jul 19 2018 18:27:59 GMT-0400 (EDT) Announcement: Pythian Achieves Google Cloud Platform Partner Specialization In Infrastructure https://blog.pythian.com/pythian-achieves-google-cloud-platform-partner-specialization-in-infrastructure/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><div class="announcement-post"><p>Today, <span style="font-weight: 400;"><a href="http://www.pythian.com">Pythian</a> is pleased to announce t</span><span style="font-weight: 400;">hat it has achieved a Google Cloud Partner </span><a href="https://cloud.google.com/partners/"><span style="font-weight: 400;">Specialization</span></a><span style="font-weight: 400;"> in Infrastructure.</span></p> <p><span style="font-weight: 400;">The Google Cloud Partner Specialization Program is designed to connect customers to qualified partners with demonstrated technical proficiency and proven project success in areas including Data Analytics, Application Development, Infrastructure and Machine Learning.</span></p> <p><span style="font-weight: 400;">Pythian was awarded the Google Cloud Platform Partner Specialization in Infrastructure due to its </span><span style="font-weight: 400;">demonstrated success assisting customers architect and build their Google Cloud Platform infrastructure and workflows, and migrate to Google Cloud Platform with Google’s</span> <a href="https://cloud.google.com/compute/"><b>Compute Engine</b></a><b>, </b><a href="https://cloud.google.com/kubernetes-engine/"><b>Kubernetes Engine</b></a><b>, </b><a href="https://cloud.google.com/stackdriver/"><b>Stackdriver</b></a><b>, </b><a href="https://console.cloud.google.com/launcher/details/google-cloud-platform/cloud-virtual-network?pli=1"><b>Cloud Virtual Network</b></a><b>, </b><a href="https://cloud.google.com/load-balancing/"><b>Cloud Load Balancing</b></a><b>, </b><a href="https://cloud.google.com/interconnect/"><b>Cloud Interconnect</b></a><b>, </b><a href="https://cloud.google.com/dns/"><b>Cloud DNS</b></a><b>, </b><a href="https://cloud.google.com/iam/"><b>Cloud IAM</b></a><span style="font-weight: 400;"> and </span><a href="https://cloud.google.com/kms/"><b>Cloud Key Management Service</b></a><b>.</b></p> <p><span style="font-weight: 400;">“We’re proud to achieve the GCP Partner Specialization in Infrastructure, which recognizes the breadth and depth of expertise that we bring to our partners and clients worldwide every day,” said Vanessa Simmons, VP of Business Development at Pythian. With over twenty years of experience in data and infrastructure services, our experts have the knowledge to help businesses move seamlessly to the cloud without interruption—then, with a solid foundation, they can take another leap forward by optimizing their data for the future leveraging machine learning, for example.”</span></p> <p><span style="font-weight: 400;">To learn more about Pythian’s full range of </span><a href="https://pythian.com/google-cloud-platform/."><span style="font-weight: 400;">expert services</span></a><span style="font-weight: 400;"> for Google Cloud Platform, including real use cases and specializations, visit us at Google NEXT, July 24-26th, 2018 at the Moscone Center in San Francisco, booth #S1324. 
Vanessa Simmons will also deliver a session titled </span><i><span style="font-weight: 400;">Solution Selling Mindset &#8211; Moving from Product to Solution Selling </span></i><span style="font-weight: 400;">on Monday, July 23, 2018 at 11:00 am PDT (room number 310-314). </span></p> <p><b>Vanessa Simmons will be available for interviews at Google NEXT. To book an on-site interview email media@pythian.com.</b></p> </div> </div></div> Pythian News https://blog.pythian.com/?p=104774 Thu Jul 19 2018 14:01:24 GMT-0400 (EDT) Easier Execution Plans in Oracle SQL Developer https://www.thatjeffsmith.com/archive/2018/07/easier-execution-plans-in-oracle-sql-developer/ <p>SQL tuning can be fun.</p> <p>The database gives us MANY things to help with this.</p> <p>There is even a nice set of <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/refrn/V-SQL.html#GUID-2B9340D7-4AA8-4894-94C0-D5990F67BE75" rel="noopener" target="_blank">views</a> that contain everything we need to know about our SQL and their execution plans.</p> <p>And, we have a really cool PL/SQL package, <a href="https://docs.oracle.com/database/121/ARPLS/d_xplan.htm#ARPLS378" rel="noopener" target="_blank">DBMS_XPLAN</a>, for generating reports on our troublesome queries. </p> <p>The not-fun part is going from a SQL statement to the SQL_ID.</p> <p>Now, SQL Developer has for a few releases made it easy to SEE what those SQL_IDs are for your queries&#8230;</p> <div id="attachment_3945" style="width: 564px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2013/07/explain4_1.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2013/07/explain4_1.png" alt="" width="554" height="198" class="size-full wp-image-3945" /></a><p class="wp-caption-text">Note the drop-down control added to the Explain Plan button in the worksheet toolbar.</p></div> <p>Clicking that hyperlinked text, we&#8217;d go get the plan for you and feed it to OUR plan viewer. Which is really nice. But maybe you just want to always run DBMS_XPLAN. </p> <h3>New for Version 18.2</h3> <p>We&#8217;ve made it VERY EASY to get those DBMS_XPLANs the old-fashioned way &#8211; we generate the call for you.</p> <div id="attachment_6885" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/182-xplan.png"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/07/182-xplan.png" alt="" width="1024" height="643" class="size-full wp-image-6885" /></a><p class="wp-caption-text">Put your cursor on the query, click on the drop-down arrow on the Explain Plan button.</p></div> <p>If you want to change up the options to the call, you just need to amend the query. What you DON&#8217;T have to do now is look up the SQL_ID or write the SELECT statement from scratch anymore. </p> <p>Also, we don&#8217;t run that query for you automatically &#8211; you need to decide if you want to get the results in a GRID (F9) or as unformatted text (F5). </p> thatjeffsmith https://www.thatjeffsmith.com/?p=6884 Thu Jul 19 2018 10:41:25 GMT-0400 (EDT) Modify SQLDeveloper connections using Ansible https://ilmarkerm.eu/blog/2018/07/modify-sqldeveloper-connections-using-ansible/#utm_source=rss&utm_medium=rss <p>This post continues my previous post about <a href="/blog/2018/02/using-ansible-to-distribute-sql-developer-preferences/">modifying SQLDeveloper preferences with ansible</a>.
Building on the same motivation and technique, my goal in this post is to centrally push out, and keep updated, the connection details for SQLDeveloper on the client side.</p> <p>First let&#8217;s declare the connections we want to push out:</p> <p><script src="https://gist.github.com/ilmarkerm/e1f17b25aa53abfafb587762af79353e.js?utm_source=rss&utm_medium=rss"></script></p> <p>NB! I&#8217;m pushing out connections referring to TNS names, since I want to add some extra RAC related settings to each connection description.</p> <p>Next we need to create <strong>connection.yml</strong>, which will contain tasks to add a single connection to SQL Developer. This file will be called for every connection from the main playbook.</p> <p><script src="https://gist.github.com/ilmarkerm/b257939127e282683dc99a9d488e17ab.js?utm_source=rss&utm_medium=rss"></script></p> <p>Now the main playbook.</p> <p><script src="https://gist.github.com/ilmarkerm/eee3b3ab43c80ee6ae274ba44edfdc8c.js?utm_source=rss&utm_medium=rss"></script></p> <p>NB! This is just an extract from the playbook. I expect you are familiar with ansible and know how to put all these three files together <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png?utm_source=rss&utm_medium=rss" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> ilmarkerm https://ilmarkerm.eu/blog/?p=441 Thu Jul 19 2018 07:03:02 GMT-0400 (EDT) VirtualBox 5.2.16 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/LoK9vmVvuOo/ <p><img class="size-full wp-image-4959 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2015/05/virtualbox.jpg" alt="" width="129" height="145" />Hot on the heels of 5.2.14 two weeks ago, we now have <a href="https://www.virtualbox.org/">VirtualBox</a> 5.2.16.</p> <p>The <a href="https://www.virtualbox.org/wiki/Downloads">downloads</a> and <a href="https://www.virtualbox.org/wiki/Changelog#2">changelog</a> are in the usual places.</p> <p>I’ve done the install on my Windows 10 PC at work and my Windows 10 laptop at home and in both cases it worked fine. I can&#8217;t see any problems using it with Vagrant 2.1.2 either.</p> <p>I would have a go at installing on my MacBook Pro, only the latest macOS updates have turned it into a brick again. Nothing changes&#8230;</p> <p>Cheers</p> <p>Tim…</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/07/19/virtualbox-5-2-16/">VirtualBox 5.2.16</a> was first posted on July 19, 2018 at 8:57 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /> Tim...
https://oracle-base.com/blog/?p=8249 Thu Jul 19 2018 03:57:19 GMT-0400 (EDT) Power BI 101 &#8211; Log Files and Tracing https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/ <p>Knowing where log files are and how to turn on debugging is an essential part of any technical job and this goes for Power BI, too.  Remember, as I learn, so does everyone else&#8230; Come on, pretty please?</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/never/" rel="attachment wp-att-8053"><img class="alignnone size-full wp-image-8053" src="https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/07/never.gif?resize=498%2C205&#038;ssl=1" alt="" width="498" height="205" data-recalc-dims="1" /></a></p> <h4>Power BI Desktop</h4> <p>Log files and traces can be accessed one of two ways:</p> <ul> <li>Via the Power BI Application</li> <li>Via File Explorer</li> </ul> <p><strong>In the Power BI application</strong>, go to File &#8211;&gt; Options and Settings &#8211;&gt; Options &#8211;&gt; Diagnostics.</p> <p>Crash and dump files are automatically stored with an option to disable them from this screen, but I&#8217;m unsure why you&#8217;d ever want to do this.  If Power BI does crash, you would lose any valuable data on what the cause was.</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/tracing1-2/" rel="attachment wp-att-8055"><img class="alignnone wp-image-8055" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing1.png?resize=318%2C194&#038;ssl=1" alt="" width="318" height="194" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing1.png?resize=1024%2C625&amp;ssl=1 1024w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing1.png?resize=300%2C183&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing1.png?resize=768%2C469&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing1.png?w=1519&amp;ssl=1 1519w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing1.png?w=1300&amp;ssl=1 1300w" sizes="(max-width: 318px) 100vw, 318px" data-recalc-dims="1" /></a></p> <p>To debug with a trace, you&#8217;ll need to enable it from this screen as well, as it&#8217;s not turned on by default.  Remember that tracing can be both resource and storage intensive, so only enable it when you actually need to diagnose something.  You can also choose to bypass tracing the geo code cache, as this is used to help map coordinates and it can be very chatty.</p> <p>To view files, you can click on the open crash/dump file folder and this will open up a File Explorer to the traces directory on your PC.</p> <h4><strong>Directly From File Explorer:</strong></h4> <p>Ensure that File Explorer has viewing set to display hidden items.</p> <p>C:\Users\&lt;user&gt;\AppData\Local\Microsoft\Power BI Desktop\Traces</p> <h4>Log Files</h4> <p>These are all retained inside the Performance folder under the Traces directory.</p> <p>The files will be named with the following naming convention:</p> <p>&lt;Usage&gt;.&lt;PID&gt;.&lt;Timestamp&gt;&lt;unique identifier&gt;</p> <p>Locating the files that you need for your current process is easiest if you sort by Date Modified.
Verify that you&#8217;re working with the file that is being written to and not the file used to keep track of startup and shutdown logging (0 KB):</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/tracing4-2/" rel="attachment wp-att-8051"><img class="alignnone wp-image-8051" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing4.png?resize=438%2C136&#038;ssl=1" alt="" width="438" height="136" srcset="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing4.png?resize=1024%2C318&amp;ssl=1 1024w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing4.png?resize=300%2C93&amp;ssl=1 300w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing4.png?resize=768%2C239&amp;ssl=1 768w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing4.png?w=1512&amp;ssl=1 1512w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing4.png?w=1300&amp;ssl=1 1300w" sizes="(max-width: 438px) 100vw, 438px" data-recalc-dims="1" /></a></p> <p>The third log file in the list above, and also the one started before the executable for Power BI Desktop (PID 13396), is the Microsoft Mashup Container (Microsoft.Mashup.Container.NetFX40.exe), with its own PID of 16692.  It contains valuable information about calculations, measures and other caching processes.  Take care to ensure the PID of the one used by Power BI in the logs matches the one you&#8217;re inspecting in the Task Manager &#8211; Excel and other programs are also known to have a version of this executable, so there may be more than one listed for Power BI, as well as others for different Microsoft applications.</p> <h4>Log File Breakdown</h4> <p>Each file will contain entries providing information on high level processing, including start time, total size of cache allocated for the task, process information, Process ID (PID), Transaction ID (TID) and duration.</p> <p>An example of an entry can be seen below:</p> <pre>ObjectCacheSessions/CacheStats/Size {"<strong>Start":"2018-07-19T01:42:24.9707127Z</strong>","Action":"ObjectCacheSessions/CacheStats/Size","entryCount":"1",<strong>"totalSize":"24</strong>","ProductVersion":"2.59.5135.781 (PBIDesktop)","ActivityId":"00000000-0000-0000-0000-000000000000",<strong>"Process":"PBIDesktop</strong>"<strong>,"Pid":13396,"Tid":8</strong>,"<strong>Duration":"00:00:00.0046865</strong>"}</pre> <p>We can easily match up the Process name and the PID with what is displayed in our Task Manager detail view:</p> <p><a href="https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/tracing3-2/" rel="attachment wp-att-8050"><img class="alignnone wp-image-8050" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing3.png?resize=460%2C228&#038;ssl=1" alt="" width="460" height="228" srcset="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing3.png?resize=1024%2C508&amp;ssl=1 1024w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing3.png?resize=300%2C149&amp;ssl=1 300w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing3.png?resize=768%2C381&amp;ssl=1 768w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/07/tracing3.png?w=1300&amp;ssl=1 1300w" sizes="(max-width: 460px) 100vw, 460px" data-recalc-dims="1" /></a></p> <p>We&#8217;ve now identified the process, the amount of memory allocated to perform a task captured in the log, start time and the duration.
The information in these log files can assist when diagnosing if Power BI desktop crashes, but the data collected is quite rudimentary.</p> <p>If you shut down Power BI Desktop, the PBIDesktop* log file writes to the startup file, which was once empty and it then empties and saves off the timestamp of the exit of the program.</p> <p>The Microsoft Mashup file has much of the same information, but includes deeper level processing work by Power BI, such as work done in the Query Editor or when we create a measure or new column/table.</p> <p>In the three examples from the file below, you can see a compile, a save and then an evaluate task.  Note that the Host Process ID is my Power BI Desktop we&#8217;ve seen earlier, but the interaction with the Microsoft Mashup Container is demonstrated as well:</p> <pre>SimpleDocumentEvaluator/GetResult/Compile {"Start":"2018-07-19T01:48:46.1441843Z","Action":"SimpleDocumentEvaluator/<strong>GetResult/Compile</strong>","<strong>HostProcessId":"13396</strong>","ProductVersion":"2.59.5135.781 (PBIDesktop)","ActivityId":"04248470-07e1-4862-b184-a32f186f26fd","<strong>Process":"Microsoft.Mashup.Container.NetFX40","Pid":16692</strong>,"Tid":1,"<strong>Duration":"00:00:00.4302569</strong>"} ObjectCache/CacheStats/Size {"Start":"2018-07-19T01:48:47.3504705Z","Action":"ObjectCache/<strong>CacheStats/Size</strong>","<strong>HostProcessId":"13396</strong>","entryCount":"5","totalSize":"14564","ProductVersion":"2.59.5135.781 (PBIDesktop)","ActivityId":"04248470-07e1-4862-b184-a32f186f26fd","<strong>Process":"Microsoft.Mashup.Container.NetFX40","Pid":16692</strong>,"Tid":1,"<strong>Duration":"00:00:00.0000170</strong>"} SimpleDocumentEvaluator/<strong>GetResult/Evaluate</strong> {"Start":"2018-07-19T01:48:46.5744678Z","Action":"SimpleDocumentEvaluator/GetResult/Evaluate","<strong>HostProcessId":"13396</strong>","ProductVersion":"2.59.5135.781 (PBIDesktop)","ActivityId":"04248470-07e1-4862-b184-a32f186f26fd","<strong>Process":"Microsoft.Mashup.Container.NetFX40","Pid":16692</strong>,"Tid":1,"<strong>Duration":"00:00:00.7780750</strong>"}</pre> <p>Another common file in the Performance directory will contain the msmdsrv* naming convention, which collect log information on the data source loader.  Duration information and cache/memory allocation could offer valuable information on poor performance during data loading processes.  First stop is always to check the settings for the desktop to see what has been set for memory allocation vs. assuming it&#8217;s the default.</p> <p>If I just start the program and don&#8217;t open anything, only the high level processing of starting, basic memory allocation and stopping will be tracked in the PBIDesktop* file until I open up a PBIX file.  Then anything that needs to be updated and refreshed for the visuals, etc. 
will begin to write log data to the Microsoft Mashup log file and, if a data refresh must be performed, the msmdsrv file.</p> <h4>Trace files</h4> <p>When you turn on tracing, as shown at the beginning of this post, a file is created in the parent directory, TRACES.</p> <p>When enabled, and after a restart of Power BI Desktop, you will receive not only similar information about PID, TID, the process and the duration, but also granular information about Power BI and what&#8217;s going on behind the scenes:</p> <ul> <li>Application graphics info</li> <li>Settings</li> <li>Parameters</li> <li>Background processes</li> <li>Caching</li> <li>Extensions</li> <li>Query edits</li> <li>Changes applied</li> </ul> <p>You&#8217;ll even see entries similar to the following:</p> <p><strong>SharedLocalStorageAccessor/AcquireMutex</strong></p> <p>A mutex is a small, efficient synchronization object.  Because mutexes have thread affinity, a mutex can only be released by the thread in Power BI that owns it.  If it&#8217;s released by another thread, an exception will be thrown in the application and trapped in the trace file.</p> <p>One interesting aspect of tracing in Power BI Desktop: the options are put back to their defaults, with granular tracing disabled, when you restart the application.</p>
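<p>Because every entry is just a slash-delimited action name followed by a JSON payload (Start, Action, Pid, Tid, Duration and so on), it&#8217;s easy to sift these files programmatically instead of eyeballing them. The sketch below is a minimal illustration, not part of Power BI or the original post: it scans one Performance log file and prints any entry slower than a threshold. The file name is a made-up placeholder, the field names are assumed from the sample entries shown above, the 500 ms threshold is arbitrary, and Go is used only as an example language &#8211; any language with a JSON parser would do.</p>
<pre>
// logscan.go - minimal sketch: list slow entries in a Power BI Desktop Performance log file.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
	"time"
)

// entry maps the fields observed in the sample log entries above; unknown fields are ignored.
type entry struct {
	Start    string `json:"Start"`
	Action   string `json:"Action"`
	Duration string `json:"Duration"` // e.g. "00:00:00.7780750"
	Pid      int    `json:"Pid"`
}

// parseDur converts the "hh:mm:ss.fffffff" duration format into a time.Duration.
func parseDur(s string) (time.Duration, error) {
	parts := strings.SplitN(s, ":", 3)
	if len(parts) != 3 {
		return 0, fmt.Errorf("unexpected duration %q", s)
	}
	return time.ParseDuration(parts[0] + "h" + parts[1] + "m" + parts[2] + "s")
}

func main() {
	// Placeholder name - point this at a real file from ...\Power BI Desktop\Traces\Performance.
	f, err := os.Open("PBIDesktop.13396.example.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	threshold := 500 * time.Millisecond
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		i := strings.Index(line, "{")
		if i == -1 {
			continue // not an action + JSON entry
		}
		var e entry
		if err := json.Unmarshal([]byte(line[i:]), &e); err != nil {
			continue // skip lines that are not well-formed JSON
		}
		d, err := parseDur(e.Duration)
		if err == nil && d > threshold {
			fmt.Printf("%s  %-50s pid=%d  %s\n", e.Start, e.Action, e.Pid, d)
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}
</pre>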
valign="top"> <span class="sb_title">StumbleUpon</span><br> <a href="http://www.stumbleupon.com/submit?url=https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/&title=Power BI 101 - Log Files and Tracing"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/stumble.gif?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"></a> </td></tr> </tbody></table></div><br><div style="clear:both"></div><div style="background:#EEEEEE; padding:0px 0px 0px 15px; margin:10px 0px 0px 0px;"><div style="padding:5px 0px 5px 0px;"><b>Comments:</b>&nbsp;&nbsp;<a href="https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/#respond">0 (Zero), Be the first to leave a reply!</a></div><br><div style="clear:both"></div><div style="padding:13px 0px 5px 0px;"><span style="border-bottom:1px dashed #003399;padding-bottom:4px;"><strong>You might be interested in this:</strong></span>&nbsp;&nbsp;<br><ul style="margin:0; padding:0; padding-top:10px; padding-bottom:5px;"><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2016/03/gold-agent-image/" >Gold Agent Image</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2016/06/installing-em13c-on-windows-tips/" >Installing EM13c on Windows Tips</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2013/08/exadata-cant-fix-your-temp-problem/" >Exadata Can't Fix Your Temp Problem</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2011/12/solid-choices-for-oracle-tuning-on-solid-state-disk/" >Solid Choices for Oracle Tuning on Solid State Disk</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2009/11/ora-01427-single-row-subquery-returns-more-than-one-row/" >ORA-01427: single-row subquery returns more than one row</a></li></ul></div></div><hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBA Kevlar</a> [<a href="https://dbakevlar.com/2018/07/power-bi-101-log-files-and-tracing/">Power BI 101 - Log Files and Tracing</a>], All Right Reserved. 
2018.</small><br> dbakevlar https://dbakevlar.com/?p=8049 Wed Jul 18 2018 23:54:59 GMT-0400 (EDT) Optimizing CPU cores and threads for Oracle on AWS https://blog.pythian.com/optimize-cpu-cores-and-threads-for-oracle-on-aws/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>About a couple of weeks ago, AWS <a href="https://aws.amazon.com/about-aws/whats-new/2018/06/introducing-optimize-cpus-for-amazon-rds-for-oracle/?sc_channel=em&amp;sc_campaign=Launch_RN20180611&amp;sc_publisher=aws&amp;sc_medium=em_&amp;sc_content=launch_la_nontier1&amp;sc_country=&amp;sc_geo=&amp;sc_category=mult&amp;sc_outcome=launch&amp;mkt_tok=eyJpIjoiTVRWbVpHRTFNV1prTW1RNCIsInQiOiJyZ1lpZlBuQVNpcE9MSURJUzU5Tkt1ZVBiZ2lnZ1F4V1JJYThRQWJzek1EelU2QzFOM1lXcm9JSURTSGdERElqOXR0YkZXNEdvM1JvOEtpVWxwR0NWYzIxcU05SVc4S3hBWm1kSTA5TTR0U0N5WDdIYlJcL1pmd3Q4UmNsUUFwSU4ifQ%3D%3D" target="_blank" rel="noopener">introduced</a> a new option to manage CPU cores and threads on EC2 or RDS instances. So now you can reduce the number of cores or threads per core for your AWS instances. It doesn&#8217;t mean you are going to pay less money to Amazon when you reduce cores or threads, but it might help if you have software licensed by CPU cores or if you are not CPU bound and want a certain instance type that provides the desirable memory and IO bandwidth.</p> <p>Here is an example: You want your system to have at least 200 GiB memory and sustain IO load up to 7,000 MBs and 37,500 IOPS. An r4.8xlarge instance fulfills all the requirements, but it has 32 vCPU when you need only 16 vCPU and you don&#8217;t want to pay for additional licenses. In some cases, the licensing costs can be quite heavy. Let&#8217;s imagine that one CPU core license for our software costs $50,000. Considering that one core is two vCPU, it adds up to 50,000 x 8 which is $400,000 in licensing costs plus another $90,000 for yearly support. This total is for the additional 8 CPU/16 vCPU licensing cost. Having the option to reduce the number of cores and threads can save you about half of million and save another hundred thousand every year. It sounds like a good option even if you still pay AWS for those unused cores.</p> <p>So how we can do that? The procedure is pretty simple. I am going to use AWS RDS for Oracle as an example. If you create a new instance, you can choose a number of cores when you either launch a new database or restore from a snapshot. Also, you can modify an existing instance. Unfortunately, the GUI Web interface doesn&#8217;t provide those options for now. As far as I have been told, the AWS team is working to restore functionality and plans to make it available soon. Let&#8217;s see how we can do it using AWS CLI.</p> <p>We have one new option &#8220;&#8211;processor-features&#8221; for creating, modifying or restoring an RDS instance from a command line. If you already have a template you use to provision or restoring your instances, you need to just add the parameter and provide the number of cores and threads you plan to make available. Or, if you rely on the GUI interface, you can create the instance from the AWS console and then modify it from the command line. 
I will show you both methods.</p> <p>Before doing anything in the AWS command line make sure you have the latest version of AWS CLI or you may get a message like &#8220;Unknown options: &#8211;processor-features, Name=coreCount,Value..&#8221; For some reason, the pip upgrade didn&#8217;t work on my Mac, so I used the bundle to update the tool.</p> <pre lang="bash" escaped="true">MacBook:~ $ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws Password: Running cmd: /anaconda3/bin/python virtualenv.py --no-download --python /anaconda3/bin/python ... You can now run: /usr/local/bin/aws --version MacBook:~ $ aws --version aws-cli/1.15.59 Python/3.6.5 Darwin/17.7.0 botocore/1.10.58 MacBook:~ $ </pre> <p>With our tool updated, we can run the command &#8220;aws rds create-db-instance&#8221; and create an instance with the reduced number of cores. I used db.r3.xlarge class instance for AWS RDS for Oracle for the demonstration. The full command looks like :</p> <pre lang="text" escaped="true">aws rds create-db-instance --db-instance-identifier orcl --allocated-storage 20 --db-instance-class db.r3.xlarge --engine oracle-ee --master-username superdba --master-user-password "Mysecretpassword" --no-multi-az --backup-retention-period 0 --engine-version 12.1.0.2.v12 --license-model bring-your-own-license --publicly-accessible --storage-type gp2 --processor-features "Name=coreCount,Value=1" </pre> <p>The instance was created and, for db.r3.xlarge, we should have 2 CPU cores and 4 vCPU (threads) by default. We have reduced the number of cores from default 2 to 1 or to 2 vCPU having two threads per core. It should now show only two CPUs used by the Oracle instance. Let&#8217;s connect to the instance and check it out.</p> <pre lang="sql" escaped="true">orcl.qwrrtdbklb.us-east-1.rds.amazonaws.com:1521/orcl&gt; show parameter cpu_count NAME TYPE VALUE ------------------------------- ------- ----- cpu_count integer 2 orcl.qwrrtdbklb.us-east-1.rds.amazonaws.com:1521/orcl&gt; </pre> <p>It looks right and even if we don&#8217;t see it from the Web GUI interface, we always can confirm it using the CLI command &#8220;aws rds describe-db-instances&#8221;:</p> <pre lang="text" escaped="true">MacBook:~ $ aws rds describe-db-instances { "DBInstances": [ { "DBInstanceIdentifier": "orcl", "DBInstanceClass": "db.r3.xlarge", "Engine": "oracle-ee", …. 
"PerformanceInsightsEnabled": false, "ProcessorFeatures": [ { "Name": "coreCount", "Value": "1" } ] } ] } MacBook:~ $ </pre> <p>Let&#8217;s modify the number of threads per core reducing number of vCPU to one.</p> <pre lang="text" escaped="true">MacBook:~ $ aws rds modify-db-instance --db-instance-identifier orcl --processor-features "Name=threadsPerCoe,Value=1" --apply-immediately { "DBInstance": { "DBInstanceIdentifier": "orcl", "DBInstanceClass": "db.r3.xlarge", "Engine": "oracle-ee", … </pre> <p>After the maintenance has been completed, we can verify the number of CPUs used by the instance again.</p> <pre lang="sql" escaped="true">orcl.qwrrtdbklb.us-east-1.rds.amazonaws.com:1521/orcl&gt; show parameter cpu_count NAME TYPE VALUE --------- ------- ----- cpu_count integer 1 orcl.qwrrtdbklb.us-east-1.rds.amazonaws.com:1521/orcl&gt; </pre> <p>It worked correctly and now we can return everything back to default using the option &#8220;&#8211;use-default-processor-features&#8221;.</p> <pre lang="text" escaped="true">MacBook:~ $ aws rds modify-db-instance --db-instance-identifier orcl --use-default-processor-features --apply-immediately { "DBInstance": { "DBInstanceIdentifier": "orcl", "DBInstanceClass": "db.r3.xlarge", "Engine": "oracle-ee", … orcl.qwrrtdbklb.us-east-1.rds.amazonaws.com:1521/orcl&gt; show parameter cpu_count NAME TYPE VALUE --------- ------- ----- cpu_count integer 4 orcl.qwrrtdbklb.us-east-1.rds.amazonaws.com:1521/orcl&gt; </pre> <p>It has worked pretty well from the command line and I hope the GUI interface is going to be fixed soon so that we are able to set it up and see it from the web console. I think the new feature may help some customers with their cloud migration, providing more options to choose instance type, according to the load rather than the CPU count.</p> </div></div> Gleb Otochkin https://blog.pythian.com/?p=104760 Wed Jul 18 2018 09:32:43 GMT-0400 (EDT) ODC Latin America Tour (Northern Leg) 2018 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/-Z7SA-LmCbo/ <p><img class="alignleft wp-image-8247" src="https://oracle-base.com/blog/wp-content/uploads/2018/07/logo-laouc.png" alt="" width="200" height="144" />Just a quick heads-up to say I&#8217;ll be taking part in most of the <a href="http://www.laouc.org/2018/03/23/ya-esta-abierto-el-llamado-para-propuestas-de-odc-lad-tour-2018/">ODC Latin America Tour (Northern Leg) 2018</a>. These are the events I&#8217;ll be speaking at.</p> <ul> <li>Quito, Ecuador &#8211; 14th August</li> <li>Barranquilla, Colombia &#8211; 16th &#8211; 17th August</li> <li>San Jose, Costa Rica &#8211; 20th August</li> <li>Panama City, Panama &#8211; 22nd August</li> <li>Mexico City, Mexico &#8211; 24th August</li> </ul> <p>There is also an event in Guatemala on the 28th August, but I can&#8217;t make that as it adds another 4 days on to the trip, which isn&#8217;t practical for me. Sorry folks!</p> <p>I&#8217;m still in the process of booking flights and hotels, but I&#8217;ve got the travel approval now from the Oracle ACE Program &amp; Oracle Developer Champions Program, I everything should be good now!</p> <p>It&#8217;s great that people go to the trouble to organise these tours and that Oracle sponsor them, but they only work if attendees come and interact. Your stories are as important as our presentations. Please make the effort to come along, join in and make the Latin America tour as fun as usual! 
<img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>See you soon!</p> <p>Cheers</p> <p>Tim&#8230;</p> <p><strong>Update</strong>. It seems some people think I&#8217;m doing some additional events on the <a href="http://www.laouc.org/2018/03/23/ya-esta-abierto-el-llamado-para-propuestas-de-odc-lad-tour-2018/">ODC Latin America Tour</a>. I don&#8217;t know if this is because of some mistakes on event agendas, or some other mistaken communication. Sorry to disappoint you, but these are the only events I&#8217;m doing on the tour this year. I didn&#8217;t agree to do any more and I don&#8217;t have time, approval or funding for any more. If you do notice something that contradicts this post, please inform the relevant user group, or drop me a line so I can clear it up. Really sorry if some communications have gone out to make you think differently&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/07/18/odc-latin-america-tour-northern-leg-2018/">ODC Latin America Tour (Northern Leg) 2018</a> was first posted on July 18, 2018 at 9:20 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/-Z7SA-LmCbo" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8245 Wed Jul 18 2018 04:20:24 GMT-0400 (EDT) Announcement: Venue Confirmed For Upcoming Brussels “Oracle Indexing Internals and Best Practices” Seminar https://richardfoote.wordpress.com/2018/07/18/announcement-venue-confirmed-for-upcoming-brussels-oracle-indexing-internals-and-best-practices-seminar/ I can finally confirm the venue for my upcoming &#8220;Oracle Indexing Internals and Best Practices&#8221; seminar in beautiful Brussels, Belgium running on 27-28 September 2018. The venue will be the Regus Brussels City Centre Training Rooms Facility, Avenue Louise / Louizalaan 65, Stephanie Square, 1050, Brussels. Note: This will be the last public seminar I&#8217;ll run [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5651 Wed Jul 18 2018 03:55:42 GMT-0400 (EDT) Connect to Snowflake Data Warehouse with GO https://dbaontap.com/2018/07/18/connect-snowflake-using-go/ <p>In this installment, I am going to walk through the process of connecting GO to the Snowflake Data Warehouse Service (DWaaS). This tutorial requires that you have a Snowflake account. You can sign up here for a 30 day/$400.00 trial. Download and Install the ODBC Driver Once you have your account set up in Snowflake, ...</p> <p>The post <a rel="nofollow" href="https://dbaontap.com/2018/07/18/connect-snowflake-using-go/">Connect to Snowflake Data Warehouse with GO</a> appeared first on <a rel="nofollow" href="https://dbaontap.com">dbaonTap</a>.</p> DB http://dbaontap.com/?p=1570 Wed Jul 18 2018 02:59:20 GMT-0400 (EDT)