ODTUG Aggregator ODTUG Blogs http://localhost:8080 Wed, 21 Nov 2018 09:47:26 +0000 http://aggrssgator.com/ LEAP#433 0-30V/3A Adjustable Power Supply Kit https://blog.tardate.com/2018/11/leap433-0-30v-3a-adjustable-power-supply-kit.html <p>The 0-30V/3A Adjustable Power Supply Kit at the heart of this build will no doubt be instantly recognisable to anyone familiar with the usual online electronics marketplaces. It features continuously variable output voltage, and a variable current limit with overcurrent indicator/shutdown.</p> <p>It appears the circuit design may have originated from <a href="https://www.smartkit.gr/stabilised-power-supply-0-30v-3a-m.html">SmartKit</a> in Greece, been improved by <a href="http://www.electronics-lab.com/project/0-30-vdc-stabilized-power-supply-with-current-control-0-002-3-a/">various people</a>, and at some point the “canonical design” was picked up for mass production (instantly identifiable by the red PCB and tall cap).</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Equipment/AdjustablePowerSupplyKit">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a>.</p> <p><a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Equipment/AdjustablePowerSupplyKit"><img src="https://leap.tardate.com/Equipment/AdjustablePowerSupplyKit/assets/AdjustablePowerSupplyKit_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/11/leap433-0-30v-3a-adjustable-power-supply-kit.html Sun Nov 11 2018 02:52:15 GMT-0500 (EST) VirtualBox 5.2.22 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/sqocLW4zK8I/ <p><a href="https://www.virtualbox.org/"><img class="alignnone size-full wp-image-8303" src="https://oracle-base.com/blog/wp-content/uploads/2018/08/virtualbox-big.png" alt="" width="1222" height="408" /></a></p> <p><a href="https://www.virtualbox.org/">VirtualBox</a> 5.2.22 has been released.</p> <p>The <a 
href="https://www.virtualbox.org/wiki/Downloads">downloads</a> and <a href="https://www.virtualbox.org/wiki/Changelog#22">changelog</a> are in the usual places.</p> <p>I’ve installed it on my Windows 10 laptop at work. I’ll do my personal laptop and check my Vagrant and Docker stuff over the weekend. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Cheers</p> <p>Tim…</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/11/09/virtualbox-5-2-22/">VirtualBox 5.2.22</a> was first posted on November 9, 2018 at 3:46 pm.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /> Tim... https://oracle-base.com/blog/?p=8658 Fri Nov 09 2018 09:46:09 GMT-0500 (EST) PASS Summit, Women in Technology ROCKS! https://dbakevlar.com/2018/11/pass-summit-women-in-technology-rocks/ <p>PASS Summit 2018 marks the 16<sup>th</sup> annual WIT luncheon.  Many of the men are wearing kilts in support of the women in the SQL community, (it&#8217;s a thing here&#8230;) there&#8217;s a luncheon, and there are special panels and a highlight on the women authors and speakers.</p> <h4>Women in Tech Speaker</h4> <p>The WIT speaker was Lauri Bingham, the Director of Technology Engineering Project Management at T-Mobile.  </p> <p>Lauri shared her early life and the challenges her mother went through as a single, working mother without an employment history.  
She said she made a promise to herself that she’d never be in that position, and told how her passion for technology was first sparked by a class in BASIC in school.</p> <p>Lauri was able to share her story of how, at 25, she was a single mother herself, and how her career in technology created an opportunity for her to support her family and succeed, one she might not have had without her tech career.</p> <p>Lauri told us the history of T-Mobile’s grassroots WIT organization: what worked, what didn’t, and how they redesigned what they’d begun when it didn’t work at first. </p> <h4>Grow it</h4> <p>The growth of T-Mobile’s WIT initiative was backed by data to ensure that what they were doing was building success, and they adjusted what they were doing accordingly. </p> <p>T-Mobile runs a Boys and Girls Club STEM event called Project-T, exposing our next generation to technology and careers they may not have had an opportunity to experience. They’ve continued these events and grown them, even adding a mobile classroom.</p> <p>I love hearing how women in tech initiatives can be grown, the success they bring to the business, and the women who have a passion for technology.  This is why so many women are becoming more involved in WIT events and groups.  
The results I see around me at Summit speak for themselves.</p> <br><br><img src="https://i2.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/ico-tag.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> Tags:&nbsp;&nbsp;<a href="https://dbakevlar.com/tag/pass-summit/" rel="tag">PASS Summit</a>, <a href="https://dbakevlar.com/tag/wit/" rel="tag">WIT</a>, <a href="https://dbakevlar.com/tag/women-in-technology/" rel="tag">women in technology</a><br><br><div style="clear:both"></div><div style="background:#EEEEEE; padding:0px 0px 0px 15px; margin:10px 0px 0px 0px;"><div style="padding:5px 0px 5px 0px;"><b>Comments:</b>&nbsp;&nbsp;<a href="https://dbakevlar.com/2018/11/pass-summit-women-in-technology-rocks/#comments">1 (One) on this item</a></div><br><div style="clear:both"></div><div style="padding:13px 0px 5px 0px;"><span style="border-bottom:1px dashed #003399;padding-bottom:4px;"><strong>You might be interested in this:</strong></span>&nbsp;&nbsp;<br><ul style="margin:0; padding:0; padding-top:10px; padding-bottom:5px;"><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2016/08/omnifocus-and-applescript/" >OmniFocus and AppleScript</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" 
align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2013/08/enqueue-pk-fk-or-bitmap-index-problem/" >Enqueue – PK, FK or Bitmap Index problem?</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2014/07/awr-warehouse-status/" >AWR Warehouse, Status</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2014/04/oem-after-hours-notification-schedule-option-part-i/" >OEM After Hours Notification Schedule Option- Part I</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2014/06/the-new-and-improved-extensibility-exchange-is-here/" >The *New and Improved* Extensibility Exchange is Here!</a></li></ul></div></div><hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/11/pass-summit-women-in-technology-rocks/">PASS Summit, Women in Technology ROCKS!</a>], All Right Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8338 Thu Nov 08 2018 22:18:10 GMT-0500 (EST) Column And Table Redefinition With Minimal Locking https://ardentperf.com/2018/11/08/column-and-table-redefinition-with-minimal-locking/ <p><strong>TLDR:</strong> Note to future self&#8230; (1) <strong>Read this before you modify a table</strong> on a live PostgreSQL database. If you do it wrong then your app might totally hang. There is a right way to do it which avoids that. 
(2) <strong>Especially remember the lock_timeout</strong> step. Many blog posts around the &#8216;net are missing this and it&#8217;s very important.</p> <p>Recently I was chatting with some PostgreSQL users (who, BTW, were doing rather large-scale cool stuff in PG) and they asked a question about making schema changes with minimal impact to the running application. They were specifically curious about changing a primary key from INT to BIGINT.  (Oh, you are making all your new PK fields BIGINT right?)</p> <p>And then, lo and behold, I discovered a chat today on the very same topic. Seemed useful enough to file away on my blog so that I can find it later. BTW I got permission from <a href="https://twitter.com/pg_xlog">Jim Nasby</a>, Jim F and Robins Tharakan to blame them for this&#8230;  ;)</p> <p>Most useful part of the chat was <strong>how to think about doing table definition changes in PostgreSQL</strong> with minimal application impact due to locking:</p> <ol> <li>Use lock_timeout. <ol> <li>Can be set at the session level.</li> </ol> </li> <li>For changes that do more than just a quick metadata update, work with copies. <ol> <li>Create a new column &amp; drop old column instead of modifying.</li> <li>Or create a new table &amp; drop old table.</li> <li>Use triggers to keep data in sync.</li> <li>Carefully leverage transactional DDL (PostgreSQL rocks here!) to make changes with no windows for missing data.</li> </ol> </li> </ol> <p>We can follow this line of thought even for a primary key &#8211; creating a unique index on the new column, using existing index to update table constraints, then dropping old column.</p> <p>One of the important points here is making sure that operations which require locks are metadata-only. That is, they don&#8217;t need to actually modify any data (while holding said lock) for example rewriting or scanning the table. 
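</p> <p>That line of thought can be sketched in SQL. This is a minimal illustration of the steps above, not code from the chat; the table and column names (<code>accounts</code>, <code>id</code>, <code>id_new</code>) are hypothetical:</p>

```sql
-- Hypothetical sketch: widening a PK from INT to BIGINT with minimal locking.
SET lock_timeout = '2s';   -- fail fast instead of queueing behind application traffic

ALTER TABLE accounts ADD COLUMN id_new bigint;   -- metadata-only, no table rewrite

-- Keep the new column in sync with ongoing writes.
CREATE FUNCTION accounts_sync_id() RETURNS trigger AS $$
BEGIN
  NEW.id_new := NEW.id;
  RETURN NEW;
END; $$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_sync_id_trg
  BEFORE INSERT OR UPDATE ON accounts
  FOR EACH ROW EXECUTE PROCEDURE accounts_sync_id();

-- Backfill existing rows in small batches, with a pause between batches, e.g.:
--   UPDATE accounts SET id_new = id WHERE id BETWEEN :lo AND :hi;

-- Once backfilled: build the index without blocking writes
-- (CREATE INDEX CONCURRENTLY cannot run inside a transaction block),
-- then swap the constraint using that index.
CREATE UNIQUE INDEX CONCURRENTLY accounts_id_new_key ON accounts (id_new);
ALTER TABLE accounts ALTER COLUMN id_new SET NOT NULL;  -- scans the table under lock; see the articles linked later in this post
ALTER TABLE accounts
  DROP CONSTRAINT accounts_pkey,
  ADD CONSTRAINT accounts_pkey PRIMARY KEY USING INDEX accounts_id_new_key;
```

<p>Every step before the final swap is either metadata-only or runs without an exclusive lock, and lock_timeout bounds how long the brief exclusive locks will wait. <p>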
We want these ops to run very very fast, and even time out if they still can&#8217;t run fast enough.</p> <p>A few minutes on google yields proof that Jim Nasby was right: lots of people have already written up some really good advice about this topic.  Note that (as always) you should be careful about dates and versions in stuff you find yourself.  Anything pre-2014 should be scrutinized very carefully (PostgreSQL has changed a lot since then); and for the record, PostgreSQL 11 changes this specific list again (and none of these articles seem to be updated for pg11 yet). And should go without saying, but test test test&#8230;</p> <p><span id="more-2184"></span></p> <ul> <li><a href="https://www.braintreepayments.com/blog/safe-operations-for-high-volume-postgresql/">This article from BrainTree is my favorite</a> of what I saw this morning. Concise yet clear list of green-light and red-light scenarios, with workaround for all the red lights. <ul> <li>Add a new column, Drop a column, Add an index concurrently, Drop a constraint (for example, non-nullable), Add a default value to an existing column, Add an index, Change the type of a column, Add a column with a default, Add a column that is non-nullable, Add a column with a unique constraint, VACUUM FULL</li> </ul> </li> <li><a href="https://www.citusdata.com/blog/2018/02/22/seven-tips-for-dealing-with-postgres-locks/">Citus has a practical tips article</a> that&#8217;s linked pretty widely. <ul> <li>adding a column with a default value, using lock timeouts, Create indexes, Taking aggressive locks, Adding a primary key, VACUUM FULL, ordering commands</li> </ul> </li> <li><a href="https://leopard.in.ua/2016/09/20/safe-and-unsafe-operations-postgresql"><span class="author fn n">Alexey Vasiliev</span> assembled a list in 2016</a> which is worth reviewing. 
<ul> <li>Add a new column, Add a column with a default, Add a column that is non-nullable, Drop a column, Change the type of a column, Add a default value to an existing column, Add an index, Add a column with a unique constraint, Drop a constraint, VACUUM FULL, ALTER TABLE SET TABLESPACE</li> </ul> </li> <li><a href="http://www.joshuakehn.com/2017/9/9/postgresql-alter-table-and-long-transactions.html">Joshua Kehn put together a good article in late 2017</a> that especially illustrates the importance of using lock_timeout (though he doesn&#8217;t mention it in the article) <ul> <li>Default values for new columns, Adding a default value on an existing column, Concurrent index creation, ALTER TABLE, importance of typical transaction length</li> </ul> </li> </ul> <p>For fun and posterity, here&#8217;s the original chat (which has a little more detail) where they gave me these silly ideas:</p> <p><code>[11/08/18 09:01] Colleague1: I have a question with regard to APG. How can we make DDL modifications to a table with minimalistic locking (downtime)?<br /> [11/08/18 09:31] Jim N: It depends on the modification you're trying to make. Many forms of ALTER TABLE are very fast. Some don't even require an exclusive lock.<br /> [11/08/18 09:32] Jim N: What you have to be careful of are alters that will force a rewrite of the entire table. 
Common examples of that are adding a new column that has a default value, or altering the type of an existing column.<br /> [11/08/18 09:33] Jim N: What I've done in the past for those scenarios is to create a new field (that's null), put a before insert or update trigger on the table to maintain that field.<br /> [11/08/18 09:33] Jim N: Then run a "backfill" that processes a few hundred / thousand rows per transaction, with a delay between each batch.<br /> [11/08/18 09:34] Jim N: Once I know that all rows in the table have been properly updated, drop the old row, and maybe mark the new row as NOT NULL.<br /> [11/08/18 09:43] Jim N: btw, I know there's been a talk about this at a conference in the last year or two...<br /> [11/08/18 09:49] Jim F: What happens at the page level if the default value of an ALTER TABLE ADD COLUMN is null? Once upon a time when I worked at [a commercialized fork of PostgreSQL], which was built on a version of PostgreSQL circa 2000, I recall that the table would be versioned. This was a pure metadata change, but the added columns would be created for older-version rows on read, and probably updated on write. Is that how it currently works?<br /> [11/08/18 09:55] Jim N: Jim F in essence, yes.<br /> [11/08/18 09:56] Jim N: Though I wouldn't describe it as being "versioned"<br /> [11/08/18 09:57] Jim N: But because columns are always added to the end of the tuple (and we never delete from pg_attribute), heap_deform_tuple can detect if a tuple is "missing" columns at the end of the tuple and just treat them as being null.<br /> [11/08/18 09:57] Jim N: At least I'm pretty sure that's what's going on, without actually re-reading the code right now. &#x1f609;<br /> [11/08/18 10:08] Jim F: does it work that way for non-null defaults as well? that would create a need for versioning, if the defaults changed at different points in time<br /> [11/08/18 10:08] Robins: While at that topic.... 
Postgres v11 now has the feature to do what Jim F was talking about (even for non-NULLs). Although as Jim Nasby said, you still need to be careful about which (other) kind of ALTERs force a rewrite and use the Trigger workaround. "Many other useful performance improvements, including the ability to avoid a table rewrite for ALTER TABLE ... ADD COLUMN with a non-null column default"<br /> [11/08/18 10:08] Jim F: exactly...<br /> </code></p> <p>Did we get anything wrong here? Do you disagree? Feel free to comment. :)</p> Jeremy http://ardentperf.com/?p=2184 Thu Nov 08 2018 21:22:48 GMT-0500 (EST) PASS Summit, Day 2 Keynote! https://dbakevlar.com/2018/11/pass-summit-day-2-keynote/ <p>After a long night in a coma to restore the energy expelled during the first day of Summit, (anyone else exhausted already???)  I showed up bright eyed and bushy tailed for the second day&#8217;s keynote and the blogger table.  </p> <figure class="wp-block-image"><img src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/11/exhausted.gif?w=650&#038;ssl=1" alt="" class="wp-image-8334" data-recalc-dims="1"/><figcaption>Me this week&#8230;</figcaption></figure> <h4>PASS Time</h4> <p>The awesome Wendy Pastrick started us out, letting us know what goes on behind the scenes when PASS makes decisions&#8230;and then broke into song.  She will survive&#8230;.jus&#8217; saying.</p> <p>Tim Ford let us know that 40% of the attendees to PASS Summit this year are first-time attendees.  I&#8217;ve always been one to collect conference data and I understand the importance of building new attendance.  Those numbers are fantastic and I love seeing people attend this event and taking advantage of the incredible content that PASS offers the community and the membership.</p> <p>He also talked about the power of the Women in Tech movement at PASS.  
It had its beginnings in 2003 and has grown into a robust, multi-event group at Summit each year, involving everyone in the membership.</p> <h4>Walk Through Time</h4> <p>With the anniversaries of both SQL Server, (25 years) and PASS Summit, (20 years) the second keynote brought many of those who were important in shaping the SQL Server we know today.</p> <p>Ron Soukup joined the five-person SQL Server 2 team back when the software shipped on a floppy disk.  He started with Microsoft back in 1989 and was part of the product until 1995.  He covered the significant changes in version 6.0, (my first version to work on&#8230;) and it brought back a lot of memories&#8230;some good and some bad&#8230; <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Peter Carlin was next; he started in 1994, when an important discussion was whether databases should be sized in kilobytes or megabytes.  The power of Dynamic Management Views began at this time, and as powerful as the Query Store is, I still fall back to my trusted DMVs for the data I often need.</p> <p>Paul Flessner, who was head of Microsoft SQL Server from 1995-2005, took a more competitive view of the database platform; it showed in SQL Server 2005 and grew as we saw in version 2008, (R2) as Ted Kummart arrived on the scene.  This is a version of SQL Server with incredible loyalty and the main version I last worked with until I joined Oracle in 2014, so that tells you how dedicated I was to the version.</p> <p>Ted joined in 2005 and stayed at the helm till 2014.  He spoke about the product being incredible, but that the shift in the culture really was what brought the product to the powerhouse we see today.  
He&#8217;s still passionate about the Microsoft vision, even four years after departing.</p> <p>We then had the chance to hear from Rohan again, (as he was our keynote on Day 1) about how it has been to follow in the footsteps of these great leaders of the data platform we love.</p> <p>Raghu Ramakrishnan joined us after the walk down memory lane to discuss a number of topics, but I was most interested in the Resilient Buffer Pool Extension, (RBPEX).  This is the process of extending the buffer pool using SSD.  It&#8217;s similar to me using my SSD on my Surface Book and Surface Pro4 for swap, but at the database level, creating a &#8220;swap&#8221; area for what doesn&#8217;t fit within the main buffer pool, using very fast SSD.  It&#8217;s a great new feature and I hope a number of folks embrace it.  Those who need an extended buffer pool will see the performance benefit.</p> <h4>Hyperscale</h4> <p>He also touched on MVCC, (multi-version concurrency control) and the use of PVS, (Persistent Version Store) vs. the temp tablespace, which increases performance on transaction logging, backups and recoveries.  The talk then took an interesting turn that caused me to stop and pay close attention.</p> <p>I sat, a bit astounded, as I watched Raghu describe to me and 1000&#8217;s of my friends a clear architecture design in Azure that put Oracle RAC to shame.  At the same time, it isn&#8217;t REALLY RAC.  The &#8220;always on&#8221; is a separate animal and this is more about a hybrid of what Oracle has in multi-tenant and RAC.  It&#8217;s going to take a bit for me to wrap my brain around this new architecture, as I believe it&#8217;s well thought out: an ocean of data environments that will work well together, as Raghu referred to it, &#8220;a lake&#8221; and not to be confused with a data lake.  With the RBPEX and the XLOG service on underlying SSD, it won&#8217;t suffer the global cache, (GC) waits we see in Oracle RAC environments.  
There will be limited latency due to the design and separation of data and transactional state from the compute, which has its own RBPEX at that layer.   This is hyperscale.  This is cool.</p> <br><br><div style="clear:both"></div><div style="background:#EEEEEE; padding:0px 0px 0px 15px; margin:10px 0px 0px 0px;"><div style="padding:5px 0px 5px 0px;"><b>Comments:</b>&nbsp;&nbsp;<a href="https://dbakevlar.com/2018/11/pass-summit-day-2-keynote/#respond">0 (Zero), Be the first to leave a reply!</a></div><br><div style="clear:both"></div><div style="padding:13px 0px 5px 0px;"><span style="border-bottom:1px dashed #003399;padding-bottom:4px;"><strong>You might be interested in this:</strong></span>&nbsp;&nbsp;<br><ul style="margin:0; padding:0; padding-top:10px; padding-bottom:5px;"><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2017/03/aws-cloudwatch-delphix-aws-trial/" >AWS CloudWatch with Delphix AWS Trial</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2015/01/working-with-awr-reports-from-em12c/" >Working With AWR Reports From EM12c</a></li><li style="list-style-type: 
none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2010/10/the-seasons-of-a-dba/" >The Seasons of a DBA</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2012/03/cbo-statistics-and-a-rebeldba-with-a-cause/" >CBO, Statistics and A Rebel(DBA) With A Cause</a></li><li style="list-style-type: none;"><img src="https://i1.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/tick.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> &nbsp;<a href="https://dbakevlar.com/2017/04/conference-networking-tips-right/" >Conference Networking- Tips to Doing it Right</a></li></ul></div></div><hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/11/pass-summit-day-2-keynote/">PASS Summit, Day 2 Keynote!</a>], All Right Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8330 Thu Nov 08 2018 12:42:48 GMT-0500 (EST) Building a Web Service for Uploading and Downloading Files: The Video! 
https://www.thatjeffsmith.com/archive/2018/11/building-a-web-service-for-uploading-and-downloading-files-the-video/ <img width="840" height="278" src="https://www.thatjeffsmith.com/wp-content/uploads/2018/11/file-ords.png" class="attachment-large size-large wp-post-image" alt="" /><p>This video is a bit longer than most, but I&#8217;ll show you how to deploy a web service to:</p> <ul> <li>get a list of files from a table &#8211; stored as BLOBs</li> <li>get individual file details</li> <li>download/render the file using the mime type</li> <li>upload a file</li> <li>generate LINKs in your {json} responses</li> <li>set the HTTP Status Codes for your responses</li> </ul> <h3>The Video</h3> <p><iframe width="560" height="315" src="https://www.youtube.com/embed/n9xy0GF1TYc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p> <p><a href="https://www.youtube.com/watch?v=n9xy0GF1TYc&#038;t=265s" rel="noopener" target="_blank">Skip the Intro, Go Straight to the Demo.</a></p> <h3>The Slides</h3> <p><iframe src="//www.slideshare.net/slideshow/embed_code/key/hjSuo6yeM7Mp6U" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> </p> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/hillbillyToad/build-a-web-service-to-up-and-download-files-for-oracle-database" title="Build a Web Service to Up and Download Files for Oracle Database" target="_blank">Build a Web Service to Up and Download Files for Oracle Database</a> </strong> from <strong><a href="https://www.slideshare.net/hillbillyToad" target="_blank">Jeff Smith</a></strong> </div> <h3>The Code</h3> <p>Here&#8217;s the table:</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div 
class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="sql"><pre class="de1">&nbsp; <span class="kw1">CREATE</span> <span class="kw1">TABLE</span> <span class="st0">&quot;HR&quot;</span><span class="sy0">.</span><span class="st0">&quot;MEDIA&quot;</span> <span class="br0">&#40;</span> <span class="st0">&quot;ID&quot;</span> <span class="kw1">NUMBER</span><span class="br0">&#40;</span><span class="sy0">*,</span><span class="nu0">0</span><span class="br0">&#41;</span> GENERATED ALWAYS <span class="kw1">AS</span> <span class="kw1">IDENTITY</span> MINVALUE <span class="nu0">1</span> MAXVALUE <span class="nu0">9999999999999999999999999999</span> <span class="kw1">INCREMENT</span> <span class="kw1">BY</span> <span class="nu0">1</span> <span class="kw1">START</span> <span class="kw1">WITH</span> <span class="nu0">1</span> CACHE <span class="nu0">20</span> NOORDER NOCYCLE NOKEEP NOSCALE <span class="kw1">NOT</span> <span class="kw1">NULL</span> ENABLE<span class="sy0">,</span> <span class="st0">&quot;FILE_NAME&quot;</span> VARCHAR2<span class="br0">&#40;</span><span class="nu0">256</span> BYTE<span class="br0">&#41;</span> <span class="kw1">NOT</span> <span class="kw1">NULL</span> ENABLE<span class="sy0">,</span> <span class="st0">&quot;CONTENT_TYPE&quot;</span> VARCHAR2<span class="br0">&#40;</span><span class="nu0">256</span> BYTE<span class="br0">&#41;</span> <span class="kw1">NOT</span> <span class="kw1">NULL</span> ENABLE<span class="sy0">,</span> <span class="st0">&quot;CONTENT&quot;</span> <span class="kw1">BLOB</span> <span class="kw1">NOT</span> <span class="kw1">NULL</span> ENABLE<span class="sy0">,</span> <span class="kw1">CONSTRAINT</span> <span class="st0">&quot;MEDIA_PK&quot;</span> <span class="kw1">PRIMARY</span> <span class="kw1">KEY</span> <span class="br0">&#40;</span><span class="st0">&quot;ID&quot;</span><span class="br0">&#41;</span><span class="br0">&#41;</span>;</pre></div></div></div></div></div></div></div> <p>If 
you&#8217;re on a version of Oracle older than 12c, you&#8217;ll need to create a sequence/trigger, or you&#8217;ll need to add the ID&#8217;s yourself in the POST Handler/Inserts.</p> <p>Here&#8217;s the REST Module (you&#8217;ll need to run this in a REST Enabled Schema):</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="plsql"><pre class="de1"><span class="co1">-- Generated by Oracle SQL Developer REST Data Services 18.3.0.276.0148</span> <span class="co1">-- Exported REST Definitions from ORDS Schema Version 18.3.0.r2701456</span> <span class="co1">-- Schema: HR Date: Thu Nov 08 11:20:45 EST 2018</span> <span class="co1">--</span> <span class="kw1">BEGIN</span> ORDS<span class="sy0">.</span>DEFINE_MODULE<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_base_path <span class="sy0">=&gt;</span> <span class="st0">'/ora_magazine/'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">25</span><span class="sy0">,</span> p_status <span class="sy0">=&gt;</span> <span class="st0">'PUBLISHED'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_TEMPLATE<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/'</span><span class="sy0">,</span> p_priority <span class="sy0">=&gt;</span> <span class="nu0">0</span><span class="sy0">,</span> p_etag_type <span class="sy0">=&gt;</span> <span class="st0">'HASH'</span><span class="sy0">,</span> p_etag_query <span 
class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_HANDLER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'POST'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'plsql/block'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">0</span><span class="sy0">,</span> p_mimes_allowed <span class="sy0">=&gt;</span> <span class="st0">''</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_source <span class="sy0">=&gt;</span> <span class="st0">'declare image_id integer; &nbsp; begin &nbsp; insert into media (file_name,content_type,content) values (:file_name,:file_type,:body) -- :body is defined by ORDS returning id into image_id; :status := 201; -- http status code :location := '</span><span class="st0">'./'</span><span class="st0">' || image_id; -- included in the response to access the new record &nbsp; end;'</span> <span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'POST'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'X-ORDS-STATUS-CODE'</span><span class="sy0">,</span> 
p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'status'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'HEADER'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'INT'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'OUT'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'POST'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'file_name'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'file_name'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'HEADER'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'STRING'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'IN'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'POST'</span><span class="sy0">,</span> p_name <span 
class="sy0">=&gt;</span> <span class="st0">'file_type'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'file_type'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'HEADER'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'STRING'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'IN'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'POST'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'location'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'location'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'HEADER'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'STRING'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'OUT'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_HANDLER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/'</span><span class="sy0">,</span> p_method <span 
class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'json/collection'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">25</span><span class="sy0">,</span> p_mimes_allowed <span class="sy0">=&gt;</span> <span class="st0">''</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_source <span class="sy0">=&gt;</span> <span class="st0">'select ID , FILE_NAME , CONTENT_TYPE, '</span><span class="st0">'./'</span><span class="st0">' || id &quot;$record&quot; -- the $ tells ORDS to render this as a LINK from media order by id asc -- optional if you want insertion order'</span> <span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_TEMPLATE<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/:id'</span><span class="sy0">,</span> p_priority <span class="sy0">=&gt;</span> <span class="nu0">0</span><span class="sy0">,</span> p_etag_type <span class="sy0">=&gt;</span> <span class="st0">'HASH'</span><span class="sy0">,</span> p_etag_query <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_HANDLER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/:id'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_source_type <span 
class="sy0">=&gt;</span> <span class="st0">'json/item'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">25</span><span class="sy0">,</span> p_mimes_allowed <span class="sy0">=&gt;</span> <span class="st0">''</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_source <span class="sy0">=&gt;</span> <span class="st0">'select FILE_NAME, CONTENT_TYPE, ID || '</span><span class="st0">'/content'</span><span class="st0">' &quot;$file&quot; from MEDIA where ID = :id'</span> <span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_TEMPLATE<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/:id/content'</span><span class="sy0">,</span> p_priority <span class="sy0">=&gt;</span> <span class="nu0">0</span><span class="sy0">,</span> p_etag_type <span class="sy0">=&gt;</span> <span class="st0">'HASH'</span><span class="sy0">,</span> p_etag_query <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_HANDLER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'ora_magazine'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'media/:id/content'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'resource/lob'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">25</span><span class="sy0">,</span> 
p_mimes_allowed <span class="sy0">=&gt;</span> <span class="st0">''</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_source <span class="sy0">=&gt;</span> <span class="st0">'select CONTENT_TYPE, CONTENT from MEDIA where ID = :id'</span> <span class="br0">&#41;</span><span class="sy0">;</span> &nbsp; &nbsp; <span class="kw1">COMMIT</span><span class="sy0">;</span> <span class="kw1">END</span><span class="sy0">;</span></pre></div></div></div></div></div></div></div> thatjeffsmith https://www.thatjeffsmith.com/?p=7081 Thu Nov 08 2018 11:43:43 GMT-0500 (EST) Look Sharp: An Introduction to Edge Computing https://blog.pythian.com/look-sharp-introduction-edge-computing/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">For years, data consumption was associated with individuals: people who stream videos, play games and otherwise live life with the help of the internet. But even though the average user’s daily data consumption is expected to increase to </span><a href="https://mashable.com/2016/08/17/intel-autonomous-car-data/#F_bK.NjGFqqm"><span style="font-weight: 400;">1.5 gigabytes by 2020</span></a><span style="font-weight: 400;">, that figure is dwarfed by the exponentially growing data demands of the Internet of Things (IoT). Today, there are more internet-connected devices than people in the world, and Gartner </span><a href="https://www.gartner.com/imagesrv/books/iot/iotEbook_digital.pdf"><span style="font-weight: 400;">predicts</span></a><span style="font-weight: 400;"> those numbers will grow to 20 billion by 2020. The Internet of Things is creating new challenges in how all this data will get processed, and edge computing is providing the answers.</span></p> <p><span style="font-weight: 400;">What, exactly, is edge computing? 
It’s an approach to computing that takes much of the burden of processing away from the cloud, offloading it instead to a small server that is physically close to the user. </span></p> <p><span style="font-weight: 400;">Edge computing gets its name from the idea that it pushes computing intelligence to the edge of a network’s devices. The relevance of this localized data becomes clear the moment we see how IoT is now evolving. Consider, for example, the anticipated data needs of a self-driving car. With a data consumption rate of 4 terabytes per day (equal to more than 2,600 individual users), a self-driving car is effectively a cloud on wheels, one that can’t afford latency when making life-or-death decisions for passengers and pedestrians. Any network, no matter how fast, would soon be overwhelmed by the processing demands of a fleet of autonomous cars. </span></p> <p><span style="font-weight: 400;">And the self-driving car is far from the most demanding use case. A connected airplane uses 5 terabytes each day. A smart hospital needs 3 terabytes. And for a smart factory, the data needs balloon to 3 petabytes each day — that’s 3 million gigabytes.</span></p> <p><span style="font-weight: 400;">But the real-world uses for edge computing aren’t necessarily as ambitious as the ones described above. The humble digital surveillance camera is a perfect example of a use case that is relevant for the typical enterprise. Pre-edge security cameras were incapable of doing any processing on their own; all data needed to be sent to an external server or the cloud. But today’s security demands are very different. Organizations are installing far more cameras, and those cameras can now recognize faces, licence plates and more. 
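The daily-volume comparisons quoted above are simple arithmetic and easy to sanity-check. A quick sketch, assuming decimal units (1 TB = 1,000 GB, 1 PB = 1,000,000 GB):

```python
# Daily data volumes quoted above, expressed in gigabytes.
AVG_USER_GB = 1.5            # average user per day (2020 projection)
CAR_GB = 4 * 1_000           # self-driving car: 4 TB per day
FACTORY_GB = 3 * 1_000_000   # smart factory: 3 PB per day

# One autonomous car generates as much data as roughly 2,667 average users.
users_per_car = CAR_GB / AVG_USER_GB
print(round(users_per_car))  # 2667, i.e. "more than 2,600"
print(FACTORY_GB)            # 3000000, i.e. "3 million gigabytes"
```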
Edge computing allows more of the necessary processing to happen within the camera itself, thus sparing more distant resources from dealing with workloads that exceed their bandwidths.</span></p> <p><span style="font-weight: 400;">Edge computing is establishing itself as the standard approach to handling the data demands of IoT. But it also presents some risks that need to be considered and planned for. Mistakes in configuration, for example, are far more common when organizations are working with hundreds or thousands of devices. Security issues will now become even more critical, since the explosion of intelligent devices provides hackers with a greatly expanded vector for attack. Finally, a financial plan for edge computing should anticipate licensing costs. In the example of digital surveillance, you’re no longer done when you pay for the camera; you also need to plan for the costs of specific applications, future support, security upgrades and more.</span></p> <p><span style="font-weight: 400;">In the age of IoT, the limits of the centralized data-processing warehouse are painfully clear. Today’s data needs to be processed quickly and reliably, and the best place to do that is near the edge of your network, where the data is being generated. <a href="https://pythian.com/cloud-services/">Find out how</a> Pythian can help make edge computing work for your organization.</span></p> <p>&nbsp;</p> </div></div> Krista Colby-Wheatley https://blog.pythian.com/?p=105342 Thu Nov 08 2018 11:06:31 GMT-0500 (EST) Troubleshooting a GoldenGate Error Code https://blog.pythian.com/troubleshooting-goldengate-ogg-00303-unable-to-open-credential-store-error-code-43490/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>After applying Oracle GoldenGate V12.2.0.1.170919 for Oracle Database 12c OPTIMIZER Patch for Bug# 26849949, starting GoldenGate extract failed with OGG-00303: Unable to open credential store. 
Error code 43,490.</p> <p>Here is what the report looks like.</p> <pre>$ head -50 E_LAX6.rpt *********************************************************************** Oracle GoldenGate Capture for Oracle Version 12.2.0.1.170919 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_171030.0908_FBO Linux, x64, 64bit (optimized), Oracle 12c on Oct 30 2017 20:59:41 Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved. Starting at 2018-10-25 15:08:33 *********************************************************************** Operating System Version: Linux Version #2 SMP Wed Jul 11 12:11:36 PDT 2018, Release 4.1.12-94.8.5.el7uek.x86_64 Node: localhost Machine: x86_64 soft limit hard limit Address Space Size : unlimited unlimited Heap Size : unlimited unlimited File Size : unlimited unlimited CPU Time : unlimited unlimited Process id: 23154 Description: *********************************************************************** ** Running with the following parameters ** *********************************************************************** 2018-10-25 15:08:33 INFO OGG-03059 Operating system character set identified as UTF-8. 2018-10-25 15:08:33 INFO OGG-02695 ANSI SQL parameter syntax is used for parameter parsing. 
EXTRACT e_lax USERIDALIAS gguser Source Context : SourceModule : [er.init] SourceID : [/scratch/aime/adestore/views/aime_adc4150431/oggcore/OpenSys/src/app/er/init.cpp] SourceFunction : [get_infile_params] SourceLine : [5554] ThreadBacktrace : [11] elements : [/u01/gg/12.2.0/libgglog.so(CMessageContext::AddThreadContext()+0x1b) [0x7f714024709b]] : [/u01/gg/12.2.0/libgglog.so(CMessageFactory::CreateMessage(CSourceContext*, unsigned int, ...)+0x135) [0x7f7140241165]] : [/u01/gg/12.2.0/libgglog.so(_MSG_ERR_STARTUP_PARAMERROR_ERRORTEXT(CSourceContext*, char const*, CMessageFactory::MessageDisposition)+0x30) [0x7f71402308d0]] : [/u01/gg/12.2.0/extract(get_infile_params(time_elt_def*, time_elt_def*, char**, ggs::gglib::ggdatasource::DataSourceParams&amp;, ggs::Heartbeat::MapGeneratorParams&amp;)+0x5da1) [0x5c4c91]] : [/u01/gg/12.2.0/extract() [0x5f036a]] : [/u01/gg/12.2.0/extract(ggs::gglib::MultiThreading::MainThread::ExecMain()+0x60) [0x6cea60]] : [/u01/gg/12.2.0/extract(ggs::gglib::MultiThreading::Thread::RunThread(ggs::gglib::MultiThreading::Thread::ThreadArgs*)+0x14d) [0x6cfcdd]] </pre> <p>Since I did not have the password for the database user gguser, I changed the password in the database and recreated the credential store using GGSCI.</p> <pre>
info credentialstore
delete credentialstore
create wallet
add credentialstore
alter credentialstore add user gguser alias gguser
alter credentialstore add user GGUSER alias GGUSER
info credentialstore
</pre> <p>The reason we check the credential store before deleting it is to determine the existing aliases.</p> <pre>
GGSCI 1&gt; info credentialstore

Reading from ./dircrd/:
Default domain: OracleGoldenGate
Alias: gguser
Userid: gguser
Alias: GGUSER
Userid: GGUSER
</pre> <p>Here are the steps from ggserr.log.</p> <pre>2018-10-25 15:44:00 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (ggsuser): create wallet. 
2018-10-25 15:44:00 INFO OGG-02096 Oracle GoldenGate Command Interpreter for Oracle: Created wallet at location 'dirwlt'. 2018-10-25 15:44:00 INFO OGG-02096 Oracle GoldenGate Command Interpreter for Oracle: Opened wallet at location 'dirwlt'. 2018-10-25 15:44:10 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (ggsuser): add credentialstore. 2018-10-25 15:44:10 INFO OGG-02096 Oracle GoldenGate Command Interpreter for Oracle: Credential store created in ./dircrd/. 2018-10-25 15:44:22 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (ggsuser): alter credentialstore add user gguser alias gguser. 2018-10-25 15:44:31 INFO OGG-02096 Oracle GoldenGate Command Interpreter for Oracle: Credential store in ./dircrd/ altered. 2018-10-25 15:44:55 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (ggsuser): alter credentialstore add user GGUSER alias GGUSER. 2018-10-25 15:45:06 INFO OGG-02096 Oracle GoldenGate Command Interpreter for Oracle: Credential store in ./dircrd/ altered. 2018-10-25 15:45:15 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (ggsuser): info credentialstore. 2018-10-25 15:45:15 INFO OGG-02096 Oracle GoldenGate Command Interpreter for Oracle: Reading from ./dircrd/:. </pre> <p>Here are the permissions for the files.</p> <pre>$ chmod 775 -R dirwlt/ dircrd/ $ ls -l dirwlt/ dircrd/ dircrd/: total 4 -rwxrwxr-x 1 gguser oinstall 701 Oct 25 15:45 cwallet.sso dirwlt/: total 4 -rwxrwxr-x 1 gguser oinstall 290 Oct 25 15:44 cwallet.sso </pre> <p>Starting extract failed again!</p> <pre>2018-10-25 15:45:30 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (gguser): dblogin useridalias gguser. 2018-10-25 15:45:35 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (gguser): start e*. 
2018-10-25 15:45:35 INFO OGG-00963 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host 10.80.27.191:39060 (START EXTRACT E_LAX ). 2018-10-25 15:45:35 INFO OGG-00960 Oracle GoldenGate Manager for Oracle, mgr.prm: Access granted (rule #5). 2018-10-25 15:45:35 INFO OGG-00975 Oracle GoldenGate Manager for Oracle, mgr.prm: EXTRACT E_LAX starting. 2018-10-25 15:45:35 INFO OGG-00992 Oracle GoldenGate Capture for Oracle, e_lax.prm: EXTRACT E_LAX starting. 2018-10-25 15:45:35 INFO OGG-03059 Oracle GoldenGate Capture for Oracle, e_lax.prm: Operating system character set identified as UTF-8. 2018-10-25 15:45:35 INFO OGG-02695 Oracle GoldenGate Capture for Oracle, e_lax.prm: ANSI SQL parameter syntax is used for parameter parsing. 2018-10-25 15:45:35 ERROR OGG-00303 Oracle GoldenGate Capture for Oracle, e_lax.prm: Unable to open credential store. Error code 43,490. 2018-10-25 15:45:35 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, e_lax.prm: PROCESS ABENDING. 2018-10-25 15:45:38 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (gguser): info all. </pre> <p>Setting the ORACLE_HOME and ORACLE_SID environment variables for the extract and restarting it solved the issue.</p> <pre>2018-10-25 15:52:44 INFO OGG-00987 Oracle GoldenGate Command Interpreter for Oracle: GGSCI command (ggsuser): start e*. 2018-10-25 15:52:44 INFO OGG-00963 Oracle GoldenGate Manager for Oracle, mgr.prm: Command received from GGSCI on host 10.80.27.191:39169 (START EXTRACT E_LAX ). 2018-10-25 15:52:44 INFO OGG-00960 Oracle GoldenGate Manager for Oracle, mgr.prm: Access granted (rule #5). 2018-10-25 15:52:44 INFO OGG-00975 Oracle GoldenGate Manager for Oracle, mgr.prm: EXTRACT E_LAX starting. 2018-10-25 15:52:44 INFO OGG-00992 Oracle GoldenGate Capture for Oracle, e_lax.prm: EXTRACT E_LAX starting. 2018-10-25 15:52:44 INFO OGG-03059 Oracle GoldenGate Capture for Oracle, e_lax.prm: Operating system character set identified as UTF-8. 
2018-10-25 15:52:44 INFO OGG-02695 Oracle GoldenGate Capture for Oracle, e_lax.prm: ANSI SQL parameter syntax is used for parameter parsing. 2018-10-25 15:52:44 INFO OGG-02095 Oracle GoldenGate Capture for Oracle, e_lax.prm: Successfully set environment variable ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_2. 2018-10-25 15:52:44 INFO OGG-02095 Oracle GoldenGate Capture for Oracle, e_lax.prm: Successfully set environment variable ORACLE_SID=sourcedb. 2018-10-25 15:52:44 INFO OGG-02095 Oracle GoldenGate Capture for Oracle, e_lax.prm: Successfully set environment variable ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_2. 2018-10-25 15:52:44 INFO OGG-02095 Oracle GoldenGate Capture for Oracle, e_lax.prm: Successfully set environment variable ORACLE_SID=sourcedb. 2018-10-25 15:52:56 INFO OGG-00993 Oracle GoldenGate Capture for Oracle, e_lax.prm: EXTRACT E_LAX started. </pre> <p>Currently, I don&#8217;t know if patching caused the issue or if it is a pre-existing condition.</p> <p>What&#8217;s interesting is that the same issue happened in a different system where the ORACLE_HOME and ORACLE_SID environment variables were not set for the extract.</p> <p>Instead of restarting just the extract, all the processes were stopped and restarted.</p> <p>In summary, it would be a good idea to stop and start GoldenGate processes before patching in order to detect any pre-existing conditions.</p> </div></div> Michael Dinh https://blog.pythian.com/?p=105323 Thu Nov 08 2018 09:12:27 GMT-0500 (EST) Oracle Digital Assistant: Hooking up your chatbot to twitter. http://lucbors.blogspot.com/2018/11/oracle-digital-assistant-hooking-up.html Luc Bors tag:blogger.com,1999:blog-29432516.post-8577187481178273935 Thu Nov 08 2018 07:28:00 GMT-0500 (EST) Where / Having https://jonathanlewis.wordpress.com/2018/11/08/where-having/ <p>There&#8217;s a very old mantra about the use of the <em>&#8220;having&#8221;</em> clause that tells us that <em><strong>if it&#8217;s valid</strong></em> (i.e. 
will always give the same results) then any predicate that could be moved from the <em>having</em> clause to the <em>where</em> clause should be moved. In recent versions of Oracle the optimizer will do this for itself in some cases but (for reasons that I&#8217;m not going to mention) I came across a silly example recently where a little manual editing produced a massive performance improvement.</p> <p>Here&#8217;s a quick demo:</p> <pre class="brush: plain; title: ; notranslate">
rem
rem     Script:   where_having.sql
rem     Author:   Jonathan Lewis
rem     Dated:    Oct 2018
rem     Purpose:
rem
rem     Last tested
rem             18.3.0.0
rem             12.2.0.1
rem             11.2.0.4
rem

create table t1
as
select  *
from    all_objects
where   rownum &lt;= 50000 -- &gt; comment to avoid WordPress format issue
;

spool where_having.lst

set serveroutput off

select  /*+ gather_plan_statistics */
        object_type, count(*)
from    t1
group by
        object_type
having  count(*) &gt; 0
and     1 = 2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
</pre> <p>The big question is: will Oracle do a full tablescan of <em><strong>t1</strong></em>, or will it apply a <em>&#8220;null is not null&#8221;</em> filter early to bypass that part of the plan? 
Here&#8217;s the plan pulled from memory, with run-time statistics (all versions from 11g to 18c):</p> <pre class="brush: plain; title: ; notranslate">
--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      0 |00:00:00.02 |     957 |   955 |       |       |          |
|*  1 |  FILTER             |      |      1 |        |      0 |00:00:00.02 |     957 |   955 |       |       |          |
|   2 |   HASH GROUP BY     |      |      1 |      1 |     27 |00:00:00.02 |     957 |   955 | 1186K | 1186K | 1397K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |  50000 |  50000 |00:00:00.01 |     957 |   955 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter((COUNT(*)&gt;0 AND 1=2))
</pre> <p>As you can see, the filter at operation 1 includes the contradiction <em>&#8220;1=2&#8221;</em>, but Oracle tests this only <strong>after</strong> doing the full tablescan and aggregation. 
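The result-equivalence behind the mantra (for a predicate that doesn't reference the aggregate) is easy to demonstrate outside Oracle too. A minimal sqlite3 sketch, with an invented five-row table: the contradiction returns the same empty result whether it sits in the having clause or the where clause; in Oracle the difference is purely how much work is done before the contradiction is tested.

```python
import sqlite3

# Tiny stand-in for the demo table t1 (data invented for illustration).
con = sqlite3.connect(":memory:")
con.execute("create table t1 (object_type text)")
con.executemany("insert into t1 values (?)",
                [("TABLE",)] * 3 + [("INDEX",)] * 2)

# Contradiction left in the HAVING clause ...
in_having = con.execute(
    "select object_type, count(*) from t1 "
    "group by object_type having count(*) > 0 and 1 = 2").fetchall()

# ... and moved to the WHERE clause: the same (empty) result.
in_where = con.execute(
    "select object_type, count(*) from t1 where 1 = 2 "
    "group by object_type having count(*) > 0").fetchall()

print(in_having, in_where)  # [] []
```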
If you move the <em>&#8220;1=2&#8221;</em> into the <em>where</em> clause the tablescan doesn&#8217;t happen.</p> <p>Interestingly, if you write the query with an in-line view and trailing <em>where</em> clause:</p> <pre class="brush: plain; title: ; notranslate">
select  /*+ gather_plan_statistics */
        *
from    (
        select  object_type, count(*)
        from    t1
        group by
                object_type
        having  count(*) &gt; 0
        )
where   1 = 2
;
</pre> <p>The optimizer is clever enough to push the final predicate inside the view (where you might expect it to become part of the <em>having</em> clause) and push it all the way down into a <em>where</em> clause on the base table.</p> <pre class="brush: plain; title: ; notranslate">
-----------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | A-Rows |   A-Time   |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |      0 |00:00:00.01 |
|*  1 |  FILTER              |      |      1 |        |      0 |00:00:00.01 |
|   2 |   HASH GROUP BY      |      |      1 |      1 |      0 |00:00:00.01 |
|*  3 |    FILTER            |      |      1 |        |      0 |00:00:00.01 |
|   4 |     TABLE ACCESS FULL| T1   |      0 |  50000 |      0 |00:00:00.01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(COUNT(*)&gt;0)
   3 - filter(NULL IS NOT NULL)
</pre> <p>A quirky case of the optimizer handling the (apparently) more complex query better than it handles the simpler one.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19132 Thu Nov 08 2018 07:11:05 GMT-0500 (EST) Goldengate ERROR OGG-02037 Failed to retrieve the name of a missing Oracle redo log. http://www.fahdmirza.com/2018/11/goldengate-error-ogg-02037-failed-to.html <div dir="ltr" style="text-align: left;" trbidi="on">One extract abended and wasn't able to start in Oracle GoldenGate&nbsp;Version 12.2.0.1.160517 23194417_FBO. 
The redo logs were present, but the extract was still abended and threw the following error in the report file.<br /><br /><b>Error:</b><br /><br /><span style="background-color: white; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 12px; white-space: pre-wrap;">ERROR OGG-02037 Failed to retrieve the name of a missing Oracle redo log.</span><br /><br /><b>Solution:</b><br /><br /><a name='more'></a><span style="background-color: white; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 12px; white-space: pre-wrap;">The solution for this error is to unregister, register and then start the extract as follows:</span><br /><pre>GGSCI (test) 6&gt; unregister extract ext database

2018-11-07 17:07:03  INFO    OGG-01750  Successfully unregistered EXTRACT ext from database.

GGSCI (test) 7&gt;

GGSCI (test) 7&gt; register extract ext database

2018-11-07 17:07:56  INFO    OGG-02003  Extract ESTATDEV successfully registered with database at SCN 1373637632014.

GGSCI (test) 8&gt; start extract ext</pre><div><span style="background-color: white; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 12px; white-space: pre-wrap;">Hope this helps.</span></div></div> Fahd Mirza tag:blogger.com,1999:blog-3496259157130184660.post-2548834183693450842 Wed Nov 07 2018 19:37:00 GMT-0500 (EST) Pass Summit 2018 Keynote 11/7 https://dbakevlar.com/2018/11/pass-summit-2018-keynote-11-7/ <p>So I made it to PASS Summit 2018.  After a flight from an airport with one gate-  yes, you heard me right, one gate.  No Wi-Fi, no connectivity and four employees at the airport.  It was a new level of disconnect.</p> <figure class="wp-block-image"><img class="wp-image-8310" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/11/help.gif?w=650&#038;ssl=1" alt="" data-recalc-dims="1" /></figure> <p>After a number of parties last night, I&#8217;m at the bloggers table for the first keynote this morning.  The first keynote started with an energized talk from PASS president and friend, Grant Fritchey.  It was a gracious discussion about the dedication of those in the community and the power of those involved.</p> <h3>Keynote #1</h3> <p style="text-align: left;">The theme is #V20, the newest version of PASS Summit, the 20th anniversary of the Summit conference and the same for SQL Server!  As someone who&#8217;s an old-timer of the conference circuit, I love the maturity of the event here.  Its commitment to the community, diversity and inclusion, and the technology is an integral part of the event, never a second thought.</p> <p>Rohan Kumar delivered the first day&#8217;s keynote.  Rohan, (@RohanKData) is in charge of the engineering strategy behind Azure.  
He discussed in his talk what I&#8217;ve been telling folks for a while now-  80% of companies are on a hybrid/multi-cloud environment.  Microsoft seems more aware of this than other cloud providers, as they have Office 365 along with other Azure products.  They know how many are using Azure, even when those customers see their data or big data platforms as their ONLY cloud.  Microsoft isn&#8217;t threatened by hybrid and instead embraces it.  There will always be data that MUST be held on-premises.  It may be due to policies or archaic systems, but it will be so.  As a cloud provider, Microsoft seems to understand this and trains its models to expect it.  </p> <h3>All The DATA</h3> <p>He spoke at length about the importance of Java working alongside T-SQL in the database, the ability to have R and Python for analytics, and more.  After he finished filling our brains with information about the newest SQL 2019 release, Bob Ward came on and Conor Cunningham demonstrated how to avoid waits in tempDB in SQL 2019 in the midst of the keynote.</p> <p>I have so many customers that have source data from Oracle, (of course I do, I worked at Oracle for how long? :)) unstructured and structured data that I can&#8217;t even keep track of it all.  They&#8217;re faced with the challenge of how to pull it into one source and build value from this data.  Right now, they&#8217;re using less than 10% of their data, but with what I&#8217;m seeing in SQL 2019, shipping with Spark and using Azure Data Studio, I could take this data and query across multiple data sources, no matter if CSV files, Oracle, SQL Server or big data, build joins, use Python, and actually get value from data in ways they were never able to before.  
</p> <h3>Managed Instance</h3> <p>Rohan then went into Managed Instance, (one of the most awesome recent features to come out in Azure, IMHO) which offers many of the PaaS benefits with the option to easily migrate from on-premises environments, (yeah, those 2008 environments have until July 9th, 2019.  If you have trouble remembering, my birthday is July 8th.  I won&#8217;t forget the date&#8230; :))  If you had concerns about mission critical workloads, fear not: on December 1st the new mission critical workload version of Managed Instance will be available.  That gives you the time to start your migration project now.</p> <h3>Recover Without the Wait </h3> <p>I was incredibly impressed with the Accelerated Database Recovery, (ADR) demo.  I am aware of the power of snapshots, which are built into the SQL 2019 product in Azure to eliminate wait time on recovery.  The question for me is whether this will be built into a cloning mechanism for environment deployment to secondary development and test environments.  It will be interesting to monitor this feature as it matures.</p> <h3>Power BI- Reporting Server</h3> <p>Paginated reports will always have a place in people&#8217;s hearts.  I have so many customers that have been waiting for the integration of RDL, (reporting server paginated reports) into Power BI.  Power BI is going to rule the world, (I can say that, I&#8217;m in that group&#8230; :))  but the ability to create the report you need, no matter if it&#8217;s an interactive report with visuals and dashboards OR paginated, is incredibly powerful for the customer.  
It&#8217;s now here, and I can finally make my customers happier by telling them they can do it all from Power BI.  Oh yeah, that Dataflows feature came out, too&#8230; <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <h3>Machine Learning</h3> <p>There was a cool demo of machine learning with Spark that used Shell data and a sample camera that could detect a customer with a (pretend) cigarette and alert on the danger of the situation.  It was a realization that this isn&#8217;t science fiction.  We&#8217;re there now, and I plan to be there in the future to see how machine learning is building more with our data and our technology.</p> <hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/11/pass-summit-2018-keynote-11-7/">Pass Summit 2018 Keynote 11/7</a>], All Right Reserved. 
2018.</small><br> dbakevlar https://dbakevlar.com/?p=8307 Wed Nov 07 2018 12:55:40 GMT-0500 (EST) How to Get Cloud Analytics Costs Under Control https://blog.pythian.com/how-to-get-cloud-analytics-costs-under-control/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">Big data analytics can potentially cost big money. </span></p> <p><span style="font-weight: 400;">But being reactive instead of proactive about optimizing your analytics in the cloud can, unfortunately, cost you even more money. This may sound obvious, but making the right decisions when configuring your cloud analytics platform and related processes now can save buckets of cash down the road.</span></p> <p><span style="font-weight: 400;">Before you begin your journey to cloud analytics cost optimization, however, you must self-assess. You need to be honest with yourself and get a clear understanding of where you currently are, where you want to go, and – maybe most importantly – what kind of costs you’re dealing with both now and in the future.</span></p> <p><b>The data silo problem</b></p> <p><span style="font-weight: 400;">If you’ve got a data silo problem – and it’s relatively </span><a href="https://blog.pythian.com/dismantling-data-silos-cloud-integration/"><span style="font-weight: 400;">easy to recognize</span></a><span style="font-weight: 400;"> if you do – then it’s time to face facts: your organization is spending way too much money on analysis that isn’t adding enough value. That’s because it’s almost certainly based on incomplete data sets. 
</span></p> <p><span style="font-weight: 400;">In fact, </span><a href="http://www.dbta.com/Editorial/Think-About-It/The-5-Ways-Modern-Data-Governance-Helps-Business-Productivity-113101.aspx"><span style="font-weight: 400;">Database Trends and Applications</span></a><span style="font-weight: 400;"> says poor data quality hurts productivity by up to 20 percent and prevents 40 percent of business initiatives from achieving targets. And a recent </span><a href="https://www.gartner.com/smarterwithgartner/how-to-stop-data-quality-undermining-your-business/"><span style="font-weight: 400;">Gartner survey</span></a><span style="font-weight: 400;"> found that poor data quality costs businesses $15 million every year.</span></p> <p><span style="font-weight: 400;">Not only that, but you’re also incurring a ton of hidden costs:</span></p> <ul> <li style="font-weight: 400;"><b>Lost employees (and clients)</b><span style="font-weight: 400;">: Good employees hate dealing with bad data. They’ll eventually grow frustrated and leave. Bad data can also lead to wrong decisions and embarrassing client mishaps</span></li> <li style="font-weight: 400;"><b>Lost time: </b><span style="font-weight: 400;">The more time lost fumbling with incomplete data, the less effective your employees will be (and the more frustrated they’ll get). Not to mention the </span><a href="https://www.entrepreneur.com/article/316450"><span style="font-weight: 400;">needless cost </span></a><span style="font-weight: 400;">of all that wasted time</span></li> <li style="font-weight: 400;"><b>Lost opportunities:</b><span style="font-weight: 400;"> Analysis based on flawed modeling is often worse than no analysis at all. 
With no central ownership, groups working with siloed data they believe to be complete is a recipe for disaster</span></li> </ul> <p><span style="font-weight: 400;">Unfortunately, data silos are most prevalent in older, more traditional data warehouses that don’t have strong data integration tools to help them out. Along with this, the more obvious costs associated with running a traditional, on-prem data warehouse lie in scaling the system – which typically requires expensive equipment investments and upgrades – along with finding specialized expertise to keep it online.</span></p> <p><b>First things first: Run a TCO analysis</b></p> <p><span style="font-weight: 400;">Clearly, there are real costs associated with traditional data warehouses that may not immediately show up on a balance sheet. The same goes for modern data platforms in the cloud that haven’t been cost optimized. But these sometimes hidden costs are just one part of your overall cost analysis. </span></p> <p><a href="https://www.infoworld.com/article/3198366/cloud-computing/the-high-cost-and-risk-of-on-premise-vs-cloud.html"><span style="font-weight: 400;">The first thing</span></a><span style="font-weight: 400;"> you need to do is run a </span><a href="https://www.datacenterknowledge.com/archives/2013/10/01/using-a-total-cost-of-ownership-tco-model-for-your-data-center"><span style="font-weight: 400;">Total Cost of Ownership</span></a><span style="font-weight: 400;"> analysis – initial capital expenditures (CapEx) summed with operational expenses (OpEx) – on both the current and proposed systems. </span></p> <p><span style="font-weight: 400;">This also requires a cost comparison of on-premises versus cloud systems. 
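The TCO arithmetic described above (initial CapEx summed with ongoing OpEx) can be sketched in a few lines. The dollar figures and the three-year horizon below are illustrative assumptions, not vendor pricing:

```python
def tco(capex, monthly_opex, months):
    """Total Cost of Ownership: initial capital expenditure
    plus operational expenses over the evaluation period."""
    return capex + monthly_opex * months

# Hypothetical figures for a 36-month evaluation window.
on_prem = tco(capex=500_000, monthly_opex=10_000, months=36)  # large upfront investment
cloud = tco(capex=0, monthly_opex=20_000, months=36)          # subscription only

print(f"on-prem TCO: ${on_prem:,}")  # on-prem TCO: $860,000
print(f"cloud TCO:   ${cloud:,}")    # cloud TCO:   $720,000
```

A real model would also fold in the hidden on-prem costs mentioned earlier (cooling, upgrades, specialized staff), which only widens the gap.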
The high costs of on-prem systems have already been mentioned and are </span><a href="https://www.infoworld.com/article/3198366/cloud-computing/the-high-cost-and-risk-of-on-premise-vs-cloud.html"><span style="font-weight: 400;">well-documented</span></a><span style="font-weight: 400;">: They require a large CapEx investment out of the gate, are expensive to upgrade, and require all sorts of cooling and fire suppression add-ons. Cloud systems typically only require smaller, monthly OpEx. </span></p> <p><span style="font-weight: 400;">So while cloud users end up receiving a regular bill, they aren’t hobbled by gigantic initial investments. </span><a href="https://cloud.google.com/products/calculator/"><span style="font-weight: 400;">Google Cloud Platform </span></a><span style="font-weight: 400;">and </span><a href="https://azure.microsoft.com/en-us/pricing/tco/calculator/"><span style="font-weight: 400;">Microsoft Azure</span></a><span style="font-weight: 400;"> have even released their own cost calculators to drive this point home. 
</span></p> <p><span style="font-weight: 400;">The business case can be broken down like this:</span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">Large CapEx expenses associated with an on-prem system can take significant money away from other areas of the organization</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">A monthly OpEx paid as a subscription fee is much easier on the corporate wallet, keeping organizations more nimble</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">If performance or costs aren’t up to standard, cloud users can always cancel</span></li> </ul> <p><span style="font-weight: 400;">But this begs the question: If cloud is the way to go in terms of optimizing costs, how does one optimize costs further within the cloud itself – especially for analytics purposes?</span></p> <p><b>Optimizing processing costs in the cloud </b></p> <p><span style="font-weight: 400;">Whether you’re using Google Cloud Platform or Microsoft Azure, or AWS, optimizing cloud analytics costs essentially comes down to two things: optimizing data processing costs, and optimizing data storage costs. </span></p> <p><span style="font-weight: 400;">A good first step is to look at running data transformations outside the data warehouse. It may seem counter-intuitive, but after ingesting and integrating all of your data into the data warehouse, it’s more efficient to pull that data back out and process it in something like </span><a href="https://spark.apache.org/"><span style="font-weight: 400;">Apache Spark</span></a><span style="font-weight: 400;"> – a framework that, when combined with Google’s </span><a href="https://cloud.google.com/dataproc/"><span style="font-weight: 400;">Dataproc</span></a><span style="font-weight: 400;">, can be up to 30 percent more cost-efficient than similar alternatives. 
This minimizes your cloud data warehouse processing costs.</span></p> <p><span style="font-weight: 400;">Organizations can also utilize </span><a href="https://cloud.google.com/blog/products/gcp/fastest-track-to-apache-hadoop-and-spark-success-using-job-scoped-clusters-on-cloud-native-architecture"><span style="font-weight: 400;">ephemeral clusters</span></a><span style="font-weight: 400;"> within Spark, which allows you to run several different jobs in parallel along with configuring cluster idle time. You can provision your Spark clusters to terminate if they’ve been idle for a set amount of time, or automatically come online when fresh data arrives, thus increasing cluster efficiency and eliminating downtime.</span></p> <p><span style="font-weight: 400;">Organizations can also use something called </span><a href="https://cloud.google.com/preemptible-vms/"><span style="font-weight: 400;">preemptible virtual machines</span></a><span style="font-weight: 400;"> (VMs), which are essentially low-cost, short-life instances. They’re cheap and aren’t super reliable, but are great (and cost-effective) for fault-tolerant workloads and batch jobs. You can use pre-emptible VMs for exploration, machine learning algorithm training, and development work.</span></p> <p><span style="font-weight: 400;">And if preemptible VMs aren’t right for your use case, data exploration involving heavy processing can also be conducted in the data lake instead of the data warehouse. This can also lead to cost savings.</span></p> <p><b>Optimizing storage costs in the cloud</b></p> <p><span style="font-weight: 400;">Optimizing your cloud processing costs, however, is just half the battle. Deploying the right cloud storage options can also go a long way.</span></p> <p><span style="font-weight: 400;">Remember the data lake we just mentioned? It can also be used for storage to minimize cloud data warehouse storage costs. 
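The storage-side levers work the same way on any cloud. As a rough illustration of why compact formats and compression matter for storage bills, here is a sketch using gzip on a synthetic, repetitive event log (gzip stands in for the format-level savings of, for example, AVRO over JSON; the data and ratios are made up):

```python
import gzip
import json

# A repetitive event log, as raw analytics data often is.
records = [{"user_id": i % 100, "event": "page_view", "page": "/home"}
           for i in range(10_000)]

raw = json.dumps(records).encode("utf-8")
compressed = gzip.compress(raw)

print(len(raw), len(compressed))
# Highly repetitive data compresses to a small fraction of its raw size,
# and object-storage bills scale with the bytes you keep.
```

Making conversion and compression a standard step in the ingestion pipeline turns this one-off saving into a recurring one.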
But there are other, relatively simple, approaches you can take to keep cloud storage costs down:</span></p> <ul> <li style="font-weight: 400;"><b>AVRO vs JSON:</b><span style="font-weight: 400;"> Instead of storing data as JSON files, it’s smart to institute a standard file conversion to AVRO files, which are more size-efficient</span></li> <li style="font-weight: 400;"><b>Compression equals savings:</b><span style="font-weight: 400;"> Similarly, compressing all your data files as a matter of process helps keep storage costs down</span></li> <li style="font-weight: 400;"><b>Consider cold storage:</b><span style="font-weight: 400;"> Cloud platforms like Azure and GCP offer cold storage options, such as </span><a href="https://azure.microsoft.com/en-ca/blog/introducing-azure-cool-storage/"><span style="font-weight: 400;">Azure Cool Blob</span></a><span style="font-weight: 400;"> and </span><a href="https://cloud.google.com/storage/archival/"><span style="font-weight: 400;">Google’s Nearline and Coldline</span></a><span style="font-weight: 400;">, which are less expensive options for storing large datasets and archived information: under some conditions, cold tier storage can equal </span><a href="https://www.datacenterknowledge.com/archives/2016/06/08/cold-storage-in-the-cloud-comparing-aws-google-microsoft"><span style="font-weight: 400;">savings of up to 50 per cent</span></a></li> <li style="font-weight: 400;"><b>Evaluate data retention policies:</b><span style="font-weight: 400;"> In a perfect world, you’d keep all your data. But if you have so much that even keeping it in cold storage is cost prohibitive, you can always change your retention policies to delete very old raw data (you always have the option of keeping the aggregate data around, which takes up less storage space). 
Watch our video, </span><a href="https://resources.pythian.com/webinar/data-hoarding-age-of-machine-learning/"><span style="font-weight: 400;">Data Hoarding in the Age of Machine Learning</span></a><span style="font-weight: 400;"> to learn more. </span></li> </ul> <p><b>Get cost control baked into your analytics platform</b></p> <p><span style="font-weight: 400;">By including the best of cloud services, open source software and automated processes, Pythian’s cloud-native analytics platform, </span><a href="https://pythian.com/analytics-as-a-service/"><span style="font-weight: 400;">Kick AaaS</span></a><span style="font-weight: 400;">, has cost controls built into it. It starts with the infrastructure layer that uses Spark on Kubernetes, which simplifies cluster management and makes resource utilization more efficient for Spark workloads by letting you spin them up and down as you need. Kick AaaS also uses size-efficient Avro files, data file compression, and makes the most of available cloud cost control features such as cold storage and setting upper limits on data processing for queries. The Kick AaaS platform, along with Pythian professional services, ensures your cloud analytics costs are always optimized and under control. </span></p> <p><span style="font-weight: 400;">Whatever your cloud strategy, Pythian expertise is there to help you every step of the way, including helping you make the most of the cost control features offered by many of the cloud service platforms. </span><span style="font-weight: 400;">Find out how <a href="https://pythian.com/cloud-services/">Pythian can help you</a> control your analytics costs in the cloud. 
</span></p> </div></div> Ron Kennedy https://blog.pythian.com/?p=105352 Wed Nov 07 2018 09:15:48 GMT-0500 (EST) Oracle 18c New Feature Pluggable Database Switchover https://gavinsoorma.com/2018/11/oracle-18c-new-feature-pluggable-database-switchover/ <p>In earlier releases prior to Oracle 18c, while we could enable Data Guard for a Multitenant Container/Pluggable database environment, we were restricted when it came to performing a Switchover or Failover &#8211; it had to be performed at the Container Database (CDB) level.</p> <p>This meant that a database role reversal would affect each and every PDB hosted by the CDB undergoing a Data Guard Switchover or Failover.</p> <p>In Oracle 12c  Release 2, a new feature called <strong class="term">refreshable clone PDB</strong> was introduced. A refreshable  clone PDB is a <span id="GUID-13A96755-8407-42B0-A8B8-00E84FD8A360__d345e364">read-only clone that can periodically synchronize itself with its source PDB. </span></p> <p>This synchronization could be configured to happen manually or automatically based on a predefined interval for the refresh.</p> <p>In Oracle 18c a new feature has been added using  the refreshable clone mechanism which enables us to now <strong>perform a switchover at the individual PDB level.</strong> So we are enabling high availability at the PDB level within the CDB.</p> <p>We can now issue a command in Oracle 18c like this:</p> <p>SQL&gt; alter pluggable database orclpdb1<br /> refresh mode manual<br /> from orclpdb1@cdb2_link<br /> <strong>switchover</strong>;</p> <p>After the switchover completes, the original source PDB becomes the refreshable clone PDB (which can only be opened in <code class="codeph">READ ONLY</code> mode), while the original refreshable clone PDB is now open in read/write mode functioning as a source PDB.</p> <p><a href="https://gavinsoorma.com/2018/11/oracle-18c-pluggable_database-switchover/"> How to perform a Switchover for a Pluggable Database (Members Only) </a></p> 
Gavin Soorma https://gavinsoorma.com/?p=8389 Tue Nov 06 2018 22:28:06 GMT-0500 (EST) Oracle SaaS – Business Applications in the Cloud – as of Oracle OpenWorld 2018 https://technology.amis.nl/2018/11/06/oracle-saas-business-applications-in-the-cloud-as-of-oracle-openworld-2018/ <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-4.png"><img width="867" height="320" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-4.png" border="0"></a>Oracle has two main pillars on which its (future) business rests &#8211; dixit Larry Ellison: Oracle Cloud Infrastructure (including the Autonomous Database) and SaaS. In this article, I will relate some of the key announcements from Oracle regarding the business applications.</p> <p>It is useful to realize that Oracle&#8217;s portfolio of business applications can be regarded in various ways. </p> <p>There are horizontal applications &#8211; with generic functionality that is applicable to more or less every organization in the world. For example financial administration, human capital management or customer relationship management. The vertical applications are for specific industries &#8211; such as health care, energy upstream or dairy production. In this article we will focus mainly on the horizontal applications &#8211; but please realize that Oracle offers dozens if not hundreds of vertical apps as well.</p> <p>Another way to segment the applications is by their operating model. Some of the applications run on premises whereas others are offered in the form of SaaS. Note that the products that are not offered as SaaS could still run on the cloud &#8211; but that would be on IaaS, managed by the customer. Oracle&#8217;s traditional horizontal applications &#8211; EBusiness Suite, Siebel, PeopleSoft and JD Edwards &#8211; are all non-SaaS and are typically run on premises. 
The Fusion Applications suite is a SaaS offering, as are tens of other products that Oracle has acquired over the years &#8211; for example BlueKai, Eloqua, Taleo, Vitrue, Involver and BigMachines.</p> <p>The following is not an exhaustive list of acquisitions in the applications space (from <a title="https://www.oracle.com/corporate/acquisitions/" href="https://www.oracle.com/corporate/acquisitions/">https://www.oracle.com/corporate/acquisitions/</a>)<a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-5.png"><img width="655" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-5.png" border="0"></a></p> <p>One of the USPs that Oracle wants to leverage against the competition is the mutual integration between all these applications. The (implied) synergy from having various functional areas serviced by members of the same family &#8211; because obviously family knows how to talk amongst themselves. To some extent there is that synergy &#8211;&nbsp;but not as much as you might expect from products from the same vendor or from listening to the salesman.</p> <h3>Fusion Applications and CX</h3> <p>Around 2006, Oracle started the development of Fusion Applications. The next generation&nbsp;business application. Leveraging the functional richness of EBS, PeopleSoft, Siebel and JD Edwards as well as the latest generation of (platform) technology &#8211; Fusion Middleware. It took a while. And many things changed along the way. For example the emergence of the cloud. Then Fusion Applications were announced, launched, released and finally actually implemented &#8211; over the period 2012 through 2016. Today, Oracle claims over 6000 customers live on &#8211; some aspect of &#8211; Fusion Applications. 
Oracle claims leadership in virtually every area of horizontal applications &#8211; except Sales &amp; CRM where it defers to Salesforce.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-6.png"><img width="603" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-6.png" border="0"></a></p> <p>Oracle proclaims its SaaS portfolio: &#8220;The World&#8217;s most innovative cloud applications suite&#8221;:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-7.png"><img width="598" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-7.png" border="0"></a></p> <p>This statement arises in part from the cranked up release rhythm: once every quarter a new release is rolled out. With quick feedback loops. With the applications running on the cloud, Oracle is able to collect a lot of metrics (anonymized) on the usage of features and functions and it uses that information to drive, shape and prioritize innovation.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-8.png"><img width="598" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-8.png" border="0"></a></p> <p></p> <p>The key innovation themes are clearly stated and shine through in many of the new features announced. 
These themes are: IoT, Blockchain, AI/ML and Smart UI (powered by ML) &#8211; such as voice powered user interface and digital assistant with natural language based conversational style UI.</p> <p>An example of what this innovation leads to is the Expenses Chatbot in SaaS ERP:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-9.png"><img width="596" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-9.png" border="0"></a></p> <p>It allows employees to submit expenses through a mere photograph of a receipt. Using optical character recognition from scanning the receipt along with contextual information about the employee, her agenda, physical location, past behavior, the expense form is fully or largely completed automatically. Whenever possible, it is processed automatically as well &#8211; although it may engage the employee in a conversational dialog when company policies are not satisfied. This was demonstrated in his keynote by Larry Ellison.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-10.png"><img width="676" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-10.png" border="0"></a></p> <h3>Fusion Analytics Data Warehouse</h3> <p>The synergy or even the mutual acknowledgement of various Oracle products is not always obvious. However, the newly announced Fusion Analytics Data Warehouse is a nice combination of various technologies and products Oracle has at its disposal. It brings together Autonomous Data Warehouse which is preconfigured with database schemas for storing consolidated data from across various Fusion Applications products as well as predefined data integration (ETL) flows for populating the data warehouse. 
Additionally, Oracle Analytics Cloud is leveraged &#8211; prepopulated with relevant metadata (describing business data objects) in the context of Fusion Applications, as well as predefined dashboards and reports.</p> <p>&nbsp;</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-11.png"><img width="599" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-11.png" border="0"></a></p> <p>In Oracle Analytics Cloud, this is what Fusion Analytics Data Warehouse (a successor of sorts to BI Apps) looks like:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-12.png"><img width="601" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-12.png" border="0"></a></p> <p>with many predefined visualizations for various business aspects of Fusion Applications:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-13.png"><img width="598" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-13.png" border="0"></a></p> <p>Here is an example of one of the dashboards &#8211; in this case for Campus Hire Performance:</p> <p>This dashboard is predefined against data structures in ADW that are populated automatically. The business analyst using this dashboard to support her job did not have to configure, design, prepare or program anything.
From this point onwards, the dashboard can be annotated or tailored.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-14.png"><img width="596" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-14.png" border="0"></a></p> <h3>Applications Unlimited</h3> <p>After Oracle had acquired PeopleSoft (along with JD Edwards) and Siebel in quick succession, it made a strong gesture towards all customers of these products. Oracle made a pledge &#8211; referred to as Applications Unlimited &#8211; not only to honor existing commitments to the customers of these now acquired products but to step up the evolution and innovation, and to continue with each product line indefinitely (or at least until 2030, as was later stipulated). It was critical that no customer would feel neglected or get the impression that Oracle wanted to migrate them away from their current product [to Fusion Applications] &#8211; as such an impression would open the door to other vendors.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-15.png"><img width="602" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-15.png" border="0"></a></p> <p>Oracle certainly delivered on that promise. All products have evolved &#8211; some at a much faster pace than was the case under their previous owner. And for many years (in fact over a decade) there was not the slightest suggestion from Oracle that customers should consider moving away from their business application.</p> <p>Until this year.
Oracle has announced the Soar program: &#8220;The Last Upgrade you&#8217;ll ever do&#8221; [no threat intended].</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-16.png"><img width="842" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-16.png" border="0"></a></p> <p>With Soar, Oracle provides a clear path to the cloud. From EBusiness Suite, PeopleSoft and Hyperion to the cloud. Not to IaaS &#8211; but to SaaS. To Fusion Applications. So at long last Oracle tries to tempt customers to move from one product line to another. To the one that innovates fastest and will be there the longest.</p> <p>Given the fact that Fusion Applications borrowed heavily in some areas from EBusiness Suite (Financials, ERP) and in others from PeopleSoft (HCM), it is understandable that these are the first Soar trajectories on offer. If the on-premises EBusiness Suite or PeopleSoft instances are not riddled with customizations &#8211; which of course many are &#8211; then the upgrade to Fusion Applications in these particular areas could be relatively simple.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-17.png"><img width="593" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-17.png" border="0"></a></p> <p>Oracle states that Soar comprises automated utilities that will move business data and presumably configuration (meta)data to SaaS. And Oracle has a proven approach &#8211; a step-by-step process that should bring customers to SaaS within 20 weeks &#8211; while the shop stays open, obviously.
When the upgrade is complete, Oracle suggests, there is a substantial cost saving.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-18.png"><img width="601" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-18.png" border="0"></a></p> <p>Oracle recognizes the fact that many instances have customizations and custom integrations, and offers (some) support for bringing these over to the SaaS environment. However, for many organizations it is important to very carefully assess the viability of this approach for their customizations. Oracle comments that, because of the much richer functionality of Fusion ERP compared to EBusiness Suite, the need for customizations will be far less in Fusion ERP.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-19.png"><img width="615" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-19.png" border="0"></a></p> <p>Additional tools listed for the Soar approach:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-20.png"><img width="585" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-20.png" border="0"></a></p> <p>And more upgrade paths to the cloud are heading our way.</p> <p>Note: to further sway organizations to adopt Oracle SaaS Fusion Applications, Oracle provided some details about how it is leveraging Oracle SaaS itself to run its own business:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-21.png"><img width="599" height="338" title="image" style="margin: 0px auto; float: none; display: block;
background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-21.png" border="0"></a></p> <p></p> <h2>Resources</h2> <p>Second OOW 2018 keynote by Larry Ellison: <a title="https://www.oracle.com/openworld/on-demand.html?bcid=5853119603001" href="https://www.oracle.com/openworld/on-demand.html?bcid=5853119603001">https://www.oracle.com/openworld/on-demand.html?bcid=5853119603001</a> .</p> <p>AMIS Oracle OpenWorld Review &#8211; Pillar 2 &#8211; SaaS &#8211; slide deck: <a title="https://t.co/PtJSBbNrC4" href="https://t.co/PtJSBbNrC4">https://t.co/PtJSBbNrC4</a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/11/06/oracle-saas-business-applications-in-the-cloud-as-of-oracle-openworld-2018/">Oracle SaaS &#8211; Business Applications in the Cloud &#8211; as of Oracle OpenWorld 2018</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50370 Tue Nov 06 2018 14:36:25 GMT-0500 (EST) Orphaned Files in ASM https://oracledba.blogspot.com/2018/11/orphaned-files-in-asm.html <div class="separator" style="clear: both; text-align: center;"><a imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="315" data-original-width="477" height="211" src="https://4.bp.blogspot.com/-f3d1Pfia_gE/W96x4DiNgzI/AAAAAAAF7Xo/MWeRWYu4Y182INp-WkzSMY6tcatvou02QCLcBGAs/s320/orphaned.png" width="320" /></a></div><br />Hi,<br />In our lab environments we test Data Guard on a daily basis, and we frequently “play” with failover, switchover, and flashback . 
The output of this playground is that we have some leftovers in ASM; we call these leftovers <b>orphan files</b>.<br />To solve this, I created a SQL script that queries the ASM views against the database views.<br />This script should be run on the database instance (not the ASM instance).<br /><pre>
SET VERIFY OFF

SET LINESIZE 200
SET SERVEROUTPUT ON
SET PAGESIZE 50000

BEGIN
   FOR c IN (SELECT name diskgroup FROM v$asm_diskgroup)
   LOOP
      FOR l IN
         (SELECT 'rm ' || files files
            FROM ((SELECT '+' || c.diskgroup || files files, TYPE
                     FROM (SELECT UPPER (SYS_CONNECT_BY_PATH (aa.name, '/')) files,
                                  aa.reference_index,
                                  b.TYPE
                             FROM (SELECT file_number, alias_directory, name,
                                          reference_index, parent_index
                                     FROM v$asm_alias) aa,
                                  (SELECT parent_index
                                     FROM v$asm_alias
                                    WHERE group_number =
                                             (SELECT group_number
                                                FROM v$asm_diskgroup
                                               WHERE name = c.diskgroup)
                                      AND alias_index = 0) a,
                                  (SELECT file_number, TYPE
                                     FROM v$asm_file
                                    WHERE group_number =
                                             (SELECT group_number
                                                FROM v$asm_diskgroup
                                               WHERE name = c.diskgroup)) b
                            WHERE aa.file_number = b.file_number(+)
                              AND aa.alias_directory = 'N'
                              AND b.TYPE IN ('DATAFILE', 'ONLINELOG',
                                             'CONTROLFILE', 'TEMPFILE')
                            START WITH aa.parent_index = a.parent_index
                            CONNECT BY PRIOR aa.reference_index = aa.parent_index)
                    WHERE SUBSTR (files,
                                  INSTR (files, '/', 1, 1),
                                    INSTR (files, '/', 1, 2)
                                  - INSTR (files, '/', 1, 1)
                                  + 1) =
                             (SELECT '/' || UPPER (db_unique_name) || '/'
                                FROM v$database))
                  MINUS
                  (SELECT UPPER (name) files, 'DATAFILE' TYPE FROM v$datafile
                   UNION ALL
                   SELECT UPPER (name) files, 'TEMPFILE' TYPE FROM v$tempfile
                   UNION ALL
                   SELECT UPPER (name) files, 'CONTROLFILE' TYPE
                     FROM v$controlfile
                    WHERE name LIKE '+' || c.diskgroup || '%'
                   UNION ALL
                   SELECT UPPER (name), 'CONTROLFILE' TYPE
                     FROM v$datafile_copy
                    WHERE deleted = 'NO'
                   UNION ALL
                   SELECT UPPER (MEMBER) files, 'ONLINELOG' TYPE
                     FROM v$logfile
                    WHERE MEMBER LIKE '+' || c.diskgroup || '%')))
      LOOP
         DBMS_OUTPUT.put_line (l.files);
      END LOOP;
   END LOOP;
END;
/
</pre>Sample output:<br /><blockquote class="tr_bq">rm +DATA/MYDB/CONTROLFILE/BACKUP.11645.952252647</blockquote>Personally, I am not running this script automatically; I consider its output a report of deletion candidates.<br />Run the commands in the ASM instance using asmcmd, for example:<br /><blockquote class="tr_bq">ASMCMD&gt; rm +DATA/MYDB/CONTROLFILE/BACKUP.11645.952252647</blockquote><br />Disclaimer: <span style="color: red;"><b>Use at your own risk</b></span>.<br /><br />Yossi<br /><br /> Yossi Nixon tag:blogger.com,1999:blog-6061714.post-8519947262075480455 Tue Nov 06 2018 10:00:00 GMT-0500 (EST) Troubleshooting a Datapatch Error in Oracle https://blog.pythian.com/troubleshooting-datapatch-error/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>As part of patching preparation, the following command was executed and failed:<br /> <code><br /> Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db</code><br /> <code><br /> Following step failed during analysis:<br /> /bin/sh -c 'cd /u01/app/oracle/product/12.1.0/db; ORACLE_HOME=/u01/app/oracle/product/12.1.0/db ORACLE_SID=cdb
/u01/app/oracle/product/12.1.0/db/OPatch/<strong>datapatch -prereq -verbose</strong>'<br /> </code></p> <p>Let&#8217;s run datapatch -prereq -verbose manually.<br /> <code><br /> $ORACLE_HOME/OPatch/datapatch -prereq -verbose<br /> SQL Patching tool version 12.1.0.2.0 Production on Thu Oct 11 11:06:07 2018<br /> Copyright (c) 2012, 2017, Oracle. All rights reserved.</code><br /> <code><br /> Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_21425_2018_10_11_11_06_07/sqlpatch_invocation.log</code><br /> <code><br /> Connecting to database...OK<br /> Note: Datapatch will only apply or rollback SQL fixes for PDBs<br /> that are in an open state, no patches will be applied to closed PDBs.<br /> Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation (Doc ID 1585822.1)<br /> Bootstrapping registry and package to current versions...done</code><br /> <code><br /> Queryable inventory could not determine the current opatch status.<br /> Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual' and/or check the invocation log<br /> /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_21425_2018_10_11_11_06_07/sqlpatch_invocation.log for the complete error.<br /> Prereq check failed, exiting without installing any patches.</code><br /> <code><br /> Please refer to MOS Note 1609718.1 and/or the invocation log<br /> /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_21425_2018_10_11_11_06_07/sqlpatch_invocation.log for information on how to resolve the above errors.</code><br /> <code><br /> SQL Patching tool complete on Thu Oct 11 11:06:15 2018</code></p> <p>The documentation provided in Doc ID 1585822.1 did not help resolve the issue.</p> <p>Instead, running the SQL from the above output helped to identify the issue.<br /> <code><br /> SQL&gt; select dbms_sqlpatch.verify_queryable_inventory from dual;</code><br /> <code><br /> VERIFY_QUERYABLE_INVENTORY<br /> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------<br /> <strong>ORA-20013</strong>: DBMS_QOPATCH ran mostly in non install area</code><br /> <code><br /> SQL&gt;</code></p> <p>Here is the referenced document:</p> <p>Datapatch fails with &#8220;ORA-20009: Job Load_opatch_inventory_1&#8221; &#8220;<strong>ORA-20013</strong>: DBMS_QOPATCH ran mostly in non install area&#8221; (Doc ID 2033620.1)</p> <p>The following directory objects in dba_directories had DIRECTORY_PATH values pointing to the incorrect Oracle Home: OPATCH_SCRIPT_DIR, OPATCH_LOG_DIR and OPATCH_INST_DIR.</p> <p>This occurred because the database was cloned from a source whose Oracle Home path differed from the target&#8217;s.</p> <p>In conclusion, when cloning databases, verify that the Oracle Home is identical between source and target; if it is not, change the database configuration at the target accordingly.</p> <p>&nbsp;</p> </div></div> Michael Dinh https://blog.pythian.com/?p=105320 Tue Nov 06 2018 09:14:55 GMT-0500 (EST) Birmingham City University (BCU) Talk #7 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/1kGpo1xS5VI/ <p><img class="size-full wp-image-4907 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2015/05/bcu.jpg" alt="" width="116" height="107" />Yesterday I went to <a href="http://www.bcu.ac.uk/">Birmingham City University (BCU)</a> to do a talk on &#8220;Graduate Employability&#8221; to a bunch of second year undergraduate IT students. I&#8217;ve done this a few times at BCU, and also at UKOUG for a session directed at students.</p> <p>The session is what originally inspired my series of blog posts called <a href="https://oracle-base.com/blog/2017/07/31/what-employers-want-a-series-of-posts/">What Employers Want</a>.</p> <p>I&#8217;ve mentioned before, these sessions are a little different to your typical conference sessions.
Perhaps you should try reaching out to a local college or university to see if they need some guest speakers, and try something outside your comfort zone.</p> <p>Thanks to <a href="http://www.bcu.ac.uk/computing/about-us/our-staff/jagdev-bhogal">Jagdev Bhogal</a> and <a href="http://www.bcu.ac.uk/">BCU</a> for inviting me again. See you again soon.</p> <p>Cheers</p> <p>Tim…</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/11/06/birmingham-city-university-bcu-talk-7/">Birmingham City University (BCU) Talk #7</a> was first posted on November 6, 2018 at 9:53 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/1kGpo1xS5VI" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8655 Tue Nov 06 2018 03:53:47 GMT-0500 (EST) Oracle 18c and 12c on Fedora 29 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/c0rGcUU2noE/ <p><img class="size-full wp-image-6126 alignleft" src="/blog/wp-content/uploads/2016/06/fedora.jpg" alt="" width="119" height="117" />Danger, Will Robinson! Obligatory warning below.</p> <ul> <li><a href="https://oracle-base.com/articles/linux/do-not-install-oracle-on-fedora-before-reading-this">Do not install Oracle on Fedora before reading this!</a></li> </ul> <p>So here we go…</p> <p>Fedora 29 has been out for a bit over a week now. Over the weekend I had a play with it and noticed a couple of differences between Fedora 28 and Fedora 29 as far as Oracle installations are concerned. There are some extra packages that need to be installed. 
Also, one of the two symbolic links that were needed for the Oracle installation on Fedora 28 is now present in Fedora 29, but pointing to the wrong version of the package.</p> <p>Here are the articles I did as a result of this.</p> <ul> <li><a href="/articles/linux/fedora-29-installation">Fedora 29 (F29) Installation</a></li> <li><a href="/articles/12c/oracle-db-12cr2-installation-on-fedora-29">Oracle Database 12c Release 2 (12.2) Installation On Fedora 29 (F29)</a></li> <li><a href="/articles/18c/oracle-db-18c-installation-on-fedora-28">Oracle Database 18c Installation On Fedora 29 (F29)</a></li> </ul> <p>It’s pretty similar to the installation on Fedora 28, with the exception of the extra packages and a slight alteration to the symbolic links.</p> <p>Once the &#8220;bento/fedora-29&#8221; box becomes available I&#8217;ll probably do a Vagrant build for this, but for the moment it was the old-fashioned approach. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>So now you know how to do it, please don’t! <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Cheers</p> <p>Tim…</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/11/05/oracle-18c-and-12c-on-fedora-29/">Oracle 18c and 12c on Fedora 29</a> was first posted on November 5, 2018 at 1:33 pm.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/c0rGcUU2noE" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8650 Mon Nov 05 2018 07:33:19 GMT-0500 (EST) FIRST_ROWS_10 CBO Is Hopeless, It’s Using The Wrong Index !!
(Weeping Wall) https://richardfoote.wordpress.com/2018/11/05/first_rows_10-cbo-is-hopeless-its-using-the-wrong-index-weeping-wall/ There&#8217;s an organisation I had been dealing with on and off over the years who were having all sorts of issues with their Siebel System and who were totally convinced their performance issues were due directly to being forced to use the FIRST_ROWS_10 optimizer. I&#8217;ve attempted on a number of occasions to explain that their [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5697 Mon Nov 05 2018 06:45:41 GMT-0500 (EST) UKOUG Tech 18 https://jonathanlewis.wordpress.com/2018/11/05/ukoug-tech-18/ <p>One month to go before <em><strong><a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/tAgendaWebsite.csp?pageID=306&amp;eventID=2&amp;language=1&amp;mainFramePage=dailyagenda.csp&amp;mode=">the big event in Liverpool</a></strong></em>, so I&#8217;ve been browsing the agenda to get some idea of the talks I&#8217;ll probably go to. At present this is what my list looks like:</p> <h3>Sunday</h3> <pre>14:00 Database block checking - the unknown truth
15:00 TBD
16:10 Oracle Database 12c consolidation: why and how to manage CPU resources
17:10 Securefiles - the hidden storage organisation inside LOB segments
</pre> <h3>Monday</h3> <pre> 9:00 The Optimizer &amp; the road to the latest generation of the Oracle database
11:20 Making Materialized View great again
11:50 Winning performance challenges in Oracle Multitenant
13:35 Struggling with Statistics (ME)
16:15 Constraint Optimization (or the difference one comma makes)
17:10 TBD
</pre> <h3>Tuesday</h3> <pre> 9:00 The basics of understanding Execution Plans (ME - double session)
11:40 Dissecting SQL Plan Management Options
12:35 Cost Based Optimisation - The round table (ME)
14:25 Single Row vs. the Array Interface vs. Parallelism
15:20 Cost Based Optimisation - The Panel (ME &amp; several others)
16:35 Declarative Constraints - Features and Performance impact
17:05 Oracle SQL Developer - Everything you need to know about tuning
</pre> <h3>Wednesday</h3> <pre> 9:00 Successful Star Schemas
 9:55 Hardening the Oracle database
11:40 Tracing parallel execution
12:35 Advanced RAC programming features
14:25 TBD
15:20 Pitfalls and Surprises with dbms_stats; how to solve them
</pre> <p>I reserve the right to change my mind on the day, of course, since the competition is strong &#8211; and I may get wrapped up in conversations with other attendees and not notice the time passing.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19115 Mon Nov 05 2018 05:46:21 GMT-0500 (EST) Oracle 18c New Feature Read-Only Oracle Homes https://gavinsoorma.com/2018/11/oracle-18c-new-feature-read-only-oracle-homes/ <p>One of the new features of Oracle Database 18c is that we can now configure an Oracle Home in <strong>read-only</strong> mode.</p> <p><strong>In a read-only Oracle home, all the configuration files like the database init.ora, password files, listener.ora and tnsnames.ora, as well as related log files, reside outside of the read-only Oracle home</strong>.</p> <p>This feature allows us to use the read-only Oracle home as a &#8216;master or gold&#8217; software image that can be distributed across multiple servers. So it <strong>enables mass provisioning</strong> and also <strong>simplifies the patching process</strong> where hundreds of target servers are potentially required to have a patch applied.
Here we patch the &#8216;master&#8217; read-only Oracle Home and this image can then be deployed on multiple target servers seamlessly.</p> <p>To configure a read-only Oracle Home, we need to do a software-only 18c installation &#8211; that is, we do not create a database as part of the software installation.</p> <p>We then run the command <strong>roohctl -enable</strong>, which configures the Oracle Home in read-only mode.</p> <p>In addition to the ORACLE_HOME and ORACLE_BASE variables, we have a new variable called <strong>ORACLE_BASE_CONFIG</strong> and, like the oratab file, an additional file called <strong>orabasetab</strong>.</p> <p>So in an 18c read-only Oracle Home, for example, the dbs directory is no longer located under $ORACLE_HOME/dbs as it traditionally was, but under $ORACLE_BASE_CONFIG &#8211; which takes the form of a directory structure called $ORACLE_BASE/homes/&lt;ORACLE_HOME_NAME&gt;.</p> <p><a href="https://gavinsoorma.com/2018/11/how-to-configure-an-oracle-18c-read-only-oracle-home/">Read more about how to configure an Oracle 18c read-only Oracle Home (Members Only)</a></p> Gavin Soorma https://gavinsoorma.com/?p=8374 Sun Nov 04 2018 20:29:04 GMT-0500 (EST) How to configure an Oracle 18c read-only Oracle Home https://gavinsoorma.com/2018/11/how-to-configure-an-oracle-18c-read-only-oracle-home/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/11/how-to-configure-an-oracle-18c-read-only-oracle-home/"><b>Login</b></a> to access.
</div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8376 Sun Nov 04 2018 20:23:14 GMT-0500 (EST) PASS Summit 2018- Times up! https://dbakevlar.com/2018/11/pass-summit-2018-times-up/ <p>I’ve had my nose to the grindstone for almost four months now investing my brain in new technology, reinvesting fully in performing technical work and prioritizing it all to ensure that I’ll be successful. &nbsp;I stepped back from many speaking events to make sure my private and professional life would succeed in this transition, but that doesn’t mean I would skip PASS Summit. &nbsp;What I didn’t expect was the event would arrive so quickly and I’d feel like I’m always short of time preparing for it technically or personally!</p> <div class="wp-block-image"><figure class="aligncenter"><img src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/11/8C724475-90AB-4004-9B5D-29C34E3767B0.gif?w=650&#038;ssl=1" alt="" class="wp-image-8302" data-recalc-dims="1"/><figcaption>Times up</figcaption></figure></div> <p>Well, it’s 4am on the Sunday morning before the event and I’m up, so I might as well make use of the time and blog. &nbsp;</p> <p>This year is no different than most and as is common for most Microsoft events, some of my Oracle peeps wanted to know if I had time to meet for lunch while in Seattle. &nbsp;I started looking for my two main calendars and I realized, although I thought I’d make it easier on me professionally by only attending Summit the 6-9, those four days have packed a slew of sessions, presentations, meetings and social events. &nbsp;Maybe I’m just getting old, but how the hell do we do this for a full week, let alone less than four days???</p> <p>First off, the awesome Diversity and Inclusion book in the Let Them Finish series has come out on Amazon! &nbsp;“<b>Let Them Finish, Stories from the Trenches</b>” is available and we’ll have a panel to discuss the book and answer questions from those that attend.
&nbsp;It’s going to be a great panel, with <a href="https://twitter.com/sqlmelody?lang=e">Melody</a> Zacharias, <a href="https://twitter.com/angelatidwell?lang=en">Angela</a> Tidwell, <a href="https://twitter.com/tracyboggiano?lang=en">Tracy</a> Boggiano, <a href="https://twitter.com/dbawithabat">John</a> Moorehouse, <a href="https://twitter.com/bornsql">Randolph</a> West and myself. &nbsp;The panel will be in Skagit4 on Wednesday, Nov. 7th, so don’t miss out on the book or the authors who put themselves out there to tackle this difficult and fascinating subject.</p> <p>I’ll be presenting in two technical sessions and one panel this year. &nbsp;Both technical sessions will be on Friday, so don’t run off before learning how to automate appropriately for the right DevOps use case in<b> “DevOps and Decoys- &nbsp;How to Build a <a href="https://www.pass.org/summit/2018/Learn/ConferenceSessions.aspx">Successful</a> Microsoft DevOps Solution”</b> in room 618 at 9:30am on Nov. 9th. &nbsp;Just a short while later, at 11:15, I’ll be discussing the future of GDPR, along with how companies need to think about the data landscape in room 6c, in <b>“GDPR &#8211; The Buck Stops Here”</b>. &nbsp;</p> <p>Now three sessions in four days doesn’t sound like *that* much, but then you have the bloggers tables.  I’ll be live blogging during both keynotes and the WIT luncheon.  I will be attending and blogging the executive sessions, (another three sessions.). I am the president of the Denver SQL Server User Group, so I’ll be attending the leadership session and trying to connect with other leaders in the community at the Community space.  </p> <p>Then, then&#8230; we have the social events.  Yes, I want to karaoke.  I have been fearful of attending this event, not because I can’t sing, but because noise and the distraction make it difficult to concentrate hard enough to sing.  Luckily, no one cares if you can sing at these events, so I’m cool. 
<img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I have great parties, at least two a night, every night for the time I’m there.  I have happy hours, planned meetings with people I’m mentoring, peers I can’t wait to see and it all just seems too packed into the four days.  </p> <p>This is <a href="https://www.pass.org/summit/2018/Home.aspx">PASS</a> Summit-  a wonderful, crazy, chaotic event of learning, networking, socializing and getting our geek on.  See you on Tuesday, folks.</p> <br><br>Tags:&nbsp;&nbsp;<a href="https://dbakevlar.com/tag/dba-life/" rel="tag">DBA Life</a>, <a href="https://dbakevlar.com/tag/microsoft/" rel="tag">Microsoft</a>, <a href="https://dbakevlar.com/tag/pass/" rel="tag">Pass</a>, <a href="https://dbakevlar.com/tag/summit/" rel="tag">Summit</a><br><hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/11/pass-summit-2018-times-up/">PASS Summit 2018- Times up!</a>], All Right Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8300 Sun Nov 04 2018 07:44:08 GMT-0500 (EST) Oracle 18c Pluggable Database Switchover https://gavinsoorma.com/2018/11/oracle-18c-pluggable_database-switchover/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/11/oracle-18c-pluggable_database-switchover/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8370 Sun Nov 04 2018 03:18:47 GMT-0500 (EST) दूर कहीं क्यों जाते हो?
http://ezsaid.blogspot.com/2018/11/blog-post.html ashish tag:blogger.com,1999:blog-14142302.post-6525692150297312827 Sat Nov 03 2018 16:03:00 GMT-0400 (EDT) MobaXTerm 11.0 http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/J628l_sDeuE/ <p><img class="alignleft wp-image-4858" src="https://oracle-base.com/blog/wp-content/uploads/2015/05/command-prompt.png" alt="" width="150" height="150" />Looks like <a href="http://mobaxterm.mobatek.net/">MobaXTerm 11.0</a> was released yesterday.</p> <p>The <a href="https://mobaxterm.mobatek.net/download-home-edition.html">downloads</a> and <a href="https://mobaxterm.mobatek.net/download-home-edition.html">changelog</a> are in the usual places.</p> <p>This version comes with a long list of bug fixes and improvements in the changelog.</p> <p>Cheers</p> <p>Tim…</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/11/02/mobaxterm-11-0/">MobaXTerm 11.0</a> was first posted on November 2, 2018 at 6:27 pm.<br /> Tim... https://oracle-base.com/blog/?p=8647 Fri Nov 02 2018 13:27:27 GMT-0400 (EDT) Relational to JSON with PL/SQL https://jsao.io/2018/11/relational-to-json-with-pl-sql/ <p>In the <a href="https://jsao.io/2018/10/relational-to-json-with-sql/">last post in this series</a>, I demonstrated how powerful functions added to the SQL engine in Oracle Database 12.2 allow you to generate JSON with ease. But what if you were doing something sufficiently complex that it required the procedural capabilities that PL/SQL provides? Well, you&#8217;re covered there too!
In this post, I&#8217;ll show you how new JSON based object types can be used to get the job done.<br /> <span id="more-3215"></span></p> <div class="alert alert-info" role="alert"> <strong>Please Note:</strong> This post is part of <a href="https://jsao.io/2015/07/relational-to-json-in-oracle-database">a series on generating JSON from relational data in Oracle Database</a>. See that post for details on the solution implemented below as well as other options that can be used to achieve that goal. </div> <h4>Solution</h4> <p>The <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/pl-sql-object-types-for-json.html#GUID-C0C2A8C0-99BD-4770-9EA2-B7D53804FC18">12.2+ PL/SQL object types</a> available for working with JSON are:</p> <ul> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/json-types.html#GUID-639D871E-D116-4793-888E-F7948E48F4DE">JSON_ELEMENT_T</a> &#8211; supertype of the other JSON types (rarely used directly)</li> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/json-types.html#GUID-10062646-E36F-48B1-9F24-751B613DFB5A">JSON_OBJECT_T</a> &#8211; used to represent JSON objects</li> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/json-types.html#GUID-69E61601-5533-418B-8C03-E591B4F7FE36">JSON_ARRAY_T</a> &#8211; used to represent JSON arrays</li> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/json-types.html#GUID-B9526171-92E2-423A-8831-872ADCC71D1E">JSON_SCALAR_T</a> &#8211; used to represent scalar values in JSON (strings, numbers, booleans, and null)</li> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/json-types.html#GUID-50692D34-FEAF-471E-BA22-17530E31D95D">JSON_KEY_LIST </a> &#8211; lesser used collection type for object keys</li> </ul> <p>In the following solution, I use the <span class="inline-code">JSON_OBJECT_T</span> and <span 
class="inline-code">JSON_ARRAY_T</span> types to generate the desired JSON output. Smaller data structures are used to compose larger ones until I have the JSON object I want. Then I use the <span class="inline-code&qu>to_clob</span> method to serialize the in-memory representation to JSON.</p> <pre class="crayon-plain-tag">create or replace function get_dept_json( p_dept_id in departments.department_id%type ) return clob is cursor manager_cur ( p_manager_id in employees.employee_id%type ) is select * from employees where employee_id = manager_cur.p_manager_id; l_date_format constant varchar2(20) := 'DD-MON-YYYY'; l_dept_rec departments%rowtype; l_dept_json_obj json_object_t; l_loc_rec locations%rowtype; l_loc_json_obj json_object_t; l_country_rec countries%rowtype; l_country_json_obj json_object_t; l_manager_rec manager_cur%rowtype; l_manager_json_obj json_object_t; l_employees_json_arr json_array_t; l_employee_json_obj json_object_t; l_job_rec jobs%rowtype; l_jobs_json_arr json_array_t; l_job_json_obj json_object_t; begin select * into l_dept_rec from departments where department_id = get_dept_json.p_dept_id; l_dept_json_obj := json_object_t(); l_dept_json_obj.put('id', l_dept_rec.department_id); l_dept_json_obj.put('name', l_dept_rec.department_name); select * into l_loc_rec from locations where location_id = l_dept_rec.location_id; l_loc_json_obj := json_object_t(); l_loc_json_obj.put('id', l_loc_rec.location_id); l_loc_json_obj.put('streetAddress', l_loc_rec.street_address); l_loc_json_obj.put('postalCode', l_loc_rec.postal_code); select * into l_country_rec from countries cou where cou.country_id = l_loc_rec.country_id; l_country_json_obj := json_object_t(); l_country_json_obj.put('id', l_country_rec.country_id); l_country_json_obj.put('name', l_country_rec.country_name); l_country_json_obj.put('regionId', l_country_rec.region_id); l_loc_json_obj.put('country', l_country_json_obj); l_dept_json_obj.put('location', l_loc_json_obj); open 
manager_cur(l_dept_rec.manager_id); fetch manager_cur into l_manager_rec; if manager_cur%found then l_manager_json_obj := json_object_t(); l_manager_json_obj.put('id', l_manager_rec.employee_id); l_manager_json_obj.put('name', l_manager_rec.first_name || ' ' || l_manager_rec.last_name); l_manager_json_obj.put('salary', l_manager_rec.salary); select * into l_job_rec from jobs job where job.job_id = l_manager_rec.job_id; l_job_json_obj := json_object_t(); l_job_json_obj.put('id', l_job_rec.job_id); l_job_json_obj.put('title', l_job_rec.job_title); l_job_json_obj.put('minSalary', l_job_rec.min_salary); l_job_json_obj.put('maxSalary', l_job_rec.max_salary); l_manager_json_obj.put('job', l_job_json_obj); l_dept_json_obj.put('manager', l_manager_json_obj); else l_dept_json_obj.put_null('manager'); end if; close manager_cur; l_employees_json_arr := json_array_t(); for emp_rec in ( select * from employees where department_id = l_dept_rec.department_id ) loop l_employee_json_obj := json_object_t(); l_employee_json_obj.put('id', emp_rec.employee_id); l_employee_json_obj.put('name', emp_rec.first_name || ' ' || emp_rec.last_name); l_employee_json_obj.put('isSenior', emp_rec.hire_date &lt; to_date('01-jan-2005', 'dd-mon-yyyy')); l_employee_json_obj.put('commissionPct', emp_rec.commission_pct); l_jobs_json_arr := json_array_t(); for jh_rec in ( select job_id, department_id, start_date, end_date from job_history where employee_id = emp_rec.employee_id ) loop l_job_json_obj := json_object_t(); l_job_json_obj.put('id', jh_rec.job_id); l_job_json_obj.put('departmentId', jh_rec.department_id); l_job_json_obj.put('startDate', to_char(jh_rec.start_date, l_date_format)); l_job_json_obj.put('endDate', to_char(jh_rec.end_date, l_date_format)); l_jobs_json_arr.append(l_job_json_obj); end loop; l_employee_json_obj.put('jobHistory', l_jobs_json_arr); l_employees_json_arr.append(l_employee_json_obj); end loop; l_dept_json_obj.put('employees', l_employees_json_arr); return 
l_dept_json_obj.to_clob(); exception when others then if manager_cur%isopen then close manager_cur; end if; raise; end get_dept_json;</pre> <h4>Output</h4> <p>When passed a department id of 10, the function returns a CLOB populated with JSON that matches the goal 100%. </p> <pre class="crayon-plain-tag">{ &quot;id&quot;: 10, &quot;name&quot;: &quot;Administration&quot;, &quot;location&quot;: { &quot;id&quot;: 1700, &quot;streetAddress&quot;: &quot;2004 Charade Rd&quot;, &quot;postalCode&quot;: &quot;98199&quot;, &quot;country&quot;: { &quot;id&quot;: &quot;US&quot;, &quot;name&quot;: &quot;United States of America&quot;, &quot;regionId&quot;: 2 } }, &quot;manager&quot;: { &quot;id&quot;: 200, &quot;name&quot;: &quot;Jennifer Whalen&quot;, &quot;salary&quot;: 4400, &quot;job&quot;: { &quot;id&quot;: &quot;AD_ASST&quot;, &quot;title&quot;: &quot;Administration Assistant&quot;, &quot;minSalary&quot;: 3000, &quot;maxSalary&quot;: 6000 } }, &quot;employees&quot;: [ { &quot;id&quot;: 200, &quot;name&quot;: &quot;Jennifer Whalen&quot;, &quot;isSenior&quot;: true, &quot;commissionPct&quot;: null, &quot;jobHistory&quot;: [ { &quot;id&quot;: &quot;AD_ASST&quot;, &quot;departmentId&quot;: 90, &quot;startDate&quot;: &quot;17-SEP-1995&quot;, &quot;endDate&quot;: &quot;17-JUN-2001&quot; }, { &quot;id&quot;: &quot;AC_ACCOUNT&quot;, &quot;departmentId&quot;: 90, &quot;startDate&quot;: &quot;01-JUL-2002&quot;, &quot;endDate&quot;: &quot;31-DEC-2006&quot; } ] } ] }</pre> <h4>Summary</h4> <p>The JSON types for PL/SQL are a very welcome addition to Oracle Database. 
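</p> <p>As a quick illustration of the API shape (a standalone sketch, separate from the solution above &#8211; the document and key names here are made up for the example), this anonymous block parses a JSON document, adds a key, and serializes it back:</p> <pre class="crayon-plain-tag">declare
  l_obj json_object_t;
begin
  -- parse an existing JSON document into the in-memory representation
  l_obj := json_object_t.parse('{"id":10,"name":"Administration"}');

  -- add a key, then serialize the object back to text
  l_obj.put('managed', true);
  dbms_output.put_line(l_obj.stringify);
end;</pre> <p>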
I&#8217;ve only demonstrated how to build up objects in memory to generate JSON, but there are many other methods for modification, serialization, introspection, and so on.</p> <p>If you&#8217;ve seen the <a href="https://jsao.io/2015/07/relational-to-json-with-pljson/">PL/JSON solution</a>, you&#8217;ll note that the code is very similar since they both use the object-oriented capabilities of Oracle Database (as opposed to <a href="https://jsao.io/2015/07/relational-to-json-with-apex_json/">APEX_JSON</a> which is more procedural). When compared to PL/JSON, the main advantages of the 12.2+ built-in types are:</p> <ul> <li>Simplicity: There&#8217;s no installation needed.</li> <li>Documentation: The <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/">JSON Developer&#8217;s Guide</a> provides some getting started content in <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/pl-sql-object-types-for-json.html#GUID-C0C2A8C0-99BD-4770-9EA2-B7D53804FC18">Part IV: PL/SQL Object Types for JSON</a> and the <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/arpls/json-types.html#GUID-BDE10AAA-445B-47B5-8A39-D86C8EA99283">PL/SQL Packages and Types Reference</a> provides additional API details.</li> <li>Performance: I ran a small test on a local <a href="https://www.oracle.com/database/technologies/appdev/xe.html">18c XE</a> database where I generated the JSON for each department in the HR schema 100 times. The PL/JSON solution took about 4.6 seconds on average while the solution in this post and the APEX_JSON solution both took around 1.5 seconds.</li> </ul> <p>Having said all that, if you&#8217;re not yet using Oracle Database 12.2+, then PL/JSON is still a great option for working with JSON.
The PL/JSON team continues to build out the APIs, address issues, and develop the documentation.</p> danmcghan https://jsao.io/?p=3215 Fri Nov 02 2018 12:18:23 GMT-0400 (EDT) LEAP#430 Driving Scavenged Linear Steppers https://blog.tardate.com/2018/11/leap430-driving-scavenged-linear-steppers.html <p>CD/DVD drives are a great source of interesting scavenged components - in particular laser units and stepper motors.</p> <p>I pulled some of the head control stepper motors some time back. They are small 4-wire bipolar stepper motors with a worm drive for linear motion. Datasheets are non-existent(!), so this is a little project to figure out their specs and demonstrate driving the units with an Arduino and bespoke H Bridge control circuit.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Kinetics/StepperMotors/BipolarWormDrive/SimpleHBridge">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a></p> <p><a href="https://github.com/tardate/LittleArduinoProjects/tree/master/Kinetics/StepperMotors/BipolarWormDrive/SimpleHBridge"><img src="https://leap.tardate.com/Kinetics/StepperMotors/BipolarWormDrive/SimpleHBridge/assets/SimpleHBridge_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/11/leap430-driving-scavenged-linear-steppers.html Fri Nov 02 2018 09:35:05 GMT-0400 (EDT) Query Builder: Where Are My Joins? https://www.thatjeffsmith.com/archive/2018/11/query-builder-where-are-my-joins/ <p>One of the improvements for version 18.3 was a much improved, performant query builder.</p> <p>Of course, not all things are free. Or in fact, nothing is free.</p> <p>So who paid for these performance gains? We disabled one of its primary features.</p> <p>But first, let&#8217;s take a step back and explain the scenario.</p> <h3>How it Was, Pre 18.3</h3> <p>The query builder would allow you to reverse engineer a query in a worksheet to a visual representation.
And it would allow you to build queries from scratch, by dragging and dropping tables into the query builder design area.</p> <p>I advocated that users do a combination of both, especially if they were new to SQL in general. The query builder is also useful for helping build a &#8216;picture&#8217; of your queries.</p> <p><strong>The Problem</strong><br /> It was tremendously slow. It could take 30 or more seconds to visually render an existing query. And dragging and dropping new tables into a query could take 10 seconds, each time you did it. So slow, that I didn&#8217;t recommend to people that they should use it anymore.</p> <p>Why was it slow? </p> <p>Well, the folks that build the solution (it&#8217;s a 3rd party library that we have licensed) had some pretty gnarly queries that do look-ups on the table to find foreign keys and &#8216;related&#8217; tables. This would do two things. It would &#8216;draw the pretty lines for you&#8217;, and it would give you a list of related objects for each table.</p> <p>Like this &#8211;</p> <p><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/11/query-builder-joins-1.png" alt="" width="731" height="391" class="aligncenter size-full wp-image-7073" /></p> <p>Nice, right?</p> <h3>The Solution</h3> <p>While that was a nice feature, it just cost too much.
It was slow to the point that users wouldn&#8217;t give it a second try, and I wouldn&#8217;t recommend they even try it in the first place.</p> <p>The easiest solution was to &#8216;nuke&#8217; those &#8216;bad fk lookup queries.&#8217; </p> <p>So in 18.3, we don&#8217;t do that, and the Query Builder renders in a second or less for most queries. That&#8217;s crazy-good. It also means that now you need to draw or code the joins yourself.</p> <div id="attachment_7074" style="width: 1056px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/11/query-builder-joins-2.png" alt="" width="1046" height="547" class="size-full wp-image-7074" /><p class="wp-caption-text">Here&#8217;s the option to toggle if you want the magic auto-joins back.</p></div> <p>You can also just drag and drop the queries to the worksheet FIRST, say YES to the JOINS, then toggle to the Query Builder.</p> <p>Here, let me show you how I mean:</p> <div id="attachment_7076" style="width: 1034px" class="wp-caption aligncenter"><a href="https://www.thatjeffsmith.com/wp-content/uploads/2018/11/dnd-worksheet-qb.gif"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/11/dnd-worksheet-qb.gif" alt="" width="1024" height="764" class="size-full wp-image-7076" /></a><p class="wp-caption-text">I really like this.</p></div> <h3>Can we do better?</h3> <p>We could try to refactor the 3rd party vendor&#8217;s bad SQL, but every time we do that, upgrades get much trickier. I&#8217;d like to have our cake and eat it too, but I also need to make game-time decisions and try to make for the best user experience.
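</p> <p>The compromise in concrete terms: with the automatic foreign-key lookups gone, a join the tool used to draw for you is now typed by hand &#8211; for example, against the HR schema (a minimal sketch; the tables and columns are just for illustration):</p> <pre class="crayon-plain-tag">select e.first_name,
       e.last_name,
       d.department_name
from   employees e
join   departments d
on     d.department_id = e.department_id;</pre> <p>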
I think we&#8217;ve found a good compromise here&#8230;especially if my assumption holds that many folks will use the Query Builder for existing queries.</p> <p>If I&#8217;m wrong, here&#8217;s your chance to tell me.</p> <p>The good news is, we have a new release every 3 months now, so tweaks, fixes, and improvements are never that far away.</p> thatjeffsmith https://www.thatjeffsmith.com/?p=7072 Fri Nov 02 2018 08:09:48 GMT-0400 (EDT) Why Automation Matters : Keep Your Auditors Happy http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/SIGSAR-k8OI/ <p><img class="wp-image-8641 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2018/11/audit-automation-2.jpg" alt="" width="200" height="146" />We were having some of our systems audited recently. I&#8217;ve been part of this sort of thing a few times over the years, but I was pleasantly surprised by a number of the questions that were being asked during this most recent session. I&#8217;ll paraphrase some of their questions and my answers.</p> <ul> <li><strong>How do you document your build processes?</strong> We have silent build scripts (where possible). The same build scripts are used for each build, with the differences just being environment variables. If a silent build is not possible, we do a semi-silent build, and use screen grabs for the manual bits.</li> <li><strong>How do you keep control of your builds and configuration?</strong> Everything goes into a cloud-based Git repository, and we have a local git server as a backup of the cloud service.</li> <li><strong>How do you manage change through your systems?</strong> Requests, Incidents, Enhancements, Tasks are raised and placed in a Task Board, which is kind-of like a Kanban board, in Service Now.
Progression of changes to production require a Change Request (CR), which may need to be agreed by the Change Advisory Board (CAB), depending on the nature of the change.</li> <li><strong>Are changes applied manually, or using automation?</strong> This was followed by a long discussion about what we can and can&#8217;t automate because of our internal company structure and politics. It also covered the differences between automation of changes to infrastructure and in the development process. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></li> </ul> <p>There was a lot more than this, but this is enough to make my point.</p> <p>The reactions to the answers can be summarised as follows.</p> <ul> <li>When we had a repeatable automated process we got a thumbs up.</li> <li>When we had a process that was semi-automated, because full automation was impractical (because of additional constraints), we got a thumbs up.</li> <li>When we had a manual process, we got a thumbs down, because maintaining consistency and preventing human error is really hard when using manual processes.</li> </ul> <p>In a sentence I guess I could say, if you are using DevOps you pass. If you are not using DevOps you fail. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Now I am coming to this with a certain level of bias in favour of DevOps, and that bias may be skewing my interpretation of the situation somewhat, but that is how it felt to me.</p> <p>As I said earlier, I was pleasantly surprised by this angle. It&#8217;s nice to see the auditors giving me some extra leverage, and it certainly feels like automation is a good way to keep the auditors happy! <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Cheers</p> <p>Tim&#8230;</p> <p>PS. 
This is just one part of the whole auditing process.</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/11/02/why-automation-matters-keep-your-auditors-happy/">Why Automation Matters : Keep Your Auditors Happy</a> was first posted on November 2, 2018 at 10:35 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/SIGSAR-k8OI" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8636 Fri Nov 02 2018 05:35:29 GMT-0400 (EDT) Monitoring Spring Boot applications with Prometheus and Grafana https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/ <p>In order to compare the performance of different JDKs for reactive Spring Boot services, I made a setup in which a Spring Boot application is wrapped in a Docker container. This makes it easy to create different containers for different JDKs with the same Spring Boot application running in it. The Spring Boot application exposes metrics to Prometheus. Grafana can read these metrics and allows you to make nice visualizations from them. This blog post describes a setup to get you up and running in minutes. A follow-up post will show the JDK comparisons. You can download the code <a href="https://github.com/MaartenSmeets/gs-reactive-rest-service.git">here</a> (in the complete folder). To indicate how easy this setup is, getting it up and running and writing this blog post took me less than 1.5 hours total. 
I did not have much prior knowledge of Prometheus and Grafana save for a single workshop at AMIS by Lucas Jellema (<a href="https://technology.amis.nl/2018/09/25/getting-started-on-monitoring-with-prometheus-and-grafana/">see here</a>).</p> <p><span id="more-50310"></span></p> <h2>Wrapping Spring Boot in a Docker container</h2> <p>Wrapping Spring Boot applications in a Docker container is easy. See for example here. You need to do the following:</p> <p>Create a Dockerfile as follows (change the FROM entry to get a different JDK)</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/dockerfile.png"><img data-attachment-id="50313" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/dockerfile/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/dockerfile.png" data-orig-size="531,95" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="dockerfile" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/dockerfile-300x54.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/dockerfile.png" class="aligncenter size-full wp-image-50313" src="https://technology.amis.nl/wp-content/uploads/2018/11/dockerfile.png" alt="" width="531" height="95" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/dockerfile.png 531w, https://technology.amis.nl/wp-content/uploads/2018/11/dockerfile-300x54.png 300w" sizes="(max-width: 531px) 100vw, 531px" /></a></p> <p>Add a plugin to the pom.xml file.</p> <p><a 
href="https://technology.amis.nl/wp-content/uploads/2018/11/pom-plugin.png"><img data-attachment-id="50316" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/pom-plugin/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/pom-plugin.png" data-orig-size="632,380" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="pom plugin" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/pom-plugin-300x180.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/pom-plugin.png" class="aligncenter size-full wp-image-50316" src="https://technology.amis.nl/wp-content/uploads/2018/11/pom-plugin.png" alt="" width="632" height="380" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/pom-plugin.png 632w, https://technology.amis.nl/wp-content/uploads/2018/11/pom-plugin-300x180.png 300w" sizes="(max-width: 632px) 100vw, 632px" /></a></p> <p>And define the property used:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/property.png"><img data-attachment-id="50320" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/property/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/property.png" data-orig-size="372,93" data-comments-opened="1" 
data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="property" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/property-300x75.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/property.png" class="aligncenter size-full wp-image-50320" src="https://technology.amis.nl/wp-content/uploads/2018/11/property.png" alt="" width="372" height="93" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/property.png 372w, https://technology.amis.nl/wp-content/uploads/2018/11/property-300x75.png 300w" sizes="(max-width: 372px) 100vw, 372px" /></a></p> <p>Now you can do mvn clean package dockerfile:build and it will create the Docker image springio/gs-reactive-rest-service:latest for you. 
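</p> <p>For reference, a minimal Dockerfile along these lines should do the job. Treat it as a sketch: the base image tag and the JAR path are assumptions, so check the repository for the exact file.</p> <pre class="brush: plain; title: ; notranslate">
# Base JDK image; swap the FROM entry to compare different JDKs (tag is an assumption)
FROM openjdk:8-jdk-alpine
VOLUME /tmp
# JAR_FILE is passed in by the dockerfile-maven plugin configured in the pom.xml
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
</pre> <p>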
You can run this with: docker run -p 8080:8080 -t springio/gs-reactive-rest-service:latest</p> <h2>Making Prometheus-style metrics available from Spring Boot</h2> <p>In order to make Prometheus metrics available from the Spring Boot application, some dependencies need to be added (see <a href="https://dzone.com/articles/monitoring-using-spring-boot-2-prometheus-and-graf">here</a>).</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics.png"><img data-attachment-id="50319" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/prometheus-metrics/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics.png" data-orig-size="834,664" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="prometheus metrics" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics-300x239.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics.png" class="aligncenter size-full wp-image-50319" src="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics.png" alt="" width="834" height="664" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics.png 834w, https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics-300x239.png 300w, https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-metrics-768x611.png 768w" sizes="(max-width: 834px) 100vw, 834px" /></a></p> <p>Now you can run the Docker container and go to 
a URL like: http://localhost:8080/actuator/prometheus and you will see something like:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator.png"><img data-attachment-id="50318" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/prometheus-from-actuator/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator.png" data-orig-size="1108,804" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="prometheus from actuator" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator-300x218.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator-1024x743.png" class="aligncenter size-large wp-image-50318" src="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator-1024x743.png" alt="" width="702" height="509" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator-1024x743.png 1024w, https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator-300x218.png 300w, https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator-768x557.png 768w, https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-from-actuator.png 1108w" sizes="(max-width: 702px) 100vw, 702px" /></a></p> <h2>Provide Prometheus configuration</h2> <p>I&#8217;ve provided a small configuration file to make Prometheus look at the metrics URL from Spring Boot (see <a 
href="https://github.com/MaartenSmeets/gs-reactive-rest-service/blob/master/complete/prom.yml">here</a>):</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-config.png"><img data-attachment-id="50317" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/prometheus-config/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-config.png" data-orig-size="476,178" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="prometheus config" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-config-300x112.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-config.png" class="aligncenter size-full wp-image-50317" src="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-config.png" alt="" width="476" height="178" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-config.png 476w, https://technology.amis.nl/wp-content/uploads/2018/11/prometheus-config-300x112.png 300w" sizes="(max-width: 476px) 100vw, 476px" /></a></p> <h2>Putting Spring Boot, Prometheus and Grafana together</h2> <p>As you can see in the above screenshot, I&#8217;ve used the hostname spring-boot. I can do this because of the docker compose configuration, container_name. 
This is as you can see below:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/putting-it-together-1.png"><img data-attachment-id="50326" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/putting-it-together-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/putting-it-together-1.png" data-orig-size="321,376" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="putting it together" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/putting-it-together-1-256x300.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/putting-it-together-1.png" class="aligncenter size-full wp-image-50326" src="https://technology.amis.nl/wp-content/uploads/2018/11/putting-it-together-1.png" alt="" width="321" height="376" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/putting-it-together-1.png 321w, https://technology.amis.nl/wp-content/uploads/2018/11/putting-it-together-1-256x300.png 256w" sizes="(max-width: 321px) 100vw, 321px" /></a></p> <p>Grafana and Prometheus are the official Docker images for those products. I&#8217;ve added the previously mentioned configuration file to the Prometheus instance (the volumes entry under prometheus).</p> <p>Now I can do docker-compose up and it will start Spring Boot (available at localhost:8080), Prometheus with the configuration file (available at localhost:9090) and Grafana (available at localhost:3000). 
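</p> <p>For reference, a docker-compose.yml along these lines wires the three containers together. It is a sketch: the image tags and the prom.yml path are assumptions, so check the repository for the actual file.</p> <pre class="brush: plain; title: ; notranslate">
version: '3'
services:
  spring-boot:
    # container_name makes the service reachable as hostname 'spring-boot'
    container_name: spring-boot
    image: springio/gs-reactive-rest-service:latest
    ports:
      - "8080:8080"
  prometheus:
    container_name: prometheus
    image: prom/prometheus
    volumes:
      # mount the scrape configuration into the official image
      - ./prom.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - "3000:3000"
</pre> <p>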
They will be put in the same Docker network and can access each other by the hostnames &#8216;prometheus&#8217;, &#8216;grafana&#8217; and &#8216;spring-boot&#8217;.</p> <h2>Configure Grafana</h2> <p>In Grafana it is easy to add Prometheus as a data source.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/grafana.png"><img data-attachment-id="50315" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/grafana/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/grafana.png" data-orig-size="755,694" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="grafana" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/grafana-300x276.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/grafana.png" class="aligncenter size-full wp-image-50315" src="https://technology.amis.nl/wp-content/uploads/2018/11/grafana.png" alt="" width="755" height="694" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/grafana.png 755w, https://technology.amis.nl/wp-content/uploads/2018/11/grafana-300x276.png 300w" sizes="(max-width: 755px) 100vw, 755px" /></a></p> <p>When you have done this, you can add dashboards. An easy way to do this is to create a simple query in Prometheus and copy it to Grafana to create a graph from it. 
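</p> <p>For example, expressions like the following can be pasted into a Grafana panel. The metric names are assumptions based on Micrometer&#8217;s defaults and may differ in your setup.</p> <pre class="brush: plain; title: ; notranslate">
# current JVM heap usage in bytes
jvm_memory_used_bytes{area="heap"}
# per-second HTTP request rate over the last minute
rate(http_server_requests_seconds_count[1m])
</pre> <p>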
There are probably better ways to do this, but I have yet to dive into Grafana to learn more about its capabilities.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus.png"><img data-attachment-id="50312" data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/copy-from-prometheus/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus.png" data-orig-size="1278,821" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="copy from prometheus" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus-300x193.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus-1024x658.png" class="aligncenter size-large wp-image-50312" src="https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus-1024x658.png" alt="" width="702" height="451" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus-1024x658.png 1024w, https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus-300x193.png 300w, https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus-768x493.png 768w, https://technology.amis.nl/wp-content/uploads/2018/11/copy-from-prometheus.png 1278w" sizes="(max-width: 702px) 100vw, 702px" /></a></p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana.png"><img data-attachment-id="50314" 
data-permalink="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/from-prometheus-to-grafana/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana.png" data-orig-size="1274,804" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="from prometheus to grafana" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana-300x189.png" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana-1024x646.png" class="aligncenter size-large wp-image-50314" src="https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana-1024x646.png" alt="" width="702" height="443" srcset="https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana-1024x646.png 1024w, https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana-300x189.png 300w, https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana-768x485.png 768w, https://technology.amis.nl/wp-content/uploads/2018/11/from-prometheus-to-grafana.png 1274w" sizes="(max-width: 702px) 100vw, 702px" /></a></p> <h2>Finally</h2> <p>It is easy and powerful to monitor a Spring Boot application using Prometheus and Grafana. Using a docker-compose file, it is also easy to put an assembly together to start/link the different containers. 
This makes it easy to start fresh if you want to.</p> <p>To try it out for yourself do the following (I&#8217;ve used the following VM (requires Vagrant and VirtualBox to build) with docker, docker-compose and maven preinstalled: <a href="https://github.com/MaartenSmeets/provisioning/tree/master/ubuntudev">here</a>)</p> <pre class="brush: plain; title: ; notranslate"> git clone https://github.com/MaartenSmeets/gs-reactive-rest-service cd gs-reactive-rest-service/complete mvn clean package mvn dockerfile:build docker-compose up </pre> <p>Then you can access the previously specified URL&#8217;s to access the Spring Boot application, Prometheus and Grafana.</p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/11/01/monitoring-spring-boot-applications-with-prometheus-and-grafana/">Monitoring Spring Boot applications with Prometheus and Grafana</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Maarten Smeets https://technology.amis.nl/?p=50310 Thu Nov 01 2018 15:12:20 GMT-0400 (EDT) New Emerging Technologies Track at ODTUG Kscope19 https://www.odtug.com/p/bl/et/blogaid=836&source=1 New to ODTUG Kscope19, the Emerging Technologies track offers ODTUG Kscope attendees the opportunity to learn about the latest and greatest technologies making a mark on the world. ODTUG https://www.odtug.com/p/bl/et/blogaid=836&source=1 Thu Nov 01 2018 14:31:43 GMT-0400 (EDT) It’s Official: Kubernetes is King https://blog.pythian.com/official-kubernetes-king/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">When it comes to container orchestration, nobody’s sitting on the fence anymore. In the four short years since it launched, Kubernetes has become the de facto standard, the platform chosen by a list of tech giants that includes AWS, Microsoft, Dell, Cisco, IBM, Intel, Red Hat, SAP, VMware and more. 
In fact, Kubernetes is even supported by Docker, in what is surely an admission that Docker’s own competing product, Swarm, can’t hope to beat Kubernetes.</span></p> <p><span style="font-weight: 400;">Kubernetes’ revenues are looking as good as its reputation. 451 Research </span><a href="https://451research.com/images/Marketing/press_releases/Application-container-market-will-reach-2-7bn-in-2020_final_graphic.pdf"><span style="font-weight: 400;">reports</span></a><span style="font-weight: 400;"> that application containers will be a $2.7 billion market by 2020, and it’s a safe bet that Kubernetes will claim the lion’s share of that spending. </span></p> <p><span style="font-weight: 400;">Containers in general provide a huge opportunity to the modern enterprise. They’re a boon to productivity, as they allow developers to spend less time debugging and more time writing code. They reduce server costs, because they can accommodate more applications than a traditional server deployment. Containers can run anywhere, thus expanding the range of deployment options available. And containers isolate the components of complex applications, easing worries about unintended knock-on effects when updates are performed.  </span></p> <p><span style="font-weight: 400;">The reasons for choosing Kubernetes, in particular, go well beyond its industry-wide adoption. It actually builds on two earlier iterations of Google’s internal orchestration platform, and it reflects over 15 years of experience with what was perhaps the most demanding production environment in the history of computing. Though Kubernetes involves a steeper learning curve than Docker Swarm, it offers a number of compensating benefits:</span></p> <ol> <li><b> Portability</b><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">For organizations that hesitate to go all in with the cloud, Kubernetes offers reassuring flexibility. 
It can run on-premises in your own data center, in a public cloud, or in a hybrid cloud environment. With Kubernetes, the same commands apply throughout.</span></li> <li><b> Scalability</b><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">Kubernetes automatically scales up your cluster the moment you need it. When the demand subsides, Kubernetes immediately scales back down to make sure your technology dollars aren’t wasted.</span></li> <li><b> Consistency in Deployments</b><span style="font-weight: 400;"><br /> </span><span style="font-weight: 400;">Deployments in the pre-container era were fussy, idiosyncratic and time-consuming. Containers solve that problem by mass-producing identical deployments that are easy to replace. And because Kubernetes can run anywhere, deployments remain consistent across clouds, bare metal and local development environments. For developers, this means less time debugging and more time doing work with strategic purpose.</span></li> <li><b> Harmony Between Operations and Development</b><b><br /> </b><span style="font-weight: 400;">The sophistication and reliability of Kubernetes can work to ease the traditional tensions that exist between development and operations personnel. Now, at last, developers can innovate through rapid iteration cycles without jeopardizing the system stability that’s so important to operations.</span></li> </ol> <p><span style="font-weight: 400;">Though containers are a relatively recent addition to enterprise technology, they are now pretty much a necessity in development. If your organization is already using containers, congratulations — incorporating Kubernetes will not be a problem. If you’re new to containers, you’ll have to begin by containerizing your applications, but the task is not as daunting as it might sound. 
</span></p> <p><a href="https://pythian.com/kubernetes-as-a-service/"><span style="font-weight: 400;">Find out how</span></a><span style="font-weight: 400;"> Pythian can help you establish or improve your container environment.</span></p> </div></div> Krista Colby-Wheatley https://blog.pythian.com/?p=105328 Thu Nov 01 2018 12:35:23 GMT-0400 (EDT) Video: What’s New in Oracle SQL Developer for 2018 https://www.thatjeffsmith.com/archive/2018/11/video-whats-new-in-oracle-sql-developer-for-2018/ <img width="840" height="480" src="https://www.thatjeffsmith.com/wp-content/uploads/2018/11/spool-zip-1024x585.png" class="attachment-large size-large wp-post-image" alt="" /><p>This was one of my 3 sessions at Open World last week. None of them were recorded, but I figured you might find this one interesting, so I ran through the slides again here.</p> <p><iframe width="853" height="480" src="https://www.youtube.com/embed/hUC0spkebDw" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></p> <p>If you search this blog for things like &#8216;18.1&#8217;, &#8216;18.2&#8217;, and &#8216;18.3&#8217;, you can find all the blog posts I put together showing some of these features.</p> <p>But a short list would definitely include:</p> <ul> <li><a href="https://www.thatjeffsmith.com/archive/2018/04/18-1-features-sql-injection-detection/" rel="noopener" target="_blank">SQL Injection Detection</a></li> <li><a href="https://www.thatjeffsmith.com/archive/2018/10/query-builder-on-inline-views-and-ansi-joins/" rel="noopener" target="_blank">Faster Query Builder &#038; ANSI Joins</a></li> <li><a href="https://www.thatjeffsmith.com/archive/2018/07/loading-data-from-oss-to-oracle-autonomous-cloud-services-with-sql-developer/" rel="noopener" target="_blank">Loading data from OSS to Autonomous DB</a></li> <li><a href="https://www.thatjeffsmith.com/archive/2018/04/18-1-new-formatting-option-right-align-query-keywords/" 
rel="noopener" target="_blank">Right Align Keywords Formatting</a></li> </ul> <p><strong>You know about this one, right? SPOOL to a ZIP file, whiz-bang!</strong><br /> <div id="attachment_7071" style="width: 1034px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/11/spool-zip.png" alt="" width="1024" height="585" class="size-full wp-image-7071" /><p class="wp-caption-text">We like to make things EASY.</p></div> thatjeffsmith https://www.thatjeffsmith.com/?p=7070 Thu Nov 01 2018 11:42:12 GMT-0400 (EDT) Data Guard Broker 19c https://oracledba.blogspot.com/2018/11/data-guard-broker-19c.html <div class="separator" style="clear: both; text-align: center;"><a imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://1.bp.blogspot.com/-b1PQ1CiI-c8/W9sEpoBcSbI/AAAAAAAF7WA/oxJX5ZfpDX4v_6HEeDVqCDFLZaueMWj0wCPcBGAYYCw/s200/Untitled.png" width="200" height="176" data-original-width="236" data-original-height="208" /></a></div>Hi,<br />In earlier versions of Data Guard, when the broker had problems, one of the standard answers I got from support was to recreate the broker configuration.<br />In other words:<br /><ol><li>Drop the configuration.</li><li>Create the configuration.</li></ol>If your broker is simple, this is not a huge request. Actually, you could follow "<a href="https://support.oracle.com/knowledge/Oracle%20Database%20Products/808783_1.html">Step By Step How to Recreate Data Guard Broker Configuration (Doc ID 808783.1)</a>".<br />But in my case, using Active Data Guard with Far Sync, with many fine-tuned configurations, it was not so convenient. 
So I had to find other, more efficient approaches:<br /><ol><li>Write down all broker commands, to be able to run them again next time.</li><li>For versions 11.2.0.4 and 12.1.0.2 I found a way to query the broker and re-create a current configuration creation script (<a href="https://oracledba.blogspot.com/2016/05/extract-data-guard-commands.html">Extract Data Guard Commands</a>).</li><li>I also had some issues trying to <a href="https://oracledba.blogspot.com/2016/10/dropremove-far-sync-configurations-from.html">Drop/remove Far-Sync Configurations from broker 12.1</a></li><li>On version 12.2.0.1 I found out that the broker metadata had changed, so I had to update my previous script: <a href="https://oracledba.blogspot.com/2018/02/extract-data-guard-commands-on-oracle.html">Extract Data Guard Commands on Oracle 12.2</a></li><li>Finally, in Oracle 19c there will be new commands to export and import a broker configuration 😊</li></ol><blockquote class="tr_bq">dgmgrl&gt; export configuration to ‘meta.xml’<br />dgmgrl&gt; import configuration from ‘meta.xml’</blockquote>Here is a full list, shown at Oracle Open World 2018, of new features promised for Oracle 19c:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-uw0qbX1zQF0/W9sEA-4k-SI/AAAAAAAF7V0/UsHwm3TVN2g-DvtDtrlOtV4IhfIYHldqACPcBGAYYCw/s1600/Untitled2.png" data-original-width="1600" data-original-height="815" /></a></div> Yossi Nixon tag:blogger.com,1999:blog-6061714.post-3828926972354891292 Thu Nov 01 2018 09:56:00 GMT-0400 (EDT) Join Cardinality – 5 https://jonathanlewis.wordpress.com/2018/11/01/join-cardinality-5/ <p>So far in this series I&#8217;ve written about the way that the optimizer estimates cardinality for an equijoin where one end of the join has a frequency histogram and the other end has a histogram of type:</p> <ul> <li><a 
href="https://jonathanlewis.wordpress.com/2018/10/05/join-cardinality-2/"><em><strong>Frequency</strong></em></a></li> <li><a href="https://jonathanlewis.wordpress.com/2018/10/09/join-cardinality-3/"><em><strong>Top-Frequency</strong></em></a></li> <li><a href="https://jonathanlewis.wordpress.com/2018/10/25/join-cardinality-4/"><em><strong>Hybrid</strong></em></a></li> </ul> <p>It&#8217;s now time to look at a join where the other end has a <em><strong>height-balanced</strong></em> histogram. Arguably it&#8217;s not sensible to spend time writing about this since you shouldn&#8217;t be creating them in 12c (depending, instead, on the hybrid histogram that goes with the <em><strong>auto_sample_size</strong></em>), and the arithmetic is different in 11g. However, there still seem to be plenty of people running 12c but not using the <em><strong>auto_sample_size</strong></em> and that means they could be generating some height-balanced histograms &#8211; so let&#8217;s generate some data and see what happens.</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: freq_hist_join_04a.sql rem Author: Jonathan Lewis rem Dated: Oct 2018 rem Purpose: rem rem Last tested rem 18.3.0.0 rem 12.2.0.1 rem 12.1.0.2 rem 11.2.0.4 Different results rem drop table t2 purge; drop table t1 purge; set linesize 156 set trimspool on set pagesize 60 set feedback off execute dbms_random.seed(0) create table t1( id number(6), n04 number(6), n05 number(6), n20 number(6), j1 number(6) ) ; create table t2( id number(8,0), n20 number(6,0), n30 number(6,0), n50 number(6,0), j2 number(6,0) ) ; insert into t1 with generator as ( select rownum id from dual connect by level &lt;= 1e4 -- &gt; comment to avoid WordPress format issue ) select rownum id, mod(rownum, 4) + 1 n04, mod(rownum, 5) + 1 n05, mod(rownum, 20) + 1 n20, trunc(2.5 * trunc(sqrt(v1.id*v2.id))) j1 from generator v1, generator v2 where v1.id &lt;= 10 -- &gt; comment to avoid WordPress format issue and v2.id &lt;= 10 -- &gt; 
comment to avoid WordPress format issue ; insert into t2 with generator as ( select rownum id from dual connect by level &lt;= 1e4 -- &gt; comment to avoid WordPress format issue ) select rownum id, mod(rownum, 20) + 1 n20, mod(rownum, 30) + 1 n30, mod(rownum, 50) + 1 n50, 28 - round(abs(7*dbms_random.normal)) j2 from generator v1 where rownum &lt;= 800 -- &gt; comment to avoid WordPress format issue ; commit; prompt ========================================================== prompt Using estimate_percent =&gt; 100 to get height-balanced in t2 prompt ========================================================== begin dbms_stats.gather_table_stats( ownname =&gt; null, tabname =&gt; 'T1', method_opt =&gt; 'for all columns size 1 for columns j1 size 254' ); dbms_stats.gather_table_stats( ownname =&gt; null, tabname =&gt; 'T2', estimate_percent =&gt; 100, method_opt =&gt; 'for all columns size 1 for columns j2 size 20' ); end; / </pre> <p>As in earlier examples I&#8217;ve created some empty tables, then inserted randomly generated data (after calling the <em><strong>dbms_random.seed(0)</strong></em> function to make the data reproducible). Then I&#8217;ve gathered stats, knowing that there will be 22 distinct values in <em><strong>t2</strong></em> so forcing a height-balanced histogram of 20 buckets to appear.</p> <p>When we try to calculate the join cardinality we&#8217;re going to need various details from the histogram information, such as bucket sizes, number of distinct values, and so on, so in the next few queries to display the histogram information I&#8217;ve captured a few values into SQL*Plus variables. 
Here&#8217;s the basic information about the histograms on the join columns <em><strong>t1.j1</strong></em> and <em><strong>t2.j2</strong></em>:</p> <pre class="brush: plain; title: ; notranslate"> column num_distinct new_value m_t2_distinct column num_rows new_value m_t2_rows column num_buckets new_value m_t2_buckets column bucket_size new_value m_t2_bucket_size select table_name, column_name, histogram, num_distinct, num_buckets, density from user_tab_cols where table_name in ('T1','T2') and column_name in ('J1','J2') order by table_name ; select table_name, num_rows, decode(table_name, 'T2', num_rows/&amp;m_t2_buckets, null) bucket_size from user_tables where table_name in ('T1','T2') order by table_name ; column table_name format a3 heading &quot;Tab&quot; break on table_name skip 1 on report skip 1 with f1 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count, endpoint_number from user_tab_histograms where table_name = 'T1' and column_name = 'J1' ), f2 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count, endpoint_number from user_tab_histograms where table_name = 'T2' and column_name = 'J2' ) select f1.* from f1 union all select f2.* from f2 order by 1,2 ; Tab COLUMN_NAME HISTOGRAM NUM_DISTINCT NUM_BUCKETS DENSITY -------------------- -------------------- --------------- ------------ ----------- ---------- T1 J1 FREQUENCY 10 10 .005 T2 J2 HEIGHT BALANCED 22 20 .052652266 Tab NUM_ROWS BUCKET_SIZE -------------------- ---------- ----------- T1 100 T2 800 40 Tab VALUE ROW_OR_BUCKET_COUNT ENDPOINT_NUMBER --- ---------- ------------------- --------------- T1 2 5 5 5 15 20 7 15 35 10 17 52 12 13 65 15 13 78 17 11 89 20 7 96 22 3 99 25 1 100 T2 1 0 0 14 1 1 17 1 2 18 1 3 19 1 4 20 1 5 21 2 7 22 1 8 23 1 9 24 2 11 25 2 13 26 3 16 27 2 18 28 2 20 </pre> <p>As you can see, there is a
<em><strong>frequency</strong></em> histogram on <em><strong>t1</strong> </em>reporting a cumulative total of 100 rows; and the histogram on <em><strong>t2</strong> </em>is a <em><strong>height-balanced</strong></em> histogram of 20 buckets, showing 21, 24, 25, 26, 27 and 28 as popular values with 2, 2, 2, 3, 2 and 2 endpoints (i.e. buckets) respectively. You&#8217;ll also note that the <em><strong>t2</strong></em> histogram includes a row for bucket 0, showing us the minimum value in the column and letting us know that bucket 1 is not exclusively full of the value 14. (If 14 had been the minimum value for the column as well as an end point Oracle would not have created a bucket 0 &#8211; that may be a little detail that isn&#8217;t well-known &#8211; and will be the subject of a little follow-up blog note.)</p> <p>Let&#8217;s modify the code to join the two sets of histogram data on data value &#8211; using a full outer join so we don&#8217;t lose any data but restricting ourselves to values where the histograms overlap.
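</p>

<p>As a side check (mine, not part of the original post), the <em>lag()</em> arithmetic above is easy to reproduce outside the database. This sketch rebuilds the <em><strong>t2</strong></em> bucket counts from the (value, endpoint_number) pairs reported above and flags the popular values:</p>

```python
# Reproduce "endpoint_number - lag(endpoint_number, 1, 0)" for the t2.j2
# histogram output shown earlier, then pick out the popular values.
t2_hist = [(1, 0), (14, 1), (17, 2), (18, 3), (19, 4), (20, 5), (21, 7),
           (22, 8), (23, 9), (24, 11), (25, 13), (26, 16), (27, 18), (28, 20)]

prev = 0
bucket_counts = {}
for value, endpoint in t2_hist:
    bucket_counts[value] = endpoint - prev  # buckets ending at this value
    prev = endpoint

# A value counts as popular when it closes more than one bucket.
popular = {v: n for v, n in bucket_counts.items() if n > 1}
print(popular)  # {21: 2, 24: 2, 25: 2, 26: 3, 27: 2, 28: 2}
```

<p>The six popular values cover 13 of the 20 buckets, which matches the figures quoted above.</p>

<p>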
We&#8217;re going to follow the idea we&#8217;ve developed in earlier postings and multiply frequencies together to derive a join frequency, so we&#8217;ll start with a simple full outer join and assume that when we find a real match value the height-balanced (<em><strong>t2</strong></em>) buckets with a bucket count of 2 or greater represent completely full buckets and identify popular values.</p> <p>I&#8217;ve also included in this query (because it had a convenient full outer join) a column selection that counts how many rows there are in <em><strong>t1</strong></em> with values that fall inside the range of the <em><strong>t2</strong></em> histogram but <strong>don&#8217;t</strong> match a popular value in <em><strong>t2</strong></em>.</p> <pre class="brush: plain; title: ; notranslate"> column unmatch_ct new_value m_unmatch_ct column product format 999,999.99 break on report skip 1 compute sum of product on report with f1 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency, endpoint_number from user_tab_histograms where table_name = 'T1' and column_name = 'J1' ), f2 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency, endpoint_number from user_tab_histograms where table_name = 'T2' and column_name = 'J2' ), join1 as ( select f1.value t1_value, f2.value t2_value, f1.frequency t1_frequency, f2.frequency t2_frequency, sum( case when f2.frequency &gt; 1 and f1.frequency is not null then 0 else f1.frequency end ) over() unmatch_ct, f2.frequency * &amp;m_t2_bucket_size * case when f2.frequency &gt; 1 and f1.frequency is not null then f1.frequency end product from f1 full outer join f2 on f2.value = f1.value where coalesce(f1.value, f2.value) between 2 and 25 -- coalesce(f1.value, f2.value) between &amp;m_low and &amp;m_high order by coalesce(f1.value, f2.value) ) select * from join1 ;
T1_VALUE T2_VALUE T1_FREQUENCY T2_FREQUENCY UNMATCH_CT PRODUCT ---------- ---------- ------------ ------------ ---------- ----------- 2 5 99 5 15 99 7 15 99 10 17 99 12 13 99 14 1 99 15 13 99 17 17 11 1 99 18 1 99 19 1 99 20 20 7 1 99 21 2 99 22 22 3 1 99 23 1 99 24 2 99 25 25 1 2 99 80.00 ----------- sum 80.00 </pre> <p>We captured the bucket size (<em><strong>&amp;m_t2_bucket_size</strong></em>) for the <em><strong>t2</strong></em> histogram as 40 in the earlier SQL, and we can see now that in the overlap range (which I&#8217;ve hard coded as 2 &#8211; 25) we have three values flagged as popular, but only one of them corresponds to a value in the frequency histogram on <em><strong>t1</strong></em>, so the <em><strong>Product</strong></em> column shows a value of 1 * 2 * 40 = 80. Unfortunately this is a long way off the prediction that the optimizer is going to make for the simple join. (Eventually we&#8217;ll see it&#8217;s 1,893, so we have a lot more rows to estimate for.)</p> <p>Our code so far only accounts for items that are popular in both tables. Previous experience tells us that when a popular value exists only at one end of the join predicate we need to derive a contribution to the total prediction through an <em>&#8220;average selectivity&#8221;</em> calculated for the other end of the join predicate. For frequency histograms we&#8217;ve seen that <em>&#8220;half the number of the least frequently occurring value&#8221;</em> seems to be the appropriate frequency estimate, and if we add that in we&#8217;ll get two more contributions to the total from the values 21 and 24 which appear in the height-balanced (<em><strong>t2</strong></em>) histogram as popular but don&#8217;t appear in the frequency (<em><strong>t1</strong></em>) histogram.
Since the lowest frequency in <em><strong>t1</strong></em> is 1 this would give us two contributions of 0.5 * 2 (buckets) * 40 (bucket size), viz: two contributions of 40, bringing our total to 160 &#8211; still a serious shortfall from Oracle&#8217;s prediction. So we need to work out how Oracle generates an <em>&#8220;average frequency&#8221;</em> for the non-popular values of <em><strong>t2</strong></em> and then apply it to the 99 rows in <em><strong>t1</strong></em> that haven&#8217;t yet been accounted for in the output above.</p> <p>To calculate the <em>&#8220;average selectivity&#8221;</em> of a non-popular row in <em><strong>t2</strong></em> I need a few numbers (some of which I&#8217;ve already acquired above): the total number of rows in the table (NR), the number of distinct values (NDV), and the number of popular values (NPV), from which we can derive the number of distinct non-popular values and the number of rows for the non-popular values. The model that Oracle uses to derive these numbers is simply to assume that a value is popular if its frequency in the histogram is greater than one and the number of rows for that value is <em>&#8220;frequency * bucket size&#8221;</em>.</p> <p>The first query we ran against the <em><strong>t2</strong></em> histogram showed 6 popular values, accounting for 13 buckets of 40 rows each. We reported 22 distinct values for the column and 800 rows for the table so the optimizer assumes the non-popular values account for (22 &#8211; 6) = 16 distinct values and (800 &#8211; 13 * 40) = 280 rows. So the selectivity of non-popular values is (280/800) * (1/16) = 0.021875. This needs to be multiplied by the 99 rows in <em><strong>t1</strong></em> which don&#8217;t match a popular value in <em><strong>t2</strong></em> &#8211; so we now need to write some SQL to derive that number.</p> <p>We could enhance our earlier full outer join and slot 0.5, 99, and 0.021875 into it as &#8220;magic&#8221; constants.
Rather than do that, though, I&#8217;m going to write a couple of messy queries to derive the values (and the low/high range we&#8217;re interested in) so that I will be able to tweak the data later on and see if the formula still produces the right answer.</p> <pre class="brush: plain; title: ; notranslate"> column range_low new_value m_low column range_high new_value m_high column avg_t1_freq new_value m_avg_t1_freq column new_density new_value m_avg_t2_dens with f1 as ( select endpoint_value ep_val, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency from user_tab_histograms where table_name = 'T1' and column_name = 'J1' ), f2 as ( select endpoint_value ep_val, endpoint_number ep_num, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency from user_tab_histograms where table_name = 'T2' and column_name = 'J2' ) select max(min_v) range_low, min(max_v) range_high, min(min_f)/2 avg_t1_freq, max(new_density) new_density from ( select min(ep_val) min_v, max(ep_val) max_v, min(frequency) min_f, to_number(null) new_density from f1 union all select min(ep_val) min_v, max(ep_val) max_v, null min_f, (max(ep_num) - sum(case when frequency &gt; 1 then frequency end)) / ( max(ep_num) * (&amp;m_t2_distinct - count(case when frequency &gt; 1 then 1 end)) ) new_density from f2 ) ; RANGE_LOW RANGE_HIGH AVG_T1_FREQ NEW_DENSITY ---------- ---------- ----------- ----------- 2 25 .5 .021875 </pre> <p>This query finds the overlap by querying the two histograms and reporting the lower high value and the higher low value. It also reports the minimum frequency from the frequency histogram divided by 2, and divides the number of non-popular rows by the total number of rows and then by the number of distinct non-popular values. (Note that I&#8217;ve picked up the number of distinct values in <em><strong>t2.j2</strong></em> as a substitution variable generated by one of my earlier queries.)
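</p>

<p>Before using those values in the final query, the whole estimate can be sanity-checked with a few lines of arithmetic. This is my sketch of the model described in this post, not Oracle internals:</p>

```python
# Check the join cardinality model with plain arithmetic, using the
# figures derived above for t1.j1 (frequency) and t2.j2 (height-balanced).
num_rows_t2 = 800
ndv_t2 = 22
bucket_size = 40        # 800 rows / 20 buckets
popular_buckets = 13    # buckets covered by the 6 popular values in t2
popular_values = 6
min_freq_t1 = 1         # lowest frequency in the t1 histogram
unmatched_t1 = 99       # t1 rows not matching a popular t2 value

# "average selectivity" of a non-popular t2 value (the NEW_DENSITY above)
non_popular_rows = num_rows_t2 - popular_buckets * bucket_size       # 280
new_density = (non_popular_rows / num_rows_t2) / (ndv_t2 - popular_values)

both_popular = 1 * 2 * bucket_size                          # value 25: 80
t2_only_popular = 2 * (min_freq_t1 / 2) * 2 * bucket_size   # 21 and 24: 80
non_popular = unmatched_t1 * num_rows_t2 * new_density      # 99 * 17.5

estimate = both_popular + t2_only_popular + non_popular
print(new_density, estimate)  # 0.021875 1892.5
```

<p>The total of 1,892.5 agrees with the sum the full query produces, and (rounded to 1,893) with the optimizer&#8217;s estimate.</p>

<p>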
In my full script this messy piece of code runs before the query I showed earlier that told us how well (or badly) the two histograms matched.</p> <p>Finally we can use the various values we&#8217;ve picked up in a slightly more complex version of the full outer join &#8211; with a special row added through a <em><strong>union all</strong></em> to give us the estimate:</p> <pre class="brush: plain; title: ; notranslate"> break on report skip 1 compute sum of product on report with f1 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency, endpoint_number from user_tab_histograms where table_name = 'T1' and column_name = 'J1' ), f2 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) frequency, endpoint_number from user_tab_histograms where table_name = 'T2' and column_name = 'J2' ), join1 as ( select f1.value t1_value, f2.value t2_value, f1.frequency t1_frequency, f2.frequency t2_frequency, f2.frequency * case when f2.frequency &gt; 1 and f1.frequency is not null then f1.frequency when f2.frequency &gt; 1 and f1.frequency is null then &amp;m_avg_t1_freq end * &amp;m_t2_bucket_size product from f1 full outer join f2 on f2.value = f1.value where coalesce(f1.value, f2.value) between &amp;m_low and &amp;m_high order by coalesce(f1.value, f2.value) ) select * from join1 union all select null, &amp;m_avg_t2_dens, &amp;m_unmatch_ct, &amp;m_t2_rows * &amp;m_avg_t2_dens, &amp;m_t2_rows * &amp;m_avg_t2_dens * &amp;m_unmatch_ct from dual ; T1_VALUE T2_VALUE T1_FREQUENCY T2_FREQUENCY PRODUCT ---------- ---------- ------------ ------------ ----------- 2 5 5 15 7 15 10 17 12 13 14 1 15 13 17 17 11 1 18 1 19 1 20 20 7 1 21 2 40.00 22 22 3 1 23 1 24 2 40.00 25 25 1 2 80.00 .021875 99 17.5 1,732.50 ----------- sum 1,892.50 </pre> <p>It remains only to check what the optimizer thinks the cardinality will be on a simple join,
and then modify the data slightly to see if the string of queries continues to produce the right result. Here&#8217;s a starting test:</p> <pre class="brush: plain; title: ; notranslate"> set serveroutput off alter session set statistics_level = all; alter session set events '10053 trace name context forever'; alter session set tracefile_identifier='BASELINE'; select count(*) from t1, t2 where t1.j1 = t2.j2 ; select * from table(dbms_xplan.display_cursor(null,null,'allstats last')); alter session set statistics_level = typical; alter session set events '10053 trace name context off'; COUNT(*) ---------- 1327 PLAN_TABLE_OUTPUT ------------------------------------------------------------------------------------------------------------------------------------ SQL_ID f8wj7karu0hhs, child number 0 ------------------------------------- select count(*) from t1, t2 where t1.j1 = t2.j2 Plan hash value: 906334482 ----------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | ----------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 41 | | | | | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 41 | | | | |* 2 | HASH JOIN | | 1 | 1893 | 1327 |00:00:00.01 | 41 | 2545K| 2545K| 1367K (0)| | 3 | TABLE ACCESS FULL| T1 | 1 | 100 | 100 |00:00:00.01 | 7 | | | | | 4 | TABLE ACCESS FULL| T2 | 1 | 800 | 800 |00:00:00.01 | 7 | | | | ----------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access(&quot;T1&quot;.&quot;J1&quot;=&quot;T2&quot;.&quot;J2&quot;) </pre> <p>The <em><strong>E-rows</strong></em> for the hash join operation reports 1893 &#8211; and a quick check of the 
10053 trace file shows that this is 1892.500000 rounded &#8211; a perfect match for the result from my query. I&#8217;ve modified the data in various ways (notably updating the <em><strong>t1</strong></em> table to change the value 25 (i.e. the current maximum value of <em><strong>j1</strong></em>) to other, lower, values) and the algorithm in the script seems to be sound &#8211; for 12c and 18c. I won&#8217;t be surprised, however, if someone comes up with a data pattern where the wrong estimate appears.</p> <h3>Don&#8217;t look back</h3> <p>Upgrades are a pain. With the same data set and same statistics on 11.2.0.4, running the same join query between <em><strong>t1</strong></em> and <em><strong>t2</strong></em>, here&#8217;s the execution plan I got:</p> <pre class="brush: plain; title: ; notranslate"> SQL_ID f8wj7karu0hhs, child number 0 ------------------------------------- select count(*) from t1, t2 where t1.j1 = t2.j2 Plan hash value: 906334482 ----------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | ----------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 12 | | | | | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 12 | | | | |* 2 | HASH JOIN | | 1 | 1855 | 1327 |00:00:00.01 | 12 | 2440K| 2440K| 1357K (0)| | 3 | TABLE ACCESS FULL| T1 | 1 | 100 | 100 |00:00:00.01 | 6 | | | | | 4 | TABLE ACCESS FULL| T2 | 1 | 800 | 800 |00:00:00.01 | 6 | | | | ----------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access(&quot;T1&quot;.&quot;J1&quot;=&quot;T2&quot;.&quot;J2&quot;) </pre> <p>Notice that the <em><strong>E-rows</strong></em> value is 
different. The join cardinality algorithm seems to have changed in the upgrade from 11.2.0.4 to 12c. I haven&#8217;t quite figured out how to get to the 11g result, but I seem to get quite close <span style="text-decoration:underline;"><strong>most of the time</strong></span> by making a simple change to the final query that I used to predict the optimizer&#8217;s estimate. In the <em><strong>case</strong></em> expression that chooses between the actual <em><strong>t1.j1</strong></em> frequency and the <em>&#8220;average frequency&#8221;</em> don&#8217;t choose, just use the latter:</p> <pre class="brush: plain; title: ; notranslate"> case when f2.frequency &gt; 1 and f1.frequency is not null -- then f1.frequency -- 12c then &amp;m_avg_t1_freq -- 11g when f2.frequency &gt; 1 and f1.frequency is null then &amp;m_avg_t1_freq end * </pre> <p>As I modified the <em><strong>t1</strong></em> row with the value 25 to hold other values this change kept producing results that were exactly 2, 2.5, or 3.0 different from the execution plan <em><strong>E-Rows</strong></em> &#8211; except in one case where the error was exactly 15.5 (which looks suspiciously like 17.5: the <em>&#8220;average frequency in t2&#8221;</em> minus 2). I&#8217;m not keen to spend time trying to work out exactly what&#8217;s going on but the takeaway from this change is that anyone upgrading from 11g to 12c may find that some of their queries change plans because they happen to match the type of example I&#8217;ve been working with in this post.</p> <p>In some email I exchanged with Chinar Aliyev, he suggested three fix-controls that might be relevant. I&#8217;ve added these to <a href="https://jonathanlewis.wordpress.com/2018/10/28/upgrades-again-2/"><em><strong>an earlier posting</strong></em></a> I did when I first hit the anomaly a few days ago but I&#8217;ll repeat them here. 
I will be testing their effects at some point in the not too distant future:</p> <pre class="brush: plain; title: ; notranslate"> 14033181 1 QKSFM_CARDINALITY_14033181 correct ndv for non-popular values in join cardinality comp. (12.1.0.1) 19230097 1 QKSFM_CARDINALITY_19230097 correct join card when popular value compared to non popular (12.2.0.1) 22159570 1 QKSFM_CARDINALITY_22159570 correct non-popular region cardinality for hybrid histogram (12.2.0.1) </pre> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19081 Thu Nov 01 2018 09:34:00 GMT-0400 (EDT) Becoming an Oracle ACE http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/kIh5PrSKMak/ <p><img class="wp-image-8561 alignleft" src="https://oracle-base.com/blog/wp-content/uploads/2018/10/O-ACEDirector-rgb.png" alt="" width="200" height="69" />I got asked about this a few times at OpenWorld 2018, so I figured it was about time to visit this subject&#8230; Again&#8230;</p> <p>I&#8217;m not saying becoming an ACE should be your motivation for contributing to the community, but it is for some people, and who am I to judge. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Remember, this is just my opinion! Someone from the ACE program might jump in and tell me I&#8217;m wrong. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <h2>What do I have to do to become an ACE?</h2> <p>It&#8217;s explained <a href="https://www.oracle.com/technetwork/community/oracle-ace/become-an-ace/index.html">here</a>, if you follow the links.
In the past it used to be a bit more &#8220;fluid&#8221;. There are still a lot of different types of things that can count towards your &#8220;community contributions&#8221; with various weightings, but most of the points come from technical content creation and presenting.</p> <p>If you follow the links provided you can fill in the score card and see if what you currently do adds up to a &#8220;reasonable&#8221; number of points. I&#8217;m not sure if they tell you how many points you need up front, and I&#8217;m not going to talk about specifics, but you may be unpleasantly surprised by how few points some contributions get.</p> <h2>Does Oracle User Group work count?</h2> <p>The program was born out of online content. The old timers reading this will remember a time when any user group work, like being on the board, organising conferences and conference volunteering, counted for zero. It was not considered as part of your contribution where the ACE program was concerned. Later on it was given a little credit. Now, if you do everything possible with regards to a user group, you can get about half way to qualifying for the ACE program without producing any content. That still means you have to pick up about half of the points from presenting and producing technical content. User group work alone will not get you there.</p> <p>There are a lot of people who do loads of work for their local user groups. In addition, some write lots of blog posts to promote events. Some are super active on social media to promote events. No matter how much of that you do, from what I can see you qualify for *about* half the points needed to become an ACE. Assuming my calculations are correct, that&#8217;s really important, because there are probably some people that think they should be an ACE, and believe they more than qualify, but in fact don&#8217;t.
You can question the *current judging criteria*, but as it stands, that&#8217;s the way it is.</p> <p>I happen to think this is correct because it&#8217;s relatively easy to reach a very wide audience with technical content. In comparison most user groups have a very limited audience. They both have value, but from a &#8220;product evangelism&#8221; perspective, I think the focus on reach makes sense. Once again, just my opinion. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <h2>Does Twitter (and other social media) count?</h2> <p>No, not really. Technically it does, as you can get 5 points for being super-on-message with your tweets all year. I don&#8217;t even attempt to count and submit tweets, because what&#8217;s the point? I can get the same amount of points for one technical post. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>If you are using social media to push out your own original content, that&#8217;s great. You will get credit for your original content, not the social media posts linking to it. If you are just being &#8220;active&#8221; on social media, or tweeting out other people&#8217;s content, you are not doing something that will earn a lot of points. You are providing a service by introducing people to content they might otherwise have missed, but you will not get a lot of points for it, which means you will not qualify for the ACE program.</p> <p>Going back to the previous point, it&#8217;s mostly about creating original technical content, not curating other people&#8217;s content. 
Some people will feel like they are super active and will feel hard done by if they are not included in the program, but on the *current judging criteria* they should not be included.</p> <h2>What should I spend my time on then?</h2> <p>In my opinion, your time would be best spent on the creation of original technical content.</p> <ul> <li>Technical blog posts and articles. Notice the word technical. Blogging random crap doesn&#8217;t count, which is why most of my blog posts don&#8217;t go on to my score card. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></li> <li>Presenting at conferences and meetups.</li> <li>Videos, webinars and podcasts, but the rules for inclusion mean if you do the 2-3 minute technical videos on YouTube, like I used to, they are not going to count, unless you batch them together into playlists and submit as a single video.</li> <li>Technical books. They get a lot of points, but take a crazy amount of time.</li> </ul> <p>As mentioned, you will get points for other things too, but they are either inefficient, or will not get you &#8220;all the way&#8221;. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>You get more points for content related to Oracle Cloud. When this was introduced the points difference between regular and Oracle Cloud content was significant and people freaked out. The difference is much smaller now and I don&#8217;t think it&#8217;s significant. You should be able to make the points easily without doing any cloud content.</p> <h2>But I don&#8217;t want to do that!</h2> <p>That&#8217;s cool. Do whatever you feel comfortable with, even if that&#8217;s nothing. Being an Oracle ACE is not a certification of greatness or a badge of approval. If you love doing this stuff, you get nominated and become an ACE that&#8217;s great. 
If you don&#8217;t enjoy creating technical content or presenting, it doesn&#8217;t mean you are worse than those that do. Do what you want to do!</p> <h2>I am awesome, but I don&#8217;t write/present much!</h2> <p>Remember, this is not a certification. It&#8217;s not a measure of how good you are. On countless occasions I&#8217;ve read people bleating on about how person X should be an Oracle ACE because they are great, even though they do almost nothing that qualifies for inclusion. It&#8217;s about community contribution. If you are great, but you are not out there, you shouldn&#8217;t be part of the program.</p> <p>If you only write a handful of posts a year, even if they are great, you shouldn&#8217;t be part of the program because you&#8217;re not meeting the criteria.</p> <p>There is a specific set of criteria for entry to, and continued participation in, the program. Do you live up to them? If yes, you should be part of it. If not, you shouldn&#8217;t.</p> <p>That&#8217;s not to say you have to agree with the *current judging criteria*, but they exist. That is how your contribution is judged.</p> <h2>Conclusion</h2> <p>Don&#8217;t project onto the program what you want it to be. It is what it is.</p> <p>Check out the criteria, rather than making up what you think the criteria should be. They do change over time.</p> <p>Don&#8217;t listen to other people&#8217;s interpretation of what counts, even mine. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <h2>Related Posts</h2> <p>As I mentioned at the start of the post, I&#8217;ve written about the ACE program a lot over the years, and covered some of these points also.
I&#8217;ve listed a few of those posts below.</p> <ul> <li><a href="/blog/2007/09/04/oracle-ace-program/" rel="bookmark">Oracle ACE Program…</a></li> <li><a href="/blog/2012/02/22/ace-director-program-updates-my-thoughts/" rel="bookmark">ACE Director Program Updates: My thoughts…</a></li> <li><a href="/blog/2013/02/02/an-inside-look-at-the-oracle-ace-program/" rel="bookmark">An inside look at the Oracle ACE program</a></li> <li><a href="/blog/2013/04/02/should-you-aim-to-become-an-oracle-ace/" rel="bookmark">Should you aim to become an Oracle ACE?</a></li> <li><a href="/blog/2014/10/22/oracle-ace-oracles-bitch/" rel="bookmark">Oracle ACE = Oracle’s Bitch?</a></li> <li><a href="/blog/2014/10/24/oracle-ace-program-follow-up/" rel="bookmark">Oracle ACE Program: Follow Up</a></li> <li><a href="/blog/2015/11/02/twitter-is-it-a-valuable-community-contribution/">Twitter : Is it a valuable community contribution?</a></li> <li><a href="/blog/2015/11/03/twitter-is-it-a-valuable-community-contribution-follow-up/" rel="bookmark">Twitter : Is it a valuable community contribution? (Follow Up)</a></li> <li><a href="/blog/2016/11/15/the-oracle-ace-program-some-opinions/" rel="bookmark">Oracle ACE Program : Some more opinions</a></li> <li><a href="/blog/2018/09/13/community-participation-is-not-cheap/">Community participation is not cheap!</a></li> <li><a href="/blog/2018/08/19/technology-evangelist-programs-wont-suit-everybody/" rel="bookmark">Technology evangelist programs won’t suit everybody…</a></li> <li><a href="/blog/2018/10/11/odc-appreciation-day-effective-evangelism-staying-positive/" rel="bookmark">ODC Appreciation Day : Effective Evangelism – Staying Positive</a></li> </ul> <p>Cheers</p> <p>Tim&#8230;</p> <p>PS. If I&#8217;m factually incorrect, I will gladly make corrections. Differences of opinion may be a little harder to sway me on. 
<img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/11/01/becoming-an-oracle-ace/">Becoming an Oracle ACE</a> was first posted on November 1, 2018 at 9:54 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/kIh5PrSKMak" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8621 Thu Nov 01 2018 04:54:06 GMT-0400 (EDT) Oracle APEX: the low-code and low-cost application middle tier https://technology.amis.nl/2018/11/01/oracle-apex-the-low-code-and-low-cost-application-middle-tier/ <p>Oracle APEX is a low code application development framework. It can be used free of charge &#8211; either as part of an existing Oracle Database license or running in the free Oracle Database 18c XE product. An Oracle APEX application should be considered a three-tier application &#8211; consisting of a client tier (the browser), the middle tier (the APEX application engine) and the data tier (back end databases and REST APIs on top of various systems and data stores).</p> <p>A way to visualize the multi-tier architecture for the most common approach to APEX applications is the following:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb.png" alt="image" width="799" height="338" border="0" /></a></p> <p>The application is used by the end user in a browser. 
From that browser, HTTP requests are made to the ORDS (Oracle REST Data Services) listener &#8211; running on a web server in the DMZ. Requests for the APEX application engine are passed to the PL/SQL packages that make up this engine. The APEX application engine runs in an Oracle Database &#8211; which could well be the lightweight and free Oracle Database Express Edition.</p> <p>Whenever data must be retrieved in order to handle a request &#8211; or data should be manipulated as a result of a user action &#8211; the APEX middle tier will reach out to the backend system. This will frequently be an Oracle Database &#8211; but it does not have to be.</p> <p>The next figure makes the role of the Oracle Database a little bit clearer. It is very important to realize the (logical) decoupling between the APEX application engine and the Oracle Database that contains the business data presented and manipulated in the APEX application. In most APEX applications, the Oracle Database will appear in three locations. It could be the same Oracle Database instance in all three cases &#8211; but it does not have to be.</p> <p>ORDS is a REST server with its own metadata defining &#8220;modules&#8221; and &#8220;templates&#8221;.  The metadata for ORDS can be stored in any Oracle database (in an ORDS schema), so it could be the local database serving APEX or another database.  ORDS may at some point be able to work without a database repo and just use XML files for a highly optimized runtime, with no metadata to look up (this is not currently a supported feature &#8211; although the capability exists for internal use at Oracle, for example in SQL Developer Web &#8211; but it is on the roadmap).</p> <p>The APEX application engine has its local database &#8211; an Oracle Database instance &#8211; that contains the metadata that describes the application itself (pages, fields, navigation, validation logic, and more).
It also holds the relevant session data: APEX is a stateless engine; the user session state is held partly in the client and partly in the APEX database. Note: the size of this session state is very small &#8211; typically just a few KB. The APEX database can also be used as data cache &#8211; to retain local, quickly accessible, read only copies of data retrieved from remote sources.</p> <p>The APEX application can reach out to business data in the local database in which it is running itself as well as to a PDB co-located within the same container database (in a shared multi-tenant instance, you would use database link syntax &#8211; with schema.objectname@PDB_LINK &#8211; and when the PDBs for APEX and the business data are in the same root, the Oracle Database can transparently re-write queries expressed in Database Link syntax to use SQL that is effectively local). It can also access business data in a different Oracle Database across a database link or through an ORDS instance that exposes HTTP access to packages and tables in this other database instance.</p> <p>&nbsp;</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-1.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-1.png" alt="image" width="957" height="453" border="0" /></a></p> <p>&nbsp;</p> <p>Mike Hichwa, VP Software Development at Oracle and responsible, among other products, for APEX, has shared many insights with me, some of which I have paraphrased here:</p> <blockquote><p>&#8220;Accessing data in a PDB from APEX in a different PDB (with both PDBs in the same CDB) can be done today via database link syntax.  The read access is optimized but updates do perform distributed transactions.  The intra-PDB access is being improved and optimized. You may or will be able to use simplified syntax (e.g.
not database links) and updates without two phase commit / distributed transactions. Database links and gateways (for non-Oracle databases) can be used, but would not be recommended for applications with large numbers of end users (so personally I like database links / gateways for ETL but not for general web apps)&#8221;</p></blockquote> <p>The idea that APEX is only suitable for low code application development <em>on top of an Oracle Database</em> is no longer correct. Through database links and gateways, APEX has long been able to run against business data in different databases. With the Oracle Database capabilities to call out to Web Services (SOAP and REST, for example using the UTL_HTTP package) it was also quite possible &#8211; albeit a bit cumbersome &#8211; to create an APEX application on external data sources. With the recent APEX feature called Web Source Module it has become largely a declarative &#8211; low code! &#8211; action to retrieve data from or manipulate data through a remote REST API.</p> <p>With Web Source Modules, any REST API becomes a viable data source for APEX applications. Low code application development against virtually any backend becomes a reality. This is depicted in the next figure. Here you see how the APEX application shows and potentially creates and updates data from various types of databases (through REST APIs or possibly through heterogeneous gateways), microservices and serverless functions as well as from Oracle SaaS applications.</p> <p>&nbsp;</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-2.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-2.png" alt="image" width="935" height="511" border="0" /></a></p> <p>&nbsp;</p> <p>Note: ORDS is expected to have support for MySQL sometime in 2019.
That would enable quick, declarative exposure of tables in MySQL through a generated REST API. ORDS also has support for the TimesTen in-memory database on its 2019 roadmap. For other databases &#8211; SQL or NoSQL &#8211; a REST API has to be developed. For this, several tools and frameworks are available and of course implementing REST APIs is quite straightforward these days.</p> <h3>APEX is not only a Low Code Application Middle Tier &#8211; it is also a Low Cost Application Middle Tier</h3> <p>Low code development is attractive because of the high productivity and high quality (robustness) that can be achieved with a relatively low investment in technological skills and knowledge. A quick time to market can be realized. All of this applies primarily if functional requirements can be met by the out of the box capabilities of the low code framework.</p> <p>The cost of low code development is of course also determined by the cost of the tooling and run time infrastructure that is required. With APEX, this cost is extremely low. The required components for developing and running an APEX application are ORDS and APEX on an Oracle Database. ORDS can be used for free and can run on a free web server like Apache Tomcat, Jetty or GlassFish (note: The <a href="https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/18.3/aelig/installing-REST-data-services.html#GUID-5F7A8DB0-B0D2-48FF-A99B-7ABCA7DFF9DA">documentation</a> does state that &#8220;Glassfish Server support will be desupported in a future release&#8221;).
The database used for running APEX can be Oracle Database 18c XE &#8211; free as well!</p> <p>And, as discussed before, the business data can be held in various data stores &#8211; from Oracle Database (any edition including the free XE) to MySQL or other open source databases, either SQL/ACID or NoSQL/BASE.</p> <p>&nbsp;</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/11/image-3.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/11/image_thumb-3.png" alt="image" width="932" height="536" border="0" /></a></p> <p>&nbsp;</p> <h3>APEX &#8211; more than just low code</h3> <p>The term &#8220;low code&#8221; (associated with the Citizen Developer) is a catchphrase that perhaps does not stand up well to close scrutiny. The essence of software development is not coding &#8211; as in writing lines of program code. It is much more about capturing the logic associated with functional requirements &#8211; in a structured way such that a machine can execute the logic. How you instruct the machine &#8211; with low level code or high level no code &#8211; is not so relevant in my book. Low code frameworks can help speed up the process of laying down the machine instruction &#8211; and improving the quality of the instruction by providing a framework within which it is created with reusable constructs and visual representation.</p> <p>One of the challenges with low code platforms can be that the abstract high level language for describing the application behavior may not have enough expressiveness to capture all nuances stated in the business requirements. A lower level programming model may then be needed to capture the nuances and subtleties. As Joel Kallman &#8211; Director of Software Development at Oracle, responsible for APEX &#8211; states:</p> <blockquote><p>&#8220;APEX has the important ability to gracefully transition from No Code to Low Code to High Control.
Many customers can live within the &#8220;black box&#8221; of APEX and do no coding.  But everyone needs to customize, and the way you customize is with code.  With APEX, you can use a very small amount of code (snippets, as we call it) to customize your application.  It could be a snippet of PL/SQL or snippet of CSS, HTML or JavaScript.  Most low code frameworks abruptly go from no code to a high control (or &#8220;high code&#8221;) environment, with no middle ground.  Once you go into high code, you&#8217;ve lost all but the most professional of developers.  With APEX, it&#8217;s a very elegant and seamless transition. And for those who demand high control (high code), you can still exploit pre-compiled PL/SQL packages or JavaScript libraries or completely customized HTML templates &amp; themes.&#8221;</p></blockquote> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/11/01/oracle-apex-the-low-code-and-low-cost-application-middle-tier/">Oracle APEX: the low-code and low-cost application middle tier</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50301 Thu Nov 01 2018 03:51:27 GMT-0400 (EDT) How to install the Oracle 18c RPM-based database software https://gavinsoorma.com/2018/11/how-to-install-the-oracle-18c-rpm-based-database-software/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/11/how-to-install-the-oracle-18c-rpm-based-database-software/"><b>Login</b></a> to access.
</div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8358 Thu Nov 01 2018 02:39:34 GMT-0400 (EDT) Oracle 18c RPM Based Software Installation https://gavinsoorma.com/2018/11/oracle-18c-rpm-based-software-installation/ <p>One of the (many) new features in Oracle Database 18c enables us to install single-instance Oracle Database software (no support for Grid Infrastructure as yet) using an RPM package.</p> <p>So, as part of provisioning a new Linux server, the system administrator can also provide the Oracle 18c software already pre-installed and ready to be used by the DBA.</p> <p>Note that the RPM-based Oracle Database installation is not available for Standard Edition 2. Standard Edition 2 support is planned in the next release, 19c.</p> <p>The naming convention for RPM packages is <em><span class="variable">name</span>&#8211;<span class="variable">version</span>&#8211;<span class="variable">release</span>.<span class="variable">architecture</span></em>.rpm.</p> <p>Currently the RPM for 18c is: <strong>oracle-database-ee-18c-1.0-1.x86_64.rpm</strong></p> <p>So we can see that this RPM is for 18c Enterprise Edition (ee-18c), the version number (1.0), release number of the package (1) and the platform architecture (x86_64).</p> <p>To install the 18c database software we will do the following:</p> <ul> <li>Connect as root and download and install the 18c pre-installation RPM using the <strong>yum install</strong> command</li> <li>Download the 18c Oracle Database RPM-based installation software from OTN or the Oracle Software Delivery Cloud portal (eDelivery).</li> <li>Install the database software using the <strong>yum localinstall</strong> command</li> </ul> <p>Once the 18c software has been installed, we can run a script as root (<strong>/etc/init.d/oracledb_ORCLCDB-18c configure</strong>) which will automatically create a Container Database (ORCLCDB) with a Pluggable Database (ORCLPDB1) as well as configure and start the listener!</p>
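<p>Put together, the steps above boil down to a short command sequence. This is a sketch only: it assumes Oracle Linux 7 with the standard 18c preinstall package available via yum, and that the database RPM quoted above has already been downloaded manually to the current directory (the download itself cannot be scripted, as it requires accepting the licence on OTN or eDelivery):</p>

```shell
# Step 1: install the 18c pre-installation RPM as root
# (configures kernel parameters, installs required packages, creates the oracle user)
yum -y install oracle-database-preinstall-18c

# Step 2: install the previously downloaded database software RPM
yum -y localinstall oracle-database-ee-18c-1.0-1.x86_64.rpm

# Step 3: create the ORCLCDB container database with the ORCLPDB1 pluggable
# database, and configure and start the listener
/etc/init.d/oracledb_ORCLCDB-18c configure
```

<p>If the defaults (SID, ports, file locations) need changing, the RPM ships a configuration file that can be edited before running the configure script.</p>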
<p>&nbsp;</p> <p><a href="https://gavinsoorma.com/2018/11/how-to-install-the-oracle-18c-rpm-based-database-software/">How to perform an RPM-based Oracle 18c Software Installation and execute the oracledb_ORCLCDB-18c configure script (Members Only)</a></p> Gavin Soorma https://gavinsoorma.com/?p=8350 Thu Nov 01 2018 02:12:54 GMT-0400 (EDT) Running Reactive Spring Boot on GraalVM in Docker https://technology.amis.nl/2018/11/01/running-reactive-spring-boot-on-graalvm-in-docker/ <p><a href="https://www.graalvm.org/">GraalVM</a> is an open source polyglot VM which makes it easy to mix and match different languages such as Java, JavaScript and R. It has the ability (with some restrictions) to compile code to native executables. This of course offers great performance benefits. Recently, GraalVM Docker files and images have become available. See <a href="https://github.com/oracle/docker-images/tree/master/GraalVM/CE">here</a>.</p> <p>Since Spring Boot is a popular Java framework and reactive (non blocking) RESTful services/clients implemented in Spring Boot are also interesting to look at, I thought: let&#8217;s combine those and produce a Docker image running a reactive Spring Boot application on GraalVM.</p> <p>I&#8217;ve used and combined the following:</p> <ul> <li><a href="https://spring.io/guides/gs/reactive-rest-service/">Building a Reactive RESTful Web Service</a></li> <li><a href="https://spring.io/guides/gs/spring-boot-docker/">Spring Boot with Docker</a> and <a href="https://technology.amis.nl/2018/03/18/running-spring-boot-in-a-docker-container-on-openjdk-oracle-jdk-zulu-on-alpine-linux-oracle-linux-ubuntu/">Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu</a></li> <li><a href="https://github.com/oracle/docker-images/tree/master/GraalVM/CE">Oracle&#8217;s GraalVM Docker images</a></li> <li>(my very own) <a href="https://github.com/MaartenSmeets/provisioning/tree/master/ubuntudev">Ubuntu Development VM</a>
(requires VirtualBox, Vagrant)</li> </ul> <p>As a base I&#8217;ve used the code provided in the following Git repository here. In the &#8216;complete&#8217; folder (the end result of the tutorial) is a sample Reactive RESTful Web Service and client.</p> <p><span id="more-50288"></span></p> <h1>The reactive Spring Boot RESTful web service and client</h1> <p>When looking at the sample, you can see how you can implement a non-blocking web service and client. Basically this means you use:</p> <ul> <li>org.springframework.web.reactive.function.server.ServerRequest and ServerResponse instead of the org.springframework.web.bind.annotation.RestController</li> <li>Mono&lt;ServerResponse&gt; for the response of the web service</li> <li>for a web service client you use org.springframework.web.reactive.function.client.ClientResponse and Mono&lt;ClientResponse&gt; for getting a response</li> <li>since you won&#8217;t use the (classic blocking) RestController with the RequestMapping annotations, you need to create your own configuration class which defines routes using org.springframework.web.reactive.function.server.RouterFunctions</li> </ul> <p>Since the response is not directly a POJO, it needs to be converted into one explicitly, for example with res.bodyToMono(String.class). For more details look at <a href="https://spring.io/guides/gs/reactive-rest-service/">this tutorial</a> or browse <a href="https://github.com/spring-guides/gs-reactive-rest-service">this repository</a>.</p> <p>Personally I would have liked to have something like a ReactiveRestController and keep the rest (pun intended) the same. This would make refactoring to reactive services and clients easier.</p> <h1>GraalVM</h1> <p>GraalVM is a polyglot VM open sourced by Oracle. It has a community edition and an enterprise edition, which provides improved performance (a smaller footprint) and better security (sandboxing capabilities for native code) as indicated <a href="https://www.graalvm.org/docs/faq/">here</a>.
The community edition can be downloaded from GitHub and the enterprise edition from Oracle&#8217;s Technology Network. Support for GraalVM for Windows is currently still under development and not released yet. A challenge for Oracle with GraalVM will be to keep the polyglot systems it supports up to date version-wise. This was already a challenge with, for example, the R support in Oracle database and Node support in Application Container Cloud Service. See <a href="https://www.oracle.com/technetwork/database/database-technologies/r/r-enterprise/overview/index.html">here</a>.</p> <p>When you download GraalVM CE you&#8217;ll get GraalVM with a specific OpenJDK 8 version (for GraalVM 1.0.0-rc8 this is 1.8.0_172). When you download GraalVM EE from OTN, you&#8217;ll get Oracle JDK 8 of the same version.</p> <h2>GraalVM and LLVM</h2> <p>GraalVM supports LLVM. LLVM is a popular toolset to provide language agnostic compilation and optimization of code for specific platforms. LLVM is one of the reasons many programming languages have started popping up recently. Read more about LLVM <a href="https://www.infoworld.com/article/3247799/development-tools/what-is-llvm-the-power-behind-swift-rust-clang-and-more.html">here</a> or visit their site <a href="https://llvm.org/">here</a>. If you can compile a language into LLVM bitcode or LLVM Intermediate Representation (IR), you can run it on GraalVM (see <a href="https://www.graalvm.org/docs/reference-manual/languages/llvm/">here</a>). The LLVM bitcode is additionally optimized by GraalVM to achieve even better results.</p> <h2>GraalVM and R</h2> <p>GraalVM uses FastR, which is based on GNU-R, the reference implementation of R. This is an alternative implementation of the R language for GraalVM and thus not actual R! For example: &#8216;support for dplyr and data.table are on the way&#8217;. Read more <a href="https://github.com/oracle/fastr">here</a>.
Especially if you use exotic packages in R, I expect there to be compatibility issues. It is interesting to compare the performance of FastR on GraalVM to compiling R code to LLVM instructions and running that on GraalVM (using something like <a href="https://github.com/duncantl/RLLVMCompile">RLLVMCompile</a>). Haven&#8217;t tried that though. GraalVM seems to have momentum at the moment and I&#8217;m not so sure about RLLVMCompile.</p> <h2>Updating the JVM of GraalVM</h2> <p>You can check out the following post <a href="https://neomatrix369.wordpress.com/2018/06/11/building-wholly-graal-with-truffle/">here</a> for building GraalVM with a JDK 8 version. This refers to documentation on GitHub <a href="https://github.com/oracle/graal/blob/master/compiler/README.md">here</a>.</p> <p>&#8220;Graal depends on a JDK that supports a compatible version of JVMCI (JVM Compiler Interface). There is a JVMCI port for JDK 8 and the required JVMCI version is built into the JDK as of JDK 11 (build 20 or later).&#8221;</p> <p>I have not tried this but it thus seems relatively easy to compile GraalVM from sources with support for a different JDK.</p> <h2>GraalVM in Docker</h2> <p>Oracle has recently provided GraalVM as Docker images and put the Dockerfiles in their GitHub repository. See <a href="https://github.com/oracle/docker-images/tree/master/GraalVM/CE">here</a>. These are only available for the community edition.
Since the Dockerfiles are provided on GitHub, it is easy to make your own GraalVM EE images if you want (for example, to test with GraalVM using Oracle JDK instead of OpenJDK).</p> <p>To check out GraalVM you can run the container like:</p> <pre class="brush: plain; title: ; notranslate"> docker run -it oracle/graalvm-ce:1.0.0-rc8 bash bash-4.2# gu available Downloading: Component catalog ComponentId Version Component name ---------------------------------------------------------------- python 1.0.0-rc8 Graal.Python R 1.0.0-rc8 FastR ruby 1.0.0-rc8 TruffleRuby </pre> <h1>Spring Boot in GraalVM in Docker</h1> <p>How to run a Spring Boot application in Docker is relatively easy and described <a href="https://spring.io/guides/gs/spring-boot-docker/">here</a>. I&#8217;ve run Spring Boot applications on various VMs as well, and described the process of achieving this <a href="http://javaoraclesoa.blogspot.com/2018/03/running-spring-boot-in-docker-container.html">here</a>. As indicated above, I&#8217;ve used this <a href="https://github.com/MaartenSmeets/provisioning/tree/master/ubuntudev">Ubuntu Development VM</a>.</p> <pre class="brush: plain; title: ; notranslate"> sudo apt-get install maven git clone https://github.com/spring-guides/gs-reactive-rest-service.git cd gs-reactive-rest-service/complete </pre> <p>Now create a Dockerfile:</p> <pre class="brush: plain; title: ; notranslate"> FROM oracle/graalvm-ce:1.0.0-rc8 VOLUME /tmp ARG JAR_FILE COPY ${JAR_FILE} app.jar ENTRYPOINT [&quot;java&quot;,&quot;-Djava.security.egd=file:/dev/./urandom&quot;,&quot;-jar&quot;,&quot;/app.jar&quot;] </pre> <p>Edit the pom.xml file.</p> <p>Add to the properties tag a prefix variable:</p> <pre class="brush: plain; title: ; notranslate"> &lt;properties&gt; &lt;java.version&gt;1.8&lt;/java.version&gt; &lt;docker.image.prefix&gt;springio&lt;/docker.image.prefix&gt; &lt;/properties&gt; </pre> <p>Add a build plugin:</p> <pre class="brush: plain; title: ; notranslate"> &lt;build&gt;
&lt;plugins&gt; &lt;plugin&gt; &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt; &lt;artifactId&gt;spring-boot-maven-plugin&lt;/artifactId&gt; &lt;/plugin&gt; &lt;plugin&gt; &lt;groupId&gt;com.spotify&lt;/groupId&gt; &lt;artifactId&gt;dockerfile-maven-plugin&lt;/artifactId&gt; &lt;version&gt;1.3.6&lt;/version&gt; &lt;configuration&gt; &lt;repository&gt;${docker.image.prefix}/${project.artifactId}&lt;/repository&gt; &lt;buildArgs&gt;&lt;JAR_FILE&gt;target/${project.build.finalName}.jar&lt;/JAR_FILE&gt; &lt;/buildArgs&gt; &lt;/configuration&gt; &lt;/plugin&gt; &lt;/plugins&gt; &lt;/build&gt; </pre> <p>Now you can do:</p> <pre class="brush: plain; title: ; notranslate"> mvn clean package mvn dockerfile:build </pre> <p>And run it:</p> <pre class="brush: plain; title: ; notranslate"> docker run -p 8080:8080 -t springio/gs-reactive-rest-service:latest </pre> <p>It’s as simple as that!</p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/11/01/running-reactive-spring-boot-on-graalvm-in-docker/">Running Reactive Spring Boot on GraalVM in Docker</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Maarten Smeets https://technology.amis.nl/?p=50288 Thu Nov 01 2018 01:02:29 GMT-0400 (EDT) ODTUG October News https://www.odtug.com/p/bl/et/blogaid=835&source=1 Announcing the 2018–2019 ODTUG Leadership Program Class! ODTUG is pleased to announce its sixth ODTUG Leadership Program, a program dedicated to enhancing the leadership skills of ODTUG members.
ODTUG https://www.odtug.com/p/bl/et/blogaid=835&source=1 Wed Oct 31 2018 10:25:26 GMT-0400 (EDT) Oracle 18c Certification for Fusion Middleware 12c Release 2 http://dirknachbar.blogspot.com/2018/10/oracle-18c-certification-for-fusion.html A few days ago the Certification Matrix for Oracle Fusion Middleware 12c Release 2 (12.2.1.x) was updated on Oracle Technology Network; Oracle 18c (18.1 on Exadata and 18.3 on On-Premise) is now certified and supported as Target Database for RCU (Repository Creation Utility) and as Application Datasource.<br /><br />Certification Matrix for Fusion Middleware 12.2.1.2.0:&nbsp;<a href="https://www.oracle.com/technetwork/middleware/fusion-middleware/documentation/fmw-122120-certmatrix-3254735.xlsx" target="_blank">https://www.oracle.com/technetwork/middleware/fusion-middleware/documentation/fmw-122120-certmatrix-3254735.xlsx</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-HdYmffcizgA/W9mZ0Q10vuI/AAAAAAAAA3s/XwLsvcljxygi3jUC1lMD7M4kbYEpGvyoACLcBGAs/s1600/OTN_Cert12.2.1.2.0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="292" data-original-width="1600" height="112" src="https://4.bp.blogspot.com/-HdYmffcizgA/W9mZ0Q10vuI/AAAAAAAAA3s/XwLsvcljxygi3jUC1lMD7M4kbYEpGvyoACLcBGAs/s640/OTN_Cert12.2.1.2.0.png" width="640" /></a></div><br /><br />Certification Matrix for Fusion Middleware 12.2.1.3.0:&nbsp;<a href="https://www.oracle.com/technetwork/middleware/fmw-122130-certmatrix-3867828.xlsx" target="_blank">https://www.oracle.com/technetwork/middleware/fmw-122130-certmatrix-3867828.xlsx</a><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-h12xfadq8hs/W9maXD6F3CI/AAAAAAAAA30/K3jNxrIUc44RSQGr4DfYGdHXXCweo4k2gCLcBGAs/s1600/OTN_Cert12.2.1.3.0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="167"
data-original-width="1600" height="66" src="https://2.bp.blogspot.com/-h12xfadq8hs/W9maXD6F3CI/AAAAAAAAA30/K3jNxrIUc44RSQGr4DfYGdHXXCweo4k2gCLcBGAs/s640/OTN_Cert12.2.1.3.0.png" width="640" /></a></div><br />In My Oracle Support, under the Certification tab, the new certification for Oracle Database 18c (18.1 on Exadata and 18.3 on On-Premise) is not yet updated.<br /><br />See, for example, the certification for SOA Suite 12.2.1.2.0:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-fgZR1XopTh8/W9ma2i4BK9I/AAAAAAAAA38/qnhSzlKQhRAqaTQ1Lr7mJw7_xhS7it5JQCLcBGAs/s1600/SOA12.2.1.2.0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1322" height="270" src="https://3.bp.blogspot.com/-fgZR1XopTh8/W9ma2i4BK9I/AAAAAAAAA38/qnhSzlKQhRAqaTQ1Lr7mJw7_xhS7it5JQCLcBGAs/s640/SOA12.2.1.2.0.png" width="640" /></a></div><br />See, for example, the certification for SOA Suite 12.2.1.3.0:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-A2S8C9vg6S8/W9mbAUtw7JI/AAAAAAAAA4A/NZOR-BqzSrMgAJE8xu3YpXC7wwoLiYjZgCLcBGAs/s1600/SOA12.2.1.3.0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="552" data-original-width="1327" height="266" src="https://3.bp.blogspot.com/-A2S8C9vg6S8/W9mbAUtw7JI/AAAAAAAAA4A/NZOR-BqzSrMgAJE8xu3YpXC7wwoLiYjZgCLcBGAs/s640/SOA12.2.1.3.0.png" width="640" /></a></div><br />I created a Support Request with My Oracle Support for this and, according to My Oracle Support, the Certification Matrix on Oracle Technology Network is correct; Oracle 18c (18.1 on Exadata and 18.3 on On-Premise) is therefore fully supported and certified with Oracle Fusion Middleware 12c Release 2.<br /><br /><br /> Dirk Nachbar tag:blogger.com,1999:blog-4344684978957885806.post-1669107913782613098 Wed Oct 31 2018 08:11:00 GMT-0400 (EDT) Autonomous
Database : “Hand-tuning doesn’t scale” http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/zsBh3dpev2E/ <p><img class="alignleft wp-image-7779" src="https://oracle-base.com/blog/wp-content/uploads/2017/12/tux-worker-161507_640.png" alt="" width="200" height="198" />I was at a talk by <a href="https://twitter.com/christhalinger">Chris Thalinger</a> at Oracle Code One called “Performance tuning Twitter services with Graal and machine learning”. One of the things he said was, &#8220;Hand-tuning doesn&#8217;t scale&#8221;, and it brought into focus some of the things that have been going on in the Autonomous Database, which is closer to my world. <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>In my post called <a href="https://oracle-base.com/blog/2018/10/21/its-not-all-about-you/">It&#8217;s not all about you!</a> I discussed the reaction to a new feature mentioned in the ACE Director briefing. It has been spoken about publicly now, so I guess I&#8217;m allowed to mention it by name. The feature in question was Automatic Index Tuning that (insert Safe Harbour slide) might be in Oracle 19c, or in an autonomous database cloud service in the future. Once this feature was mentioned, the list of questions started to pile up, before we even knew what it was or how it was implemented. I mentioned my own reaction to this specific feature, but let&#8217;s look at this in the broader sense of autonomous services generally.</p> <p>As I mentioned, watching Chris&#8217; session brought all this into focus for me. Sorry if I&#8217;m stating the obvious, but here goes.</p> <ul> <li>Even if I were capable of doing a better job than an automatic performance tuning feature, and I&#8217;m not sure I can, that is just me. Is everyone else I work with at my level of understanding or better? Is everyone else who works with the database across the world at my level of understanding or better? 
If the answer to that is no, then there is a need for feature X, whatever it is.</li> <li>Let&#8217;s say I have a group of really skilled people that can do better than automatic feature X. Are they constantly looking at the system, trying to get the best performance possible, or are they working on hundreds or thousands of different targets, and actually spending very little time on each? As their workload grows, which it invariably will, will they be able to spend more or less time looking at each specific feature?</li> </ul> <p>I know there are some consultants that get to go in and solve specific problems on specific systems, and maybe those folks will look down on automatic performance tuning features, but I have to look after loads of disparate systems and I get 30 seconds to get something done before I have to move on. I like to think I&#8217;m pretty good at Oracle database stuff, but I need all the help I can get if I want to keep things running smoothly.</p> <p>When a new automatic feature is announced we always get super intense about it, which usually results in a lot of wailing and gnashing of teeth. Sometimes this is for very good reason, as the early incarnations of some features have been problematic, but over time they often become the norm. 
Think about the following, and what life would be like without them&#8230;</p> <ul> <li><a href="/articles/9i/memory-management-9i#AutomaticSQLExecutionMemoryManagement">Automatic PGA Management</a></li> <li><a href="/articles/10g/performance-tuning-enhancements-10g#automatic_shared_memory_management">Automatic Shared Memory Management</a></li> <li><a href="/articles/9i/automatic-segment-free-space-management">Automatic Segment Space Management</a></li> <li><a href="/articles/9i/automatic-undo-management">Automatic Undo Management</a></li> <li><a href="/articles/10g/performance-tuning-enhancements-10g#automatic_optimizer_statistics_collection">Automatic Statistics Gathering</a></li> </ul> <p>For some people reading this, they may never have experienced life without these features. Believe me, it wasn&#8217;t pretty! <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Whether it&#8217;s a specific automatic feature, like Automatic Index Tuning, or a grander vision, like the Autonomous Database family of cloud services, this is part of the natural evolution of the database. At *some point* in the future I can see all my databases running on the cloud and all of them being some form of autonomous service, regardless of which cloud provider is running them.</p> <p>Cheers</p> <p>Tim&#8230;</p> <p>PS. I hope people understand the spirit of what I&#8217;m saying, but I feel the need to include a few statements, as some people on Twitter seemed to get the wrong end of the stick.</p> <ul> <li>I&#8217;m not saying you can do a rubbish job and leave it up to an automatic tuning feature to fix your crap application. Bad software always runs badly, no matter what you do with it. 
You might be able to mask some of the problems, but you don&#8217;t fix them.</li> <li>I&#8217;m not suggesting the development process shouldn&#8217;t include proper testing, including unit, integration, UAT and performance testing. See previous point.</li> <li>The more you know about your platform, the better job you can do, even if you have automatic features to help you.</li> </ul> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/10/31/autonomous-database-hand-tuning-doesnt-scale/">Autonomous Database : &#8220;Hand-tuning doesn&#8217;t scale&#8221;</a> was first posted on October 31, 2018 at 9:22 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/zsBh3dpev2E" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8602 Wed Oct 31 2018 04:22:13 GMT-0400 (EDT) “Hidden” Efficiencies of Non-Partitioned Indexes on Partitioned Tables Part IV” (Hallo Spaceboy) https://richardfoote.wordpress.com/2018/10/31/hidden-efficiencies-of-non-partitioned-indexes-on-partitioned-tables-part-iv-hallo-spaceboy/ In Part I, Part II and Part III we looked at some advantages of Global Indexes that may not be obvious to some. One of the advantages of a Local Index vs. Non-Partitioned Global Index is that a Local Index, being a smaller index structure, may have a reduced BLEVEL in comparison.
This can save [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5687 Tue Oct 30 2018 23:17:02 GMT-0400 (EDT) Index Splits – 2 https://jonathanlewis.wordpress.com/2018/10/30/index-splits-2/ <p>In <a href="https://jonathanlewis.wordpress.com/2018/10/29/index-splits/"><em><strong>yesterday&#8217;s article</strong></em></a> I described the mechanism that Oracle uses for an index leaf block split when you try to insert a new entry into a leaf block that is already full, and I demonstrated that the <em>&#8220;50-50&#8221;</em> split and the <em>&#8220;90-10&#8221;</em> split work in the same way, namely:</p> <ul> <li>save the old block into the undo</li> <li>prepare a new leaf block</li> <li>&#8220;share&#8221; the data between the old and new leaf blocks</li> <li>sort out pointers</li> </ul> <p>The obvious question to ask about this process is:<em> &#8220;Why does Oracle save and rewrite the whole of the old leaf block during a 90-10 split when the data in the block doesn&#8217;t appear to change ?&#8221;</em> The &#8220;sharing&#8221; in the 90-10 split is most uneven, and it appears that Oracle simply attaches a new leaf block to the index structure and writes the new index entry into it, leaving the existing index entries unchanged in the current leaf block.</p> <p>The answer to that question can be found by doing block dumps &#8211; except you won&#8217;t see the answer if you use my original test data.
So here&#8217;s a follow-on script to the previous test (written 11 years after the previous script):</p> <pre class="brush: plain; title: ; notranslate">
rem
rem Script: index_splits3a.sql
rem Author: Jonathan Lewis
rem Dated: Oct 2018
rem Purpose:
rem

drop table t2 purge;

create table t2 as select * from t1 where id &lt;= 148;
alter table t2 add constraint t2_pk primary key(id, idx_pad) using index pctfree 0;

column object_id new_value m_index_id
select object_id from user_objects where object_name = 'T2_PK' and object_type = 'INDEX';

begin
    for r in (select * from t1 where id between 149 and 292 order by dbms_random.value) loop
        insert into t2 values(r.id, r.idx_pad, r.padding);
    end loop;
    commit;
end;
/

alter session set events 'immediate trace name treedump level &amp;m_index_id';
alter system flush buffer_cache;

prompt check the trace file for the last block of the index
prompt then do a block dump for it.
prompt then press return

pause

insert into t2 values(293, rpad('x',40,'x'), rpad('y',50,'y'));
commit;

alter session set events 'immediate trace name treedump level &amp;m_index_id';
alter system flush buffer_cache;
</pre> <p>This test depends on the number of rows used for the previous test &#8211; and I have four hard-coded values (148, 149, 292, 293) in it that matter. If you&#8217;ve had to use a different number of rows in your version of the first test you will need to adjust these values to match.</p> <p>I&#8217;ve created a clone of the <em><strong>t1</strong></em> table copying only the first 148 rows &#8211; this is just enough rows that when I create a unique (PK) index on the table the index will have two leaf blocks, the first holding 147 entries and the second holding one entry.
I&#8217;ve then inserted the next 144 rows from <em><strong>t1</strong></em> into <em><strong>t2</strong></em> in <span style="text-decoration:underline;"><strong>random</strong> </span>order, so that I end up with two full leaf blocks.</p> <p>Once the data is ready the code issues a <em><strong>treedump</strong></em> command (so that we can check the index is as I&#8217;ve described it) and flushes the buffer_cache, then prompts you with some instructions and waits for you to press return. At this point you need some manual intervention from <strong><span style="text-decoration:underline;">another session</span></strong> &#8211; you can examine the treedump to work out the file and block addresses of the two leaf blocks and dump the second leaf block (<em>&#8216;alter system dump datafile N block M;&#8217;</em>).</p> <p>After you&#8217;ve done the block dump press return and my code resumes and inserts a new row that will cause a 90-10 split to happen, then it does another treedump (to let you check the block addresses and see that the split was 90-10), and flushes the buffer cache again. This is where you can check the block address of the second leaf block (in case it has managed to change &#8211; which it shouldn&#8217;t) and dump the block again.</p> <p>Here, with a huge chunk removed from the middle, are the results I got from searching for the expression <em>&#8220;row#&#8221;</em> in the two block dumps that I generated in my test.</p> <pre class="brush: plain; title: ; notranslate">
Before 90/10 block split:
-------------------------
row#0[7979] flag: -------, lock: 0, len=53, data:(6): 01 40 01 84 00 03
row#1[1885] flag: -------, lock: 2, len=53, data:(6): 01 40 01 fd 00 2b
row#2[1938] flag: -------, lock: 2, len=53, data:(6): 01 40 01 fd 00 2a
row#3[5595] flag: -------, lock: 2, len=53, data:(6): 01 40 01 f9 00 2c
row#4[3581] flag: -------, lock: 2, len=53, data:(6): 01 40 01 fd 00 0b
...
row#142[1408] flag: -------, lock: 2, len=53, data:(6): 01 40 01 fd 00 34
row#143[2150] flag: -------, lock: 2, len=53, data:(6): 01 40 01 fd 00 26
row#144[878] flag: -------, lock: 2, len=53, data:(6): 01 40 01 fd 00 3e

After 90/10 block split
-----------------------
row#0[348] flag: -------, lock: 0, len=53, data:(6): 01 40 01 84 00 03
row#1[401] flag: -------, lock: 0, len=53, data:(6): 01 40 01 fd 00 2b
row#2[454] flag: -------, lock: 0, len=53, data:(6): 01 40 01 fd 00 2a
row#3[507] flag: -------, lock: 0, len=53, data:(6): 01 40 01 f9 00 2c
row#4[560] flag: -------, lock: 0, len=53, data:(6): 01 40 01 fd 00 0b
...
row#142[7873] flag: -------, lock: 0, len=53, data:(6): 01 40 01 fd 00 34
row#143[7926] flag: -------, lock: 0, len=53, data:(6): 01 40 01 fd 00 26
row#144[7979] flag: -------, lock: 0, len=53, data:(6): 01 40 01 fd 00 3e
</pre> <p>The <em>&#8220;row#&#8221;</em> is in ascending order &#8211; these lines in an index leaf block dump show Oracle walking through the block&#8217;s <a href="https://jonathanlewis.wordpress.com/2009/05/21/row-directory/"><strong><em>&#8220;row directory&#8221;</em></strong></a>; the number in square brackets following the row number is the offset into the block where the corresponding index entry will be found. When Oracle inserts an index entry into a leaf block it adjusts the row directory to make a gap in the right place so that walking the directory in <em><strong>row# order</strong></em> allows Oracle to jump around the block and find the index entries in <em><strong>key order</strong></em>.</p> <p>When Oracle rewrites the block it first sorts the index entries into key order so that the actual index entries are written into the block in key order and a range scan that moves a pointer smoothly through the row directory will be moving another pointer smoothly down the block rather than making the pointer jump all over the place.
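<p>The row directory behaviour can be modelled with a few lines of plain Python (a toy sketch, not Oracle code &#8211; offsets here are list positions rather than byte addresses): entries land in arrival order, the directory is kept in key order, and a rewrite re-sorts the entries so that the directory offsets become monotonic.</p>

```python
# Toy model of an index leaf block: a "heap" of index entries plus a row
# directory. Offsets are list positions here, not byte addresses.

def build_block(keys_in_arrival_order):
    heap = list(keys_in_arrival_order)  # entries land in arrival order
    # Row directory: heap offsets arranged so that walking row#0..row#n
    # visits the keys in key order (this is the part Oracle keeps sorted).
    directory = sorted(range(len(heap)), key=lambda off: heap[off])
    return heap, directory

def rewrite_block(heap, directory):
    # On the rewrite described above, entries are written back out in key
    # order, so the directory offsets become monotonically increasing.
    new_heap = [heap[off] for off in directory]
    return new_heap, list(range(len(new_heap)))

heap, directory = build_block([50, 10, 40, 20, 30])  # random insert order
print(directory)      # offsets jump around the block: [1, 3, 4, 2, 0]
heap, directory = rewrite_block(heap, directory)
print(directory)      # after the rewrite: [0, 1, 2, 3, 4]
```

<p>Either way the directory yields the keys in key order; the rewrite only changes where the entries physically sit.</p>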
Presumably this has (or maybe had) a benefit as far as the CPU cache and cache lines are concerned.</p> <p>So there is a method in the madness of <em>&#8220;copy the whole block even when the content doesn&#8217;t change&#8221;</em>. The content doesn&#8217;t change but the order does, and paying the cost of sorting once may return a benefit in efficiency many times in the future.</p> <p>&nbsp;</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19075 Tue Oct 30 2018 09:29:42 GMT-0400 (EDT) Oracle Enterprise Manager Cloud Control 13c Release 3 (13.3.0.0) Upgrade http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/g5DzVnJh1qs/ <p><img class="alignleft wp-image-6512" src="https://oracle-base.com/blog/wp-content/uploads/2016/10/em13c.jpg" alt="" width="200" height="192" />A few months ago I wrote about the installation and upgrade <a href="https://oracle-base.com/blog/2018/07/12/oracle-enterprise-manager-cloud-control-13c-release-3-13-3-0-0-installation-upgrade/">Oracle Enterprise Manager Cloud Control 13c Release 3 (13.3.0.0)</a>.</p> <p>At the time I did a clean install and an example upgrade from 13.2 to 13.3. The idea behind the upgrade was basically to practice what I needed to do at work.</p> <p>Just before I left for OpenWorld I got our virtualization folks to give me a clone of the production Cloud Control VM and I ran a practice upgrade on that. It&#8217;s important to do a &#8220;real&#8221; run through, as sometimes you hit issues you don&#8217;t see when upgrading from a clean installation of the previous version. In the past the upgrade of the clean installation of the previous version has worked fine, but the real upgrade failed the prerequisite checks as some of the agents or plugins were too old. The latest test on the clone worked fine, so we had the green light to do the production upgrade.</p> <p>Post OOW18, my first job on returning to work was to get Cloud Control upgraded. 
I repeated the process I had done on the clone and it went fine.</p> <p>In a funny coincidence, while I was doing the upgrade someone retweeted the blog post from a few months ago. Weird.</p> <p>As a reminder, here are the 13.3 articles.</p> <ul> <li><a href="https://oracle-base.com/articles/13c/cloud-control-13cr3-installation-on-oracle-linux-6-and-7">Oracle Enterprise Manager Cloud Control 13c Release 3 (13.3.0.0) Installation on Oracle Linux 6 and 7</a></li> <li><a href="https://oracle-base.com/articles/13c/cloud-control-13cr2-to-13cr3-upgrade">Upgrade Oracle Enterprise Manager Cloud Control 13c Release 2 (13cR2) to 13c Release 3 (13cR3)</a></li> </ul> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/10/30/oracle-enterprise-manager-cloud-control-13c-release-3-13-3-0-0-upgrade/">Oracle Enterprise Manager Cloud Control 13c Release 3 (13.3.0.0) Upgrade</a> was first posted on October 30, 2018 at 8:58 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/g5DzVnJh1qs" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8615 Tue Oct 30 2018 03:58:10 GMT-0400 (EDT) Index splits https://jonathanlewis.wordpress.com/2018/10/29/index-splits/ <p>After writing this note I came to the conclusion that it will be of no practical benefit to anyone &#8230;  but I&#8217;m publishing it anyway because it&#8217;s just an interesting little observation about the thought processes of some Oracle designer/developer. 
(Or maybe it&#8217;s an indication of how it&#8217;s sensible to re-use existing code rather than coding for a particular boundary case, or maybe it&#8217;s an example of how to take advantage of &#8220;dead time&#8221; to add a little icing to the cake when the extra time required might not get noticed). Anyway, the topic came up in a recent thread on the OTN/ODC database forum and since the description given there wasn&#8217;t quite right I thought I&#8217;d write up a correction and a few extra notes.</p> <p>When an index leaf block is full and a new row has to be inserted in the block Oracle will usually allocate a new leaf block, split the contents of the full block fairly evenly between two leaf blocks, then update various pointers to bring the index structure up to date. At various moments in this process the branch block above the leaf block and the leaf blocks either side of the splitting block have to be pinned. The number of times this happens is reported under the statistic <em>&#8220;leaf node splits&#8221;</em> but there is a special subset of leaf node splits that handles the case when the key in the new row is greater than the current high value in the block <em><strong>and</strong> </em>the block is the <em>&#8220;rightmost&#8221;</em> (i.e. high values) block in the index. In this case Oracle adds a new leaf block to the end of the index and inserts the new value in the new block; it doesn&#8217;t share the data at all between the old and new leaf blocks. This special case is reported under the statistic <em>&#8220;leaf node 90-10 splits&#8221;</em>, even though <em>&#8220;100-0&#8221;</em> would be a more accurate description than <em>&#8220;90-10&#8221;</em>.</p> <p>This note is a description of the work done by  a leaf node split and compares the work for a &#8220;50-50&#8221; split (as the general case is often called) and a 90-10 split. 
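<p>A loose sketch of the two split types in plain Python (illustrative only &#8211; real leaf blocks track free space in bytes rather than entry counts, and the capacities here are arbitrary):</p>

```python
def split_leaf(entries, new_key, is_rightmost_block):
    """Simulate a leaf block split on a full block.

    entries is the sorted list of keys in the full leaf block; returns the
    (old_block, new_block) contents after inserting new_key.
    """
    if is_rightmost_block and new_key > entries[-1]:
        # "90-10" (really 100-0) split: the old block keeps every existing
        # entry and the new rightmost block receives only the new entry.
        return list(entries), [new_key]
    # Ordinary "50-50" split: share the combined entries roughly evenly.
    combined = sorted(entries + [new_key])
    mid = len(combined) // 2
    return combined[:mid], combined[mid:]

full_block = list(range(1, 146))                          # 145 entries
old, new = split_leaf(full_block, 146, is_rightmost_block=True)
print(len(old), len(new))                                 # 145 1
old, new = split_leaf(full_block, 100.5, is_rightmost_block=False)
print(len(old), len(new))                                 # 73 73
```
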
You might think that the latter would be less resource-intensive than the former but, in fact, that&#8217;s not the case. Here&#8217;s a little script to get things going &#8211; I&#8217;m using an 8KB block size and ASSM (automatic segment space management); if your default tablespace definition is different the number of rows you have to use will need to be changed.</p> <pre class="brush: plain; title: ; notranslate">
rem
rem Script: index_splits3.sql
rem Author: Jonathan Lewis
rem Dated: September 2007
rem

start setenv
set timing off
set feedback off

define m_limit = 292

drop table t1 purge;

create table t1 (id number, idx_pad varchar2(40), padding varchar2(50));
alter table t1 add constraint t1_pk primary key(id, idx_pad);

column object_id new_value m_index_id
select object_id from user_objects where object_name = 'T1_PK' and object_type = 'INDEX' ;

begin
    for i in 1..&amp;m_limit loop
        insert into t1 values( i, rpad('x',40,'x'), rpad(' ',50) );
        commit;
    end loop;
end;
/
</pre> <p>I&#8217;ve created a table with a two-column primary key and inserted <strong><em>&#8220;m_limit&#8221;</em></strong> rows into that table in an order that matches the primary key. The limit of 292 rows (which I&#8217;ve declared at the top of the program) means that the index entries for the data set will exactly fill two leaf blocks. I&#8217;ve captured the <em><strong>object_id</strong></em> of the index because I want to do a <a href="https://jonathanlewis.wordpress.com/2009/08/17/treedump/"><em><strong>&#8220;treedump&#8221;</strong></em></a> of the index before and after inserting the next row.</p> <p>I now need to run one of two tests inserting a single row.
Either insert a row that is above the current highest value to force a 90-10 index leaf node split, or insert a row a little below the current high value to force a 50-50 index node split in the 2nd of the two index leaf blocks.</p> <pre class="brush: plain; title: ; notranslate">
alter session set events 'immediate trace name treedump level &amp;m_index_id';
alter system switch logfile;

execute snap_my_stats.start_snap;

begin
    for i in &amp;m_limit + 1 .. &amp;m_limit + 1 loop
        insert into t1 values(
            i, rpad('x',40,'x'), rpad(' ',50)
--          i - 2 , rpad('y',40,'y'), rpad(' ',50)
        );
        commit;
    end loop;
end;
/

execute snap_my_stats.end_snap

alter session set events 'immediate trace name treedump level &amp;m_index_id';
execute dump_log
</pre> <p>The calls to <a href="https://jonathanlewis.wordpress.com/2016/10/06/my-session-workload/"><em><strong>snap_my_stats</strong></em></a> are using a package I wrote a long time ago to report a delta in my session&#8217;s stats. The call to <em><strong>dump_log</strong></em> uses another little package to identify the current log file and issue an <em>&#8220;alter system dump logfile &#8230;&#8221;</em> command. Of the two possible sets of values for the row being inserted the first one will cause a 90-10 split, the second (commented out) will cause a 50-50 split.
These results came from a database running 11.2.0.4, but the results are the same for 12.1.0.2 and 18.3.0.0:</p> <pre class="brush: plain; title: ; notranslate">
----- begin tree dump
branch: 0x140008b 20971659 (0: nrow: 2, level: 1)
   leaf: 0x140008e 20971662 (-1: nrow: 147 rrow: 147)
   leaf: 0x140008f 20971663 (0: nrow: 145 rrow: 145)
----- end tree dump

----- begin tree dump
branch: 0x140008b 20971659 (0: nrow: 3, level: 1)
   leaf: 0x140008e 20971662 (-1: nrow: 147 rrow: 147)
   leaf: 0x140008f 20971663 (0: nrow: 145 rrow: 145)
   leaf: 0x140008c 20971660 (1: nrow: 1 rrow: 1)
----- end tree dump

----- begin tree dump
branch: 0x140008b 20971659 (0: nrow: 3, level: 1)
   leaf: 0x140008e 20971662 (-1: nrow: 147 rrow: 147)
   leaf: 0x140008f 20971663 (0: nrow: 78 rrow: 78)
   leaf: 0x140008c 20971660 (1: nrow: 68 rrow: 68)
----- end tree dump
</pre> <p>As you can see the extra row in the first case has been inserted into a new leaf block leaving the 2nd leaf block (apparently) unchanged; in the second case the 145 initial rows plus the one extra row have been shared fairly evenly between two leaf blocks. I can&#8217;t explain the imbalance in this case; it doesn&#8217;t affect the length of the branch entry. (If you&#8217;re wondering why the first leaf block held 147 entries while the original 2nd leaf block held 145 it&#8217;s because the first 100 entries in the first leaf block had a value for the <em><strong>id</strong></em> column that was 2 bytes long, after which the <em><strong>id</strong></em> needed 3 bytes storage for Oracle&#8217;s internal representation.)</p> <p>Having examined the treedumps to see that the splits are 90-10 and 50-50 respectively we can now look at the <em><strong>undo</strong></em> and <em><strong>redo</strong></em> generated by the different cases. Here are the relevant values extracted from the snapshots of the session stats.
Again the first set comes from the 90-10 split, the second from the 50-50 split.</p> <pre class="brush: plain; title: ; notranslate">
Redo/Undo stats                        90/10 split
--------------------------------------------------
redo entries                                     9
redo size                                   18,500
undo change vector size                      8,736

Redo/Undo stats                        50/50 split
--------------------------------------------------
redo entries                                     9
redo size                                   18,520
undo change vector size                      8,736
</pre> <p>In both cases the volume of undo and redo is the same (plus or minus a tiny bit &#8211; with tiny variations across versions). The <em><strong>undo</strong> </em>is equivalent to roughly a whole block plus a few percent (and that will be copied into the <em><strong>redo</strong></em>, of course) and the &#8220;pure&#8221; <em><strong>redo</strong></em> is also equivalent to a whole block plus a few percent for a total of two data blocks worth plus a couple of thousand bytes. (The extra percentage is mostly about the old and new pointers as we break and make links in the leaf blocks and update and insert links from the branch block above.)</p> <p>So why does a 90/10 split, which appears simply to add a leaf block and insert one row, do so much work? The answer lies (to start with) in the dump of the redo log file. The session statistics show 9 redo entries (<em><strong>redo change records</strong></em>) generated in both cases, so I&#8217;m going to start by picking out a summary of those records from the log file dumps using <em><strong>egrep</strong></em> to identify the lines showing the redo change record length (<em><strong>LEN:</strong></em>) and the <em><strong>redo change vector</strong></em> op codes (<a href="https://jonathanlewis.wordpress.com/2017/07/25/redo-op-codes/"><em><strong>OP:</strong></em></a>).
Here&#8217;s the output, with a little cosmetic modification, for the 90-10 split.</p> <pre class="brush: plain; title: ; notranslate"> egrep -e &quot;LEN:&quot; -e&quot;OP:&quot; test_ora_20690.trc REDO RECORD - Thread:1 RBA: 0x00314b.00000002.0010 LEN: 0x0074 VLD: 0x05 (LWN RBA: 0x00314b.00000002.0010 LEN: 0038 NST: 0001 SCN: 0x0b86.0cfca74e) CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008f OBJ:349950 SCN:0x0b86.0cfca74a SEQ:2 OP:4.1 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000002.0084 LEN: 0x0144 VLD: 0x01 CHANGE #1 TYP:0 CLS:73 AFN:3 DBA:0x00c00160 OBJ:4294967295 SCN:0x0b86.0cfca746 SEQ:2 OP:5.2 ENC:0 RBL:0 CHANGE #2 TYP:1 CLS:74 AFN:3 DBA:0x00c0465f OBJ:4294967295 SCN:0x0b86.0cfca74e SEQ:1 OP:5.1 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008f OBJ:349950 SCN:0x0b86.0cfca74e SEQ:1 OP:10.6 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000002.01c8 LEN: 0x20a4 VLD: 0x01 *** CHANGE #1 TYP:0 CLS:73 AFN:3 DBA:0x00c00160 OBJ:4294967295 SCN:0x0b86.0cfca74e SEQ:1 OP:5.2 ENC:0 RBL:0 CHANGE #2 TYP:1 CLS:74 AFN:3 DBA:0x00c04660 OBJ:4294967295 SCN:0x0b86.0cfca74e SEQ:1 OP:5.1 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008f OBJ:349950 SCN:0x0b86.0cfca74e SEQ:2 OP:10.9 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000013.017c LEN: 0x0044 VLD: 0x01 CHANGE #1 TYP:0 CLS:8 AFN:5 DBA:0x01400088 OBJ:349950 SCN:0x0b86.0cfca638 SEQ:3 OP:13.22 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000013.01c0 LEN: 0x01ac VLD: 0x01 CHANGE #1 TYP:0 CLS:73 AFN:3 DBA:0x00c00160 OBJ:4294967295 SCN:0x0b86.0cfca74e SEQ:2 OP:5.2 ENC:0 RBL:0 CHANGE #2 TYP:1 CLS:74 AFN:3 DBA:0x00c04661 OBJ:4294967295 SCN:0x0b86.0cfca74e SEQ:1 OP:5.1 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:349950 SCN:0x0b86.0cfca638 SEQ:2 OP:10.8 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000014.017c LEN: 0x0048 VLD: 0x01 CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008b OBJ:349950 SCN:0x0b86.0cfca639 SEQ:2 OP:4.1 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000014.01c4 LEN: 
0x00e0 VLD: 0x01 CHANGE #1 TYP:0 CLS:74 AFN:3 DBA:0x00c04661 OBJ:4294967295 SCN:0x0b86.0cfca74e SEQ:2 OP:5.1 ENC:0 RBL:0 CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008b OBJ:349950 SCN:0x0b86.0cfca74e SEQ:1 OP:10.15 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000015.00b4 LEN: 0x1fb0 VLD: 0x01 *** CHANGE #1 TYP:0 CLS:73 AFN:3 DBA:0x00c00160 OBJ:4294967295 SCN:0x0b86.0cfca74e SEQ:3 OP:5.4 ENC:0 RBL:0 CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008f OBJ:349950 SCN:0x0b86.0cfca74e SEQ:3 OP:10.8 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314b.00000025.0164 LEN: 0x0320 VLD: 0x09 CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x01400084 OBJ:349949 SCN:0x0b86.0cfca74a SEQ:2 OP:11.2 ENC:0 RBL:0 CHANGE #2 TYP:0 CLS:83 AFN:3 DBA:0x00c004a8 OBJ:4294967295 SCN:0x0b86.0cfca738 SEQ:2 OP:5.2 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:349950 SCN:0x0b86.0cfca74e SEQ:1 OP:10.5 ENC:0 RBL:0 CHANGE #4 TYP:0 CLS:83 AFN:3 DBA:0x00c004a8 OBJ:4294967295 SCN:0x0b86.0cfca750 SEQ:1 OP:5.4 ENC:0 RBL:0 CHANGE #5 TYP:0 CLS:84 AFN:3 DBA:0x00c04b0f OBJ:4294967295 SCN:0x0b86.0cfca738 SEQ:2 OP:5.1 ENC:0 RBL:0 CHANGE #6 TYP:0 CLS:84 AFN:3 DBA:0x00c04b0f OBJ:4294967295 SCN:0x0b86.0cfca750 SEQ:1 OP:5.1 ENC:0 RBL:0 </pre> <p>I&#8217;ve highlighted two redo records with &#8216;***&#8217; at the end of line. One of these records has length 0x20a4, the other has length 0x1fb0 i.e. roughly a whole data block each. We&#8217;ll look at those in more detail in a moment. 
Here, for comparison, is the result from the 50-50 split &#8211; again with a few highlighted lines:</p> <pre class="brush: plain; title: ; notranslate"> REDO RECORD - Thread:1 RBA: 0x00314f.00000002.0010 LEN: 0x0074 VLD: 0x05 (LWN RBA: 0x00314f.00000002.0010 LEN: 0038 NST: 0001 SCN: 0x0b86.0cfcbc25) CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008f OBJ:349962 SCN:0x0b86.0cfcbc21 SEQ:2 OP:4.1 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.00000002.0084 LEN: 0x0144 VLD: 0x01 CHANGE #1 TYP:0 CLS:69 AFN:3 DBA:0x00c000e8 OBJ:4294967295 SCN:0x0b86.0cfcbc15 SEQ:2 OP:5.2 ENC:0 RBL:0 CHANGE #2 TYP:0 CLS:70 AFN:3 DBA:0x00c10c43 OBJ:4294967295 SCN:0x0b86.0cfcbc15 SEQ:2 OP:5.1 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008f OBJ:349962 SCN:0x0b86.0cfcbc25 SEQ:1 OP:10.6 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.00000002.01c8 LEN: 0x20a4 VLD: 0x01 *** CHANGE #1 TYP:0 CLS:69 AFN:3 DBA:0x00c000e8 OBJ:4294967295 SCN:0x0b86.0cfcbc25 SEQ:1 OP:5.2 ENC:0 RBL:0 CHANGE #2 TYP:1 CLS:70 AFN:3 DBA:0x00c10c44 OBJ:4294967295 SCN:0x0b86.0cfcbc25 SEQ:1 OP:5.1 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008f OBJ:349962 SCN:0x0b86.0cfcbc25 SEQ:2 OP:10.9 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.00000013.017c LEN: 0x0044 VLD: 0x01 CHANGE #1 TYP:0 CLS:8 AFN:5 DBA:0x01400088 OBJ:349962 SCN:0x0b86.0cfcbb24 SEQ:3 OP:13.22 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.00000013.01c0 LEN: 0x1010 VLD: 0x01$ *** CHANGE #1 TYP:0 CLS:69 AFN:3 DBA:0x00c000e8 OBJ:4294967295 SCN:0x0b86.0cfcbc25 SEQ:2 OP:5.2 ENC:0 RBL:0 CHANGE #2 TYP:1 CLS:70 AFN:3 DBA:0x00c10c45 OBJ:4294967295 SCN:0x0b86.0cfcbc25 SEQ:1 OP:5.1 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:349962 SCN:0x0b86.0cfcbb24 SEQ:2 OP:10.8 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.0000001c.0060 LEN: 0x0048 VLD: 0x01 CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x0140008b OBJ:349962 SCN:0x0b86.0cfcbb25 SEQ:2 OP:4.1 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.0000001c.00a8 LEN: 0x00e0 VLD: 0x01 CHANGE #1 TYP:0 CLS:70 
AFN:3 DBA:0x00c10c45 OBJ:4294967295 SCN:0x0b86.0cfcbc25 SEQ:2 OP:5.1 ENC:0 RBL:0 CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008b OBJ:349962 SCN:0x0b86.0cfcbc25 SEQ:1 OP:10.15 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.0000001c.0188 LEN: 0x1150 VLD: 0x01 *** CHANGE #1 TYP:0 CLS:69 AFN:3 DBA:0x00c000e8 OBJ:4294967295 SCN:0x0b86.0cfcbc25 SEQ:3 OP:5.4 ENC:0 RBL:0 CHANGE #2 TYP:0 CLS:1 AFN:5 DBA:0x0140008f OBJ:349962 SCN:0x0b86.0cfcbc25 SEQ:3 OP:10.8 ENC:0 RBL:0 REDO RECORD - Thread:1 RBA: 0x00314f.00000025.0168 LEN: 0x0330 VLD: 0x09 CHANGE #1 TYP:2 CLS:1 AFN:5 DBA:0x01400084 OBJ:349961 SCN:0x0b86.0cfcbc21 SEQ:2 OP:11.2 ENC:0 RBL:0 CHANGE #2 TYP:0 CLS:73 AFN:3 DBA:0x00c00160 OBJ:4294967295 SCN:0x0b86.0cfcbc1a SEQ:2 OP:5.2 ENC:0 RBL:0 CHANGE #3 TYP:0 CLS:1 AFN:5 DBA:0x0140008c OBJ:349962 SCN:0x0b86.0cfcbc25 SEQ:1 OP:10.5 ENC:0 RBL:0 CHANGE #4 TYP:0 CLS:73 AFN:3 DBA:0x00c00160 OBJ:4294967295 SCN:0x0b86.0cfcbc27 SEQ:1 OP:5.4 ENC:0 RBL:0 CHANGE #5 TYP:0 CLS:74 AFN:3 DBA:0x00c04c64 OBJ:4294967295 SCN:0x0b86.0cfcbc1a SEQ:2 OP:5.1 ENC:0 RBL:0 CHANGE #6 TYP:0 CLS:74 AFN:3 DBA:0x00c04c64 OBJ:4294967295 SCN:0x0b86.0cfcbc27 SEQ:1 OP:5.1 ENC:0 RBL:0 </pre> <p>There are three interesting records in the 50-50 split with lengths 0x20a4 (the same as the 90-10 split), 0x1010, 0x1150. So we seem to start the same way with a &#8220;full block&#8221; record, and follow up with two &#8220;half block&#8221; records. 
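<p>Converting the highlighted record lengths out of hex supports that reading (8KB block size, as used in this test); a quick check in Python, for example:</p>

```python
BLOCK_SIZE = 8192  # 8KB block size used in the test

# Record lengths (LEN:) highlighted in the two redo log dumps above.
lengths = {"0x20a4": 0x20A4, "0x1fb0": 0x1FB0, "0x1010": 0x1010, "0x1150": 0x1150}
for text, length in lengths.items():
    print(f"{text} = {length:5d} bytes = {length / BLOCK_SIZE:.2f} blocks")
# 0x20a4 and 0x1fb0 each come to roughly a whole block;
# 0x1010 and 0x1150 each come to roughly half a block.
```
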
The numbers allow you to make a reasonable guess &#8211; Oracle copies the original leaf block into the <em><strong>undo</strong></em>, then writes the two new leaf blocks as &#8220;pure&#8221; <em><strong>redo</strong></em>; in one case the two new leaf block redo records constitute a whole block and a tiny fraction of a block; in the other case the two new leaf block redo records constitute two half blocks.</p> <p>I won&#8217;t show you the full detail that I checked in the log file dump, but the big 0x20a4 record in the 90-10 split is mostly made up of an <strong><em>&#8220;OP:5.1&#8221;</em></strong> change vector labelled <em>&#8220;restore block before image (8032)&#8221;</em>, while the 5th and 8th records in <span style="text-decoration:underline;"><strong>both</strong></span> dumps hold <em><strong>&#8220;OP:10.8&#8221;</strong></em> change vectors labelled <em>&#8220;(kdxlne) &#8230; new block has NNN rows&#8221;</em>. In the case of the 90-10 split the values for NNN are 1 and 145, in the case of the 50-50 split the values for NNN are 68 and 78 &#8211; in that (higher values leaf block first) order.</p> <p>The 90-10 split and the 50-50 split act in <strong><span style="text-decoration:underline;">exactly the same way</span></strong> &#8211; save the old block, allocate a new block, populate two blocks. 
It really looks as if code re-use has missed an easy opportunity for some optimisation &#8211; why save and rewrite a block when the data content is not going to change ?</p> <p>Before assuming there&#8217;s a bug (or defect, or sub-optimal implementation) though it&#8217;s wise to consider whether there might be something else going on &#8211; Oracle developers (especially at the deeper levels) tend to have good reasons for what they do so maybe the rewrite is deliberate and serves a useful purpose.</p> <p>If you do anything with my current test you won&#8217;t spot the extra little feature because my test is running a very special edge case &#8211; but I had a thought that would justify the cost (and timing) of the rewrite, and I&#8217;ll be publishing the idea, the test, and the results tomorrow.</p> <h3>Footnote</h3> <p>It is possible that a leaf node split means Oracle has to insert a pointer into a level 1 branch node that is already full &#8211; in which case Oracle will have to allocate a new branch node, share the branch data (including the new leaf pointer) between the two nodes, and insert a new branch pointer into the relevant level 2 branch block &#8230; and that may split etc. all the way up to the root. When the root node splits Oracle allocates two new blocks, increasing the branch level by one and keeping the original root block in place (immediately after all the space management blocks) but now pointing to just 2 &#8220;branch N-1&#8221; level blocks. 
Oracle will update the statistics <em><strong>&#8220;branch node splits&#8221;</strong></em> and <strong><em>&#8220;root node splits&#8221;</em></strong>.</p> <p>In certain situations (typically relating to very large deletes followed by highly concurrent small inserts) Oracle may run into problems identifying a suitable &#8220;free&#8221; block while trying to do a split, and this can result in very slow splits that generate a lot of undo and redo while pinning index leaf blocks exclusively (leading to a couple of the rarer <strong><em>&#8220;enq &#8211; TX:&#8221;</em></strong> waits). In this case you may see the statistics <em>&#8220;failed probes on index block reclamation&#8221;</em> and <em>&#8220;recursive aborts on index block reclamation&#8221;</em> starting to climb. In theory I think you shouldn&#8217;t see more than a handful of <em><strong>&#8220;failed probes&#8221;</strong></em> per <em><strong>&#8220;recursive abort&#8221;</strong></em> &#8211; but I&#8217;ve never been able to model the problem to check that.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19065 Mon Oct 29 2018 09:48:33 GMT-0400 (EDT) The Latest Azure SQL Features in GA https://blog.pythian.com/azures-new/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><i><span style="font-weight: 400;">I </span></i><a href="https://blog.pythian.com/cloudscape-podcast-episode-9-september-2018/"><i><span style="font-weight: 400;">recently</span></i></a><i><span style="font-weight: 400;"> joined Chris Presley for his podcast, </span></i><a href="https://blog.pythian.com/?s=cloudscape"><i><span style="font-weight: 400;">Cloudscape</span></i></a><i><span style="font-weight: 400;">, to talk about what’s happening in the world of cloud-related matters.
I shared the most recent events surrounding Microsoft Azure.</span></i></p> <p><i><span style="font-weight: 400;">Topics of discussion included:</span></i></p> <p><b>Ethereum Proof-of-Authority on Azure</b><br /> <b>Azure Cost Forecast API launch</b><br /> <b>SQL Data Warehouse updates</b><br /> <b><i>   &#8211; Accelerated and Flexible Restore Points</i></b><br /> <b><i>   &#8211; </i></b><b><i>Intelligent Performance Insights</i></b><br /> <b>Security Center Adaptive Application Controls in GA</b><br /> <b>Azure Management Groups now in GA</b><br /> <b>Azure Migrate enhancements:</b><br /> <b><i>   &#8211; Support for reserved instances</i></b><br /> <b><i>   &#8211; VM series policy</i></b><br /> <b><i>   &#8211; VM uptime</i></b><br /> <b><i>   &#8211; Windows 2008 support</i></b></p> <p>&nbsp;</p> <p><b>Ethereum Proof-of-Authority on Azure</b></p> <p><span style="font-weight: 400;">Ethereum is one of the biggest public network blockchains for smart contract execution. It’s also usable for private or consortium-style use cases. A consortium being, for example, eight companies that want to run their own blockchain, where each one is a member of the chain network.</span></p> <p><span style="font-weight: 400;">Azure has different templates that you can use to get going pretty quickly. To deploy these blockchain networks, the Ethereum template has used what is known as a “proof-of-work” consensus mechanism. Because blockchains are distributed networks, they have different algorithms to decide which is the real version of the truth.</span></p> <p><span style="font-weight: 400;">The initial Ethereum release of the template used the default Ethereum configuration, which uses proof of work to define consensus. Proof of work is a very computationally intensive activity and that’s what is called mining.
It didn’t make sense to use proof of work in a private or consortium-style blockchain because in those scenarios the network conditions are not adversarial and usually the validating parties are well known. For example, if I have a distributed ledger between different regions of my corporate offices around the world.</span></p> <p><span style="font-weight: 400;">In that case, it makes no sense to have proof of work as consensus and have all of the regions doing really computationally intensive calculations in order to arrive at a consensus. I’m not suddenly going to have some unknown bad actor appear inside the network. In the public blockchain, Ethereum has a monetary value, so there’s an incentive for people to do that. In a private setting it is enough to stake your reputation, using the Proof of Authority model.</span></p> <p><span style="font-weight: 400;">Now the Azure Ethereum template has the option to do exactly this type of model. You say, “These accounts are allowed to validate transactions” and that’s all you need. Then what happens is it’s less computationally intensive, so you need smaller VMs to be transaction validators. At the same time, it increases the throughput of your blockchain because you’re not doing all those computational puzzles that we call mining. Note that you haven’t made your chain less secure, as long as you know who the validators are (this piece is not feasible in a public network that anyone can join).</span></p> <p><span style="font-weight: 400;">This is just going to make it easier for people to deploy production blockchains using Ethereum in private or partnership-style scenarios.
Then, for example, if they are all a part of the same supply chain and want to run a distributed ledger, then each one will have a validator and won’t waste resources to do proof of work.</span></p> <p><b>Azure cost forecast API launch</b></p> <p><span style="font-weight: 400;">This is what I call a quality-of-life type of improvement, which we see month in and month out: just small bits that come in to round out a story or make life easier for somebody. This is exactly that.</span></p> <p><span style="font-weight: 400;">Azure already has a cost API. You can get your own cost numbers through their API endpoint, you can build your own reports, you can consume it in your own application and whatnot. The new change that they have published now is a forecasting feature inside that API. You don’t have to run your own forecasts; now you can just use the same API. You can pass parameters and say: “I want to see a daily forecast or a monthly forecast.” The service will give you an upper and a lower boundary of what the forecast is at a 95% degree of confidence. I think it’s all calculated based on statistical analysis of your subscription.</span></p> <p><span style="font-weight: 400;">While this is not revolutionary, it is pretty handy. If you want to build some reporting, or if you are a developer and you’re trying to build some sort of custom cloud cost solution, they just made your life a little bit easier by adding the forecasting capabilities straight into the API.</span></p> <p><b>SQL Data Warehouse updates</b></p> <p><b><i>Accelerated and flexible restore points</i></b></p> <p><span style="font-weight: 400;">There have been a couple of updates for SQL Data Warehouse.</span></p> <p><span style="font-weight: 400;">First is accelerated and flexible restore points. They are adding more options to what you can restore. Before, you could only pick restore points from a pool taken once every 24 hours.
Now you can select restore points from a pool taken once every eight hours.</span></p> <p><span style="font-weight: 400;">Potentially, you now have triple the number of restore points that you had before. But the really cool bit here is that your restore time is 20 minutes or less regardless of data warehouse size. Whether it’s 10 terabytes or 10 petabytes, it always takes 20 minutes or less. The restore is not a size-of-data operation; it is just a flat amount always below that 20-minute threshold. You don’t have to worry about how long it is going to take.</span></p> <p><span style="font-weight: 400;">It opens up other scenarios besides just regular recovery and RTO, for example, using DW restores for normal development and testing, even as part of an automated CI/CD pipeline.</span></p> <p><b><i>Intelligent performance insights</i></b></p> <p><span style="font-weight: 400;">The other SQL DW update is the introduction of intelligent performance insights into the data warehouse experience in the portal. This new feature analyzes some stats and situations inside your data warehouse and suggests improvements. Right now, there are only the basics but it is a starting point. It alerts you if you have some skewed distributions in your data or if your stats might need some updating.</span></p> <p><span style="font-weight: 400;">At least they have started down that path and maybe we’ll see in the future if they provide bigger or better recommendations as they build out that engine further.</span></p> <p><b>Security Center Adaptive Application Controls in general availability</b></p> <p><span style="font-weight: 400;">The Security Center feature is called Adaptive Application Controls. You can give the Security Center access to analyze your VMs and it will make an inventory of the applications that are running inside of them. You can set it into either audit or enforce mode.
</span></p> <p><span style="font-weight: 400;">In audit mode, if someone decides to install something that is not on the list of the allowed applications, it will flag it. If you feel really comfortable with the tool, you can actually set it to enforce. This means that when the Security Center detects that the VM is about to execute something that is not whitelisted, it blocks the execution.</span></p> <p><span style="font-weight: 400;">For a production environment that has really tight security requirements and must stay under compliance at all times, I can definitely see how this could get widespread adoption.</span></p> <p><b>Azure Management Groups now in general availability</b></p> <p><span style="font-weight: 400;">Azure Management Groups is a new feature to make it easy for really big Azure users and really big clients to manage their whole Azure tenant. With management groups, you can organize different subscriptions into groups and then you can push policies and reports onto the subscriptions inside those groups.</span></p> <p><span style="font-weight: 400;">For example, you could have a subscription for development and testing of your main revenue-generating application. And then for security and segregation of duties, you have a totally separate subscription for production. So you can track your development costs separately from your production cost. For security, you wouldn’t have the admin of your dev subscription be the admin of the production one.</span></p> <p><span style="font-weight: 400;">Or maybe you decide to organize your resources based on whether they are part of the same product application. They share an overall budget, maybe they share the people who are allowed to work on them, or maybe they share the regions that they are allowed to be deployed on.
You could put them inside one management group and then set those policies at the level of the management group and they would trickle down to the individual subscriptions.</span></p> <p><span style="font-weight: 400;">It’s a feature to make it a lot easier to manage Azure at scale. Individuals who are just playing around at home with only one subscription won’t need this feature. This is for enterprise-level adoption.</span></p> <p><b>Azure Migrate enhancements:</b></p> <p><span style="font-weight: 400;">Azure Migrate is the service that lets you easily migrate your on-premises virtual machine state to Azure. The Microsoft team is working on having an agent for physical machines but it’s not here yet. What this service does is provide an analysis of all your VMs on premises and then it gives you suggestions and estimations as to what that would look like if you were to move to Azure.</span></p> <p><span style="font-weight: 400;">The service is actually very neat. They give you a VM appliance that you download off the Azure website and run on your ESX. It talks to the ESX server and collects all of the different configuration metrics and the performance metrics of all the machines running in that hypervisor. Then it provides an estimate to move into Azure and, if you do want to move into Azure, it also guides you through installing the site recovery agent.</span></p> <p><b><i>Support for reserved instances</i></b></p> <p><span style="font-weight: 400;">This month they have added support for reserved instances. You know your workload best, you know your compliance, your requirements and all of these things, right? So you are allowed to customize some of the ways that the tool generates the estimates to make it more accurate as to what it’s going to be in the end.</span></p> <p><span style="font-weight: 400;">For example, support for reserved instances means that I can say, “Well I know that these VMs are 24/7.
I know that this is not going anywhere. So in the estimate, give me the prices if I reserve them for three years” instead of having the regular pay-as-you-go pricing.</span></p> <p><b><i>VM series policy &amp; VM uptime</i></b></p> <p><span style="font-weight: 400;">There is a family of general purpose VMs. There is a family of GPU VMs, there is a family of burstable VMs. So it also allows you to say, “Well I know that this ESX hypervisor is jam-packed with developer instances. So give me estimates here if all these developer instances turn into burstable VMs.” These are a lot cheaper, so you can tailor the estimate based on the specific family of VM you want to leverage.</span></p> <p><span style="font-weight: 400;">So again, the whole idea is that you can tweak your migrate experience with the tool. It is based on your own knowledge of your on-premises state, to get an accurate estimate of the cost and effort of migrating.</span></p> <p><b><i>Windows 2008 support</i></b></p> <p><span style="font-weight: 400;">The last enhancement is that they have added Windows 2008 support. So you can transparently use the tool to migrate Windows 2008, which at this point is 10 years old. They even went the extra mile and are migrating 32-bit Windows 2008 from on-premises into Azure.
I know somebody out there is still running 32-bit Windows 2008 as a production server &#8211; now is your time!</span></p> <p><i><span style="font-weight: 400;">This was Part 2 of the Microsoft Azure topics we discussed during the podcast. Chris also welcomed </span></i><a href="https://www.linkedin.com/in/gregbaker2/"><i><span style="font-weight: 400;">Greg Baker</span></i></a><i><span style="font-weight: 400;"> (Amazon Web Services expert), who discussed topics related to his expertise.</span></i></p> <p><a href="https://blog.pythian.com/cloudscape-podcast-episode-9-september-2018/"><i><span style="font-weight: 400;">Listen</span></i></a><i><span style="font-weight: 400;"> to the full conversation and be sure to subscribe to the podcast to be notified when a new episode has been released.</span></i></p> </div></div> Warner Chaves https://blog.pythian.com/?p=105313 Mon Oct 29 2018 09:14:04 GMT-0400 (EDT) Oracle OpenWorld and Code One 2018 : It’s a Wrap! http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/wJGmJz0WIyk/ <p><img class="alignnone size-full wp-image-8421" src="https://oracle-base.com/blog/wp-content/uploads/2018/09/oow18.jpg" alt="" width="614" height="137" /></p> <p><img class="alignleft wp-image-8613" src="https://oracle-base.com/blog/wp-content/uploads/2018/10/anthropomorphized-animals-2023331_640.png" alt="" width="200" height="260" />Here are some top-level thoughts about what happened over the week at Oracle OpenWorld and Oracle Code One.</p> <ul> <li>Oracle Cloud Infrastructure (OCI) has come of age. I spoke to a bunch of non-Oracle folks who are using OCI for real workloads and the general perception was that it delivers. Of course the Oracle folks are going to say this, which is why I didn&#8217;t ask them.
<img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> It&#8217;s taken some time for Oracle to get to this point, but they finally seem to have the infrastructure to move forward with the rest of their services.</li> <li>There was a continued focus on automation and the autonomous services. I understand some people seem conflicted about this, but this is a continuation of what&#8217;s been happening over the last 20 years. As I&#8217;ve said before, what we have now is not the destination. It&#8217;s the start (of this part) of the journey.</li> <li>All the base (on-prem) products continue to evolve. As has been the case in recent years, the evolution of Oracle products seems to be based on the features Oracle themselves need to improve their cloud services, but that is fine as it&#8217;s making the products better for us on-prem customers too.</li> <li>Oracle&#8217;s support of <a href="https://www.cncf.io/">Cloud Native Computing Foundation (CNCF)</a> is interesting. Allowing people to do the same thing on-prem and in the cloud is good for a couple of reasons. It helps people in the migration from on-prem to cloud. It also stops people feeling trapped on a cloud service. The former is great for cloud providers from an adoption perspective, but the latter is a little scary I guess. It&#8217;s important cloud providers don&#8217;t give people a reason to want to move off their services!</li> <li>Speaking to non-Oracle folks, there is a perception that Oracle still lags behind on the customer service side of things. I wrote about this a couple of years ago in a post called <a href="/blog/2016/09/17/oracle-tech-company-or-service-company/">Oracle: Tech Company or Service Company?</a> I hope Oracle focus on this. 
There is no point having great tech if people don&#8217;t feel confident about using it because of the customer service side of things.</li> <li>I&#8217;m a little confused by the re-branding of  &#8220;Oracle Groundbreakers&#8221; and the &#8220;Oracle Groundbreaker Ambassadors&#8221;. I miss Oracle Technology Network (OTN). <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></li> </ul> <p>Here are the posts related to this trip.</p> <ul> <li><a href="https://oracle-base.com/blog/2018/10/19/oracle-openworld-and-code-one-2018-the-journey-begins/">Oracle OpenWorld and Code One 2018 : The Journey Begins</a></li> <li><a href="https://oracle-base.com/blog/2018/10/20/oracle-openworld-and-code-one-2018-oracle-groundbreaker-ambassador-briefing/">Oracle OpenWorld and Code One 2018 : Oracle Groundbreaker Ambassador Briefing</a></li> <li><a href="https://oracle-base.com/blog/2018/10/21/oracle-openworld-and-code-one-2018-oracle-ace-director-briefing-and-18c-xe/">Oracle OpenWorld and Code One 2018 : Oracle ACE Director Briefing (and 18c XE)</a></li> <li><a href="https://oracle-base.com/blog/2018/10/21/its-not-all-about-you/">It&#8217;s not all about you!</a></li> <li><a href="https://oracle-base.com/blog/2018/10/23/oracle-openworld-and-code-one-2018-day-1-monday/">Oracle OpenWorld and Code One 2018 : Day 1 &#8211; Monday</a></li> <li><a href="https://oracle-base.com/blog/2018/10/24/oracle-openworld-and-code-one-2018-day-2-tuesday/">Oracle OpenWorld and Code One 2018 : Day 2 &#8211; Tuesday</a></li> <li><a href="https://oracle-base.com/blog/2018/10/24/even-aces-make-mistakes-you-never-forget-the-first-time-you-drop-the-production-database/" rel="bookmark">Even ACEs Make Mistakes : You never forget the first time you drop the production database!</a></li> <li><a href="https://oracle-base.com/blog/2018/10/25/oracle-openworld-and-code-one-2018-day-3-wednesday/">Oracle OpenWorld and Code One 2018 : Day 3 
&#8211; Wednesday</a></li> <li><a href="https://oracle-base.com/blog/2018/10/26/oracle-openworld-and-code-one-2018-day-4-thursday/">Oracle OpenWorld and Code One 2018 : Day 4 &#8211; Thursday</a></li> <li><a href="https://oracle-base.com/blog/2018/10/28/oracle-openworld-and-code-one-2018-the-journey-home/">Oracle OpenWorld and Code One 2018 : The Journey Home</a></li> <li>Oracle OpenWorld and Code One 2018 : It&#8217;s a Wrap! (this post)</li> </ul> <p>Thanks to the Oracle ACE Program and the Oracle Groundbreaker Ambassadors Program for making this trip possible for me. Let&#8217;s see what the coming year brings&#8230;</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/10/29/oracle-openworld-and-code-one-2018-its-a-wrap/">Oracle OpenWorld and Code One 2018 : It&#8217;s a Wrap!</a> was first posted on October 29, 2018 at 9:55 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/wJGmJz0WIyk" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8599 Mon Oct 29 2018 04:55:55 GMT-0400 (EDT) Upgrades – again https://jonathanlewis.wordpress.com/2018/10/28/upgrades-again-2/ <p>I&#8217;ve got a data set which I&#8217;ve recreated in 11.2.0.4 and 12.2.0.1.</p> <p>I&#8217;ve generated stats on the data set, and the stats are identical.</p> <p>I don&#8217;t have any indexes or extended stats, or SQL Plan directives or SQL Plan Profiles, or SQL Plan Baselines, or SQL Patches to worry about.</p> <p>I&#8217;m joining two tables, and the join column on one table has a <em><strong>frequency histogram</strong></em> while the join column on the other table has a <em><strong>height-balanced histogram</strong></em>.  
The histograms were created with <em><strong>estimate_percent</strong></em> =&gt; 100%. (which explains why I&#8217;ve got a height-balanced histogram in 12c rather than a <em><strong>hybrid histogram</strong></em>.)</p> <p>Here are the two execution plans, 11.2.0.4 first, pulled from memory by <em><strong>dbms_xplan.display_cursor()</strong></em>:</p> <pre class="brush: plain; title: ; notranslate"> SQL_ID f8wj7karu0hhs, child number 0 ------------------------------------- select count(*) from t1, t2 where t1.j1 = t2.j2 Plan hash value: 906334482 ----------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | ----------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 12 | | | | | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 12 | | | | |* 2 | HASH JOIN | | 1 | 1855 | 1327 |00:00:00.01 | 12 | 2440K| 2440K| 1357K (0)| | 3 | TABLE ACCESS FULL| T1 | 1 | 100 | 100 |00:00:00.01 | 6 | | | | | 4 | TABLE ACCESS FULL| T2 | 1 | 800 | 800 |00:00:00.01 | 6 | | | | ----------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access(&quot;T1&quot;.&quot;J1&quot;=&quot;T2&quot;.&quot;J2&quot;) SQL_ID f8wj7karu0hhs, child number 0 ------------------------------------- select count(*) from t1, t2 where t1.j1 = t2.j2 Plan hash value: 906334482 ----------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | ----------------------------------------------------------------------------------------------------------------- | 0 | SELECT 
STATEMENT | | 1 | | 1 |00:00:00.01 | 41 | | | | | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 41 | | | | |* 2 | HASH JOIN | | 1 | 1893 | 1327 |00:00:00.01 | 41 | 2545K| 2545K| 1367K (0)| | 3 | TABLE ACCESS FULL| T1 | 1 | 100 | 100 |00:00:00.01 | 7 | | | | | 4 | TABLE ACCESS FULL| T2 | 1 | 800 | 800 |00:00:00.01 | 7 | | | | ----------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access(&quot;T1&quot;.&quot;J1&quot;=&quot;T2&quot;.&quot;J2&quot;) </pre> <p>The key point is the difference between the two cardinality estimates. Why has that appeared, and what might the optimizer do in a more complex plan when a cardinality estimate changes?</p> <p>The difference is only 2%, but that was on a couple of data sets I just happened to run up to check something completely different; I wasn&#8217;t <em><strong>trying</strong></em> to break something, so who knows how big the variation can get. Of course if you&#8217;re switching from 11g to 12c then Oracle (Corp.) expects you to be using <em><strong>auto_sample_size</strong></em> anyway, so you shouldn&#8217;t be producing height-balanced histograms.</p> <p>So does this difference really matter? Maybe not, but if you (like many sites I&#8217;ve seen) are still using fixed percentage sample sizes and are generating histograms it&#8217;s another reason (on top of the usual instability effects of height-balanced and hybrid histograms) why you might see plans change as you upgrade from 11g to 12c.</p> <h3>Footnote</h3> <p>It looks as if the difference comes mostly from a coding error in 11g that has been fixed in 12c &#8211; I couldn&#8217;t find an official bug or <a href="https://jonathanlewis.wordpress.com/2011/01/28/fix-control/"><em><strong>fix_control</strong></em></a> that matched, though.
More on that later in the week.</p> <h3>Update</h3> <p>Chinar Aliyev has pointed out that there are three fix-controls that may be associated with this (and other) changes. From <em><strong>v$system_fix_control</strong></em> these are:</p> <pre class="brush: plain; title: ; notranslate"> 14033181 1 QKSFM_CARDINALITY_14033181 correct ndv for non-popular values in join cardinality comp. (12.1.0.1) 19230097 1 QKSFM_CARDINALITY_19230097 correct join card when popular value compared to non popular (12.2.0.1) 22159570 1 QKSFM_CARDINALITY_22159570 correct non-popular region cardinality for hybrid histogram (12.2.0.1) </pre> <p>I haven&#8217;t tested them yet, but with the code easily available in the article it won&#8217;t take long to see what the effects are when I have a few minutes. The first fix may also be why I had a final small discrepancy between 11g and 12c on the join on <a href="https://jonathanlewis.wordpress.com/2018/10/03/join-cardinality/"><em><strong>two columns with frequency histograms</strong></em></a>.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19095 Sun Oct 28 2018 08:39:13 GMT-0400 (EDT) Quick Start with Eclipse Che – Browser based IDE, running on Docker https://technology.amis.nl/2018/10/28/quick-start-with-eclipse-che-browser-based-ide-running-on-docker/ <p>One of the nice discoveries I made last week during CodeOne 2018 was Eclipse Che. This is a browser-based polyglot IDE that runs in a Docker container &#8211; either locally, on a central server or in the cloud, on a stand-alone Docker engine, a Kubernetes or an OpenShift cluster. Running Eclipse Che is quite easy; upgrading and removing Eclipse Che is just as easy. Working in a browser-based IDE does not have the exact same feel and experience as a desktop application &#8211; but it comes close. And the ease of getting it going, of managing it for a larger team and of working in a clean, separated environment is very appealing.
Additionally, having workspaces run in their own containers makes for another clean, separated approach that perhaps helps with better performance and quicker dev-test-roundtrips.</p> <p>Anyway, here are some quick notes from my first steps with Eclipse Che on my own machine:</p> <p>I am running a Docker host in an Ubuntu Virtual Machine managed by Vagrant. A local directory is mapped into the VM.</p> <p>Run a single Docker command to start Eclipse Che:</p> <pre>docker run -it --rm -e CHE_HOST=192.168.188.120 -v /var/run/docker.sock:/var/run/docker.sock -v /vagrant:/data eclipse/che start</pre> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-47.png"><img width="867" height="288" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-47.png" border="0"></a></p> <p>The ip-address set on the CHE_HOST environment variable is the address set for the Virtual Machine managed by Vagrant. The directory /vagrant is mapped from my Windows host into the Ubuntu VM.</p> <p>Container images are downloaded and started, checks are performed. At some point the Che container is running.</p> <p>Now I can access the Che IDE in my browser &#8211; on my Windows host at <a title="http://192.168.188.120:8080" href="http://192.168.188.120:8080">http://192.168.188.120:8080</a>. 
</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-48.png"><img width="676" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-48.png" border="0"></a></p> <p>I select the stack Node, set a name for a new workspace and click on Create:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-49.png"><img width="675" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-49.png" border="0"></a></p> <p></p> <p>The newly created workspace appears. Click on Open.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-50.png"><img width="867" height="223" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-50.png" border="0"></a></p> <p>The workspace was opened. </p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-51.png"><img width="769" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-51.png" border="0"></a></p> <p>Note: the workspace is running in its own Docker container:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-52.png"><img width="1067" height="80" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-52.png" border="0"></a></p> <p>Note how port 9000 is mapped to port 32792 on the Docker Host. 
This means that any application running in this workspace (container) and listening at port 9000 will be accessible at port 32792 on my laptop&#8217;s Windows host .</p> <p>Click on Create Project.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML2bde3ae7.png"><img width="482" height="338" title="SNAGHTML2bde3ae7" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="SNAGHTML2bde3ae7" src="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML2bde3ae7_thumb.png" border="0"></a></p> <p>Select NodeJS as project template, set the name and click Create.</p> <p>The new project is created, with a sample hello.js file:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-53.png"><img width="867" height="238" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-53.png" border="0"></a></p> <p>This simple file listens for HTTP requests at port 9000. We can run that Node application, for example from the terminal</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML2be2d56e.png"><img width="691" height="174" title="SNAGHTML2be2d56e" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="SNAGHTML2be2d56e" src="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML2be2d56e_thumb.png" border="0"></a></p> <p>Now the Node application can be accessed from the browser on my Windows machine, at port 32792:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-54.png"><img width="501" height="84" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-54.png" border="0"></a></p> <p>Granted, in terms of application development, this is really only scratching the surface. 
But in terms of IDE management and dev-test-run roundtrips, this is quite impressive. Note that I used a single Docker run command to run my IDE. The only preparation I had performed was running a Linux VM with a Docker engine installed. </p> <p>Eclipse Che also runs on Kubernetes and OpenShift and of course its host can be local, a private data center or a cloud environment.</p> <p>Finally, because of the volume mapping to a directory on the Linux Docker host machine that is actually shared from my Windows laptop, the files created by Eclipse Che live and persist on my laptop:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-55.png"><img width="674" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-55.png" border="0"></a></p> <h2>Resources</h2> <p>Eclipse Che Website: <a title="https://www.eclipse.org/che/" href="https://www.eclipse.org/che/">https://www.eclipse.org/che/</a>&nbsp;</p> <p>Eclipse Che documentation: <a title="https://www.eclipse.org/che/docs/index.html" href="https://www.eclipse.org/che/docs/index.html">https://www.eclipse.org/che/docs/index.html</a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/28/quick-start-with-eclipse-che-browser-based-ide-running-on-docker/">Quick Start with Eclipse Che &#8211; Browser based IDE, running on Docker</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50286 Sun Oct 28 2018 08:25:58 GMT-0400 (EDT) Oracle OpenWorld and Code One 2018 : The Journey Home http://feedproxy.google.com/~r/TheOracleBaseBlog/~3/NUQdeRuqxTE/ <p><img class="alignleft wp-image-8309" src="https://oracle-base.com/blog/wp-content/uploads/2018/08/airplane-flying-through-clouds-small.jpg" alt="" width="200" height="139" />I had the morning in the hotel, 
trying to catch up on things I missed during this trip and I did a quick visit to the gym. At about 14:00 I checked out of the hotel and got the BART to the airport.</p> <p>My boarding pass looked like it said terminal &#8220;1&#8221;, which sounded a bit odd, but I went there to check anyway. It turned out it was terminal &#8220;I&#8221;, for &#8220;international&#8221;, so I got the monorail back to the original place I had started. I got in the queue for bag drop and <a href="https://twitter.com/christhalinger">Chris Thalinger</a> was at the opposite bag drop grinning at me. He was late for his flight. We had a bit of a chat while moving through security, then he went off to catch his flight. I was still in plenty of time, so I walked down to my boarding gate, to find my flight had been delayed by an hour, so then I was really early. <img src="https://s.w.org/images/core/emoji/11/72x72/1f641.png" alt="🙁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>The flight from San Francisco to Dublin took about 9.5 hours, which was nearly an hour quicker than listed. They were trying to make up for lost time, but we also had a medical emergency on the plane, and this time it wasn&#8217;t me. After we landed we had to sit on the plane while the medics did their thing. We ended up about 30+ minutes late. I did a little head-nodding on the plane, but not really something I would call sleep. I couldn&#8217;t really watch films as there was so little room; my face was pretty much against the screen in front of me. I turned it off and listened to music instead, and stood at the back of the plane, getting in the way a lot.</p> <p>I originally had a 5-ish hour layover in Dublin, but because of the delay to the first flight that was cut to a bit over 4 hours. Dublin airport is OK, but hanging around at any airport for more than a couple of hours is soul destroying. I could feel myself getting progressively more jittery as time passed. 
<img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>The flight from Dublin to Birmingham was less than an hour, but felt like an eternity. I had a window seat on the plane, but someone was sitting in it, so I got the aisle instead, which was a good thing.</p> <p>After a short taxi ride I was home. I put on my first load of washing, and got in the bath (sorry for the mental image)&#8230; By the time I got out it was time for the second load of washing, then bed. I woke up in the morning, put on the third load of washing, and cut my hair. It was only at this point that I started to feel remotely clean.</p> <p>That&#8217;s another OpenWorld done. I&#8217;ll write a wrap up post once everything has distilled&#8230;</p> <p>Cheers</p> <p>Tim&#8230;</p> <hr style="border-top:black solid 1px" /><a href="https://oracle-base.com/blog/2018/10/28/oracle-openworld-and-code-one-2018-the-journey-home/">Oracle OpenWorld and Code One 2018 : The Journey Home</a> was first posted on October 28, 2018 at 8:54 am.<br />©2012 "<a href="http://www.oracle-base.com/blog">The ORACLE-BASE Blog</a>". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.<br /><img src="http://feeds.feedburner.com/~r/TheOracleBaseBlog/~4/NUQdeRuqxTE" height="1" width="1" alt=""/> Tim... https://oracle-base.com/blog/?p=8597 Sun Oct 28 2018 03:54:43 GMT-0400 (EDT) Oracle Gateway Installation for MSSQL Server on RHEL 7 http://oracle-help.com/oracle-database/oracle-gateway-installation-for-mssql-server-on-rhel-7/ <h2>Oracle Database Gateways</h2> <p>Oracle Database Gateways address the needs of disparate data access. In a heterogeneously distributed environment, Gateways make it possible to integrate with any number of non-Oracle systems from an Oracle application. 
They enable integration with data stores such as IBM DB2, Microsoft SQL Server and Excel, transaction managers like IBM CICS and message queuing systems like IBM WebSphere MQ.</p> <p>Here, Oracle Gateways is installed so that the data in the MSSQL Server database can be accessed from the Oracle database through a DBLINK.</p> <p><strong>&#8211;SQLSERVER DATABASE CONFIGURATION AND PARAMETERS<br /> </strong></p> <p>SqlServer Ver         : 2016<br /> Hostname              : dugg-uh-oi-pt<br /> Listener Port          : 1433<br /> Instance name       : MSSQLSERVER<br /> DB Name               : fossil<br /> IP                           : 192.168.1.10</p> <p><strong>&#8211;ORACLE DATABASE CONFIGURATION</strong> <strong>AND PARAMETERS</strong><br /> Oracle DB ver        : 11g (11.2.0.2.0)<br /> HOSTNAME           : oralinuxfossil01.grgh.oracle-help.com<br /> Listener Port          : 1521<br /> Instance Name      : fossilDVL<br /> DB Name               : fossilDVL<br /> IP                           : 192.168.1.11<br /> ORACLE_HOME     : /Oracle/app/oracle/11gR2/rdbms/11.2.0.2</p> <p><strong>&#8211;ORACLE GATEWAY CONFIGURATION AND PARAMETERS<br /> </strong></p> <p>ORA GATEWAY VER   : 11g<br /> Listener Port               : 1523<br /> ORACLE_HOME         : /Oracle/app/oracle/gateway</p> <p>Download and extract <strong><em>linux.x64_11gR2_gateways.zip</em></strong> from <a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linx8664soft-100572.html">OTN</a>.</p> <p>Run ./runInstaller on the Linux server:</p> <p>[oracle@oralinuxfossil01 admin]$ ./runInstaller</p> <p>1. Welcome Screen</p> <p><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png"><img data-attachment-id="5797" data-permalink="http://oracle-help.com/oracle-database/oracle-gateway-installation-for-mssql-server-on-rhel-7/attachment/1-69/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?fit=652%2C516" data-orig-size="652,516" 
data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="1" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?fit=300%2C237" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?fit=652%2C516" class="wp-image-5797 aligncenter" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?resize=492%2C388" alt="" width="492" height="388" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?resize=300%2C237 300w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?resize=60%2C47 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?resize=150%2C119 150w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/1-1.png?w=652 652w" sizes="(max-width: 492px) 100vw, 492px" data-recalc-dims="1" /></a></p> <p>2. 
Specify the Oracle Gateway Path and Home Name</p> <p><a href="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png"><img data-attachment-id="5798" data-permalink="http://oracle-help.com/oracle-database/oracle-gateway-installation-for-mssql-server-on-rhel-7/attachment/2-61/" data-orig-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?fit=657%2C518" data-orig-size="657,518" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="2" data-image-description="" data-medium-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?fit=300%2C237" data-large-file="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?fit=657%2C518" class="wp-image-5798 aligncenter" src="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?resize=490%2C387" alt="" width="490" height="387" srcset="https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?resize=300%2C237 300w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?resize=60%2C47 60w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?resize=150%2C118 150w, https://i1.wp.com/oracle-help.com/wp-content/uploads/2018/10/2-1.png?w=657 657w" sizes="(max-width: 490px) 100vw, 490px" data-recalc-dims="1" /></a></p> <p>3. 
Select the option for MSSQL Server</p> <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png"><img data-attachment-id="5799" data-permalink="http://oracle-help.com/oracle-database/oracle-gateway-installation-for-mssql-server-on-rhel-7/attachment/3-53/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?fit=659%2C520" data-orig-size="659,520" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="3" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?fit=300%2C237" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?fit=659%2C520" class="wp-image-5799 aligncenter" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?resize=486%2C384" alt="" width="486" height="384" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?resize=300%2C237 300w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?resize=60%2C47 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?resize=150%2C118 150w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/3.png?w=659 659w" sizes="(max-width: 486px) 100vw, 486px" data-recalc-dims="1" /></a></p> <p>4. 
Provide all the required parameters in the box.</p> <p><a href="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png"><img data-attachment-id="5800" data-permalink="http://oracle-help.com/oracle-database/oracle-gateway-installation-for-mssql-server-on-rhel-7/attachment/4-46/" data-orig-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?fit=656%2C519" data-orig-size="656,519" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="4" data-image-description="" data-medium-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?fit=300%2C237" data-large-file="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?fit=656%2C519" class="wp-image-5800 aligncenter" src="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?resize=484%2C382" alt="" width="484" height="382" srcset="https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?resize=300%2C237 300w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?resize=60%2C47 60w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?resize=150%2C119 150w, https://i2.wp.com/oracle-help.com/wp-content/uploads/2018/10/4-1.png?w=656 656w" sizes="(max-width: 484px) 100vw, 484px" data-recalc-dims="1" /></a></p> <p>Click Next and the installer will install Oracle Gateways successfully.</p> <p>==================================================================================</p> <p><span style="font-size: 14pt"><strong>Configuration in Oracle Gateways Home</strong></span></p> <ul style="list-style-type: disc"> 
<li><strong>NETWORK CONFIGURATION FOR ORACLE GATEWAY</strong></li> </ul> <p><em>Only the LISTENER.ORA file needs to be configured in the Oracle Gateway home<br /> </em></p> <p>[oracle@oralinuxfossil01 admin]$ cd /Oracle/app/oracle/gateways/network/admin<br /> [oracle@oralinuxfossil01 admin]$ cat listener.ora<br /> # listener.ora Network Configuration File: /Oracle/app/oracle/gateways/network/admin/listener.ora<br /> # Generated by Oracle configuration tools.</p> <p><strong>LISTENERDG</strong> =<br /> (DESCRIPTION_LIST =<br /> (DESCRIPTION =<br /> (ADDRESS = (PROTOCOL = TCP)(HOST = oralinuxfossil01.grgh.oracle-help.com)(PORT = 1523))<br /> )<br /> )</p> <p><strong>SID_LIST_LISTENERDG</strong>=<br /> (SID_LIST=<br /> (SID_DESC=<br /> (SID_NAME=fossil)<br /> (ORACLE_HOME=/Oracle/app/oracle/gateways)<br /> (ENV="LD_LIBRARY_PATH=/Oracle/app/oracle/gateways/dg4msql/driver/lib:/Oracle/app/oracle/gateways/lib")<br /> (PROGRAM=dg4msql)<br /> )<br /> )</p> <p>ADR_BASE_LISTENERDG = /Oracle/app/oracle</p> <p>[oracle@oralinuxfossil01 admin]$ cd /Oracle/app/oracle/gateways/bin</p> <p>[oracle@oralinuxfossil01 bin]$ ./lsnrctl start <strong>LISTENERDG</strong></p> <ul> <li><strong>INIT PARAMETER FILE FOR ORACLE GATEWAY; IT WILL BE USED TO CONNECT TO THE SQLSERVER DATABASE.</strong></li> </ul> <p><em>The gateway parameter file name must be the same as the SID given in the listener file.</em></p> <p>[oracle@oralinuxfossil01 admin]$ cd /Oracle/app/oracle/gateways/dg4msql/admin</p> <p>[oracle@oralinuxfossil01 admin]$ cat initfossil.ora<br /> # This is a customized agent init file that contains the HS parameters<br /> # that are needed for the Database Gateway for Microsoft SQL Server</p> <p>#<br /> # HS init parameters<br /> #<br /> HS_FDS_CONNECT_INFO=[dugg-uh-oi-pt]:1433//fossil<br /> # alternate connect format is hostname/serverinstance/databasename<br /> HS_FDS_TRACE_LEVEL=OFF<br /> HS_FDS_RECOVERY_ACCOUNT=RECOVER<br /> HS_FDS_RECOVERY_PWD=RECOVER</p> 
<p>=================================================================================</p> <p><span style="font-size: 14pt"><strong>Configuration in Oracle Database Home</strong></span></p> <ul style="list-style-type: disc"> <li><strong>NETWORK CONFIGURATION FOR ORACLE GATEWAY</strong><br /> <em>Only an entry needs to be added in TNSNAMES.ORA<br /> </em></li> </ul> <p>[oracle@oralinuxfossil01 admin]$ cd $ORACLE_HOME/network/admin</p> <p>[oracle@oralinuxfossil01 admin]$ cat listener.ora</p> <p>LISTENER =<br /> (DESCRIPTION_LIST =<br /> (DESCRIPTION =<br /> (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.11)(PORT = 1521))<br /> (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))<br /> )<br /> )</p> <p>[oracle@oralinuxfossil01 admin]$ cat tnsnames.ora</p> <p><strong>fossil</strong>=<br /> (DESCRIPTION=<br /> (ADDRESS=(PROTOCOL=tcp)(HOST=oralinuxfossil01.grgh.oracle-help.com)(PORT=1523))<br /> (CONNECT_DATA=(SID=<strong>fossil</strong>))<br /> (HS=OK)<br /> )</p> <p><strong><em>Check the tnsping output</em></strong></p> <p>[oracle@oralinuxfossil01 admin]$ tnsping fossil</p> <p>TNS Ping Utility for Linux: Version 11.2.0.1 on 27-OCT-2018 14:46:28</p> <p>Copyright (c) 2001, 2015 Oracle Corporation. 
All rights reserved.</p> <p>Used parameter files:<br /> Used TNSNAMES adapter to resolve the alias<br /> Attempting to contact (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oralinuxfossil01.grgh.oracle-help.com)(PORT=1523)) (CONNECT_DATA=(SID=fossil)) (HS=OK) )<br /> OK (10 msec)</p> <p>Now all the configuration for Oracle Gateways is complete.</p> <p>=================================================================================</p> <p><em>Create a <strong>dblink</strong> in the Oracle database from which the SQL Server data needs to be fetched.</em></p> <p><em><strong>ag33</strong> is the dblink name, which will access SQL Server from the Oracle database.</em></p> <p><em><strong>sqluser</strong> is the user for connecting to the SQL Server database.</em></p> <p><em><strong>fossil123</strong> is the password for the SQL Server user &#8220;sqluser&#8221; to log in to the database.</em></p> <p><em><strong>fossil</strong> is the TNS service name for the SQL Server connection.</em></p> <p>[oracle@oralinuxfossil01 admin]$ !sq<br /> sqlplus / as sysdba</p> <p>SQL*Plus: Release 11.2.0.1.0 Production on Sat Oct 27 15:12:58 2018</p> <p>Copyright (c) 1982, 2012, Oracle. 
All rights reserved.</p> <p>15:23:30 SYS @ Tissot &gt; CREATE PUBLIC DATABASE LINK ag33 CONNECT TO "sqluser" IDENTIFIED BY "fossil123" USING 'fossil';</p> <p>Database link created.</p> <p>15:23:55 SYS @ Tissot &gt; select sysdate from dual@ag33;</p> <p>SYSDATE<br /> &#8212;&#8212;&#8212;<br /> 27-OCT-18</p> <p>=================================================================================</p> <p>Thanks for reading this article.</p> <p>&nbsp;</p> <p>The post <a rel="nofollow" href="http://oracle-help.com/oracle-database/oracle-gateway-installation-for-mssql-server-on-rhel-7/">Oracle Gateway Installation for MSSQL Server on RHEL 7</a> appeared first on <a rel="nofollow" href="http://oracle-help.com">ORACLE-HELP</a>.</p> Arun Gupta http://oracle-help.com/?p=5796 Sat Oct 27 2018 06:02:27 GMT-0400 (EDT) Partitioning - 7 : Interval Partitioning https://hemantoracledba.blogspot.com/2018/10/partitioning-7-interval-partitioning.html <div dir="ltr" style="text-align: left;" trbidi="on">Interval Partitioning was introduced in 11g as an enhancement to Range Partitioning, but supporting only DATE and NUMBER datatypes.&nbsp; This allows you to define the interval for each Partition and leave it to the database engine to automatically create new Partitions as required when data is inserted.&nbsp; Thus, you do not have to pre-create Partitions for future data.<br /><br />Here is a demo with Monthly Date Intervals.<br /><br /><pre>SQL&gt; create table manufacturing_summary<br /> 2 (manufacture_date date,<br /> 3 item_code varchar2(32),<br /> 4 item_quantity number(8,0))<br /> 5 partition by range (manufacture_date)<br /> 6 interval (numtoyminterval(1,'MONTH'))<br /> 7 (partition P_1 values less than (to_date('01-JUL-2018','DD-MON-YYYY')))<br /> 8 /<br /><br />Table created.<br /><br />SQL&gt; set long 32 <br />SQL&gt; select partition_name, high_value<br /> 2 from user_tab_partitions<br /> 3 where table_name = 'MANUFACTURING_SUMMARY'<br /> 4 /<br /><br />PARTITION_NAME HIGH_VALUE<br
/>------------------------------ --------------------------------<br />P_1 TO_DATE(' 2018-07-01 00:00:00',<br /><br />SQL&gt; <br /></pre><br /><br />The INTERVAL clause specifies how the upper bounds for new Partitions are to be defined.&nbsp; I only need to name the boundary for the first (lowest) Partition and name the Partition.&nbsp; All subsequent Partitions are automatically created with names assigned by Oracle and high values based on the INTERVAL clause.<br /><br />Let me insert a few rows.<br /><br /><pre>SQL&gt; insert into manufacturing_summary<br /> 2 (manufacture_date, item_code, item_quantity)<br /> 3 values<br /> 4 (to_date('29-JUN-2018','DD-MON-YYYY'), 'ABC123',4000) <br /> 5 /<br /><br />1 row created.<br /><br />SQL&gt; insert into manufacturing_summary<br /> 2 values (to_date('01-JUL-2018','DD-MON-YYYY'),'ABC123',3000)<br /> 3 /<br /><br />1 row created.<br /><br />SQL&gt; insert into manufacturing_summary<br /> 2 values (to_date('01-JUL-2018','DD-MON-YYYY'),'FGH422',1000)<br /> 3 /<br /><br />1 row created.<br /><br />SQL&gt; commit;<br /><br />Commit complete.<br /><br />SQL&gt; <br />SQL&gt; select partition_name, high_value<br /> 2 from user_tab_partitions<br /> 3 where table_name = 'MANUFACTURING_SUMMARY'<br /> 4 order by partition_position<br /> 5 /<br /><br />PARTITION_NAME HIGH_VALUE<br />------------------------------ --------------------------------<br />P_1 TO_DATE(' 2018-07-01 00:00:00',<br />SYS_P519 TO_DATE(' 2018-08-01 00:00:00',<br /><br />SQL&gt; <br /></pre><br /><br />Oracle automatically created Partition SYS_P519 for July data.<br /><br />What happens if manufacturing data is not available from 02-Jul-2018 to, say, 04-Sep-2018 ?&nbsp; And availability of data resumes only on 05-Sep-2018 ?<br /><br /><pre>SQL&gt; insert into manufacturing_summary<br /> 2 values (to_date('05-SEP-2018','DD-MON-YYYY'),'ABC123',3000) <br /> 3 /<br /><br />1 row created.<br /><br />SQL&gt; commit;<br /><br />Commit complete.<br /><br />SQL&gt; 
select partition_position, partition_name, high_value<br /> 2 from user_tab_partitions<br /> 3 where table_name = 'MANUFACTURING_SUMMARY'<br /> 4 order by partition_position<br /> 5 /<br /><br />PARTITION_POSITION PARTITION_NAME<br />------------------ ------------------------------<br />HIGH_VALUE<br />--------------------------------<br /> 1 P_1<br />TO_DATE(' 2018-07-01 00:00:00',<br /><br /> 2 SYS_P519<br />TO_DATE(' 2018-08-01 00:00:00',<br /><br /> 3 SYS_P520<br />TO_DATE(' 2018-10-01 00:00:00',<br /><br /><br />SQL&gt; <br /></pre><br /><br />The third Partition, SYS_P520, is created with the Upper Bound (HIGH_VALUE) of 01-Oct for the September data.<br /><br />What if August data becomes available subsequently and is inserted ?<br /><br /><pre>SQL&gt; insert into manufacturing_summary<br /> 2 values (to_date('10-AUG-2018','DD-MON-YYYY'),'ABC123',1500)<br /> 3 /<br /><br />1 row created.<br /><br />SQL&gt; commit;<br /><br />Commit complete.<br /><br />SQL&gt; select partition_position, partition_name, high_value<br /> 2 from user_tab_partitions<br /> 3 where table_name = 'MANUFACTURING_SUMMARY'<br /> 4 order by partition_position<br /> 5 /<br /><br />PARTITION_POSITION PARTITION_NAME<br />------------------ ------------------------------<br />HIGH_VALUE<br />--------------------------------<br /> 1 P_1<br />TO_DATE(' 2018-07-01 00:00:00',<br /><br /> 2 SYS_P519<br />TO_DATE(' 2018-08-01 00:00:00',<br /><br /> 3 SYS_P521<br />TO_DATE(' 2018-09-01 00:00:00',<br /><br /> 4 SYS_P520<br />TO_DATE(' 2018-10-01 00:00:00',<br /><br /><br />SQL&gt; <br /></pre><br /><br />A new Partition with the HIGH_VALUE of 01-Sept did get created as SYS_P521 and inserted into the ordered position 3, while the previously created Partition SYS_P520 (HIGH_VALUE 01-Oct) got renumbered to 4.&nbsp; We can verify this by actually querying the Partitions.<br /><br /><pre>SQL&gt; select * from manufacturing_summary partition (SYS_P521);<br /><br />MANUFACTU ITEM_CODE ITEM_QUANTITY<br
/>--------- -------------------------------- -------------<br />10-AUG-18 ABC123 1500<br /><br />SQL&gt;<br />SQL&gt; select * from manufacturing_summary partition (SYS_P520);<br /><br />MANUFACTU ITEM_CODE ITEM_QUANTITY<br />--------- -------------------------------- -------------<br />05-SEP-18 ABC123 3000<br /><br />SQL&gt; <br /></pre><br /><br />SYS_P520 was created first for September data although no August data existed.&nbsp; SYS_P521 was created subsequently for August data which was inserted later.<br /><br />Remember this : NEVER rely on Partition Names to attempt to identify what data is in a Partition.&nbsp; Always use PARTITION_POSITION and HIGH_VALUE to identify the logical position (rank) and the data that is present in the Partition.<br /><br />Where do the Partition names SYS_P519, SYS_P520, SYS_P521 come from ?&nbsp; They are from a system defined sequence, self-managed by Oracle.<br /><br />Let me demonstrate this with another example.<br /><br /><pre>SQL&gt; l<br /> 1 create table dummy_intvl_tbl<br /> 2 (id_col number,<br /> 3 data_col varchar2(15))<br /> 4 partition by range(id_col)<br /> 5 interval (100)<br /> 6* (partition P_1 values less than (101))<br />SQL&gt; /<br /><br />Table created.<br /><br />SQL&gt; insert into dummy_intvl_tbl<br /> 2 values (50,'data1');<br /><br />1 row created.<br /><br />SQL&gt;<br />SQL&gt; insert into dummy_intvl_tbl<br /> 2 values (150,'data3');<br /><br />1 row created.<br /><br />SQL&gt; <br />SQL&gt; insert into manufacturing_summary<br /> 2 values (to_date('25-OCT-2018','DD-MON-YYYY'),'FGH422',500);<br /><br />1 row created.<br /><br />SQL&gt; commit;<br /><br />Commit complete.<br /><br />SQL&gt; <br />SQL&gt; select table_name, partition_position, partition_name, high_value<br /> 2 from user_tab_partitions<br /> 3 where table_name in ('MANUFACTURING_SUMMARY','DUMMY_INTVL_TBL')<br /> 4 order by 1,2<br /> 5 /<br /><br />TABLE_NAME PARTITION_POSITION PARTITION_NAME<br />------------------------------ 
------------------ ------------------------------<br />HIGH_VALUE<br />--------------------------------<br />DUMMY_INTVL_TBL 1 P_1<br />101<br /><br />DUMMY_INTVL_TBL 2 SYS_P525<br />201<br /><br />MANUFACTURING_SUMMARY 1 P_1<br />TO_DATE(' 2018-07-01 00:00:00',<br /><br />MANUFACTURING_SUMMARY 2 SYS_P519<br />TO_DATE(' 2018-08-01 00:00:00',<br /><br />MANUFACTURING_SUMMARY 3 SYS_P521<br />TO_DATE(' 2018-09-01 00:00:00',<br /><br />MANUFACTURING_SUMMARY 4 SYS_P520<br />TO_DATE(' 2018-10-01 00:00:00',<br /><br />MANUFACTURING_SUMMARY 5 SYS_P526<br />TO_DATE(' 2018-11-01 00:00:00',<br /><br /><br />7 rows selected.<br /><br />SQL&gt; <br /></pre><br /><br />Note how Partition Name SYS_P525 was allocated to DUMMY_INTVL_TBL and then SYS_P526 to MANUFACTURING_SUMMARY.<br />These System Defined Partition names use a *global* sequence, not tied to a specific table.<br /><br />Can you rename the System Defined Partition after it has been automatically created ?<br /><br /><pre>SQL&gt; alter table manufacturing_summary<br /> 2 rename partition SYS_P519 to Y18M07<br /> 3 /<br /><br />Table altered.<br /><br />SQL&gt; alter table manufacturing_summary<br /> 2 rename partition SYS_P520 to Y18M09<br /> 3 /<br /><br />Table altered.<br /><br />SQL&gt; alter table manufacturing_summary<br /> 2 rename partition SYS_P521 to Y18M08<br /> 3 /<br /><br />Table altered.<br /><br />SQL&gt; alter table manufacturing_summary<br /> 2 rename partition SYS_P526 to Y18M10 <br /> 3 /<br /><br />Table altered.<br /><br />SQL&gt; <br />SQL&gt; select partition_name, high_value<br /> 2 from user_tab_partitions<br /> 3 where table_name = 'MANUFACTURING_SUMMARY'<br /> 4 order by partition_position<br /> 5 /<br /><br />PARTITION_NAME HIGH_VALUE<br />------------------------------ --------------------------------<br />P_1 TO_DATE(' 2018-07-01 00:00:00',<br />Y18M07 TO_DATE(' 2018-08-01 00:00:00',<br />Y18M08 TO_DATE(' 2018-09-01 00:00:00',<br />Y18M09 TO_DATE(' 2018-10-01 00:00:00',<br />Y18M10 
TO_DATE(' 2018-11-01 00:00:00',<br /><br />SQL&gt; <br /></pre><br /><br />Yes, fortunately, you CAN rename the Partitions *after* they are automatically created.<br /><br /><br /><br /></div> Hemant K Chitale tag:blogger.com,1999:blog-1931548025515710472.post-4944631522074721855 Sat Oct 27 2018 02:32:00 GMT-0400 (EDT) LEAP#429 3x7 Pomodoro Timer https://blog.tardate.com/2018/10/leap429-3x7-pomodoro-timer.html <p>Over the years, I’ve become habituated to working in a <a href="https://en.wikipedia.org/wiki/Pomodoro_Technique">Pomodoro</a> style - make the day a series of tasks worked on in short blocks of time, with regular breaks. But I’ve never actually used a timer - just relied on my internal clock to work in roughly 1 hour increments.</p> <p>As I was building the Boldport 3x7, it started to appeal to me as a very nice display to use for a non-distracting Pomodoro timer.</p> <p>After breadboarding the idea, my first thought was to make a PCB … but as there’s been a bit of <a href="https://twitter.com/MohitBhoite">Mohit Bhoite</a> fandom in the Boldport Club recently, I was drawn into another copper-wire sculpture. Not very ruggedized, but it does look interesting!</p> <p>Now for the true test - is it actually useful? Well, I’ve started using it for real and so far so good.</p> <p>Note: the two left-most digits are minutes, the last digit is tenths of minutes. This is actually why I built my 3x7 with the yellow digit on the right;-)</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/3x7/PomodoroTimer">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a></p> <p><a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/3x7/PomodoroTimer"><img src="https://leap.tardate.com/BoldportClub/3x7/PomodoroTimer/assets/PomodoroTimer_build.jpg" alt="hero_image" /></a></p> <p>Here’s a demonstration of a 5 minute countdown. 
Yes, that’s 5 minutes of your life that is non-refundable!</p> <iframe class="youtube-embed" src="https://www.youtube.com/embed/ZsGqnc2DhiA" frameborder="0" allowfullscreen=""></iframe> https://blog.tardate.com/2018/10/leap429-3x7-pomodoro-timer.html Fri Oct 26 2018 12:43:28 GMT-0400 (EDT) Preventing 500 Status Codes with Oracle REST Data Services https://www.thatjeffsmith.com/archive/2018/10/preventing-500-status-codes-with-oracle-rest-data-services/ <p>In my Oracle REST Services Demos I always show things working exactly as planned.</p> <p>But what happens when your user does something your program doesn&#8217;t expect?</p> <p>Or what happens when your code does something you don&#8217;t expect? In the web (HTTP) world, you get a 500.</p> <p><em>10.5.1 500 Internal Server Error<br /> The server encountered an unexpected condition which prevented it from fulfilling the request.</em></p> <p>Thankfully, PL/SQL provides EXCEPTIONS. We can say, when this bad thing happens, run this code instead. And really, as developers we should be good enough to expect certain problems to occur and to plan for them.</p> <p>When something doesn&#8217;t work in your database code attached to a RESTful Service, you get an HTTP status code of 500.</p> <p>This isn&#8217;t friendly.</p> <p>Let&#8217;s look at an un-handled exception in ORDS from the user perspective.</p> <h3>Bad</h3> <div id="attachment_7055" style="width: 559px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/unhandled-exception.png" alt="" width="549" height="521" class="size-full wp-image-7055" /><p class="wp-caption-text">Boo!</p></div> <p>This looks quite unprofessional. Not the warm and fuzzy feel you want when doing business with a partner or vendor. What if that was the response I got when trying to buy the WiFi on my airline with my credit card? (A completely fake/did not really happen in real life scenario which happened to me yesterday). 
</p> <h3>Less Bad</h3> <div id="attachment_7056" style="width: 560px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/handled-exception.png" alt="" width="550" height="426" class="size-full wp-image-7056" /><p class="wp-caption-text">This is much better, or less bad than a 500.</p></div> <p>We have a proper status code and a message returned from the server (ORDS).</p> <p>Now, a perfect scenario would include having some client-side validation of the inputs, preventing me from sending bad values in the first place. But even then, it&#8217;s best to plan for the common scenarios. For example, what if your service is available OUTSIDE the intended application where there is no validation of inputs happening?</p> <p>Anyways, let&#8217;s see how I made this happen. And it&#8217;s quite simple really. </p> <h3>The Exception</h3> <p>When you try to select a string into a number in Oracle, you get a ORA-06502. And if we look into the ORDS log, we can see this pop out when I make the bad call w/o handling the exception:</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="plsql"><pre class="de1">Caused <span class="kw1">BY</span><span class="sy0">:</span> Error <span class="sy0">:</span> <span class="nu0">6502</span><span class="sy0">,</span> Position <span class="sy0">:</span> <span class="nu0">0</span><span class="sy0">,</span> <span class="kw1">SQL</span> <span class="sy0">=</span> <span class="kw1">DECLARE</span> x <span class="kw1">INTEGER</span><span class="sy0">;</span> <span class="kw1">BEGIN</span> <span class="kw1">SELECT</span> <span class="sy0">:</span><span class="nu0">1</span> <span class="kw1">INTO</span> x <span class="kw1">FROM</span> dual<span class="sy0">;</span> <span class="sy0">:</span><span class="nu0">2</span> <span 
class="sy0">:=</span> <span class="st0">'You passed in this number: '</span> <span class="sy0">||</span> x<span class="sy0">;</span> <span class="kw1">END</span><span class="sy0">;,</span> OriginalSql <span class="sy0">=</span> <span class="kw1">DECLARE</span> x <span class="kw1">INTEGER</span><span class="sy0">;</span> <span class="kw1">BEGIN</span> <span class="kw1">SELECT</span> ? <span class="kw1">INTO</span> x <span class="kw1">FROM</span> dual<span class="sy0">;</span> ? <span class="sy0">:=</span> <span class="st0">'You passed in this number: '</span> <span class="sy0">||</span> x<span class="sy0">;</span> <span class="kw1">END</span><span class="sy0">;,</span> Error Msg <span class="sy0">=</span> ORA<span class="sy0">-</span>06502<span class="sy0">:</span> PL<span class="sy0">/</span><span class="kw1">SQL</span><span class="sy0">:</span> numeric <span class="kw1">OR</span> <span class="kw2">VALUE</span> error<span class="sy0">:</span> character <span class="kw1">TO</span> <span class="kw1">NUMBER</span> conversion error ORA<span class="sy0">-</span>06512<span class="sy0">:</span> <span class="kw1">AT</span> line <span class="nu0">4</span></pre></div></div></div></div></div></div></div> <p>So, I need to account for the ORA-06502 in my handler code. 
Here&#8217;s how I&#8217;ve done that.</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="plsql"><pre class="de1"><span class="kw1">DECLARE</span> x <span class="kw1">INTEGER</span><span class="sy0">;</span> not_a_number <span class="kw1">EXCEPTION</span><span class="sy0">;</span> <span class="kw1">PRAGMA</span> EXCEPTION_INIT<span class="br0">&#40;</span>not_a_number<span class="sy0">,</span> <span class="sy0">-</span>06502<span class="br0">&#41;</span><span class="sy0">;</span> <span class="kw1">BEGIN</span> <span class="kw1">SELECT</span> <span class="sy0">:</span>num_in <span class="kw1">INTO</span> x <span class="kw1">FROM</span> dual<span class="sy0">;</span> <span class="sy0">:</span>string_out <span class="sy0">:=</span> <span class="st0">'You passed in this number: '</span> <span class="sy0">||</span> x<span class="sy0">;</span> <span class="kw1">EXCEPTION</span> <span class="kw1">WHEN</span> not_a_number <span class="kw1">THEN</span> <span class="sy0">:</span>string_out <span class="sy0">:=</span> <span class="st0">'That was NOT a number!'</span><span class="sy0">;</span> <span class="sy0">:</span>status <span class="sy0">:=</span> <span class="nu0">400</span><span class="sy0">;</span> <span class="kw1">END</span><span class="sy0">;</span></pre></div></div></div></div></div></div></div> <p>I&#8217;ve set the out going message/response to something a bit more helpful than a ¯\_(ツ)_/¯ and I&#8217;ve set the response to a HTTP 400, which means:</p> <p>&#8216;<em>10.4.1 400 Bad Request &#8211; The request could not be understood by the server due to malformed syntax. 
The client SHOULD NOT repeat the request without modifications.</em>&#8216;</p> <p>Note to pass the status back I had to set a parameter for my handler code for &#8216;X-ORDS-STATUS-CODE&#8217;</p> <div id="attachment_7057" style="width: 648px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/x-ords-status-code.png" alt="" width="638" height="144" class="size-full wp-image-7057" /><p class="wp-caption-text">This is new for 18.3 of ORDS, the X-APEX codes are being deprecated.</p></div> <h3>I&#8217;m Getting 500&#8217;s but I Don&#8217;t Know Why?</h3> <p>You need to get to the ORDS standard output logs. If you&#8217;re running ORDS, hopefully you&#8217;re logging that somewhere. If it&#8217;s Tomcat (Catalina) or WLS, they have places for that. If it&#8217;s in standalone mode, you need to redirect that out to a file yourself. </p> <p>Or&#8230;<a href="https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/18.3/aelig/troubleshooting-REST.html#GUID-459D10EB-4E62-4D28-92EB-BE1886615E00" rel="noopener" target="_blank">per the troubleshooting guide (DOCS!)</a>, you can do this:</p> <div id="attachment_7059" style="width: 768px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/500-print2.png" alt="" width="758" height="377" class="size-full wp-image-7059" /><p class="wp-caption-text">DO NOT DO THIS IN PROD.</p></div> <p>Restart ORDS, run your request again:</p> <div id="attachment_7058" style="width: 562px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/500-print1.png" alt="" width="552" height="589" class="size-full wp-image-7058" /><p class="wp-caption-text">DO NOT DO THIS IN PROD.</p></div> <p>By the way, DO NOT DO THIS IN PROD. You will be exposing details of your database to people you do not want to. Like those mysql errors you see when you try to hit a website and it overloads their system. 
Oh, that&#8217;s MySQL, and they have a table named &#8216;XYZ&#8217;&#8230;ahhh. Yeah, that&#8217;s bad. </p> <h3>The Code</h3> <p>Here&#8217;s how I called it:</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="text"><pre class="de1">curl --request GET \ --url http://localhost:8080/ords/hr/exceptions/unhandled \ --header 'authorization: Basic Y29sbTpvcmFjbGU=' \ --header 'num_in: hello'</pre></div></div></div></div></div></div></div> <p>My ORDS handler pulls the value out of the header (num_in) and tries to convert it to a number (x) &#8211; which works just fine if your string just happens to be a number already. I then pass back a message saying, hey, thanks for passing me that number, and here it is again just so you know I got it correctly.</p> <p>Here&#8217;s the module for my exception, handled and un-handled.</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="plsql"><pre class="de1"><span class="co1">-- Generated by Oracle SQL Developer REST Data Services 18.3.0.276.0148</span> <span class="co1">-- Exported REST Definitions from ORDS Schema Version 18.3.0.r2701456</span> <span class="co1">-- Schema: HR Date: Fri Oct 26 11:14:42 EDT 2018</span> <span class="co1">--</span> <span class="kw1">BEGIN</span> ORDS<span class="sy0">.</span>ENABLE_SCHEMA<span class="br0">&#40;</span> p_enabled <span class="sy0">=&gt;</span> <span class="kw1">TRUE</span><span class="sy0">,</span> p_schema <span class="sy0">=&gt;</span> <span class="st0">'HR'</span><span class="sy0">,</span> p_url_mapping_type <span class="sy0">=&gt;</span> <span class="st0">'BASE_PATH'</span><span class="sy0">,</span> 
p_url_mapping_pattern <span class="sy0">=&gt;</span> <span class="st0">'hr'</span><span class="sy0">,</span> p_auto_rest_auth <span class="sy0">=&gt;</span> <span class="kw1">FALSE</span><span class="br0">&#41;</span><span class="sy0">;</span> &nbsp; ORDS<span class="sy0">.</span>DEFINE_MODULE<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_base_path <span class="sy0">=&gt;</span> <span class="st0">'/exceptions/'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">25</span><span class="sy0">,</span> p_status <span class="sy0">=&gt;</span> <span class="st0">'PUBLISHED'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_TEMPLATE<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'handled'</span><span class="sy0">,</span> p_priority <span class="sy0">=&gt;</span> <span class="nu0">0</span><span class="sy0">,</span> p_etag_type <span class="sy0">=&gt;</span> <span class="st0">'HASH'</span><span class="sy0">,</span> p_etag_query <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_HANDLER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'handled'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_source_type <span 
class="sy0">=&gt;</span> <span class="st0">'plsql/block'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">25</span><span class="sy0">,</span> p_mimes_allowed <span class="sy0">=&gt;</span> <span class="st0">''</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_source <span class="sy0">=&gt;</span> <span class="st0">'declare x integer; not_a_number EXCEPTION; PRAGMA EXCEPTION_INIT(not_a_number, -06502); begin select :num_in into x from dual; :string_out := '</span><span class="st0">'You passed in this number: '</span><span class="st0">' || x; EXCEPTION WHEN not_a_number THEN :string_out := '</span><span class="st0">'That was NOT a number!'</span><span class="st0">'; :status := 400; end;'</span> <span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'handled'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'X-ORDS-STATUS-CODE'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'status'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'HEADER'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'INT'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'OUT'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span 
class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'handled'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'num_in'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'num_in'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'HEADER'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'STRING'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'IN'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'handled'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'string_out'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'string_out'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'RESPONSE'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'STRING'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'OUT'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span 
class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_TEMPLATE<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'unhandled'</span><span class="sy0">,</span> p_priority <span class="sy0">=&gt;</span> <span class="nu0">0</span><span class="sy0">,</span> p_etag_type <span class="sy0">=&gt;</span> <span class="st0">'HASH'</span><span class="sy0">,</span> p_etag_query <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_HANDLER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'unhandled'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'plsql/block'</span><span class="sy0">,</span> p_items_per_page <span class="sy0">=&gt;</span> <span class="nu0">25</span><span class="sy0">,</span> p_mimes_allowed <span class="sy0">=&gt;</span> <span class="st0">''</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="sy0">,</span> p_source <span class="sy0">=&gt;</span> <span class="st0">'declare x integer; begin select :num_in into x from dual; :string_out := '</span><span class="st0">'You passed in this number: '</span><span class="st0">' || x; end;'</span> <span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span 
class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'unhandled'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'num_in'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'num_in'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'HEADER'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'STRING'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'IN'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> ORDS<span class="sy0">.</span>DEFINE_PARAMETER<span class="br0">&#40;</span> p_module_name <span class="sy0">=&gt;</span> <span class="st0">'exceptions'</span><span class="sy0">,</span> p_pattern <span class="sy0">=&gt;</span> <span class="st0">'unhandled'</span><span class="sy0">,</span> p_method <span class="sy0">=&gt;</span> <span class="st0">'GET'</span><span class="sy0">,</span> p_name <span class="sy0">=&gt;</span> <span class="st0">'string_out'</span><span class="sy0">,</span> p_bind_variable_name <span class="sy0">=&gt;</span> <span class="st0">'string_out'</span><span class="sy0">,</span> p_source_type <span class="sy0">=&gt;</span> <span class="st0">'RESPONSE'</span><span class="sy0">,</span> p_param_type <span class="sy0">=&gt;</span> <span class="st0">'STRING'</span><span class="sy0">,</span> p_access_method <span class="sy0">=&gt;</span> <span class="st0">'OUT'</span><span class="sy0">,</span> p_comments <span class="sy0">=&gt;</span> <span class="kw1">NULL</span><span class="br0">&#41;</span><span class="sy0">;</span> &nbsp; &nbsp; <span 
class="kw1">COMMIT</span><span class="sy0">;</span> <span class="kw1">END</span><span class="sy0">;</span></pre></div></div></div></div></div></div></div> <!-- Easy AdSense Unfiltered [count: 3 is not less than 3] --> thatjeffsmith https://www.thatjeffsmith.com/?p=7054 Fri Oct 26 2018 11:27:46 GMT-0400 (EDT) Microsoft Ignite 2018 Special Cloudscape Podcast https://blog.pythian.com/cloudscape-podcast-microsoft-ignite-2018-special/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>This episode of the Cloudscape Podcast is dedicated to the announcements that were made at the recent Microsoft Ignite 2018 conference. We are joined solely by Warner Chaves to walk us through the most important and exciting announcements from the huge list of updates and releases aired at the conference. We chat about next year’s version of Microsoft SQL Server and the latest from Cosmos DB before launching into the intricacies of the rest of the news. Warner unpacks Azure Functions 2.0, HDInsight 4.0 and Hadoop 3.0! 
We also talk about Azure Dev Ops as the new, re-branded face of Visual Studio.</p> <p>For this and much more exciting details from Microsoft and Warner, be sure to tune in!</p> <p>Key points from this episode:</p> <p>• This year’s themes and pillars at Ignite.<br /> • SQL Server 2019 announced!<br /> • The new Hyperscale Managed Instances in Azure.<br /> • The latest from Cosmos DB.<br /> • Vendor lock-in and why Microsoft is shying away.<br /> • Azure Functions 2.0 and what is in store.<br /> • A rundown of the announcements regarding Kubernetes.<br /> • Microsoft’s automated machine learning preview.<br /> • Updates from HDInsight 4.0 and Hadoop 3.0.<br /> • The rebranding of Visual Studio into Azure DevOps.<br /> • Rounding up the Express Routes updates.<br /> • Ultra SSDs and ephemeral disks.<br /> • Improvements in Azure Database Migration Service<br /> • And much more!</p> <p><iframe src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/512308200&amp;color=%23ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true" width="100%" height="166" frameborder="no" scrolling="no"></iframe></p> <p>Links mentioned in today’s episode:</p> <p><a href="https://pythian.com/">Pythian</a><br /> <a href="https://pythian.com/experts/warner-chaves/">Warner Chaves</a><br /> <a href="https://www.microsoft.com/en-us/ignite">Microsoft Ignite 2018</a><br /> <a href="https://www.britannica.com/biography/Satya-Nadella">Satya Narayana Nadella</a><br /> <a href="https://azure.microsoft.com/en-us/blog/">Azure Blog</a><br /> <a href="https://cloudblogs.microsoft.com/sqlserver/2018/09/24/sql-server-2019-preview-combines-sql-server-and-apache-spark-to-create-a-unified-data-platform/">SQL Server</a><br /> <a href="https://docs.microsoft.com/en-us/sql/relational-databases/polybase/polybase-guide?view=sql-server-2017">PolyBase</a><br /> <a 
href="https://soundcloud.com/datascapepodcast">Datascape Podcast</a><br /> <a href="https://azure.microsoft.com/en-us/services/cosmos-db/">Cosmos DB</a><br /> <a href="https://aws.amazon.com/lambda/">Lambda</a><br /> <a href="https://www.jamesserra.com/archive/2014/02/what-is-hdinsight/">HDInsight</a><br /> <a href="https://hadoop.apache.org/">Hadoop</a><br /> <a href="http://druid.io/">Apache Druid</a><br /> <a href="https://visualstudio.microsoft.com/vs/features/azure/">Visual Studio</a><br /> <a href="https://azure.microsoft.com/en-us/blog/database-migration-service-and-tool-updates-ignite-2018/">Database Migration Service</a></p> </div></div> Chris Presley https://blog.pythian.com/?p=105299 Fri Oct 26 2018 09:57:05 GMT-0400 (EDT) Querying and Publishing Kafka Events from Oracle Database SQL and PL/SQL https://technology.amis.nl/2018/10/25/querying-and-publishing-kafka-events-from-oracle-database-sql-and-pl-sql/ <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_6351.jpg"><img width="595" height="338" title="IMG_6351" align="right" style="margin: 0px; float: right; display: inline; background-image: none;" alt="IMG_6351" src="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_6351_thumb.jpg" border="0"></a>One of the sessions at CodeOne 2018 discussed an upcoming feature for Oracle Database &#8211; supported in Release 12.2 and up &#8211; that would allow developers to consume Kafka events directly from SQL and PL/SQL and &#8211; at a later stage &#8211; also publish events from within the database straight to Kafka Topics. This article briefly walks through the feature as outlined in the session by Melli Annamalai, Senior Principal Product Manager at Oracle.</p> <p>Note: the pictures in this article are a little bit unclear as they are pictures taken of the slides shown in the session.</p> <p>The first stage of the Kafka support in Oracle Database is around consuming events. 
The database can be registered as a consumer (group) on a Kafka Topic (on a single, several or on all partitions). It then allows each database application that has an interest in the Kafka Topic to fetch the events and it will keep track of the application&#8217;s offset so as to allow easy control over at least once delivery.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_6364.jpg"><img width="598" height="338" title="IMG_6364" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="IMG_6364" src="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_6364_thumb.jpg" border="0"></a>The Oracle Database will not support continuous queries or streaming event analysis (like KSQL or Flink do). It makes it easy to receive (by periodically fetching) all events on a Kafka topic of interest.</p> <p>The Kafka-to-SQL connector as discussed in this article is planned to be available as part of Big Data Connectors (paid for product) and of Autonomous Data Warehouse Cloud. Depending on customer demand, other ways to get hold of the functionality may arise.</p> <p>The format of the Kafka message payload is described to the database through a table definition: each column is mapped to an element in the messages. CSV and JSON are supported &#8211; Avro is considered. 
At this moment, only flat payload structures (no nested elements) can be handled (similar to external tables).</p> <p>Syntax for registering a Kafka Cluster with the database:</p> <blockquote> <p>BEGIN<br /> dbms_kafka.register_cluster<br /> (&#8216;SENS2&#8217;,<br /> &#8216;&lt;Zookeeper URL&gt;:2181&#8217;,<br /> &#8216;&lt;Kafka broker URL&gt;:9092&#8217;,<br /> &#8216;DBMSKAFKA_DEFAULT_DIR&#8217;,<br /> &#8216;DBMSKAFKA_LOCATION_DIR&#8217;,<br /> &#8216;Testing DBMS KAFKA&#8217;);<br /> END;</p> </blockquote> <p>An example of the syntax required to create views to read messages from a specific Kafka Topic:</p> <blockquote> <p>DECLARE<br />&nbsp;&nbsp; views_created INTEGER;<br />&nbsp;&nbsp; view_prefix VARCHAR2(128);<br /> BEGIN<br /> DBMS_KAFKA.CREATE_KAFKA_VIEWS<br /> (&#8216;SENS2&#8217;&nbsp; -- logical identifier of the Kafka Cluster<br /> , &#8216;MONITORAPP&#8217; -- name of application (aka consumer group) in database<br /> , &#8216;sensor2&#8217; -- name of Kafka Topic<br /> , &#8216;sensormessages_shape_table&#8217;&nbsp; -- name of the database table that describes the message shape<br /> , views_created -- number of views created, corresponding to the number of partitions in the Kafka Topic<br /> , view_prefix<br /> );<br /> END;</p> </blockquote> <p>Two examples of SQL queries to retrieve Kafka messages from the views just created; note that Oracle adds the message properties partition, timestamp and offset:</p> <p>select count(*) <br /> from KV_SENS2_MONITORAPP_SENSOR2_0;</p> <p> select timestamp, sensorunitid, temperaturereading <br /> from KV_SENS2_MONITORAPP_SENSOR2_0;</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-44.png"><img width="643" height="325" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-44.png" border="0"></a></p> <p></p> <p>These queries do not load any 
data into the database: the data is retrieved from the Kafka Topic and returned as a query result, not stored anywhere.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-45.png"><img width="867" height="249" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-45.png" border="0"></a></p> <p>Messages can be loaded directly from the Kafka Topic into a table using a statement like the following:</p> <blockquote> <p>DECLARE<br /> rows_loaded number;<br /> BEGIN<br />&nbsp;&nbsp; dbms_kafka.load_table<br />&nbsp;&nbsp; ( &#8216;SENS2&#8217;, &#8216;LOADAPP&#8217;, &#8216;sensor2&#8217;<br />&nbsp;&nbsp; , &#8216;sensormessages_shape_table&#8217;, rows_loaded<br />&nbsp;&nbsp; );<br />&nbsp;&nbsp; dbms_output.put_line (&#8216;rows loaded: &#8217; || rows_loaded);<br /> END;</p> </blockquote> <p></p> <h3>Publish to Kafka Topic</h3> <p>At a later stage &#8211; version 2 of the connector &#8211; support is added for publishing events to Kafka:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_6366.jpg"><img width="597" height="338" title="IMG_6366" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="IMG_6366" src="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_6366_thumb.jpg" border="0"></a></p> <p>&nbsp;</p> <p></p> <p>Also on the roadmap is the ability to query messages from a Kafka Topic from a specified timestamp range.</p> <p></p> <p>Note: Oracle Big Data SQL also has support for retrieving data from a Kafka Topic, but in a fairly roundabout way; it requires a Hadoop cluster where Hive is running to get the Kafka event data and make that available to the Big Data SQL connector.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-46.png"><img width="867" height="243" title="image" style="margin: 0px auto; float: none; display: block; 
background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-46.png" border="0"></a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/25/querying-and-publishing-kafka-events-from-oracle-database-sql-and-pl-sql/">Querying and Publishing Kafka Events from Oracle Database SQL and PL/SQL</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50221 Thu Oct 25 2018 13:16:58 GMT-0400 (EDT) Announcing the 2019-2020 ODTUG Board of Directors https://www.odtug.com/p/bl/et/blogaid=834&source=1 Congratulations to the Newly Elected 2019–2020 ODTUG Board of Directors! ODTUG https://www.odtug.com/p/bl/et/blogaid=834&source=1 Thu Oct 25 2018 12:47:57 GMT-0400 (EDT) Dynamic Firewall Creation for Azure Cloud Shell https://dbakevlar.com/2018/10/dynamic-firewall-creation-for-azure-cloud-shell/ <p>I&#8217;ve been hesitant to post too much on my blog since the hack, as there were some residual issues after the restoration of the site that took a bit to correct.  I&#8217;m finally feeling confident enough to start posting on everything I&#8217;m currently doing with Azure and the education customers for Microsoft.</p> <figure class="wp-block-image"><img src="https://i1.wp.com/dbakevlar.com/wp-content/uploads/2018/10/confidence2.gif?w=650&#038;ssl=1" alt="" class="wp-image-8280" data-recalc-dims="1"/></figure> <p>One of the powerful tools I&#8217;ve been taking advantage of is the Azure Cloud Shell.  
This cloud tool offers the option of PowerShell or Bash, and I think you know which I chose.</p> <figure class="wp-block-image"><img src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall3-1.jpg?w=650&#038;ssl=1" alt="" class="wp-image-8277" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall3-1.jpg?w=1143&amp;ssl=1 1143w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall3-1.jpg?resize=300%2C197&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall3-1.jpg?resize=768%2C504&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall3-1.jpg?resize=1024%2C672&amp;ssl=1 1024w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /><figcaption>Azure Cloud Shell</figcaption></figure> <p>Although everything I do will require a PowerShell version in a later phase, the bash skills are strong in this one, and it only makes sense that I would take on new technology in the language that I already know to remove an extra layer of challenges.</p> <p>As you work with Azure and Microsoft products, (or with any Cloud platform) you&#8217;ll be required to build firewall rules to access the cloud environment from your local workstation or from any other location.  Microsoft has done an impressive job, as new releases and updates come out, of building an automated firewall rule creation step into most products, removing the manual requirement.  </p> <p>With my use of scripting and Azure Cloud Shell, I&#8217;m automating and building my environment, including SQL Database resources, and then need to access them and build the logical objects.  This means that I need a firewall rule built for the Azure Cloud Shell I&#8217;m working from.  
The IP for this cloud shell is unique to the session I&#8217;m running at that moment.</p> <p>The requirement to add this enhancement to my script is:</p> <ol><li>Capture and read the IP Address for the Azure Cloud Shell session.</li><li>Populate the IP Address into a firewall rule.</li><li>Log into the new SQL Server database that was created as part of the bash script and then execute SQL scripts.</li></ol> <p>Capture the IP Address from the <a href="https://shell.azure.com/">Azure Cloud Shell</a></p> <p>I chose to use a curl command to pull the correct IPv4 address for the Azure Cloud Shell.  There are a number of sites it can hit that will return the address to pass to the firewall creation script.  </p> <figure class="wp-block-image"><img src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall4.jpg?w=650&#038;ssl=1" alt="" class="wp-image-8279" srcset="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall4.jpg?w=1037&amp;ssl=1 1037w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall4.jpg?resize=300%2C29&amp;ssl=1 300w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall4.jpg?resize=768%2C74&amp;ssl=1 768w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/firewall4.jpg?resize=1024%2C99&amp;ssl=1 1024w" sizes="(max-width: 650px) 100vw, 650px" data-recalc-dims="1" /><figcaption>IP Address from Curl Command</figcaption></figure> <p>I used ifconfig.me, but there&#8217;s https://canihazip.com and a slew of others if you do a web search.  To populate the values in the script, it sets the following:</p> <pre class="wp-block-preformatted">echo "getting IP Address for Azure Cloud Shell for firewall rule"<br>export myip=$(curl http://ifconfig.me)<br>export startip=$myip<br>export endip=$myip</pre> <p>The creation command for a firewall rule requires a starting IP and an ending IP, so both receive the single IP address.  
This is an Azure resource creation command:</p> <pre class="wp-block-code"><code>az sql server firewall-rule create \<br>  --resource-group $groupname \<br>  --server $servername \<br>  -n AllowYourIp \<br>  --start-ip-address $startip \<br>  --end-ip-address $endip</code></pre> <p>When it executes inside the script, the following will be displayed in the output:</p> <pre class="wp-block-preformatted">"endIpAddress": "40.74.xxx.xxx",<br>"id": "/subscriptions/73aaxxxxx-xxxx-xxxx-xxx/resourceGroups/EDU_Group/providers/Microsoft.Sql/servers/hiedxxxxxsql1/firewallRules/AllowYourIp",<br>"kind": "v12.0",<br>"location": "East US",<br>"name": "AllowYourIp",<br>"resourceGroup": "xxx_Group",<br>"startIpAddress": "40.74.xxx.xxx",<br>"type": "Microsoft.Sql/servers/firewallRules"<br><br></pre> <p>The script uses the IP Address from the curl command to create the firewall rule, and the new rule then lets the Azure Cloud Shell access the SQL Server it was created for.</p> <p>I can now execute my scripts against my databases:</p> <pre class="wp-block-preformatted">sqlcmd -U $adminlogin -S ${servername}.database.windows.net -P "$password" -d HiEd_DW -i "./edu_hied_DW.sql"<br></pre> <p>That&#8217;s all there is to creating a dynamic firewall rule and using it from any console product, not just the Azure Cloud Shell.  
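The capture-and-create steps above can be pulled together into one guarded snippet. This is a sketch, not the post's exact script: the validate_ipv4 helper is my own addition as a sanity check, and $groupname and $servername are assumed to have been set earlier in the script, as in the original.

```shell
#!/usr/bin/env bash
# Sketch: capture the Cloud Shell's public IP, verify it looks like a
# valid IPv4 address, then create the firewall rule with az CLI.
# validate_ipv4 is an illustrative helper, not part of the original post.

validate_ipv4() {
  local ip=$1
  # four dot-separated groups of 1-3 digits
  [[ $ip =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}$ ]] || return 1
  local octet
  for octet in ${ip//./ }; do
    # force base-10 so leading zeros aren't read as octal
    (( 10#$octet <= 255 )) || return 1
  done
}

create_firewall_rule() {
  local myip
  myip=$(curl -s http://ifconfig.me)
  if ! validate_ipv4 "$myip"; then
    echo "unexpected response from ifconfig.me: $myip" >&2
    return 1
  fi
  az sql server firewall-rule create \
    --resource-group "$groupname" \
    --server "$servername" \
    -n AllowYourIp \
    --start-ip-address "$myip" \
    --end-ip-address "$myip"
}
```

The guard matters because these IP-echo services occasionally return an HTML error page rather than an address; passing that straight to `--start-ip-address` would fail with a less obvious error.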
</p> <p>Enjoy!</p> <br><br><img src="https://i2.wp.com/dbakevlar.com/wp-content/plugins/readers-from-rss-2-blog/wpsmartapps-lic/images/ico-tag.png?w=650&#038;ssl=1" border="0" align="absmiddle" data-recalc-dims="1"> Tags:&nbsp;&nbsp;<a href="https://dbakevlar.com/tag/azure/" rel="tag">azure</a>, <a href="https://dbakevlar.com/tag/firewall-rules/" rel="tag">Firewall rules</a>, <a href="https://dbakevlar.com/tag/ip-addresses/" rel="tag">IP Addresses</a><br><br><hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/10/dynamic-firewall-creation-for-azure-cloud-shell/">Dynamic Firewall Creation for Azure Cloud Shell</a>], All Rights Reserved. 
2018.</small><br> dbakevlar https://dbakevlar.com/?p=8273 Thu Oct 25 2018 11:04:17 GMT-0400 (EDT) Managing Microservices: Why Middleware is Essential As You Scale Your Business https://blog.pythian.com/managing-microservices-why-middleware-is-essential-as-you-scale-your-business/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p><span style="font-weight: 400;">Imagine this: You’re out for dinner at an international conference with a group of colleagues from around the world. Everyone there is an expert at what they do. People are trying to communicate, but since everyone speaks a different language, no one understands each other. </span></p> <p><span style="font-weight: 400;">For the event to really work – for it to be a single, cohesive unit – you’d need someone who knows each language to translate, to serve as the hub, so each member of the group could communicate with one another.</span></p> <p><span style="font-weight: 400;">That’s essentially what </span><a href="https://en.wikipedia.org/wiki/Middleware"><span style="font-weight: 400;">middleware</span></a><span style="font-weight: 400;"> does: i</span><span style="font-weight: 400;">t allows various applications, systems, and software to speak to each other, despite each not knowing each other’s respective languages. Developers sometimes call it “software glue” for this reason – it ensures that the individual systems that make up a microservices architecture don’t need built-in integration to stick together.</span></p> <p><b>Managing microservices at scale with middleware</b></p> <p><span style="font-weight: 400;">Middleware is essential for managing microservices at scale, giving you more application awareness and control precisely as the management of these microservices becomes more and more complex. 
</span><a href="https://thenewstack.io/kubernetes-microservices-istio%E2%80%8A-%E2%80%8Aa-great-fit/"><span style="font-weight: 400;">Even relatively simple microservices deployments</span></a><span style="font-weight: 400;"> can include hundreds of microservices, with each one having multiple instances, and each instance having several versions. All that traffic between microservices must be routed efficiently, while failures must be handled quickly and quietly. </span></p> <p><span style="font-weight: 400;">But not everyone needs middleware. Those still using legacy data warehouses with </span><a href="https://en.wikipedia.org/wiki/Monolithic_application"><span style="font-weight: 400;">monolithic architecture</span></a><span style="font-weight: 400;">, for example, won&#8217;t have much use for it (though they&#8217;ll likely have enough </span><a href="https://www.thoughtworks.com/insights/blog/monoliths-are-bad-design-and-you-know-it"><span style="font-weight: 400;">other problems</span></a><span style="font-weight: 400;"> to keep them occupied &#8211; traditional monolithic systems are slow, unreliable and difficult to scale or upgrade with new technologies).</span></p> <p><span style="font-weight: 400;">A microservices-based system, on the other hand, is akin to smashing your monolithic system into dozens of tiny, efficient pieces: instead of one system for everything, tasks are assigned to smaller, more specialized services within the larger system. This allows for more agile testing and development, and makes the entire system less prone to catastrophic failure. </span></p> <p><span style="font-weight: 400;">Similarly, small organizations with very limited systems may not deal with the scale of data necessary in order to realize the true value of middleware. 
It also comes with </span><a href="https://fourcornerstone.com/app-and-software/middleware-advantages-disadvantages/"><span style="font-weight: 400;">relatively high</span></a><span style="font-weight: 400;"> development costs, which can be difficult for small businesses to swallow, and can also come with increased </span><a href="https://www.healthdatamanagement.com/opinion/how-to-mitigate-middleware-security-vulnerabilities"><span style="font-weight: 400;">security risks</span></a><span style="font-weight: 400;">.</span></p> <p><span style="font-weight: 400;">But when companies need to scale their microservices, middleware can deliver a distinct competitive edge over rival systems without such connective tissue. It can provide increased efficiency and agility, setting the stage for increased innovation at your organization by shortening product development cycles.</span></p> <p><span style="font-weight: 400;">Some of the </span><a href="https://blogs.oracle.com/profit/the-middleware-advantage"><span style="font-weight: 400;">benefits of middleware</span></a><span style="font-weight: 400;"> for organizations running microservices at scale are:</span></p> <ul> <li style="font-weight: 400;"><b>Faster time to market: </b><span style="font-weight: 400;">Services are converging, and users increasingly expect a consistent experience from one platform to the next. This can be challenging from a development perspective, especially if you’re just doing major software updates a couple times per year. 
But middleware combined with a microservices architecture model, which in turn supports a </span><a href="https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/finding-the-speed-to-innovate"><span style="font-weight: 400;">DevOps and continuous delivery model</span></a><span style="font-weight: 400;">, supports these demands by facilitating faster time to market.</span></li> <li style="font-weight: 400;"><b>Increased efficiency:</b><span style="font-weight: 400;"> Middleware can help automate processes that were previously done manually, leading to increased business velocity.</span></li> <li style="font-weight: 400;"><b>Lightning-fast innovation:</b><span style="font-weight: 400;"> A microservices architecture allows for simultaneous development and testing, leading to faster innovation and lower development costs. Many users of middleware at scale see such an increase in cost efficiency that they&#8217;re able to invest even more in innovation and new product development.</span></li> </ul> <p><b>Potential pitfalls: A maelstrom of middleware</b></p> <p><span style="font-weight: 400;">However, middleware can also sometimes cause as many problems as it solves if your organization doesn&#8217;t have a clear, consistent strategy for </span><a href="https://en.wikipedia.org/wiki/Enterprise_application_integration"><span style="font-weight: 400;">enterprise application integration (EAI)</span></a><span style="font-weight: 400;">. While middleware is important for microservices to work smoothly, the management can be extremely complex (especially as your system scales). Many companies perform integration on an ad hoc basis, which often results in several disparate kinds of middleware within one system along with brittle and expensive integrations. 
</span></p> <p><span style="font-weight: 400;">Indeed, if not managed properly, middleware can become a multi-layered maelstrom that requires even more software – </span><a href="https://searchmicroservices.techtarget.com/tip/Middleware-Part-of-the-Solution-or-Part-of-the-Problem"><span style="font-weight: 400;">“middleware for your middleware,”</span></a><span style="font-weight: 400;"> as one commentator put it – to manage it all.</span></p> <p><span style="font-weight: 400;">Which is why, as your microservices scale and management becomes more complex, it’s crucial to deploy on a cloud platform that has services and tools available to help manage what can quickly become an unwieldy mess. </span></p> <p><b>GCP: The perfect platform for microservices</b></p> <p><a href="https://pythian.com/google-cloud-platform/"><span style="font-weight: 400;">Google Cloud Platform</span></a><span style="font-weight: 400;"> (GCP) has a host of services that help with managing microservices. One of its most recent additions to its </span><a href="https://cloudplatform.googleblog.com/2018/07/cloud-services-platform-bringing-the-best-of-the-cloud-to-you.html"><span style="font-weight: 400;">GCP family</span></a><span style="font-weight: 400;"> is </span><a href="https://cloud.google.com/istio/#resources"><span style="font-weight: 400;">Istio</span></a><span style="font-weight: 400;">, an open-source service mesh that works in tandem with Google’s </span><a href="https://cloud.google.com/kubernetes-engine/"><span style="font-weight: 400;">Kubernetes Engine</span></a><span style="font-weight: 400;">, a container orchestration system that scales effortlessly. 
An Istio managed </span><a href="https://medium.com/microservices-in-practice/service-mesh-for-microservices-2953109a3c9a"><span style="font-weight: 400;">service mesh architecture</span></a><span style="font-weight: 400;"> essentially plays the role of a traffic cop – routing and encrypting traffic, performing tasks, handling failures, and allowing or restricting access to various services – all while living at the application layer to ensure timely and reliable performance. </span></p> <p><span style="font-weight: 400;">Pythian has deep expertise in all facets of GCP and related services and tools, from Stackdriver and Managed Istio to Kubernetes Engine. Its certified experts can help you unlock the value of GCP faster with proven expertise and boots-on-the-ground assistance including managed GCP operations, migrations, analytics, strategy, big data, and automation.</span></p> <p><span style="font-weight: 400;">Learn more about </span><a href="https://pythian.com/kubernetes-as-a-service/"><span style="font-weight: 400;">Pythian’s Kubernetes Services</span></a><span style="font-weight: 400;"> or to talk with a technical expert </span><a href="https://pythian.com/contact/"><span style="font-weight: 400;">reach out.</span></a></p> </div></div> Ron Kennedy https://blog.pythian.com/?p=105317 Thu Oct 25 2018 08:49:08 GMT-0400 (EDT) Join Cardinality – 4 https://jonathanlewis.wordpress.com/2018/10/25/join-cardinality-4/ <p>In previous installments of this series I&#8217;ve been describing how Oracle estimates the join cardinality for single column joins with equality where the columns have histograms defined. So far I&#8217;ve  covered two options for the types of histogram involved: <a href="https://jonathanlewis.wordpress.com/2018/10/05/join-cardinality-2/"><em><strong>frequency to frequency</strong></em></a>, and <a href="https://jonathanlewis.wordpress.com/2018/10/09/join-cardinality-3/"><strong><em>frequency to top-frequency</em></strong></a>. 
Today it&#8217;s time to examine <em>frequency to hybrid</em>.</p> <p>My first thought about this combination was that it was likely to be very similar to <em>frequency to top-frequency</em> because a hybrid histogram has a list of values with &#8220;repeat counts&#8221; (which is rather like a simple frequency histogram), and a set of buckets with variable sizes that could allow us to work out an <em>&#8220;average selectivity&#8221;</em> of the rest of the data.</p> <p>I was nearly right but the arithmetic didn&#8217;t quite work out the way I expected.  Fortunately <a href="https://www.scribd.com/document/369165079/Join-Cardinality-Estimation-Methods"><em><strong>Chinar Aliyev&#8217;s document</strong></em></a> highlighted my error &#8211; the optimizer doesn&#8217;t use <em><strong>all</strong></em> the repeat counts, it uses only those repeat counts that identify popular values, and a popular value is one where the endpoint_repeat_count is not less than the average number of rows in a bucket. 
Let&#8217;s work through an example &#8211; first the data (which repeats an earlier article, but is included here for ease of reference):</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: freq_hist_join_06.sql rem Author: Jonathan Lewis rem Dated: Oct 2018 rem set linesize 156 set pagesize 60 set trimspool on execute dbms_random.seed(0) create table t1 ( id number(6), n04 number(6), n05 number(6), n20 number(6), j1 number(6) ) ; create table t2( id number(8,0), n20 number(6,0), n30 number(6,0), n50 number(6,0), j2 number(6,0) ) ; insert into t1 with generator as ( select rownum id from dual connect by level &lt;= 1e4 -- &gt; comment to avoid WordPress format issue ) select rownum id, mod(rownum, 4) + 1 n04, mod(rownum, 5) + 1 n05, mod(rownum, 20) + 1 n20, trunc(2.5 * trunc(sqrt(v1.id*v2.id))) j1 from generator v1, generator v2 where v1.id &lt;= 10 -- &gt; comment to avoid WordPress format issue and v2.id &lt;= 10 -- &gt; comment to avoid WordPress format issue ; insert into t2 with generator as ( select rownum id from dual connect by level &lt;= 1e4 -- &gt; comment to avoid WordPress format issue ) select rownum id, mod(rownum, 20) + 1 n20, mod(rownum, 30) + 1 n30, mod(rownum, 50) + 1 n50, 28 - round(abs(7*dbms_random.normal)) j2 from generator v1 where rownum &lt;= 800 -- &gt; comment to avoid WordPress format issue ; commit; begin dbms_stats.gather_table_stats( ownname =&gt; null, tabname =&gt; 'T1', method_opt =&gt; 'for all columns size 1 for columns j1 size 254' ); dbms_stats.gather_table_stats( ownname =&gt; null, tabname =&gt; 'T2', method_opt =&gt; 'for all columns size 1 for columns j2 size 13' ); end; / </pre> <p>As before I&#8217;ve got a table with 100 rows using the <em><strong>sqrt()</strong></em> function to generate column <em><strong>j1</strong></em>, and a table with 800 rows using the <em><strong>dbms_random.normal</strong></em> function to generate column <em><strong>j2</strong></em>. 
So the two columns have skewed patterns of data distribution, with a small number of low values and larger numbers of higher values &#8211; but the two patterns are different.</p> <p>I&#8217;ve generated a histogram with 254 buckets (which dropped to 10) for the <em><strong>t1.j1</strong></em> column, and generated a histogram with 13 buckets for the <em><strong>t2.j2</strong></em> column as I knew (after a little trial and error) that this would give me a hybrid histogram.</p> <p>Here&#8217;s a simple query, with its result set, to report the two histograms &#8211; using a full outer join to line up matching values and show the gaps where (endpoint) values in one histogram do not appear in the other:</p> <pre class="brush: plain; title: ; notranslate"> define m_popular = 62 break on report skip 1 compute sum of product on report compute sum of product_rp on report compute sum of t1_count on report compute sum of t2_count on report compute sum of t2_repeats on report compute sum of t2_pop_count on report with f1 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count, endpoint_number, endpoint_repeat_count, to_number(null) from user_tab_histograms where table_name = 'T1' and column_name = 'J1' order by endpoint_value ), f2 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count, endpoint_number, endpoint_repeat_count, case when endpoint_repeat_count &gt;= &amp;m_popular then endpoint_repeat_count else null end pop_count from user_tab_histograms where table_name = 'T2' and column_name = 'J2' order by endpoint_value ) select f1.value t1_value, f2.value t2_value, f1.row_or_bucket_count t1_count, f2.row_or_bucket_count t2_count, f1.endpoint_repeat_count t1_repeats, f2.endpoint_repeat_count t2_repeats, f2.pop_count t2_pop_count from f1 full outer join f2 on f2.value = f1.value order by coalesce(f1.value, 
f2.value) ; T1_VALUE T2_VALUE T1_COUNT T2_COUNT T1_REPEATS T2_REPEATS T2_POP_COUNT ---------- ---------- ---------- ---------- ---------- ---------- ------------ 1 1 1 2 5 0 5 15 0 7 15 0 10 17 0 12 13 0 15 15 13 55 0 11 17 17 11 56 0 34 19 67 36 20 20 7 57 0 57 21 44 44 22 22 3 45 0 45 23 72 72 72 24 70 70 70 25 25 1 87 0 87 87 26 109 109 109 27 96 96 96 28 41 41 ---------- ---------- ---------- ---------- ---------- ------------ 100 800 703 434 </pre> <p>You&#8217;ll notice that there&#8217;s a substitution variable (m_popular) in this script that I use to identify the <em>&#8220;popular values&#8221;</em> in the hybrid histogram so that I can report them separately. I&#8217;ve set this value to 62 for this example because a quick check of <em><strong>user_tables</strong></em> and <em><strong>user_tab_cols</strong></em> tells me I have 800 rows in the table (<em><strong>user_tables.num_rows</strong></em>) and 13 buckets (<em><strong>user_tab_cols.num_buckets</strong></em>) in the histogram: 800/13 = 61.52. A value is popular only if its repeat count is 62 or more.</p> <p style="padding-left:60px;"><em>This is where you may hit a problem &#8211; I certainly did when I switched from testing 18c to testing 12c (which I just <strong>knew</strong> was going to work &#8211; but I tested anyway). Although my data has been engineered so that I get the same &#8220;random&#8221; data in both versions of Oracle, I got different hybrid histograms (hence my complaint in <a href="https://jonathanlewis.wordpress.com/2018/10/23/upgrade-threat/"><strong>a recent post</strong></a>.) The rest of this covers 18c in detail, but if you&#8217;re running 12c there are a couple of defined values that you can change to get the right results in 12c.</em></p> <p>At this point I need to &#8220;top and tail&#8221; the output because the arithmetic only applies where the histograms overlap, so I need to pick the range from 2 to 25. 
Then I need to inject a &#8220;representative&#8221; or &#8220;average&#8221; count/frequency in all the gaps, then cross-multiply. The average frequency for the frequency histogram is <em>&#8220;half the frequency of the least frequently occurring value&#8221;</em> (which seems to be identical to <em><strong>new_density</strong></em> * <em><strong>num_rows</strong></em>), and the representative frequency for the hybrid histogram is (&#8220;number of non-popular rows&#8221; / &#8220;number of non-popular values&#8221;). There are 800 rows in the table with 22 distinct values in the column, and the output above shows us that we have 5 popular values totalling 434 rows, so the average frequency is (800 &#8211; 434) / (22 &#8211; 5) = 21.5294. (Alternatively we could say that the average selectivities (which is what I&#8217;ve used in the next query) are 0.5/100 and 21.5294/800.)</p> <p style="padding-left:60px;"><em>[Note for 12c, you&#8217;ll get 4 popular values covering 338 rows, so your figures will be: (800 &#8211; 338) / (22 &#8211; 4) = 25.6666&#8230; and 0.0302833]</em></p> <p>So here&#8217;s a query that restricts the output to the rows we want from the histograms, discards a couple of columns, and does the arithmetic:</p> <pre class="brush: plain; title: ; notranslate"> define m_t2_sel = 0.0302833 define m_t2_sel = 0.0269118 define m_t1_sel = 0.005 break on table_name skip 1 on report skip 1 with f1 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count, endpoint_number, endpoint_repeat_count, to_number(null) pop_count from user_tab_histograms where table_name = 'T1' and column_name = 'J1' order by endpoint_value ), f2 as ( select table_name, endpoint_value value, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) row_or_bucket_count, endpoint_number, endpoint_repeat_count, case when endpoint_repeat_count &gt;= &amp;m_popular then endpoint_repeat_count 
else null end pop_count from user_tab_histograms where table_name = 'T2' and column_name = 'J2' order by endpoint_value ) select f1.value f1_value, f2.value f2_value, nvl(f1.row_or_bucket_count,100 * &amp;m_t1_sel) t1_count, nvl(f2.pop_count, 800 * &amp;m_t2_sel) t2_count, case when ( f1.row_or_bucket_count is not null or f2.pop_count is not null ) then nvl(f1.row_or_bucket_count,100 * &amp;m_t1_sel) * nvl(f2.pop_count, 800 * &amp;m_t2_sel) end product_rp from f1 full outer join f2 on f2.value = f1.value where coalesce(f1.value, f2.value) between 2 and 25 order by coalesce(f1.value, f2.value) ; F1_VALUE F2_VALUE T1_COUNT T2_COUNT PRODUCT_RP ---------- ---------- ---------- ---------- ---------- 2 5 21.52944 107.6472 5 15 21.52944 322.9416 7 15 21.52944 322.9416 10 17 21.52944 366.00048 12 13 21.52944 279.88272 15 15 13 21.52944 279.88272 17 17 11 21.52944 236.82384 19 .5 21.52944 20 20 7 21.52944 150.70608 21 .5 21.52944 22 22 3 21.52944 64.58832 23 .5 72 36 24 .5 70 35 25 25 1 87 87 ---------- ---------- ---------- sum 102 465.82384 2289.41456 </pre> <p>There&#8217;s an important detail that I haven&#8217;t mentioned so far. In the output above you can see that some rows show <em><strong>&#8220;product_rp&#8221;</strong></em> as blank. While we cross multiply the frequencies from <em><strong>t1.j1</strong></em> and <em><strong>t2.j2</strong></em>, filling in average frequencies where necessary, we exclude from the final result any rows where average frequencies have been used for both histograms.</p> <p style="padding-left:60px;"><em>[Note for 12c, you&#8217;ll get the result 2698.99736 for the query, and 2699 for the execution plan]<br /> </em></p> <p>Of course we now have to check that the predicted cardinality for a simple join between these two tables really is 2,289. 
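The cross-multiplication model described above can be sketched in a few lines of Python. This is only a sketch of my reading of the arithmetic, not anything Oracle exposes; the function name and the toy histogram dictionaries are illustrative assumptions.

```python
def join_cardinality(freq_counts, hybrid_counts, avg_freq, avg_rep, lo, hi):
    """Estimate join cardinality: frequency histogram vs hybrid histogram.

    freq_counts   : {value: count} from the frequency histogram (t1.j1)
    hybrid_counts : {value: repeat count if popular, else None} (t2.j2)
    avg_freq      : average frequency for the frequency histogram
                    (half the frequency of the least frequent value)
    avg_rep       : representative frequency for the hybrid histogram
                    (non-popular rows / non-popular values)
    lo, hi        : the overlapping range of the two histograms
    """
    total = 0.0
    for value in sorted(set(freq_counts) | set(hybrid_counts)):
        if not lo <= value <= hi:
            continue
        c1 = freq_counts.get(value)    # real count, or None if missing
        c2 = hybrid_counts.get(value)  # None for non-popular values
        if c1 is None and c2 is None:
            continue  # both sides would use an average: excluded
        total += (c1 if c1 is not None else avg_freq) * \
                 (c2 if c2 is not None else avg_rep)
    return total

# Toy example: values 1 and 2 in the frequency histogram; in the hybrid
# histogram value 1 is popular (repeat count 20), value 3 is not.
print(join_cardinality({1: 10, 2: 5}, {1: 20, 3: None}, 2.0, 4.0, 1, 3))
```

Feeding in the counts from the report above (average frequencies 0.5 and 21.52944, range 2 to 25) should reproduce the 2,289.41456 total, though that claim is only as good as this model of the optimizer's arithmetic.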
So let&#8217;s run a suitable query and see what the optimizer predicts:</p> <pre class="brush: plain; title: ; notranslate"> set serveroutput off alter session set statistics_level = all; alter session set events '10053 trace name context forever'; select count(*) from t1, t2 where t1.j1 = t2.j2 ; select * from table(dbms_xplan.display_cursor(null,null,'allstats last')); alter session set statistics_level = typical; alter session set events '10053 trace name context off'; SQL_ID cf4r52yj2hyd2, child number 0 ------------------------------------- select count(*) from t1, t2 where t1.j1 = t2.j2 Plan hash value: 906334482 ----------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | ----------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 108 | | | | | 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 108 | | | | |* 2 | HASH JOIN | | 1 | 2289 | 1327 |00:00:00.01 | 108 | 2546K| 2546K| 1194K (0)| | 3 | TABLE ACCESS FULL| T1 | 1 | 100 | 100 |00:00:00.01 | 18 | | | | | 4 | TABLE ACCESS FULL| T2 | 1 | 800 | 800 |00:00:00.01 | 34 | | | | ----------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - access(&quot;T1&quot;.&quot;J1&quot;=&quot;T2&quot;.&quot;J2&quot;) </pre> <p>As you can see, the <em><strong>E-Rows</strong></em> for the join is 2,289, as required.</p> <p>I can&#8217;t claim that the model I&#8217;ve produced is definitely what Oracle does, but it looks fairly promising. 
No doubt, though, there are some variations on the theme that I haven&#8217;t considered &#8211; even when sticking to a simple (non-partitioned) join on equality on a single column.</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19051 Thu Oct 25 2018 04:09:58 GMT-0400 (EDT) http://learndiscoverer.blogspot.com/2018/10/are-you-interested-in-obiee-while.html <div dir="ltr" style="text-align: left;" trbidi="on"><h2 style="text-align: left;"><span style="color: blue; font-family: Verdana, sans-serif;">Are you interested in OBIEE?</span></h2><span style="color: #6fa8dc; font-family: Verdana, sans-serif; font-size: 11.0pt;">While browsing for OBIEE tutorials on YouTube we found this fantastic video on OBIEE.</span><br /><span style="color: #6fa8dc; font-family: Verdana, sans-serif;"><span style="font-size: 11.0pt;"><br /></span></span><span style="font-size: 11.0pt;"><span style="color: #6fa8dc; font-family: Verdana, sans-serif;">Click this link if you are interested in knowing more:</span><span style="font-family: calibri, sans-serif;">-</span><a href="https://www.youtube.com/watch?v=Ajh_ePwlf88" style="font-family: calibri, sans-serif;" target="_blank">OBIEE Tutorial</a></span></div> Michael tag:blogger.com,1999:blog-21606293.post-5575084653338225214 Thu Oct 25 2018 01:49:00 GMT-0400 (EDT) “Hidden” Efficiencies of Non-Partitioned Indexes on Partitioned Tables Part III” (Ricochet) https://richardfoote.wordpress.com/2018/10/25/hidden-efficiencies-of-non-partitioned-indexes-on-partitioned-tables-part-iii-ricochet/ In Part I and Part II of this series, we looked at how Global Indexes can effectively perform &#8220;Partition Pruning&#8221; when the partition keys are specified in SQL predicates, by only using those index entries that have a Data Object of interest stored within the index Rowids. 
In this piece, I&#8217;ll cover the key performance [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5683 Wed Oct 24 2018 21:15:22 GMT-0400 (EDT) Let Them Finish, Stories From the Trenches https://dbakevlar.com/2018/10/let-them-finish-stories-from-the-trenches/ <p>The first of three books that I&#8217;ve been working on this year is out!  From Melody Zacharias&#8217; &#8220;Let Them Finish&#8221; series, this is the <a href="https://www.amazon.com/Stories-Trenches-Let-Them-Finish/dp/1999431006/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1540411296&amp;sr=1-1&amp;keywords=Let+them+finish&amp;dpID=51y45pj7QSL&amp;preST=_SY291_BO1,204,203,200_QL40_&amp;dpSrc=srch">Stories from the Trenches</a>, a collection of stories about diversity in tech and how to survive and overcome the challenges.</p> <figure class="wp-block-image"><img src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/letthemfinish-2.jpg?w=650&#038;ssl=1" alt="" class="wp-image-8259" srcset="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/letthemfinish-2.jpg?w=599&amp;ssl=1 599w, https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/letthemfinish-2.jpg?resize=219%2C300&amp;ssl=1 219w" sizes="(max-width: 599px) 100vw, 599px" data-recalc-dims="1" /><figcaption>Second Book in the Let Them Finish Series</figcaption></figure> <p>The book can be picked up via hardcopy from Amazon and a Kindle version is around the corner, if like me, paper is against your religion. 
<img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>I want to thank Rie Irish for recommending me, Melody Zacharias for allowing me, an Oracle girl, to contribute to this book, and Tracy Boggiano, Angela Tidwell, Brian Carrig, Randolph West, Leighton and Kerrine Nelson for stepping up as authors when we redesigned it and to complete the vision for this incredible collection of stories.</p> Tags:&nbsp;&nbsp;<a href="https://dbakevlar.com/tag/diversity-and-inclusion/" rel="tag">Diversity and Inclusion</a>, <a href="https://dbakevlar.com/tag/wit/" rel="tag">WIT</a><br><br><hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/10/let-them-finish-stories-from-the-trenches/">Let Them Finish, Stories From the Trenches</a>], All Right Reserved. 
2018.</small><br> dbakevlar https://dbakevlar.com/?p=8271 Wed Oct 24 2018 20:14:53 GMT-0400 (EDT) How the Cloud may Finally Solve the Data Silo Problem https://blog.pythian.com/not-loving-data/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>Not loving your data?</p> <p><span style="font-weight: 400;">You’re not alone.</span></p> <p><span style="font-weight: 400;">Accessing corporate and external data to gain insight and get ahead is critical—but it isn’t easy. </span><a href="http://www.coveo.com/%7E/media/Files/WhitePapers/Coveo_IDC_Knowledge_Quotient_June2014.ashx"><span style="font-weight: 400;">One source</span></a><span style="font-weight: 400;"> claims 90 percent of digital information is unstructured and locked in siloed repositories, meaning departments can only see and perform analysis on their own limited data sets. And this means they don’t get a complete picture of the business. </span></p> <p><span style="font-weight: 400;">Seriously—90 percent?</span></p> <p><span style="font-weight: 400;">Of course, there are data stars and digital-native companies that don’t face this challenge. But most of us (61 percent, says a recent </span><a href="https://www.cmswire.com/information-management/information-management-the-critical-thing-youre-overlooking-in-the-digital-workplace/"><span style="font-weight: 400;">CMS Wire</span></a><span style="font-weight: 400;"> article) must access at least four systems for insight. And 15 percent of us are juggling 11 or more systems to stay fully informed! </span></p> <p><span style="font-weight: 400;">Because we have to access data from these disparate systems, we become like butterflies on flowers, spending more time flitting from system to system than we spend getting down into some valuable deep thought.</span></p> <p><span style="font-weight: 400;">What’s at stake here? 
How is non-integrated, non-scalable data access hurting our success?</span></p> <p><a href="http://www.dbta.com/Editorial/Think-About-It/The-5-Ways-Modern-Data-Governance-Helps-Business-Productivity-113101.aspx"><span style="font-weight: 400;">Database Trends and Applications</span></a><span style="font-weight: 400;"> tells us that poor data quality hurts productivity by up to 20 percent and prevents 40 percent of business initiatives from achieving targets. And a </span><a href="https://www.gartner.com/smarterwithgartner/how-to-stop-data-quality-undermining-your-business/"><span style="font-weight: 400;">Gartner survey</span></a><span style="font-weight: 400;"> found that poor data quality costs businesses $15 million every year.</span></p> <p><span style="font-weight: 400;">So how do we climb out of this mess and find a way to love our data again?</span></p> <p><span style="font-weight: 400;">Centralized, integrated data access and sharing is the holy grail of data management. And more and more, organizations are closing server rooms and moving to the cloud in the hope that this will be the answer. They can quickly find, however, that cloud services on their own don’t necessarily offer the flexibility or cost control they need. But when a cloud-native analytics platform takes the best of cloud services and combines them with open-source technologies and automated processes like ETL, you can finally realize the full value of your data:</span> <span style="font-weight: 400;">scalability, </span><span style="font-weight: 400;">fast access, and </span><span style="font-weight: 400;">insight for every role using any BI tool.</span></p> <p><span style="font-weight: 400;">Pythian’s </span><span style="font-weight: 400;">scalable cloud-native analytics platform is called Kick AaaS for a reason. 
It literally helps you kick a$$ in the insight department by taking your data headaches away and giving you an across-the-board view into: </span></p> <ul> <li style="font-weight: 400;"><span style="font-weight: 400;">customer experience and marketing</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">financial performance</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">product strategy</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">data science and machine learning</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">executive vision</span></li> <li style="font-weight: 400;"><span style="font-weight: 400;">really, anything you need to know to kick the competition’s derriere.</span></li> </ul> <p><span style="font-weight: 400;">Getting all your data into one place, cleaned, unified and prepped for your use cases sounds like a dream, right? It’s not. </span></p> <p><span style="font-weight: 400;">I repeat: this is not a dream. 
</span></p> <p><a href="https://pythian.com/analytics-as-a-service/"><span style="font-weight: 400;">Read more about Kick AaaS</span></a><span style="font-weight: 400;">, or to get started on a plan to take your data to the cloud, sign up for our</span><a href="https://resources.pythian.com/hubfs/Data-Sheets/Google-Cloud-Analytics-Readiness.pdf?_ga=2.182943270.1839520248.1539784546-336715144.1534276327"><span style="font-weight: 400;"> cloud analytics readiness assessment workshop</span></a><span style="font-weight: 400;">.</span></p> <p><img class="alignnone size-full wp-image-105309" src="https://blog.pythian.com/wp-content/uploads/Pythian-Kick-AaaS-NEW-FINAL.jpg" alt="" width="1700" height="3000" srcset="https://blog.pythian.com/wp-content/uploads/Pythian-Kick-AaaS-NEW-FINAL.jpg 1700w, https://blog.pythian.com/wp-content/uploads/Pythian-Kick-AaaS-NEW-FINAL-465x821.jpg 465w, https://blog.pythian.com/wp-content/uploads/Pythian-Kick-AaaS-NEW-FINAL-350x618.jpg 350w" sizes="(max-width: 1700px) 100vw, 1700px" /></p> </div></div> Lynda Partner, VP Marketing and Analytics as a Service https://blog.pythian.com/?p=105306 Wed Oct 24 2018 10:44:43 GMT-0400 (EDT) All of Your Objects: Reports https://www.thatjeffsmith.com/archive/2018/10/all-of-your-objects-reports/ <img width="840" height="472" src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/objects2-1024x575.png" class="attachment-large size-large wp-post-image" alt="" /><p>Today&#8217;s question:<br /> <em> I am looking for a way to list out all and count all objects by all schema, any idea?</em></p> <div id="attachment_7046" style="width: 789px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/schema-flat.png" alt="" width="779" height="622" class="size-full wp-image-7046" /><p class="wp-caption-text">Point, click &#8211; pretty easy.</p></div> <p>So the connection tree is nice in that it makes it easy to see specific types of objects by schema &#8211; but if 
you want a FLAT view of a schema, it&#8217;s not so great.</p> <p>Ok, so what&#8217;s a SQL Developer user to do?</p> <h3>Try our Data Dictionary Reports</h3> <p>The Reports panel is there, you just need to click into it.</p> <div id="attachment_7047" style="width: 723px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/flat-view.png" alt="" width="713" height="521" class="size-full wp-image-7047" /><p class="wp-caption-text">Show me stuff, don&#8217;t make me write SQL = our pre-canned reports FTW!</p></div> <p>So there&#8217;s a few reports of interest here for our questioner:</p> <ol> <li>Object Count by Type</li> <li>All Objects</li> </ol> <p>Both of these reports support SCHEMA filtering &#8211; so by default we&#8217;ll show you everything for every schema, but you can also say, &#8216;Hey, just show me what&#8217;s what in HR.&#8217;</p> <div id="attachment_7048" style="width: 1117px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/object-count-by-type1.png" alt="" width="1107" height="657" class="size-full wp-image-7048" /><p class="wp-caption-text">This covers most of what our customer is asking for.</p></div> <p>But, they also want to see the list of objects.</p> <p>We could take a look at the &#8216;All Objects&#8217; report. But, we could also CUSTOMIZE the Object Counts by Type report. Let&#8217;s do THAT.</p> <p>Select the report. Right-click.</p> <div id="attachment_7049" style="width: 944px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/copy-report1.png" alt="" width="934" height="680" class="size-full wp-image-7049" /><p class="wp-caption-text">We&#8217;re going to paste this in just a few seconds.</p></div> <p>So we can&#8217;t change these supplied reports. But, we can copy them to a &#8216;User Defined Report,&#8217; and make it do anything we want. 
So let&#8217;s Paste this in the User Defined section.</p> <p><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/report-paste.png" alt="" width="759" height="397" class="aligncenter size-full wp-image-7050" /></p> <p>And ta-da!</p> <p>Now that it&#8217;s a user-defined report, I can customize it.</p> <p>I&#8217;m going to add a child report called &#8216;Objects&#8217;, which selects based on object type and owner<a href="https://www.thatjeffsmith.com/archive/2012/01/sweet-child-report-o-mine/" rel="noopener" target="_blank"> using the :BIND trick we&#8217;ve discussed earlier</a>.</p> <p>Here&#8217;s our PARENT SQL &#8211;</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="sql"><pre class="de1"><span class="kw1">SELECT</span> owner<span class="sy0">,</span> object_type<span class="sy0">,</span> <span class="kw1">COUNT</span><span class="br0">&#40;</span><span class="sy0">*</span><span class="br0">&#41;</span> <span class="st0">&quot;Object Count&quot;</span> <span class="kw1">FROM</span> sys<span class="sy0">.</span>all_objects <span class="kw1">WHERE</span> substr<span class="br0">&#40;</span> object_name<span class="sy0">,</span> <span class="nu0">1</span><span class="sy0">,</span> <span class="nu0">4</span> <span class="br0">&#41;</span> !<span class="sy0">=</span> <span class="st0">'BIN$'</span> <span class="kw1">AND</span> substr<span class="br0">&#40;</span> object_name<span class="sy0">,</span> <span class="nu0">1</span><span class="sy0">,</span> <span class="nu0">3</span> <span class="br0">&#41;</span> !<span class="sy0">=</span> <span class="st0">'DR$'</span> <span class="kw1">AND</span> <span class="br0">&#40;</span> :owner <span class="kw1">IS</span> <span class="kw1">NULL</span> <span class="kw1">OR</span> instr<span class="br0">&#40;</span> <span 
class="kw1">UPPER</span><span class="br0">&#40;</span>owner<span class="br0">&#41;</span><span class="sy0">,</span> <span class="kw1">UPPER</span><span class="br0">&#40;</span> :owner <span class="br0">&#41;</span> <span class="br0">&#41;</span> <span class="sy0">&gt;</span> <span class="nu0">0</span> <span class="br0">&#41;</span> <span class="kw1">GROUP</span> <span class="kw1">BY</span> owner<span class="sy0">,</span> object_type <span class="kw1">ORDER</span> <span class="kw1">BY</span> <span class="nu0">3</span> <span class="kw1">DESC</span></pre></div></div></div></div></div></div></div> <p>And here&#8217;s our new child report SQL:</p> <div class="wp-geshi-highlight-wrap5"><div class="wp-geshi-highlight-wrap4"><div class="wp-geshi-highlight-wrap3"><div class="wp-geshi-highlight-wrap2"><div class="wp-geshi-highlight-wrap"><div class="wp-geshi-highlight"><div class="sql"><pre class="de1"><span class="kw1">SELECT</span> <span class="sy0">*</span> <span class="kw1">FROM</span> all_objects <span class="kw1">WHERE</span> owner <span class="sy0">=</span> :OWNER <span class="kw1">AND</span> object_type <span class="sy0">=</span> :OBJECT_TYPE <span class="kw1">ORDER</span> <span class="kw1">BY</span> object_name <span class="kw1">ASC</span></pre></div></div></div></div></div></div></div> <p>Let&#8217;s run the report, select an item type, and see what&#8217;s what:</p> <div id="attachment_7051" style="width: 1034px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2018/10/objects2.png" alt="" width="1024" height="575" class="size-full wp-image-7051" /><p class="wp-caption-text">reports are your best friend &#8211; give them a try!</p></div> <h3>Object Navigators?</h3> <p>Want to have the object list also include links to actually OPEN the objects? 
<a href="https://www.thatjeffsmith.com/archive/2015/11/making-your-own-custom-object-navigators/" rel="noopener" target="_blank">Don&#8217;t forget this trick!</a></p> <div id="attachment_6059" style="width: 963px" class="wp-caption aligncenter"><img src="https://www.thatjeffsmith.com/wp-content/uploads/2017/03/scope3.png" alt="" width="953" height="445" class="size-full wp-image-6059" /><p class="wp-caption-text">Click on the hyperlink to open the object.</p></div> thatjeffsmith https://www.thatjeffsmith.com/?p=7045 Wed Oct 24 2018 01:10:15 GMT-0400 (EDT) Simple Template Pull In Azure https://dbakevlar.com/2018/10/simple-template-pull-in-azure/ <p>I&#8217;ve been busy automating a solution we provide to many of our Education customers. I discovered that, due to their varied technical skills, many customers were hindered from using the solution because they spent so much time deploying it.  New to Azure, I wanted to use templates with a bash script as my first deep dive into auto deployment, but was frustrated with the auto deployment templates &#8220;JSON&#8217;ing me to death&#8221;.</p> <p><a href="https://dbakevlar.com/2018/10/simple-template-pull-in-azure/stopit/" rel="attachment wp-att-8213"><img class="alignnone size-full wp-image-8213" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/stopit.gif?resize=500%2C282&#038;ssl=1" alt="" width="500" height="282" data-recalc-dims="1" /></a></p> <p>I wanted to quickly post about a few tips for working with templates and parameter files.</p> <h3>JSON Format Checker</h3> <p>If you&#8217;re like me, you have enough varying syntax formats in your brain that one more for JSON may be one too many.  I&#8217;m a command line girl and having a quick format checker can come in handy.  
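<p>As one illustration of a command-line check, Python&#8217;s standard json module is enough for a quick syntax test; the file names below are illustrative stand-ins for your own template and parameter files:</p>

```python
import json

# Minimal offline JSON syntax check for template/parameter files.
# The file names are placeholders -- point it at your own files.
def check_json(path):
    try:
        with open(path) as f:
            json.load(f)
        return "OK"
    except json.JSONDecodeError as exc:
        return f"invalid JSON: {exc}"
    except OSError as exc:
        return f"cannot read: {exc}"

for name in ("azuredeploy.json", "azuredeploy.parameters.json"):
    print(name, "-", check_json(name))
```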
This saved me multiple times when there wasn&#8217;t someone I could pester to be my second set of eyes.</p> <p><a href="https://jsonchecker.com/">JSON Checker</a></p> <h3>The Kitchen Sink</h3> <p>I often found that I wanted a single resource template, but the automation deployment option in the portal would give you EVERY RESOURCE in the RESOURCE GROUP.  Talk about overkill when you might just want one.  A way to avoid this is to do the following:</p> <p>Go to Resource Groups, open the group the resource belongs to, and click on Deployments.  In the list, find the resource you want the template and parameters for and double-click on it.</p> <p><a href="https://dbakevlar.com/2018/10/simple-template-pull-in-azure/dploye2/" rel="attachment wp-att-8238"><img class="alignnone wp-image-8238" src="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/dploye2.jpg?resize=465%2C307&#038;ssl=1" alt="" width="465" height="307" srcset="https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/dploye2.jpg?resize=1024%2C676&amp;ssl=1 1024w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/dploye2.jpg?resize=300%2C198&amp;ssl=1 300w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/dploye2.jpg?resize=768%2C507&amp;ssl=1 768w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/dploye2.jpg?resize=1140%2C752&amp;ssl=1 1140w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/dploye2.jpg?resize=500%2C330&amp;ssl=1 500w, https://i0.wp.com/dbakevlar.com/wp-content/uploads/2018/10/dploye2.jpg?w=1300&amp;ssl=1 1300w" sizes="(max-width: 465px) 100vw, 465px" data-recalc-dims="1" /></a></p> <p>The tabs for Template and Parameter files are available, along with the CLI commands, PowerShell and others.  
This is simpler and more manageable than the output you are likely to get from the resource-group-level download.</p> <p>You can download each file, along with the deployment execution format of your choice, to build out your automation from here.</p> <hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/10/simple-template-pull-in-azure/">Simple Template Pull In Azure</a>], All Right Reserved. 
2018.</small><br> dbakevlar https://dbakevlar.com/?p=8200 Tue Oct 23 2018 15:35:58 GMT-0400 (EDT) Upgrade threat https://jonathanlewis.wordpress.com/2018/10/23/upgrade-threat/ <p>Here&#8217;s one I&#8217;ve just discovered while trying to build a reproducible test case &#8211; that didn&#8217;t reproduce because an internal algorithm has changed.</p> <p>If you upgrade from 12c to 18c and have a number of <em><strong>hybrid</strong></em> histograms in place you may find that some execution plans change because of a change in the algorithm for producing hybrid histograms (and that&#8217;s not just if you happen to get the patch that fixes <a href="https://jonathanlewis.wordpress.com/2018/01/15/histogram-hassle/"><span style="text-decoration:underline;"><strong>the top-frequency/hybrid bug</strong></span></a> relating to high values).</p> <p>Here&#8217;s a little test to demonstrate how I wasted a couple of hours trying to solve the wrong problem &#8211; first a simple data set:</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: 18c_histogram_upgrade.sql rem Author: Jonathan Lewis rem Dated: Oct 2018 rem drop table t2 purge; execute dbms_random.seed(0) create table t2( id number(8,0), n20 number(6,0), n30 number(6,0), n50 number(6,0), j2 number(6,0) ) ; insert into t2 with generator as ( select rownum id from dual connect by level &lt;= 1e4 -- &gt; comment to avoid WordPress format issue ) select rownum id, mod(rownum, 20) + 1 n20, mod(rownum, 30) + 1 n30, mod(rownum, 50) + 1 n50, 28 - round(abs(7*dbms_random.normal)) j2 from generator v1 where rownum &lt;= 800 -- &gt; comment to avoid WordPress format issue ; commit; begin dbms_stats.gather_table_stats( ownname =&gt; null, tabname =&gt; 'T2', method_opt =&gt; 'for all columns size 1 for columns j2 size 13' ); end; / </pre> <p>I&#8217;ve created a skewed data set which (we will see) has 22 distinct values and created a histogram of 13 buckets on it. 
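<p>The shape of that j2 column can be previewed outside the database. Here is a rough Python analogue of the 28 - round(abs(7*dbms_random.normal)) expression; it uses Python&#8217;s RNG rather than Oracle&#8217;s, so the exact counts will differ from the listings that follow, but the skew is the same:</p>

```python
import random

# Rough analogue of the post's generator:
#   j2 = 28 - round(abs(7 * dbms_random.normal))
# Python's RNG is not Oracle's, so the counts differ from the post,
# but values cluster near 28 with a long thin tail of low values.
random.seed(0)
vals = [28 - round(abs(7 * random.gauss(0, 1))) for _ in range(800)]

counts = {v: vals.count(v) for v in sorted(set(vals))}
for value, n in counts.items():
    print(value, n)
```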
This will be a hybrid histogram &#8211; but different versions of Oracle will produce different histograms (even though the data set is the same for both versions):</p> <pre class="brush: plain; title: ; notranslate"> select j2, count(*) from t2 group by j2 order by j2 ; select endpoint_value value, endpoint_number, endpoint_number - lag(endpoint_number,1,0) over(order by endpoint_number) bucket_size, endpoint_repeat_count from user_tab_histograms where table_name = 'T2' and column_name = 'J2' order by endpoint_value ; </pre> <p>Here&#8217;s the dataset from 12.2.0.1 and 18.3.0.0</p> <pre class="brush: plain; title: ; notranslate"> J2 COUNT(*) ---------- ---------- 1 1 8 3 9 1 10 5 11 4 12 8 13 14 14 9 15 11 16 22 17 34 18 31 19 36 20 57 21 44 22 45 23 72 24 70 25 87 26 109 27 96 28 41 22 rows selected. And here are the histograms - 12.2.0.1 then 18.3.0.0: VALUE ENDPOINT_NUMBER BUCKET_SIZE ENDPOINT_REPEAT_COUNT ---------- --------------- ----------- --------------------- 1 1 1 1 15 56 55 11 17 112 56 34 18 143 31 31 19 179 36 36 20 236 57 57 21 280 44 44 22 325 45 45 23 397 72 72 24 467 70 70 25 554 87 87 26 663 109 109 28 800 137 41 13 rows selected. VALUE ENDPOINT_NUMBER BUCKET_SIZE ENDPOINT_REPEAT_COUNT ---------- --------------- ----------- --------------------- 1 1 1 1 15 56 55 11 17 112 56 34 19 179 67 36 20 236 57 57 21 280 44 44 22 325 45 45 23 397 72 72 24 467 70 70 25 554 87 87 26 663 109 109 27 759 96 96 28 800 41 41 13 rows selected. 
</pre> <p>Both histograms have 13 buckets as requested; both are hybrid histograms as expected.</p> <p>But why does 12c have the value 18 when 18c doesn&#8217;t, and why does 18c have the value 27 when 12c doesn&#8217;t?</p> <p>That&#8217;s the second time in two weeks I&#8217;ve had reproducible test cases not reproducing &#8211; thanks to an 18c upgrade.</p> <h3>Update (See comments)</h3> <p>I had completely forgotten that a <a href="https://jonathanlewis.wordpress.com/2018/01/15/histogram-hassle/"><em><strong>previous defect</strong></em></a> in the construction of hybrid (and Top-N) histograms had been addressed in 18.3 but needed a fix in 12.2 and a backport patch in 12.1.0.2.</p> <p>Since the defect could &#8220;lose&#8221; a popular value in order to ensure that both the low and high values were captured in the histogram, it&#8217;s not surprising that a fix could result in one of the popular values in a histogram disappearing (after the upgrade) even when the gather had used a 100% sample.
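</p> <p>As a back-of-envelope check (hedged: this is only the common &#8220;popular value&#8221; heuristic of comparing each frequency against the average bucket size, not Oracle&#8217;s actual histogram code), the frequencies from the <em>group by</em> output above suggest which values a 13-bucket histogram ought to capture as popular &#8211; and 27, the value that went missing in 12.2.0.1, is one of them:</p>

```python
# Frequencies of j2 copied from the "group by" output shown above.
counts = {
    1: 1, 8: 3, 9: 1, 10: 5, 11: 4, 12: 8, 13: 14, 14: 9,
    15: 11, 16: 22, 17: 34, 18: 31, 19: 36, 20: 57, 21: 44,
    22: 45, 23: 72, 24: 70, 25: 87, 26: 109, 27: 96, 28: 41,
}

rows = sum(counts.values())     # 800 rows in total
buckets = 13                    # buckets requested in the stats gather
threshold = rows / buckets      # average bucket size, roughly 61.5 rows

# Values whose frequency exceeds the average bucket size are the
# obvious candidates for repeat counts in a hybrid histogram.
popular = sorted(v for v, c in counts.items() if c > threshold)
print(popular)                  # -> [23, 24, 25, 26, 27]
```

<p>By this rough measure 27 (96 rows) is clearly popular while 18 (31 rows) is not, which is consistent with the pre-fix code losing a popular value in order to capture the high value.</p> <p>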
Quite possibly the algorithm used to ensure the presence of the high value has had a cascading effect down the histogram that can affect which popular values get into the histogram with repeat counts.</p> <p>I think I&#8217;m going to have to grit my teeth and patch a 12.1.0.2, or update a 12.2.0.1 with exactly the right patch-set to find out.</p> <p><em><strong>[It has now been <a href="https://twitter.com/vldbb/status/1055879342704586754">confirmed by Nigel Bayliss</a> that this is a side effect of the fix for bug 25994960]</strong></em></p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=19059 Tue Oct 23 2018 14:50:31 GMT-0400 (EDT) Column Groups https://jonathanlewis.wordpress.com/2018/10/22/column-groups-5/ <p>Sometimes a good thing becomes a bad thing when you hit some sort of special case &#8211; today&#8217;s post is an example of this that came up on <a href="http://www.freelists.org/post/oracle-l/Optimizer-question,2"><em><strong>the Oracle-L listserver</strong></em></a> a couple of years ago with a question about what the optimizer was doing.
I&#8217;ll set the scene by creating some data to reproduce the problem:</p> <pre class="brush: plain; title: ; notranslate"> rem rem Script: distinct_key_prob.sql rem Author: Jonathan Lewis rem Dated: Apr 2016 rem Purpose: rem rem Last tested rem 18.3.0.0 rem 12.1.0.2 rem 11.2.0.4 rem drop table t1 purge; create table t1 nologging as with generator as ( select --+ materialize rownum id from dual connect by level &lt;= 1e4 -- &gt; comment to avoid wordpress format issue ) select cast(mod(rownum-1,10) as number(8,0)) non_null, cast(null as number(8,0)) null_col, cast(lpad(rownum,10) as varchar2(10)) small_vc, cast(rpad('x',100) as varchar2(100)) padding from generator v1, generator v2 where rownum &lt;= 1e6 -- &gt; comment to avoid wordpress format issue ; create index t1_i1 on t1(null_col, non_null); begin /* dbms_output.put_line( dbms_stats.create_extended_stats(user,'t1','(non_null, null_col)') ); */ dbms_stats.gather_table_stats( ownname =&gt; user, tabname =&gt;'T1', method_opt =&gt; 'for all columns size 1' ); end; / </pre> <p>So I have a table with 1,000,000 rows; one of its columns is always null and another has a very small number of distinct values and is never null (though it hasn&#8217;t been declared as <em><strong>not null</strong></em>). I&#8217;ve created an index that starts with the &#8220;always null&#8221; column (in a production system we&#8217;d really be looking at a column that was &#8220;almost always&#8221; null and have a few special rows where the column was not null, so an index like this can make sense).</p> <p>I&#8217;ve also got a few lines, commented out, to create extended stats on the column group <strong><em>(non_null, null_col)</em></strong> because any anomaly relating to the handling of the number of distinct keys in a multi-column index may also be relevant to column groups.
I can run two variations of this code, one with the index, one without the index but with the column group, and see the same cardinality issue appearing in both cases.</p> <p>So let&#8217;s execute a couple of queries &#8211; after setting up a couple of bind variables &#8211; and pull their execution plans from memory:</p> <pre class="brush: plain; title: ; notranslate"> variable b_null number variable b_nonnull number exec :b_null := 5 exec :b_nonnull := 5 set serveroutput off prompt =================== prompt Query null_col only prompt =================== select count(small_vc) from t1 where null_col = :b_null ; select * from table(dbms_xplan.display_cursor(null,null,'-plan_hash')); prompt ========================= prompt Query (null_col,non_null) prompt ========================= select count(small_vc) from t1 where null_col = :b_null and non_null = :b_nonnull ; select * from table(dbms_xplan.display_cursor(null,null,'-plan_hash')); </pre> <p>The optimizer has statistics that tell it that <em><strong>null_col</strong></em> is always null so its estimate of rows where <em>null_col = 5</em> should be zero (which will be rounded up to 1); and we have an index starting with <em><strong>null_col</strong></em> so we might expect the optimizer to use an index range scan on that index for these queries. 
Here are the plans that actually appeared:</p> <pre class="brush: plain; title: ; notranslate"> SQL_ID danj9r6rq3c7g, child number 0 ------------------------------------- select count(small_vc) from t1 where null_col = :b_null -------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | -------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 2 (100)| | | 1 | SORT AGGREGATE | | 1 | 24 | | | | 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 24 | 2 (0)| 00:00:01 | |* 3 | INDEX RANGE SCAN | T1_I1 | 1 | | 2 (0)| 00:00:01 | -------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - access(&quot;NULL_COL&quot;=:B_NULL) SQL_ID d8kbtq594bsp0, child number 0 ------------------------------------- select count(small_vc) from t1 where null_col = :b_null and non_null = :b_nonnull --------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | --------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | | | 2189 (100)| | | 1 | SORT AGGREGATE | | 1 | 27 | | | |* 2 | TABLE ACCESS FULL| T1 | 100K| 2636K| 2189 (4)| 00:00:11 | --------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 2 - filter((&quot;NULL_COL&quot;=:B_NULL AND &quot;NON_NULL&quot;=:B_NONNULL)) </pre> <p>Take a careful look at what we&#8217;ve got: the second query has to access exactly the same table rows as those identified by the first query and then apply a second predicate which may discard some of those rows &#8211; but the optimizer has changed the access path from a low-cost index-driven access to a high 
cost tablescan. This is clearly idiotic &#8211; there has to be a flaw in the optimizer logic in this situation.</p> <p>The defect revolves around a slight inconsistency in the handling of column groups &#8211; whether they are explicitly created, or simply inferred by reference to <em><strong>user_indexes.distinct_keys</strong></em>. The anomaly is most easily seen by explicitly creating the column group, gathering stats, and reporting from <em><strong>user_tab_cols</strong></em>.</p> <pre class="brush: plain; title: ; notranslate"> select column_name, sample_size, num_distinct, num_nulls, density, histogram, data_default from user_tab_cols where table_name = upper('T1') order by column_id ; COLUMN_NAME Sample Distinct NUM_NULLS DENSITY HISTOGRAM DATA_DEFAULT -------------------------------- ------------ ------------ ---------- ---------- --------------- -------------------------------------------- NON_NULL 1,000,000 10 0 .1 NONE NULL_COL 0 1000000 0 NONE SMALL_VC 1,000,000 995,008 0 .000001005 NONE PADDING 1,000,000 1 0 1 NONE SYS_STULC#01EE$DE1QB7UY1K4$PBI 1,000,000 10 0 .1 NONE SYS_OP_COMBINED_HASH(&quot;NON_NULL&quot;,&quot;NULL_COL&quot;) </pre> <p>As you can see, the optimizer can note that <em>&#8220;null_col&#8221;</em> is always null so the arithmetic for <em>&#8220;null_col = :bind1&#8221;</em> is going to produce a very small cardinality estimate; on the other hand when the optimizer sees <em>&#8220;null_col = :bind1 and non_null = :bind2&#8221;</em> it&#8217;s going to transform this into the single predicate <em>&#8220;SYS_STULC#01EE$DE1QB7UY1K4$PBI = sys_op_combined_hash(null_col, non_null)&#8221;</em>, and the statistics say there are 10 distinct values for this (virtual) column with no nulls &#8211; hence the huge cardinality estimate and full tablescan.</p> <p>The &#8220;slight inconsistency&#8221; in handling that I mentioned above is that if you used a predicate like <em>&#8220;null_col <strong>is null</strong> and non_null = :bind2&#8221;</em>
the optimizer would not use the column group because of <a href="https://jonathanlewis.wordpress.com/2015/11/05/column-groups/"><strong><em>the &#8220;is null&#8221;</em> condition</strong></a> &#8211; even though it&#8217;s exactly the case where the column group statistics would be appropriate. (In the example I&#8217;ve constructed the optimizer&#8217;s estimate from ignoring the column group would actually be correct &#8211; and identical to the estimate it would get from using the column group &#8211; because the column is null for every single row.)</p> <h3>tl;dr</h3> <p>Column groups can give you some very bad estimates, and counter-intuitive behaviour, if any of the columns in the group has a significant percentage of nulls; this happens because the column group makes the optimizer lose sight of the number of nulls in the underlying data set.</p> <p>&nbsp;</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=15666 Mon Oct 22 2018 12:36:21 GMT-0400 (EDT) Oracle Database EM 18 XE Available to Remote Clients http://surachartopun.com/2018/10/oracle-database-em-18-xe-available-to.html I found a lot of posts about <b><a href="https://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html" target="_blank">Oracle Database 18 XE</a></b>. It's very interesting for me.&nbsp; I didn't blog about how to install, because it's very easy using the rpm package and the <a href="https://www.oracle.com/database/technologies/appdev/xe/quickstart.html" target="_blank">documentation</a> is very helpful.<br />I was interested in&nbsp;Enterprise Manager Database Express 18.4.0.0.0. What does it look like?<br /><b>- Installing.
I used CentOS7.</b><br /><blockquote class="tr_bq">[student@centos-learning ~]$ <span style="color: blue;">sudo yum -y localinstall oracle-database*18c*</span><br />[student@centos-learning ~]$ <span style="color: blue;">sudo rpm -qa |grep oracle</span><br />oracle-database-preinstall-18c-1.0-1.el7.x86_64<br />oracle-database-xe-18c-1.0-1.x86_64<br /><br />[student@centos-learning ~]$ <span style="color: blue;">sudo /etc/init.d/oracle-xe-18c configure</span><br />Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:<br />The password you entered contains invalid characters. Enter password:<br />Confirm the password:<br />Configuring Oracle Listener.<br />Listener configuration succeeded.<br />Configuring Oracle Database XE.<br />Enter SYS user password:<br />*********<br />Enter SYSTEM user password:<br />********<br />Enter PDBADMIN User Password:<br />*********<br />Prepare for db operation<br />7% complete<br />Copying database files<br />29% complete<br />Creating and starting Oracle instance<br />30% complete<br />31% complete<br />34% complete<br />38% complete<br />41% complete<br />43% complete<br />Completing Database Creation<br />47% complete<br />50% complete<br />Creating Pluggable Databases<br />54% complete<br />71% complete<br />Executing Post Configuration Actions<br />93% complete<br />Running Custom Scripts<br />100% complete<br />Database creation complete. 
For details check the logfiles at:<br />&nbsp;/opt/oracle/cfgtoollogs/dbca/XE.<br />Database Information:<br />Global Database Name:XE<br />System Identifier(SID):XE<br />Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.<br />Connect to Oracle Database using one of the connect strings:<br />&nbsp; &nbsp; &nbsp;Pluggable database: centos-learning.surachartopun.com/XEPDB1<br />&nbsp; &nbsp; &nbsp;Multitenant container database: centos-learning.surachartopun.com<br />Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE<br />[student@centos-learning ~]$ <span style="color: blue;">netstat -ltn |grep 5500</span><br />tcp&nbsp; &nbsp; &nbsp; &nbsp; 0&nbsp; &nbsp; &nbsp; 0 <b>127.0.0.1:5500&nbsp; </b>&nbsp; &nbsp; &nbsp; &nbsp; 0.0.0.0:*&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;LISTEN</blockquote><div><b>- As I didn't want to connect 127.0.0.1, I changed binding - "<a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/xeinl/making-oracle-database-em-express-available-remote-clients.html" target="_blank">Making Oracle Database EM Express Available to Remote Clients</a>"</b><br /><blockquote>SQL&gt; !netstat -ltn |grep 5500<br />tcp&nbsp; &nbsp; &nbsp; &nbsp; 0&nbsp; &nbsp; &nbsp; 0 <b>127.0.0.1:5500&nbsp; &nbsp;</b> &nbsp; &nbsp; &nbsp; 0.0.0.0:*&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;LISTEN<br />SQL&gt; !lsnrctl status | grep HTTP<br />&nbsp; (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=127.0.0.1)(PORT=5500))(Security=(my_wallet_directory=/opt/oracle/product/18c/dbhomeXE/admin/XE/xdb_wallet))(Presentation=HTTP)(Session=RAW))<br />SQL&gt;<br />SQL&gt;<br />SQL&gt; <span style="color: blue;"><b>EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);</b></span><br />PL/SQL procedure successfully completed.<br />SQL&gt; !netstat -ltn |grep 5500<br />tcp&nbsp; &nbsp; &nbsp; &nbsp; 0&nbsp; &nbsp; &nbsp; <b>0 0.0.0.0:5500</b>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 
0.0.0.0:*&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;LISTEN<br />SQL&gt; !lsnrctl status | grep HTTP<br />&nbsp; (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=centos-learning.surachartopun.com)(PORT=5500))(Security=(my_wallet_directory=/opt/oracle/admin/XE/xdb_wallet))(Presentation=HTTP)(Session=RAW))</blockquote><div class="separator" style="clear: both; text-align: left;"><b>- Browsed it - https://IP:5500/em</b></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-RqxcG1OXANM/W83vTQm_VPI/AAAAAAAADFk/REc6_I9D9ZwjlYAZ1Gra3n_DT6IOAl0owCLcBGAs/s1600/1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="440" data-original-width="1037" height="168" src="https://2.bp.blogspot.com/-RqxcG1OXANM/W83vTQm_VPI/AAAAAAAADFk/REc6_I9D9ZwjlYAZ1Gra3n_DT6IOAl0owCLcBGAs/s400/1.jpg" width="400" /></a></div><div><br /></div><div>However, I got some error like "<span style="color: red;">Connection with database failed. 
Database instance might be down.</span>"</div></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-xuTYEiCYYT0/W83v0MgnfmI/AAAAAAAADFs/s8xixGixmZkNnj5j1b577ya8GW0LoxqkgCLcBGAs/s1600/2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="319" data-original-width="371" height="275" src="https://3.bp.blogspot.com/-xuTYEiCYYT0/W83v0MgnfmI/AAAAAAAADFs/s8xixGixmZkNnj5j1b577ya8GW0LoxqkgCLcBGAs/s320/2.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><b>- Checked alert log file and fixed.</b><br /><b>Error:</b> <i>Global ports off in Root, do SetGlobalPortEnabled(TRUE) to enable.</i><br /><blockquote class="tr_bq">bash-4.2$ tail -f alert_XE.log<br />2018-10-22T22:06:32.890217+07:00<br />Global ports off in Root, do SetGlobalPortEnabled(TRUE) to enable.<br />2018-10-22T22:06:38.489011+07:00<br />Global ports off in Root, do SetGlobalPortEnabled(TRUE) to enable.<br />2018-10-22T22:10:32.402822+07:00<br />Resize operation completed for file# 3, old size 501760K, new size 512000K<br />2018-10-22T22:15:55.791490+07:00<br />Global ports off in Root, do SetGlobalPortEnabled(TRUE) to enable.<br />2018-10-22T22:18:02.248906+07:00<br /><span style="color: red;">Global ports off in Root, do SetGlobalPortEnabled(TRUE) to enable.</span></blockquote>Setting the Global Port for EM Express to Manage a CDB and the PDBs. 
(<i><b>It might not be the right solution, but I just wanted to see EM</b></i>).<br /><blockquote>SQL&gt; select dbms_xdb_config.getHttpsPort() from dual;<br />DBMS_XDB_CONFIG.GETHTTPSPORT()<br />------------------------------<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 5500<br />SQL&gt; <b><span style="color: blue;">exec dbms_xdb_config.SetGlobalPortEnabled(TRUE)</span></b><br />PL/SQL procedure successfully completed.</blockquote><b>- Login again.</b><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-_59lII-5sek/W83xrcHm6nI/AAAAAAAADF4/Nc7HYCmnpSItf1-kspDoX2oMM1BuFwu1gCLcBGAs/s1600/3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="425" data-original-width="1053" height="258" src="https://1.bp.blogspot.com/-_59lII-5sek/W83xrcHm6nI/AAAAAAAADF4/Nc7HYCmnpSItf1-kspDoX2oMM1BuFwu1gCLcBGAs/s640/3.jpg" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-1TpDTc-GlV8/W83yPtHMEAI/AAAAAAAADGA/mjuL1MiS_KQ2mQOrBkyNogW0EopRUKtUgCLcBGAs/s1600/4.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="559" data-original-width="1329" height="268" src="https://4.bp.blogspot.com/-1TpDTc-GlV8/W83yPtHMEAI/AAAAAAAADGA/mjuL1MiS_KQ2mQOrBkyNogW0EopRUKtUgCLcBGAs/s640/4.jpg" width="640" /></a></div><br />It worked fine for now.<br /><br /><b>Reference:</b>&nbsp;I<a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/xeinl/installation-guide.html#GUID-31891F22-B1FA-4489-A1C5-195E6B3D89C8" target="_blank">nstallation Guide for Linux x86-64</a><div class="blogger-post-footer">Written By: Surachart Opun http://surachartopun.com</div> Surachart Opun tag:blogger.com,1999:blog-20612393.post-8455735893853176440 Mon Oct 22 2018 11:58:00 GMT-0400 (EDT) A Day in the Life of a Red Cross Volunteer 
https://blog.pythian.com/day-life-red-cross-volunteer/ <div class="l-submain"><div class="l-submain-h g-html i-cf"><p>In addition to my day job working as a team manager at Pythian, I am fortunate to be part of a specialist Red Cross volunteer team called an Emergency Response Unit (ERU). There are many of these ERUs around the world, serving many purposes, but I am a member of an IT&amp;T ERU, responsible for all things to do with radio communications, computers and networks.</p> <p>In the event of a natural disaster, a country may request the help of the international element of the Red Cross if the disaster is sufficiently big to warrant it. The Red Cross office in Geneva may also elect to send one or more ERUs along with their response to assist the efforts there.</p> <p>My job as an ERU member is very much a support role. I am there to make sure that the doctors, builders, engineers and everyone else, both from overseas and locals, are able to talk to each other, have access to the information they need, consult with their colleagues back home and to help make them able to operate as efficiently as possible.</p> <p>I have been a member of the New Zealand IT&amp;T ERU since 2010. During that time, I’ve been sent to Samoa in 2012 to assist in the wake of Cyclone Evan. I’ve also been to Tacloban in The Philippines over Christmas and New Years in 2013 for Typhoon Yolanda and am now here in Palu, Indonesia for the significant earthquake and tsunami they have experienced.</p> <p>A large earthquake, combined with a geological phenomenon called “liquefaction”, has severely damaged buildings, and in places, completely destroyed towns. A tsunami then hit the coast, sweeping away people and wreckage and causing more collapses. Thousands of people have been killed and thousands more are still missing, presumed buried in the rubble and mud.</p> <p>Thanks to Pythian’s flexibility, I was able to jump on a plane and head straight out. 
After a stopover in Jakarta while the next leg was arranged, I arrived in Palu &#8211; a city on Sulawesi that was hit very hard. Everywhere I go, I see Indonesian flags flying at half mast from houses, fences and lamp posts. Power is now starting to come back on again in the bigger towns, and cellular connectivity is being restored but is still unreliable. There are still aftershocks from time to time. While these can be scary for some of the volunteers here, it must be so much worse for those that lived through the trauma of the initial disaster.</p> <p>I have been spending my time working in a temporary field office in Palu, and making trips out to the surrounding areas to meet with the various people in the field to find out what they need to make their jobs easier.</p> <p>It’s interesting and varied work during a disaster response.</p> <p>Yesterday I drove north up the coast to meet with the director of a field medical clinic in Tompe, near the epicenter of the earthquake. Their clinic is completely out of touch with the rest of the operation and their doctors working out in the field are unable to communicate with the clinic until they return at the end of their trips. While there, I also visited the Tompe Red Cross field office and was lucky enough to share a meal with them while we talked about what they needed to make their jobs easier.</p> <p>I spent today working with the VHF radio network, resolving a knotty CTCSS issue that was causing radios to refuse signals from some others. Solving that has allowed us to join the local Indonesian disaster response and logistics radio net. 
Of course, being the tech in the room means there’s a lot of odd side-requests, like coaxing a little more life out of a spent toner cartridge while someone is out looking for a surviving shop that has printer supplies.</p> <p>Tomorrow a colleague and I are driving back up north to the Tompe clinic with a satellite uplink, and some networking equipment to provide a field medical clinic with a reliable internet feed. It’s a long drive and the roads have been badly damaged, so we’ll have to make a couple of days of it, camping out with the medical team while we get everything working. Then, once we are confident it is going to hold up and we have a way to remotely fix any issues, we’ll make our way back to Palu again.</p> <p>Once the rest of the equipment that is following me from New Zealand clears Indonesian customs, we’ll likely return to Tompe with a kitset VHF radio, mast and antenna, along with handheld radios for the doctors going into the field.</p> <p>I will be here for three more weeks of a month-long rotation. 
After that, I’ll go home to my family, and the next month’s rotation will pick up the baton.</p> <p>As you might imagine, it is difficult to drop everything at 48 hours notice and step away from a busy life and job, but I’m fortunate to be working for a company that values humanitarian work and that is flexible enough to allow employees to do things like this.</p> <p><img class="alignnone size-full wp-image-105302" src="https://blog.pythian.com/wp-content/uploads/Meeting-with-the-medical-clinic-director-up-north-at-the-epicenter.jpg" alt="Emergency response volunteers in Indonesia" width="4032" height="3024" srcset="https://blog.pythian.com/wp-content/uploads/Meeting-with-the-medical-clinic-director-up-north-at-the-epicenter.jpg 4032w, https://blog.pythian.com/wp-content/uploads/Meeting-with-the-medical-clinic-director-up-north-at-the-epicenter-465x349.jpg 465w, https://blog.pythian.com/wp-content/uploads/Meeting-with-the-medical-clinic-director-up-north-at-the-epicenter-350x263.jpg 350w" sizes="(max-width: 4032px) 100vw, 4032px" /></p> <p><img class="alignnone size-full wp-image-105305" src="https://blog.pythian.com/wp-content/uploads/At-the-temporary-office-programing-radios-to-join-the-local-VHF-net.jpg" alt="" width="5312" height="2988" srcset="https://blog.pythian.com/wp-content/uploads/At-the-temporary-office-programing-radios-to-join-the-local-VHF-net.jpg 5312w, https://blog.pythian.com/wp-content/uploads/At-the-temporary-office-programing-radios-to-join-the-local-VHF-net-465x262.jpg 465w, https://blog.pythian.com/wp-content/uploads/At-the-temporary-office-programing-radios-to-join-the-local-VHF-net-350x197.jpg 350w" sizes="(max-width: 5312px) 100vw, 5312px" /><img class="alignnone size-medium wp-image-105303" src="https://blog.pythian.com/wp-content/uploads/20181016_094924.jpg" alt="" width="5312" height="2988" srcset="https://blog.pythian.com/wp-content/uploads/20181016_094924.jpg 5312w, 
https://blog.pythian.com/wp-content/uploads/20181016_094924-465x262.jpg 465w, https://blog.pythian.com/wp-content/uploads/20181016_094924-350x197.jpg 350w" sizes="(max-width: 5312px) 100vw, 5312px" /></p> </div></div> Chris Harrison https://blog.pythian.com/?p=105301 Mon Oct 22 2018 10:23:53 GMT-0400 (EDT) Rapid generation of Oracle DDL scripts for Tables, PL/SQL APIs, Sample Data https://technology.amis.nl/2018/10/21/rapid-generation-of-oracle-ddl-scripts-for-tables-pl-sql-apis-sample-data/ <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-31.png"><img width="190" height="226" title="image" align="right" style="margin: 0px; float: right; display: inline; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-31.png" border="0"></a>Yesterday, at the Oracle ACE Directors Product Briefing, I received a gift. It is called QuickSQL. And it is a free online service that generates SQL DDL and DML scripts. The gift in this case was the knowledge about this service &#8211; I was not aware of it. </p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-32.png"><img width="867" height="232" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-32.png" border="0"></a></p> <p>Go to <a title="http://quicksql.oracle.com" href="http://quicksql.oracle.com">http://quicksql.oracle.com</a>. Try it out. 
If you have a need for a quick set of database tables for a demo or a prototype &#8211; with sample data, constraints, audit columns, change history, an ORDS-based REST API &#8211; you can leverage QuickSQL to generate all the required DDL and DML scripts from just a few lines of YAML and some declarative settings, specifying what you want to have generated.</p> <p>For people like me who keep forgetting DDL syntax, this is really convenient as well.</p> <p>And of course this service brings back memories of Oracle Designer and the Database Design Transformer and DDL Generator &#8211; with their abilities to take declarative definitions and turn them into concrete and enriched code. And the TAPI triggers and packages.</p> <p>Typing this code snippet:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-33.png"><img width="327" height="252" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-33.png" border="0"></a></p> <p>is enough to get you 400 lines of DDL and DML code, generating two tables with several constraints, a view, two PL/SQL packages, insert and update triggers, and DML statements to create valid sample data.</p> <p>It is so easy to get started with that you do not really need a blog article to get you started. Perhaps a few examples will convince you.
</p> <p>The indented definition shown above is entered into the worksheet in QuickSQL:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-34.png"><img width="670" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-34.png" border="0"></a></p> <p>Whenever a new line is entered in the worksheet &#8211; or the generate SQL button is pressed &#8211; the generated code on the right hand side is refreshed.</p> <p>Let&#8217;s look at a few of the things QuickSQL generates for us.</p> <p>Two create table statements are created:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-35.png"><img width="558" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-35.png" border="0"></a></p> <p>Some of the noteworthy elements:</p> <ul> <li>audit columns in DEPARTMENTS &#8211; because of the /auditcols directive. 
Triggers are created as well to set the values for these columns</li> <li>the foreign key column DEPARTMENT_ID to reference the master DEPARTMENT from the child Employee; because the employees table is defined indented under the departments table, it is interpreted as a child</li> <li>the foreign key constraint itself, tying the child table to the master</li> <li>a check constraint from the /check directive</li> </ul> <p></p> <p>Here is the DML trigger for insert or update of the departments table:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-36.png"><img width="562" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-36.png" border="0"></a></p> <p>The API that is generated for the departments table because of the /api directive &#8211; bringing back memories of the TAPI package generated by Oracle Designer:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-37.png"><img width="397" height="966" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-37.png" border="0"></a></p> <p>Here is a little piece of the API package body:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-38.png"><img width="411" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-38.png" border="0"></a></p> <p>The view specification &#8211; view emp_v departments employees &#8211; resulted in this create view statement:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-39.png"><img width="493" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" 
alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-39.png" border="0"></a></p> <p>This brings me to a little improvement suggestion: I believe the ANSI SQL JOIN syntax is superior to this WHERE clause based syntax &#8211; if for no other reason than for readability and a clear separation between the filtering logic in the view and the join instructions. I would suggest QuickSQL generates the following join instruction:</p> <blockquote> <p>from departments</p> <p>left outer join </p> <p>employees</p> <p>on (employees.department_id = departments.id)</p> </blockquote> <p>Finally an example of the generated DML statements to load sample data into the tables:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML81f0e86.png"><img width="298" height="338" title="SNAGHTML81f0e86" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="SNAGHTML81f0e86" src="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML81f0e86_thumb.png" border="0"></a></p> <p>The /values instruction for the country column (/value EN, NL) is used to randomly select values for country from that set of values. 
For other columns, QuickSQL has a collection of data values or value generators to create valid, somewhat meaningful and largely random data.</p> <p>The lists of table and column directives that are understood by QuickSQL are shown here:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-40.png"><img width="481" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-40.png" border="0"></a></p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-41.png"><img width="541" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-41.png" border="0"></a></p> <p>As you can see, there are many things that QuickSQL can do for you.</p> <p>QuickSQL also comes with a number of predefined sample sets of tables &#8211; to make it even easier to quickly generate a prototype database schema:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-42.png"><img width="693" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-42.png" border="0"></a></p> <p>For each of these samples, an ERD is available to quickly offer an overview:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-43.png"><img width="413" height="338" title="image" style="margin: 0px auto; float: none; display: block; background-image: none;" alt="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-43.png" border="0"></a></p> <p>Note: you can save your own models &#8211; for later reuse and fine-tuning &#8211; and share those models with others &#8211; by sharing the generated URL to the saved model.</p> 
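<p>To recap the shorthand notation, a complete QuickSQL model exercising the directives discussed in this post could look roughly like the sketch below. This is reconstructed from the fragments shown here &#8211; the exact column list and the salary check are my assumptions, not the original model:</p> <pre class="brush: plain; title: ; notranslate">
departments /api
  name
  country /values EN, NL
  employees
    name
    salary /check salary &gt; 0
view emp_v departments employees
</pre>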
<p></p> <p>I am quite happy with this little gift. Great work from Oracle. </p> <h3>Resources</h3> <p>Documentation on QuickSQL: <a title="https://apex.oracle.com/pls/apex/f?p=8675309:HELP:108937483794028::NO:::" href="https://apex.oracle.com/pls/apex/f?p=8675309:HELP:108937483794028::NO:::">https://apex.oracle.com/pls/apex/f?p=8675309:HELP:108937483794028::NO:::</a>&nbsp;</p> <p>Note that QuickSQL itself does not run the generated SQL; you can run it on <a href="http://livesql.oracle.com/">livesql.oracle.com</a>, on an Oracle database cloud service or on a local Oracle database.</p> <p>A quick demo video is shown here: <a title="https://youtu.be/BCs2jWkdVFg" href="https://youtu.be/BCs2jWkdVFg">https://youtu.be/BCs2jWkdVFg</a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/21/rapid-generation-of-oracle-ddl-scripts-for-tables-pl-sql-apis-sample-data/">Rapid generation of Oracle DDL scripts for Tables, PL/SQL APIs, Sample Data</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50190 Sun Oct 21 2018 09:56:48 GMT-0400 (EDT) add_colored_sql https://jonathanlewis.wordpress.com/2018/10/19/add_colored_sql/ <p>The <a href="https://www.freelists.org/post/oracle-l/Capturing-sql-statement"><em><strong>following request</strong></em></a> appeared recently on the <a href="https://www.freelists.org/archive/oracle-l"><em><strong>Oracle-L mailing list</strong></em></a>:</p> <p style="padding-left:30px;"><em>I have one scenario related to capturing of sql statement in history table..  Like dba_hist_sqltext capture the queries that ran for 10 sec or more..  How do I get the sql stmt which took less time say in  millisecond..  
Any idea please share.</em></p> <p>An AWR snapshot captures statements that (a) meet some workload criteria such as <em>&#8220;lots of executions&#8221;</em> and (b) happen to be in the library cache when the snapshot takes place; but if you have some statements which you think are important or interesting enough to keep an eye on that don&#8217;t do enough work to meet the normal workload requirements of the AWR snapshots it&#8217;s still possible to tell Oracle to capture them by <a href="http://dilbert.com/strip/1995-11-17"><strong><em>&#8220;coloring&#8221;</em></strong></a> them.  (Apologies for the American spelling &#8211; it&#8217;s necessary to avoid error <em>&#8216;PLS_00302: component %s must be declared&#8217;</em>.)</p> <p>Somewhere in the 11gR1 timeline the package <em><strong>dbms_workload_repository</strong></em> acquired the following two procedures:</p> <pre class="brush: plain; title: ; notranslate">
PROCEDURE ADD_COLORED_SQL
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 DBID                           NUMBER                  IN     DEFAULT

PROCEDURE REMOVE_COLORED_SQL
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 DBID                           NUMBER                  IN     DEFAULT
</pre> <p>You have to be licensed to use the workload repository, of course, but if you are you can call the first procedure to mark an SQL statement as &#8220;interesting&#8221;, after which its execution statistics will be captured whenever it&#8217;s still in the library cache at snapshot time. 
The second procedure lets you stop the capture &#8211; and you will probably want to use this procedure from time to time because there&#8217;s a limit (currently 100) to the number of statements you&#8217;re allowed to color and if you try to exceed the limit your call will raise Oracle error ORA-13534.</p> <pre class="brush: plain; title: ; notranslate">
ORA-13534: Current SQL count(100) reached maximum allowed (100)
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 751
ORA-06512: at line 3
</pre> <p>If you want to see the list of statements currently marked as colored you can query table <em><strong>wrm$_colored_sql</strong></em>, exposed through the views <em><strong>dba_hist_colored_sql</strong></em> and (in 12c) <em><strong>cdb_hist_colored_sql</strong></em>. (Note: I haven&#8217;t yet tested whether the limit of 100 statements is per PDB or summed across the entire CDB [but see comment #2 below] &#8211; and the answer may vary with version of Oracle, of course).</p> <pre class="brush: plain; title: ; notranslate">
SQL&gt; select * from sys.wrm$_colored_sql;

      DBID SQL_ID        OWNER      CREATE_TI
---------- ------------- ---------- ---------
3089296639 aedf339438ww3          1 28-SEP-18

1 row selected.
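
SQL&gt; -- sketch (not part of the original output): coloring / un-coloring a statement yourself;
SQL&gt; -- 'aedf339438ww3' is the example sql_id listed above
SQL&gt; execute dbms_workload_repository.add_colored_sql(sql_id =&gt; 'aedf339438ww3')
SQL&gt; execute dbms_workload_repository.remove_colored_sql(sql_id =&gt; 'aedf339438ww3')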
</pre> <p>If you&#8217;ve had to color a statement to force the AWR snapshot to capture it, the statement probably won&#8217;t appear in the standard AWR reports; but it will be available to the <em>&#8220;AWR SQL&#8221;</em> report (which I usually generate from SQL*Plus with a call to <em><strong>$ORACLE_HOME/rdbms/admin/awrsqrpt.sql</strong></em>).</p> <h3>Footnote</h3> <p>If the statement you&#8217;re interested in executes very infrequently and often drops out of the library cache before it can be captured in an AWR snapshot then an alternative strategy is to <a href="https://jonathanlewis.wordpress.com/2014/05/22/sql_trace/"><em><strong>enable system-wide tracing for that statement</strong></em></a> so that you can capture every execution in a trace file.</p> <p>&nbsp;</p> Jonathan Lewis http://jonathanlewis.wordpress.com/?p=18868 Fri Oct 19 2018 10:08:07 GMT-0400 (EDT) Installation of Oracle 18c (18.3) RPM manually http://dirknachbar.blogspot.com/2018/10/installation-of-oracle-18c-183-rpm.html Since last night the RPM version of Oracle 18c (18.3) is available, see my blog post&nbsp;<a href="http://dirknachbar.blogspot.com/2018/10/oracle-18c-rpm-for-linux-available.html" target="_blank">http://dirknachbar.blogspot.com/2018/10/oracle-18c-rpm-for-linux-available.html</a><br /><br />I directly tested the manual way of installing the Oracle 18c (18.3) RPM version, i.e. without the Unbreakable Linux Network (ULN).<br /><br />As a prerequisite you will need an up-and-running Linux server, in my case an Oracle Enterprise Linux 7.4, plus the Oracle 18c RPM as well as the Oracle 18c Preinstallation RPM.<br /><br /><ul><li>The Oracle 18c RPM you will find under&nbsp;<a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html" target="_blank">https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html</a></li><li>The Oracle 18c Preinstallation RPM you will find 
here&nbsp;<a href="https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm" target="_blank">https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm</a></li></ul><br />Transfer the two files mentioned above to any temporary directory on your target server, then as the root user navigate in a shell to that directory and execute the following commands:<br /><br />1. Install the Oracle 18c Preinstallation RPM<br /><br /><pre class="brush:bash">yum localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm <br />Loaded plugins: langpacks, ulninfo<br />Examining oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm: oracle-database-preinstall-18c-1.0-1.el7.x86_64<br />Marking oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm to be installed<br />Resolving Dependencies<br />--&gt; Running transaction check<br />---&gt; Package oracle-database-preinstall-18c.x86_64 0:1.0-1.el7 will be installed<br />--&gt; Processing Dependency: glibc-devel for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64<br />--&gt; Processing Dependency: ksh for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64<br />--&gt; Processing Dependency: libaio-devel for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64<br />--&gt; Processing Dependency: libstdc++-devel for package: oracle-database-preinstall-18c-1.0-1.el7.x86_64<br />--&gt; Running transaction check<br />---&gt; Package glibc-devel.x86_64 0:2.17-196.el7 will be installed<br />--&gt; Processing Dependency: glibc-headers = 2.17-196.el7 for package: glibc-devel-2.17-196.el7.x86_64<br />--&gt; Processing Dependency: glibc-headers for package: glibc-devel-2.17-196.el7.x86_64<br />---&gt; Package ksh.x86_64 0:20120801-34.el7 will be installed<br />---&gt; Package libaio-devel.x86_64 0:0.3.109-13.el7 will be installed<br />---&gt; Package libstdc++-devel.x86_64 0:4.8.5-16.el7 will be 
installed<br />--&gt; Running transaction check<br />---&gt; Package glibc-headers.x86_64 0:2.17-196.el7 will be installed<br />--&gt; Finished Dependency Resolution<br /><br />Dependencies Resolved<br /><br />====================================================================================================================================================================<br /> Package Arch Version Repository Size<br />====================================================================================================================================================================<br />Installing:<br /> oracle-database-preinstall-18c x86_64 1.0-1.el7 /oracle-database-preinstall-18c-1.0-1.el7.x86_64 55 k<br />Installing for dependencies:<br /> glibc-devel x86_64 2.17-196.el7 OL74 1.1 M<br /> glibc-headers x86_64 2.17-196.el7 OL74 675 k<br /> ksh x86_64 20120801-34.el7 OL74 883 k<br /> libaio-devel x86_64 0.3.109-13.el7 OL74 12 k<br /> libstdc++-devel x86_64 4.8.5-16.el7 OL74 1.5 M<br /><br />Transaction Summary<br />====================================================================================================================================================================<br />Install 1 Package (+5 Dependent packages)<br /><br />Total size: 4.1 M<br />Total download size: 4.1 M<br />Installed size: 14 M<br />Is this ok [y/d/N]: y<br />Downloading packages:<br />warning: /var/OSimage/OL7.4_x86_64/Packages/glibc-devel-2.17-196.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY<br />Public key for glibc-devel-2.17-196.el7.x86_64.rpm is not installed<br />--------------------------------------------------------------------------------------------------------------------------------------------------------------------<br />Total 52 MB/s | 4.1 MB 00:00:00 <br />Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY<br />Importing GPG key 0xEC551F03:<br /> Userid : "Oracle OSS group (Open Source Software group) &lt;build@oss.oracle.com&gt;"<br /> 
Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03<br /> Package : 7:oraclelinux-release-7.4-1.0.4.el7.x86_64 (@anaconda/7.4)<br /> From : /etc/pki/rpm-gpg/RPM-GPG-KEY<br />Is this ok [y/N]: y<br />Running transaction check<br />Running transaction test<br />Transaction test succeeded<br />Running transaction<br /> Installing : ksh-20120801-34.el7.x86_64 1/6 <br /> Installing : glibc-headers-2.17-196.el7.x86_64 2/6 <br /> Installing : glibc-devel-2.17-196.el7.x86_64 3/6 <br /> Installing : libaio-devel-0.3.109-13.el7.x86_64 4/6 <br /> Installing : libstdc++-devel-4.8.5-16.el7.x86_64 5/6 <br /> Installing : oracle-database-preinstall-18c-1.0-1.el7.x86_64 6/6 <br /> Verifying : oracle-database-preinstall-18c-1.0-1.el7.x86_64 1/6 <br /> Verifying : libstdc++-devel-4.8.5-16.el7.x86_64 2/6 <br /> Verifying : libaio-devel-0.3.109-13.el7.x86_64 3/6 <br /> Verifying : glibc-headers-2.17-196.el7.x86_64 4/6 <br /> Verifying : glibc-devel-2.17-196.el7.x86_64 5/6 <br /> Verifying : ksh-20120801-34.el7.x86_64 6/6 <br /><br />Installed:<br /> oracle-database-preinstall-18c.x86_64 0:1.0-1.el7 <br /><br />Dependency Installed:<br /> glibc-devel.x86_64 0:2.17-196.el7 glibc-headers.x86_64 0:2.17-196.el7 ksh.x86_64 0:20120801-34.el7 libaio-devel.x86_64 0:0.3.109-13.el7 <br /> libstdc++-devel.x86_64 0:4.8.5-16.el7 <br /><br />Complete!<br /></pre><br />2. 
Install the Oracle 18c (18.3) RPM<br /><br /><pre class="brush:bash">yum localinstall oracle-database-ee-18c-1.0-1.x86_64.rpm <br />Loaded plugins: langpacks, ulninfo<br />Examining oracle-database-ee-18c-1.0-1.x86_64.rpm: oracle-database-ee-18c-1.0-1.x86_64<br />Marking oracle-database-ee-18c-1.0-1.x86_64.rpm to be installed<br />Resolving Dependencies<br />--&gt; Running transaction check<br />---&gt; Package oracle-database-ee-18c.x86_64 0:1.0-1 will be installed<br />--&gt; Finished Dependency Resolution<br /><br />Dependencies Resolved<br /><br />====================================================================================================================================================================<br /> Package Arch Version Repository Size<br />====================================================================================================================================================================<br />Installing:<br /> oracle-database-ee-18c x86_64 1.0-1 /oracle-database-ee-18c-1.0-1.x86_64 7.8 G<br /><br />Transaction Summary<br />====================================================================================================================================================================<br />Install 1 Package<br /><br />Total size: 7.8 G<br />Installed size: 7.8 G<br />Is this ok [y/d/N]: y<br />Downloading packages:<br />Running transaction check<br />Running transaction test<br />Transaction test succeeded<br />Running transaction<br /> Installing : oracle-database-ee-18c-1.0-1.x86_64 1/1 <br />[INFO] Executing post installation scripts...<br />[INFO] Oracle home installed successfully and ready to be configured.<br />To configure a sample Oracle Database you can execute the following service configuration script as root: /etc/init.d/oracledb_ORCLCDB-18c configure<br /> Verifying : oracle-database-ee-18c-1.0-1.x86_64 1/1 <br /><br />Installed:<br /> oracle-database-ee-18c.x86_64 0:1.0-1 <br /><br />Complete!<br /></pre><br />As 
a next step we need to configure the Oracle 18c database by executing the /etc/init.d/oracledb_ORCLCDB-18c script as the root user.<br /><br />In case you receive the following error message while executing the /etc/init.d/oracledb_ORCLCDB-18c script, simply check your /etc/hosts file, add the IP, fully qualified hostname and short name of your server, and re-execute the /etc/init.d/oracledb_ORCLCDB-18c script:<br /><pre class="brush:bash">/etc/init.d/oracledb_ORCLCDB-18c configure<br />Configuring Oracle Database ORCLCDB.<br />[FATAL] [DBT-06103] The port (1,521) is already in use.<br /> ACTION: Specify a free port.<br /></pre><br /><pre class="brush:bash">/etc/init.d/oracledb_ORCLCDB-18c configure<br />Configuring Oracle Database ORCLCDB.<br />Prepare for db operation<br />8% complete<br />Copying database files<br />31% complete<br />Creating and starting Oracle instance<br />32% complete<br />36% complete<br />40% complete<br />43% complete<br />46% complete<br />Completing Database Creation<br />51% complete<br />54% complete<br />Creating Pluggable Databases<br />58% complete<br />77% complete<br />Executing Post Configuration Actions<br />100% complete<br />Database creation complete. For details check the logfiles at:<br /> /opt/oracle/cfgtoollogs/dbca/ORCLCDB.<br />Database Information:<br />Global Database Name:ORCLCDB<br />System Identifier(SID):ORCLCDB<br />Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details.<br /><br />Database configuration completed successfully. 
The passwords were auto generated, you must change them by connecting to the database using 'sqlplus / as sysdba' as the oracle user.<br /></pre><br />And we are nearly done :-)<br /><br />Switch in your shell to the oracle user:<br /><br /><pre class="brush:bash">su - oracle<br />cd /opt/oracle<br />export ORACLE_BASE=`pwd`<br />cd product/18c/dbhome_1<br />export ORACLE_HOME=`pwd`<br />export PATH=$ORACLE_HOME/bin:$PATH<br />export ORACLE_SID=ORCLCDB<br />sqlplus / as sysdba<br />SQL*Plus: Release 18.0.0.0.0 - Production on Fri Oct 19 13:54:58 2018<br />Version 18.3.0.0.0<br /><br />Copyright (c) 1982, 2018, Oracle. All rights reserved.<br /><br /><br />Connected to:<br />Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production<br />Version 18.3.0.0.0<br /><br />SQL&gt; col name format a30<br />SQL&gt; select con_id, name, open_mode from v$pdbs;<br /><br /> CON_ID NAME OPEN_MODE<br />---------- ------------------------------ ----------<br /> 2 PDB$SEED READ ONLY<br /> 3 ORCLPDB1 READ WRITE<br /><br /></pre><br />Now you only need to change the passwords of the database users, e.g. SYS, SYSTEM and so on.<br /><br />It's really a quick and fast way to install and configure an Oracle 18c (18.3) release on your server. What I personally don't like is the OFA (Optimal Flexible Architecture) layout used within the RPM: everything goes under /opt ... 
:-(<br /><br /><br /> Dirk Nachbar tag:blogger.com,1999:blog-4344684978957885806.post-5314161828443586940 Fri Oct 19 2018 08:03:00 GMT-0400 (EDT) Oracle 18c RPM for Linux available http://dirknachbar.blogspot.com/2018/10/oracle-18c-rpm-for-linux-available.html Since last night, the RPM for Oracle Database 18c (18.3) for Linux x86-64 is available for download in Oracle Technology Network<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-B14hSilPg5s/W8l7nNGgUZI/AAAAAAAAA3c/j2HmRnnJEmkT0XSX6fe47lZKI5jC8qWxACLcBGAs/s1600/18c_RPM_Download.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="670" data-original-width="1292" height="330" src="https://2.bp.blogspot.com/-B14hSilPg5s/W8l7nNGgUZI/AAAAAAAAA3c/j2HmRnnJEmkT0XSX6fe47lZKI5jC8qWxACLcBGAs/s640/18c_RPM_Download.png" width="640" /></a></div><br /><br />The RPM can be downloaded under following link:&nbsp;<a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html" target="_blank">https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html</a><br /><br />The corresponding documentation can be found under:&nbsp;<a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/ladbi/automatically-configuring-oracle-linux-with-oracle-preinstallation-rpm.html#GUID-22846194-58EF-4552-AAC3-6F6D0A1DF794" target="_blank">https://docs.oracle.com/en/database/oracle/oracle-database/18/ladbi/automatically-configuring-oracle-linux-with-oracle-preinstallation-rpm.html#GUID-22846194-58EF-4552-AAC3-6F6D0A1DF794</a><br /><br /><br /> Dirk Nachbar tag:blogger.com,1999:blog-4344684978957885806.post-8512269083145210417 Fri Oct 19 2018 02:40:00 GMT-0400 (EDT) Oracle Cloud Infrastructure CLI Scripts for preparing for OKE Cluster Provisioning 
https://technology.amis.nl/2018/10/19/oracle-cloud-infrastructure-cli-scripts-for-preparing-for-oke-cluster-provisioning/ <p>In order to provision a Kubernetes cluster on Oracle Cloud Infrastructure, you need to prepare the OCI tenancy and create a compartment with appropriate network configuration and associated resources. All setup can be performed through the console &#8211; but this is quite tedious, error-prone and outright stupid if you have to go through these steps more than once. I should know &#8211; I went through them multiple times.</p> <p>In a <a href="https://technology.amis.nl/2018/10/14/get-going-quickly-with-command-line-interface-for-oracle-cloud-infrastructure-using-docker-container/" target="_blank" rel="noopener">previous article</a> I described how easy it is to get going with the OCI CLI (using a Docker container) and in this article I will share the OCI CLI statements that should be executed in order to fully prepare the OCI tenancy for subsequently provisioning the OKE Cluster. (OKE = Oracle Kubernetes Engine). The article &#8220;<a href="https://technology.amis.nl/2018/10/16/create-oke-kubernetes-cluster-on-oracle-cloud-infrastructure-including-service-request-to-increase-limit/" target="_blank" rel="noopener">Create OKE Kubernetes Cluster on Oracle Cloud Infrastructure</a>&#8221; will subsequently take you through the steps of creating a K8S instance leveraging the artifacts created in this article.</p> <p>Assuming you have the OCI CLI configured with a user with the right privileges connecting to the target OCI tenancy, below you will find the steps to perform through the OCI CLI to create all artifacts that are required before an OKE Cluster can be created. 
After providing the step-by-step commands, I will also show you the beginnings of a shell script that executes all individual steps automatically, saving quite a bit of manual work.</p> <h3>Step-by-Step Commands</h3> <p>View the code on <a href="https://gist.github.com/lucasjellema/5b5ff3bf295af40eda87bd38ce9c5f0f">Gist</a>.</p> <pre></pre> <p>At this point, all resources are created and you can attempt the provisioning of the OKE Cluster, either through the Console, the CLI or the REST API.</p> <h3>(beginning of a) Shell Script for Provisioning the Required OCI resources for an OKE Cluster Instance</h3> <p>Executing all these steps manually and constantly exporting environment variables is still a lot of work. The next level of automation is to take these individual steps and create a shell script out of them. Below you will find that script &#8211; or at least the beginning of one, as I have not yet had the time to complete it.</p> <p>View the code on <a href="https://gist.github.com/lucasjellema/d8dc67cf2c78a3fb40e10c17a808f892">Gist</a>.</p> <pre></pre> <h2>Resources</h2> <p>Blog article: Create OKE Kubernetes Cluster on Oracle Cloud Infrastructure – including Service Request to increase limit  <a title="https://technology.amis.nl/2018/10/16/create-oke-kubernetes-cluster-on-oracle-cloud-infrastructure-including-service-request-to-increase-limit/" href="https://technology.amis.nl/2018/10/16/create-oke-kubernetes-cluster-on-oracle-cloud-infrastructure-including-service-request-to-increase-limit/">https://technology.amis.nl/2018/10/16/create-oke-kubernetes-cluster-on-oracle-cloud-infrastructure-including-service-request-to-increase-limit/</a></p> <p>Blog article: Running OCI CLI using Docker container &#8211; <a title="https://technology.amis.nl/2018/10/14/get-going-quickly-with-command-line-interface-for-oracle-cloud-infrastructure-using-docker-container/" 
href="https://technology.amis.nl/2018/10/14/get-going-quickly-with-command-line-interface-for-oracle-cloud-infrastructure-using-docker-container/">https://technology.amis.nl/2018/10/14/get-going-quickly-with-command-line-interface-for-oracle-cloud-infrastructure-using-docker-container/</a></p> <p>First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service &#8211; <a title="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/" href="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/">https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/</a></p> <p>OCI Docs Preparing for Container Engine for Kubernetes- <a title="https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengprerequisites.htm?tocpath=Services%7CContainer%20Engine%7CPreparing%20for%20Container%20Engine%20for%20Kubernetes%7C_____0" href="https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengprerequisites.htm?tocpath=Services%7CContainer%20Engine%7CPreparing%20for%20Container%20Engine%20for%20Kubernetes%7C_____0">https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengprerequisites.htm?tocpath=Services%7CContainer%20Engine%7CPreparing%20for%20Container%20Engine%20for%20Kubernetes%7C_____0</a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/19/oracle-cloud-infrastructure-cli-scripts-for-preparing-for-oke-cluster-provisioning/">Oracle Cloud Infrastructure CLI Scripts for preparing for OKE Cluster Provisioning</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50128 Thu Oct 18 2018 19:55:16 GMT-0400 (EDT) LEAP#425 Look Inside a 7-segment with the Boldport 3x7 
https://blog.tardate.com/2018/10/leap425-look-inside-a-7-segment-with-the-boldport-3x7.html <p>A lovely PCB and classic Boldport instructions(!):</p> <blockquote> <p>This project is a challenge to construct.</p> </blockquote> <p>The Boldport 3x7 is essentially a 3-digit common-cathode 7-segment display, rendered with discrete components.</p> <p>Since it’s a chance to look inside the workings of a 7-segment display unit, I decided to modify my build to flip the lid and expose the internal wiring for inspection. Now it’s built; more projects to follow!</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/3x7">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/BoldportClub/3x7"><img src="https://leap.tardate.com/BoldportClub/3x7/assets/3x7_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/10/leap425-look-inside-a-7-segment-with-the-boldport-3x7.html Thu Oct 18 2018 07:54:56 GMT-0400 (EDT) Exadata Online Training https://gavinsoorma.com/2018/10/exadata-online-training/ <p>The <strong>fourth edition</strong> of the highly popular &#8220;<strong>Oracle Exadata Essentials for Oracle DBA&#8217;s</strong>&#8221; online training course will be commencing <strong>Sunday 11th November</strong>.</p> <p>This <strong>hands-on training course</strong> will teach you how to install and configure an Exadata Storage Server Cell on your own individual Oracle Virtual Box platform as well as prepare you for the Oracle Certified Expert, Oracle Exadata X5 Administrator exam (1Z0-070).</p> <p>The classes will be from <strong>10.00 AM US EST till 2.00 PM, and entire session recordings are available</strong> in case a session is missed as well as for future reference.</p> <p>The cost of the <strong>5-week online hands-on training is $699.00</strong>, and the course curriculum is based on the Exadata 
Database Machine: 12c Administration Workshop course offered by Oracle University which costs over USD $5000!</p> <p>Book your seat for this training course via the registration link below:</p> <p><a href="https://attendee.gotowebinar.com/register/7797168410852492802"> Register for Exadata Essentials &#8230;</a></p> <p>In addition to the topics listed below, attendees will learn how to use CELLCLI to create and manage cell disks, grid disks and flash disks as well as how to configure alerts and monitoring of storage cells on their own individual Exadata Storage Server environments.</p> <p>• Install Exadata Storage Server software and create storage cells on a VirtualBox platform<br /> • Exadata Database Machine Components &amp; Architecture<br /> • Exadata Database Machine Networking<br /> • Smart Scans and Cell Offloading<br /> • Storage Indexes<br /> • Smart Flash Cache and Flash Logging<br /> • Exadata Hybrid Columnar Compression<br /> • I/O Resource Management (IORM)<br /> • Exadata Storage Server Configuration<br /> • Database File System<br /> • Migration to Exadata platform<br /> • Storage Server metrics and alerts<br /> • Monitoring Exadata Database Machine using OEM<br /> • Applying a patch to an Exadata Database Machine<br /> • Automatic Support Ecosystem<br /> • Exadata Cloud Service overview</p> <p>&#8230;. 
and more!</p> <p>Here is some of the feedback received from the attendees of earlier training sessions:</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/10/feedback.png"><img class="aligncenter size-full wp-image-8325" src="https://gavinsoorma.com/wp-content/uploads/2018/10/feedback.png" alt="" width="850" height="352" srcset="https://gavinsoorma.com/wp-content/uploads/2018/10/feedback.png 850w, https://gavinsoorma.com/wp-content/uploads/2018/10/feedback-300x124.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/10/feedback-768x318.png 768w" sizes="(max-width: 850px) 100vw, 850px" /></a></p> Gavin Soorma https://gavinsoorma.com/?p=8324 Wed Oct 17 2018 01:27:21 GMT-0400 (EDT) Create OKE Kubernetes Cluster on Oracle Cloud Infrastructure – including Service Request to increase limit https://technology.amis.nl/2018/10/16/create-oke-kubernetes-cluster-on-oracle-cloud-infrastructure-including-service-request-to-increase-limit/ <p>Anyone with a trial account for Oracle Cloud can use Oracle Cloud Infrastructure (OCI) to get herself a three-node Kubernetes Cluster instance, running on Oracle&#8217;s managed Kubernetes Engine Cloud Service called OKE. 
Unfortunately, the default resource limits on the trial account are such that the creation of the cluster will initially fail with &#8220;Cluster Create Failed: LimitExceeded: The cluster limit for this tenancy has been exceeded.&#8221;; only after submitting a Service Request with Oracle Support &#8211; which takes between 12 and 96 hours to be processed &#8211; will the account be extended to allow the creation of the K8S cluster.</p> <p>In this article, I will show the process of submitting that support request &#8211; to assure you that this is the proper procedure (as strange as it seems) and to show how it actually works.</p> <p>Note: I have written before about how to get going with OKE in my article  <a title="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/" href="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/">First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service</a>. Most of that article is still valid &#8211; but the additional step of the Service Request is now added. The Oracle Tutorial <a title="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-full/index.html" href="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-full/index.html">Creating a Cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes and Deploying a Sample App</a> also shows most of the steps &#8211; but not the submission of the Service Request.</p> <p>I will shortly publish an article on how to prepare your OCI tenancy for a Kubernetes cluster instance using the OCI CLI and a series of scripted steps &#8211; which makes life so much easier than having to go through all the manual steps described in these two resources. 
Until then, you will have to manually create the Compartment, a User, the VCN, five subnets, two route tables, an internet gateway, two security lists, DHCP options. Once that is done, you can create the cluster &#8211; or at least make that first attempt.</p> <p>From the menu shown on the left side of the screen (you may have to decrease the font size to actually see all menu options) select Developer Services | Container Clusters (OKE). I do not understand why the name Kubernetes is not used &#8211; container cluster sounds a bit vague.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-7.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-7.png" alt="image" width="247" height="338" border="0" /></a></p> <p>In the page that appears, click on the Create Cluster button.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-8.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-8.png" alt="image" width="819" height="338" border="0" /></a></p> <p>You can fill out the cluster details:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-9.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-9.png" alt="image" width="558" height="338" border="0" /></a></p> <p>And more details:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-10.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-10.png" alt="image" width="566" height="509" border="0" /></a></p> <p>And finally press the Create button. 
But now you are in for an unpleasant surprise if you are in a fresh Oracle Cloud trial account tenancy in which you may not even have touched any resource at all:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-11.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-11.png" alt="image" width="867" height="228" border="0" /></a></p> <p>Bang, your cluster creation failed, because you exceeded some limit &#8211; it is not clear which one. Oddly, even in a trial account where nothing has been used yet, this very first action ends in failure. It feels like the end of the road for my OKE experiments.</p> <p>Fortunately, it does not have to be. By submitting an SR (service request) with Oracle Cloud Support, I can get this mysterious limit removed or at least extended and then continue on my journey.</p> <p>As instructed in the documentation (<a href="https://docs.cloud.oracle.com/iaas/Content/General/Concepts/servicelimits.htm?TocPath=Services|Service%20Essentials|_____5">Service Limits</a>) I log in to Oracle Support at <a title="http://support.oracle.com/" href="http://support.oracle.com/">http://support.oracle.com/</a> with an account associated with the same email address as my Oracle Cloud user (this is important, because otherwise I cannot create the required service request type):</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-12.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-12.png" alt="image" width="761" height="338" border="0" /></a></p> <p>And switch to Oracle Cloud Support.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-13.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" 
src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-13.png" alt="image" width="791" height="338" border="0" /></a></p> <p>Here I click on Create Service Request.</p> <p>The Service Type is Oracle Cloud Infrastructure, the problem type is Limit Increase. I provide the tenancy id and the names of the availability domains. I presume that the service/resource to be increased is the Compute Service.</p> <p>As Problem Summary and Description I have entered:</p> <blockquote><p><em>Problem Summary<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;<br /> I want to create OKE instance on OCI; failed with: Cluster Create Failed: LimitExceeded:<br /> Problem Description<br /> &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;<br /> In OCI Console I attempted to create an OKE Cluster &#8211; after configuring all network resources. Creation failed with:<br /> Cluster Create Failed: LimitExceeded: The cluster limit for this tenancy has been exceeded.<br /> This happened in a brand new trial account where no resources had been used at all<br /> Opc-Request-Id: a057c2a5-4808-4fc8-914b-42c3c3d5…</em></p></blockquote> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-14.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-14.png" alt="image" width="1186" height="560" border="0" /></a></p> <p>After pressing Next I get to the Submission page where I press Submit. 
The SR is now submitted.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-15.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-15.png" alt="image" width="714" height="338" border="0" /></a></p> <p>After a short time, I receive a notification from Oracle Support &#8211; and now the wait begins.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-16.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-16.png" alt="image" width="1036" height="487" border="0" /></a></p> <p>After 20 hours, I ask for an update, to find out if the limit increase will take much longer. Whether it has helped or not I do not know, but a little later, I received another notification email</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-17.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-17.png" alt="image" width="867" height="299" border="0" /></a></p> <p>and navigating the link it contained took me to the updated service request:<a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-18.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-18.png" alt="image" width="496" height="338" border="0" /></a></p> <p>Apparently the magic limit was extended and I should now be good to go with my OKE instance.</p> <p>I have tried to find out which limit was extended &#8211; in the Service Limits page:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-19.png"><img style="margin: 0px auto; float: none; 
display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-19.png" alt="image" width="482" height="237" border="0" /></a></p> <p>And it would appear that the number of OCPUS per AD has been increased:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML76e4ecb8.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="SNAGHTML76e4ecb8" src="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML76e4ecb8_thumb.png" alt="SNAGHTML76e4ecb8" width="867" height="129" border="0" /></a></p> <p>&nbsp;</p> <p>Let&#8217;s now try to create that OKE instance:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-20.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-20.png" alt="image" width="497" height="322" border="0" /></a></p> <p>&nbsp;</p> <p>Create Cluster:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-21.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-21.png" alt="image" width="818" height="338" border="0" /></a></p> <p>Provide details:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-22.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-22.png" alt="image" width="748" height="702" border="0" /></a></p> <p>and some more &#8211; including a node pool consisting of the three worker nodes:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-23.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" 
src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-23.png" alt="image" width="750" height="706" border="0" /></a></p> <p>And finally, press the Create button again:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-24.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-24.png" alt="image" width="762" height="692" border="0" /></a></p> <p>And this time &#8211; no error message. Sweet victory.</p> <p>The creation is in progress:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-25.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-25.png" alt="image" width="1112" height="358" border="0" /></a></p> <p>And after a fairly short while (two minutes tops), the creation is done:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-26.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-26.png" alt="image" width="1122" height="341" border="0" /></a></p> <p>Drill down to inspect the cluster details:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-27.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-27.png" alt="image" width="726" height="338" border="0" /></a></p> <p>At this point, I can start using this cluster instance, as described in this article: <a title="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/" 
href="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/">First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service</a>, typically by downloading a kubeconfig file and using kubectl.</p> <p>Note: After creating the OKE instance, the Service Limits page gives the following information:<a href="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML76e66b5a.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="SNAGHTML76e66b5a" src="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML76e66b5a_thumb.png" alt="SNAGHTML76e66b5a" width="867" height="148" border="0" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <h3>Generate kubeconfig file</h3> <p>Assuming access to the OCI CLI tool, we can continue to generate the kubeconfig file. The OCI Console contains the page with details on the k8s-1 cluster. Press the <em>Access Kubeconfig</em> button. A popup opens, with the instructions to generate the kubeconfig file – unfortunately not yet with an option to simply download it.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-28.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-28.png" alt="image" width="867" height="330" border="0" /></a></p> <p>Execute these three steps on the node from which I will run kubectl:</p> <ol> <li>mkdir -p $HOME/.kube</li> <li>cd $HOME/.kube</li> <li>oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.iad.aaaaaaaaae3dmnrsmm4wgodfmvs --file - &gt; kubeconfig</li> </ol> <p>Now I have the desired kubeconfig file.</p> <p>Set the environment variable KUBECONFIG to refer to this file:</p> <p>export KUBECONFIG=kubeconfig</p> <p>And start interacting with kubectl:</p> <p><a 
href="https://technology.amis.nl/wp-content/uploads/2018/10/image-29.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-29.png" alt="image" width="867" height="97" border="0" /></a></p> <p>I have also copied the kubeconfig file to my Windows laptop and started the proxy:</p> <p>set KUBECONFIG=kubeconfig</p> <p>kubectl proxy</p> <p>Using the URL <a title="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login" href="http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login">http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login</a> I can now access the dashboard for my fresh OKE instance:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-30.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-30.png" alt="image" width="725" height="338" border="0" /></a></p> <p>&nbsp;</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-29.png"> </a></p> <h2>Resources</h2> <p><a title="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/" href="https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/">First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service</a></p> <p>Documentation on Service Limits and Service Requests: <a title="https://docs.cloud.oracle.com/iaas/Content/General/Concepts/servicelimits.htm?TocPath=Services|Service%20Essentials|_____5" 
href="https://docs.cloud.oracle.com/iaas/Content/General/Concepts/servicelimits.htm?TocPath=Services|Service%20Essentials|_____5">https://docs.cloud.oracle.com/iaas/Content/General/Concepts/servicelimits.htm?TocPath=Services|Service%20Essentials|_____5</a></p> <p>Oracle Tutorial <a title="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-full/index.html" href="https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/oke-full/index.html">Creating a Cluster with Oracle Cloud Infrastructure Container Engine for Kubernetes and Deploying a Sample App</a></p> <p>Instructions on setting up the Ingress Controller on OKE: <a title="https://github.com/luisw19/orders-microservice-soaring-clouds-sequel/tree/master/oke-ingress" href="https://github.com/luisw19/orders-microservice-soaring-clouds-sequel/tree/master/oke-ingress">https://github.com/luisw19/orders-microservice-soaring-clouds-sequel/tree/master/oke-ingress</a></p> <p>Blog article: <a href="https://technology.amis.nl/2018/10/14/get-going-quickly-with-command-line-interface-for-oracle-cloud-infrastructure-using-docker-container/">Get going quickly with Command Line Interface for Oracle Cloud Infrastructure using Docker container</a></p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML76e66b5a.png"> </a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/16/create-oke-kubernetes-cluster-on-oracle-cloud-infrastructure-including-service-request-to-increase-limit/">Create OKE Kubernetes Cluster on Oracle Cloud Infrastructure &#8211; including Service Request to increase limit</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50101 Tue Oct 16 2018 03:27:40 GMT-0400 (EDT) Ignorance is Bliss, Until it Isn’t https://dbakevlar.com/2018/10/ignorance-is-bliss-until-it-isnt/ <p>My blog was hacked last week, deliberately and maliciously.  
I was hacked because of my lacking knowledge in the area of website management, and saved by my control issues as a Database Administrator.  It was a valuable lesson to learn, both about being a woman in technology and about new technical skills.</p> <p><a href="https://dbakevlar.com/2018/10/ignorance-is-bliss-until-it-isnt/angry/" rel="attachment wp-att-8194"><img class="alignnone size-full wp-image-8194" src="https://i2.wp.com/dbakevlar.com/wp-content/uploads/2018/10/angry.gif?resize=260%2C208&#038;ssl=1" alt="" width="260" height="208" data-recalc-dims="1" /></a></p> <p>If you ask most women, they&#8217;ll most likely say that they haven&#8217;t had too many difficulties with men harassing them, especially if they&#8217;re over 40.  In truth, it&#8217;s not that they haven&#8217;t; we often become so immune to it that only when other women recount their own stories do we suddenly recall our own.  We&#8217;ve also learned that when we bring it up, someone will most likely ask us if we&#8217;re over-reacting or if we could have misunderstood the situation, so we&#8217;re unsure why we should bother with it.</p> <p>Due to this, I under-estimated the anger of someone who was causing difficulty for a woman I was mentoring and had advised.  When the woman, we&#8217;ll call her &#8220;Sarah&#8221;, first started her recent job, &#8220;John&#8221; was really nice to her.  He went out of his way to help her get to know the lay of the land at the company and was the first to answer questions.  She admitted she isn&#8217;t sure how she would have gotten along without him.  After six months on the job, she started dating the man she&#8217;s about to marry.  As soon as they started dating, John began to behave differently around Sarah.  He went from brushing her off to speaking in a condescending manner in meetings, but she thought she might be over-reacting.  
She finally asked to meet with him and he admitted he was hurt that she was dating someone and thought they&#8217;d had a &#8220;connection&#8221;.  She carefully and clearly set the boundary that they were coworkers &#8211; ONLY.  The negative interactions escalated until she reached out to me asking for advice on finding a new position.  She really loved her job and had been there for over a year at this point.  I worked with her, advising her how to track the issues in writing and report them to HR.  Due to this, a number of meetings occurred to address the behavior with John.  Unfortunately, he wasn&#8217;t willing to address his behavior with Sarah and he was terminated.  As the conversations were extensive, Sarah did let it slip that I was the one assisting her in working through the challenge, and John soon started blaming me for the fact that Sarah hadn&#8217;t done what he wanted.</p> <p>I didn&#8217;t put two and two together when I received my first email; honestly, I get weird emails from time to time.  I simply deleted it.  The next one I sent to spam and ignored.  Seriously, guys in tech, these idiots are why you can&#8217;t have a nice conversation with women everywhere.  We never know when one of these freaks is going to come out of the woodwork and how far they&#8217;re going to go off the deep end.  If you get frustrated that women won&#8217;t have a simple conversation with you in public, don&#8217;t get upset with her, but with these guys who go to ridiculous lengths to get vengeance and justify their behavior because they didn&#8217;t get what they wanted.</p> <p>This guy had just enough skill to discover I had an out-of-date PHP version on my blog.  He used this to get into my blog and hack it with malware.  The old PHP version is my fault, down to my lacking skills in website administration.  My saving grace is that I am a damn good DBA and had multiple backup tools on dbakevlar.com because&#8230;well, I&#8217;m a DBA.  
The backup software that had saved me through previous challenges, Site Backup Pro, failed both me and my website provider in this scenario.  After the full, fresh install of WordPress, it wasn&#8217;t able to recover to a working state.  Luckily, my second daily backup, Updraft Pro, DID work and I was able to recover everything back to before the damage.</p> <p>What John didn&#8217;t have were skills superior enough to cover his tracks.  After my provider sifted through all the varying IP addresses from my travels that had logged into the site, they were able to pinpoint John&#8217;s, and after two days they had the evidence they needed to go after him for putting people at risk online.  John is out of a job at Sarah&#8217;s company and now in trouble with my provider, Bluehost.</p> <p>Lessons learned?  I really needed to up my website administration skills.  Check.</p> <p>I learned a lot this last week about managing my own website, realized that after eight years I had a lot of garbage that needed to be cleaned up on my site, and proceeded to do so.  I also lost all of danceswithwinnebagos.com, as it didn&#8217;t have the Updraft backup configured, only the Site Backup Pro, and I may have to build that one back up from scratch&#8230; <img src="https://s.w.org/images/core/emoji/11/72x72/1f641.png" alt="🙁" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>Second lesson?  I may need to take emails, online threats and such more seriously.  Tim and I discussed how many I receive on a regular basis and he was taken aback by the reality of it.  I don&#8217;t feel that vulnerable personally, but my online content and my connections are vulnerable. 
As often as some are quick to troll or throw shade, they need to be reviewed individually and not just tossed away as empty threats.</p><hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/10/ignorance-is-bliss-until-it-isnt/">Ignorance is Bliss, Until it Isn't</a>], All Right Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8193 Mon Oct 15 2018 16:24:54 GMT-0400 (EDT) Use Azure CLI…I Beg You. https://dbakevlar.com/2018/10/useazurecliibegyou/ <p>FYI &#8211; this was the one post I had to restore manually after my blog was hacked a week back.  It&#8217;s intact, but the post may appear a bit different than before, as I copied and pasted from the emailed version that occurs as part of my RSS feed.  
Enjoy!</p> <p><span data-originalcomputedfontsize="16" data-removefontsize="true">Azure CLI made me feel right at home after working at Oracle with the Enterprise Manager CLI (EMCLI). The syntax is simple and powerful, and it provides an interface to manage Azure infrastructure from the command line, scripting out complex processing that would otherwise involve a lot of time in the user interface. </span></p> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">I&#8217;d love to start promoting it to more DBAs and infrastructure folks &#8211; not just for creating databases and a few servers/containers/VMs, but for the entire STACK.  To that end, there&#8217;s going to be a lot of follow-up blog <span class="term-highlighted">posts</span> on this one, but let&#8217;s just start with a few tips and tricks, along with a quick 101.</p> <h3 data-originalcomputedfontsize="25.600000381469727" data-originalfontsize="1.6em">1. Download the Azure CLI Client</h3> <p data-originalcomputedfontsize="14" data-originalfontsize="14px"><a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest" target="_blank" rel="noopener" data-originalcomputedfontsize="14" data-removefontsize="true" data-saferedirecturl="https://www.google.com/url?q=https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view%3Dazure-cli-latest&amp;source=gmail&amp;ust=1538845932826000&amp;usg=AFQjCNG6Xzjz0A1paoT_5_UN9y203QhsXQ">Download Azure CLI</a> to your desktop &#8211; it&#8217;s really easy.  Just follow the defaults and install it on your desktop.  There&#8217;s no need to restart and it&#8217;s readily available from the command prompt (cmd).</p> <h3 data-originalcomputedfontsize="25.600000381469727" data-originalfontsize="1.6em">2. Get a Real Script Editor</h3> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">If you think you&#8217;ll get by with Notepad for your script editor, don&#8217;t even try it.  
Get a proper text or script editor that tracks lines of code, can handle multiple scripting formats, etc.  If you need a suggestion, I am using <a href="https://www.sublimetext.com/3" target="_blank" rel="noopener" data-originalcomputedfontsize="14" data-removefontsize="true" data-saferedirecturl="https://www.google.com/url?q=https://www.sublimetext.com/3&amp;source=gmail&amp;ust=1538845932826000&amp;usg=AFQjCNEpZP1memyqLq43LtAhgN1KftOQ3g">Sublime Text</a> and it does the trick.</p> <h3 data-originalcomputedfontsize="25.600000381469727" data-originalfontsize="1.6em">3. Test Your Installation</h3> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">Logging into Azure is really easy if you&#8217;re using Azure Active Directory.  Just open up a Command Prompt (cmd from the Start menu) and type in the following:</p> <pre data-originalcomputedfontsize="16" data-removefontsize="true">az login</pre> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">An authorization window will prompt you to choose which of your AD accounts you&#8217;d like to use for Azure, and then it will authorize and proceed.  You&#8217;ll see the following in the command prompt window once it&#8217;s finished:</p> <pre data-originalcomputedfontsize="16" data-removefontsize="true">"You have logged in. Now let us find all the subscriptions to which you have access..."</pre> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">It will also show the subscriptions you have access to in Azure and then return to the prompt.  Congratulations, you&#8217;re now ready to deploy via the CLI!</p> <h3 data-originalcomputedfontsize="25.600000381469727" data-originalfontsize="1.6em">4. 
Perform a Few Test Deployments</h3> <p><span data-originalcomputedfontsize="14" data-removefontsize="true">Then </span><a href="https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest" target="_blank" rel="noopener" data-originalcomputedfontsize="14" data-removefontsize="true" data-saferedirecturl="https://www.google.com/url?q=https://docs.microsoft.com/en-us/cli/azure/?view%3Dazure-cli-latest&amp;source=gmail&amp;ust=1538845932826000&amp;usg=AFQjCNGQjv_MYwbQbpgVSlON9IA8SPWmzg">get started with it</a><span data-originalcomputedfontsize="14" data-removefontsize="true"> by deploying a few test VMs, SQL databases and maybe a container or two.</span></p> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">You can deploy a VM pretty easily with just a bit of information:</p> <pre data-originalcomputedfontsize="16" data-removefontsize="true">C:\&gt;az vm create -n &lt;name&gt; -g &lt;name of resource group&gt; --image UbuntuLTS --generate-ssh-keys
SSH key files 'C:\Users\xxxxxxxxxxxx\.ssh\id_rsa' and 'C:\Users\xxxxxxxxxxxxxxxx\.ssh\id_rsa.pub' have been generated under ~/.ssh to allow SSH access to the VM. If using machines without permanent storage, back up your keys to a safe location.
 - Running ..

C:\EDU_Docker&gt;az vm list -n &lt;name you gave your vm&gt;

C:\EDU_Docker&gt;az vm delete -n &lt;name&gt; -g &lt;name of resource group&gt; #I like to add the group, too.</pre> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">The CLI will still ask you to verify that you want to delete the resource, but once you confirm, it will remove it and you&#8217;ll be back to clean.</p> <p data-originalcomputedfontsize="14" data-originalfontsize="14px">The more people use the CLI, the more robust it will become, and the more powerful you become as an infrastructure specialist in Azure.  
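The create/list/delete cycle above lends itself to scripting. Below is a minimal sketch using hypothetical resource names; it only composes and prints the commands, so it runs without an Azure subscription (pass the strings to your shell once you are logged in). The `--yes` flag on `az vm delete` suppresses the confirmation prompt.

```shell
# Compose the full VM lifecycle as commands. RG and VM are hypothetical
# names used only for illustration.
RG="tips-rg"   # hypothetical resource group
VM="tips-vm"   # hypothetical VM name

create_cmd="az vm create -n $VM -g $RG --image UbuntuLTS --generate-ssh-keys"
list_cmd="az vm list -g $RG --output table"
delete_cmd="az vm delete -n $VM -g $RG --yes"   # --yes skips the confirmation prompt

# Print each composed command on its own line.
printf '%s\n' "$create_cmd" "$list_cmd" "$delete_cmd"
```

Wrapping the commands in variables like this also makes it easy to log exactly what a deployment script ran.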
Come on, help a girl out here-  I can&#8217;t blog about this all on my own&#8230; <img class="m_-2244078437660145056wp-smiley" src="https://ci6.googleusercontent.com/proxy/vXji51LRcYPolRFZC93AUgyH1am76ygCe8rXELfjFyGdF-zNOxe1XzfkbUWBs7DD06AjmgrgEyQIJruQpWMk6vDrucyyfkn4znhFk4tQj7_SFMNGDneMarFHiIHmSQWjN1dZvtA=s0-d-e1-ft#https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" alt="&#x1f642;" /></p> <p>Tags:&nbsp;&nbsp;<a href="https://dbakevlar.com/tag/azure/" rel="tag">azure</a>, <a href="https://dbakevlar.com/tag/azure-cli/" rel="tag">Azure CLI</a>, <a href="https://dbakevlar.com/tag/devops/" rel="tag">DevOps</a>, <a href="https://dbakevlar.com/tag/microsoft/" rel="tag">Microsoft</a></p> <hr style="color:#EBEBEB" /><small>Copyright © <a href="https://dbakevlar.com">DBAKevlar</a> [<a href="https://dbakevlar.com/2018/10/useazurecliibegyou/">Use Azure CLI...I Beg You.</a>], All Rights Reserved. 2018.</small><br> dbakevlar https://dbakevlar.com/?p=8189 Mon Oct 15 2018 15:47:27 GMT-0400 (EDT) Privileges on a view https://laurentschneider.com/wordpress/2018/10/privileges-on-a-view.html <p>Granting too many privileges on a view could be disastrous. A view is often used as a security element; you grant access to only a subset of columns and rows to one user. Mostly only SELECT. If you want to grant update on only some rows, the security could be enhanced with the WITH CHECK OPTION. </p> <p>But let&#8217;s talk about granting too many privileges.<br /> <font color="red">disclaimer: it may damage your database forever</font><br /> <pre><code>
SQL&gt; create or replace view v as select trunc(sysdate) today from dual;
View created.
SQL&gt; create public synonym v for v;
Synonym created.
SQL&gt; grant all on v to public;
Grant succeeded.
SQL&gt; conn u/***@db01
Connected.
SQL&gt; select * from v;
TODAY
----------
2018-10-15
SQL&gt; select * from v;
TODAY
----------
2018-10-15
SQL&gt; delete from v;
1 row deleted.
SQL&gt; select * from v;
TODAY
----------
2018-10-15
SQL&gt; select count(*) from v;
  COUNT(*)
----------
         1
SQL&gt; select count(dummy) from dual;
COUNT(DUMMY)
------------
           0
SQL&gt; rollback;
Rollback complete.
</code></pre></p> <p>Wait&#8230; what happened ???<br /> <pre><code>
SQL&gt; delete from v;
1 row deleted.
SQL&gt; select count(*) from v;
  COUNT(*)
----------
         1
</code></pre></p> <p>This is a biased test, because nobody creates views in the SYS schema and nobody shall ever do GRANT ALL TO PUBLIC. But sometimes, people do. Because of the grant, you have emptied dual. <img src="https://s.w.org/images/core/emoji/11/72x72/1f62e.png" alt="😮" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>The COUNT(*) is a magic thing. select count(*) from dual returns 1. Unless your instance collapses.</p> <p><pre><code>
SQL&gt; delete dual;
1 row deleted.
SQL&gt; alter session set &quot;_fast_dual_enabled&quot;=false;
Session altered.
SQL&gt; select count(*) from dual;
  COUNT(*)
----------
         1
SQL&gt; rollback;
</code></pre></p> <p>One reader once asked for assistance because he tried it and his db was broken. I won&#8217;t help you. Just do it for fun on a database that you can recreate afterwards.</p> <p>Okay, enough fun for today, let&#8217;s see another side effect of excessive rights.<br /> <pre><code>
SQL&gt; create user u identified by ***;
User created.
SQL&gt; create role r;
Role created.
SQL&gt; grant create view, create session to u;
Grant succeeded.
SQL&gt; conn u/***@db01
Connected.
SQL&gt; create view v as select trunc(trunc(sysdate)-.5) yesterday from dual;
View created.
SQL&gt; create role r;
Role created.
SQL&gt; select * from v;
YESTERDAY
----------
2018-10-14
SQL&gt; delete from v;
delete from v
            *
ERROR at line 1:
ORA-01031: insufficient privileges
SQL&gt; grant delete on v to r;
grant delete on v to r
                *
ERROR at line 1:
ORA-01720: grant option does not exist for &#039;SYS.DUAL&#039;
SQL&gt; grant select on v to r;
Grant succeeded.
</code></pre></p> <p>So far so good: I have created a view and granted select only on that view. I cannot delete DUAL. I cannot grant delete.</p> <p>Now learn about this less-known annoyance<br /> <pre><code>
SQL&gt; conn / as sysdba
Connected.
SQL&gt; grant select, update, insert, delete on u.v to r;
Grant succeeded.
</code></pre></p> <p>What? SYS can give access to my view to a role, even if I have no DELETE right on the underlying object?<br /> <pre><code>
SQL&gt; grant create session, r to user2;
Grant succeeded.
SQL&gt; conn user2/***@DB01
Connected.
SQL&gt; select * from u.v;
YESTERDAY
----------
2018-10-14
SQL&gt; delete from v;
delete from v
            *
ERROR at line 1:
ORA-01031: insufficient privileges
</code></pre></p> <p>So far, it didn&#8217;t have many side effects. It is not uncommon to see scripts that automatically generate grants; and it is also not uncommon to see those scripts going doolally. 
</p> <p>But one side effect is preventing future CREATE OR REPLACE statements.<br /> <pre><code>
SQL&gt; create or replace view v as select trunc(trunc(sysdate+6,&#039;YYYY&#039;)+400,&#039;YYYY&#039;)-7 xmas from dual;
create or replace view v as select trunc(trunc(sysdate+6,&#039;YYYY&#039;)+400,&#039;YYYY&#039;)-7 xmas from dual;
                                                                                              *
ERROR at line 1:
ORA-01720: grant option does not exist for &#039;SYS.DUAL&#039;
</code></pre> </p> <p>CREATE OR REPLACE no longer works. You need to revoke the right, either with a DROP VIEW or with</p> <p><pre><code>
SQL&gt; revoke insert, update, delete on v from r;
Revoke succeeded.
SQL&gt; create or replace view v as select trunc(trunc(sysdate+6,&#039;YYYY&#039;)+400,&#039;YYYY&#039;)-7 xmas from dual;
View created.
SQL&gt; select * from v;
XMAS
----------
2018-12-25
</code></pre></p> <p>I&#8217;d recommend against using SYS to grant access to user tables. 
Use the schema owner.</p> Laurent Schneider https://laurentschneider.com/?p=2667 Mon Oct 15 2018 11:47:44 GMT-0400 (EDT) Get going quickly with Command Line Interface for Oracle Cloud Infrastructure using Docker container https://technology.amis.nl/2018/10/14/get-going-quickly-with-command-line-interface-for-oracle-cloud-infrastructure-using-docker-container/ <p><img style="float: right; display: inline;" src="https://www.internet2.edu/media/medialibrary/2016/06/08/oracle-logo.png" alt="Related image" width="404" height="197" align="right" />Oracle Cloud Infrastructure is Oracle&#8217;s second-generation infrastructure as a service offering that supports many components, including compute nodes, networks, storage, Kubernetes clusters and Database as a Service. Oracle Cloud Infrastructure can be administered through a GUI &#8211; a browser based console &#8211; as well as through a REST API and with the OCI Command Line Interface. Oracle offers a Terraform provider that allows automated, scripted provisioning of OCI artefacts.</p> <p>This article describes an easy approach to get going with the Command Line Interface for Oracle Cloud Infrastructure &#8211; using the oci-cli Docker image. 
Using a Docker container image and a simple configuration file, oci commands can be executed without locally having to install and update the OCI Command Line Interface (and the Python runtime environment) itself.</p> <p>These are the steps to get going on a Linux or Mac Host that contains a Docker engine:</p> <ul> <li>create a new user in OCI (or use an existing user) with appropriate privileges; you need the OCID for the user</li> <li>also make sure you have the name of the region and the OCID for the tenancy on OCI</li> <li>execute a docker run command to prepare the OCI CLI configuration file</li> <li>update the user in OCI with the public key created by the OCI CLI setup action</li> <li>edit the .profile to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image</li> </ul> <p>At that point, you can locally run any OCI CLI command against the specified user and tenant &#8211; using nothing but the Docker container that contains the latest version of the OCI CLI and the required runtime dependencies.</p> <p>In more detail, the steps look like this:</p> <h3>Create a new user in OCI</h3> <p>(or use an existing user) with appropriate privileges; you need the OCID for the user</p> <p>You can reuse an existing user or create a fresh one &#8211; which is what I did. 
This step I performed in the OCI Console:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb.png" alt="image" width="714" height="338" border="0" /></a></p> <p>&nbsp;</p> <p>I then added this user to the group Administrators.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-1.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-1.png" alt="image" width="714" height="338" border="0" /></a></p> <p>And I noted the OCID for this user:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-2.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-2.png" alt="image" width="867" height="198" border="0" /></a></p> <p>also make sure you have the name of the region and the OCID for the tenancy on OCI:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-3.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-3.png" alt="image" width="854" height="338" border="0" /></a></p> <h3>Execute a docker run command to prepare the OCI CLI configuration file</h3> <p>On the Docker host machine, create a directory to hold the OCI CLI configuration files. 
These files will be made available to the CLI tool by mounting the directory into the Docker container.</p> <pre class="brush: bash; title: ; notranslate"> mkdir ~/.oci </pre> <p>Run the following Docker command:</p> <pre class="brush: bash; title: ; notranslate"> docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci -it stephenpearson/oci-cli:latest setup config </pre> <p>This starts the OCI CLI container in interactive mode &#8211; with the ~/.oci directory mounted into the container at /root/.oci &#8211; and executes the <em>setup config </em>command on the OCI CLI (see <a title="https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/setup/config.html" href="https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/setup/config.html">https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/cmdref/setup/config.html</a>).</p> <p>This command will start a dialog that results in the OCI Config file being written to /root/.oci inside the container and to ~/.oci on the Docker host. 
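For reference, the file the setup dialog writes is a small INI-style profile. The sketch below writes an example of such a profile with placeholder values (none of the OCIDs or the fingerprint are real) and then checks that the fields the CLI needs are all present, which is a useful sanity check before running any commands:

```shell
# Write a placeholder ~/.oci/config-style profile to a temp directory.
# All values are illustrative only, not real credentials.
ocidir="$(mktemp -d)"
cat > "$ocidir/config" <<'EOF'
[DEFAULT]
user=ocid1.user.oc1..exampleuniqueid
fingerprint=20:3b:97:13:55:1c:5b:0d:d3:37:d8:50:4e:c5:3a:34
key_file=/root/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..exampleuniqueid
region=eu-frankfurt-1
EOF

# Count how many of the required fields are missing.
missing=0
for field in user fingerprint key_file tenancy region; do
  grep -q "^$field=" "$ocidir/config" || missing=$((missing + 1))
done
echo "missing fields: $missing"   # prints: missing fields: 0
```

If any field is missing the CLI will refuse to authenticate, so failing fast here saves a round trip to the service.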
The dialog also results in a private and a public key file in that same directory.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-4.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-4.png" alt="image" width="1119" height="228" border="0" /></a></p> <p>Here is the content of the config file that the dialog has generated on the Docker host:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-5.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-5.png" alt="image" width="867" height="162" border="0" /></a></p> <h3>Update the user in OCI with the public key created by the OCI CLI setup action</h3> <p>The contents of the file that contains the public key &#8211; ~/.oci/oci_api_key_public.pem in this case &#8211; should be configured on the OCI user &#8211; kubie in this case &#8211; as API Key:</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/image-6.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="image" src="https://technology.amis.nl/wp-content/uploads/2018/10/image_thumb-6.png" alt="image" width="606" height="338" border="0" /></a></p> <p>&nbsp;</p> <h3>Create shortcut command for OCI CLI on Docker host</h3> <p>We did not install the OCI CLI on the Docker host &#8211; but we can still make it possible to run the CLI commands as if we did. 
If we edit the .profile file to associate the oci command line instruction on the Docker host with running the OCI CLI Docker image, we get the same experience on the host command line as if we did install the OCI CLI.</p> <p>Edit ~/.profile and add this line:</p> <pre class="brush: bash; title: ; notranslate"> oci() { docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci stephenpearson/oci-cli:latest &quot;$@&quot;; } </pre> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML70b7f88a.png"><img style="margin: 0px auto; float: none; display: block; background-image: none;" title="SNAGHTML70b7f88a" src="https://technology.amis.nl/wp-content/uploads/2018/10/SNAGHTML70b7f88a_thumb.png" alt="SNAGHTML70b7f88a" width="867" height="181" border="0" /></a></p> <p>&nbsp;</p> <p>On the Docker host I can now run OCI CLI commands (they will be sent to the Docker container, which uses the configuration in ~/.oci to connect to the OCI instance).</p> <h3>Run OCI CLI commands on the Host</h3> <p>We are now set to run OCI CLI commands &#8211; even though we did not actually install the OCI CLI and the Python runtime environment.</p> <p>Note: most commands we run will require us to pass the Compartment Id of the OCI Compartment against which we want to perform an action. 
It is convenient to set an environment variable with the Compartment OCID value and then refer to the variable in all CLI commands.</p> <p>For example:</p> <pre class="brush: bash; title: ; notranslate"> export COMPARTMENT_ID=ocid1.tenancy.oc1..aaaaaaaaot3ihdt </pre> <p>Now to list all policies in this compartment:</p> <pre class="brush: bash; title: ; notranslate"> oci iam policy list --compartment-id $COMPARTMENT_ID --all </pre> <p>And to create a new policy &#8211; one that I need in order to provision a Kubernetes cluster:</p> <pre class="brush: bash; title: ; notranslate"> oci iam policy create  --name oke-service --compartment-id $COMPARTMENT_ID  --statements '[ &quot;allow service OKE to manage all-resources in tenancy&quot;]' --description 'policy for granting rights on OKE to manage cluster resources' </pre> <p>Or to create a new compartment:</p> <pre class="brush: bash; title: ; notranslate"> oci iam compartment create --compartment-id $COMPARTMENT_ID  --name oke-compartment --description &quot;Compartment for OCI resources created for OKE Cluster&quot;</pre> <p>From here on, it is just regular OCI CLI work, just as if it had been installed locally. 
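Before pointing the wrapper at a real tenancy, its argument forwarding can be sanity-checked by stubbing out docker. This is only a sketch: the docker function below is a stub that prints the command instead of running it, and the compartment OCID is a placeholder.

```shell
# Stub docker so we can see what the wrapper would run, without a Docker
# engine or an OCI tenancy. Shell functions shadow binaries on PATH.
docker() { echo "docker $*"; }

# The oci() wrapper from the post, verbatim.
oci() { docker run --rm --mount type=bind,source=$HOME/.oci,target=/root/.oci stephenpearson/oci-cli:latest "$@"; }

COMPARTMENT_ID="ocid1.tenancy.oc1..exampleuniqueid"   # placeholder OCID
out=$(oci iam policy list --compartment-id "$COMPARTMENT_ID" --all)
echo "$out"
```

The captured output shows the full docker run invocation with the CLI arguments appended after the image name, confirming that `"$@"` forwards them untouched.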
But by using the Docker container, we keep our system tidy and we can easily benefit from the latest version of the OCI CLI at all times.</p> <p>&nbsp;</p> <h2>Resources</h2> <p>OCI CLI Command Reference &#8211; <a title="https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/index.html" href="https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/index.html">https://docs.cloud.oracle.com/iaas/tools/oci-cli/latest/oci_cli_docs/index.html</a></p> <p>Terraform Provider for OCI: <a title="https://www.terraform.io/docs/providers/oci/index.html" href="https://www.terraform.io/docs/providers/oci/index.html">https://www.terraform.io/docs/providers/oci/index.html</a></p> <p>GitHub repo for OCI CLI Docker &#8211;  <a title="https://github.com/stephenpearson/oci-cli " href="https://github.com/stephenpearson/oci-cli ">https://github.com/stephenpearson/oci-cli </a></p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/14/get-going-quickly-with-command-line-interface-for-oracle-cloud-infrastructure-using-docker-container/">Get going quickly with Command Line Interface for Oracle Cloud Infrastructure using Docker container</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Lucas Jellema https://technology.amis.nl/?p=50041 Sun Oct 14 2018 11:34:14 GMT-0400 (EDT) Relational to JSON with SQL https://jsao.io/2018/10/relational-to-json-with-sql/ <p>Oracle started adding JSON support to Oracle Database with version 12.1.0.2. The earliest support was targeted at storing, indexing, and querying JSON data. Version 12.2 rounded out that support by adding features for generating, exploring, and processing JSON data. See the <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/index.html">JSON Developer&#8217;s Guide</a> for a comprehensive overview of what&#8217;s now available. 
In this post, I&#8217;ll leverage the new SQL operators for JSON generation to convert the relational data to meet the goal.<br /> <span id="more-3195"></span></p> <div class="alert alert-info" role="alert"> <strong>Please Note:</strong> This post is part of <a href="https://jsao.io/2015/07/relational-to-json-in-oracle-database">a series on generating JSON from relational data in Oracle Database</a>. See that post for details on the solution implemented below as well as other options that can be used to achieve that goal. </div> <h4>Solution</h4> <p>The 12.2+ SQL functions available for JSON generation are:</p> <ul> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/generation.html#GUID-1084A518-A44A-4654-A796-C1DD4D8EC2AA">JSON_OBJECT</a> &#8211; single-row function, creates an object for each row.</li> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/generation.html#GUID-F942D202-E4BB-4ED8-997E-AEBD6D8ED8C1">JSON_ARRAY</a> &#8211; single-row function, creates an array for each row.</li> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/generation.html#GUID-E4DDB4E8-A4B9-4EA9-BC26-1879AA661D37">JSON_OBJECTAGG</a> &#8211; aggregate function, creates an object based on groups of rows.</li> <li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/generation.html#GUID-B0FA4582-762D-4C32-8A0C-265142BD347B">JSON_ARRAYAGG</a> &#8211; aggregate function, creates an array based on groups of rows.</li> </ul> <p>The following solution uses JSON_OBJECT and JSON_ARRAYAGG multiple times, nesting them as needed to produce the desired output.</p> <pre class="crayon-plain-tag">select json_object(
         'id' is department_id,
         'name' is department_name,
         'location' is (
           select json_object(
                    'id' is location_id,
                    'streetAddress' is street_address,
                    'postalCode' is postal_code,
                    'country' is (
                      select json_object(
                               'id' is country_id,
                               'name' is country_name,
                               'regionId' is region_id
                             )
                      from countries
                      where country_id = loc.country_id
                    )
                  )
           from locations loc
           where location_id = dept.location_id
         ),
         'manager' is (
           select json_object(
                    'id' is employee_id,
                    'name' is first_name || ' ' || last_name,
                    'salary' is salary,
                    'job' is (
                      select json_object(
                               'id' is job_id,
                               'title' is job_title,
                               'minSalary' is min_salary,
                               'maxSalary' is max_salary
                             )
                      from jobs
                      where job_id = man.job_id
                    )
                  )
           from employees man
           where employee_id = dept.manager_id
         ),
         'employees' is (
           select json_arrayagg(
                    json_object(
                      'id' is employee_id,
                      'name' is first_name || ' ' || last_name,
                      'isSenior' is case
                          when emp.hire_date &lt; to_date('01-01-2005', 'dd-mm-yyyy') then 'true'
                          else 'false'
                        end format json,
                      'commissionPct' is commission_pct,
                      'jobHistory' is (
                        select json_arrayagg(
                                 json_object(
                                   'id' is job_id,
                                   'departmentId' is department_id,
                                   'startDate' is to_char(start_date, 'DD-MON-YYYY'),
                                   'endDate' is to_char(end_date, 'DD-MON-YYYY')
                                 )
                               )
                        from job_history
                        where employee_id = emp.employee_id
                      )
                    )
                  )
           from employees emp
           where department_id = dept.department_id
         )
       ) as department
from departments dept
where department_id = :department_id</pre> <p>As was the case with the <a href="https://jsao.io/2015/07/relational-to-json-with-ords/">SQL query used for ORDS</a>, this is a fairly large query. But I love the control the new SQL operators provide! As an example, note the FORMAT JSON keywords on the isSenior property, which declare that the value is to be considered JSON data. This allowed me to add Boolean values to the JSON output despite the fact that Oracle&#8217;s SQL engine doesn&#8217;t support Boolean. 
There are <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/adjsn/generation.html#GUID-C0F8F837-EE36-4EDD-9261-6E8A9245906C__OPTIONALBEHAVIORFORSQLJSONGENERATIO-022017DB">other optional keywords</a> to modify the behavior of the JSON generating functions.</p> <h4>Output</h4> <p>I&#8217;m happy to report that the solution above generates JSON that meets the goal 100%!</p> <pre class="crayon-plain-tag">{
  &quot;id&quot;: 10,
  &quot;name&quot;: &quot;Administration&quot;,
  &quot;location&quot;: {
    &quot;id&quot;: 1700,
    &quot;streetAddress&quot;: &quot;2004 Charade Rd&quot;,
    &quot;postalCode&quot;: &quot;98199&quot;,
    &quot;country&quot;: {
      &quot;id&quot;: &quot;US&quot;,
      &quot;name&quot;: &quot;United States of America&quot;,
      &quot;regionId&quot;: 2
    }
  },
  &quot;manager&quot;: {
    &quot;id&quot;: 200,
    &quot;name&quot;: &quot;Jennifer Whalen&quot;,
    &quot;salary&quot;: 4400,
    &quot;job&quot;: {
      &quot;id&quot;: &quot;AD_ASST&quot;,
      &quot;title&quot;: &quot;Administration Assistant&quot;,
      &quot;minSalary&quot;: 3000,
      &quot;maxSalary&quot;: 6000
    }
  },
  &quot;employees&quot;: [
    {
      &quot;id&quot;: 200,
      &quot;name&quot;: &quot;Jennifer Whalen&quot;,
      &quot;isSenior&quot;: true,
      &quot;commissionPct&quot;: null,
      &quot;jobHistory&quot;: [
        {
          &quot;id&quot;: &quot;AD_ASST&quot;,
          &quot;departmentId&quot;: 90,
          &quot;startDate&quot;: &quot;17-SEP-1995&quot;,
          &quot;endDate&quot;: &quot;17-JUN-2001&quot;
        },
        {
          &quot;id&quot;: &quot;AC_ACCOUNT&quot;,
          &quot;departmentId&quot;: 90,
          &quot;startDate&quot;: &quot;01-JUL-2002&quot;,
          &quot;endDate&quot;: &quot;31-DEC-2006&quot;
        }
      ]
    }
  ]
}</pre> <p>However, when I ran this query on department 50 (which has the most employees) I received this error: ORA-40459: output value too large (actual: 4071, maximum: 4000). This is because each of the JSON generation functions has a default output of varchar2(4000). 
This is fine for many use cases, but it&#8217;s easily exceeded with the aggregate functions and deeply nested structures.</p> <p>The solution is to leverage the RETURNING clause to specify a different data type or size. See <a href="https://gist.github.com/dmcghan/127cdeaf770203d467f8354deb31a598">this gist</a> to get an idea of how the solution above could be modified to use the RETURNING clause. In 12.2, <a href="https://asktom.oracle.com/pls/apex/asktom.search?tag=sql-json-ora-40459-exception">there were some restrictions</a> on which functions could work with CLOBs, but they&#8217;ve been lifted in 18c.</p> <h4>Summary</h4> <p>This is my favorite solution of the series &#8211; by far! The JSON generation functions are very powerful and because they&#8217;re in the database, it&#8217;s possible to leverage them from just about anywhere, including Node.js and ORDS.</p> danmcghan https://jsao.io/?p=3195 Fri Oct 12 2018 08:21:05 GMT-0400 (EDT) How to deploy InfluxDB in Azure using a VM service with dedicated storage https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/ <p>&nbsp;</p> <p>InfluxDB isn&#8217;t natively supported on Azure. This blog post will teach you how to deploy InfluxDB (or any other database) in a VM with a managed disk on the Azure platform. This will enable you to use this fast time-series database for your project. If the standard range of supported databases (MySQL, CosmosDB, &#8230;) on Azure doesn&#8217;t suffice, this blog post is for you.</p> <p>Prerequisite:</p> <ul> <li>Azure account <ul> <li><a href="http://portal.azure.com">Azure Portal</a></li> </ul> </li> <li>PuTTY(gen) <ul> <li>It is possible to generate keys through Azure Cloud Shell but we will use <a href="https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html">PuTTY</a>.</li> </ul> </li> </ul> <p><strong>Where are the keys at?<br /> </strong>PuTTY is a free open-source terminal emulator.
It supports multiple network protocols; in this blog post we will only use SSH. Fun fact: the name PuTTY has no official meaning. We will use this tool to create keys and connect to our VM.</p> <p>Download PuTTY with the link provided in the Prerequisite tab. When PuTTY and PuTTYgen are installed, launch PuTTYgen. This is a tool for creating SSH keys.<br /> <img data-attachment-id="49920" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/1-4/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/1.jpg" data-orig-size="475,469" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="1" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/1-300x296.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/1.jpg" class="size-medium wp-image-49920 alignleft" src="https://technology.amis.nl/wp-content/uploads/2018/12/1-300x296.jpg" alt="" width="300" height="296" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/1-300x296.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/1.jpg 475w" sizes="(max-width: 300px) 100vw, 300px" /></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>Click the generate button and move your cursor around the progress bar to &#8216;generate some randomness&#8217;. We will use these keys to connect to our VM later.
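</p> <p>As an aside: if you prefer a command line over a GUI (for example from Azure Cloud Shell, mentioned in the prerequisites), OpenSSH&#8217;s ssh-keygen produces an equivalent key pair. A sketch, with the file name azure_vm as an arbitrary choice:</p>

```shell
# Generate a 2048-bit RSA key pair with an empty passphrase (-N "")
# into azure_vm (private key) and azure_vm.pub (public key).
ssh-keygen -t rsa -b 2048 -N "" -f azure_vm -q

# The single line in azure_vm.pub is what gets pasted into the Azure portal.
cat azure_vm.pub
```

<p>The rest of the walkthrough applies unchanged; only the key files come from a different tool.</p> <p>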
Click the &#8220;Save public key&#8221; and &#8220;Save private key&#8221; buttons and save your keys to a secure place on your computer. You can close PuTTYgen and log into <a href="http://portal.azure.com">Azure Portal</a>.</p> <p><strong>Creating a Virtual Machine<br /> </strong>Now that we have our keys we can get started in Azure. But first I&#8217;ll explain why we are using a VM. InfluxDB isn&#8217;t natively supported on Azure, so we have to run it ourselves.</p> <p>Most people (including myself) think using a Container Instance with a Shared File Storage is the appropriate option. It&#8217;s stateless, secured and a cheap alternative to a VM. The major problem is that after a restart InfluxDB can&#8217;t read its own data anymore. Shared File Storage isn&#8217;t supported by InfluxDB because it causes a lot of bugs. InfluxDB isn&#8217;t alone here: MongoDB explicitly notes that Shared File Storage isn&#8217;t supported at all.</p> <p>We now understand why we are using a VM. Let&#8217;s create one!
Search for Virtual Machines and click it.</p> <p><img data-attachment-id="49922" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/3-3/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/3.jpg" data-orig-size="497,272" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="3" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/3-300x164.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/3.jpg" class="alignnone size-medium wp-image-49922" src="https://technology.amis.nl/wp-content/uploads/2018/12/3-300x164.jpg" alt="" width="300" height="164" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/3-300x164.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/3.jpg 497w" sizes="(max-width: 300px) 100vw, 300px" /><br /> Since we want to create a new VM click add.</p> <p><img data-attachment-id="49923" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/4-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/4.jpg" data-orig-size="587,243" data-comments-opened="1" 
data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="4" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/4-300x124.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/4.jpg" class="alignnone size-medium wp-image-49923" src="https://technology.amis.nl/wp-content/uploads/2018/12/4-300x124.jpg" alt="" width="300" height="124" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/4-300x124.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/4.jpg 587w" sizes="(max-width: 300px) 100vw, 300px" /><br /> Now we are in the Virtual Machine wizard with a lot of options. Note that most options depend on your case. 
We will create something cheap for this demo.</p> <p><img data-attachment-id="49924" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/5-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/5.jpg" data-orig-size="705,857" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="5" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/5-247x300.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/5.jpg" class="alignnone size-full wp-image-49924" src="https://technology.amis.nl/wp-content/uploads/2018/12/5.jpg" alt="" width="705" height="857" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/5.jpg 705w, https://technology.amis.nl/wp-content/uploads/2018/12/5-247x300.jpg 247w" sizes="(max-width: 705px) 100vw, 705px" /></p> <ol> <li>Choose your subscription; this is where the expenses are charged.</li> <li>Choose your resource group; if you did not have one before you can use the <em>create new</em> option.</li> <li>Choose your VM name; something recognizable is advisable, since the VM creates a lot of services for you with this name ($NAME-ip, &#8230;).</li> <li>Choose your preferred region.</li> <li>This option protects your data from datacenter outages; if you are reading this blog post you probably don&#8217;t need it. Skip it for now.</li> <li>I will use Ubuntu 18.04, but you are free to use whatever you like. Linux is cheaper than Windows, keep that in mind.</li> <li>B1s has 1 vCPU and 1GB memory.
Depending on the load you put on your VM choose something more powerful. You can always scale up later. <strong>Do not</strong> look at disk space. We will create and attach our own managed disk later.</li> <li>We will use SSH public key authentication.</li> <li>Copy and paste the <strong>public</strong> key we generated with PuTTYgen. Also choose your login name, I will use &#8220;influx&#8221;.</li> <li>When the VM is running we want to connect to it through PuTTY. Open up port 22 (SSH) with this setting. Otherwise the error &#8220;Connection Refused&#8221; will occur.</li> </ol> <p>A database saves data, that is why we need some extra disk space. Click Next : Disks.<br /> <img data-attachment-id="49925" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/6-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/6.jpg" data-orig-size="637,406" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="6" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/6-300x191.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/6.jpg" class="alignnone size-medium wp-image-49925" src="https://technology.amis.nl/wp-content/uploads/2018/12/6-300x191.jpg" alt="" width="300" height="191" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/6-300x191.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/6.jpg 637w" sizes="(max-width: 300px) 100vw, 300px" /></p> <p>Select your OS disk. I am using the Standard HDD. 
If you want something faster, go for it.</p> <p><img data-attachment-id="49926" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/7-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/7.jpg" data-orig-size="559,382" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="7" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/7-300x205.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/7.jpg" class="alignnone size-medium wp-image-49926" src="https://technology.amis.nl/wp-content/uploads/2018/12/7-300x205.jpg" alt="" width="300" height="205" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/7-300x205.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/7.jpg 559w" sizes="(max-width: 300px) 100vw, 300px" /></p> <ol> <li>Choose your disk type. A standard HDD is 60 mb/s and really cheap. For a lot of use cases this is enough.</li> <li>Choose the name</li> <li>I will use 100GB. HDD prices are divided in tiers, click <a href="https://azure.microsoft.com/pricing/details/managed-disks/">here.</a></li> <li>We need an empty disk, click create and go to Next : Management</li> </ol> <p>We are now in the Management section. Make sure you select the correct Storage Account for diagnostics. If you do not have one, click the create new button. This is where your logs are stored.</p> <p>We are done with the VM settings. Click Review + create and check once more if everything is properly configured. 
Azure will deploy your VM to the selected resourcegroup, this can take a minute. Once the VM is up and running go to your resource group and click on your VM.</p> <p><img data-attachment-id="49929" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/10-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/10.jpg" data-orig-size="1239,498" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="10" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/10-300x121.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/10-1024x412.jpg" class="alignnone size-large wp-image-49929" src="https://technology.amis.nl/wp-content/uploads/2018/12/10-1024x412.jpg" alt="" width="702" height="282" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/10-1024x412.jpg 1024w, https://technology.amis.nl/wp-content/uploads/2018/12/10-300x121.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/10-768x309.jpg 768w, https://technology.amis.nl/wp-content/uploads/2018/12/10.jpg 1239w" sizes="(max-width: 702px) 100vw, 702px" /></p> <p>On the right there is a label named &#8220;Public IP Address&#8221;. Copy the IP address and save it. If you ever forget the IP, this is where it is located. In this tutorial we are not going to configure a DNS. Next up click the networking tab on the left.</p> <p>Because we want to access the database from outside the VM as well we need to open up port 8086. 
In the network tab click Add inbound port rule.</p> <p><img data-attachment-id="49931" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/12-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/12.jpg" data-orig-size="305,634" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="12" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/12-144x300.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/12.jpg" class="alignnone size-full wp-image-49931" src="https://technology.amis.nl/wp-content/uploads/2018/12/12.jpg" alt="" width="305" height="634" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/12.jpg 305w, https://technology.amis.nl/wp-content/uploads/2018/12/12-144x300.jpg 144w" sizes="(max-width: 305px) 100vw, 305px" /></p> <p>Change the port to 8086 and choose your name and description. Next up click Add.</p> <p>Okay good, but what did we do? We created an Ubuntu VM with an extra HDD to store our data. We also opened port 22 for SSH and port 8086 for InfluxDB. We still need to partition our disk, move Docker to that disk and install InfluxDB. 
Let&#8217;s get right into that.</p> <p><strong>Establishing a connection with our VM</strong><br /> Open up PuTTY!</p> <p><img data-attachment-id="49932" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/13-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/13.jpg" data-orig-size="442,439" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="13" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/13-300x298.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/13.jpg" class="alignnone size-medium wp-image-49932" src="https://technology.amis.nl/wp-content/uploads/2018/12/13-300x298.jpg" alt="" width="300" height="298" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/13-300x298.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/13-150x150.jpg 150w, https://technology.amis.nl/wp-content/uploads/2018/12/13-144x144.jpg 144w, https://technology.amis.nl/wp-content/uploads/2018/12/13.jpg 442w" sizes="(max-width: 300px) 100vw, 300px" /></p> <p>Our VM is running but we secured it with a SSH key. We need to attach our private key to open up a session. Go to Auth and click browse, select your private key and click ok. 
Go back to Session.</p> <p><img data-attachment-id="49933" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/14-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/14.jpg" data-orig-size="443,438" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="14" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/14-300x297.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/14.jpg" class="alignnone size-medium wp-image-49933" src="https://technology.amis.nl/wp-content/uploads/2018/12/14-300x297.jpg" alt="" width="300" height="297" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/14-300x297.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/14-150x150.jpg 150w, https://technology.amis.nl/wp-content/uploads/2018/12/14-144x144.jpg 144w, https://technology.amis.nl/wp-content/uploads/2018/12/14.jpg 443w" sizes="(max-width: 300px) 100vw, 300px" /></p> <p>Paste your VM&#8217;s IP address, use port 22 and choose connection type SSH. Click Open to connect. Log in using your username (the one you chose during the VM setup). Congratulations! You are now logged in to a computer running in one of Microsoft&#8217;s datacenters.</p> <p><strong>Inside the VM<br /> </strong>No more fancy GUIs; real (wo)men use the CLI, but don&#8217;t worry, you don&#8217;t have to be a CLI wizard (yet).
When copying a line from this blog, use the right mouse button to paste it into your own CLI.</p> <p>First we need to partition our managed disk. In my case, a 100GB HDD. Afterwards we need to mount it. That way we can move our InfluxDB data to the managed disk.</p> <p>Let&#8217;s find our managed disk with the following commands. We are using two commands to verify that it&#8217;s indeed the right disk.</p> <pre class="brush: plain; title: ; notranslate">
dmesg | grep SCSI
lsblk
</pre> <p>I made a 100GB disk, which is now called &#8220;sdc&#8221;. Verify that the disk is in both outputs like in the image below.<br /> <img data-attachment-id="49934" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/15-2/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/15.jpg" data-orig-size="661,434" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="15" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/15-300x197.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/15.jpg" class="alignnone size-full wp-image-49934" src="https://technology.amis.nl/wp-content/uploads/2018/12/15.jpg" alt="" width="661" height="434" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/15.jpg 661w, https://technology.amis.nl/wp-content/uploads/2018/12/15-300x197.jpg 300w, https://technology.amis.nl/wp-content/uploads/2018/12/15-214x140.jpg 214w" sizes="(max-width: 661px) 100vw, 661px" /></p> <p>First we need to partition the disk.
If your disk has a different name than <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">sdc</tt>, change accordingly.</p> <pre class="brush: plain; title: ; notranslate">sudo fdisk /dev/sdc</pre> <p>You are prompted to enter a command. Use the <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">N</tt> command to create a new partition. After you have created your new partition, we need to write it to our disk. Use the <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">W</tt> command to write and exit. We successfully created a new partition and wrote it to our managed disk.</p> <p>Now write a file system to the partition using the <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">mkfs</tt> command. We are going to create an <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">ext4</tt> filesystem. Execute the command below.</p> <pre class="brush: plain; title: ; notranslate">sudo mkfs -t ext4 /dev/sdc</pre> <p>We formatted the HDD with <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">ext4</tt>, nice! The last thing to do is to create a directory to mount the file system using <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">mkdir</tt>. I am going to name my folder <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">/databasedata</tt> but you can opt for a different name. Afterwards we mount the disk to that folder.</p> <pre class="brush: plain; title: ; notranslate">
sudo mkdir /databasedata
sudo mount /dev/sdc /databasedata
</pre> <p>To ensure that the drive is mounted automatically after a reboot, it must be added to the <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">fstab</tt> file. It&#8217;s a best practice to use the UUID to refer to the drive rather than the name.
To find the UUID use:</p> <pre class="brush: plain; title: ; notranslate"> sudo -i blkid </pre> <p><img data-attachment-id="49935" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/attachment/16/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/16.jpg" data-orig-size="641,160" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="16" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/16-300x75.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/16.jpg" class="alignnone size-full wp-image-49935" src="https://technology.amis.nl/wp-content/uploads/2018/12/16.jpg" alt="" width="641" height="160" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/16.jpg 641w, https://technology.amis.nl/wp-content/uploads/2018/12/16-300x75.jpg 300w" sizes="(max-width: 641px) 100vw, 641px" /><br /> Ah, there it is! <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">/dev/sdc</tt> is the one we need. Copy the UUID. Now format the following line to your own needs. My $FOLDERNAME is <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">/databasedata</tt>.</p> <pre class="brush: plain; title: ; notranslate">UUID=$UUID /$FOLDERNAME ext4 defaults,nofail 1 2</pre> <p>Now we are going to add the line above to the <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">fstab</tt> file. 
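</p> <p>If you&#8217;d rather not edit the file by hand, the line can also be built and appended non-interactively. A sketch; the UUID below is a placeholder for the value blkid printed for your disk, and /databasedata is my folder name:</p>

```shell
# Placeholder values: substitute the UUID reported by blkid and your own mount folder.
UUID="0b6ed2a4-7e44-4c0f-9f39-1a7e0c2d5b11"
MOUNTPOINT="/databasedata"

# Build the fstab entry: device (by UUID), mount point, fs type, options, dump, pass.
LINE="UUID=$UUID $MOUNTPOINT ext4 defaults,nofail 1 2"
echo "$LINE"

# Append it to /etc/fstab (needs root), e.g.:
#   echo "$LINE" | sudo tee -a /etc/fstab
```

<p>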
If you are not familiar with <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">vim</tt>: use <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">i</tt> to go into insert mode, and once you have added the line press <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">ESC</tt> and type <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">:wq</tt> to save and exit.</p> <pre class="brush: plain; title: ; notranslate">sudo vi /etc/fstab</pre> <p>To verify that everything works:</p> <pre class="brush: plain; title: ; notranslate">lsblk</pre> <p><img data-attachment-id="49936" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/attachment/17/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/17.jpg" data-orig-size="382,176" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="17" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/17-300x138.jpg" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/17.jpg" class="alignnone size-full wp-image-49936" src="https://technology.amis.nl/wp-content/uploads/2018/12/17.jpg" alt="" width="382" height="176" srcset="https://technology.amis.nl/wp-content/uploads/2018/12/17.jpg 382w, https://technology.amis.nl/wp-content/uploads/2018/12/17-300x138.jpg 300w" sizes="(max-width: 382px) 100vw, 382px" /><br /> It&#8217;s mounted correctly!</p> <p><strong>VMception<br /> </strong>We are going to use Docker to run InfluxDB in our
VM. Let&#8217;s start by installing Docker.</p> <pre class="brush: plain; title: ; notranslate">
sudo apt-get update
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
</pre> <p>Now that we have updated our package index and installed some prerequisites, let&#8217;s install Docker. First we acquire the Docker GPG key, verify it using a fingerprint and add the Ubuntu repository. Then we update <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">apt-get</tt> once more and install docker-ce.</p> <pre class="brush: plain; title: ; notranslate">
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
    &quot;deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable&quot;
sudo apt-get update
sudo apt-get install docker-ce
</pre> <p>We finally have our Docker instance and we are close to installing InfluxDB. Docker isn&#8217;t really flexible when it comes to data storage. Since we want everything on our managed disk, we need to change the default directory of Docker. First, verify that Docker is working.</p> <pre class="brush: plain; title: ; notranslate">docker -v</pre> <p>When Docker starts, it checks the <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">daemon.json</tt> file for variables. We will edit this file to let Docker know we want all our data on <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">/$YOURFOLDER</tt>. Enter the following command to edit it (remember the vim commands?).</p> <pre class="brush: plain; title: ; notranslate">sudo vi /etc/docker/daemon.json</pre> <p>Insert the following (the folder is where your disk is mounted).
And of course save and exit.</p> <pre class="brush: plain; title: ; notranslate">
{
  &quot;graph&quot;: &quot;$YOURFOLDER&quot;,
  &quot;storage-driver&quot;: &quot;overlay&quot;
}
</pre> <p>Good, but Docker only tries to read this file when it initially starts. Let&#8217;s restart the daemon and Docker.</p> <pre class="brush: plain; title: ; notranslate">
sudo systemctl daemon-reload
sudo systemctl restart docker
</pre> <p>To confirm that it&#8217;s really using the correct directory, type the following:</p> <pre class="brush: plain; title: ; notranslate">sudo docker info|grep &quot;Docker Root Dir&quot;</pre> <p>Once you have verified that Docker is using the new directory, we can remove the old Docker files.</p> <pre class="brush: plain; title: ; notranslate">sudo rm -rf /var/lib/docker</pre> <p><strong>You&#8217;re still here? Good! Let&#8217;s install InfluxDB</strong><br /> A couple of steps ago we changed the root folder of Docker to our managed disk. All Docker data (and Influx) will be stored there. We are creating a volume because volumes are completely managed by Docker and have several advantages over bind mounts. Let&#8217;s create a volume! I&#8217;m going to call it <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">influxdb_data</tt> but you can choose whatever you want.</p> <pre class="brush: plain; title: ; notranslate">sudo docker volume create influxdb_data</pre> <p>Let&#8217;s install InfluxDB. You are free to ingest as many variables as you want.
In this example we are only going to give it authentication, specify the port, make sure it restarts on failure and link the volume.</p> <pre class="brush: plain; title: ; notranslate">
sudo docker run -d \
    --name=&quot;influxdb&quot; \
    --restart on-failure \
    -p 8086:8086 \
    -e INFLUXDB_HTTP_AUTH_ENABLED=true \
    -v influxdb_data:/var/lib/influxdb \
    influxdb -config /etc/influxdb/influxdb.conf
</pre> <p>Verify that your container is running.</p> <pre class="brush: plain; title: ; notranslate">sudo docker container ls</pre> <p>To get into our container and start using the database:</p> <pre class="brush: plain; title: ; notranslate">
sudo docker exec -it influxdb /bin/bash
influx
</pre> <p><img data-attachment-id="49983" data-permalink="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/giphy/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/12/giphy.gif" data-orig-size="330,190" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="giphy" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/12/giphy-300x173.gif" data-large-file="https://technology.amis.nl/wp-content/uploads/2018/12/giphy.gif" class="alignnone size-full wp-image-49983" src="https://technology.amis.nl/wp-content/uploads/2018/12/giphy.gif" alt="" width="330" height="190" /><br /> At this point you should probably be like Dwight. You did it!
We now have InfluxDB running locally in a VM, and we can access it through <tt style="background-color: #eff0f1; padding: 3px; color: #000000;">$VMIPADDRESS:8086</tt>.</p> <p><strong>What to do now?<br /> </strong>InfluxDB has many options and there is a lot to learn. Read the documentation and start inserting, querying and modifying data. You can also visualize your data in Grafana, which you can also run in a VM! Hopefully this guide has given you a better understanding of how to run databases in Azure.</p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/12/how-to-deploy-influxdb-in-azure-using-a-vm-service-with-dedicated-storage/">How to deploy InfluxDB in Azure using a VM service with dedicated storage</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Sam Vruggink https://technology.amis.nl/?p=49915 Fri Oct 12 2018 06:38:24 GMT-0400 (EDT) PeopleCounter part one: Counting People https://technology.amis.nl/2018/10/12/peoplecounter-part-one-counting-people/ <h2>Intro</h2> <p>Internet of Things stands for connecting devices to the internet. The devices are then able to communicate with each other. In our project, the PeopleCounter, we use a mini-computer with intelligent software to count the number of people in front of a camera. We send that number to the Oracle IoT Cloud. With a business rule we check whether the number is higher than a specific value. If it is, an electric device is turned on. We use a red tube to see the business rule being activated. (See image one.) Our project consists of two parts: the PeopleCounter itself (part one) and the cloud (part two). 
I describe in this blog post how we created the PeopleCounter and its parts.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/PeopleCounterInAction.jpg"><img class="aligncenter" style="margin-right: auto;margin-left: auto;float: none" title="PeopleCounterInAction" src="https://technology.amis.nl/wp-content/uploads/2018/10/PeopleCounterInAction_thumb.jpg" alt="PeopleCounterInAction" width="580" height="772" border="0" /></a><span id="more-49997"></span></p> <h2>Use Case</h2> <p>We describe a use case to show that we as a company can develop applications with IoT as a part of them. We have shown this use case at a conference named nlOUG. nlOUG stands for <em>Nederlands Oracle User Group</em>. Companies can give presentations at the conference about techniques that use Oracle technology.</p> <p>Our use case was the following:</p> <ul> <li>We have a room where we have a Raspberry Pi mounted with a camera.</li> <li>The Pi films this room.</li> <li>The images are passed through a library or tool which counts the people in the images.</li> <li>We send this number to the Oracle IoT Cloud.</li> <li>If the number is higher than a specified value, the cloud sends a signal to an external system, which gets activated.</li> </ul> <h2>Hardware</h2> <p>We use a Raspberry Pi Model 3B+, on which all of the computation takes place. It is the newest version of the Pi and is relatively cheap. This model has connections for Wi-Fi, Ethernet, HDMI and, most importantly, the camera. We use the second generation of the camera module (Camera Board &#8211; V2). It has an 8MP lens and can shoot video in Full HD. As casing we use the Camera Box Bundle, which is specifically designed to hold a Pi with a camera mounted. We bought our parts from the following website: <a href="https://www.modmypi.com/">https://www.modmypi.com/</a>. 
When everything is assembled it looks like this.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_155950.jpg"><img class="aligncenter" style="margin-right: auto;margin-left: auto;float: none" title="IMG_20180223_155950" src="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_155950_thumb.jpg" alt="IMG_20180223_155950" width="364" height="484" border="0" /></a></p> <h2>OpenCV</h2> <p>In our first version we use a library called OpenCV. <a href="https://opencv.org/">OpenCV</a> stands for Open Source Computer Vision Library and is an open source computer vision and machine learning software library. It has hundreds of different algorithms to detect faces or movement, remove backgrounds, and much more. We used a Java-based version, but the original is written in C. The Java-based version can be found in this <a href="https://github.com/atduskgreg/opencv-processing">repository</a>.</p> <p>The following code shows how the OpenCV library works. The code receives a video input, which we pass into a function called opencv.loadImage(video). With opencv.loadCascade() we tell OpenCV which cascade classifier to scan the input with (the sketch below uses OpenCV.CASCADE_UPPERBODY to look for upper bodies). 
Every detection is then pointed out by drawing a square around it.</p> <pre class="brush: plain; title: ; notranslate">
import gohai.glvideo.*;
import gab.opencv.*;
import java.awt.*;

GLCapture video;
OpenCV opencv;
int x = 0;
PImage snapshot;
Rectangle[] faces;
int numFaces;

void setup() {
  frameRate(5);
  size(640, 480, P2D);
  String[] devices = GLCapture.list();
  video = new GLCapture(this, devices[0], width, height);
  video.start();
  opencv = new OpenCV(this, width, height);
  opencv.loadCascade(OpenCV.CASCADE_UPPERBODY);
}

void draw() {
  println(frameRate);
  background(0);
  if (video.available()) {
    video.read();
    opencv.loadImage(video);
  }
  if (x &gt; 50) {
    snapshot = opencv.getSnapshot();
    faces = opencv.detect();
    stroke(255, 0, 0);
    strokeWeight(2);
    noFill();
    for (int i = 0; i &lt; faces.length; i++) {
      rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    }
    numFaces = faces.length;
    x = 0;
  }
  stroke(255);
  textSize(30);
  println(numFaces);
  text(numFaces, width/2, height/2);
  x++;
}
</pre> <p><img data-attachment-id="49992" data-permalink="https://technology.amis.nl/2018/10/12/peoplecounter-part-one-counting-people/img_20180223_161411_thumb-jpg/" data-orig-file="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_161411_thumb.jpg" data-orig-size="364,484" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="IMG_20180223_161411_thumb.jpg" data-image-description="" data-medium-file="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_161411_thumb-226x300.jpg" 
data-large-file="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_161411_thumb.jpg" class="size-medium wp-image-49992 aligncenter" src="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_161411_thumb-226x300.jpg" alt="" width="226" height="300" srcset="https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_161411_thumb-226x300.jpg 226w, https://technology.amis.nl/wp-content/uploads/2018/10/IMG_20180223_161411_thumb.jpg 364w" sizes="(max-width: 226px) 100vw, 226px" /></p> <p>Soon we found out that the Pi isn&#8217;t so powerful. Our application uses a video input with a resolution of 640 x 480, which is modest considering the camera can shoot Full HD. Even at 640 x 480 the program ran very slowly: the frame rate dropped to 2 or 3 frames per second. Shooting at a lower resolution helps, but then the image is hard to see on a screen, which makes for a poor user experience.</p> <p>Because of these performance issues we chose to take photos instead of shooting video, and to analyze those. Another option was to send the video to the cloud and analyze the input there. That is considerably faster, because of better hardware and software. The problem then is that the video is put online, and there is a chance that people intercept the video and misuse it. In our solution this is less of an issue since we don&#8217;t store the photo: we delete it after it is analyzed.</p> <h2>YOLO</h2> <p>In our second version we use a library called YOLO. YOLO stands for You Only Look Once, and as the name suggests, the library analyzes the photo only once. It splits up the photo into different areas that are analyzed separately. The result is a prediction of the object and a percentage indicating how certain it is.</p> <p>We use a pre-trained <a href="https://en.wikipedia.org/wiki/Weighted_majority_algorithm_(machine_learning)">weight</a> to show how accurate the library is at recognizing objects. 
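The counting itself comes down to scanning the detector's textual output for the class of interest. A minimal Python sketch (the sample output lines below are invented; real darknet output looks similar but varies by version):

```python
# Count detections of one class in darknet-style output lines such as "person: 87%".
def count_class(output, cls):
    return sum(1 for line in output.splitlines()
               if line.split(":")[0].strip() == cls)

# Hypothetical sample of detector output for one photo.
sample = """\
data/room.jpg: Predicted in 1.32 seconds.
person: 87%
person: 64%
chair: 71%
"""
print(count_class(sample, "person"))  # 2
```

Matching on the text before the colon (rather than a substring anywhere in the line) avoids miscounting when a file name or other text happens to contain the class name.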
The library comes with two types of weights: a normal one and a smaller one. We chose the smaller one for performance reasons, at the cost of some accuracy. We also use a modified version of the library, which can be found at this URL: <a href="https://github.com/digitalbrain79/darknet-nnpack">https://github.com/digitalbrain79/darknet-nnpack.</a></p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/PredictionYOLO.png"><img class="aligncenter" style="margin-right: auto;margin-left: auto;float: none" title="PredictionYOLO" src="https://technology.amis.nl/wp-content/uploads/2018/10/PredictionYOLO_thumb.png" alt="PredictionYOLO" width="914" height="772" border="0" /></a></p> <p>We run the following command to start the analysis:</p> <pre class="brush: plain; title: ; notranslate">./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg</pre> <p>The output is then piped into a script. The script counts the number of objects of a given class detected by the library (here &quot;car&quot;; in our use case &quot;person&quot;) and saves that number in a text file.</p> <pre class="brush: plain; title: ; notranslate">
#!/bin/bash
input=0
object=&quot;&quot;
while read -r line; do
  IFS=':' read -r -a object &lt;&lt;&lt; $line
  if [[ $object = &quot;car&quot; ]]
  then
    (( input++ ))
    echo &quot;${object[0]}&quot;
  fi
done
echo $input &gt; numberOfObjects.txt
</pre> <h2>Python</h2> <p>We expanded the script and rewrote it in Python, so that one script takes a photo, analyzes it and saves the number of persons counted to a text file. The most important function in our script is called analysePhoto. 
As a result we have the whole Python script below:</p> <pre class="brush: plain; title: ; notranslate">
import json
import subprocess
import time
import timeit
import urllib
import urllib2

PeopleCounter_ON = 'https://maker.ifttt.com/trigger/PeopleCounter_ON/with/key/g2VNF0mp-fFyk4RYbPRK0ZjmZjjorjaFQ2LvjkL2GFC'
PeopleCounter_OFF = 'https://maker.ifttt.com/trigger/PeopleCounter_OFF/with/key/g2VNF0mp-fFyk4RYbPRK0ZjmZjjorjaFQ2LvjkL2GFC'
PeopleCounter_FallBack = 'https://eu-wap.tplinkcloudc.om/?token=f58b1ba2-B46gYJulcdt9rX1QCjdclUv'

status = 'off' # on or off
requestLink = PeopleCounter_OFF
personCount = 0
timeElapsed = time.time()
timeout = None
threshold = None

# Receives Threshold from file to change threshold without exiting the loop
def getMetaData():
    global timeout, threshold
    with open('iotapp/threshold_timeout.json', 'r') as f:
        jsonFile = json.load(f)
        threshold = jsonFile[&quot;threshold&quot;]
        timeout = jsonFile[&quot;timeout&quot;]
    print ('Threshold is {}'.format(threshold))
    print (&quot;timeout is: {}&quot;.format(timeout))

def analysePhoto():
    global personCount
    res = subprocess.check_output(['raspistill', '-o', 'iotapp/data/snapshot.jpg', '-w', '1280', '-h', '720', '-t', '1000', '-p', '0,0,200,200'])
    for line in res.splitlines():
        print (line)
    # Analyse part with YOLO Library
    res = subprocess.check_output(['./darknet', 'detector', 'test', 'cfg/voc.data', 'cfg/tiny-yolo-voc.cfg', 'tiny-yolo-voc.weights', 'iotapp/data/snapshot.jpg'])
    # Checks if a certain object exists, if true then variable is incremented by 1
    for line in res.splitlines():
        if 'person' in line.decode('utf-8'):
            personCount += 1
    timestamp = int(time.time())
    file_path = 'iotapp/data/numberOfObjects_'+str(timestamp)+'.txt'
    file_stream = open(file_path,'w')
    message = '{ &quot;person&quot; : ' + str(personCount) + ' }'
    file_stream.write(message)
    file_stream.close()
    res = subprocess.check_output(['cp', 'predictions.png', 'iotapp/data/predictions.jpg'])
    for line in res.splitlines():
        print (line)

# sends the specific request which is needed to turn on/off the smart link plug
def sendRequest(threshold, peopleCounted, timestamp):
    global requestLink, status, timeElapsed
    htmlResponse = None
    timeToCompareWith = timestamp
    if(peopleCounted &gt;= threshold and status == 'off'):
        requestLink = PeopleCounter_ON
        status = 'on'
    if(peopleCounted &lt; threshold and status == 'on'):
        requestLink = PeopleCounter_OFF
        status = 'off'
    print (requestLink)
    body = urllib.urlencode({'value1' : str(peopleCounted)})
    # First time it doesn't send a request to a link, after certain threshold
    if(timeToCompareWith - timeElapsed &gt; timeout):
        request = urllib2.Request(requestLink, body)
        response = urllib2.urlopen(request)
        htmlResponse = response.read()
        timeElapsed = timeToCompareWith
        response.close()
    return htmlResponse

# Whole loop to keep the program running
while True:
    personCount = 0
    getMetaData()
    start = timeit.default_timer()
    analysePhoto()
    stop = timeit.default_timer()
    print ('People counted: {}'.format(personCount))
    print (stop - start) # Shows how long it takes to analyze the photo
    html = sendRequest(threshold, personCount, time.time());
    if html:
        print (html)
</pre> <p>We ran a Node.js script next to the Python script. 
This script grabs the most recent file with the number of persons counted and sends it to the Oracle IoT Cloud.</p> <pre class="brush: plain; title: ; notranslate">
var fs = require('fs');
var path = require('path');
var _ = require('underscore');

// Return only base file name without dir.
// 'extension' is a filter predicate defined elsewhere in the script.
function getMostRecentFileName(dir) {
  var allFiles = fs.readdirSync(dir);
  var files = allFiles.filter(extension);
  if (files.length &gt; 0) {
    // use underscore for max()
    return _.max(files, function (f) {
      var fullpath = path.join(dir, f);
      // ctime = creation time is used
      // replace with mtime for modification time
      return fs.statSync(fullpath).ctime;
    });
  }
  return '';
}
</pre> <p>We have created a webpage to show the output of the library, as you can see in the image below.</p> <p><a href="https://technology.amis.nl/wp-content/uploads/2018/10/PeopleCounterWebpage.jpg"><img class="aligncenter" style="margin-right: auto;margin-left: auto;float: none" title="PeopleCounterWebpage" src="https://technology.amis.nl/wp-content/uploads/2018/10/PeopleCounterWebpage_thumb.jpg" alt="PeopleCounterWebpage" width="1028" height="580" border="0" /></a></p> <h2>Final word</h2> <p>There are a lot of possibilities. For example, you can scan queues for their length and open or close more counters accordingly. Or you can count the number of animals passing by, so foresters know how many of each kind are living in that part of the forest. Another possibility is to measure how crowded a lunch room is, so colleagues know how busy it is and can choose to come later if it is too busy.</p> <p>I want to thank the people who contributed to this project: Robert van Mölken, Michael van Gastel and Corien Gruppen. Without them it would not have been possible to present this project at nlOUG.</p> <p>This is the end of part one. In part two I will show you how we implemented the cloud and activated the red tube. 
See you in part two!</p> <p>The post <a rel="nofollow" href="https://technology.amis.nl/2018/10/12/peoplecounter-part-one-counting-people/">PeopleCounter part one: Counting People</a> appeared first on <a rel="nofollow" href="https://technology.amis.nl">AMIS Oracle and Java Blog</a>.</p> Kjettil Hennis https://technology.amis.nl/?p=49997 Fri Oct 12 2018 06:00:00 GMT-0400 (EDT) Graceful Stop of Goldengate Extract http://www.fahdmirza.com/2018/10/graceful-stop-of-goldengate-extract.html <div dir="ltr" style="text-align: left;" trbidi="on">It's always a good idea to stop extracts after checking if there is any long-running transaction in the database being captured. Failing to do so might later result in hung or unstable processes.<br /><br /><br /><a name='more'></a><br /><br /><br />Use the following command to check the extract:<br /><br /><pre style="background-attachment: inherit; background-clip: inherit; background-color: white; background-image: inherit; background-origin: inherit; background-position: inherit; background-repeat: inherit; background-size: inherit; border-radius: 4px; border: 0px; box-sizing: border-box; color: #333333; font-size: 10pt; line-height: normal; overflow-wrap: normal; overflow: auto; padding: 0px; white-space: pre-wrap; word-break: normal;">GGSCI (test) 3&gt; send e* status<br /></pre><pre style="background-attachment: inherit; background-clip: inherit; background-color: white; background-image: inherit; background-origin: inherit; background-position: inherit; background-repeat: inherit; background-size: inherit; border-radius: 4px; border: 0px; box-sizing: border-box; color: #333333; font-size: 10pt; line-height: normal; overflow-wrap: normal; overflow: auto; padding: 0px; white-space: pre-wrap; word-break: normal;"><br /></pre><pre style="background-attachment: inherit; background-clip: inherit; background-color: white; background-image: inherit; background-origin: inherit; background-position: inherit; background-repeat: inherit; 
background-size: inherit; border-radius: 4px; border: 0px; box-sizing: border-box; color: #333333; font-size: 10pt; line-height: normal; overflow-wrap: normal; overflow: auto; padding: 0px; white-space: pre-wrap; word-break: normal;"><pre style="background-attachment: inherit; background-clip: inherit; background-image: inherit; background-origin: inherit; background-position: inherit; background-repeat: inherit; background-size: inherit; border-radius: 4px; border: 0px; box-sizing: border-box; font-size: 10pt; line-height: normal; overflow-wrap: normal; overflow: auto; padding: 0px; white-space: pre-wrap; word-break: normal;">Sending STATUS request to EXTRACT ext ...<br /><br /><br />EXTRACT ext (PID 16649)<br /> Current status: <b>In recovery[1]</b>: Processing data with empty data queue<br /> <br />++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br /> <br />testsrv:/u05/ggate&gt; grep -i bound dirrpt/ext.rpt <br />Bounded Recovery Parameter:<br />2017-06-07 15:48:18 INFO OGG-01639 <b>BOUNDED RECOVERY: ACTIVE: for object pool 2</b>: p2628_Redo_Thread_2.<br />2017-06-07 15:48:18 INFO OGG-01640 <b>BOUNDED RECOVERY: recovery start XID: 129.15.2953694. </b><br /><br />-- And then Check RBA is moving with following commands<br />./ggsci <br />info e*<br />lag e*<br /></pre><div></div></pre><div><br /></div></div> Fahd Mirza tag:blogger.com,1999:blog-3496259157130184660.post-5572570241965945581 Thu Oct 11 2018 19:58:00 GMT-0400 (EDT) LEAP#423 The Maker UNO https://blog.tardate.com/2018/10/leap423-the-maker-uno.html <p>I was recently shown an Arduino-compatible board made by Cytron in Penang, Malaysia called the <a href="https://www.cytron.io/p-maker-uno?search=maker%20uno&amp;description=1">Maker UNO</a>. 
I gather it began life as a <a href="https://www.kickstarter.com/projects/1685732347/6-maker-uno-simplifying-arduino-for-education">very successful kickstarter</a> that aimed to produce a better board and associated teaching materials for K-12 education.</p> <p>It packs some nice additional features in the Uno form-factor:</p> <ul> <li>LEDs on all digital pins</li> <li>a piezo buzzer on pin 8</li> <li>a push-button on pin 2</li> </ul> <p>Aside from looking pretty spiffy, a really nice feature is the price - currently listing for RM15 (<a href="https://www.google.com/search?q=myr+15+in+usd">~$3.60 USD</a>)!</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/MakerUno">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a></p> <p>Here’s a quick demo sketch that exercises the LEDs, buzzer and push-button:</p> <iframe class="youtube-embed" src="https://www.youtube.com/embed/YXwNq41K2Ik" frameborder="0" allowfullscreen=""></iframe> https://blog.tardate.com/2018/10/leap423-the-maker-uno.html Thu Oct 11 2018 14:19:15 GMT-0400 (EDT) LEAP#422 VL53L0X Laser Tape Measure https://blog.tardate.com/2018/10/leap422-vl53l0x-laser-tape-measure.html <p>The <a href="https://www.st.com/en/imaging-and-photonics-solutions/vl53l0x.html">VL53L0X</a> is a very small Time-of-Flight (ToF) ranging sensor. 
It is quite widely available as a module, including from Adafruit - see their <a href="https://learn.adafruit.com/adafruit-vl53l0x-micro-lidar-distance-sensor-breakout/downloads">information page</a>.</p> <p>For this project I’m taking the module for a test drive and building a short-range tape measure with an Arduino and Nokia 5110 display.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/VL53L0X/LaserTapeMeasure">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a></p> <p><a href="https://github.com/tardate/LittleArduinoProjects/tree/master/playground/VL53L0X/LaserTapeMeasure"><img src="https://leap.tardate.com/playground/VL53L0X/LaserTapeMeasure/assets/LaserTapeMeasure_build.jpg" alt="hero_image" /></a></p> https://blog.tardate.com/2018/10/leap422-vl53l0x-laser-tape-measure.html Wed Oct 10 2018 13:01:11 GMT-0400 (EDT) LEAP#421 LimeSDR First Look https://blog.tardate.com/2018/10/leap421-limesdr-first-look.html <p>I think I first heard about <a href="https://wiki.myriadrf.org/LimeSDR-USB">LimeSDR-USB</a> on <a href="https://theamphour.com/314-an-interview-with-josh-lifton/">The Amp Hour</a>.</p> <p>I was convinced to jump in and give it a go particularly because of the fully open-source nature of the platform - from hardware all the way to software, including FPGA code.</p> <p>The unit is suited to research and development in a wide range of areas, but I suspect as I get more into this I will be mainly focused on areas such as test and measurement (e.g. 
spectrum analysis) and amateur radio.</p> <p>As always, <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/SDR/LimeSDR/FirstLook">all notes, schematics and code are in the Little Electronics &amp; Arduino Projects repo on GitHub</a> <a href="https://github.com/tardate/LittleArduinoProjects/tree/master/SDR/LimeSDR/FirstLook"><img src="https://leap.tardate.com/SDR/LimeSDR/FirstLook/assets/FirstLook_build.jpg" alt="hero_image" /></a></p> <p>There are some good video resources; a good place to start is: What is the LimeSDR?</p> <iframe class="youtube-embed" src="https://www.youtube.com/embed/LnJLiOCEq9I" frameborder="0" allowfullscreen=""></iframe> https://blog.tardate.com/2018/10/leap421-limesdr-first-look.html Tue Oct 09 2018 13:01:43 GMT-0400 (EDT) “Hidden” Efficiencies of Non-Partitioned Indexes on Partitioned Tables Part II (Aladdin Sane) https://richardfoote.wordpress.com/2018/10/09/hidden-efficiencies-of-non-partitioned-indexes-on-partitioned-tables-part-ii-aladdin-sane/ In Part I of this series, I highlighted how a Non-Partitioned Global Index on a Partitioned Table is able to effectively perform &#8220;Partition Pruning&#8221; by reading only the associated index entries to access just the table blocks of interest from relevant table partitions when the table partitioned keys are specified in an SQL Predicate. 
Understanding [&#8230;] Richard Foote http://richardfoote.wordpress.com/?p=5679 Tue Oct 09 2018 02:16:52 GMT-0400 (EDT) Create the Linux 6.8 VM’s on VirtualBox for the Oracle RAC 12c Workshop https://gavinsoorma.com/2018/10/how-to-create-the-linux-6-8-vms-on-virtualbox-for-the-oracle-rac-12c-workshop/ <p><strong>Oracle RAC How-To Series &#8211; Tutorial 16</strong></p> <p>Download the note (for members only&#8230;)</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/09/How-to-create-the-Linux-6.8-VMs-on-VirtualBox-for-the-Oracle-RAC-12c-Workshop.docx">Tutorial 16</a></p> Gavin Soorma https://gavinsoorma.com/?p=8304 Mon Oct 08 2018 04:20:59 GMT-0400 (EDT) July 2018 PSU Oracle Grid Infrastructure 12c Release 2 https://gavinsoorma.com/2018/10/july-2018-psu-oracle-grid-infrastructure-12c-release-2/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/july-2018-psu-oracle-grid-infrastructure-12c-release-2/"><b>Login</b></a> to access. 
</div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8302 Mon Oct 08 2018 04:19:13 GMT-0400 (EDT) Oracle 12c Clusterware Post-installation and Configuration Verification https://gavinsoorma.com/2018/10/oracle-12c-clusterware-post-installation-and-configuration-verification/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/oracle-12c-clusterware-post-installation-and-configuration-verification/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8300 Mon Oct 08 2018 04:16:06 GMT-0400 (EDT) 18c Grid Infrastructure Upgrade https://gavinsoorma.com/2018/10/18c-grid-infrastructure-upgrad/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/18c-grid-infrastructure-upgrad/"><b>Login</b></a> to access. 
</div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8298 Mon Oct 08 2018 04:13:57 GMT-0400 (EDT) DNS and DHCP setup for 12c R2 Grid Infrastructure installation with Grid Naming Service (GNS) https://gavinsoorma.com/2018/10/dns-and-dhcp-setup-for-12c-r2-grid-infrastructure-installation-with-grid-naming-service-gns/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/dns-and-dhcp-setup-for-12c-r2-grid-infrastructure-installation-with-grid-naming-service-gns/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8296 Mon Oct 08 2018 04:12:10 GMT-0400 (EDT) Adding and Deleting a Node From a RAC Cluster https://gavinsoorma.com/2018/10/adding-and-deleting-a-node-from-a-rac-cluster/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/adding-and-deleting-a-node-from-a-rac-cluster/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8294 Mon Oct 08 2018 04:10:07 GMT-0400 (EDT)