ODTUG Aggregator | ODTUG Blogs | http://localhost:8080 | Wed, 21 Nov 2018 09:45:35 +0000 | http://aggrssgator.com/

4 Factors CFOs Should Consider for a Successful EPM Implementation
https://www.us-analytics.com/hyperionblog/factors-cfos-should-consider

In 2018, the CFO role has changed tremendously. A recent study found that traditional skills like compliance and controllership account for less than 10 percent of the skills critical to success. Today, skills like general management and strategy are most important to a CFO's success.

CFOs must be more creative in how they formulate their strategy and measure success. Other factors important to the finance team, like efficiency and performance, will fall into place once a successful strategy is in place.

Michelle Heath | Thu Nov 08 2018 16:44:07 GMT-0500 (EST)

PBCS & EPBCS Updates (November 2018): Custom Function in Calculation Manager, New Version of Smart View for Office & More
https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-november-updates

The November updates for Oracle's Planning & Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS) are here! This blog post outlines several new features, including the custom function in Calculation Manager and the new version of Smart View for Office.

The monthly update for PBCS and EPBCS will occur on Friday, November 16 during your normal daily maintenance window.

Michelle Heath | Wed Nov 07 2018 15:54:41 GMT-0500 (EST)

FCCS Updates (November 2018): Running Intercompany Reports, Improved Search for Forms, Dashboards, Infolets & More
https://www.us-analytics.com/hyperionblog/fccs-updates-november-2018

The November updates for Oracle's Financial Consolidation and Close Cloud Service (FCCS) are here! This blog post outlines new features, including running intercompany reports, as well as improved search for forms, dashboards, and infolets.

The monthly update for FCCS will occur on Friday, November 16 during your normal daily maintenance window.

Michelle Heath | Wed Nov 07 2018 14:54:36 GMT-0500 (EST)

Oracle 18c New Feature Pluggable Database Switchover
https://gavinsoorma.com/2018/11/oracle-18c-new-feature-pluggable-database-switchover/

In releases prior to Oracle 18c, we could enable Data Guard for a Multitenant Container/Pluggable Database environment, but we were restricted when it came to performing a switchover or failover: it had to be performed at the Container Database (CDB) level. This meant that a database role reversal would affect each and every PDB hosted by the CDB undergoing a Data Guard switchover or failover.

In Oracle 12c Release 2, a new feature called refreshable clone PDB was introduced. A refreshable clone PDB is a read-only clone that can periodically synchronize itself with its source PDB. This synchronization can be configured to happen manually or automatically, based on a predefined refresh interval.

Oracle 18c adds a new feature, built on the refreshable clone mechanism, that enables us to perform a switchover at the individual PDB level. In other words, we now get high availability at the PDB level within the CDB.

We can now issue a command in Oracle 18c like this:

    SQL> alter pluggable database orclpdb1
         refresh mode manual
         from orclpdb1@cdb2_link
         switchover;

After the switchover completes, the original source PDB becomes the refreshable clone PDB (which can only be opened in READ ONLY mode), while the original refreshable clone PDB is now open in read/write mode, functioning as the source PDB.

How to perform a Switchover for a Pluggable Database (Members Only): https://gavinsoorma.com/2018/11/oracle-18c-pluggable_database-switchover/

Gavin Soorma | Tue Nov 06 2018 22:28:06 GMT-0500 (EST)
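A minimal sketch of how the two steps fit together, using the names from the post (PDB orclpdb1, database link cdb2_link); the CDB SID, credentials, and the create-clone statement shown in the comments are placeholder assumptions rather than commands taken from the article:

```bash
#!/bin/bash
# Hypothetical sketch: issue the 18c PDB switchover from the CDB that currently
# hosts the refreshable clone. Only orclpdb1 and cdb2_link come from the post.
export ORACLE_SID=CDB1

sqlplus -s / as sysdba <<'EOF'
-- The refreshable clone would originally have been created with something like:
--   create pluggable database orclpdb1 from orclpdb1@cdb2_link refresh mode manual;

-- Reverse the roles of the source PDB and the refreshable clone.
alter pluggable database orclpdb1
  refresh mode manual
  from orclpdb1@cdb2_link
  switchover;

-- The local copy should now be the read/write source; the former source on the
-- remote CDB becomes the read-only refreshable clone.
alter pluggable database orclpdb1 open;
select name, open_mode from v$pdbs where name = 'ORCLPDB1';
EOF
```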
ARCS Updates (November 2018): Export of Adjustments and Transactions as Journal Entries, REST APIs for Managing Users & More
https://www.us-analytics.com/hyperionblog/arcs-product-update-november-2018

The November updates for Oracle's Account Reconciliation Cloud Service (ARCS) have arrived. In this blog post, we'll outline new features in ARCS, including export of adjustments and transactions as journal entries and REST APIs for managing users.

We'll let you know any time there are updates to ARCS or any other Oracle EPM cloud products. Check the US-Analytics Oracle EPM & BI Blog (https://www.us-analytics.com/hyperionblog) every month.

The monthly update for Oracle ARCS will occur on Friday, November 16 during your normal daily maintenance window.

Michelle Heath | Tue Nov 06 2018 17:52:12 GMT-0500 (EST)

5 OpenWorld Takeaways for EPM & BI
https://www.us-analytics.com/hyperionblog/openworld-takeaways-for-epm-bi

If you didn't make it to OpenWorld this year, we've got you covered with the key takeaways from the EPM and BI sessions.

In this blog post, we'll first look at the EPM highlights, including:
- An updated roadmap for on-prem EPM Hyperion solutions
- Moving from Hyperion to the Oracle EPM Cloud

Then we'll talk about BI highlights, including:
- Getting started with OAC
- OAC Essbase capabilities and roadmap
- Active Directory & single sign-on with OAC

Michelle Heath | Tue Nov 06 2018 15:35:19 GMT-0500 (EST)

Exciting News for Unify
https://www.rittmanmead.com/blog/2018/11/exciting-news-for-unify/

Announcement: Unify for Free

We are excited to announce that we are making Unify available for free. To get started, send an email to unify@rittmanmead.com; we will ask you to complete a short set of qualifying questions, then we can give you a demo, provide a product key, and share a link to download the latest version.

The free version of Unify comes with no support obligations or SLAs. On sign-up, we will give you the option to join our Unify Slack channel, through which you can raise issues and ask for help.

If you'd like a supported version, we have built a special Expert Service Desk package (https://www.rittmanmead.com/expert-service-desk/) for Unify which covers:
- Unify support: how-tos, bugs, and fixes
- Assistance with configuration issues for OBIEE or Tableau
- Assistance with user/role issues within OBIEE
- Ad hoc support queries relating to OBIEE, Tableau, and Unify

Beyond supporting Unify, the Expert Service Desk package can also be used to provide technical support and expert services for your entire BI and analytics platform, including:
- An agreed number of hours per month for technical support of Oracle's and Tableau's BI and DI tools
- Advisory, strategic, and roadmap planning for your platform
- Use of any other Rittman Mead accelerators, including support for our other open-source tools and DevOps Developer Toolkits
- Access to Rittman Mead's On Demand Training (https://www.rittmanmead.com/on-demand-training/)

New Release: Unify 10.0.17

10.0.17 is the new version of Unify. This release doesn't change how Unify looks and feels, but there are some new features and improvements under the hood.

The most important one is that you can now get more data out of OBIEE using fewer resources. While we are not encouraging you to download all your data from OBIEE to Tableau all the time (please use filters, aggregation, etc.), we realise that downloading large datasets is sometimes required. With the new version you can do it: hundreds of thousands of rows can be retrieved without causing your Unify host to grind to a halt.

The second feature we would like to highlight is that you can now use OBIEE instances configured with self-signed SSL certificates. Self-signed certificates are often used for internal systems, and Unify now supports such configurations.

The final notable change is that you can now run Unify Server as a Windows service. It wasn't impossible to run Unify Server at system startup before, but now it is even easier.

And, of course, we fixed some bugs and enhanced the logging. We would like to see our software function without bugs, but sometimes they just happen, and when they do, you will get a better explanation of what happened.

On most platforms, Unify Desktop should auto-update; if it doesn't, please download it manually.

Unify is 100% owned and maintained by Rittman Mead Consulting Ltd, and while this announcement makes it available for free, all copies must be used under an End User Licence Agreement (EULA) with Rittman Mead Consulting Ltd.

Jon Mead | Tue Nov 06 2018 03:37:40 GMT-0500 (EST)

Oracle 18c New Feature Read-Only Oracle Homes
https://gavinsoorma.com/2018/11/oracle-18c-new-feature-read-only-oracle-homes/

One of the new features of Oracle Database 18c is that we can now configure an Oracle Home in read-only mode.

In a read-only Oracle home, all the configuration files, such as the database init.ora, password files, listener.ora, and tnsnames.ora, as well as related log files, reside outside of the read-only Oracle home.

This feature allows us to use the read-only Oracle home as a "master" or "gold" software image that can be distributed across multiple servers. It enables mass provisioning and also simplifies the patching process where hundreds of target servers potentially need a patch applied: we patch the "master" read-only Oracle Home once, and this image can then be deployed on multiple target servers seamlessly.

To configure a read-only Oracle Home, we do a software-only 18c installation, that is, we do not create a database as part of the software installation. We then run the command roohctl -enable, which configures the Oracle Home in read-only mode.

In addition to the ORACLE_HOME and ORACLE_BASE variables, there is a new variable called ORACLE_BASE_CONFIG, and alongside the oratab file there is an additional file called orabasetab.

So in an 18c read-only Oracle Home, the dbs directory, for example, is no longer located in its traditional place under $ORACLE_HOME/dbs; it is now located under $ORACLE_BASE_CONFIG, which takes the form of a directory structure called $ORACLE_BASE/homes/<ORACLE_HOME_NAME>.

Read more about how to configure an Oracle 18c read-only Oracle Home (Members Only): https://gavinsoorma.com/2018/11/how-to-configure-an-oracle-18c-read-only-oracle-home/

Gavin Soorma | Sun Nov 04 2018 20:29:04 GMT-0500 (EST)
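A minimal sketch of that sequence, assuming an 18c software-only installation is already in place; the ORACLE_BASE/ORACLE_HOME paths and the orabasetab location are placeholder assumptions rather than details from the post:

```bash
#!/bin/bash
# Sketch only: enable a read-only Oracle Home after an 18c software-only install.
# Paths are assumptions; adjust to your environment.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/18.0.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

# Switch the home to read-only mode (run before creating any database).
roohctl -enable

# orabasetab records the read-only setting for the home; its exact location is
# not given in the post (assumed here to sit under $ORACLE_HOME/install).
cat $ORACLE_HOME/install/orabasetab

# Configuration files such as init.ora now live under
# $ORACLE_BASE/homes/<ORACLE_HOME_NAME>/dbs instead of $ORACLE_HOME/dbs.
ls -l $ORACLE_BASE/homes/*/dbs 2>/dev/null
```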
How to configure an Oracle 18c read-only Oracle Home
https://gavinsoorma.com/2018/11/how-to-configure-an-oracle-18c-read-only-oracle-home/

Members-only content; login required to view.

Gavin Soorma | Sun Nov 04 2018 20:23:14 GMT-0500 (EST)

Oracle 18c Pluggable Database Switchover
https://gavinsoorma.com/2018/11/oracle-18c-pluggable_database-switchover/

Members-only content; login required to view.

Gavin Soorma | Sun Nov 04 2018 03:18:47 GMT-0500 (EST)

New Emerging Technologies Track at ODTUG Kscope19
https://www.odtug.com/p/bl/et/blogaid=836&source=1

New to ODTUG Kscope19, the Emerging Technologies track offers ODTUG Kscope attendees the opportunity to learn about the latest and greatest technologies making a mark on the world.

ODTUG | Thu Nov 01 2018 14:31:43 GMT-0400 (EDT)

How to install the Oracle 18c RPM-based database software
https://gavinsoorma.com/2018/11/how-to-install-the-oracle-18c-rpm-based-database-software/

Members-only content; login required to view.

Gavin Soorma | Thu Nov 01 2018 02:39:34 GMT-0400 (EDT)

Oracle 18c RPM Based Software Installation
https://gavinsoorma.com/2018/11/oracle-18c-rpm-based-software-installation/

One of the (many) new features in Oracle Database 18c enables us to install the single-instance Oracle Database software (no support for Grid Infrastructure as yet) using an RPM package.

So, as part of provisioning a new Linux server, the system administrator can also deliver the Oracle 18c software pre-installed and ready to be used by the DBA.

Note that the RPM-based Oracle Database installation is not available for Standard Edition 2; Standard Edition 2 support is planned for the next release, 19c.

The naming convention for RPM packages is name-version-release.architecture.rpm. Currently the RPM for 18c is oracle-database-ee-18c-1.0-1.x86_64.rpm, so we can see that this RPM is for 18c Enterprise Edition (ee-18c), version 1.0, package release 1, and platform architecture x86_64.

To install the 18c database software we do the following:
- Connect as root, then download and install the 18c pre-installation RPM using the yum install command.
- Download the 18c Oracle Database RPM-based installation software from OTN or the Oracle Software Delivery Cloud portal (eDelivery).
- Install the database software using the yum localinstall command.

Once the 18c software has been installed, we can run a script as root (/etc/init.d/oracledb_ORCLCDB-18c configure) which will automatically create a Container Database (ORCLCDB) with a Pluggable Database (ORCLPDB1), as well as configure and start the listener.

How to perform an RPM-based Oracle 18c Software Installation and execute the oracledb_ORCLCDB-18c configure script (Members Only): https://gavinsoorma.com/2018/11/how-to-install-the-oracle-18c-rpm-based-database-software/

Gavin Soorma | Thu Nov 01 2018 02:12:54 GMT-0400 (EDT)
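A condensed sketch of those installation steps, run as root; the pre-installation package name and the local path to the downloaded RPM are assumptions for illustration rather than details taken from the post:

```bash
#!/bin/bash
# Sketch of the RPM-based Oracle 18c installation flow; run as root.

# 1. Install the 18c pre-installation RPM (package name assumed; it prepares
#    kernel parameters, users, and groups).
yum -y install oracle-database-preinstall-18c

# 2. Install the database software RPM downloaded from OTN / eDelivery
#    (local path assumed).
yum -y localinstall /tmp/oracle-database-ee-18c-1.0-1.x86_64.rpm

# 3. Optionally create and start a container database (ORCLCDB) with a
#    pluggable database (ORCLPDB1) and a listener, using the supplied script.
/etc/init.d/oracledb_ORCLCDB-18c configure
```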
ODTUG October News
https://www.odtug.com/p/bl/et/blogaid=835&source=1

Announcing the 2018–2019 ODTUG Leadership Program Class! ODTUG is pleased to announce its sixth ODTUG Leadership Program, a program dedicated to enhancing the leadership skills of ODTUG members.

ODTUG | Wed Oct 31 2018 10:25:26 GMT-0400 (EDT)

Looker Join 2018
https://blog.redpillanalytics.com/looker-join-2018-5f8b169d7353

As a vendor, it's rare to leave a conference feeling totally energized and fulfilled; thankfully, Looker Join (https://looker.com/events/join) broke the mold for Red Pill Analytics. Spoiler alert: we'll definitely be going back next year.

[Photo: Mike Jelen addressing the @LookerData Partner Summit on how @RedPillA provides real-time business analytics to customers.]

The week kicked off at the Partner Summit, where Looker executives provided sneak previews of Looker 6.0 and a new branding campaign, and outlined their plans for the future. The highlight of the event was Red Pill Analytics being recognized for excelling at providing real-time business analytics; Red Pill was one of four partners acknowledged for outstanding consulting.

The conference itself opened with a well-attended, lively welcome reception where attendees happily networked with colleagues and vendors. The top-notch event continued with two days of sessions, mostly focused on hands-on labs, case studies, and roadmaps.

The partner pavilion was packed the entire time. It was refreshing to speak to people in different industries and understand their unique data problems in a relaxed and genuinely interesting atmosphere.

It would be a disservice to leave out the stunning location of the conference. The Palace of Fine Arts in San Francisco is truly spectacular. Throughout the unique and massive grounds of the event, the Looker team spared no expense in branding and keeping the feel of the conference fresh. The signage was eye-catching, and the common spaces felt more like an urban coffee shop than a dull hotel space; this cool vibe undoubtedly contributed to the likability of the conference. The endless snacks, soda, and coffee didn't hurt either. Is there anything better than the mix of inspirational data geeks, a gorgeous venue, and an abundance of food and drink?

[Photo: The Palace of Fine Arts in San Francisco.]

We're already making plans to return next year... and may even be fighting for seats!

[Photo: The Red Pill team representing at the booth.]

Explore our blogs (https://blog.redpillanalytics.com/) for more in-depth details on Looker (https://looker.com/) and Red Pill Analytics. And don't forget to visit our website (http://redpillanalytics.com/) for our unique analytics offerings such as The Glimpse Initiative, or just stop by and say hi on any of our social media channels.

Looker Join 2018 was originally published in Red Pill Analytics on Medium.

Lauren Prezby | Mon Oct 29 2018 13:20:40 GMT-0400 (EDT)

How Can I Help?
http://beyond-just-data.blogspot.com/2018/10/how-can-i-help.html

I recently ran for the ODTUG Board of Directors. There were 4 open positions and 9 candidates on the ballot.

This morning I received the call that I did not make the cut. While this was disappointing to hear, I told Mike Riley, the ODTUG Secretary, that I will continue to look for other opportunities to volunteer and add value to the organization.

This evening, while catching up on a TV series I have taken an interest in, I heard the main character ask "How can I help?" For those of you who watch network TV, you will recognize this quote from the main character, Max, in "New Amsterdam."

So, for my readers who belong to a professional organization, or for that matter any organization, ask yourself: "How can I help?"

Most professional organizations run on volunteers. If they are anything like the ones I belong to, IOUG and ODTUG, they are always asking for people to help.

So ask yourself: "How can I help?"

Do you have a couple of hours a month to volunteer to be on a committee?

Do you have experience from a recent project or implementation that you could share? Write an article for the newsletter or conduct an educational webcast. Better yet, submit an abstract to speak at a conference; believe me, once you have done it, public speaking is not that scary.

So, for all you professionals out there who want to grow in your profession, ask yourself: "How can I help?"

Please consider volunteering within the professional organization you are a member of.

Wayne D. Van Sluys | Thu Oct 25 2018 19:33:00 GMT-0400 (EDT)

Announcing the 2019-2020 ODTUG Board of Directors
https://www.odtug.com/p/bl/et/blogaid=834&source=1

Congratulations to the newly elected 2019–2020 ODTUG Board of Directors!

ODTUG | Thu Oct 25 2018 12:47:57 GMT-0400 (EDT)

Tuna 200 – 2018
https://realtrigeek.com/2018/10/22/tuna-200-2018/

99% of my posts on this site are Oracle related. This is the 1%, where the "tri" comes in, albeit indirectly.

One of the items on my bucket list was to run a long-distance relay. The local race that fits this bill is the Tuna 200 (http://www.tuna200.com), where you run from Raleigh to Atlantic Beach in North Carolina for a total of 203.94 miles. You can have up to 12 people on a team to cover this distance... or fewer. I never attempted to put together a team because:

1. I really wanted 12 people because I don't want to run more than a marathon.
2. It's hard to get 11 other people committed to 36 hours of running.
3. The logistics of getting vans, people, and stops together is a short-term, full-time job.

When I saw a Facebook post on our neighborhood page asking if anyone was interested in joining a team for the Tuna 200, I jumped! I could mark this off my list and, hopefully, meet new running friends along the way. I put the option out to the rest of my teammates at The Endurance Edge (http://www.theenduranceedge.com) and was happy that my friend and teammate, Cynthia, joined in the fight. (BTW, Cynthia is an amazing woman who encourages everyone and is a BLAST to be with!)

We met a couple weeks ago to 1) meet and 2) go through the details of our run/race. I was THRILLED to learn that they were a "for fun" team versus a competitive team. I did the math and was willing to run my share of the 203.94 miles split across 12 people, no matter how the breakdown actually occurred. Either way, the more miles I ran, the slower my average pace would be at the end. That lunch showed me I was with "my people."

Fast forward to race day: we met at my neighbor/teammate's house at 5 AM. We were to take off on our journey at 5:45 AM. Surprisingly, I got a great night's sleep, even though it was only 6.5 hours. I knew I would not get many more hours over the next 36-40 hours, so I was happy to get quality sleep.

We were separated into 2 vans of 6 people. There were 36 legs, so each of us was responsible for 3 legs, roughly every 12 hours or so. Since our van was "off" for the first 6 legs, we cheered on our teammates (since we weren't sleeping that morning anyway). My first leg started around 2:50 PM. I had a GREAT first leg and averaged faster than I planned. The sun was shining and I was thrilled to be part of a team! While we ran, the other van rested up for their legs.

Our van's second leg started around 11 PM, so once we finished our first legs and drove to our "rest stop," it was about 5:30 PM. Along the route, there are *many* churches that open their doors for restrooms, refreshments, and rest. It's like nothing I have ever seen... and I grew up in the church! Snow Hill Original Free Will Baptist Church (https://www.snowhillchurch.org/) was beyond welcoming. They had a full dinner including soups, sandwiches, drinks, and desserts. I have a severe food allergy, so when I asked about the chili ingredients, I got to talk directly to the cook. Lemme tell you, my mom's chili is my favorite, but this lady's came in a VERY CLOSE second! I had two bowls! We also camped out until 10 PM in one of the Sunday School rooms. I find it hard to sleep, so I pre-loaded my iPad with Netflix shows and watched those to recover a bit before our van's night run. I truly can't say enough about this church's hospitality. It was like nothing I had experienced growing up or as an adult.

My second run started around 3 AM. I had Sheryl Crow's "Everyday Is a Winding Road" in my head because I was living on coffee and nicotine... minus the nicotine. Snow Hill filled my Hydro Flask with black coffee for the night and I lived on that energy! I made a comment on Strava (https://www.strava.com/athletes/7344484) that I had never run that late, or early, ever. Even during basic training! I loved my team... and one of the things I loved (other than the amazing individuals) was that during the night, there was someone to ride a bike alongside you as you ran. I felt safe and I could provide safety to others. What a blessing! I held a consistent and solid pace for my second run while the US slept.

After our team's second leg, we crashed at Bethel Baptist Church (https://www.facebook.com/BethelBaptistJonestown/) in Pink Hill, NC. I use the term "crash" literally. I am a terrible sleeper, but I slept here. Again, I had my iPad loaded with my "sleep" shows and fell asleep within 15 minutes of laying down my sleeping bag in the sanctuary of the church. There were people spread out on and below each pew. I tend to have night sweats (thank you, hormones), so I, truly, slept next to an AC register. I woke up around 7:30 AM knowing I had to get up and moving around 8:45 AM. I had started watching "The Haunting of Hill House" the night before and watched the second episode before getting up for my van's third leg. This church had a great pancake breakfast (so I hear, but I could not partake due to food allergies), but I opted for straight coffee instead. Again, what fantastic hospitality this church offered the Tuna 200 runners!

Our van's third leg was... hurtful. We were tired, unrested, unrecovered, and delirious. Each of us had heart and completed our prescribed run, although slower than our previous two runs. Because we were doing it for fun, nobody cared. I cannot stress to you how supportive this team, both vans, was of the overall goal of finishing the race. I struggled in my last 4.5 miles. My legs hurt, I was dog-tired, and it was crappy running conditions. But *each of us* completed our routes. It was SO much fun encouraging others and being encouraged.

After both of my fall half Ironmans got canceled due to Hurricane Florence, I was bummed out and burnt out on training. This run, because it's team-based and fun, reminded me why I do endurance events. They are completed by a certain breed of people... MY PEOPLE. Time doesn't matter, finishing does. You are only as strong as your weakest link, and we had no weak links because the support was so strong. I had long forgotten the fun and teamwork involved in endurance events. This event reminded me why I do them... this is what I do for fun. And for the first time in a LONG time, I had FUN.

I can't stress enough to my teammates how much this weekend meant to me. From a training perspective. From a community perspective. From a human perspective (because everyone poops!). From a physical, mental, and spiritual perspective. My soul is happy. I'm ignited for my December half Ironman.

A quote I said to my van: "I don't believe in coincidences." Each person said that they did not either. We were supposed to be together, for one reason or another.

I am thankful for the weekend I had. The 40+ hours without sleep. The running when I should have been sleeping. The complete strangers I met who became friends. The real-life stories we shared with each other. ...This is why I "race." Not to win, but to get ahead. To get ahead with humanity. To get ahead with spirituality. To get ahead with life. God's at the helm, we just need to follow our path.

Sarah Craynon Zumbrum | Mon Oct 22 2018 19:23:31 GMT-0400 (EDT)
Essbase: The Cloud vs. On-Prem
https://www.us-analytics.com/hyperionblog/essbase-cloud-vs-on-prem

Finance and budgeting teams have long been fans of Essbase: being able to store aggregated values gives users the flexibility to efficiently report and run scenarios. So, when we talk about Essbase, Hyperion Planning is often part of the conversation, too.

But is Essbase in the cloud (https://www.us-analytics.com/hyperionblog/faq-oracle-essbase-cloud) going to change the way we talk about and consider Essbase? In this blog post, we'll look at some of the major differences between on-prem and cloud-based Essbase, and how having this tool in the cloud can help your organization in ways you may not have considered yet.

Michelle Heath | Fri Oct 19 2018 11:41:34 GMT-0400 (EDT)

FCCS Updates (October 2018): New Consolidation Audit Trail, Extended Dimensionality, and More
https://www.us-analytics.com/hyperionblog/fccs-updates-october-2018

The October updates for Oracle's Financial Consolidation and Close Cloud Service (FCCS) are here! This blog post outlines new features, including a new consolidation audit trail, extended dimensionality, and more.

The monthly update for FCCS will occur on Friday, October 19 during your normal daily maintenance window.

Michelle Heath | Fri Oct 19 2018 11:40:58 GMT-0400 (EDT)

PBCS & EPBCS Updates (October 2018): New Functions in Calculation Manager, Support for Master-Detail Relationships Between Forms in a Dashboard, and More
https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-october-updates

The October updates for Oracle's Planning & Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS) are here! This blog post outlines several new features, including new functions in Calculation Manager, support for master-detail relationships between forms in a dashboard, and more.

The monthly update for PBCS and EPBCS will occur on Friday, October 19 during your normal daily maintenance window.

Michelle Heath | Fri Oct 19 2018 11:40:45 GMT-0400 (EDT)

ARCS Updates (October 2018): New EPM Automate Utility Version, Considerations for the Academy, and More
https://www.us-analytics.com/hyperionblog/arcs-product-update-october-2018

The October updates for Oracle's Account Reconciliation Cloud Service (ARCS) have arrived. In this blog post, we'll outline new features in ARCS, including a new EPM Automate utility version and considerations for the Academy.

We'll let you know any time there are updates to ARCS or any other Oracle EPM cloud products. Check the US-Analytics Oracle EPM & BI Blog (https://www.us-analytics.com/hyperionblog) every month.

The monthly update for Oracle ARCS will occur on Friday, October 19 during your normal daily maintenance window.

Michelle Heath | Fri Oct 19 2018 11:40:32 GMT-0400 (EDT)

OBIEE Development: Merging the RPD with Git (Free Open-Source Tool)
https://www.us-analytics.com/hyperionblog/merging-rpd-with-git

US-Analytics RPD Merge Script: a "View on GitHub" link is provided in the original post.

Nicholas Padgett | Fri Oct 19 2018 11:40:19 GMT-0400 (EDT)
Code One, Here we Come
https://blog.redpillanalytics.com/code-one-here-we-come-267912352e16

It's almost time for that exciting fall event of the year: no, not Pumpkin Spice lattes, but Oracle OpenWorld 2018. Red Pill will be heading to San Francisco from October 22nd-25th for Oracle Code One; just enough time to recharge from the sensation of Looker Join '18. Code One is a parallel conference to OpenWorld focused on, you guessed it, coding and developers. Join the conversation on a range of topics such as Java, Go, Rust, Python, chatbots, and blockchain technology. There's so much going on at both OpenWorld and Oracle Code that we've done you a favor and pulled some highlights.

Red Pill Events

Architecture Live: Designing an Analytics Platform for the Big Data Era
Monday, Oct 22, 9:00 a.m. to 9:45 a.m. | Moscone West, Room 2001
Stewart Bryson, Co-founder and CEO, Red Pill Analytics
Jean-Pierre Dijcks, Master Product Manager, Oracle

Don't miss the "Architecture Live" experience, led by Stewart Bryson, CEO of Red Pill Analytics, and Jean-Pierre Dijcks, Master Product Manager at Oracle. In this interactive session, you'll witness Stewart and Jean-Pierre digitally illustrating data-driven architectures live, with input and feedback from the audience. Kafka, Lambda, and streaming analytics are all covered. You'll learn what these words mean and, more importantly, how they affect the choices we make in building an enterprise architecture. With Oracle's information management reference architecture as the backdrop, the presentation clarifies and delineates the different components involved in delivering big data, fast data, and the gray area in between. The "Architecture Live" experience will be fun and different, and everyone involved will learn something along the way.

Machine Learning Live: Let's Build a Taxi Fare Predictor
Monday, Oct 22, 4:00 p.m. to 4:45 p.m. | Moscone West, Room 2006
Stewart Bryson, Co-founder and CEO, Red Pill Analytics
Bjoern Rost, Google Cloud Solutions Architect, The Pythian Group Inc.

Our very own Stewart Bryson, co-founder and CEO of Red Pill Analytics, will be presenting an Oracle Code session with Bjoern Rost, Google Cloud Solutions Architect at Pythian. Join them as they explore the expansion of use cases made possible through machine learning and traditional analytics. Watch as Stewart and Bjoern walk through the process of creating a taxi cab fare predictor using publicly available data and machine learning. They'll explore data sets, use traditional analytics to make predictions, and train their model with linear regression algorithms to sweep through and beat those predictions. If you want to experience machine learning live, you won't want to miss this.

The "Can't Miss" of OpenWorld 2018

Connect with some of the most innovative minds in the world at the Code One Fireside Chat:
- Doug Cutting, co-creator of Hadoop
- Neha Narkhede, co-creator of Apache Kafka
- Charles Nutter, co-leader of JRuby
- Graeme Rocher, creator of Grails
- Guido van Rossum, creator of Python

A sample of featured speakers and keynotes:
- Larry Ellison, Executive Chairman and Chief Technology Officer at Oracle
- Brian Greene, physicist and string theorist
- Dr. Rand Hindi, Chief Executive Officer, Snips
- Sophie Hackford, Chief Executive Officer at 1715 Labs
- Senator Barbara Boxer, Senate
- Sir John McLeod Scarlett KCMG OBE, former Chief of the British Secret Intelligence Service
- Jeh Johnson, former Head of the Department of Homeland Security

And don't forget to wind down your week with concert fun at Oracle CloudFest:
October 24th, 6:30-11 PM, AT&T Park

There's no excuse to miss this one; it's only a 15-minute walk from Moscone Center. Rock out to Beck, Portugal. The Man, and The Bleachers. You'll need tickets unless you have a full conference pass.

Are you attending Oracle OpenWorld and want to meet up? Contact us (https://redpillanalytics.com/contact/), or check out our website (https://redpillanalytics.com/events/) for more info.

Code One, Here we Come was originally published in Red Pill Analytics on Medium.

Renee Miller | Thu Oct 18 2018 15:18:27 GMT-0400 (EDT)
Connect to DV Datasets and explore many more new features in OAC / OAAC 18.3.3.0
https://blogs.oracle.com/xmlpublisher/connect-to-dv-datasets-and-explore-many-more-new-features-in-oac-oaac-18330

Greetings!

Oracle Analytics Cloud (OAC) and Oracle Autonomous Analytics Cloud (OAAC) version 18.3.3.0 (also known as V5) were released last month. A rich set of new features has been introduced in this release across the different products (with product version 12.2.5.0.0) in the suite. You can check all the new features of OAC / OAAC in this video: https://www.youtube.com/watch?v=o6UB1MNf3_4.

The focus for BI Publisher on OAC / OAAC in this release has been to complement Data Visualization for pixel-perfect reporting, to optimize performance, and to add self-service capabilities. Here is a list of the new features added in this release.

BI Publisher New Features in OAC V5.0

1. DV Datasets. You can now leverage the variety of data sources covered by Data Visualization data sets, including cloud data sources such as Amazon Redshift and Autonomous Data Warehouse Cloud; big data sources such as Spark, Impala, and Hive; and application data sources such as Salesforce and Oracle Applications. BI Publisher complements DV by creating pixel-perfect reports from DV datasets. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acpmr/creating-data-set-using-data-visualization-data-set.html; demo video: https://youtu.be/pzScLLViZas

2. Upload Center. You can now upload all files for custom configuration, such as fonts, ICC profiles, private keys, and digital signatures, from the Upload Center, a self-service feature available on the Administration page. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acabi/configuring-system-maintenance-properties.html

3. Validate Data Model. Report authors can now validate a data model before deploying the report in a production environment. This helps during custom data model creation, where data sets, LOVs, and bursting queries can be validated against standard guidelines to avoid any undesired performance impact on the report server. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acpmr/validating-data-models.html

4. Skip unused data sets. When a data model contains multiple data sets for different layouts, each layout might not use all the data sets defined in the data model. Report authors can now set a data model property to skip the execution of the data sets a layout does not use. Setting this property reduces data extraction time and memory usage and improves overall report performance. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acpmr/data-model-properties.html

5. Apply Digital Signature to PDF documents. Digital signatures are widely used in on-prem deployments, and this capability has now been added to OAC: a digital signature can be applied to PDF output. Signatures can be uploaded from the Upload Center, the required signature selected in the Security Center, and then applied to PDF outputs by configuring attributes under report properties or run-time properties. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acabi/applying-digital-signature-pdf-documents.html; demo video: https://youtu.be/v5rB0WuL6k0

6. Password-protect MS Office outputs (DocX, PPTX, XLSX). Protect your MS Office output files with a password defined at the report or server level. See the PPTX, DocX, and Excel 2007 output properties in the run-time configuration documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acabi/defining-runtime-configurations.html

7. Deliver reports in compressed format. Select this option to compress the output into a zip file before delivery via email, FTP, etc. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acubi/set-output-options.html

8. Request read-receipt and delivery confirmation notifications. You can opt to get delivery and read-receipt notifications for scheduled jobs delivered via email. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acubi/set-output-options.html

9. Scalability mode for Excel templates to handle large data sizes. You can now enable scalability mode for an Excel template at the system, report, or template level. When this attribute is set to true, the engine flushes memory after a threshold value, and when the data exceeds 65K rows it rolls the data over into multiple sheets. Documentation: https://docs.oracle.com/en/cloud/paas/analytics-cloud/acpmr/understanding-excel-template-concepts.html

Stay tuned for more updates on features and functionality! Happy BIP'ing...

Pradeep Sharma | Wed Oct 17 2018 06:26:15 GMT-0400 (EDT)
Some time ago I faced a task which allowed me to continue Robin's cycle of posts and show you how to use <a href="https://www.wireshark.org/">Wireshark</a> to understand how a certain Oracle tool works and how to search for the solution of a problem more effectively.</p> <p>To be clear, this blog is not about the issue itself. I could simply write a tweet like &quot;If you faced issue A then patch B solves it&quot;. The idea of this blog is to demonstrate how you can use somewhat unexpected tools and get things done.</p> <p>Obviously, my way of doing things is not the only one. If you are good at searching My Oracle Support, you can possibly do it even faster, but what is good about my way (except that it is mine, which is enough for me) is that it doesn't involve uneducated guessing. I make an observation and get a clear answer.</p> <p>Most of my blogs have disclaimers. This one is no exception, though its disclaimer is rather small. There is still no silver bullet: this won't work for every single problem in OBIEE, and I never claimed it would.</p> <p>Now, let's get started.</p> <h2 id="thetask">The Task</h2> <p>The problem was the following: a client was upgrading its OBIEE system from 11g to 12c and obviously wanted to test for regression, making sure that the upgraded system worked exactly the same as the old one. Manual comparison wasn't an option since they had hundreds or even thousands of analyses and dashboards, so <a href="https://www.oracle.com/technetwork/middleware/bi/downloads/bi-bvt-download-3587672.html">Oracle Baseline Validation Tool</a> (usually called just BVT) was the first candidate as a solution to automate the checks.</p> <p>Using BVT is quite simple:</p> <ul> <li>Create a baseline for the old system.</li> <li>Upgrade</li> <li>Create a new baseline</li> <li>Compare them</li> <li>???</li> <li>Profit! Congratulations. You are ready to go live.</li> </ul> <p>Right? Well, almost. The problem that we faced was that the BVT Dashboards plugin for 11g (a very old 11.1.1.7.<em>something</em>) gave exactly what was expected. But for 12c (12.2.1.<em>something</em>) we got all numbers with a decimal point even though all analyses had a &quot;no decimal point&quot; format. So the first feeling we got at this point was that BVT doesn't work well for 12c, and that was somewhat disappointing.</p> <details> <summary>SPOILER</summary> That wasn't true. </details> <br> <p>I made a simple dashboard demonstrating the issue.</p> <h3 id="obiee11g">OBIEE 11g</h3> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/11g-dash-vs-bvt.png" alt="11g-dash-vs-bvt"><br> Measure values in the XML produced by BVT are exactly as on the dashboard. Looks good.</p> <h3 id="obiee12c">OBIEE 12c</h3> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/12c-dash-vs-bvt-1.png" alt="12c-dash-vs-bvt-1"><br> The dashboard looks good, but values in the XML have decimal digits.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/failed.PNG" alt="failed"></p> <p>As you can see, the analyses are the same, or at least they look very similar, but the XMLs produced by BVT aren't. From a regression point of view this dashboard must get a &quot;DASHBOARDS PASSED&quot; result, but it got &quot;DASHBOARDS DIFFERENT&quot;.</p> <p>Reading the documentation gave us no clear explanation for this behaviour. We had to go deeper and understand what actually caused it. Is it BVT screwing up the data it gets from 12c? Well, that is a highly improbable theory. 
Decimals were not simply present in the result but they were correct. Correct as in &quot;the same as stored in the database&quot;, so we had to reject this theory.<br> Or maybe the problem is that BVT works differently with 11g and 12c? Well, this looks more plausible. A few years have passed since 11.1.1.7 was released and it would not be too surprising if the old version and the modern one had different APIs used by BVT, causing this problem. Or maybe the problem is that 12c itself ignores formatting settings. Let's find out.</p> <h2 id="thetool">The Tool</h2> <p>Neither BVT nor OBIEE logs gave us any insights. From every point of view, everything was working fine. Except that we were getting a 100% mismatch between the source and the target. My hypothesis was that BVT worked differently with OBIEE 11g and 12c. How can I check this? Decompiling the tool and reading its code would possibly give me the answer, but it is not legal. And even if it were legal, the latest BVT is more than 160 megabytes in size, which would give an insane amount of code to read, especially considering the fact that I don't actually know what I'm looking for. Not an option. But BVT talks to OBIEE via the network, right? Therefore we can intercept the network traffic and read it. Shall we?</p> <p>There are a lot of ways to do it. I work with OBIEE quite a lot and Windows is the obvious choice for my platform. And hence the obvious tool for me was <a href="https://www.wireshark.org/">Wireshark</a>.</p> <blockquote> <p>Wireshark is the world’s foremost and widely-used network protocol analyzer. It lets you see what’s happening on your network at a microscopic level and is the de facto (and often de jure) standard across many commercial and non-profit enterprises, government agencies, and educational institutions. Wireshark development thrives thanks to the volunteer contributions of networking experts around the globe and is the continuation of a project started by Gerald Combs in 1998.</p> </blockquote> <p>What this &quot;About&quot; doesn't say is that Wireshark is open-source and free. Which is quite nice, I think.</p> <h2 id="installationdetails">Installation Details</h2> <p>I'm not going to go into too many details about the installation process. It is quite simple and straightforward. Keep all the defaults unless you know what you are doing, reboot if asked and you are fine.</p> <p>If you've never used Wireshark or similar tools, the main question would be &quot;Where to install it?&quot;. The answer is pretty simple - install it on your workstation, the same workstation where BVT is installed. We're going to intercept our own traffic, not someone else's.</p> <h2 id="abitofwireshark">A Bit of Wireshark</h2> <p>Before going to the task we want to solve, let's spend some time familiarizing ourselves with Wireshark. Its starting screen shows all the network adapters I have on my machine. The one I'm using to connect to the OBIEE servers is &quot;WiFi 2&quot;.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-09-13.50.44.png" alt="Screenshot-2018-10-09-13.50.44"></p> <p>I double-click it and immediately see a constant flow of network packets flying back and forth between my computer and local network machines and the Internet. It's a bit hard to see any particular server in this stream. And &quot;a bit hard&quot; is quite an understatement; to be honest, it is impossible.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/wireshark.gif" alt="wireshark"></p> <p>I need a filter. For example, I know that my OBIEE 12c instance IP is <code>192.168.1.226</code>. So I add an <code>ip.addr==192.168.1.226</code> filter saying that I only want to see traffic to or from this machine. Nothing to see right now, but if I open the login page in a browser, for example, I can see traffic between my machine (<code>192.168.1.25</code>) and the server. It is much better now but still not perfect.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-09-14.08.52.png" alt="Screenshot-2018-10-09-14.08.52"></p> <p>If I add <code>http</code> to the filter like this <code>http and ip.addr==192.168.1.226</code>, I definitely get a much clearer view.</p>
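<p>The same filtering can also be reproduced from the command line with <code>tshark</code>, the CLI companion that ships with Wireshark. A minimal sketch (the interface name is only an example; run <code>tshark -D</code> to list your own interfaces):</p> <pre><code># list the available capture interfaces
tshark -D

# capture and show only HTTP traffic to or from the OBIEE server
tshark -i "WiFi 2" -Y "http and ip.addr==192.168.1.226"
</code></pre>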
<p>Back in the Wireshark UI, for example, I opened the <code>http://192.168.1.226:9502/analytics</code> page just like any other user would do. There are quite a lot of requests and responses. The browser asked for the <code>/analytics</code> URL, the server replied after a few redirects that the actual address for this URL is the <code>login.jsp</code> page, then the browser requested the <code>/bi-security-login/login.jsp</code> page using the <code>GET</code> method and got it with HTTP code <code>200</code>. Code <code>200</code> shows that there were no issues with the request.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/startpage.PNG" alt="startpage"></p> <p>Let's try to log in.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/login.gif" alt="login"></p> <p>The top window is a normal browser and the bottom one is Wireshark. Note that my credentials were sent in clear text, and I think that is a very good argument in defence of using HTTPS everywhere.</p> <p>That is a very basic use of Wireshark: start monitoring, do something, see what was captured. I barely scratched the surface of what Wireshark can do, but that is enough for my task.</p> <h2 id="wiresharkandbvt12c">Wireshark and BVT 12c</h2> <p>The idea is quite simple. I should start capturing my traffic, then use BVT as usual and see how it works with 12c and then how it works with 11g. This should give me the answer I need.</p> <p>Let's see how it works with 12c first. To make things simpler I created a catalogue folder with just one analysis placed on a dashboard.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/bvt-dashboard-1.PNG" alt="bvt-dashboard-1"></p> <p>It's time to run BVT and see what happens.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-11-17.49.59.png" alt="Screenshot-2018-10-11-17.49.59"></p> <p>Here is the dataset I got from OBIEE 12c. I slightly edited and formatted it to make it easier to read, but didn't change anything important.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/dataset12--1.PNG" alt="dataset12--1"></p> <p>What did BVT do to get this result? What API did it use? Let's look at Wireshark.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-11-19.09.27.png" alt="Screenshot-2018-10-11-19.09.27"></p> <p>The first three lines are the same as with a browser. I don't know why BVT needs them, but I don't mind. Then BVT gets the WSDL from OBIEE (<code>GET /analytics-ws/saw.dll/wsdl/v6/private</code>).</p>
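<p>If you want to look at the same API yourself, the WSDL that BVT downloads can be fetched directly; a minimal sketch using the same host and port as in the examples above (adjust them, and the output file name, to your own environment):</p> <pre><code># download the OBIEE web services WSDL that BVT requests (output file name is arbitrary)
curl -o obiee-ws.wsdl "http://192.168.1.226:9502/analytics-ws/saw.dll/wsdl/v6/private"
</code></pre>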
<p>In the capture there are multiple similar query-response pairs flying back and forth because the WSDL is big enough to be downloaded in chunks. A purely technical thing, nothing strange or important here.<br> But now we know what API BVT uses to get data from OBIEE. I don't think anyone is surprised that it is the Web Services API. Let's take a look at the Web Services calls.</p> <p>First comes the <code>logon</code> method from <code>nQSessionService</code>. It logs into OBIEE and starts a session.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-11-19.36.59.png" alt="Screenshot-2018-10-11-19.36.59"></p> <p>The next requests get catalogue item descriptions for objects in my <code>/shared/BVT</code> folder. We can see a set of calls to <code>webCatalogService</code> methods. These calls are reading my web catalogue structure: all folders, subfolders, the dashboard and the analysis. Pretty simple, nothing really interesting or unexpected here.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/ws01.PNG" alt="ws01"></p> <p>Then we can see how BVT uses <code>generateReportSQLResult</code> from <code>reportService</code> to get the logical SQL for the analysis.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-11-19.42.07.png" alt="Screenshot-2018-10-11-19.42.07"></p> <p>And it gets the analysis' logical SQL as the response.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-11-19.45.10.png" alt="Screenshot-2018-10-11-19.45.10"></p> <p>And the final step - BVT executes this SQL and gets the data. Unfortunately, it is hard to show the data on a screenshot, but the line starting with <code>[truncated]</code> is the XML I showed before.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-12-12.19.58.png" alt="Screenshot-2018-10-12-12.19.58"></p> <p>And that's all. That is how BVT gets data from OBIEE.</p> <p>I did the same for 11g and saw absolutely the same procedure.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/Screenshot-2018-10-11-21.01.35.png" alt="Screenshot-2018-10-11-21.01.35"></p> <p>My initial theory that BVT may have been using different APIs for 11g and 12c was busted.</p> <p>From my experiment, I found out that BVT used <code>xmlViewService</code> to actually get the data. And I also know now that it uses logical SQL to get the data. Looking at <a href="https://docs.oracle.com/cd/E23943_01/bi.1111/e16364/methods.htm#BIEIT345">the documentation</a> I can see that <code>xmlViewService</code> has no options related to any formatting. It is a purely data-retrieval service. It can't preserve any formatting and is supposed to give only the data. But hey, I started with the statement &quot;11g preserves formatting&quot;, how is that possible? Well, that was a simple coincidence. It doesn't.</p> <p>In the beginning, I had very little understanding of what keywords to use on MoS to solve the issue. &quot;BVT for 12c doesn't preserve formatting&quot;? &quot;BVT decimal part settings&quot;? &quot;BVT works differently for 11g and 12c&quot;? Now I have something much better - <em>&quot;executeSQLQuery decimal&quot;</em>. 30 seconds of searching and I know the answer.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/mos-1.PNG" alt="mos-1"></p> <p>This was fixed in 11.1.1.9, but there is a patch for 11.1.1.7.some_of_them. 
The patch fixes an 11g issue which prevents BVT from getting the decimal parts of numbers.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/pass.PNG" alt="pass"></p> <p>As you may have noticed, I had no chance of finding this using my initial problem description. Neither BVT, nor 12c, nor 11.1.1.7 was mentioned. This thread looks completely unrelated to the issue, so I had zero chance of finding it.</p> <h2 id="conclusion">Conclusion</h2> <p>OBIEE is complex software and solving issues is not always easy. Unfortunately, no single method is enough for solving all problems. Usually, log files will help you. But when something works, just not the way you expect, log files can be useless. In my case BVT was working fine, 11g was working fine, and 12c was working fine too. Nothing special was happening that would be written to the logs. That is why sometimes you may need unexpected tools. Just like this. Thanks for reading!</p> Andrew Fomin 5bb2129d5ffec000bfcaa7bb Wed Oct 17 2018 06:22:47 GMT-0400 (EDT) Exadata Online Training https://gavinsoorma.com/2018/10/exadata-online-training/ <p>The <strong>fourth edition</strong> of the highly popular &#8220;<strong>Oracle Exadata Essentials for Oracle DBA&#8217;s</strong>&#8221; online training course will be commencing <strong>Sunday 11th November</strong>.</p> <p>This <strong>hands-on training course</strong> will teach you how to install and configure an Exadata Storage Server Cell on your own individual Oracle Virtual Box platform as well as prepare you for the Oracle Certified Expert, Oracle Exadata X5 Administrator exam (1Z0-070).</p> <p>The classes will be from <strong>10.00 AM till 2.00 PM US EST, and entire session recordings are available</strong> in case a session is missed as well as for future reference.</p> <p>The cost of the <strong>5-week online hands-on training is $699.00</strong>, and the course curriculum is based on the Exadata Database Machine: 12c Administration Workshop course offered by Oracle University, which costs over USD $5000!</p> <p>Book your seat for this training course via the registration link below:</p> <p><a href="https://attendee.gotowebinar.com/register/7797168410852492802"> Register for Exadata Essentials &#8230;</a></p> <p>In addition to the topics listed below, attendees will learn how to use CELLCLI to create and manage cell disks, grid disks and flash disks as well as how to configure alerts and monitoring of storage cells on their own individual Exadata Storage Server environments.</p> <p>• Install Exadata Storage Server software and create storage cells on a VirtualBox platform<br /> • Exadata Database Machine Components &amp; Architecture<br /> • Exadata Database Machine Networking<br /> • Smart Scans and Cell Offloading<br /> • Storage Indexes<br /> • Smart Flash Cache and Flash Logging<br /> • Exadata Hybrid Columnar Compression<br /> • I/O Resource Management (IORM)<br /> • Exadata Storage Server Configuration<br /> • Database File System<br /> • Migration to Exadata platform<br /> • Storage Server metrics and alerts<br /> • Monitoring Exadata Database Machine using OEM<br /> • Applying a patch to an Exadata Database Machine<br /> • Automatic Support Ecosystem<br /> • Exadata Cloud Service overview</p> <p>&#8230;. 
and more!</p> <p>Here is some of the feedback received from the attendees of earlier training sessions:</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/10/feedback.png"><img class="aligncenter size-full wp-image-8325" src="https://gavinsoorma.com/wp-content/uploads/2018/10/feedback.png" alt="" width="850" height="352" srcset="https://gavinsoorma.com/wp-content/uploads/2018/10/feedback.png 850w, https://gavinsoorma.com/wp-content/uploads/2018/10/feedback-300x124.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/10/feedback-768x318.png 768w" sizes="(max-width: 850px) 100vw, 850px" /></a></p> Gavin Soorma https://gavinsoorma.com/?p=8324 Wed Oct 17 2018 01:27:21 GMT-0400 (EDT) ODC Appreciation Day: Oracle Cloud PSM Cli https://www.rittmanmead.com/blog/2018/10/odc-appreciation-day-2018-psm-cli/ <img src="https://www.rittmanmead.com/blog/content/images/2018/10/100818_1043_ODCApprecia1-1.png" alt="ODC Appreciation Day: Oracle Cloud PSM Cli"><p><a href="https://oracle-base.com/blog/2018/09/27/oracle-developer-community-odc-appreciation-day-2018-thanksodc/">Oracle Developer Community (ODC) Appreciation Day</a> (previously known as OTN Appreciation Day) is a day, started as an initiative of <a href="https://oracle-base.com">Tim Hall</a>, where everyone can share their thanks to the Oracle community by writing about a favourite product, an experience, or a story related to Oracle technology.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/100818_1043_ODCApprecia1.png" alt="ODC Appreciation Day: Oracle Cloud PSM Cli"></p> <p>Last year I wrote about <a href="https://www.rittmanmead.com/blog/2017/10/odc-appreciation-day-obiees-time-hierarchies/">OBIEE Time Hierarchies</a> and how they are very useful to perform time comparisons, shifts, and aggregations.</p> <p>This year I want to write about the <a href="https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html">Oracle PaaS Service Manager (PSM) Client</a>!<br> I've already written a <a href="https://www.rittmanmead.com/blog/2018/06/devops-in-oac-scripting-oracle-cloud-instance-management-with-psm-cli/">blog post</a> about it in detail; in short, Oracle PSM allows Oracle Cloud administrators to manage their instances via the command line instead of forcing them to use the Web-UI.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/10/2jv6bi.jpg" alt="ODC Appreciation Day: Oracle Cloud PSM Cli"></p> <p>PSM Cli allows you to create an Oracle Analytics Cloud instance by just calling</p> <pre><code>psm analytics create-service -c &lt;CONFIG_FILE&gt; -of &lt;OUTPUT_FORMAT&gt; </code></pre> <p>and passing a JSON <code>&lt;CONFIG_FILE&gt;</code> which can easily be downloaded after following the creation process in the Web-UI, a bit like how the response file in on-premises OBIEE can be saved and customised for future reuse after the first UI installation.</p>
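<p>As a quick illustration (the file name and output format below are just examples), the call above can then be run against a payload saved from the Web-UI:</p> <pre><code># create an OAC instance from a JSON payload saved from the Web-UI (illustrative names)
psm analytics create-service -c oac_payload.json -of json
</code></pre> <p>The job started by this call can then be followed with the <code>operation-status</code> command shown a little further down.</p>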
<p>Examples of the PSM JSON payloads can be found <a href="https://github.com/FrancescoTisiot/OAC-JSON">here</a>.</p> <p>OAC Instances can also easily be started/stopped/restarted with the command</p> <pre><code>psm analytics start/stop/restart -s &lt;INSTANCE_NAME&gt; </code></pre> <p>And the status of each command can be tracked with</p> <pre><code>psm analytics operation-status -j &lt;JOB_ID&gt; </code></pre> <p>As mentioned in my <a href="https://www.rittmanmead.com/blog/2018/06/devops-in-oac-scripting-oracle-cloud-instance-management-with-psm-cli/">previous post</a>, PSM Cli also opens the door to <strong>instance management automation</strong>, which is a requirement for providing cost-effective <strong>fully isolated feature-related OAC instances</strong>, useful when thinking about <strong>DevOps</strong> practices. The fact that PSM Cli is command line means that it can be integrated into any automation tool like Jenkins and thus into any DevOps flow being designed in any company.</p> <p>So thank you, Oracle, for enabling such automation with PSM Cli!</p> <p>Follow the <a href="https://twitter.com/hashtag/thanksodc">#ThanksODC</a> hashtag on Twitter to check which posts have been published on the same theme!</p> Francesco Tisiot 5bbef6d9a62b9100bf837de3 Thu Oct 11 2018 04:11:33 GMT-0400 (EDT) Oracle OAC5 Needs! https://realtrigeek.com/2018/10/09/oracle-oac5-needs/ <p>Alright. It&#8217;s been a while since I last posted so you all know I need to say something. To fill you in, I&#8217;m back at Oracle in my previous position but on the commercial side this time. I&#8217;m having a blast and feel I&#8217;m in my happy/sweet spot.</p> <p>But there&#8217;s something we need to discuss that I don&#8217;t feel Product Management has done a great job of doing &#8211; letting you know that the newest release of OAC (namely OAAC) requires some new downloads.</p> <p>Especially if using the autonomous version of OAC, you need to download a new BI Admin tool for the RPDs, *especially* if you are using EssCS. There are .pem (certificate) files you need. Link is here: https://www.oracle.com/technetwork/middleware/oac/downloads/oac-tools-4392272.html .</p> <p>Also, the new URL to use in the RPD is: https://URL/essbase/agent since the /essbase/agent rules all now.</p> <p>You also need to download a new MaxL client and for the same reason &#8211; .pem files. You can download these from your EssCS instance.</p> <p>Short and sweet today to hopefully help you figure out why you can&#8217;t connect to Essbase Cloud Service EssCS Essbase (trying to cover all the SEO topics) from the RPD.</p> <p>Even more #EssCS #EssbaseCloud #OAC #RPD  Essbase won&#8217;t work in RPD OAC <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p> <p>It&#8217;s a different framework! 
Download the new connections and you are good to go&#8230;</p> Sarah Craynon Zumbrum http://realtrigeek.com/?p=2046 Tue Oct 09 2018 20:24:48 GMT-0400 (EDT) Create the Linux 6.8 VM’s on VirtualBox for the Oracle RAC 12c Workshop https://gavinsoorma.com/2018/10/how-to-create-the-linux-6-8-vms-on-virtualbox-for-the-oracle-rac-12c-workshop/ <p><strong>Oracle RAC How-To Series &#8211; Tutorial 16</strong></p> <p>Download the note (for members only&#8230;)</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/09/How-to-create-the-Linux-6.8-VMs-on-VirtualBox-for-the-Oracle-RAC-12c-Workshop.docx">Tutorial 16</a></p> Gavin Soorma https://gavinsoorma.com/?p=8304 Mon Oct 08 2018 04:20:59 GMT-0400 (EDT) July 2018 PSU Oracle Grid Infrastructure 12c Release 2 https://gavinsoorma.com/2018/10/july-2018-psu-oracle-grid-infrastructure-12c-release-2/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/july-2018-psu-oracle-grid-infrastructure-12c-release-2/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8302 Mon Oct 08 2018 04:19:13 GMT-0400 (EDT) Oracle 12c Clusterware Post-installation and Configuration Verification https://gavinsoorma.com/2018/10/oracle-12c-clusterware-post-installation-and-configuration-verification/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/oracle-12c-clusterware-post-installation-and-configuration-verification/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8300 Mon Oct 08 2018 04:16:06 GMT-0400 (EDT) 18c Grid Infrastructure Upgrade https://gavinsoorma.com/2018/10/18c-grid-infrastructure-upgrad/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/18c-grid-infrastructure-upgrad/"><b>Login</b></a> to access. 
</div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8298 Mon Oct 08 2018 04:13:57 GMT-0400 (EDT) DNS and DHCP setup for 12c R2 Grid Infrastructure installation with Grid Naming Service (GNS) https://gavinsoorma.com/2018/10/dns-and-dhcp-setup-for-12c-r2-grid-infrastructure-installation-with-grid-naming-service-gns/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/dns-and-dhcp-setup-for-12c-r2-grid-infrastructure-installation-with-grid-naming-service-gns/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8296 Mon Oct 08 2018 04:12:10 GMT-0400 (EDT) Adding and Deleting a Node From a RAC Cluster https://gavinsoorma.com/2018/10/adding-and-deleting-a-node-from-a-rac-cluster/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/adding-and-deleting-a-node-from-a-rac-cluster/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8294 Mon Oct 08 2018 04:10:07 GMT-0400 (EDT) Convert RAC to RAC One Node and RAC One Node to RAC https://gavinsoorma.com/2018/10/convert-rac-to-rac-one-node-and-rac-one-node-to-rac/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/convert-rac-to-rac-one-node-and-rac-one-node-to-rac/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8292 Mon Oct 08 2018 04:08:01 GMT-0400 (EDT) Convert Single Instance Database to RAC Using rconfig https://gavinsoorma.com/2018/10/convert-single-instance-database-to-rac-using-rconfig/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/convert-single-instance-database-to-rac-using-rconfig/"><b>Login</b></a> to access. 
</div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8290 Mon Oct 08 2018 04:04:51 GMT-0400 (EDT) Convert Standard Cluster to Flex ASM Cluster https://gavinsoorma.com/2018/10/convert-standard-cluster-to-flex-asm-cluster/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/convert-standard-cluster-to-flex-asm-cluster/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8288 Mon Oct 08 2018 03:59:22 GMT-0400 (EDT) Create and Manage Server Pools https://gavinsoorma.com/2018/10/create-and-manage-server-pools/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/create-and-manage-server-pools/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8286 Mon Oct 08 2018 03:55:08 GMT-0400 (EDT) Create and Manage Services https://gavinsoorma.com/2018/10/create-and-manage-services/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/create-and-manage-services/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8284 Mon Oct 08 2018 03:48:43 GMT-0400 (EDT) Creating ASM Disks with UDEV https://gavinsoorma.com/2018/10/creating-asm-disks-with-udev/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/creating-asm-disks-with-udev/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8282 Mon Oct 08 2018 03:47:08 GMT-0400 (EDT) How To Create a RAC One Node Database https://gavinsoorma.com/2018/10/how-to-create-a-rac-one-node-database/ <div class="mgm_private_no_access"><div style="border-style:solid; border-width:1px; margin-bottom:1em; background-color:#E4F2FD; border-color:#C6D9E9; margin:5px; font-family:'Lucida Grande','Lucida Sans Unicode',Tahoma,Verdana,sans-serif; font-size:13px; color:#333333;"> <div style="margin: 5px 10px;">You need to be logged in to see this part of the content. 
Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/10/how-to-create-a-rac-one-node-database/"><b>Login</b></a> to access. </div> </div></div> Gavin Soorma https://gavinsoorma.com/?p=8279 Mon Oct 08 2018 03:42:36 GMT-0400 (EDT) Securing Network Access for Snowflake Data Warehouse and Amazon S3 https://blog.redpillanalytics.com/securing-network-access-for-snowflake-data-warehouse-and-amazon-s3-a45076d24cac?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ygvhVK18rbsglkOiR0ffSQ.jpeg" /></figure><p>As technology moves further into a service-oriented (read: Cloud) world, people and enterprises alike need to ensure that securing their assets is taken seriously. Security has always been one of the first of topics covered before purchasing a cloud service which is why good cloud vendors typically spend more time securing their products than any customer ever would want or be able to. That being said, there are often opportunities to make configurations that render the service even more secure than the out of the box offering.</p><p>In my current engagement, I have been partnering with a <a href="http://redpillanalytics.com">Red Pill Analytics </a>client to move from a legacy business intelligence system to a hybrid cloud/on-premises technology stack that, in part, includes <a href="https://www.attunity.com/products/replicate/">Attunity Replicate</a> and <a href="https://www.snowflake.com">Snowflake Data Warehouse</a> on AWS. As part of the project, we have been tasked by Information Security with hardening the architecture. “Hardening” is a bit of an ambiguous term that loosely translates to: <em>configure all components to be as secure as possible</em>. While there are many controls that the team has worked through, this blog post will focus on one in particular: Deny network access to Snowflake and AWS S3 by default, allow access by exception.</p><h4>Securing Access to Snowflake</h4><p>The first step in satisfying the requirement listed above is to restrict Snowflake network access. Snowflake makes it incredibly easy to control which IP address(es) can access the instance. Simply navigate to Account &gt; Policies and create a new <a href="https://docs.snowflake.net/manuals/user-guide/network-policies.html">network policy</a>. A policy can include CIDR notated IP ranges which helps to whitelist traffic coming from various subnets on the company network. Do this before loading any data if at all possible. Creating a new network policy takes a few minutes to set up and can always be adjusted to add or remove IP addresses as needed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zRYVeg3Vlnbh0P-_B5wwXA.png" /></figure><h4>Setting the (External) Stage</h4><p>In our case, Attunity is running on-premises within the client network while Snowflake is a cloud-only data warehouse. A current (October 2018) pre-requisite for connecting Attunity to Snowflake requires that the customer provide an S3 bucket to stage data files; in Snowflake, this is known as an <a href="https://docs.snowflake.net/manuals/sql-reference/sql/create-stage.html">external stage</a>. When Attunity tasks are run, files are continuously shipped to S3 and subsequently copied into Snowflake. 
<h4>Setting the (External) Stage</h4><p>In our case, Attunity is running on-premises within the client network while Snowflake is a cloud-only data warehouse. A current (October 2018) prerequisite for connecting Attunity to Snowflake is that the customer provide an S3 bucket to stage data files; in Snowflake, this is known as an <a href="https://docs.snowflake.net/manuals/sql-reference/sql/create-stage.html">external stage</a>. When Attunity tasks are run, files are continuously shipped to S3 and subsequently copied into Snowflake. As the InfoSec requirement above dictates, access to the S3 bucket must be restricted at the network level so that it is only reachable by Attunity and Snowflake.</p><h4>Restricting S3 Access</h4><p>We know the on-premises IP addresses that Attunity traffic will originate from, so that part is easy; however, Snowflake is not as obvious. Snowflake is constantly spinning up and down compute (EC2) instances, which means that, other than knowing the IP addresses are somewhere within AWS ranges, it is a bit of a moving target. Fortunately, Snowflake traffic can also be identified by the AWS Virtual Private Cloud (VPC) from which it originated. VPC Endpoints are not public information; however, the owner of the VPC can share the identifier with whomever they wish. In this case, Snowflake support can provide a customer with the appropriate ID. Taking the resulting information over to S3, bucket policies can include a combination of VPC IDs and IP addresses as described <a href="https://aws.amazon.com/premiumsupport/knowledge-center/block-s3-traffic-vpc-ip/">here</a>. <em>Note: We had little luck using StringNotLike for IP addresses as mentioned in the article, but substituting NotIpAddress worked just fine.</em></p><p>Putting together the IP addresses and Snowflake’s VPC endpoint ID, the bucket policy ends up looking like this:</p><pre>{<br> &quot;Version&quot;: &quot;2012-10-17&quot;,<br> &quot;Id&quot;: &quot;&lt;yourPolicyId&gt;&quot;,<br> &quot;Statement&quot;: [<br> {<br> &quot;Sid&quot;: &quot;&lt;yourSid&gt;&quot;,<br> &quot;Effect&quot;: &quot;Deny&quot;,<br> &quot;Principal&quot;: &quot;*&quot;,<br> &quot;Action&quot;: &quot;s3:*&quot;,<br> &quot;Resource&quot;: &quot;arn:aws:s3:::&lt;yourBucket&gt;&quot;,<br> &quot;Condition&quot;: {<br> &quot;StringNotLike&quot;: {<br> &quot;aws:SourceVpce&quot;: &quot;vpce-&lt;snowflakeVpce&gt;&quot;<br> },<br> &quot;NotIpAddress&quot;: {<br> &quot;aws:SourceIp&quot;: [<br> &quot;0.0.0.0/0&quot;,<br> &quot;1.1.1.1/1&quot;<br> ]<br> }<br> }<br> }<br> ]<br>}</pre><p>Notice that the effect is to deny all traffic <em>except</em> the listed IP addresses and VPC Endpoint ID. Simply copy and paste the JSON into the Bucket Policy on the Permissions tab.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GTmXWdircFDg154pZXcU_w.png" /></figure>
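<p>If you prefer to script the change rather than paste it into the console, the same document can be applied with the AWS CLI; a minimal sketch, assuming the JSON above has been saved locally as <code>bucket-policy.json</code> and that your credentials are allowed to manage the bucket (the bucket name is the placeholder from the policy above):</p><pre><code># apply the bucket policy from a local JSON file, then read it back to verify
aws s3api put-bucket-policy --bucket yourBucket --policy file://bucket-policy.json
aws s3api get-bucket-policy --bucket yourBucket
</code></pre>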
<p>Once saved, a quick test using <a href="https://www.getpostman.com">Postman</a> to send a GET request to the bucket from an unauthorized IP address now returns 403 Forbidden, indicating the bucket policy is working as expected.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vqFwqDNrpxUYIgbgmqU-Vw.png" /></figure><h4>Conclusion</h4><p>It is important to control not only who is accessing your applications but also where they are accessing them from. Following the instructions above, Snowflake and S3 can be configured to only allow traffic from trusted networks.</p><p>For additional information specific to Snowflake security, check out the documentation for items such as federated authentication &amp; SSO, multi-factor authentication, AWS PrivateLink, AWS Direct Connect, and more. A categorized summary of Snowflake security features can be found <a href="https://docs.snowflake.net/manuals/user-guide/admin-security.html">here</a>.</p><h4>Need help?</h4><p>Red Pill Analytics is a Snowflake Solutions Partner experienced not only in working with Snowflake and Attunity technically but also in advising organizations on overall data strategy. From proof-of-concept to implementation to training your users, we can help. If you are interested in guidance while working with Snowflake, Attunity, or any of your data projects, feel free to reach out to us any time <a href="http://redpillanalytics.com/contact/">on our website</a> or find us on <a href="https://twitter.com/redpilla">Twitter</a>, <a href="https://www.facebook.com/redpillanalytics/">Facebook</a>, and <a href="https://www.linkedin.com/company/red-pill-analytics">LinkedIn</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a45076d24cac" width="1" height="1"><hr><p><a href="https://blog.redpillanalytics.com/securing-network-access-for-snowflake-data-warehouse-and-amazon-s3-a45076d24cac">Securing Network Access for Snowflake Data Warehouse and Amazon S3</a> was originally published in <a href="https://blog.redpillanalytics.com">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Mike Fuller https://medium.com/p/a45076d24cac Thu Oct 04 2018 09:07:01 GMT-0400 (EDT) Set up Fusion SaaS BI Cloud Connector (BICC) to use Cloud Storage http://www.ateam-oracle.com/set-up-fusion-saas-bi-cloud-connector-bicc-to-use-cloud-storage/ For other A-Team articles by Richard, click here Introduction This article walks through the steps to set up a Cloud Storage Container, for use with the Fusion SaaS BI Cloud Connector. This may be of particular interest to Oracle Analytics Cloud customers wanting to use the new Data Replication functionality from Fusion SaaS (for more details, [&#8230;] Richard Williams http://www.ateam-oracle.com/?p=52585 Tue Oct 02 2018 20:16:52 GMT-0400 (EDT) OAC 18.3.3: New Features https://www.rittmanmead.com/blog/2018/09/oac_18_3_3_new_features/ <img src="https://www.rittmanmead.com/blog/content/images/2018/09/Certified-1.png" alt="OAC 18.3.3: New Features"><p>I believe there is a hidden strategy behind Oracle's product release schedule: <strong>every</strong> time I'm either on holiday or on a business trip full of appointments, a new version of <strong>Oracle Analytics Cloud</strong> is published with a huge set of new features!</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Start-1.png" alt="OAC 18.3.3: New Features"></p> <p>OAC 18.3.3 went live last week and contains a big set of enhancements, some of which were already described at <a href="https://kscope18.odtug.com">Kscope18</a> during the Sunday Symposium. New features are appearing in almost all the areas covered by OAC, from Data Preparation to the main Data Flows, new Visualization types, new security and configuration options and BIP and Essbase enhancements. Let's have a look at what's there!</p> <h1 id="datapreparation">Data Preparation</h1> <p>A recurring theme in Europe since last year is <a href="https://en.wikipedia.org/wiki/General_Data_Protection_Regulation">GDPR</a>, the General Data Protection Regulation which aims at protecting the data and privacy of all European citizens. This is very important in our landscape since we &quot;play&quot; with data on a daily basis and we should be aware of what data we can use and how.<br> Luckily for us, OAC now helps to address GDPR with the <strong>Data Preparation Recommendations</strong> step: every time a dataset is added, each column is profiled and a list of recommended transformations is suggested to the user. 
Please note that Data Preparation Recommendations only suggests changes to the dataset, so it can't be considered a complete solution to GDPR compliance.<br> The suggestions may include:</p> <ul> <li>Complete or partial <strong>obfuscation</strong> of the data: useful when dealing with security/user sensitive data</li> <li><strong>Data Enrichment</strong> based on the column data, which can include: <ul> <li><strong>Demographical</strong> information based on names</li> <li><strong>Geographical</strong> information based on locations, zip codes</li> </ul> </li> </ul> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Recommendations.png" alt="OAC 18.3.3: New Features"></p> <p>Each of the suggestions applied to the dataset is stored in a <strong>data preparation script</strong> that can easily be reapplied if the data is updated.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Prep-Script.png" alt="OAC 18.3.3: New Features"></p> <h1 id="dataflows">Data Flows</h1> <p>Data Flows is the &quot;mini-ETL&quot; component within OAC which allows transformations, joins, aggregations, filtering, binning, machine learning model training and storing the artifacts either locally or in a database or Essbase cube.<br> The data flows, however, had some limitations; the first one was that they had to be run manually by the user. With OAC 18.3.3 there is now the option to <strong>schedule Data Flows</strong> more or less like we were used to when scheduling Agents back in OBIEE.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Schedule.png" alt="OAC 18.3.3: New Features"></p> <p>Another limitation was that each Data Flow could produce only a single data-set; this has been solved with the introduction of the <strong>Branch</strong> node, which allows a single Data Flow to produce multiple data-sets, very useful when the same set of source data and transformations needs to be used to produce various data-sets.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Branch.png" alt="OAC 18.3.3: New Features"></p> <p>Two other new features have been introduced to make data-flows more reusable: <strong>Parametrized Sources and Outputs</strong> and <strong>Incremental Processing</strong>.<br> Parametrized Sources and Outputs allow the data-flow source or target to be selected at runtime, allowing, for example, the creation of a specific and different dataset for today's load.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Prompted-DataFlows.png" alt="OAC 18.3.3: New Features"></p> <p>Incremental Processing, as the name says, is a way to run Data Flows only on top of the data added since the last run (incremental loads in ETL terms). In order to have a data flow working with incremental loads we need to:</p> <ul> <li>Define in the source dataset the <strong>key column</strong> that can be used to indicate new data (e.g. 
<code>CUSTOMER_KEY</code> or <code>ORDER_DATE</code>) since the last run</li> <li>When including the dataset in a Data Flow, enable the execution of the Data Flow with only the <strong>new data</strong></li> <li>In the target dataset, define whether the Incremental Processing <strong>replaces existing data</strong> or <strong>appends data</strong>.</li> </ul> <p>Please note that the Incremental Load is available only when using <strong>Database Sources</strong>.</p> <p>Another important improvement is <strong>Function Shipping</strong> when Data Flows are used with <strong>Big Data Cloud</strong>: if the source datasets are coming from BDC and the results are stored in BDC, all the transformations like joining, adding calculation columns and filtering are shipped to BDC as well, meaning there is no additional load happening on OAC for the Data Flow.</p> <p>Lastly, there is a new <strong>Properties Inspector</strong> feature in Data Flow allowing you to check properties like name and description as well as to access and modify the <strong>scheduling</strong> of the related flow.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/DataFlowInspection.png" alt="OAC 18.3.3: New Features"></p> <h1 id="datareplication">Data Replication</h1> <p>It is now possible to use OAC to replicate data from a source system like Oracle's Fusion Apps, Talend or Eloqua directly into Big Data Cloud, Database Cloud or Data Warehouse Cloud. This function is extremely useful since it allows decoupling the queries generated by the analytical tools from the source systems.<br> As expected, the user can select which objects to replicate, the filters to apply, the destination tables and columns, and the load type, either <strong>Full</strong> or <strong>Incremental</strong>.</p> <h1 id="projectcreation">Project Creation</h1> <p>New visualization capabilities have been added which include:</p> <ul> <li>Grid HeatMap</li> <li>Correlation Matrix</li> <li>Discrete Shapes</li> <li>100% Stacked Bars and Area Charts</li> </ul> <p>In the Map views, Multiple Map Layers can now be added as well as Density and Metric-based HeatMaps, all on top of new background maps including Baidu and Google.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Viz.png" alt="OAC 18.3.3: New Features"></p> <p>Tooltips are now supported in all visualizations, allowing the end user to add measure columns which will be shown when hovering over a section of any graph.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Tooltip.png" alt="OAC 18.3.3: New Features"></p> <p>The <strong>Explain</strong> feature is now available on metrics, not only on attributes, and has been enhanced: a new anomaly detection algorithm identifies anomalies in combinations of columns, working in the background in asynchronous mode and allowing the anomalies to be pushed as soon as they are found.</p> <p>A new feature that many developers will appreciate is <strong>AutoSave</strong>: we are all used to autosave when using Google Docs, and the same now applies to OAC, with a project saved automatically at every change. Of course this feature can be turned off if necessary.<br> Another very interesting addition is <strong>Copy Data to Clipboard</strong>: with a right-click on any graph, an option to save the underlying data to the clipboard is available. The data can then natively be pasted into Excel.</p> <p>Did you create a new dataset and want to repoint your existing project to it? 
Now with <strong>Dataset replacement</strong> it's just a few clicks away: you only need to select the new dataset and re-map all the columns used in your current project!</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Replace.png" alt="OAC 18.3.3: New Features"></p> <h1 id="datamanagement">Data Management</h1> <p>The datasets/dataflows/project methodology is typical of what Gartner defined as Mode 2 analytics: analysis done by a business user without any involvement from IT. The step sometimes missing, or hard to perform, in self-service tools is publishing: once a certain dataset is consistent and ready to be shared, it's rather difficult to open it to a larger audience within the same toolset.<br> New OAC administrative options address this problem: dataset <strong>Certification</strong> by an administrator allows a certain dataset to be queried via Ask and DayByDay by other users. There is also a dataset <strong>Permissions</strong> tab allowing the definition of <strong>Full Control</strong>, <strong>Edit</strong> or <strong>Read Only</strong> access at user or role level. This is the way of bringing a self-service dataset back into corporate visibility.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Certified.png" alt="OAC 18.3.3: New Features"></p> <p>A <strong>Search</strong> tab allows fine control over the indexing of a certain dataset used by Ask and DayByDay. There are now options to select <strong>when</strong> the indexing is executed as well as <strong>which</strong> columns to index and <strong>how</strong> (by column name and value or by column name only).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/09/Search.png" alt="OAC 18.3.3: New Features"></p> <h1 id="bipandessbase">BIP and Essbase</h1> <p>BI Publisher was added to OAC in the previous version and now includes new features like a tighter integration with the datasets, which can be used as data sources, as well as features like <strong>email delivery read-receipt notification</strong>, <strong>compressed output</strong> and <strong>password protection</strong> that were already available in the on-premises version.<br> There is also a new set of features for Essbase including a <strong>new UI</strong>, <strong>REST APIs</strong> and, very importantly security-wise, the fact that all <strong>external communications</strong> (like Smartview) are now <strong>over HTTPS</strong>.<br> For a detailed list of new features check this <a href="https://www.youtube.com/playlist?list=PL6gBNP-Fr8KWJxzgqQFV1rE6UWOgmNWp-">link</a>.</p> <h1 id="conclusion">Conclusion</h1> <p>OAC 18.3.3 includes an incredible number of new features which enable the whole analytics story: from self-service data discovery to corporate dashboarding and pixel-perfect formatting, all within the same tool and shared security settings. Options like parametrized and incremental Data Flows allow content reusability and enhance overall platform performance, reducing the load on source systems.<br> If you are looking into OAC and want to know more, don't hesitate to <a href="https://www.rittmanmead.com/blog/2018/09/oac_18_3_3_new_features/info@rittmanmead.com">contact us</a>.</p> Francesco Tisiot 5b7bbc5d5ffec000bfcaa789 Fri Sep 21 2018 08:58:43 GMT-0400 (EDT) ODTUG Board of Directors Nominations Close in 3 Days! 
https://www.odtug.com/p/bl/et/blogaid=828&source=1 This is your opportunity to nominate the person you believe will best provide leadership and policy development for ODTUG. For more information, please click here. All nominees must be paid ODTUG members in good standing. ODTUG https://www.odtug.com/p/bl/et/blogaid=828&source=1 Mon Sep 17 2018 09:56:58 GMT-0400 (EDT) KScope 18 Speaker Award https://devepm.com/2018/09/17/kscope-18-speaker-award/ Hey guys how are you? It has been awhile since last time I wrote anything here&#8230;. and surprise, surprise, it&#8217;s because I&#8217;m crazy working in a project that was sized small but turn out huge and the size didn&#8217;t change&#8230;. 🙂 never happened before heheheh 😉 This is just a small post to tell how [&#8230;] RZGiampaoli http://devepm.com/?p=1723 Mon Sep 17 2018 07:04:39 GMT-0400 (EDT) PBCS vs. EPBCS: Comparing Oracle's Cloud Planning Applications https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/PBCS%20vs%20EPBCS.png?t=1541832538128" alt="PBCS vs. EPBCS: Comparing Oracle's Cloud Planning Applications" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p><br>If you’re thinking about <strong><a href="/hyperionblog/cheat-sheet-migrating-on-prem-hyperion-planning-to-the-cloud">migrating on-prem Hyperion Planning to the cloud</a></strong>,&nbsp;<span style="background-color: transparent;">you have to make a big decision:&nbsp;</span>Oracle Planning and Budgeting Cloud Service (<a href="/hyperionblog/is-oracle-pbcs-the-right-fit-for-your-organization-4-factors-to-consider">PBCS</a>) or&nbsp;<span>Oracle Enterprise Planning and Budgeting Cloud Service</span><span><span>&nbsp;</span>(EPBCS)</span><span style="background-color: transparent;">?</span></p> <p>Whether you choose PBCS or EPBCS, the overall benefits of an Oracle cloud application are the same: no upfront cost for hardware or software, less IT involvement, and no annual maintenance costs.</p> <p><strong>In this blog post, you'll get a high-level comparison</strong> of<strong style="background-color: transparent;">&nbsp;</strong>Oracle’s cloud planning applications to help you navigate your options.<span style="background-color: transparent;">&nbsp;</span></p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-vs-epbcs-comparing-oracle-cloud-planning-applications&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Brian Marshall https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications Thu Sep 13 2018 14:35:00 GMT-0400 (EDT) ODTUG Kscope Session Highlights - Part 2 https://www.odtug.com/p/bl/et/blogaid=826&source=1 Part 2 - Curious about the content you'll see at ODTUG Kscope19? 
As we look ahead to opening abstract submissions in the coming weeks, we would like to share some of the content highlights from ODTUG Kscope18. ODTUG https://www.odtug.com/p/bl/et/blogaid=826&source=1 Thu Sep 13 2018 13:14:04 GMT-0400 (EDT) Star Schema Optimization in Autonomous Data Warehouse Cloud https://danischnider.wordpress.com/2018/09/13/star-schema-optimization-in-autonomous-data-warehouse-cloud/ <p>Oracle Autonomous Data Warehouse Cloud does not allow you to create indexes. Is this a problem for star schemas because no Star Transformation can be used? Or are the required bitmap indexes created automatically? A look under the hood of ADWC.</p> <p><span id="more-601"></span></p> <p>A typical recommendation for star schemas in an Oracle database is to create a bitmap index on each dimension key of a fact table. I used (and still use) this index strategy in many data warehouses and recommend it in reviews, trainings and presentations. Why are bitmap indexes so important on a fact table? Because they are required for the <em>Star Transformation</em>, a special join strategy for star schemas. Without explaining all the details, here is a short summary of the Star Transformation approach:</p> <ol> <li>For each dimension table with a filter (WHERE condition) in the query, a bit array is built based on the bitmap index of the dimension key in the fact table</li> <li>All these bit arrays are combined with a BITMAP AND operator. The result is a bit array for all rows of the fact table that fit all filter conditions</li> <li>This resulting bit array is used to access the corresponding rows in the fact table</li> </ol> <p>But how can this efficient join and access method for a star schema be used when the database does not allow any bitmap indexes or non-unique b-tree indexes to be created? Are bitmap indexes created automatically in ADWC? Or how are these kinds of queries on a star schema handled by the optimizer? To find the answer, let’s look at the execution plan of a typical query on the SSB schema (sample star schema benchmark) that is available in every ADWC database. Some example queries can be found in <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/sample-queries.html#GUID-431A16E8-4C4D-4786-BE5C-30029AC1EFD8">Appendix D</a> of the ADWC documentation.</p> <h1>Sample Star Schema Benchmark</h1> <p>The SSB schema contains one fact table LINEORDER with around 6 billion rows and four dimension tables. The time dimension DWDATE contains all calendar days for 8 years; all other dimensions contain between 2 and 30 million rows. The data model is a typical star schema (I added the foreign key constraints manually in Data Modeler for better readability; the SSB schema contains no PK/FK constraints at all).</p> <p><img title="SSB_schema.jpg" src="https://danischnider.files.wordpress.com/2018/09/issb_schema.jpg?w=456&#038;h=230" alt="SSB schema" width="456" height="230" border="0" /></p> <div> </div> <h1>Execution Plan of a Star Schema Query</h1> <p>A query that joins the fact table with all four dimension tables, each of them containing a filter, leads to the following execution plan. We can see several interesting details in this plan:</p> <ul> <li><strong>Parallel Execution: </strong>The query runs in parallel (all the PX operators in the execution plan). This is generally the case in ADWC, except for connections with consumer group LOW or for ADWC configurations with only 1 CPU core. The degree of parallelism (DOP) depends on the number of CPU cores. 
<p>But how can this efficient join and access method in a star schema be used when the database does not allow you to create any bitmap indexes or non-unique b-tree indexes? Are bitmap indexes created automatically in ADWC? Or how are these kinds of queries on a star schema handled by the optimizer? To find the answer, let’s look at the execution plan of a typical query on the SSB schema (sample star schema benchmark) that is available in every ADWC database. Some example queries can be found in <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/sample-queries.html#GUID-431A16E8-4C4D-4786-BE5C-30029AC1EFD8">Appendix D</a> of the ADWC documentation.</p> <h1>Sample Star Schema Benchmark</h1> <p>The SSB schema contains one fact table LINEORDER with around 6 billion rows and four dimension tables. The time dimension DWDATE contains all calendar days for 8 years; all other dimensions contain between 2 and 30 million rows. The data model is a typical star schema (I added the foreign key constraints manually in Data Modeler for better readability; the SSB schema contains no PK/FK constraints at all).</p> <p><img title="SSB_schema.jpg" src="https://danischnider.files.wordpress.com/2018/09/issb_schema.jpg?w=456&#038;h=230" alt="SSB schema" width="456" height="230" border="0" /></p> <h1>Execution Plan of a Star Schema Query</h1> <p>A query that joins the fact table with all four dimension tables, each of them containing a filter, leads to the following execution plan. We can see several interesting details in this plan:</p> <ul> <li><strong>Parallel Execution: </strong>The query runs in parallel (all the PX operators in the execution plan). This is generally the case in ADWC, except for connections with consumer group LOW or for ADWC configurations with only 1 CPU core. The degree of parallelism (DOP) depends on the number of CPU cores. More details about scalability can be found in Christian Antognini’s blog post <a href="https://antognini.ch/2018/07/observations-about-the-scalability-of-data-loads-in-adwc/">Observations About the Scalability of Data Loads in ADWC</a>.</li> <li><strong>Result Cache: </strong>The result cache is activated (see the RESULT CACHE operation at Id 1 of the plan). The parameter RESULT_CACHE_MODE is set to FORCE in ADWC and cannot be changed. This allows very short response times for queries that are executed multiple times. Only the first execution reads and joins the tables; all subsequent executions read the result from the cache. This works only for queries with a small result set. In a star schema, this is usually the case for highly aggregated data (i.e. when the facts are aggregated on a high hierarchy level of the dimensions).</li> <li><strong>No Star Transformation: </strong>No indexes are used in the execution plan. There are two simple reasons for this: indexes cannot be created manually, and there is no automatic creation of indexes in the Autonomous Data Warehouse. Because no indexes are available, no Star Transformation can be used here.</li> <li><strong>Vector Transformation: </strong>Instead of Star Transformation, an even better approach is used in ADWC: Vector Transformation (see the KEY VECTOR and VECTOR GROUP BY operations in the plan). This is very interesting, because Vector Transformation works only in combination with Oracle Database In-Memory. Although this feature is not supported in ADWC at the moment, this very efficient join approach for star schema queries takes place here.</li> </ul> <pre><code>------------------------------------------------------------------------------------------------
| Id  | Operation                                        | Name                        | Rows  |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                 |                             |   708K|
|   1 |  RESULT CACHE                                    | 0v7cjd9tjv4vb78py6r10duh4r  |       |
|   2 |   TEMP TABLE TRANSFORMATION                      |                             |       |
|   3 |    LOAD AS SELECT                                | SYS_TEMP_0FDA1A147_1F199710 |       |
|   4 |     PX COORDINATOR                               |                             |       |
|   5 |      PX SEND QC (RANDOM)                         | :TQ10001                    |     8 |
|   6 |       HASH GROUP BY                              |                             |     8 |
|   7 |        PX RECEIVE                                |                             |     8 |
|   8 |         PX SEND HASH                             | :TQ10000                    |     8 |
|   9 |          KEY VECTOR CREATE BUFFERED              | :KV0000                     |     8 |
|  10 |           PX BLOCK ITERATOR                      |                             |   639 |
|* 11 |            TABLE ACCESS STORAGE FULL             | DWDATE                      |   639 |
|  12 |    LOAD AS SELECT                                | SYS_TEMP_0FDA1A148_1F199710 |       |
|  13 |     PX COORDINATOR                               |                             |       |
|  14 |      PX SEND QC (RANDOM)                         | :TQ20001                    |     1 |
|  15 |       HASH GROUP BY                              |                             |     1 |
|  16 |        PX RECEIVE                                |                             |     1 |
|  17 |         PX SEND HASH                             | :TQ20000                    |     1 |
|  18 |          KEY VECTOR CREATE BUFFERED              | :KV0001                     |     1 |
|  19 |           PX BLOCK ITERATOR                      |                             |  6000K|
|* 20 |            TABLE ACCESS STORAGE FULL             | CUSTOMER                    |  6000K|
|  21 |    LOAD AS SELECT                                | SYS_TEMP_0FDA1A149_1F199710 |       |
|  22 |     PX COORDINATOR                               |                             |       |
|  23 |      PX SEND QC (RANDOM)                         | :TQ30001                    |   249 |
|  24 |       HASH GROUP BY                              |                             |   249 |
|  25 |        PX RECEIVE                                |                             |   249 |
|  26 |         PX SEND HASH                             | :TQ30000                    |   249 |
|  27 |          KEY VECTOR CREATE BUFFERED              | :KV0002                     |   249 |
|  28 |           PX BLOCK ITERATOR                      |                             | 80000 |
|* 29 |            TABLE ACCESS STORAGE FULL             | SUPPLIER                    | 80000 |
|  30 |    LOAD AS SELECT                                | SYS_TEMP_0FDA1A14A_1F199710 |       |
|  31 |     PX COORDINATOR                               |                             |       |
|  32 |      PX SEND QC (RANDOM)                         | :TQ40001                    |  1006 |
|  33 |       HASH GROUP BY                              |                             |  1006 |
|  34 |        PX RECEIVE                                |                             |  1006 |
|  35 |         PX SEND HASH                             | :TQ40000                    |  1006 |
|  36 |          KEY VECTOR CREATE BUFFERED              | :KV0003                     |  1006 |
|  37 |           PX BLOCK ITERATOR                      |                             | 80000 |
|* 38 |            TABLE ACCESS STORAGE FULL             | PART                        | 80000 |
|  39 |    PX COORDINATOR                                |                             |       |
|  40 |     PX SEND QC (ORDER)                           | :TQ50004                    |   708K|
|  41 |      SORT GROUP BY                               |                             |   708K|
|  42 |       PX RECEIVE                                 |                             |   708K|
|  43 |        PX SEND RANGE                             | :TQ50003                    |   708K|
|  44 |         HASH GROUP BY                            |                             |   708K|
|* 45 |          HASH JOIN                               |                             |   708K|
|  46 |           PX RECEIVE                             |                             |  1006 |
|  47 |            PX SEND BROADCAST                     | :TQ50000                    |  1006 |
|  48 |             PX BLOCK ITERATOR                    |                             |  1006 |
|  49 |              TABLE ACCESS STORAGE FULL           | SYS_TEMP_0FDA1A14A_1F199710 |  1006 |
|* 50 |           HASH JOIN                              |                             |   708K|
|  51 |            PX RECEIVE                            |                             |   249 |
|  52 |             PX SEND BROADCAST                    | :TQ50001                    |   249 |
|  53 |              PX BLOCK ITERATOR                   |                             |   249 |
|  54 |               TABLE ACCESS STORAGE FULL          | SYS_TEMP_0FDA1A149_1F199710 |   249 |
|* 55 |            HASH JOIN                             |                             |   708K|
|  56 |             TABLE ACCESS STORAGE FULL            | SYS_TEMP_0FDA1A147_1F199710 |     8 |
|* 57 |             HASH JOIN                            |                             |   708K|
|  58 |              TABLE ACCESS STORAGE FULL           | SYS_TEMP_0FDA1A148_1F199710 |     1 |
|  59 |              VIEW                                | VW_VT_846B3E5D              |   708K|
|  60 |               HASH GROUP BY                      |                             |   708K|
|  61 |                PX RECEIVE                        |                             |   708K|
|  62 |                 PX SEND HASH                     | :TQ50002                    |   708K|
|  63 |                  VECTOR GROUP BY                 |                             |   708K|
|  64 |                   HASH GROUP BY                  |                             |   708K|
|  65 |                    KEY VECTOR USE                | :KV0001                     |  3729K|
|  66 |                     KEY VECTOR USE               | :KV0000                     |  3875K|
|  67 |                      KEY VECTOR USE              | :KV0003                     |    14M|
|  68 |                       KEY VECTOR USE             | :KV0002                     |   247M|
|  69 |                        PX BLOCK ITERATOR         |                             |  5999M|
|* 70 |                         TABLE ACCESS STORAGE FULL| LINEORDER                   |  5999M|
------------------------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 8 because of degree limit
   - vector transformation used for this statement
</code></pre> <h1>Vector Transformation</h1> <p>The basic idea of Vector Transformation is similar to that of Star Transformation: before accessing the (usually much bigger) fact table, the result set is reduced by applying all dimension filters. To do this, the Vector Transformation is executed in two phases:</p> <p><strong>Phase 1: </strong>The following steps are performed for each dimension table with filter criteria (i.e. a WHERE condition in the query):</p> <ul> <li>The dimension table is scanned with a full table scan. All rows that do not fit the WHERE condition are ignored</li> <li>A key vector is calculated to determine which rows of the dimension are required for the query (KEY VECTOR CREATE BUFFERED in the plan)</li> <li>The data is aggregated with an “In-Memory Accumulator” and stored in a temporary table (LOAD AS SELECT in the plan)</li> </ul> <p><strong>Phase 2: </strong>Now the key vectors and temporary tables are used to find the corresponding rows in the fact table:</p> <ul> <li>A full table scan is performed on the fact table, and the data is filtered based on the pre-calculated key vectors (KEY VECTOR USE in the plan)</li> <li>The result set is aggregated using HASH GROUP BY and VECTOR GROUP BY</li> <li>To get the required dimension attributes, a join back to the temporary table is required for each of the dimensions</li> <li>Finally, additional dimension tables (without filters) are joined to the result set. This is not the case in our example.</li> </ul>
<p>This approach is very fast &#8211; even faster than Star Transformation &#8211; especially for large fact tables and weak selectivity on the dimensions. The benchmark query on the 6-billion-row fact table ran in between 1 and 7 minutes (depending on the number of CPU cores configured). For most star schemas (with “only” a few million rows), queries will run in a few seconds (and, if the data is highly aggregated, in less than a second for the second execution because of the result cache).</p> <p>Vector Transformation was introduced in Oracle 12.1.0.2 for In-Memory Aggregation and only takes place when the In-Memory option is enabled. If this is the case, the transformation can even be used for tables that are not populated in the In-Memory Column Store.</p> <p>When we check the In-Memory parameters on an ADWC database (which cannot be changed, by the way), we can see that an In-Memory Column Store of 1 gigabyte is allocated (parameter INMEMORY_SIZE). This enables the In-Memory option and allows the optimizer to use Vector Transformation.</p> <pre><code>SELECT name, value
FROM v$parameter
WHERE name LIKE 'inmemory%';

NAME                                          VALUE
--------------------------------------------- -------------------
inmemory_adg_enabled                          TRUE
inmemory_size                                 1073741824
inmemory_clause_default
inmemory_force                                DEFAULT
inmemory_query                                ENABLE
inmemory_expressions_usage                    ENABLE
inmemory_virtual_columns                      MANUAL
inmemory_max_populate_servers                 42
inmemory_trickle_repopulate_servers_percent   1
</code></pre> <p>Although the In-Memory option is enabled, it is not possible to populate any table into the In-Memory Column Store (IMCS). If a table is created or altered with an INMEMORY clause, nothing happens. The clause is simply ignored, and nothing is populated to the IMCS.</p> <h1>Conclusion</h1> <p>The absence of indexes (especially bitmap indexes) in Autonomous Data Warehouse is not a problem at all, although it prevents the use of Star Transformation. Queries on a star schema are very efficient because of the combination of Vector Transformation, Parallel Execution and Result Cache. This is a very good setup for most data warehouses using dimensional data marts.</p> <p>The performance could be improved even further with Oracle Database In-Memory. Currently, this feature does not seem to be used in ADWC. Hopefully, this will change sometime in the near future.</p> Dani Schnider http://danischnider.wordpress.com/?p=601 Thu Sep 13 2018 12:03:42 GMT-0400 (EDT) What to Know Before Moving Hyperion to the Cloud https://www.us-analytics.com/hyperionblog/what-to-know-before-moving-hyperion-to-the-cloud <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/what-to-know-before-moving-hyperion-to-the-cloud" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/move%20hyperion%20to%20the%20cloud.jpg?t=1541832538128" alt="move hyperion to the cloud.jpg" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p><span style="background-color: transparent;">There’s a lot of information out there about moving from Oracle Hyperion to the Oracle EPM Cloud, which makes sense — there’s a lot you need to know. However, that saturation of content can be difficult to sift through and keep organized. This blog post is your solution.</span></p> <p>In it you’ll find a high-level overview of what you need to know before moving your on-prem tools to the Oracle EPM Cloud, along with links to more in-depth content.
You’ll have a single point of reference to answer your questions about moving to the cloud.</p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fwhat-to-know-before-moving-hyperion-to-the-cloud&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/what-to-know-before-moving-hyperion-to-the-cloud Wed Sep 12 2018 15:22:00 GMT-0400 (EDT) ODTUG Kscope18 Session Recordings & Presentations Now Available to ODTUG Members https://www.odtug.com/p/bl/et/blogaid=827&source=1 If you are a paid ODTUG member who was unable to attend ODTUG Kscope18, we have great news for you! The ODTUG Kscope18 session presentations and recordings are NOW AVAILABLE to you! ODTUG https://www.odtug.com/p/bl/et/blogaid=827&source=1 Wed Sep 12 2018 10:14:04 GMT-0400 (EDT) ODTUG Board of Directors Nomination Deadline is September 20! https://www.odtug.com/p/bl/et/blogaid=825&source=1 Are you a paid ODTUG member (or will you be as of September 30, 2018)? Do you have a lot to offer the ODTUG community? Do you have a passion for ODTUG and time to commit to serving on the board? If so, then I encourage you to submit your name to be considered for the 2019-2020 BOD. ODTUG https://www.odtug.com/p/bl/et/blogaid=825&source=1 Tue Sep 11 2018 09:30:20 GMT-0400 (EDT) PBCS and EPBCS Updates (September 2018): REST APIs for Managing Users, Unified User Preferences, Upcoming Changes & More https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-september-updates <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-september-updates" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/pbcs%20and%20epbcs%20september%202018.jpg?t=1541832538128" alt="pbcs and epbcs september 2018" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The September updates for Oracle's<span>&nbsp;</span><a href="https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications">Planning &amp; Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS)</a><span>&nbsp;have arrived!&nbsp;</span>This blog post outlines several new features, including REST APIs for managing users, unified user preferences, and more.</p> <p><em>The monthly update for PBCS and EPBCS will occur on Friday, September 21 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-and-epbcs-2018-september-updates&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath 
https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-september-updates Mon Sep 10 2018 18:00:18 GMT-0400 (EDT) FCCS Updates (September 2018): Toolkit for HFM Migration, Drilling Down from Summary Members, Considerations & More https://www.us-analytics.com/hyperionblog/fccs-updates-september-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/fccs-updates-september-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/fccs%20september%202018.jpg?t=1541832538128" alt="fccs september 2018" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The Septemer updates for&nbsp;<a href="https://www.us-analytics.com/hyperionblog/faq-oracle-financial-consolidation-and-close-cloud-service-fccs">Oracle's<span>&nbsp;Financial Consolidation and Close Cloud Service</span>&nbsp;(FCCS)</a><span>&nbsp;are here!</span><span>&nbsp;</span>This blog post outlines new features, including a toolkit for Hyperion Financial Management (HFM) migration to FCCS, drilling down from summary members, considerations, and more.</p> <p><em>The monthly update for FCCS will occur on Friday, September 21 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Ffccs-updates-september-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/fccs-updates-september-2018 Mon Sep 10 2018 17:27:54 GMT-0400 (EDT) Welcome to ODTUG Kscope19! https://www.odtug.com/p/bl/et/blogaid=824&source=1 Welcome to a new ODTUG Kscope conference planning year! After a much needed hiatus, the team has kicked off the new conference season, and we are super excited about going back to Seattle! ODTUG https://www.odtug.com/p/bl/et/blogaid=824&source=1 Mon Sep 10 2018 16:51:00 GMT-0400 (EDT) OOW18 and Code One agendas with Date and Times http://www.oralytics.com/2018/09/oow18-and-code-one-agendas-with-date.html <p>I've just received an email in from the organisers of Oracle Open World (18) and Oracle Code One (formally Java One) with details of when I will be presenting.</p> <p>It's going to be a busy presenting schedule this year with 4 sessions. </p><p>It's going to be a busy presenting schedule this year with 3 sessions on the Monday.</p> <p>Check out my sessions, dates and times.</p> <p><img src="https://lh3.googleusercontent.com/-CslcSswRtCU/W5Izs8B8aII/AAAAAAAAAfo/RDhbH2mH1H8PwiC-J03h7wYM6pFlE8RvwCHMYCw/Screenshot%2B2018-09-07%2B09.10.11.png?imgmax=1600" alt="Screenshot 2018 09 07 09 10 11" title="Screenshot 2018-09-07 09.10.11.png" border="0" width="370" height="650" /></p> <p>In addition to these sessions I'll also be helping out in the Demo area in the Developer Lounge. 
I'll be there on Wednesday afternoon handing out FREE beer.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-4152009642431719747 Fri Sep 07 2018 04:15:00 GMT-0400 (EDT) Discoverer Migration to OBIEE: 3 Methods You Can Use https://www.us-analytics.com/hyperionblog/discoverer-to-oracle-bi-migration <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/discoverer-to-oracle-bi-migration" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/discoverer%20to%20oracle%20bi.jpg?t=1541832538128" alt="discoverer to oracle bi" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Is your organization still using Oracle Discoverer? By now, you’re probably aware of <span><strong><a href="https://www.oracle.com/technetwork/developer-tools/discoverer/overview/discoverer-sod-jan2009-132849.pdf">the end of extended support for Discoverer</a></strong>,</span> and that we are well outside the extended support window. You could still be using Discoverer for several reasons, but the fact remains, having your business rely on a tool which is no longer supported is <strong>expensive</strong>, <strong>risky</strong>, and <strong>dangerous</strong>.</p> <p>The Discoverer migration paths aren’t exactly clear, though. Luckily, there’s an upgrade path to <span>Oracle Business Intelligence (OBIEE) or Oracle Analytics Cloud (OAC)&nbsp;</span>that is simple, fast, and affordable. If this sounds like exactly what you need, then feel free to skip down to the bottom. For those of you not convinced of the importance of migration, read on...</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fdiscoverer-to-oracle-bi-migration&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Nicholas Padgett https://www.us-analytics.com/hyperionblog/discoverer-to-oracle-bi-migration Thu Sep 06 2018 14:55:13 GMT-0400 (EDT) How to Connect Power BI to OBIEE https://www.us-analytics.com/hyperionblog/connect-power-bi-to-obiee <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/connect-power-bi-to-obiee" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/power%20bi_4.png?t=1541832538128" alt="power bi_4" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Microsoft’s Power BI is a powerful and popular self-service BI application. It allows reports to be developed at an incredible pace without compromising visualization flexibility or design, in addition to being many times more affordable than traditional Enterprise BI applications.</p> <p>However, as great as Power BI is, it suffers from some of the same problems that all self-service BI applications do. Chief among them is the accuracy and security of the reporting data. Lucky for us, we still use our Enterprise BI application, which guarantees accuracy and security through a governed data warehouse. 
Leveraging our Enterprise BI application as a data source, while developing reports with Power BI, we can create a secure, accurate, and reliable workflow.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fconnect-power-bi-to-obiee&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Nicholas Padgett https://www.us-analytics.com/hyperionblog/connect-power-bi-to-obiee Thu Sep 06 2018 12:45:56 GMT-0400 (EDT) Upcoming ODTUG Webinars - September https://www.odtug.com/p/bl/et/blogaid=815&source=1 Our webinar calendar is constantly evolving! Check back frequently as more webinars will be added to the schedule. ODTUG https://www.odtug.com/p/bl/et/blogaid=815&source=1 Wed Sep 05 2018 10:46:45 GMT-0400 (EDT) ODTUG Kscope Session Highlights - Part 1 https://www.odtug.com/p/bl/et/blogaid=814&source=1 Curious about the content you'll see at ODTUG Kscope19? As we look ahead to opening abstract submissions for ODTUG Kscope19, we would like to share some of the content highlights from ODTUG Kscope18. The following presentation recordings demonstrate the high-caliber content you'll see at Kscope19. ODTUG https://www.odtug.com/p/bl/et/blogaid=814&source=1 Wed Sep 05 2018 10:44:51 GMT-0400 (EDT) Join the 2019 ODTUG Board of Directors - Call for Nominations Now Open https://www.odtug.com/p/bl/et/blogaid=812&source=1 The 2019 Board of Directors nominations are now open! Read this blog to learn more about the roles and responsibilities and to submit a nomination. ODTUG https://www.odtug.com/p/bl/et/blogaid=812&source=1 Tue Sep 04 2018 08:49:11 GMT-0400 (EDT) Bringing Neural Networks to Production using GraphPipe http://www.oralytics.com/2018/08/bringing-neural-networks-to-production.html <p>Machine learning is a fascinating topic. It has so much potential yet very few people talk about using machine learning in production. I've been highlighting the need for this for over 20 years now and only a very small number of machine learning languages and solutions are suitable for production use. Why? maybe it is due to the commercial aspects and as many of the languages and tools are driven by the open source community, one of the last things they get round to focusing on is production deployment. Rightly they are focused at developing more and more machine learning algorithms and features for developing models, but where the real value comes is will being able to embed machine learning model scoring in production system. Maybe this why the dominant players with machine learning in enterprises are still the big old analytics companies.</p> <p>Yes that was a bit a of a rant but it is true. But over the summer and past few months there has been a number of articles about production deployment.</p> <p>But this is not a new topic. For example, we have Predictive Model Markup Language (PMML) around for a long time. The aim of this was to allow the interchange of models between different languages. 
This would mean that the data scientist could develop their models using one language and then transfer or translate the model into another language that offers the same machine learning algorithms.</p> <p>But the problem with this approach is that you may end up with different results being generated by the model in the development or lab environment versus the model being used in production. Why does this happen? Well, the algorithms are developed by different people/companies, and everyone has their own preferences for how these algorithms are implemented.</p> <p>To overcome this, some companies would rewrite their machine learning algorithms and models to ensure that development/lab results matched the results in production. But there is a very large cost associated with this development and ongoing maintenance as the models evolve. This would occur, maybe, every 3, 6, 9 or 12 months. Sometimes the time to write or rewrite each new version of the model would be longer than its lifespan.</p> <p>These kinds of problems have been very common and have impacted model deployment in production.</p> <p>In the era of cloud we are now seeing some machine learning cloud solutions making machine learning models available using REST services. These can, very easily, allow machine learning models to be included in production applications. You are going to hear more about this topic over the coming year.</p> <p>But, despite all the claims and wonders and benefits of cloud solutions, the cloud isn't for everyone. Maybe it will be at some time in the future, but that mightn't be for some months or years to come.</p> <p>So, how can we easily add machine learning model scoring/labeling to our production systems? Well, we need some sort of middleware solution.</p> <p>The current enthusiasm for neural networks, and their need for GPUs, means that these cannot (easily) be deployed into production applications.</p> <p>There have been some frameworks put forward for how to enable this. One such framework is called <a href="https://oracle.github.io/graphpipe/#/">Graphpipe</a>. This has recently been made open source by Oracle.</p> <p><img src="https://lh3.googleusercontent.com/-3-4cia6NqeY/W4bVlCMlHJI/AAAAAAAAAfU/FkJ0lO4X6fkrgqZ0c3Ta1KsN_n6tIxopgCHMYCw/graphpipe.jpg?imgmax=1600" alt="Graphpipe" title="graphpipe.jpg" border="0" width="370" height="280" /></p> <p>Graphpipe is a framework for accessing and using machine learning models developed and running on different platforms. The framework allows you to perform model scoring across multiple neural network models and create ensemble solutions based on these. Graphpipe development has been focused on performance (something most other frameworks are not). It uses flatbuffers for efficient transfer of data and currently has integration with TensorFlow, PyTorch, MXNet, CNTK and, via ONNX, caffe2. 
</p> <p>Expect to have more extensions added to the framework.</p> <p><a href="https://oracle.github.io/graphpipe/#/">Graphpipe website</a></p><p><a href="https://oracle.github.io/graphpipe/#/guide/user-guide/quickstart">Graphpipe getting started</a></p><p><a href="https://blogs.oracle.com/developers/introducing-graphpipe">Graphpipe blogpost</a></p><p><a href="https://github.com/oracle/graphpipe">Graphpipe download</a></p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-9217704337069469123 Wed Aug 29 2018 13:19:00 GMT-0400 (EDT) Three Methods for Restarting OAC https://www.us-analytics.com/hyperionblog/restarting-oracle-analytics-cloud <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/restarting-oracle-analytics-cloud" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/restarting%20oac.jpg?t=1541832538128" alt="restarting oac" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this blog post, I’ll go through three different methods for restarting Oracle Analytics Cloud (OAC), including the uses cases and tutorials for each.</p> <p>The steps in my previous two-minute tutorial — <span><a href="https://www.us-analytics.com/hyperionblog/opening-ports-oac">Opening Ports for OAC – EM Browser Access and RPD Admin Tool Access</a></span> — are prerequisites for the tutorials below.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Frestarting-oracle-analytics-cloud&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Becky Wagner https://www.us-analytics.com/hyperionblog/restarting-oracle-analytics-cloud Tue Aug 28 2018 15:51:53 GMT-0400 (EDT) Looker for OBIEE Experts: Introduction and Concepts https://www.rittmanmead.com/blog/2018/08/looker-for-obiee-experts-introduction-and-concepts/ <img src="https://www.rittmanmead.com/blog/content/images/2018/08/2018-08-23_09-51-07-1.png" alt="Looker for OBIEE Experts: Introduction and Concepts"><p>Recently I've been doing some personal study around various areas including streaming, machine learning and data visualization and one of the tools that got my attention is Looker. 
I've initially heard about Looker from a <a href="https://www.drilltodetail.com/podcast/2017/3/28/drill-to-detail-ep23-looker-bigquery-and-analytics-on-big-data-with-special-guest-daniel-mintz-1">Drill to Detail podcast</a> and increasingly been hearing about it in conferences and use cases together with other cloud solutions like <a href="https://cloud.google.com/bigquery/">BigQuery</a>, <a href="https://www.snowflake.com">Snowflake</a> and <a href="https://fivetran.com">Fivetran</a>.</p> <p>I decided to give it a try myself and, since most of my career was based on Oracle Business Intelligence (OBI) writing down a comparison between the tools that could help others sharing my experience getting introduced to Looker.</p> <h1 id="obieesgoldenfeaturethesemanticmodel">OBIEE's Golden Feature: The Semantic Model</h1> <p>As you probably know if you have been working with OBIEE for some time the centrepiece of its architecture is the <strong>Semantic Model</strong> contained in the <strong>Repository</strong> (RPD)</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/08/051467sshot-1-18.jpg" alt="Looker for OBIEE Experts: Introduction and Concepts"></p> <p>In the three layers of the RPD, we model our source data (e.g. database tables) into attributes, metrics, hierarchies which can then be easily dragged and dropped by the end-user in the analysis or data visualization.</p> <p>I called the RPD &quot;OBIEE's Golden Feature&quot; because to me it's the main benefit of the platform: <strong>abstracting the data complexity from end-users</strong> and, at the same time, <strong>optimizing the query</strong> definition to take care of all the features that could be set in the datasource. The importance of the RPD is also its <strong>centrality</strong>: within the traditional OBIEE all Analysis and Dashboard had to be based on Subject Areas exposed by the RPD meaning that the definition of the metrics was done in a unique place in a consistent manner and then spread across all the reporting providing the <strong>unique source of truth</strong> for the important KPIs in the company typical of what Gartner calls the Mode 1 Analytics.</p> <h1 id="rpddevelopmentspeedlimitationandmode2analytics">RPD Development Speed Limitation and Mode 2 Analytics</h1> <p>The RPD is a centralized binary object within the OBIEE infrastructure: in order to develop and test a full OBIEE instance is required, and the merges between different streams are natively performed via the RPD's admin tool.</p> <p>This complexity unified to the deep knowledge required to correctly build a valid semantic model limits the number of people being able to create and publish new content thus slowing down the process from data to insights typical of the centralized Mode 1 Analytic platform provided centrally by IT teams. Moreover, RPD development is entirely point-and-click within the admintool which is somehow considered slow and old fashion in a world of scripting, code versioning and git merging. 
Several solutions are out in the market (including <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">Rittman Mead Developer Toolkit</a>) to enhance the agility of the development but still, the skills and the toolset required to develop new content makes it a purely IT manageable solution.</p> <p>In order to overcome this limitation several tools like Tableau, QlikView or Oracle's Data Visualization (included in OAC or in the Desktop version) give all the power in the ends of the end-user: from data-sources to graphing, the tools allow an end-to-end data discovery to visualization journey. The problem with those tools (called Mode 2 Analytics by Gartner) is that there is <strong>no central definition of the KPI</strong> since it's demanded to every analyst. All those tools are addressing the problem by providing some sort of <strong>datasource certification</strong> allowing a datasource to be visible and reusable publicly only when it's validated centrally. Again, for most of those tools, the modelling is done in a visual format, which makes it difficult to debug, version control and automate. I've been speaking about this subject in my presentation &quot;<a href="https://speakerdeck.com/ftisiot/devops-and-obiee-do-it-before-its-too-late">DevOps and OBIEE do it before it's too late</a>&quot;.</p> <p>What if we could provide the same centralized source of truth data modelling with an easily scriptable syntax that can be developed from business users without any deep knowledge of SQL or source tables? Well, what we just described is <strong>LookML</strong>!</p> <h1 id="lookml">LookML</h1> <p>LookerML takes the best part of OBIEE: the idea of a modelling layer and democratizes it in order to be available to all business user with a simple language and set of concepts. Moreover, the code versioning is embedded in the tool, so there's no need to teach git branch, commit, push or pull to non-IT people.</p> <p>So, what are the concepts behing LookerML and how can you get familiar with it when comparing it to the medatada modelling in the RPD?</p> <h2 id="lookmlconcepts">LookML Concepts</h2> <p>Let's start from the basic of the RPD modelling: a database table. In LookerML each table is represented by an object called <strong>View</strong> (naming is a bit confusing). Moreover, LookerML's Views can be used not only to map existing database tables but also to create new tables based on existing content and a SQL definition, like the <em>opaque views</em> in OBIEE. On top of this LookML allows the phisicalization of those objects (into a table) and the definition of a schedule for the refresh. This concept is very useful when aggregates are needed, the aggregate definition (SQL) is defined within the LookML View together with the related refresh schedule.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/08/2018-08-23_09-24-51.png" alt="Looker for OBIEE Experts: Introduction and Concepts"></p> <p>The View itself defines only the source, a bit like the RPD's physical layer, the next step is defining how multiple Views interact within each other, or, in OBIEE terms, the Business Layer. In LookML there is an entity called <strong>Explores</strong> and is the place where we can define which Views we want to group together, and what's the linkage between them. Multiple Explores are defined in a <strong>Model</strong>, which should be unique per database. 
So, in OBIEE words, a Model can be compared to a Business Model, with Explores being a subset of Facts and Dimensions grouped in a Subject Area.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/08/2018-08-23_09-51-07.png" alt="Looker for OBIEE Experts: Introduction and Concepts"></p> <p>Ok, all &quot;easy&quot; so far, but where do we map the columns? And where do we set the aggregations? As you might expect, both are mapped within a LookML View into <strong>Fields</strong>. Fields is a generic term which includes both metrics and attributes; the LookML naming is as follows:</p> <ul> <li><strong>Dimension</strong>: in OBIEE terms, the attributes of a dimension. The terminology is confusing since in LookML the Dimension is the column itself, while in OBIEE terms it is the table. A Dimension can be a column value or a combination of multiple values (like OBIEE's BM Logical Source formulas). A Dimension in LookML can't have any aggregation (as in OBIEE).</li> <li><strong>Measures</strong>: in OBIEE terms, a metric. The definition includes the source formula in SQL syntax, the type of aggregation (min/max/count...) and the drill fields.</li> <li><strong>Filters</strong>: this is not something usually defined in OBIEE's RPD; Filters are a way of passing a user choice based on a column value back to a calculation formula, a bit like, for the OBIEE experts, overriding session variables with dashboard prompt values.</li> <li><strong>Parameters</strong>: again, this is not something usually defined in OBIEE's RPD; you can think of a Parameter as a way of setting up a variable-driven function. E.g. a Parameter with values SUM, AVG, MIN, MAX could be used to change how a certain Measure is aggregated.</li> </ul> <p>All good so far? Stick with me and in the future we'll explore more about LookML syntax and Looker in general!</p> Francesco Tisiot 5b7eb0805ffec000bfcaa78e Thu Aug 23 2018 09:28:50 GMT-0400 (EDT) ODTUG August News 2018 https://www.odtug.com/p/bl/et/blogaid=813&source=1 ODTUG August News ODTUG https://www.odtug.com/p/bl/et/blogaid=813&source=1 Wed Aug 22 2018 09:28:40 GMT-0400 (EDT) Parsing Badly Formatted JSON in Oracle DB with APEX_JSON https://www.rittmanmead.com/blog/2018/08/parsing-badly-formatted-json-in-oracle-db-with-apex_json/ <img src="https://www.rittmanmead.com/blog/content/images/2018/08/2018-08-20_11-37-01-1.png" alt="Parsing Badly Formatted JSON in Oracle DB with APEX_JSON"><p>After some blogging silence due to project work and holidays, I thought it was a good idea to do a write-up about a problem I faced this week. One of the tasks I was assigned was to parse a set of JSON files stored in an Oracle 12.1 DB Table.</p> <p>As probably all of you already know, <strong>JSON</strong> (JavaScript Object Notation) is a lightweight data-interchange format and is the format used widely when talking of web services due to its flexibility. In JSON there is no header to define (think of CSV as an example): every field is defined in a format like <code>&quot;field name&quot;:&quot;field value&quot;</code>, and there is no &quot;set of required columns&quot; for a JSON object; when a new attribute needs to be defined, the related name and value can simply be added to the structure.
On top of this &quot;schema-free&quot; definition, the <code>field value</code> can either be</p> <ul> <li>a single value</li> <li>an array</li> <li>a nested JSON object</li> </ul> <p>Basically, when you start parsing JSON you feel like</p> <p><img src="https://i.imgflip.com/2etnau.gif" alt="Parsing Badly Formatted JSON in Oracle DB with APEX_JSON"></p> <h1 id="theeasypart">The Easy Part</h1> <p>The task assigned wasn't too difficult, after reading the proper documentation, I was able to parse a JSON File like</p> <pre><code>{ &quot;field1&quot;: &quot;abc&quot;, &quot;field2&quot;: &quot;cde&quot; } </code></pre> <p>Using a simple SQL like</p> <pre><code>select * from TBL_NAME d, JSON_TABLE(d.text, '$' COLUMNS ( field1 VARCHAR2(10) PATH '$.field1', field2 VARCHAR2(10) PATH '$.field2' ) ) </code></pre> <p>Parsing arrays is not very complex either, a JSON file like</p> <pre><code>{ &quot;field1&quot;: &quot;abc&quot;, &quot;field2&quot;: &quot;cde&quot;, &quot;field3&quot;: [&quot;fgh&quot;,&quot;ilm&quot;,&quot;nop&quot;] } </code></pre> <p>Can be easily parsed using the <code>NESTED PATH</code> call</p> <pre><code>select * from TBL_NAME d, JSON_TABLE(d.text, '$' COLUMNS ( field1 VARCHAR2(10) PATH '$.field1', field2 VARCHAR2(10) PATH '$.field2', NESTED PATH '$.field3[*]' COLUMNS ( field3 VARCHAR2(10) PATH '$' ) ) ) </code></pre> <p>In case the Array contains nested objects, those can be parsed using the same syntax as before, for example, <code>field4</code> and <code>field5</code> of the following JSON</p> <pre><code>{ &quot;field1&quot;: &quot;abc&quot;, &quot;field2&quot;: &quot;cde&quot;, &quot;field3&quot;: [ { &quot;field4&quot;:&quot;fgh&quot;, &quot;field5&quot;:&quot;ilm&quot; }, { &quot;field4&quot;:&quot;nop&quot;, &quot;field5&quot;:&quot;qrs&quot; } ] } </code></pre> <p>can be parsed with</p> <pre><code>NESTED PATH '$.field3[*]' COLUMNS ( field4 VARCHAR2(10) PATH '$.field4', field5 VARCHAR2(10) PATH '$.field5' ) </code></pre> <h1 id="wherethingsgotcomplicated">...Where things got complicated</h1> <p>All very very easy with well-formatted JSON files, but then I faced the following</p> <pre><code>{ &quot;field1&quot;: &quot;abc&quot;, &quot;field2&quot;: &quot;cde&quot;, &quot;field3&quot;: [ { &quot;field4&quot;: &quot;aaaa&quot;, &quot;field5&quot;:{ &quot;1234&quot;:&quot;8881&quot;, &quot;5678&quot;:&quot;8893&quot; } }, { &quot;field4&quot;: &quot;bbbb&quot;, &quot;field5&quot;:{ &quot;9876&quot;:&quot;8881&quot;, &quot;7654&quot;:&quot;8945&quot;, &quot;4356&quot;:&quot;7777&quot; } } ] } </code></pre> <p>Basically the JSON file started including fields with names representing the Ids meaning an association like <code>Product Id</code> (1234) is member of <code>Brand Id</code> (8881). 
This immediately triggered my reaction:</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/08/2018-08-20_11-37-01.png" alt="Parsing Badly Formatted JSON in Oracle DB with APEX_JSON"></p> <p>After checking the documentation again, I wasn't able to find anything that could help me parse that, since all the calls include a predefined <code>PATH</code> string, which in the case of the Ids I couldn't know beforehand.</p> <p>I then reached out to my network on Twitter</p> <div align="center"> <blockquote class="twitter-tweet" data-lang="it"><p lang="en" dir="ltr">To all my <a href="https://twitter.com/Oracle?ref_src=twsrc%5Etfw">@Oracle</a> SQL friends out there: I need to parse a JSON object which has a strange format of {“name”:”abc”, “345678”:”123456”} with the 345678 being an Id I need to extract, any suggestions? none of the ones mentioned here seems to help <a href="https://t.co/DRWdGvCVfu">https://t.co/DRWdGvCVfu</a> <a href="https://t.co/PfhtUnAeR4">pic.twitter.com/PfhtUnAeR4</a></p>&mdash; Francesco Tisiot (@FTisiot) <a href="https://twitter.com/FTisiot/status/1029288292280352768?ref_src=twsrc%5Etfw">14 August 2018</a></blockquote> </div> <p>That generated quite a lot of responses. Initially, the discussion was related to the correctness of the JSON structure, which, from a purist point of view, should be mapped as</p> <pre><code>{ &quot;field1&quot;: &quot;abc&quot;, &quot;field2&quot;: &quot;cde&quot;, &quot;field3&quot;: [ { &quot;field4&quot;: &quot;aaaa&quot;, &quot;field5&quot;: { &quot;association&quot;: [ {&quot;productId&quot;:&quot;1234&quot;, &quot;brandId&quot;:&quot;8881&quot;}, {&quot;productId&quot;:&quot;5678&quot;, &quot;brandId&quot;:&quot;8893&quot;} ] } }, { &quot;field4&quot;: &quot;bbbb&quot;, &quot;field5&quot;: { &quot;association&quot;: [ {&quot;productId&quot;:&quot;9876&quot;, &quot;brandId&quot;:&quot;8881&quot;}, {&quot;productId&quot;:&quot;7654&quot;, &quot;brandId&quot;:&quot;8945&quot;}, {&quot;productId&quot;:&quot;4356&quot;, &quot;brandId&quot;:&quot;7777&quot;} ] } } ] } </code></pre> <p>basically going back to standard field names like productId and brandId that could be easily parsed. In my case this wasn't possible since the JSON format was already widely used at the client.</p> <h1 id="possiblesolutions">Possible Solutions</h1> <p>Since a change in the JSON format wasn't possible, I needed to find a way of parsing it; a few solutions were mentioned in the Twitter thread:</p> <ul> <li>Regular Expressions</li> <li>Bash external table preprocessor</li> <li>Java Stored functions</li> <li>External parsing before storing data into the database</li> </ul> <p>All the above were somehow discarded since I wanted to try achieving a solution based only on existing database functions.</p> <p>Other suggestions included <a href="https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adjsn/json-dataguide.html#GUID-219FC30E-89A7-4189-BC36-7B961A24067C">JSON_DATAGUIDE</a> and <a href="https://oracle-base.com/articles/12c/plsql-object-types-for-json-12cr2">JSON_OBJECT_T.GET_KEYS</a>, which unfortunately are available only from 12.2 (I was on 12.1).</p>
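<p>For anyone on 12.2 or later facing the same problem, a minimal sketch of the data guide approach (reusing the same illustrative <code>TBL_NAME</code> table and <code>text</code> column as the examples above) could look like the following; it aggregates a hierarchical data guide over the stored documents, which lists every discovered key, including the numeric Id field names:</p> <pre><code>-- 12.2+ only: not an option on the 12.1 database used in this post
SELECT JSON_DATAGUIDE(d.text, DBMS_JSON.FORMAT_HIERARCHICAL) AS dataguide
FROM   TBL_NAME d;
</code></pre>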
<p>But, just a second before surrendering, <a href="https://twitter.com/alanarentsen/status/1029824227875217409">Alan Arentsen</a> suggested using the <a href="https://docs.oracle.com/cd/E59726_01/doc.50/e39149/apex_json.htm#AEAPI29635">APEX_JSON.PARSE</a> procedure!</p> <h1 id="thechosenoneapex_json">The Chosen One: APEX_JSON</h1> <p>The <code>APEX_JSON</code> package offers a series of procedures to parse JSON in PL/SQL, in particular:</p> <ul> <li><strong>PARSE</strong>: Parses a JSON formatted string contained in a <code>VARCHAR2</code> or <code>CLOB</code>, storing all the members.</li> <li><strong>GET_COUNT</strong>: Returns the number of array elements or <strong>object members</strong></li> <li><strong>GET_MEMBERS</strong>: Returns the table of members of an object</li> </ul> <p>You can already imagine how a combination of those calls can parse the JSON text defined above; let's have a look at the JSON again:</p> <pre><code>{ &quot;field1&quot;: &quot;abc&quot;, &quot;field2&quot;: &quot;cde&quot;, &quot;field3&quot;: [ { &quot;field4&quot;: &quot;aaaa&quot;, &quot;field5&quot;:{ &quot;1234&quot;:&quot;8881&quot;, &quot;5678&quot;:&quot;8893&quot; } }, { &quot;field4&quot;: &quot;bbbb&quot;, &quot;field5&quot;:{ &quot;9876&quot;:&quot;8881&quot;, &quot;7654&quot;:&quot;8945&quot;, &quot;4356&quot;:&quot;7777&quot; } } ] } </code></pre> <p>The parsing process should iterate over the <code>field3</code> entries (2 in this case), and for each entry then iterate over the fields in <code>field5</code> to get both the field name as well as the field value.<br> The number of <code>field3</code> entries can be found with</p> <pre><code>APEX_JSON.GET_COUNT(p_path=&gt;'field3',p_values=&gt;j); </code></pre> <p>And the list of members of <code>field5</code> with</p> <pre><code>APEX_JSON.GET_MEMBERS(p_path=&gt;'field3[%d].field5',p_values=&gt;j,p0=&gt;i); </code></pre> <p>Note the <code>p_path</code> parameter set to <code>field3[%d].field5</code>, meaning that we want to extract the <code>field5</code> from the nth row in <code>field3</code>. The row number is defined by <code>p0=&gt;i</code>, with <code>i</code> being the variable we use in our <code>FOR</code> loop.</p> <p>The complete code is the following:</p> <pre><code>DECLARE
  j             APEX_JSON.t_values;
  r_count       number;
  field5members WWV_FLOW_T_VARCHAR2;
  p0            number;
  BrandId       VARCHAR2(10);
BEGIN
  APEX_JSON.parse(j,'&lt;INSERT_JSON_STRING&gt;');
  -- Getting number of field3 elements
  r_count := APEX_JSON.GET_COUNT(p_path=&gt;'field3',p_values=&gt;j);
  dbms_output.put_line('Nr Records: ' || r_count);
  -- Looping for each element in field3
  FOR i IN 1 .. r_count LOOP
    -- Getting field5 members for the ith member of field3
    field5members := APEX_JSON.GET_MEMBERS(p_path=&gt;'field3[%d].field5',p_values=&gt;j,p0=&gt;i);
    -- Looping all field5 members
    FOR q IN 1 .. field5members.COUNT LOOP
      -- Extracting BrandId
      BrandId := APEX_JSON.GET_VARCHAR2(p_path=&gt;'field3[%d].field5.'||field5members(q),p_values=&gt;j,p0=&gt;i);
      -- Printing BrandId and Product Id
      dbms_output.put_line('Product Id=&quot;'||field5members(q)||'&quot; BrandId=&quot;'||BrandId||'&quot;');
    END LOOP;
  END LOOP;
END;
</code></pre> <p>Note that, in order to extract the <code>BrandId</code>, we used</p> <pre><code>APEX_JSON.GET_VARCHAR2(p_path=&gt;'field3[%d].field5.'||field5members(q) ,p_values=&gt;j,p0=&gt;i); </code></pre> <p>Specifically the <code>PATH</code> is <code>field3[%d].field5.'||field5members(q)</code>. As you can imagine, we are appending the member name (<code>field5members(q)</code>) to the path described previously to extract the value, forming a string like <code>field3[1].field5.1234</code> that will correctly extract the value associated.</p>
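<p>Since the original requirement was to parse JSON documents stored in a table rather than a hard-coded string, a minimal variation, kept deliberately close to the block above and purely as a sketch reusing the same illustrative <code>TBL_NAME</code> table and <code>text</code> column from the <code>JSON_TABLE</code> examples, is to drive the same logic from a cursor loop:</p> <pre><code>DECLARE
  j             APEX_JSON.t_values;
  r_count       number;
  field5members WWV_FLOW_T_VARCHAR2;
  BrandId       VARCHAR2(10);
BEGIN
  -- Parse every JSON document stored in the table instead of a literal string
  FOR d IN (SELECT text FROM TBL_NAME) LOOP
    APEX_JSON.parse(j, d.text);
    r_count := APEX_JSON.GET_COUNT(p_path=&gt;'field3',p_values=&gt;j);
    FOR i IN 1 .. r_count LOOP
      field5members := APEX_JSON.GET_MEMBERS(p_path=&gt;'field3[%d].field5',p_values=&gt;j,p0=&gt;i);
      FOR q IN 1 .. field5members.COUNT LOOP
        BrandId := APEX_JSON.GET_VARCHAR2(p_path=&gt;'field3[%d].field5.'||field5members(q),p_values=&gt;j,p0=&gt;i);
        dbms_output.put_line('Product Id=&quot;'||field5members(q)||'&quot; BrandId=&quot;'||BrandId||'&quot;');
      END LOOP;
    END LOOP;
  END LOOP;
END;
</code></pre>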
<h1 id="conclusion">Conclusion</h1> <p>Three things to take away from this experience. The first is the usage of <strong>JSON_TABLE</strong>: with JSON_TABLE you can parse well-constructed JSON documents and it's very easy and powerful.<br> The second: <strong>APEX_JSON</strong> is a useful package for parsing &quot;not very well&quot; constructed JSON documents, allowing iteration across elements of JSON arrays and object members.<br> The last, which is becoming every day more relevant in my career, is the importance of networking and knowledge sharing: blogging, speaking at conferences and helping others in various channels allow you to know other people and be known, with the nice side effect of sometimes being able, with a single tweet, to get help solving problems you may face!</p> Francesco Tisiot 5b7a82855ffec000bfcaa77f Mon Aug 20 2018 06:38:46 GMT-0400 (EDT) Opening Ports for OAC - EM Browser Access and RPD Admin Tool Access [Two Minute Tutorial] https://www.us-analytics.com/hyperionblog/opening-ports-oac <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/opening-ports-oac" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/ports%203.png?t=1541832538128" alt="ports 3" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this two-minute tutorial, I’ll show you how to open a port for <span><a href="https://www.us-analytics.com/hyperionblog/oracle-analytics-cloud-questions">Oracle Analytics Cloud (OAC)</a></span> to get EM browser access as well as access to the RPD admin tool.</p> <p>The steps in this tutorial are necessary for the subject of my upcoming blog post — three different methods for restarting OAC.</p> Becky Wagner https://www.us-analytics.com/hyperionblog/opening-ports-oac Fri Aug 17 2018 17:32:19 GMT-0400 (EDT) PBCS Data Backup and Recovery Scenarios [Tutorial] https://www.us-analytics.com/hyperionblog/pbcs-data-backup-and-recovery-scenarios <div class="hs-featured-image-wrapper"> <a
href="https://www.us-analytics.com/hyperionblog/pbcs-data-backup-and-recovery-scenarios" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/PBCS%20Data%20Backup%20and%20Recovery%20Scenarios.jpg?t=1541832538128" alt="PBCS Data Backup and Recovery Scenarios" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p style="margin: 0in; margin-bottom: .0001pt;"><span style="font-family: 'Calibri',sans-serif;">Recently, I created several types of backups and recovering methods using the applications snapshots, PBCS exports, and Essbase Data Export business rules to export and import data.</span></p> <p style="margin: 0in; margin-bottom: .0001pt;"><span style="font-family: 'Calibri',sans-serif;">The application exports data in a formatted file in a way that another application can read it and use the data. This typically requires adjusting for naming conventions. We can do this in SQL and in FDMEE. If it’s straightforward — like for recovering data or for migrating data into a different environment — then the Essbase data export will work. The PBCS data export will also work if we’re looking for a non-technical approach. These methods enable the two systems to share the same data.</span></p> <p style="margin: 0in; margin-bottom: .0001pt;"><span style="font-family: 'Calibri',sans-serif;">In searching for the best method, I’ve found a few different options. In this blog post, I’ll show you the business case for PBCS data backup and recovery, along with how to execute several of these techniques. </span></p> <span style="color: #000000;"></span> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-data-backup-and-recovery-scenarios&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Jeff Price https://www.us-analytics.com/hyperionblog/pbcs-data-backup-and-recovery-scenarios Wed Aug 15 2018 13:24:40 GMT-0400 (EDT) Spark docker images http://www.oralytics.com/2018/08/spark-docker-images.html <p>Spark is a very popular environment for processing data and doing machine learning in a distributed environment.</p> <p>When working in a development environment you might work on a single node. 
This can be your local PC or laptop, as not everyone will have access to a multi node distributed environment.</p> <p>But what if you could spin up some docker images there by creating additional nodes for you to test out the scalability of your Spark code.</p> <p>There are links to some Docker images that may help you to do this.</p> <ul> <li><a href="https://hub.docker.com/r/mesosphere/spark/">Mesosphere - Docker repository for Spark image</a></li> <li><a href="https://github.com/big-data-europe/docker-spark">Big Data Europe - Spark Docker images on GitHub</a></li> <li><a href="https://github.com/gettyimages/docker-spark">GettyImages - Spark Docker image on GitHub</a> and also available on <a href="https://hub.docker.com/r/gettyimages/spark/">Docker website</a></li> <li><a href="https://hub.docker.com/r/sequenceiq/spark/">SequenceIQ - Docker repository Spark image</a></li></ul> <p>Or simply create a cloud account on the <a href="https://databricks.com/try-databricks">Databricks Community website</a> to create your own Spark environment to play and learn.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-3260388491940866630 Mon Aug 13 2018 06:49:00 GMT-0400 (EDT) Two Minute Tutorial: How to Access the OAC RPD https://www.us-analytics.com/hyperionblog/how-to-access-the-oac-rpd <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/how-to-access-the-oac-rpd" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/how%20to%20access%20oac%20rpd.jpg?t=1541832538128" alt="how to access oac rpd" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this two-minute tutorial, I’ll walk you through how to access the OAC RPD in two methods…</p> <ul> <li>Accessing it through the Admin Tool</li> <li>SSH into the server</li> </ul> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fhow-to-access-the-oac-rpd&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Becky Wagner https://www.us-analytics.com/hyperionblog/how-to-access-the-oac-rpd Fri Aug 03 2018 15:25:22 GMT-0400 (EDT) PBCS and EPBCS Updates (August 2018): Incremental Export and Import Behavior Change, Updated Vision Sample Application & More https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-august-updates <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-august-updates" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/pbcs%20and%20epbcs%20august%202018%20updates.jpg?t=1541832538128" alt="pbcs and epbcs august 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The August updates for Oracle's<span>&nbsp;</span><a href="https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications">Planning &amp; Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS)</a><span>&nbsp;have arrived!&nbsp;</span>This blog post outlines several new 
features, including an&nbsp;i<span>ncremental export and import behavior change, updated vision sample application, and more.</span></p> <p><em>The monthly update for PBCS and EPBCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-and-epbcs-2018-august-updates&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-august-updates Fri Aug 03 2018 13:11:46 GMT-0400 (EDT) FCCS Updates (August 2018): Enhancements to Close Manager, Ability to Create Journals for Entities with Different Parents & More https://www.us-analytics.com/hyperionblog/fccs-updates-august-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/fccs-updates-august-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/fccs%20update%20august%202018.jpg?t=1541832538128" alt="fccs update august 2018" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The August updates for&nbsp;<a href="https://www.us-analytics.com/hyperionblog/faq-oracle-financial-consolidation-and-close-cloud-service-fccs">Oracle's<span>&nbsp;Financial Consolidation and Close Cloud Service</span>&nbsp;(FCCS)</a><span>&nbsp;are here!</span><span>&nbsp;</span>This blog post outlines new features, including enhancements made to Close Manager, c<span>reate journals for entities with different parents, and more.</span></p> <p><em>The monthly update for FCCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Ffccs-updates-august-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/fccs-updates-august-2018 Fri Aug 03 2018 12:17:09 GMT-0400 (EDT) ARCS Updates (August 2018): Changes to Filtering on Unmatched Transactions in Transaction Matching, Considerations & More https://www.us-analytics.com/hyperionblog/arcs-product-update-august-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/arcs-product-update-august-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/arcs%20august%202018%20updates.jpg?t=1541832538128" alt="arcs august 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The August updates for Oracle's&nbsp;<a href="https://www.us-analytics.com/hyperionblog/faq-account-reconciliation-cloud-service-arcs">Account 
Reconciliation Cloud Service (ARCS</a>) are here. In this blog post, we’ll outline new features in ARCS, including changes to filtering on unmatched transaction matching, considerations, and more.</p> <p>We’ll let you know any time there are updates to ARCS or any other Oracle EPM cloud products. Check the&nbsp;<a href="https://www.us-analytics.com/hyperionblog">US-Analytics Oracle EPM &amp; BI Blog</a><span>&nbsp;</span>every month.</p> <p><em>The monthly update for Oracle ARCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> <h3 style="text-align: center;"></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Farcs-product-update-august-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/arcs-product-update-august-2018 Thu Aug 02 2018 17:18:24 GMT-0400 (EDT) EPRCS Updates (August 2018): Drill to Source Data in Management Reporting, Improved Variable Panel Display in Smart View & More https://www.us-analytics.com/hyperionblog/eprcs-updates-august-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/eprcs-updates-august-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/eprcs%20august%202018%20updates.jpg?t=1541832538128" alt="eprcs august 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this blog, we'll cover the August updates for&nbsp;<a href="https://www.us-analytics.com/hyperionblog/enterprise-performance-reporting-cloud">Oracle Enterprise Performance Reporting Cloud Service (EPRCS)</a>&nbsp;including new features and considerations.</p> <p><em>The monthly update for EPRCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Feprcs-updates-august-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/eprcs-updates-august-2018 Thu Aug 02 2018 16:47:45 GMT-0400 (EDT) 2019 Leadership Program - Now Accepting Applications https://www.odtug.com/p/bl/et/blogaid=811&source=1 Are you looking to invest in your professional development? Do you enjoy the ODTUG community and are you looking to become more involved? The ODTUG leadership program is a great way to accomplish both goals and broaden your network. 
ODTUG https://www.odtug.com/p/bl/et/blogaid=811&source=1 Thu Aug 02 2018 11:57:06 GMT-0400 (EDT) A selection of Hadoop Docker Images http://www.oralytics.com/2018/08/a-selection-of-hadoop-docker-images.html <p>When it comes to big data platforms, one of the biggest challenges is getting a test environment set up where you can try out the various components. There are a few approaches to doing this. The first is to set up your own virtual machine or some other container with the software. But it can be challenging to get even a handful of big data applications/software to work on one machine.</p> <p>But there is an alternative approach. You can use one of the preconfigured environments from the likes of AWS, Google, Azure, Oracle, etc. But in most cases these come with a cost. Maybe not in the beginning, but after a little use you will need to start handing over some dollars. And these require you to have access to the cloud, i.e. wifi, to run them. Again, not always possible!</p> <p>So what if you want to have a local big data and Hadoop environment on your own PC or laptop, or in your home or office test lab? There are a lot of Virtual Machines available. But most of these have a sizeable hardware requirement, particularly for memory, with many requiring 16+ GB of RAM! In more recent times this might not be a problem, but for many it still is: your machine may not have that amount, or may not allow you to upgrade.</p> <p>What can you do?</p> <p>Have you considered using Docker? There are many different Hadoop Docker images available and these are not as resource or hardware hungry, unlike the Virtual Machines.</p> <p>Here is a list of some that I've tried out and you might find them useful.</p> <p><strong><a href="https://hub.docker.com/r/cloudera/quickstart/">Cloudera QuickStart image</a></strong></p><p>You may have tried their VM, now go try the Cloudera QuickStart docker image.</p><p><a href="https://blog.cloudera.com/blog/2015/12/docker-is-the-new-quickstart-option-for-apache-hadoop-and-cloudera/">Read about it here.</a></p> <p>Check out <a href="https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=hadoop&starCount=0">Docker Hub</a> for lots and lots of images.</p> <p>Docker Hub is not the only place to get Hadoop Docker images. There are lots on GitHub. Just do a quick <a href="https://www.google.com/search?q=hadoop+docker+images+on+github&ie=utf-8&oe=utf-8&client=firefox-b-ab">Google search</a> to find the many, many, many images.</p> <p>These Docker Hadoop images are a great way for you to try out these Big Data platforms and environments with the minimum of resources.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-6708294390713491168 Thu Aug 02 2018 11:31:00 GMT-0400 (EDT) Tutorial: Updating Connection Pools in OAC & OBIEE 12c https://www.us-analytics.com/hyperionblog/updating-connection-pools-tutorial <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/updating-connection-pools-tutorial" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/updating%20connection%20pools%20tutorial.jpg?t=1541832538128" alt="updating connection pools tutorial" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>I recently ran into an interesting issue with a client — each RPD connected to different databases based on the environment (Dev, Test and Production).
Due to the client’s security policy, OBIEE developers were not permitted to have the passwords for the data sources.</p> <p>To migrate the RPD, a member of the DBA team must be contacted to input the passwords for the connection pools. At times, a DBA with available bandwidth can be difficult to locate (even for a few minutes), and the existing ticketing system does not lend itself to “on the fly” RPD promotions (such as Dev to Test).</p> <p>If only we had a way to apply connection pools to the RPD while keeping the connection pool information secure…</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fupdating-connection-pools-tutorial&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Kevin Jacox https://www.us-analytics.com/hyperionblog/updating-connection-pools-tutorial Wed Aug 01 2018 13:55:21 GMT-0400 (EDT) DV2 Sequences, Hash Keys, Business Keys – Candid Look https://danlinstedt.com/allposts/datavaultcat/dv2-keys-pros-cons/ Primary Key Options for Data Vault 2.0 This entry is a candid look (technical, unbiased view) of the three alternative primary key options in a Data Vault 2.0 Model.  There are pros and cons to each selection.  I hope you enjoy this factual entry. (C) Copyright 2018 Dan Linstedt all rights reserved, NO reprints allowed [&#8230;] Dan Linstedt http://danlinstedt.com/?p=2986 Mon Jul 30 2018 10:13:12 GMT-0400 (EDT) Oracle 18c Grid Infrastructure Upgrade https://gavinsoorma.com/2018/07/oracle-18c-grid-infrastructure-upgrade/ <h3><span style="color: #ff0000;">Upgrade Oracle 12.1.0.2 Grid Infrastructure to 18c </span></h3> <p><strong>Download the 18c Grid Infrastructure software (18.3)</strong></p> <p><a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html">https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html</a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png"><img class="aligncenter size-full wp-image-8220" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png" alt="" width="774" height="415" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png 774w, https://gavinsoorma.com/wp-content/uploads/2018/07/18-300x161.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18-768x412.png 768w" sizes="(max-width: 774px) 100vw, 774px" /></a></p> <p>&nbsp;</p> <p><strong>Prerequisites</strong></p> <ul> <li>Apply the patch <strong>21255373</strong> to the 12.1.0.2 Grid Infrastructure software home</li> <li>Edit the /etc/security/limits.conf file and add the lines:</li> </ul> <p>oracle soft stack 10240<br /> grid   soft stack 10240</p> <p>&nbsp;</p> <p><strong>Notes</strong></p> <ul> <li>Need to have at least 10 GB of free space in the $ORACLE_BASE directory</li> <li>The unzipped 18c Grid Infrastructure software occupies around 11 GB of disk space &#8211; a big increase on the earlier versions</li> <li>The Grid Infrastructure upgrade can be performed in rolling fashion -configure Batches for this</li> <li>We can see the difference in the software version between the RAC nodes while GI upgrade is 
in progress &#8230;.</li> </ul> <p>During Upgrade:</p> <p>[root@rac01 trace]# cd /u02/app/18.0.0/grid/bin</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs softwareversion</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs softwareversion -all</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>Oracle Clusterware version on node [rac02] is [12.1.0.2.0]</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs activeversion</strong></p> <p>Oracle Clusterware active version on the cluster is [12.1.0.2.0]</p> <p>[root@rac01 bin]#</p> <p>&nbsp;</p> <p>After Upgrade:</p> <p>[root@rac01 bin]# <strong>./crsctl query crs activeversion</strong></p> <p>Oracle Clusterware active version on the cluster is [18.0.0.0.0]</p> <p>[root@rac01 bin]# <strong>./crsctl query crs softwareversion -all</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>Oracle Clusterware version on node [rac02] is [18.0.0.0.0]</p> <p>&nbsp;</p> <ul> <li>The minimum memory requirements is 8 GB &#8211; same as 12c Release 2</li> <li>Got an error PRVF-5600 related to /etc/resolv.conf stating the file cannot be parsed as some lines are in an improper format   &#8211; ignored the error because the format of the file is correct.</li> </ul> <p>[grid@rac01 grid]$ cat /etc/resolv.conf<br /> # Generated by NetworkManager<br /> search localdomain  rac.localdomain</p> <p>nameserver 192.168.56.102</p> <p>options timeout:3<br /> options retries:1</p> <p>&nbsp;</p> <p><strong>Create the directory structure on both RAC nodes</strong></p> <p>[root@rac01 app]# su &#8211; grid</p> <p>[grid@rac01 ~]$ cd /u02/app/18.1.0/</p> <p>[grid@rac01 ~]$ cd /u02/app</p> <p>[grid@rac01 app]$ mkdir 18.1.0</p> <p>[grid@rac01 app]$ cd 18.1.0/</p> <p>[grid@rac01 18.0.0]$ mkdir grid</p> <p>[grid@rac01 18.0.0]$ cd grid</p> <p>[grid@rac01 grid]$ ssh grid@rac02</p> <p>Last login: Sun Jul 29 11:22:38 2018 from rac01.localdomain</p> <p>[grid@rac02 ~]$ cd /u02/app</p> <p>[grid@rac02 app]$ mkdir 18.1.0</p> <p>[grid@rac02 app]$ cd 18.1.0/</p> <p>[grid@rac02 18.0.0]$ mkdir grid</p> <p>&nbsp;</p> <p><strong>Unzip the 18c GI Software</strong></p> <p>[grid@rac01 ~]$ cd /u02/app/18.1.0/grid</p> <p>[grid@rac01 grid]$ unzip -q /media/sf_software/LINUX.X64_180000_grid_home.zip</p> <p>&nbsp;</p> <p><strong>Execute gridSetup.sh</strong></p> <p>[grid@rac01 18.0.0]$ export DISPLAY=:0.0</p> <p>[grid@rac01 18.0.0]$ <strong>./gridSetup.sh</strong></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png"><img class="aligncenter size-full wp-image-8196" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png" alt="" width="612" height="382" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png 612w, https://gavinsoorma.com/wp-content/uploads/2018/07/18a-300x187.png 300w" sizes="(max-width: 612px) 100vw, 612px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png"><img class="aligncenter size-full wp-image-8197" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18b-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18b-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png"><img 
class="aligncenter size-full wp-image-8198" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png" alt="" width="794" height="595" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png"><img class="aligncenter size-full wp-image-8199" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png" alt="" width="800" height="599" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png 800w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-768x575.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-627x470.png 627w" sizes="(max-width: 800px) 100vw, 800px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png"><img class="aligncenter size-full wp-image-8200" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png" alt="" width="794" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png"><img class="aligncenter size-full wp-image-8201" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png" alt="" width="801" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png 801w, https://gavinsoorma.com/wp-content/uploads/2018/07/18f-300x223.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18f-768x571.png 768w" sizes="(max-width: 801px) 100vw, 801px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png"><img class="aligncenter size-full wp-image-8202" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png" alt="" width="794" height="595" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png"><img class="aligncenter size-full wp-image-8203" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png" alt="" width="802" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png 802w, https://gavinsoorma.com/wp-content/uploads/2018/07/18h-300x223.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18h-768x571.png 768w" sizes="(max-width: 802px) 100vw, 802px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png"><img class="aligncenter size-full wp-image-8204" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png" alt="" width="797" height="596" 
srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png 797w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-768x574.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-627x470.png 627w" sizes="(max-width: 797px) 100vw, 797px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png"><img class="aligncenter size-full wp-image-8205" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18j-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18j-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png"><img class="aligncenter size-full wp-image-8206" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png" alt="" width="802" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png 802w, https://gavinsoorma.com/wp-content/uploads/2018/07/18k-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18k-768x573.png 768w" sizes="(max-width: 802px) 100vw, 802px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png"><img class="aligncenter size-full wp-image-8207" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png" alt="" width="798" height="600" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-768x577.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png"><img class="aligncenter size-full wp-image-8208" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png" alt="" width="797" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png 797w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-627x470.png 627w" sizes="(max-width: 797px) 100vw, 797px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png"><img class="aligncenter size-full wp-image-8209" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png" alt="" width="798" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png"><img class="aligncenter size-full wp-image-8210" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png" alt="" width="798" height="599" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-300x225.png 300w, 
https://gavinsoorma.com/wp-content/uploads/2018/07/18o-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png"><img class="aligncenter size-full wp-image-8211" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png" alt="" width="799" height="601" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18p-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18p-768x578.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png"><img class="aligncenter size-full wp-image-8212" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png" alt="" width="793" height="601" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png 793w, https://gavinsoorma.com/wp-content/uploads/2018/07/18q-300x227.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18q-768x582.png 768w" sizes="(max-width: 793px) 100vw, 793px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png"><img class="aligncenter size-full wp-image-8213" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png" alt="" width="794" height="600" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18r-300x227.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18r-768x580.png 768w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png"><img class="aligncenter size-full wp-image-8214" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18s-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18s-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png"><img class="aligncenter size-full wp-image-8215" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png" alt="" width="793" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png 793w, https://gavinsoorma.com/wp-content/uploads/2018/07/18t-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18t-768x579.png 768w" sizes="(max-width: 793px) 100vw, 793px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png"><img class="aligncenter size-full wp-image-8216" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png" alt="" width="799" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18u-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18u-768x573.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p><strong>ASM Configuration Assistant 18c</strong></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png"><img class="aligncenter size-full wp-image-8217" 
src="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png" alt="" width="949" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png 949w, https://gavinsoorma.com/wp-content/uploads/2018/07/18v-300x188.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18v-768x482.png 768w" sizes="(max-width: 949px) 100vw, 949px" /></a></p> <p>&nbsp;</p> <p><strong>GIMR pluggable database upgraded to 18c</strong></p> <p>&nbsp;</p> <pre>[grid@rac01 bin]$ export ORACLE_SID=-MGMTDB [grid@rac01 bin]$ pwd /u02/app/18.0.0/grid/bin [grid@rac01 bin]$ ./sqlplus sys as sysdba SQL*Plus: Release 18.0.0.0.0 - Production on Sun Jul 29 22:09:17 2018 Version 18.3.0.0.0 Copyright (c) 1982, 2018, Oracle. All rights reserved. Enter password: Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production Version 18.3.0.0.0 SQL&gt; select name,open_mode from v$pdbs; NAME -------------------------------------------------------------------------------- OPEN_MODE ---------- PDB$SEED READ ONLY <strong>GIMR_DSCREP_10</strong> READ WRITE SQL&gt; alter session set container=GIMR_DSCREP_10; Session altered. SQL&gt; select tablespace_name from dba_tablespaces; TABLESPACE_NAME ------------------------------ SYSTEM SYSAUX UNDOTBS1 TEMP USERS SYSGRIDHOMEDATA SYSCALOGDATA SYSMGMTDATA SYSMGMTDATADB SYSMGMTDATACHAFIX SYSMGMTDATAQ 11 rows selected. SQL&gt; select file_name from dba_data_files where tablespace_name='SYSMGMTDATA'; FILE_NAME -------------------------------------------------------------------------------- +OCR/_MGMTDB/7224A7DF6CB92239E0536438A8C03F3A/DATAFILE/sysmgmtdata.281.982792479 SQL&gt; </pre> Gavin Soorma https://gavinsoorma.com/?p=8218 Mon Jul 30 2018 01:05:29 GMT-0400 (EDT) Building dynamic ODI code using Oracle metadata dictionary https://devepm.com/2018/07/27/building-dynamic-odi-code-using-oracle-metadata-dictionary/ Hi all, today’s post will be about how ODI can be used to generate any kind of SQL statements using Oracle metadata tables. We always like to say that ODI is way more than just an ETL tool and that people needs to start to think about ODI as being a full development platform, where [&#8230;] radk00 http://devepm.com/?p=1713 Fri Jul 27 2018 19:28:56 GMT-0400 (EDT) Automating Backup & Recovery for Oracle EPM Cloud [Tutorial] https://www.us-analytics.com/hyperionblog/pbcs-automation-using-epm-automate-and-powershell <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-automation-using-epm-automate-and-powershell" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/Backup%20and%20Recovery%20Strategy%20graphic.jpg?t=1541832538128" alt="Backup and Recovery Strategy graphic" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>What happens when you have data loss or a devastating change to your Oracle Cloud application? Is there an undo button or safety net to help you easily bounce back?</p> <p>We want to proactively help you with this topic, whether it's preventing minor and major data loss, being prepared in case of application corruption, or you simply want an undo for your application changes. You also may want to surgically undo for a specific application artifact like a form or setting, or a specific slice of data back several weeks. 
Can you call Oracle for help, or should you do it yourself?</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-automation-using-epm-automate-and-powershell&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Jeff Price https://www.us-analytics.com/hyperionblog/pbcs-automation-using-epm-automate-and-powershell Thu Jul 26 2018 12:40:09 GMT-0400 (EDT) Managing Metadata in OBIEE 12c [Video Tutorial] https://www.us-analytics.com/hyperionblog/managing-metadata-in-obiee-12c <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/managing-metadata-in-obiee-12c" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/Managing-metadata-in-OBIEE.png?t=1541832538128" alt="Managing-metadata-in-OBIEE" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this 5-minute video tutorial, learn how system admins manage metadata in OBIEE 12c, including how to upload RPD files using the command line.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fmanaging-metadata-in-obiee-12c&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/managing-metadata-in-obiee-12c Thu Jul 26 2018 12:37:00 GMT-0400 (EDT) Starting & Stopping OBIEE Components [Video Tutorial] https://www.us-analytics.com/hyperionblog/starting-stopping-obiee-components-12c <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/starting-stopping-obiee-components-12c" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/starting-and-stoping-obiee-components.png?t=1541832538128" alt="starting-and-stoping-obiee-components" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this quick video tutorial, learn the options you have for starting, stopping, and viewing the status of OBIEE components in 12c.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fstarting-stopping-obiee-components-12c&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/starting-stopping-obiee-components-12c Thu Jul 26 2018 12:11:00 GMT-0400 (EDT) Making our way into Dremio 
https://www.rittmanmead.com/blog/2018/07/untitled/ <p>In an analytics system, we typically have an Operational Data Store (ODS) or staging layer; a performance layer or some data marts; and on top, there would be an exploration or reporting tool such as Tableau or Oracle's OBIEE. This architecture can lead to latency in decision making, creating a gap between analysis and action. Data preparation tools like Dremio can address this.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/GUID-65DF513D-DFC8-4046-8AA5-24292AF5942F-default-1.png" alt=""></p> <p>Dremio is a Data-as-a-Service platform allowing users to quickly query data, directly from the source or in any layer, regardless of its size or structure. The product makes use of Apache Arrow, allowing it to virtualise data through an in-memory layer, creating what is called a Data Reflection.</p> <p>The intent of this post is an introduction to Dremio; it provides a step by step guide on how to query data from Amazon's S3 platform.</p> <p>I wrote this post using my MacBook Pro, Dremio is supported on MacOS. To install it, I needed to make some configuration changes due to the Java version. The latest version of Dremio uses Java 1.8. If you have a more recent Java version installed, you’ll need to make some adjustments to the Dremio configuration files.</p> <p>Lets start downloading Dremio and installing it. Dremio can be found for multiple platforms and we can download it from <a href="https://www.dremio.com/download/">here</a>.</p> <p>Dremio uses Java 1.8, so if you have an early version please make sure you install java 1.8 and edit <code>/Applications/Dremio.app/Contents/Java/dremio/conf/dremio-env</code> to point to the directory where java 1.8 home is located.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-04-at-09.17.15.png" alt=""></p> <p>After that you should be able to start Dremio as any other MacOs application and access <code>http://localhost:9047</code></p> <img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-11.55.25.png" style="width: 360px; height:280px"> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-10.35.55.png" alt=""></p> <h3 id="configurings3source">Configuring S3 Source</h3> <p>Dremio can connect to relational databases (both commercial and open source), NoSQL, Hadoop, cloud storage, ElasticSearch, among others. However the scope of this post is to use a well known NoSQL storage S3 bucket (more details can be found <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html">here</a>) and show the query capabilities of Dremio against unstructured data.</p> <p>For this demo we're using Garmin CSV activity data that can be easily <a href="https://connect.garmin.com/">downloaded</a> from Garmin activity page.</p> <p>Here and example of a CSV Garmin activity. 
If you don't have a Garmin account, you can always replicate the data above.</p> <pre><code>act,runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories 1,NMG,1,00:06:08.258,00:06:06.00,1,36,--,0:06:08 ,0:06:06 ,0:04:13 ,175.390625,193.0,92.89507499768523,--,--,--,65 1,NMG,2,00:10:26.907,00:10:09.00,1,129,--,0:10:26 ,0:10:08 ,0:06:02 ,150.140625,236.0,63.74555754497759,--,--,--,55</code></pre> <p>For user information we have used the following dataset:</p> <pre><code>runner,dob,name JM,01-01-1900,Jon Mead NMG,01-01-1900,Nelio Guimaraes</code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-11.20.42.png" alt=""></p> <p>Add your S3 credentials to access the buckets.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-11.22.14.png" alt=""></p> <p>After configuring your S3 account, all buckets associated with it will appear under the new source area.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-12.13.12.png" alt=""></p> <p>For this post I’ve created two buckets: nmgbuckettest and nmgdremiouser, containing data that could be interpreted as a data mart.</p> <img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-10-at-15.42.03.png" style="width: 360px; height:280px"> <p><strong>nmgbuckettest</strong> - contains Garmin activity data that could be seen as a fact table in CSV format:</p> <p><font size="3">Act,Runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories</font></p> <p><strong>nmgdremiouser</strong> - contains user data that could be seen as a user dimension in a CSV format:</p> <p><font size="3">runner,dob,name</font></p> <h3 id="creatingdatasets">Creating datasets</h3> <p>After we add the S3 buckets, we need to set up the CSV format. Dremio does most of the work for us; however, we needed to adjust some fields, for example date formats, or to map a field as an integer.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-16.45.58.png" alt=""></p> <p>By clicking on the gear icon we access a configuration panel where we can set the following options. Our CSVs were pretty clean, so I've just changed the line delimiter to <code>\n</code> and checked the option <em>Extract Field Name</em>.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-16.53.02.png" alt=""></p> <p>Let's do the same for the second set of CSVs (the nmgdremiouser bucket).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-16.54.32-1.png" alt=""></p> <p>Clicking Save takes us to a new panel where we can start performing some queries.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-17.41.21-1.png" alt=""></p> <p>However, as mentioned before, at this stage we might want to adjust some fields.
Here I'll adapt the <em>dob</em> field from the nmgdremiouser bucket to the dd-mm-yyyy format.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-11.46.55.png" alt=""></p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-11.47.30-1.png" alt=""></p> <p>Apply the changes and save the new dataset under the desired space.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-11.57.18.png" alt=""></p> <p>Feel free to do the same for the nmgbuckettest CSVs. To keep the naming clear, I'll call the dataset coming from the nmgdremiouser bucket <em>D_USER</em> and the one coming from nmgbuckettest <em>F_ACTIVITY</em>.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-12.12.11.png" alt=""></p> <h3 id="queryingdatasets">Querying datasets</h3> <p>Now that we have the D_USER and F_ACTIVITY datasets created, we can start querying them and doing some analysis.</p> <p>This first analysis will tell us which runner climbs more during their activities:</p> <pre><code>SELECT round(nested_0.avg_elev_gain) AS avg_elev_gain,
       round(nested_0.max_elev_gain) AS max_elev_gain,
       round(nested_0.sum_elev_gain) AS sum_elev_gain,
       join_D_USER.name AS name
FROM (
  SELECT avg_elev_gain, max_elev_gain, sum_elev_gain, runner
  FROM (
    SELECT AVG(to_number("Elevation Gain",'###')) AS avg_elev_gain,
           MAX(to_number("Elevation Gain",'###')) AS max_elev_gain,
           SUM(to_number("Elevation Gain",'###')) AS sum_elev_gain,
           runner
    FROM dremioblogpost.F_ACTIVITY
    WHERE "Elevation Gain" != '--'
    GROUP BY runner
  ) nested_0
) nested_0
INNER JOIN dremioblogpost.D_USER AS join_D_USER
ON nested_0.runner = join_D_USER.runner</code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-16-at-13.45.01.png" alt=""></p> <p>To enrich the example, let's find out who the fastest runner is, together with their total climbing:</p> <pre><code>SELECT round(nested_0.km_per_hour) AS avg_speed_km_per_hour,
       nested_0.total_climbing AS total_climbing_in_meters,
       join_D_USER.name AS name
FROM (
  SELECT km_per_hour, total_climbing, runner
  FROM (
    SELECT avg(cast(3600.0/((cast(substr("Avg Moving Paces",3,2) as integer)*60)
                          + cast(substr("Avg Moving Paces",6,2) as integer)) as float)) AS km_per_hour,
           sum(cast("Elevation Gain" as integer)) AS total_climbing,
           runner
    FROM dremioblogpost.F_ACTIVITY
    WHERE "Avg Moving Paces" != '--' AND "Elevation Gain" != '--'
    GROUP BY runner
  ) nested_0
) nested_0
INNER JOIN dremioblogpost.D_USER AS join_D_USER
ON nested_0.runner = join_D_USER.runner</code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-18-at-13.30.55.png" alt=""></p>
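<p>The pace-to-speed expression in the second query is a little cryptic, so here is a minimal worked example of the same arithmetic, using one of the pace values from the sample CSV (the literal is only there for illustration). A pace of 0:06:06 per kilometre is 366 seconds per kilometre, which is roughly 9.8 km/h:</p> <pre><code>-- "Avg Moving Paces" is stored as text like '0:06:06 ' (minutes:seconds per km).
-- Speed in km/h = 3600 seconds per hour divided by seconds per km.
SELECT 3600.0 / ((CAST(SUBSTR('0:06:06 ', 3, 2) AS INTEGER) * 60)
               +  CAST(SUBSTR('0:06:06 ', 6, 2) AS INTEGER)) AS km_per_hour
-- (6 * 60) + 6 = 366 seconds per km, so 3600.0 / 366 is about 9.84 km/h</code></pre>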
<h3 id="conclusions">Conclusions</h3> <p>Dremio is an interesting tool capable of unifying existing repositories of unstructured data. Is Dremio capable of working with any volume of data and complex relationships? Well, I believe that right now the tool isn't capable of this; even with the simple and small data sets used in this example, the performance was not great.</p> <p>Dremio does successfully provide self-service access to most platforms, meaning that users don't have to move data around before being able to perform any analysis. This is probably the most exciting part of Dremio. It might well be in the paradigm of a &quot;good enough&quot; way to access data across multiple sources. This will allow data scientists to do analysis before the data is formally structured.</p> Nélio Guimarães 5b5a56f45000960018e69b44 Wed Jul 25 2018 05:07:32 GMT-0400 (EDT)
Oracle 12c Release 2 New Feature DGMGRL Scripting https://gavinsoorma.com/2018/11/oracle-12c-release-2-new-feature-dgmgrl-scripting/ <p>New in Oracle 12c Release 2 is the ability for scripts to be executed through the Data Guard broker DGMGRL command-line interface, very similar to the way scripts are run in SQL*Plus.</p> <p>DGMGRL commands, SQL commands using the DGMGRL SQL command, and OS commands using the new HOST (or !) capability can be </p><p><em>You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-dgmgrl-scripting/"><b>Login</b></a> to access.</em></p> Gavin Soorma https://gavinsoorma.com/?p=8188 Wed Jul 25 2018 00:02:38 GMT-0400 (EDT) Oracle 12c Release 2 New Feature – SQL HISTORY https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-sql-history/ <p>Oracle 12c Release 2 now provides the ability to reissue previously executed SQL*Plus commands.</p> <p>This functionality is similar to the shell history command available on the UNIX platform.</p> <p>This feature enables us to run, edit, or delete previously executed SQL*Plus, SQL, or PL/SQL commands from the <strong>history list in </strong></p><p><em>You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-sql-history/"><b>Login</b></a> to access.</em></p> Gavin Soorma https://gavinsoorma.com/?p=8183 Tue Jul 24 2018 23:45:00 GMT-0400 (EDT) Lesser known Apache Machine Learning languages http://www.oralytics.com/2018/07/lessor-known-apache-machine-learning.html <p>Machine learning is a very popular topic in recent times, and we keep hearing about languages such as R, Python and Spark. In addition to these we have commercially available machine learning languages and tools from SAS, IBM, Microsoft, Oracle, Google, Amazon, etc., etc. Everyone wants a slice of the machine learning market!</p> <p>The Apache Foundation supports the development of new open source projects in a number of areas. One such area is machine learning. If you have read anything about machine learning you will have come across Spark, and maybe you might believe that everyone is using it. Sadly that isn't true, for lots of reasons, but it is very popular. Spark is one of the projects supported by the Apache Foundation.</p> <p>But are there any other machine learning projects being supported under the Apache Foundation that are an alternative to Spark?
The following table lists the alternative and lesser-known projects (most of these are incubator, retired or graduated Apache projects):</p> <style>td, th { border: 1px solid black; border-color:black; border-collapse: collapse; border-spacing:0; text-align: left; padding: 8px; } </style> <table style="width:100%" border="1" cellspacing="0" cellpadding="0"><tr> <td width="25%"><a href="http://flink.apache.org/">Flink</a> </td> <td>Flink is an open source system for expressive, declarative, fast, and efficient data analysis. Stratosphere combines the scalability and programming flexibility of distributed MapReduce-like platforms with the efficiency, out-of-core execution, and query optimization capabilities found in parallel databases. Flink was originally known as Stratosphere when it entered the Incubator. <p><a href="https://ci.apache.org/projects/flink/flink-docs-master/">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="https://incubator.apache.org/projects/horn.html">HORN</a> </td> <td>HORN is a neuron-centric programming API and execution framework for large-scale deep learning, built on top of Apache Hama. <p><a href="https://cwiki.apache.org/confluence/display/HORN">Wiki Page</a></p><p>(Retired)</p> </td></tr><tr> <td><a href="http://hivemall.incubator.apache.org/">Hivemall</a> </td> <td>Hivemall is a library for machine learning implemented as Hive UDFs/UDAFs/UDTFs. <p>Apache Hivemall offers a variety of functionalities: regression, classification, recommendation, anomaly detection, k-nearest neighbor, and feature engineering. It also supports state-of-the-art machine learning algorithms such as Soft Confidence Weighted, Adaptive Regularization of Weight Vectors, Factorization Machines, and AdaDelta. </p><p><a href="http://hivemall.incubator.apache.org/userguide/index.html">Documentation</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://madlib.apache.org/">MADlib</a></td> <td>Apache MADlib is an open-source library for scalable in-database analytics. It provides data-parallel implementations of mathematical, statistical and machine learning methods for structured and unstructured data. Key features include: Operate on the data locally in-database. Do not move data between multiple runtime environments unnecessarily; Utilize best of breed database engines, but separate the machine learning logic from database specific implementation details; Leverage MPP shared nothing technology, such as the Greenplum Database and Apache HAWQ (incubating), to provide parallelism and scalability. <p><a href="http://madlib.apache.org/documentation.html">Documentation</a></p><p>(graduated)</p></td></tr><tr> <td><a href="http://mxnet.incubator.apache.org/">MXNet</a> </td> <td>A flexible and efficient library for deep learning. MXNet provides optimized numerical computation for GPUs and distributed ecosystems, from the comfort of high-level environments like Python and R. MXNet automates common workflows, so standard neural networks can be expressed concisely in just a few lines of code.
<p><a href="https://mxnet.incubator.apache.org/">Webpage</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://opennlp.apache.org/">OpenNLP</a> </td> <td>OpenNLP is a machine learning based toolkit for the processing of natural language text. OpenNLP supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, language detection and coreference resolution. <p><a href="http://opennlp.apache.org/docs/">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="http://predictionio.apache.org/">PredictionIO</a> </td> <td>PredictionIO is an open source Machine Learning Server built on top of state-of-the-art open source stack, that enables developers to manage and deploy production-ready predictive services for various kinds of machine learning tasks. <p><a href="http://predictionio.apache.org/">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="http://samoa.incubator.apache.org/">SAMOA</a> </td> <td>SAMOA provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms that run on top of distributed stream processing engines (DSPEs). It features a pluggable architecture that allows it to run on several DSPEs such as Apache Storm, Apache S4, and Apache Samza. <p><a href="http://samoa.incubator.apache.org/documentation/Home.html">Documentation</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://singa.incubator.apache.org/en/index.html">SINGA</a> </td> <td>SINGA is a distributed deep learning platform. An intuitive programming model based on the layer abstraction is provided, which supports a variety of popular deep learning models. SINGA architecture supports both synchronous and asynchronous training frameworks. Hybrid training frameworks can also be customized to achieve good scalability. SINGA provides different neural net partitioning schemes for training large models. <p><a href="http://singa.incubator.apache.org/en/docs/index.html">Documentation</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://storm.apache.org/">Storm</a> </td> <td>Storm is a distributed, fault-tolerant, and high-performance realtime computation system that provides strong guarantees on the processing of data. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language. <p><a href="http://storm.apache.org/releases/2.0.0-SNAPSHOT/index.html">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="http://systemml.apache.org/">SystemML</a> </td> <td>SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations such as Apache Hadoop MapReduce and Apache Spark. 
<p><a href="http://systemml.apache.org/documentation">Documentation</a></p><p>(graduated)</p> </td></tr></table> <p><img src="https://lh3.googleusercontent.com/-mi3xpslLyP8/W1S7LVTj-jI/AAAAAAAAAes/V380gbQA4VwJuYQIJEB0XjU-zmKblAxcACHMYCw/big_data_ml.png?imgmax=1600" alt="Big data ml" title="big_data_ml.png" border="0" width="620" height="400" /></p> <p>I will have a closer look that the following SQL based machine learning languages in a lager blog post:</p> <p> - <a href="http://madlib.apache.org/">MADlib</a></p><p> - <a href="http://storm.apache.org/">Storm</a></p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-1432535821395247575 Mon Jul 23 2018 10:40:00 GMT-0400 (EDT) Oracle Analytics Cloud Workshop FAQ https://www.rittmanmead.com/blog/2018/07/oac-workshop-faq/ <p>A few weeks ago, I had the opportunity to present the Rittman Mead Oracle Analytics Cloud workshop in Oracle's head office in London. The aim of the workshop was to educate potential OAC customers and give them the tools and knowledge to decide whether or not OAC was the right solution for them. We had a great cross section of multiple industries (although telecoms were over represented!) and OBIEE familiarity. Together we came up with a series of questions that needed to be answered to help in the decision making process. In the coming workshops we will add more FAQ style posts to the blog to help flesh out the features of the product.</p> <p>If you are interested in coming along to one of the workshops to get some hands on time with OAC, send an email to <strong><a href="mailto:training@rittmanmead.com">training@rittmanmead.com</a></strong> and we can give you the details.</p> <h2 id="dooracleprovideafeaturecomparisonlistbetweenobieeonpremiseandoac">Do Oracle provide a feature comparison list between OBIEE on premise and OAC?</h2> <p>Oracle do not provide a feature comparison between on-premise and OAC. However, Rittman Mead have done an initial comparison between OAC and traditional on-premise OBIEE 12c installations:</p> <h3 id="highlevel">High Level</h3> <ul> <li>Enterprise Analytics is identical to 12c Analytics</li> <li>Only two Actions available in OAC: Navigate to BI content, Navigate to Web page</li> <li>BI Publisher is identical in 12c and OAC</li> <li>Data Visualiser has additional features and a slightly different UI in OAC compared to 12c</li> </ul> <h3 id="bideveloperclienttoolforoac">BI Developer Client Tool for OAC</h3> <ul> <li>Looks exactly the same as the OBIEE client</li> <li>Available only for Windows, straightforward installation</li> <li>OAC IP address and BI Server port must be provided to create an ODBC data source</li> <li>Allows to open and edit online the OAC model</li> <li>Allows offline development. 
The Snapshots interface is used to upload it to OAC (it will completely replace the existing model)</li> </ul> <h3 id="datamodeler">Data Modeler</h3> <ul> <li>Alternative tool to create and manage metadata models</li> <li>Very easy to use, but limited compared to the BI Developer Client.</li> </ul> <h3 id="catalog">Catalog</h3> <ul> <li>It's possible to archive/unarchive catalog folders from on-premise to OAC.</li> </ul> <h3 id="barfile">BAR file</h3> <ul> <li>It's possible to create OAC bar files</li> <li>It's possible to migrate OAC bar files to OBIEE 12c</li> </ul> <h2 id="canyoueverbechargedbynetworkusageforexampleconnectiontoanonpremisedatasourceusingrdc">Can you ever be charged by network usage, for example connection to an on premise data source using RDC?</h2> <p>Oracle will not charge you for network usage as things stand. Your charges come from the following:</p> <ul> <li>Which version of OAC you have (Standard, Data Lake or Enterprise)</li> <li>Whether you are using Pay-as-you-go or Monthly Commitments</li> <li>The amount of disk space you have specified during provisioning</li> <li>The combination of OCPU and RAM currently in use (size).</li> <li>The up-time of your environment.</li> </ul> <p>So, for example, an environment that has 1 OCPU with 7.5 GB RAM will cost less than an environment with 24 OCPUs and 180 GB RAM if they are up for the same amount of time, everything else being equal. This being said, there is an additional charge on top of the analytics licence, as a cloud database is required to configure and launch an analytics instance; this should be taken into consideration when choosing Oracle Analytics Cloud.</p> <h2 id="doyouneedtorestarttheoacenvironmentwhenyouchangetheramandocpusettings">Do you need to restart the OAC environment when you change the RAM and OCPU settings?</h2> <p>Configuring the number of OCPUs and associated RAM is done from the Analytics Service Console. This can be done during uptime without a service restart; however, the analytics service will be unavailable:</p> <p><img src="https://i.imgur.com/AY0IXeI.png&amp;width=250&amp;height=150" alt="alt"></p> <p>PaaS Service Manager Command Line Interface (PSM Cli), which Francesco covered <a href="https://www.rittmanmead.com/blog/2018/06/devops-in-oac-scripting-oracle-cloud-instance-management-with-psm-cli/">here</a>, will allow this to be scripted and scheduled. An interesting use case for this would be to allow an increase in resources during month-end processing, when your concurrent user numbers are at their highest, whilst in the quieter parts of the month you can scale back down.</p> <p>This is done using the 'scale' command, which takes a JSON file as a parameter containing information about what the environment should look like. You will notice in the example below that the JSON file refers to an object called 'shape'; this is the combination of OCPU and RAM that you want the instance to scale to.
Some examples of shapes are:</p> <ul> <li>oc3 — 1 OCPU with 7.5 GB RAM</li> <li>oc4 — 2 OCPUs with 15 GB RAM</li> <li>oc5 — 4 OCPUs with 30 GB RAM</li> <li>oc6 — 8 OCPUs with 60 GB RAM</li> <li>oc7 — 16 OCPUs with 120 GB RAM</li> <li>oc8 — 24 OCPUs with 180 GB RAM</li> <li>oc9 — 32 OCPUs with 240 GB RAM</li> </ul> <p>For example, the following command scales the rittmanmead-analytics-prod service to the oc9 shape:</p> <p><code>$ psm analytics scale -s rittmanmead-analytics-prod -c ~/oac-obiee/scale-to-monthend.json</code><br> where the JSON file contains the following:</p> <p><code>{ &quot;components&quot; : { &quot;BI&quot; : { &quot;shape&quot; : &quot;oc9&quot;, &quot;hosts&quot; : [&quot;rittmanmead-prod-1&quot;] } } }</code></p> <p>Oracle supply documentation for the commands required here: <a href="https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html">https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html</a>.</p> <h2 id="howishighavailabilityprovisionedinoracleanalyticscloud">How is high availability provisioned in Oracle Analytics Cloud?</h2> <p>Building a highly available infrastructure in the cloud needs to take three main areas into consideration:</p> <p><strong>Server Failure:</strong> Oracle Analytics Cloud can be clustered, and additional nodes (up to 10) can be added dynamically in the Cloud 'My Services' console should they need to be:</p> <p><img src="https://i.imgur.com/SHAhPB8.png&amp;width=451&amp;height=250" alt="alt"></p> <p>It is also possible to provision a load balancer, as you can see from the screenshot below:</p> <p><img src="https://i.imgur.com/YffRsnW.png&amp;width=451&amp;height=250" alt="alt"></p> <p><strong>Zone Failure:</strong> Sometimes it is more than just a single server that causes the failure. Cloud architecture is built on server farms, which themselves can be subject to network issues, power failures and weather anomalies. Oracle Analytics Cloud allows you to create an instance in a region, much like Amazon's &quot;availability zones&quot;. A sensible precaution would be to create a disaster recovery environment in a different region from your main prod environment; to help reduce costs, this can be provisioned on the Pay-as-you-go licence model and therefore only be chargeable when it is being used.</p> <p><strong>Cloud Failure:</strong> Although rare, sometimes the cloud platform can fail. For example, both of the data centres that you have chosen to counter the previous point could fall victim to a weather anomaly. Oracle Analytics Cloud allows you to take regular backups of your reports, dashboards and metadata, which can be downloaded, stored off-cloud and re-implemented in another 12c environment.</p> <p>In addition to these points, it's advisable to automate and test everything. Oracle supply a very handy set of scripts and an API called PaaS Service Manager Command Line Interface (PSM Cli) which can be used to achieve this. For example, it can be used to automate backups, to set up monitoring and alerting and, finally and arguably most importantly, to test your DR and HA infrastructure.</p> <h2 id="canyoupushtheusercredentialsdowntothedatabase">Can you push the user credentials down to the database?</h2> <p>At this point in time there is no way to configure database authentication providers in a similar way to the WebLogic providers of the past.
However, Oracle IDCS does have a REST API that could be used to simulate this functionality; documentation can be found here: <a href="https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html">https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html</a></p> <p>You can store user group memberships in a database and configure your service’s authentication provider to access this information when authenticating a user's identity. You can use the script <em>configure_bi_sql_group_provider</em> to set up the provider and create the tables that you need (GROUPS and GROUPMEMBERS). After you run the script, you must populate the tables with your group and group member (user) information (a minimal example follows below).</p> <p>Group memberships that you derive from the SQL provider don't show up in the Users and Roles page in Oracle Analytics Cloud Console as you might expect, but the member assignments work correctly.</p> <p><img src="https://i.imgur.com/LxqZtIt.png&amp;width=451&amp;height=250" alt="alt"></p> <p>These tables are in the Oracle Database Cloud Service you configured for Oracle Analytics Cloud and in the schema created for your service. Unlike the on-premises equivalent functionality, you can’t change the location of these tables or the SQL that retrieves the results.<br> The script to achieve this is stored on the analytics server itself, and can be accessed using SSH (using the user 'opc') and the private keys that you created during the instance provisioning process. They are stored in: /bi/app/public/bin/configure_bi_sql_group_provider</p>
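<p>As a rough illustration, populating the two tables could look something like the SQL below. The column names and values here are assumptions made purely for the example; check the table definitions that the script actually creates in your schema before using anything similar:</p> <pre><code>-- Hypothetical example only: the real column names depend on the tables created by
-- configure_bi_sql_group_provider, so verify them in your own schema first.
INSERT INTO GROUPS (GROUP_NAME) VALUES ('SalesAnalysts');
INSERT INTO GROUPMEMBERS (GROUP_NAME, USER_NAME) VALUES ('SalesAnalysts', 'jane.doe@example.com');
COMMIT;</code></pre>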
<h2 id="canyouimplementsslcertificatesinoracleanalyticscloud">Can you implement SSL certificates in Oracle Analytics Cloud?</h2> <p>The short answer is yes.</p> <p>When Oracle Analytics Cloud instances are created, similarly to on-premise OBIEE instances, a self-signed certificate is generated. The self-signed certificate is intended to be temporary and you must replace it with a new private key and a certificate signed by a certification authority. <strong>Doc ID 2334800.1</strong> on support.oracle.com has the full details on how to implement this, but the high-level steps (taken from the document itself) are:</p> <ul> <li>Associate a custom domain name with the public IP of your OAC instance</li> <li>Get the custom SSL certificate from a Certificate Authority</li> <li>Specify the DNS-registered host name that you want to secure with SSL in servername.conf</li> <li>Install the intermediate certificate</li> <li>Run the script to register the new private key and server certificate</li> </ul> <h2 id="canyouimplementsinglesignonssoinoracleanalyticscloud">Can you implement Single Sign On (SSO) in Oracle Analytics Cloud?</h2> <p>Oracle Identity Cloud Service (IDCS) allows administrators to create security providers for OAC, much like the WebLogic providers in on-premise OBIEE. These can be created and edited to include single sign-on URLs, certificates, etc., as shown in the screenshot below:</p> <p><img src="https://i.imgur.com/5rRKyiH.png" alt="alt"></p> <p>Oracle support <strong>Doc ID 2399789.1</strong> covers this in detail for Microsoft Azure AD and OAC, and is well worth the read.</p> <h2 id="arerpdfilesbarfilesbackwardscompatible">Are RPD files (BAR files) backwards compatible?</h2> <p>This would depend on what has changed between the releases. The different version numbers of OAC don't necessarily include changes to the OBIEE components themselves (e.g. it could just be an improvement to the 'My Services' UI). However, if there have been changes to the way the XML is formed in reports, for example, these won't be compatible with previous versions of the catalog. This all being said, the environments look like they can be upgraded at any time, so you should be able to take a snapshot of your environment, upgrade it to match the newer version and then redeploy/refresh from your snapshot.</p> <h2 id="howdoyouconnectsecurelytoaws">How do you connect securely to AWS?</h2> <p>There doesn't seem to be any documentation on how exactly Visual Analyzer connects to Amazon Redshift using the 'Create Connection' wizard. However, there is an option to create an SSL ODBC connection to the Redshift database that can then be used to connect using the Visual Analyzer ODBC connection wizard:</p> <p><img src="https://i.imgur.com/9odjJrb.png" alt="alt"></p> <h2 id="canyoustilleditinstanceconfigandnqsconfigfiles">Can you still edit instanceconfig and nqsconfig files?</h2> <p>Yes you can; you need to use your SSH keys to sign into the box (using the user 'opc'). They are contained in the following locations:</p> <p>/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml</p> <p>/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI</p> <p>It's also worth mentioning that there is a guide which explains where the responsibility lies should anything break during customisations of the platform.</p> <h2 id="whoisresponsibleforwhatregardingsupport">Who is responsible for what regarding support?</h2> <p>Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services (<strong>Doc ID 2309936.1</strong>)</p> <p><a href="http://https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=277697121079580&amp;id=2309936.1&amp;displayIndex=1&amp;_afrWindowMode=0&amp;_adf.ctrl-state=qwe5xzsil_210#aref_section33">Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services</a></p> Chris Redgrave 5b5a56f45000960018e69b47 Mon Jul 23 2018 08:59:34 GMT-0400 (EDT)
10 Tips to Improve ETL Performance – Revised for ADWC https://danischnider.wordpress.com/2018/07/20/10-tips-to-improve-etl-performance-revised-for-adwc/ <p>The Autonomous Data Warehouse Cloud (ADWC) is a self-configuring, fast, secure and scalable platform for data warehouses. Does this mean we no longer have to take care of the performance of our ETL processes? Which performance tips are still important for us, and where can we hand over the responsibility to ADWC?
A revised version of an old blog post, with regard to Oracle's Data Warehouse Cloud solution.</p> <p><span id="more-594"></span></p> <p>Last summer, I published a blog post with performance tips for ETL jobs: <a href="https://danischnider.wordpress.com/2017/07/23/10-tips-to-improve-etl-performance/">10 Tips to Improve ETL Performance</a>. Now, it’s summer again, and I am running several ETL jobs on the Autonomous Data Warehouse Cloud to test its features and limitations. This is a good time for a revised version of my blog post, with a special focus on ADWC environments. What is still the same, and what changes with ADWC?</p> <p><img title="midnight_adwc.jpg" src="https://danischnider.files.wordpress.com/2018/07/imidnight_adwc.jpg?w=598&#038;h=299" alt="Midnight adwc" width="598" height="299" border="0" /><br /><em>What is the impact of the Autonomous Data Warehouse Cloud on ETL performance? Is the night still too short?</em></p> <p>In my <a href="https://danischnider.wordpress.com/2017/07/23/10-tips-to-improve-etl-performance/">original blog post</a>, I wrote about the following performance tips for ETL:</p> <ol> <li>Use Set-based Operations</li> <li>Avoid Nested Loops</li> <li>Drop Unnecessary Indexes</li> <li>Avoid Functions in WHERE Condition</li> <li>Take Care of OR in WHERE Condition</li> <li>Reduce Data as Early as Possible</li> <li>Use WITH to Split Complex Queries</li> <li>Run Statements in Parallel</li> <li>Perform Direct-Path INSERT</li> <li>Gather Statistics after Loading each Table</li> </ol> <p>Of course, all these tips are still valid, and I recommend using them in every ETL process. But some of them are more important, and some of them are not relevant anymore, if you run your data warehouse on ADWC. Let’s go through the list step by step.</p> <h1>1. Use Set-based Operations</h1> <p>Architecture and configuration of ADWC are designed for a high throughput of large data sets in parallel mode. If you run your load jobs with row-by-row executions, using cursor loops in a procedural language or a row-based ETL tool, ADWC is the wrong environment for you. Of course, it is possible to load data with such programs and tools into an ADWC database, but don’t expect high performance improvements compared to any other database environment.</p> <p>Data Warehouse Automation frameworks and modern ELT tools are able to use the benefits of the target database and run set-based operations. If you use any tools that are able to generate or execute SQL statements, you are on the right track with ADWC.</p>
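<p>To make the difference concrete, here is a minimal sketch of a set-based load. The staging and target tables and their columns are made up for this illustration; the point is simply that a single MERGE statement replaces a cursor loop that inserts or updates one row at a time:</p> <pre><code>-- One set-based statement instead of a row-by-row cursor loop.
-- stg_order_item, s_order_item and the column names are placeholders for this example.
MERGE INTO s_order_item t
USING stg_order_item s
   ON (t.order_id = s.order_id AND t.item_no = s.item_no)
 WHEN MATCHED THEN
   UPDATE SET t.quantity = s.quantity, t.delivery_date = s.delivery_date
 WHEN NOT MATCHED THEN
   INSERT (order_id, item_no, quantity, delivery_date)
   VALUES (s.order_id, s.item_no, s.quantity, s.delivery_date);</code></pre>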
<p>There are only a few reasons for indexes in a data warehouse:</p> <ul> <li>Unique indexes are used to prove the uniqueness of primary key and unique constraints. This is the only case where ADWC still allows you to create indexes. If you define such constraints, an index is created as usual. To improve the ETL performance, it is even possible to create the primary keys with <em>RELY DISABLE NOVALIDATE</em>. In this case, no index is created, but you have to guarantee in the load process or with additional quality checks that no duplicates are loaded.</li> <li>In a star schema, bitmap indexes on the dimension keys of a fact table are required to perform a Star Transformation. In ADWC, a Vector Transformation is used instead (this transformation was introduced with Oracle Database In-Memory). So, there is no need for these indexes anymore.</li> <li>For selective queries that return only a small subset of data, an index range scan may be useful. For these kinds of queries, the optimizer decides to use a Bloom filter in ADWC as an alternative to the (missing) index.</li> </ul> <p>So, the only cases where indexes are created in ADWC are primary key and unique constraints. No other indexes are allowed. This solves a lot of performance issues in ETL jobs.</p> <h1>Performance Tips 4 to 7</h1> <p>These are general tips for writing fast SQL statements. Complex queries and expressions in WHERE conditions are hard for the query optimizer to evaluate and can lead to wrong estimations and poor execution plans. This does not change if you move your data warehouse to ADWC. Of course, performance issues can be “solved” in ADWC by increasing the number of CPUs (see next section), but a more elegant and sustainable approach is to <a href="https://danischnider.wordpress.com/2018/04/03/keep-your-sql-simple-and-fast/">keep your SQL simple and fast</a>. This is the case on all databases on premises and in the Cloud.</p> <h1>8. Run Statements in Parallel</h1> <p>Queries and DML statements are executed in parallel by default in ADWC, if more than 1 CPU core is allocated. Parallel DML (PDML) is enabled by default for all sessions. Normally, PDML has to be enabled per session with an <em>ALTER SESSION ENABLE PARALLEL DML</em> command. This is not necessary in ADWC.</p> <p>The typical way of performance tuning in ADWC is to increase the number of CPUs and therefore the parallel degree of the executed SQL statements. Some call this the KIWI (“kill it with iron”) approach, Oracle calls it “elastic scaling”. The number of CPU cores can be assigned to your data warehouse environment at runtime. This works fine. The number of CPUs can be adjusted any time on the web interface of the Oracle Cloud Infrastructure. After changing the number to the new value, the system is scaled up or down. This takes a few minutes, but no interruption of services or restart of the database is required. The only detail you have to keep in mind: the number of CPU cores has an impact on the costs of the Cloud infrastructure.</p> <p><img title="ADWC_Scale_Up_Down.jpg" src="https://danischnider.files.wordpress.com/2018/07/iadwc_scale_up_down.jpg?w=556&#038;h=310" alt="ADWC Scale Up Down" width="556" height="310" border="0" /><br /><em>The number of CPU cores can be adjusted any time in the Autonomous Data Warehouse Cloud</em></p> <p>The degree of parallelism (DOP) is computed by the optimizer with the Auto DOP mechanism (<em>PARALLEL_DEGREE_POLICY = AUTO</em>). All initialization parameters for parallel execution are configured automatically and cannot be changed, not even at session level.</p> 
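<p>Even though these parameters cannot be changed, they can still be inspected. A minimal sketch, assuming your user is allowed to query V$PARAMETER (the selection of parameters is just an example):</p> <pre>-- Read-only check of the parallel execution settings configured by ADWC
SELECT name, value
  FROM v$parameter
 WHERE name IN ('parallel_degree_policy', 'parallel_max_servers', 'parallel_min_degree');</pre>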
<p><em>PARALLEL</em> hints are neither required nor recommended. By default, they are ignored in ADWC. But if you need them (or you think you need them), it is possible to enable them at session level:</p> <pre>ALTER SESSION SET OPTIMIZER_IGNORE_PARALLEL_HINTS = FALSE;</pre> <p>The parameter <em>OPTIMIZER_IGNORE_PARALLEL_HINTS</em> was introduced with Oracle 18c, but is available in ADWC, too (ADWC is currently a mixture of Oracle 12.2.0.1 and Oracle 18c). It is one of the few initialization parameters that can be modified in ADWC (see <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/experienced-database-users.html#GUID-7CF648C1-0822-4602-8ED1-6F5719D6779E">documentation</a>). By default, it is TRUE, so all parallel hints are ignored.</p> <h1>9. Perform Direct-Path INSERT</h1> <p>Because Parallel DML is enabled by default in ADWC, most INSERT and MERGE statements are executed as Direct-Path load operations. Only for serial DML statements (which only occur on an ADWC database with one CPU core) does the <em>APPEND</em> hint have to be added to the INSERT and MERGE statements. This is the only hint that is not ignored by default in ADWC.</p> <p>But pay attention: Parallel DML and even an <em>APPEND</em> hint do not guarantee a Direct-Path INSERT. If referential integrity is enabled on the target table, Direct-Path is disabled and a Conventional INSERT is performed. This can be avoided with reliable constraints, as described in the blog post <a href="https://danischnider.wordpress.com/2015/12/01/foreign-key-constraints-in-an-oracle-data-warehouse/">Foreign Key Constraints in an Oracle Data Warehouse</a>.</p> <h1>10. Gather Statistics after Loading each Table</h1> <p>Direct-Path load operations are not only much faster than Conventional DML statements, they have another good side effect in ADWC: Online statistics gathering was improved and is now able to gather object statistics automatically after each Direct-Path load operation. I explained this in my last blog post <a href="https://danischnider.wordpress.com/2018/07/11/gathering-statistics-in-the-autonomous-data-warehouse-cloud/">Gathering Statistics in the Autonomous Data Warehouse Cloud</a>. Only after Conventional DML statements is it still required to call DBMS_STATS to gather statistics. Unfortunately, this is not done (yet) automatically.</p> <h1>Conclusion</h1> <p>As you can see from the length of this blog post, the Autonomous Data Warehouse Cloud is not a completely self-configuring environment that solves all performance issues automatically. It is still important to know how the Oracle database works and how efficient ETL processes have to be designed. Set-based operations and reliable constraints are mandatory, and bad SQL statements will still be bad, even in an Autonomous Database.</p> <p>But there are many simplifications in ADWC. The consistent usage of Parallel DML and Direct-Path load operations, including online statistics gathering, makes it easier to implement fast ETL jobs. 
And many performance problems of ETL jobs are solved because no indexes are allowed.</p> Dani Schnider http://danischnider.wordpress.com/?p=594 Fri Jul 20 2018 09:29:32 GMT-0400 (EDT) Self-Service Data Transformation: Getting In-Depth with Oracle Data Flow https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-three <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-three" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/self-service%20part%203_22.jpg?t=1541832538128" alt="self-service part 3_22" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In part one, I talked about the common concepts and goals that Tableau Prep and Oracle Data Flow share. In part two, I looked at a brief overview of both tools and took an in-depth look at Tableau Prep.</p> <p>In this third post, let's dive deeper into Oracle Data Flow.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fself-service-data-transformation-part-three&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-three Tue Jul 17 2018 15:14:25 GMT-0400 (EDT) Self-Service Data Transformation: Getting In-Depth with Tableau Prep https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-two <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-two" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/self-service%20part%202_13.jpg?t=1541832538128" alt="self-service part 2_13" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In part one of this series, I shared an overview of the common concepts that Tableau Prep and Oracle Data Flow share as well as a brief look at the tools themselves. In part two, I want to take a more in-depth look at Tableau Prep and share my experiences using it.</p> <p>In my first example, I have three spreadsheets containing data collected from every World Cup Match from 1930 to 2014. 
One contains detailed information about each match individually.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fself-service-data-transformation-part-two&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-two Tue Jul 17 2018 11:53:51 GMT-0400 (EDT) Oracle Developer Champion http://www.oralytics.com/2018/07/oracle-developer-champion.html Yesterday evening I received an email titled 'Invitation to Developer Champion Program'.<br /><br />What a surprise!<br /><div style="text-align: center;"><img alt="Oracle dev champion" border="0" height="160" src="https://lh3.googleusercontent.com/-DrLlMdy6dN8/W0eC91un_1I/AAAAAAAAAeE/6OCN-wDgaa8-rZgzNpJXrJliigCeScd-QCHMYCw/oracle_dev_champion.png?imgmax=1600" title="oracle_dev_champion.png" width="250" /></div>The <a href="https://developer.oracle.com/devchampion">Oracle Developer Champion program</a> was set up just a year ago and is aimed at people who are active in generating content and sharing their knowledge on new technologies including cloud, microservices, containers, Java, open source technologies, machine learning and various types of databases.<br />For me, I fit into the machine learning, cloud and open source technologies areas, with a bit on chatbots and various types of databases. Well, I think I do!<br /><br />This made me look back over my activities for the past 12-18 months. As an <a href="http://www.oracle.com/technetwork/community/oracle-ace/index.html">Oracle ACE Director</a>, I have to record all my activities. I'd been aware that the past 12-18 months had been a bit quieter than previous years. But when I looked back at all the blog posts, articles for numerous publications, books, code contributions, etc., even I was impressed with what I had achieved, even though it was a quiet period for me.<br /><br />Membership of the <a href="https://developer.oracle.com/devchampion">Oracle Developer Champion program</a> is for one year, and the good people in the Oracle Developer Community (ODC) will re-evaluate what I, and the others in the program, have been up to and will determine if we can continue for another year.<br /><br />In addition to writing, contributing to projects, presenting, etc., Oracle Developer Champions typically have leadership roles in user groups, answer questions on forums and provide feedback to product managers.<br /><br />The list of existing Oracle Developer Champions is very impressive. 
I'm honoured to be joining these people.<br /><br />Click on the image to go to the Oracle Developer Champion website to find out more.<br /><div style="text-align: center;"><a href="https://developer.oracle.com/devchampion"><img alt="Screen Shot 2018 07 12 at 17 21 32" border="0" height="168" src="https://lh3.googleusercontent.com/-QCtNHUdnH2k/W0eC9Ve1IeI/AAAAAAAAAeA/zhL2bFfS8uoCV8qhOZWQn9rAD5Ni1spdgCHMYCw/Screen%2BShot%2B2018-07-12%2Bat%2B17.21.32.png?imgmax=1600" title="Screen Shot 2018-07-12 at 17.21.32.png" width="599" /></a> </div><br />And check out the <a href="https://apex.oracle.com/pls/apex/f?p=19297:3::IR_DEV_CHAMPS:NO:CIR,RIR">list of existing Oracle Developer Champions</a>.<br />&nbsp;<img alt="Oracle dev champion" border="0" height="160" src="https://lh3.googleusercontent.com/-DrLlMdy6dN8/W0eC91un_1I/AAAAAAAAAeE/6OCN-wDgaa8-rZgzNpJXrJliigCeScd-QCHMYCw/oracle_dev_champion.png?imgmax=1600" title="oracle_dev_champion.png" width="250" /> <img alt="O ACEDirectorLogo clr" border="0" height="100" src="https://lh3.googleusercontent.com/-HflTDNal8cE/W0eC-cIgdYI/AAAAAAAAAeI/Xoce4SGffzckAlKIjDdFOb559HQCatE-QCHMYCw/O_ACEDirectorLogo_clr.png?imgmax=1600" title="O_ACEDirectorLogo_clr.png" width="250" /> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-2594705535599083107 Thu Jul 12 2018 12:34:00 GMT-0400 (EDT) Self-Service Data Transformation: Intro to Oracle Data Flow & Tableau Prep https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-one <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-one" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/self-service%20data%20transformation%20part%201.jpg?t=1541832538128" alt="self-service data transformation part 1" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In the world of self-service analytics, Tableau and <strong><a href="https://www.us-analytics.com/hyperionblog/oracle-data-visualization-v4">Oracle Data Visualization</a></strong> are two tools that are often put on the same discussion platform. In the last year, the conversations surrounding these two tools have increased dramatically — with most of our clients using self-service analytics. In this blog, I am not going to do a comparison rundown between Tableau and Oracle DV. 
What I do want to show you is two similar tools which introduce exciting new possibilities: <strong>Tableau Prep </strong>and <strong>Oracle Data Flow</strong>.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fself-service-data-transformation-part-one&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-one Wed Jul 11 2018 14:42:26 GMT-0400 (EDT) Gathering Statistics in the Autonomous Data Warehouse Cloud https://danischnider.wordpress.com/2018/07/11/gathering-statistics-in-the-autonomous-data-warehouse-cloud/ <p>Optimizer statistics are essential for good execution plans and fast performance of SQL queries. Of course, this is also the case in the Autonomous Data Warehouse Cloud. But the handling of gathering statistics is slightly different from what we know from other Oracle databases.</p> <p><span id="more-585"></span></p> <p> </p> <p>For the past couple of days, I have been testing several features and behaviors of the Autonomous Data Warehouse Cloud (ADWC) to find out how useful this Cloud platform solution is for real DWH projects and what has to be considered for the development of a Data Warehouse. To simulate a typical scenario, I’m running incremental load jobs into multiple target tables several times per day. The example I use for this is a Data Vault schema for a craft beer brewery (if you want to know more about the data model, watch <a href="https://www.youtube.com/watch?v=Q1qj_LjEawc">this video</a> I recorded last year). In the simulated environment on ADWC, I have already sold 68 million beers &#8211; far more than we sell in our real micro brewery. But this is not the subject I want to write about in this blog post.</p> <p><img title="craft_beer_dv.jpg" src="https://danischnider.files.wordpress.com/2018/07/icraft_beer_dv.jpg?w=600&#038;h=285" alt="Craft beer dv" width="600" height="285" border="0" /></p> <p> </p> <p>More interesting than the data (which is mostly generated by DBMS_RANDOM) is the fact that no optimizer statistics have been gathered so far, although the system has been running for more than a week now. I play the role of a “naive ETL developer”, so I don’t care about such technical details. That’s what the Autonomous Data Warehouse should do for me.</p> <h1><span style="color:#373737;">Managing Optimizer Statistics in ADWC</span></h1> <p>For this blog post, I switch my role to that of the interested developer who wants to know why there are statistics available. A good starting point &#8211; as often &#8211; is to read the manual.</p> 
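<p>Before opening the documentation, the starting position is easy to verify in the dictionary. A quick check against one of the Hub tables of my example schema (the same dictionary view is used in the examples further down):</p> <pre>-- Shows whether the optimizer currently has statistics for the table
SELECT table_name, num_rows, last_analyzed
  FROM user_tab_statistics
 WHERE table_name = 'H_ORDER_ITEM';</pre>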
<p>In the documentation of ADWC, we can find the following statements in the section <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/manage-service.html#GUID-69906542-4DF6-4759-ABC1-1817D77BDB02">Managing Optimizer Statistics and Hints on Autonomous Data Warehouse Cloud</a>:</p> <blockquote> <p>Autonomous Data Warehouse Cloud gathers optimizer statistics automatically for tables loaded with direct-path load operations. &#8230; If you have tables modified using conventional DML operations you can run commands to gather optimizer statistics for those tables. &#8230;</p> </blockquote> <p>What does this mean exactly? Let’s look at some more details.</p> <h1>Statistics for ETL Jobs with Conventional DML</h1> <p>The automatic statistics gathering job that is executed regularly on a “normal” Oracle database does not run on ADWC. The job is enabled, but the maintenance windows of the scheduler are disabled by default:</p> <p><img title="auto_stats_job.jpg" src="https://danischnider.files.wordpress.com/2018/07/iauto_stats_job.jpg?w=438&#038;h=274" alt="Auto stats job" width="438" height="274" border="0" /></p> <p>This is a good decision, because many data warehouses are running ETL jobs in the time frame of the default windows. Statistics gathering in a data warehouse should always be part of the ETL jobs. This is also the case in ADWC. After loading data into a target table with a conventional DML operation (INSERT, UPDATE, MERGE), the optimizer statistics are gathered with a DBMS_STATS call:</p> <pre>BEGIN
   dbms_stats.gather_table_stats(USER, 'H_ORDER_ITEM');
END;</pre> <p>Only the schema and table name must be specified as parameters. For all other settings, the DBMS_STATS preferences are used. Four of them have different default values in the Autonomous Data Warehouse Cloud:</p> <ul> <li><strong>INCREMENTAL</strong> is set to TRUE (default: FALSE). This is only relevant for incremental statistics on partitioned tables. Currently, Partitioning is not supported on ADWC, so this preference has no impact.</li> <li><strong>INCREMENTAL_LEVEL</strong> is set to TABLE (default: PARTITION). This is relevant for partition exchange in combination with incremental statistics and is therefore currently not relevant either.</li> <li><strong>METHOD_OPT</strong> is set to ‘FOR ALL COLUMNS SIZE 254’ (default: … SIZE AUTO). With the default setting, histograms are only gathered if a column was used in a WHERE condition of a SQL query before. In ADWC, a histogram with up to 254 buckets is calculated for each column, independent of the queries that were executed so far. This allows more flexibility for ad-hoc queries and is suitable in a data warehouse environment.</li> <li><strong>NO_INVALIDATE</strong> is set to FALSE (default: DBMS_STATS.AUTO_INVALIDATE). For ETL jobs, it is important to set this parameter to FALSE (see my previous blog post <a href="https://danischnider.wordpress.com/2015/01/06/avoid-dbms_stats-auto_invalidate-in-etl-jobs/">Avoid dbms_stats.auto_invalidate in ETL jobs</a>). So, the preference setting is a very good choice for data warehouses.</li> </ul> 
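<p>If you want to verify these defaults on your own service, the effective values can be read with DBMS_STATS.GET_PREFS. A minimal sketch (the table name is from my example schema; any of your tables works the same way):</p> <pre>-- Read the effective DBMS_STATS preferences for a single table
SELECT dbms_stats.get_prefs('METHOD_OPT',    user, 'H_ORDER_ITEM') AS method_opt,
       dbms_stats.get_prefs('NO_INVALIDATE', user, 'H_ORDER_ITEM') AS no_invalidate,
       dbms_stats.get_prefs('INCREMENTAL',   user, 'H_ORDER_ITEM') AS incremental
  FROM dual;</pre>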
<p>The configuration of ADWC makes it very easy to gather optimizer statistics in your ETL jobs, but you still have to make sure that a DBMS_STATS call is included at the end of each ETL job.</p> <h1>Statistics for ETL Jobs with Direct-Path Loads</h1> <p>A better approach is to use Direct-Path INSERT statements. This is not only faster for large data sets, but makes it much easier to manage optimizer statistics. The reason is an Oracle 12c feature and two new undocumented parameters.</p> <p>Since Oracle 12.1, statistics are gathered automatically for a Direct-Path INSERT. This works only for empty tables, and no histograms are calculated, as explained in my previous blog post <a href="https://danischnider.wordpress.com/2015/12/23/online-statistics-gathering-in-oracle-12c/">Online Statistics Gathering in Oracle 12c</a>.</p> <p>In ADWC, two new undocumented parameters are available, both of which are set to TRUE by default:</p> <ul> <li>“<strong>_optimizer_gather_stats_on_load_all</strong>”: When this parameter is TRUE, online statistics are gathered even for a Direct-Path operation into a non-empty target table.</li> <li>“<strong>_optimizer_gather_stats_on_load_hist</strong>”: When this parameter is TRUE, histograms are calculated during online statistics gathering.</li> </ul> <p>The following code fragment shows this behavior: Before an incremental load into the Hub table H_ORDER_ITEM, the number of rows in the table statistics is 68386107. After inserting another 299041 rows, the table statistics are increased to 68685148 (= 68386107 + 299041).</p> <pre>SELECT table_name, num_rows, last_analyzed
  FROM user_tab_statistics
 WHERE table_name = 'H_ORDER_ITEM';

TABLE_NAME             NUM_ROWS LAST_ANALYZED
-------------------- ---------- -------------------
H_ORDER_ITEM           68386107 11.07.2018 09:37:04


INSERT /*+ append */ INTO h_order_item
      ( h_order_item_key
      , order_no
      , line_no
      , load_date
      , record_source
      )
SELECT s.h_order_item_key
     , s.order_no
     , s.line_no
     , v_load_date
     , c_record_source
  FROM v_stg_order_details s
  LEFT OUTER JOIN h_order_item t
    ON (s.h_order_item_key = t.h_order_item_key)
 WHERE t.h_order_item_key IS NULL;

299041 rows inserted.

COMMIT;

SELECT table_name, num_rows, last_analyzed
  FROM user_tab_statistics
 WHERE table_name = 'H_ORDER_ITEM';

TABLE_NAME             NUM_ROWS LAST_ANALYZED
-------------------- ---------- -------------------
H_ORDER_ITEM           68685148 11.07.2018 14:11:09</pre> <p>The column statistics (including histograms) are adapted for the target table, too. Only index statistics are not affected during online statistics gathering &#8211; but indexes in ADWC are a different story anyway. I will write about it in a separate blog post.</p> <h1>Conclusion</h1> <p>Statistics gathering is still important in the Autonomous Data Warehouse Cloud, and we have to take care that the optimizer statistics are updated frequently. For Direct-Path operations, this works automatically, so we have nothing to do anymore. Only for conventional DML operations is it still required to call DBMS_STATS after each ETL job, but the default configuration of ADWC makes this very easy.</p> Dani Schnider http://danischnider.wordpress.com/?p=585 Wed Jul 11 2018 11:46:42 GMT-0400 (EDT) Oracle Analytics Roadmap: OBIEE & OAC https://www.us-analytics.com/hyperionblog/oracle-analytics-roadmap <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/oracle-analytics-roadmap" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/oracle%20analytics%20roadmap.jpg?t=1541832538128" alt="oracle analytics roadmap" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>After attending Kscope18, one thing is still extremely clear — Oracle continues and will continue to be all about the cloud. For example, the title of the Kscope presentation detailing what’s coming to OBIEE is “Oracle Analytics: How to Get to the Cloud and the Future of On-Premises.”</p> <p>While that does tell you there’s still a future in on-prem Oracle BI, it's also clear that all the innovation will be put into the cloud. 
In this blog post, we’ll look at…</p> <ul> <li>Innovative features of <a href="/hyperionblog/oracle-analytics-cloud-questions">Oracle Analytics Cloud (OAC</a>)</li> <li>The future of OBIEE</li> </ul> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Foracle-analytics-roadmap&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/oracle-analytics-roadmap Thu Jul 05 2018 14:04:37 GMT-0400 (EDT) Oracle BI Commentary Tools: Open Source or Enterprise? https://www.us-analytics.com/hyperionblog/oracle-bi-commentary-tools <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/oracle-bi-commentary-tools" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/oracle%20bi%20commentary%20-%20enterprise%20vs%20open%20source.jpg?t=1541832538128" alt="oracle bi commentary - enterprise vs open source" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Commentary support in Oracle BI tools has been a commonly requested feature for many years. As with many absent features in our beloved tools, the community has come together to develop methods to implement this functionality themselves. Some of these approaches, such as leveraging Writeback, implement out-of-the-box Oracle BI features.</p> <p>More commonly you’ll find custom-built software extensions or free “open source” applications that provide OBIEE comments as extensions.</p> <p>In this blog post, we’ll look at the difference between custom-built extensions and open-source applications. We’ll also consider two different tools — one open source, another a custom-built extension.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Foracle-bi-commentary-tools&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Nicholas Padgett https://www.us-analytics.com/hyperionblog/oracle-bi-commentary-tools Thu Jul 05 2018 10:43:00 GMT-0400 (EDT) External Tables in Autonomous Data Warehouse Cloud https://danischnider.wordpress.com/2018/07/04/external-tables-in-autonomous-data-warehouse-cloud/ <p>In Oracle Autonomous Data Warehouse Cloud, External Tables can be used to read files from the cloud-based Object Storage. But take care to do it the official way, otherwise you will see a surprise, but no data.</p> <p><span id="more-580"></span></p> <p>Together with my Trivadis colleague <a href="https://antognini.ch/about/">Christian Antognini</a>, I currently have the opportunity to do several tests in the Autonomous Data Warehouse Cloud (ADWC). We are checking out the features and the performance of Oracle’s new cloud solution for data warehouses. 
For the kick-off of this project, we met in an idyllic setting in the garden of Chris’ house in Ticino, the southern part of Switzerland. So, I was able to work on a real external table.</p> <p><img title="external_table.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexternal_table.jpg?w=600&#038;h=456" alt="External table" width="600" height="456" border="0" /><br /><em>Testing External Tables in the Cloud on an external table with a view of the clouds.</em></p> <p>A typical way to load data files into a data warehouse is to create an External Table for the file and then read the data from this table into a stage table. In ADWC, the data files must first be copied to a specific landing zone, a <em>Bucket</em> in the Oracle Cloud Infrastructure <em>Object Storage</em> service. The first steps to do this are described in the <a href="http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/adwc/OBE_Loading%20Your%20Data/loading_your_data.html">Oracle Autonomous Data Warehouse Cloud Service Tutorial</a>. The Oracle Cloud Infrastructure command line interface <a href="https://docs.cloud.oracle.com/iaas/Content/API/Concepts/cliconcepts.htm">CLI</a> can also be used to upload the files.</p> <p>The tutorial uses the procedure <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/dbmscloud-reference.html#GUID-9428EA51-5DDD-43C2-B1F5-CD348C156122"><em>DBMS_CLOUD.copy_data</em></a> to load the data into the target tables. The procedure creates a temporary external table in the background and drops it at the end. Another procedure, <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/dbmscloud-reference.html#GUID-2AFBEFA4-992E-4F53-96DB-F560084C7DA9"><em>DBMS_CLOUD.create_external_table</em></a>, is available to create a reusable External Table on a file in the Object Storage. But is it possible to create an External Table manually, too? To check this, I extracted the DDL command of the table CHANNELS (created with <em>DBMS_CLOUD.create_external_table</em>):</p> <p><img title="exttab_ddl.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexttab_ddl.jpg?w=599&#038;h=279" alt="Exttab ddl" width="599" height="279" border="0" /></p> <p>Then, I created a new table CHANNELS_2 with exactly the same definition, only with a different name. It seems to be obvious that both tables should contain the same data. But this is not the case: table CHANNELS_2 returns no data:</p> <p><img title="exttab_query.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexttab_query1.jpg?w=490&#038;h=518" alt="Exttab query" width="490" height="518" border="0" /></p> <p>First, I was confused. Then I thought it had to do with missing privileges. Finally, I assumed I was dazed because of the heat in Chris’ garden. But the reason is a different one: CHANNELS_2 is not an External Table, but a normal heap-organized table. Even though it was created with an ORGANIZATION EXTERNAL clause! Extracting the DDL command shows what happened:</p> <p><img title="exttab_ddl_2.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexttab_ddl_2.jpg?w=598&#038;h=196" alt="Exttab ddl 2" width="598" height="196" border="0" /></p> <p>What is the reason for this behavior? 
The explanation can be found in Appendix B: <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/experienced-database-users.html#GUID-58EE6599-6DB4-4F8E-816D-0422377857E5">Autonomous Data Warehouse Cloud for Experienced Oracle Database Users</a> (the most interesting part of the ADWC documentation): Most clauses of the CREATE TABLE command are either ignored or not allowed in the Autonomous Data Warehouse. In ADWC, you cannot manually define physical properties such as tablespace name or storage parameters. No additional clauses for logging, compression, partitioning, in-memory, etc. are allowed. They are either not supported in ADWC (like Partitioning), or they are automatically handled by the Autonomous Database. According to the documentation, creating an External Table this way should not be allowed (i.e. it should return an error message), but instead, the clause is just ignored. The same happens for index-organized tables, by the way.</p> <h1>Conclusion</h1> <p>External Tables are supported (and even recommended) in the Autonomous Data Warehouse Cloud, but they cannot be created manually &#8211; we are in an <span style="text-decoration:underline;">Autonomous</span> Database.</p> <p>If you follow the steps explained in the documentation and use the provided procedures in package DBMS_CLOUD, everything works fine. If you try to do it the “manual way”, you will get unexpected behavior and probably lose a lot of time trying to find your data in the files.</p> <p>The PL/SQL package <em>DBMS_CLOUD</em> contains many additional useful procedures for file handling in the Cloud, but not all of them are documented. A complete reference of all its procedures with some examples can be found in Christian Antognini’s blog post <a href="https://antognini.ch/2018/07/dbms_cloud-package-a-reference-guide/">DBMS_CLOUD Package – A Reference Guide</a>.</p> Dani Schnider http://danischnider.wordpress.com/?p=580 Wed Jul 04 2018 17:29:55 GMT-0400 (EDT)