ODTUG Aggregator ODTUG Blogs http://localhost:8080 Tue, 14 Aug 2018 02:52:27 +0000 http://aggrssgator.com/ Two Minute Tutorial: How to Access the OAC RPD https://www.us-analytics.com/hyperionblog/how-to-access-the-oac-rpd <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/how-to-access-the-oac-rpd" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/how%20to%20access%20oac%20rpd.jpg?t=1533950236061" alt="how to access oac rpd" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this two-minute tutorial, I’ll walk you through how to access the OAC RPD using two methods:</p> <ul> <li>Accessing it through the Admin Tool</li> <li>SSHing into the server</li> </ul> Becky Wagner https://www.us-analytics.com/hyperionblog/how-to-access-the-oac-rpd Fri Aug 03 2018 15:25:22 GMT-0400 (EDT) PBCS and EPBCS Updates (August 2018): Incremental Export and Import Behavior Change, Updated Vision Sample Application & More https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-august-updates <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-august-updates" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/pbcs%20and%20epbcs%20august%202018%20updates.jpg?t=1533950236061" alt="pbcs and epbcs august 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The August updates for Oracle's <a href="https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications">Planning &amp; Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS)</a> have arrived! This blog post outlines several new features, including an incremental export and import behavior change, an updated Vision sample application, and more.</p> <p><em>The monthly update for PBCS and EPBCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> Michelle Heath https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-august-updates Fri Aug 03 2018 13:11:46 GMT-0400 (EDT) FCCS Updates (August 2018): Enhancements to Close Manager, Ability to Create Journals for Entities with Different Parents & More
https://www.us-analytics.com/hyperionblog/fccs-updates-august-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/fccs-updates-august-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/fccs%20update%20august%202018.jpg?t=1533950236061" alt="fccs update august 2018" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The August updates for <a href="https://www.us-analytics.com/hyperionblog/faq-oracle-financial-consolidation-and-close-cloud-service-fccs">Oracle's Financial Consolidation and Close Cloud Service (FCCS)</a> are here! This blog post outlines new features, including enhancements made to Close Manager, the ability to create journals for entities with different parents, and more.</p> <p><em>The monthly update for FCCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> Michelle Heath https://www.us-analytics.com/hyperionblog/fccs-updates-august-2018 Fri Aug 03 2018 12:17:09 GMT-0400 (EDT) ARCS Updates (August 2018): Changes to Filtering on Unmatched Transactions in Transaction Matching, Considerations & More https://www.us-analytics.com/hyperionblog/arcs-product-update-august-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/arcs-product-update-august-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/arcs%20august%202018%20updates.jpg?t=1533950236061" alt="arcs august 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The August updates for Oracle's <a href="https://www.us-analytics.com/hyperionblog/faq-account-reconciliation-cloud-service-arcs">Account Reconciliation Cloud Service (ARCS)</a> are here. In this blog post, we’ll outline new features in ARCS, including changes to filtering on unmatched transactions in Transaction Matching, considerations, and more.</p> <p>We’ll let you know any time there are updates to ARCS or any other Oracle EPM cloud products.
Check the <a href="https://www.us-analytics.com/hyperionblog">US-Analytics Oracle EPM &amp; BI Blog</a> every month.</p> <p><em>The monthly update for Oracle ARCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> Michelle Heath https://www.us-analytics.com/hyperionblog/arcs-product-update-august-2018 Thu Aug 02 2018 17:18:24 GMT-0400 (EDT) EPRCS Updates (August 2018): Drill to Source Data in Management Reporting, Improved Variable Panel Display in Smart View & More https://www.us-analytics.com/hyperionblog/eprcs-updates-august-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/eprcs-updates-august-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/eprcs%20august%202018%20updates.jpg?t=1533950236061" alt="eprcs august 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this blog, we'll cover the August updates for <a href="https://www.us-analytics.com/hyperionblog/enterprise-performance-reporting-cloud">Oracle Enterprise Performance Reporting Cloud Service (EPRCS)</a>, including new features and considerations.</p> <p><em>The monthly update for EPRCS will occur on Friday, August 17 during your normal daily maintenance window.</em></p> Michelle Heath https://www.us-analytics.com/hyperionblog/eprcs-updates-august-2018 Thu Aug 02 2018 16:47:45 GMT-0400 (EDT) 2019 Leadership Program - Now Accepting Applications https://www.odtug.com/p/bl/et/blogaid=811&source=1 Are you looking to invest in your professional development? Do you enjoy the ODTUG community and are you looking to become more involved? The ODTUG leadership program is a great way to accomplish both goals and broaden your network. ODTUG https://www.odtug.com/p/bl/et/blogaid=811&source=1 Thu Aug 02 2018 11:57:06 GMT-0400 (EDT) A selection of Hadoop Docker Images http://www.oralytics.com/2018/08/a-selection-of-hadoop-docker-images.html <p>When it comes to big data platforms, one of the biggest challenges is getting a test environment set up where you can try out the various components. There are a few approaches to doing this. The first is to set up your own virtual machine or some other container with the software.
But it can be challenging to get even a handful of big data applications to work together on one machine.</p> <p>But there is an alternative approach. You can use one of the preconfigured environments from the likes of AWS, Google, Azure, Oracle, etc. In most cases, though, these come with a cost. Maybe not in the beginning, but after a little use you will need to start handing over some dollars. They also require you to have access to the cloud, i.e. an internet connection, to run them. Again, not always possible!</p> <p>So what if you want to have a local big data and Hadoop environment on your own PC or laptop, or in your home or office test lab? There are a lot of Virtual Machines available, but most of these have a sizeable hardware requirement, particularly for memory, with many requiring 16+ GB of RAM! In more recent times this might not be a problem, but for many it still is: your machine doesn't have that amount of memory, or doesn't allow you to upgrade.</p> <p>What can you do?</p> <p>Have you considered using Docker? There are many different Hadoop Docker images available, and these are not as resource or hardware hungry as the Virtual Machines.</p> <p>Here is a list of some that I've tried out and you might find useful.</p> <p><strong><a href="https://hub.docker.com/r/cloudera/quickstart/">Cloudera QuickStart image</a></strong></p><p>You may have tried their VM; now go try the Cloudera QuickStart docker image.</p><p><a href="https://blog.cloudera.com/blog/2015/12/docker-is-the-new-quickstart-option-for-apache-hadoop-and-cloudera/">Read about it here.</a></p> <p>Check out <a href="https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=hadoop&starCount=0">Docker Hub</a> for lots and lots of images.</p> <p>Docker Hub is not the only place to get Hadoop Docker images. There are lots on GitHub. Just do a quick <a href="https://www.google.com/search?q=hadoop+docker+images+on+github&ie=utf-8&oe=utf-8&client=firefox-b-ab">Google search</a> to find the many, many, many images.</p> <p>These Docker Hadoop images are a great way for you to try out these Big Data platforms and environments with the minimum of resources.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-6708294390713491168 Thu Aug 02 2018 11:31:00 GMT-0400 (EDT)
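<p>As a rough illustration of the Docker approach described in the post above, here is a minimal sketch of pulling and starting the Cloudera QuickStart image. The image name and run options follow the Cloudera documentation linked in the post; the published ports (Hue on 8888, Cloudera Manager on 7180) are assumptions you may need to adjust for your own setup.</p> <pre><code># Pull the Cloudera QuickStart image (several GB, so allow time for the download)
docker pull cloudera/quickstart:latest

# Start a single-node cluster in an interactive container;
# --privileged is needed by some of the bundled services
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -p 8888:8888 -p 7180:7180 \
  cloudera/quickstart /usr/bin/docker-quickstart</code></pre>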
Tutorial: Updating Connection Pools in OAC &amp; OBIEE 12c https://www.us-analytics.com/hyperionblog/updating-connection-pools-tutorial <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/updating-connection-pools-tutorial" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/updating%20connection%20pools%20tutorial.jpg?t=1533950236061" alt="updating connection pools tutorial" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>I recently ran into an interesting issue with a client — each RPD connected to different databases based on the environment (Dev, Test and Production). Due to the client’s security policy, OBIEE developers were not permitted to have the passwords for the data sources.</p> <p>To migrate the RPD, a member of the DBA team must be contacted to input the passwords for the connection pools. At times, a DBA with available bandwidth can be difficult to locate (even for a few minutes), and the existing ticketing system does not lend itself to “on the fly” RPD promotions (such as Dev to Test).</p> <p>If only we had a way to apply connection pools to the RPD while keeping the connection pool information secure…</p> Kevin Jacox https://www.us-analytics.com/hyperionblog/updating-connection-pools-tutorial Wed Aug 01 2018 13:55:21 GMT-0400 (EDT) DV2 Sequences, Hash Keys, Business Keys – Candid Look http://danlinstedt.com/allposts/datavaultcat/dv2-keys-pros-cons/ Primary Key Options for Data Vault 2.0 This entry is a candid look (technical, unbiased view) at the three alternative primary key options in a Data Vault 2.0 Model. There are pros and cons to each selection. I hope you enjoy this factual entry. (C) Copyright 2018 Dan Linstedt all rights reserved, NO reprints allowed [&#8230;] Dan Linstedt http://danlinstedt.com/?p=2986 Mon Jul 30 2018 10:13:12 GMT-0400 (EDT) Oracle 18c Grid Infrastructure Upgrade https://gavinsoorma.com/2018/07/oracle-18c-grid-infrastructure-upgrade/ <h3><span style="color: #ff0000;">Upgrade Oracle 12.1.0.2 Grid Infrastructure to 18c </span></h3> <p><strong>Download the 18c Grid Infrastructure software (18.3)</strong></p> <p><a href="https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html">https://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle18c-linux-180000-5022980.html</a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png"><img class="aligncenter size-full wp-image-8220" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png" alt="" width="774" height="415" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18.png 774w, https://gavinsoorma.com/wp-content/uploads/2018/07/18-300x161.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18-768x412.png 768w" sizes="(max-width: 774px) 100vw, 774px" /></a></p> <p>&nbsp;</p> <p><strong>Prerequisites</strong></p> <ul> <li>Apply the patch <strong>21255373</strong> to the 12.1.0.2 Grid Infrastructure software home</li> <li>Edit the /etc/security/limits.conf file and add the lines:</li> </ul> <p>oracle soft stack 10240<br /> grid   soft stack 10240</p> <p>&nbsp;</p> <p><strong>Notes</strong></p> <ul> <li>Need to have at least 10 GB of free space in the $ORACLE_BASE directory</li> <li>The unzipped 18c Grid Infrastructure software occupies around 11 GB of disk space &#8211; a big increase on the earlier versions</li> <li>The Grid Infrastructure upgrade can be performed in a rolling fashion &#8211; configure Batches for this</li> <li>We can see the difference in the software version between the RAC nodes while the GI upgrade is in progress:</li> </ul> <p>During Upgrade:</p> <p>[root@rac01 trace]# cd /u02/app/18.0.0/grid/bin</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs softwareversion</strong></p> <p>Oracle Clusterware version on node [rac01] is
[18.0.0.0.0]</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs softwareversion -all</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>Oracle Clusterware version on node [rac02] is [12.1.0.2.0]</p> <p>[root@rac01 bin]#<strong> ./crsctl query crs activeversion</strong></p> <p>Oracle Clusterware active version on the cluster is [12.1.0.2.0]</p> <p>[root@rac01 bin]#</p> <p>&nbsp;</p> <p>After Upgrade:</p> <p>[root@rac01 bin]# <strong>./crsctl query crs activeversion</strong></p> <p>Oracle Clusterware active version on the cluster is [18.0.0.0.0]</p> <p>[root@rac01 bin]# <strong>./crsctl query crs softwareversion -all</strong></p> <p>Oracle Clusterware version on node [rac01] is [18.0.0.0.0]</p> <p>Oracle Clusterware version on node [rac02] is [18.0.0.0.0]</p> <p>&nbsp;</p> <ul> <li>The minimum memory requirement is 8 GB &#8211; same as 12c Release 2</li> <li>Got an error PRVF-5600 related to /etc/resolv.conf stating the file cannot be parsed as some lines are in an improper format &#8211; ignored the error because the format of the file is correct.</li> </ul> <p>[grid@rac01 grid]$ cat /etc/resolv.conf<br /> # Generated by NetworkManager<br /> search localdomain  rac.localdomain</p> <p>nameserver 192.168.56.102</p> <p>options timeout:3<br /> options retries:1</p> <p>&nbsp;</p> <p><strong>Create the directory structure on both RAC nodes</strong></p> <p>[root@rac01 app]# su &#8211; grid</p> <p>[grid@rac01 ~]$ cd /u02/app/18.1.0/</p> <p>[grid@rac01 ~]$ cd /u02/app</p> <p>[grid@rac01 app]$ mkdir 18.1.0</p> <p>[grid@rac01 app]$ cd 18.1.0/</p> <p>[grid@rac01 18.0.0]$ mkdir grid</p> <p>[grid@rac01 18.0.0]$ cd grid</p> <p>[grid@rac01 grid]$ ssh grid@rac02</p> <p>Last login: Sun Jul 29 11:22:38 2018 from rac01.localdomain</p> <p>[grid@rac02 ~]$ cd /u02/app</p> <p>[grid@rac02 app]$ mkdir 18.1.0</p> <p>[grid@rac02 app]$ cd 18.1.0/</p> <p>[grid@rac02 18.0.0]$ mkdir grid</p> <p>&nbsp;</p> <p><strong>Unzip the 18c GI Software</strong></p> <p>[grid@rac01 ~]$ cd /u02/app/18.1.0/grid</p> <p>[grid@rac01 grid]$ unzip -q /media/sf_software/LINUX.X64_180000_grid_home.zip</p> <p>&nbsp;</p> <p><strong>Execute gridSetup.sh</strong></p> <p>[grid@rac01 18.0.0]$ export DISPLAY=:0.0</p> <p>[grid@rac01 18.0.0]$ <strong>./gridSetup.sh</strong></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png"><img class="aligncenter size-full wp-image-8196" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png" alt="" width="612" height="382" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18a.png 612w, https://gavinsoorma.com/wp-content/uploads/2018/07/18a-300x187.png 300w" sizes="(max-width: 612px) 100vw, 612px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png"><img class="aligncenter size-full wp-image-8197" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18b.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18b-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18b-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png"><img class="aligncenter size-full wp-image-8198" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png" alt="" width="794" height="595" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18c.png 794w,
https://gavinsoorma.com/wp-content/uploads/2018/07/18c-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18c-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png"><img class="aligncenter size-full wp-image-8199" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png" alt="" width="800" height="599" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18d.png 800w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-768x575.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18d-627x470.png 627w" sizes="(max-width: 800px) 100vw, 800px" /></a></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png"><img class="aligncenter size-full wp-image-8200" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png" alt="" width="794" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18e.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18e-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png"><img class="aligncenter size-full wp-image-8201" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png" alt="" width="801" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18f.png 801w, https://gavinsoorma.com/wp-content/uploads/2018/07/18f-300x223.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18f-768x571.png 768w" sizes="(max-width: 801px) 100vw, 801px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png"><img class="aligncenter size-full wp-image-8202" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png" alt="" width="794" height="595" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18g.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18g-627x470.png 627w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png"><img class="aligncenter size-full wp-image-8203" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png" alt="" width="802" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18h.png 802w, https://gavinsoorma.com/wp-content/uploads/2018/07/18h-300x223.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18h-768x571.png 768w" sizes="(max-width: 802px) 100vw, 802px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png"><img class="aligncenter size-full wp-image-8204" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png" alt="" width="797" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18i.png 797w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18i-768x574.png 768w, 
https://gavinsoorma.com/wp-content/uploads/2018/07/18i-627x470.png 627w" sizes="(max-width: 797px) 100vw, 797px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png"><img class="aligncenter size-full wp-image-8205" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18j.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18j-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18j-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png"><img class="aligncenter size-full wp-image-8206" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png" alt="" width="802" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18k.png 802w, https://gavinsoorma.com/wp-content/uploads/2018/07/18k-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18k-768x573.png 768w" sizes="(max-width: 802px) 100vw, 802px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png"><img class="aligncenter size-full wp-image-8207" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png" alt="" width="798" height="600" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18l.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-768x577.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18l-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png"><img class="aligncenter size-full wp-image-8208" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png" alt="" width="797" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18m.png 797w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18m-627x470.png 627w" sizes="(max-width: 797px) 100vw, 797px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png"><img class="aligncenter size-full wp-image-8209" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png" alt="" width="798" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18n.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18n-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png"><img class="aligncenter size-full wp-image-8210" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png" alt="" width="798" height="599" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18o.png 798w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-300x225.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-768x576.png 768w, https://gavinsoorma.com/wp-content/uploads/2018/07/18o-627x470.png 627w" sizes="(max-width: 798px) 100vw, 798px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a 
href="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png"><img class="aligncenter size-full wp-image-8211" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png" alt="" width="799" height="601" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18p.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18p-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18p-768x578.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png"><img class="aligncenter size-full wp-image-8212" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png" alt="" width="793" height="601" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18q.png 793w, https://gavinsoorma.com/wp-content/uploads/2018/07/18q-300x227.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18q-768x582.png 768w" sizes="(max-width: 793px) 100vw, 793px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png"><img class="aligncenter size-full wp-image-8213" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png" alt="" width="794" height="600" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18r.png 794w, https://gavinsoorma.com/wp-content/uploads/2018/07/18r-300x227.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18r-768x580.png 768w" sizes="(max-width: 794px) 100vw, 794px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png"><img class="aligncenter size-full wp-image-8214" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png" alt="" width="799" height="597" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18s.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18s-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18s-768x574.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png"><img class="aligncenter size-full wp-image-8215" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png" alt="" width="793" height="598" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18t.png 793w, https://gavinsoorma.com/wp-content/uploads/2018/07/18t-300x226.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18t-768x579.png 768w" sizes="(max-width: 793px) 100vw, 793px" /></a></p> <p>&nbsp;</p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png"><img class="aligncenter size-full wp-image-8216" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png" alt="" width="799" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18u.png 799w, https://gavinsoorma.com/wp-content/uploads/2018/07/18u-300x224.png 300w, https://gavinsoorma.com/wp-content/uploads/2018/07/18u-768x573.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></a></p> <p>&nbsp;</p> <p><strong>ASM Configuration Assistant 18c</strong></p> <p>&nbsp;</p> <p><a href="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png"><img class="aligncenter size-full wp-image-8217" src="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png" alt="" width="949" height="596" srcset="https://gavinsoorma.com/wp-content/uploads/2018/07/18v.png 949w, https://gavinsoorma.com/wp-content/uploads/2018/07/18v-300x188.png 300w, 
https://gavinsoorma.com/wp-content/uploads/2018/07/18v-768x482.png 768w" sizes="(max-width: 949px) 100vw, 949px" /></a></p> <p>&nbsp;</p> <p><strong>GIMR pluggable database upgraded to 18c</strong></p> <p>&nbsp;</p> <pre>[grid@rac01 bin]$ export ORACLE_SID=-MGMTDB
[grid@rac01 bin]$ pwd
/u02/app/18.0.0/grid/bin
[grid@rac01 bin]$ ./sqlplus sys as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Sun Jul 29 22:09:17 2018
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.

Enter password:
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL&gt; select name,open_mode from v$pdbs;

NAME
--------------------------------------------------------------------------------
OPEN_MODE
----------
PDB$SEED
READ ONLY

<strong>GIMR_DSCREP_10</strong>
READ WRITE

SQL&gt; alter session set container=GIMR_DSCREP_10;

Session altered.

SQL&gt; select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
SYSGRIDHOMEDATA
SYSCALOGDATA
SYSMGMTDATA
SYSMGMTDATADB
SYSMGMTDATACHAFIX
SYSMGMTDATAQ

11 rows selected.

SQL&gt; select file_name from dba_data_files where tablespace_name='SYSMGMTDATA';

FILE_NAME
--------------------------------------------------------------------------------
+OCR/_MGMTDB/7224A7DF6CB92239E0536438A8C03F3A/DATAFILE/sysmgmtdata.281.982792479

SQL&gt; </pre> Gavin Soorma https://gavinsoorma.com/?p=8218 Mon Jul 30 2018 01:05:29 GMT-0400 (EDT) Building dynamic ODI code using Oracle metadata dictionary https://devepm.com/2018/07/27/building-dynamic-odi-code-using-oracle-metadata-dictionary/ Hi all, today’s post will be about how ODI can be used to generate any kind of SQL statement using Oracle metadata tables. We always like to say that ODI is way more than just an ETL tool and that people need to start thinking about ODI as a full development platform, where [&#8230;] radk00 http://devepm.com/?p=1713 Fri Jul 27 2018 19:28:56 GMT-0400 (EDT)
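<p>To give a flavour of the metadata-dictionary technique described in the ODI post above, the sketch below builds a column list for a table directly from the Oracle data dictionary. It is an illustration only: the owner and table name (SCOTT.EMP) are placeholders, not taken from the post, but the same pattern can be embedded in ODI procedures or knowledge modules.</p> <pre><code>-- Minimal sketch: generate a SELECT statement for one table
-- straight from the ALL_TAB_COLUMNS dictionary view
SELECT 'SELECT '
       || LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
       || ' FROM ' || owner || '.' || table_name AS generated_sql
FROM   all_tab_columns
WHERE  owner = 'SCOTT'
AND    table_name = 'EMP'
GROUP  BY owner, table_name;</code></pre>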
Automating Backup & Recovery for Oracle EPM Cloud [Tutorial] https://www.us-analytics.com/hyperionblog/pbcs-automation-using-epm-automate-and-powershell <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-automation-using-epm-automate-and-powershell" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/Backup%20and%20Recovery%20Strategy%20graphic.jpg?t=1533950236061" alt="Backup and Recovery Strategy graphic" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>What happens when you have data loss or a devastating change to your Oracle Cloud application? Is there an undo button or safety net to help you easily bounce back?</p> <p>We want to proactively help you with this topic, whether that's preventing minor or major data loss, being prepared in case of application corruption, or simply wanting an undo for your application changes. You may also want to surgically undo a change to a specific application artifact, like a form or a setting, or restore a specific slice of data from several weeks back. Can you call Oracle for help, or should you do it yourself?</p> Jeff Price https://www.us-analytics.com/hyperionblog/pbcs-automation-using-epm-automate-and-powershell Thu Jul 26 2018 12:40:09 GMT-0400 (EDT) Managing Metadata in OBIEE 12c [Video Tutorial] https://www.us-analytics.com/hyperionblog/managing-metadata-in-obiee-12c <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/managing-metadata-in-obiee-12c" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/Managing-metadata-in-OBIEE.png?t=1533950236061" alt="Managing-metadata-in-OBIEE" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this 5-minute video tutorial, learn how system admins manage metadata in OBIEE 12c, including how to upload RPD files using the command line.</p> Matthew Walding https://www.us-analytics.com/hyperionblog/managing-metadata-in-obiee-12c Thu Jul 26 2018 12:37:00 GMT-0400 (EDT) Starting & Stopping OBIEE Components [Video Tutorial] https://www.us-analytics.com/hyperionblog/starting-stopping-obiee-components-12c <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/starting-stopping-obiee-components-12c" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/starting-and-stoping-obiee-components.png?t=1533950236061" alt="starting-and-stoping-obiee-components" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this quick video tutorial, learn the options you have for starting, stopping, and viewing the status of OBIEE components in 12c.</p> Matthew Walding https://www.us-analytics.com/hyperionblog/starting-stopping-obiee-components-12c Thu Jul 26 2018 12:11:00 GMT-0400 (EDT)
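<p>For reference alongside the video above, OBIEE 12c ships command-line scripts for starting, stopping, and checking components; a minimal sketch is shown below. The domain path is an assumption based on a default installation, so adjust it to your own DOMAIN_HOME.</p> <pre><code># Assumed default domain location; change to match your environment
DOMAIN_HOME=/u01/app/oracle/product/obiee12c/user_projects/domains/bi

# Check the status of the OBIEE system components (BI Server, Presentation Services, etc.)
$DOMAIN_HOME/bitools/bin/status.sh

# Stop and then start the whole stack
$DOMAIN_HOME/bitools/bin/stop.sh
$DOMAIN_HOME/bitools/bin/start.sh</code></pre>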
Making our way into Dremio https://www.rittmanmead.com/blog/2018/07/untitled/ <div class="kg-card-markdown"><p>In an analytics system, we typically have an Operational Data Store (ODS) or staging layer; a performance layer or some data marts; and on top, there would be an exploration or reporting tool such as Tableau or Oracle's OBIEE. This architecture can lead to latency in decision making, creating a gap between analysis and action. Data preparation tools like Dremio can address this.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/GUID-65DF513D-DFC8-4046-8AA5-24292AF5942F-default-1.png" alt=""></p> <p>Dremio is a Data-as-a-Service platform allowing users to quickly query data, directly from the source or in any layer, regardless of its size or structure. The product makes use of Apache Arrow, allowing it to virtualise data through an in-memory layer, creating what is called a Data Reflection.</p> <p>The intent of this post is an introduction to Dremio; it provides a step-by-step guide on how to query data from Amazon's S3 platform.</p> <p>I wrote this post using my MacBook Pro; Dremio is supported on MacOS. To install it, I needed to make some configuration changes due to the Java version. The latest version of Dremio uses Java 1.8. If you have a more recent Java version installed, you’ll need to make some adjustments to the Dremio configuration files.</p> <p>Let's start by downloading and installing Dremio. It is available for multiple platforms and can be downloaded from <a href="https://www.dremio.com/download/">here</a>.</p> <p>Dremio uses Java 1.8, so if you have a different version installed, make sure you install Java 1.8 and edit <code>/Applications/Dremio.app/Contents/Java/dremio/conf/dremio-env</code> to point to the Java 1.8 home directory.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-04-at-09.17.15.png" alt=""></p> <p>After that you should be able to start Dremio like any other MacOS application and access <code>http://localhost:9047</code>.</p> <img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-11.55.25.png" style="width: 360px; height:280px"> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-10.35.55.png" alt=""></p> <h3 id="configurings3source">Configuring S3 Source</h3> <p>Dremio can connect to relational databases (both commercial and open source), NoSQL, Hadoop, cloud storage, ElasticSearch, among others. However, the scope of this post is to use a well-known storage service, an S3 bucket (more details can be found <a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html">here</a>), and show the query capabilities of Dremio against unstructured data.</p> <p>For this demo we're using Garmin CSV activity data that can be easily <a href="https://connect.garmin.com/">downloaded</a> from the Garmin activity page.</p> <p>Here is an example of a CSV Garmin activity.
If you don't have a Garmin account you can always replicate the data below.</p> <pre><code>act,runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories
1,NMG,1,00:06:08.258,00:06:06.00,1,36,--,0:06:08 ,0:06:06 ,0:04:13 ,175.390625,193.0,92.89507499768523,--,--,--,65
1,NMG,2,00:10:26.907,00:10:09.00,1,129,--,0:10:26 ,0:10:08 ,0:06:02 ,150.140625,236.0,63.74555754497759,--,--,--,55</code></pre> <p>For user information data we have used the following dataset:</p> <pre><code>runner,dob,name
JM,01-01-1900,Jon Mead
NMG,01-01-1900,Nelio Guimaraes</code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-11.20.42.png" alt=""></p> <p>Add your S3 credentials to access your buckets.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-11.22.14.png" alt=""></p> <p>After configuring your S3 account, all buckets associated with it will appear under the new source area.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-09-at-12.13.12.png" alt=""></p> <p>For this post I’ve created two buckets, nmgbuckettest and nmgdremiouser, containing data that could be interpreted as a data mart.</p> <img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-10-at-15.42.03.png" style="width: 360px; height:280px"> <p><strong>nmgbuckettest</strong> - contains Garmin activity data that could be seen as a fact table in CSV format:</p> <p><font size="3">Act,Runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories</font></p> <p><strong>nmgdremiouser</strong> - contains user data that could be seen as a user dimension in CSV format:</p> <p><font size="3">runner,dob,name</font></p> <h3 id="creatingdatasets">Creating datasets</h3> <p>After we add the S3 buckets we need to set up the CSV format. Dremio does most of the work for us; however, we needed to adjust some fields, for example date formats, or to map a field as an integer.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-16.45.58.png" alt=""></p> <p>By clicking on the gear icon we access a configuration panel where we can set the following options. Our CSVs were pretty clean, so I've just changed the line delimiter to <code>\n</code> and checked the option <em>Extract Field Name</em>.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-16.53.02.png" alt=""></p> <p>Let's do the same for the second set of CSVs (the nmgdremiouser bucket).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-16.54.32-1.png" alt=""></p> <p>Clicking Save takes us to a new panel where we can start running some queries.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-03-at-17.41.21-1.png" alt=""></p> <p>However, as mentioned before, at this stage we might want to adjust some fields.
Here I'll adapt the <em>dob</em> field from the nmgdremiouser bucket to be in the dd-mm-yyyy format.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-11.46.55.png" alt=""></p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-11.47.30-1.png" alt=""></p> <p>Apply the changes and save the new dataset under the desired space.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-11.57.18.png" alt=""></p> <p>Feel free to do the same for the nmgbuckettest CSVs. To keep things clear, I'll call the dataset coming from the nmgdremiouser bucket <em>D_USER</em> and the one coming from nmgbuckettest <em>F_ACTIVITY</em>.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-13-at-12.12.11.png" alt=""></p> <h3 id="queryingdatasets">Querying datasets</h3> <p>Now that we have the D_USER and F_ACTIVITY datasets created, we can start querying them and doing some analysis.</p> <p>This first analysis will tell us which runner climbs more during his activities:</p> <pre><code>SELECT round(nested_0.avg_elev_gain) AS avg_elev_gain,
       round(nested_0.max_elev_gain) AS max_elev_gain,
       round(nested_0.sum_elev_gain) AS sum_elev_gain,
       join_D_USER.name AS name
FROM (
  SELECT avg_elev_gain, max_elev_gain, sum_elev_gain, runner
  FROM (
    SELECT AVG(to_number("Elevation Gain",'###')) AS avg_elev_gain,
           MAX(to_number("Elevation Gain",'###')) AS max_elev_gain,
           SUM(to_number("Elevation Gain",'###')) AS sum_elev_gain,
           runner
    FROM dremioblogpost.F_ACTIVITY
    WHERE "Elevation Gain" != '--'
    GROUP BY runner
  ) nested_0
) nested_0
INNER JOIN dremioblogpost.D_USER AS join_D_USER
        ON nested_0.runner = join_D_USER.runner</code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-16-at-13.45.01.png" alt=""></p> <p>To enrich the example, let's work out who the fastest runner is, with analysis based on the total climbing:</p> <pre><code>SELECT round(nested_0.km_per_hour) AS avg_speed_km_per_hour,
       nested_0.total_climbing AS total_climbing_in_meters,
       join_D_USER.name AS name
FROM (
  SELECT km_per_hour, total_climbing, runner
  FROM (
    SELECT AVG(CAST(3600.0/((CAST(substr("Avg Moving Paces",3,2) AS integer)*60)+CAST(substr("Avg Moving Paces",6,2) AS integer)) AS float)) AS km_per_hour,
           SUM(CAST("Elevation Gain" AS integer)) AS total_climbing,
           runner
    FROM dremioblogpost.F_ACTIVITY
    WHERE "Avg Moving Paces" != '--'
      AND "Elevation Gain" != '--'
    GROUP BY runner
  ) nested_0
) nested_0
INNER JOIN dremioblogpost.D_USER AS join_D_USER
        ON nested_0.runner = join_D_USER.runner</code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/07/Screen-Shot-2018-07-18-at-13.30.55.png" alt=""></p> <h3 id="conclusions">Conclusions</h3> <p>Dremio is an interesting tool, capable of unifying existing repositories of unstructured data. Is Dremio capable of working with any volume of data and complex relationships? Well, I believe that right now it isn't: even with the simple and small data sets used in this example, the performance was not great.</p> <p>Dremio does successfully provide self-service access to most platforms, meaning that users don't have to move data around before being able to perform any analysis. This is probably the most exciting part of Dremio. It might well be in the paradigm of a &quot;good enough&quot; way to access data across multiple sources.
This will allow data scientists to do analysis before the data is formally structured.</p> </div> Nélio Guimarães 5b5a56f45000960018e69b44 Wed Jul 25 2018 05:07:32 GMT-0400 (EDT)
This will allow data scientists to do analysis before the data is formally structured.</p> Nélio Guimarães d8f83ec4-91b0-4760-bd7d-906e8ff427d9 Wed Jul 25 2018 05:07:32 GMT-0400 (EDT) Oracle 12c Release 2 New Feature DGMGRL Scripting https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-dgmgrl-scripting/ <p>New in Oracle 12c Release 2 is the ability to execute scripts through the Data Guard broker DGMGRL command-line interface, very much like in SQL*Plus.</p> <p>DGMGRL commands, SQL commands using the DGMGRL SQL command, and OS commands using the new HOST (or !) capability can be </p> <p><em>You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-dgmgrl-scripting/">Login</a> to access.</em></p> Gavin Soorma https://gavinsoorma.com/?p=8188 Wed Jul 25 2018 00:02:38 GMT-0400 (EDT) Oracle 12c Release 2 New Feature – SQL HISTORY https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-sql-history/ <p>Oracle 12c Release 2 now provides the ability to reissue previously executed SQL*Plus commands.</p> <p>This functionality is similar to the shell history command available on the UNIX platform.</p> <p>This feature enables us to run, edit, or delete previously executed SQL*Plus, SQL, or PL/SQL commands from the <strong>history list in </strong></p> <p><em>You need to be logged in to see this part of the content. Please <a href="https://gavinsoorma.com/login/?redirect_to=https://gavinsoorma.com/2018/07/oracle-12c-release-2-new-feature-sql-history/">Login</a> to access.</em></p>
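<p>For illustration only (this is not taken from the members-only part of the post above), the 12.2 history list is driven by a small set of SQL*Plus commands. A minimal sketch of a script using them might look like this:</p> <pre><code>-- Hypothetical SQL*Plus 12.2 session script: enable the history list, then reuse it
SET HISTORY ON
SELECT sysdate FROM dual;
-- list what has been recorded so far, then re-run or edit individual entries
SHOW HISTORY
HISTORY
HISTORY 1 RUN
HISTORY 1 EDIT
-- empty the list when finished
HISTORY CLEAR</code></pre>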
Gavin Soorma https://gavinsoorma.com/?p=8183 Tue Jul 24 2018 23:45:00 GMT-0400 (EDT) Lesser known Apache Machine Learning languages http://www.oralytics.com/2018/07/lessor-known-apache-machine-learning.html <p>Machine learning is a very popular topic in recent times, and we keep hearing about languages such as R, Python and Spark. In addition to these we have commercially available machine learning languages and tools from SAS, IBM, Microsoft, Oracle, Google, Amazon, etc., etc. Everyone wants a slice of the machine learning market!</p> <p>The Apache Foundation supports the development of new open source projects in a number of areas. One such area is machine learning. If you have read anything about machine learning you will have come across Spark, and maybe you might believe that everyone is using it. Sadly this isn't true for lots of reasons, but it is very popular. Spark is one of the projects supported by the Apache Foundation.</p> <p>But are there any other machine learning projects being supported under the Apache Foundation that are an alternative to Spark? The following lists the alternatives and lesser-known projects (most of these are incubator, retired or graduated Apache projects):</p> <style>td, th { border: 1px solid black; border-color:black; border-collapse: collapse; border-spacing:0; text-align: left; padding: 8px; } </style> <table style="width:100%" border="1" cellspacing="0" cellpadding="0"><tr> <td width="25%"><a href="http://flink.apache.org/">Flink</a> </td> <td>Flink is an open source system for expressive, declarative, fast, and efficient data analysis. Stratosphere combines the scalability and programming flexibility of distributed MapReduce-like platforms with the efficiency, out-of-core execution, and query optimization capabilities found in parallel databases. Flink was originally known as Stratosphere when it entered the Incubator. <p><a href="https://ci.apache.org/projects/flink/flink-docs-master/">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="https://incubator.apache.org/projects/horn.html">HORN</a> </td> <td>HORN is a neuron-centric programming API and execution framework for large-scale deep learning, built on top of Apache Hama. <p><a href="https://cwiki.apache.org/confluence/display/HORN">Wiki Page</a></p><p>(Retired)</p> </td></tr><tr> <td><a href="http://hivemall.incubator.apache.org/">Hivemall</a> </td> <td>Hivemall is a library for machine learning implemented as Hive UDFs/UDAFs/UDTFs. <p>Apache Hivemall offers a variety of functionalities: regression, classification, recommendation, anomaly detection, k-nearest neighbor, and feature engineering. It also supports state-of-the-art machine learning algorithms such as Soft Confidence Weighted, Adaptive Regularization of Weight Vectors, Factorization Machines, and AdaDelta. </p><p><a href="http://hivemall.incubator.apache.org/userguide/index.html">Documentation</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://madlib.apache.org/">MADlib</a></td> <td>Apache MADlib is an open-source library for scalable in-database analytics. It provides data-parallel implementations of mathematical, statistical and machine learning methods for structured and unstructured data. Key features include: operate on the data locally in-database and do not move data between multiple runtime environments unnecessarily; utilize best-of-breed database engines, but separate the machine learning logic from database-specific implementation details; leverage MPP shared-nothing technology, such as the Greenplum Database and Apache HAWQ (incubating), to provide parallelism and scalability. <p><a href="http://madlib.apache.org/documentation.html">Documentation</a></p><p>(graduated)</p></td></tr><tr> <td><a href="http://mxnet.incubator.apache.org/">MXNet</a> </td> <td>A flexible and efficient library for deep learning. MXNet provides optimized numerical computation for GPUs and distributed ecosystems, from the comfort of high-level environments like Python and R. MXNet automates common workflows, so standard neural networks can be expressed concisely in just a few lines of code. 
<p><a href="https://mxnet.incubator.apache.org/">Webpage</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://opennlp.apache.org/">OpenNLP</a> </td> <td>OpenNLP is a machine learning based toolkit for the processing of natural language text. OpenNLP supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, language detection and coreference resolution. <p><a href="http://opennlp.apache.org/docs/">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="http://predictionio.apache.org/">PredictionIO</a> </td> <td>PredictionIO is an open source Machine Learning Server built on top of state-of-the-art open source stack, that enables developers to manage and deploy production-ready predictive services for various kinds of machine learning tasks. <p><a href="http://predictionio.apache.org/">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="http://samoa.incubator.apache.org/">SAMOA</a> </td> <td>SAMOA provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms that run on top of distributed stream processing engines (DSPEs). It features a pluggable architecture that allows it to run on several DSPEs such as Apache Storm, Apache S4, and Apache Samza. <p><a href="http://samoa.incubator.apache.org/documentation/Home.html">Documentation</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://singa.incubator.apache.org/en/index.html">SINGA</a> </td> <td>SINGA is a distributed deep learning platform. An intuitive programming model based on the layer abstraction is provided, which supports a variety of popular deep learning models. SINGA architecture supports both synchronous and asynchronous training frameworks. Hybrid training frameworks can also be customized to achieve good scalability. SINGA provides different neural net partitioning schemes for training large models. <p><a href="http://singa.incubator.apache.org/en/docs/index.html">Documentation</a></p><p>(incubator)</p> </td></tr><tr> <td><a href="http://storm.apache.org/">Storm</a> </td> <td>Storm is a distributed, fault-tolerant, and high-performance realtime computation system that provides strong guarantees on the processing of data. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language. <p><a href="http://storm.apache.org/releases/2.0.0-SNAPSHOT/index.html">Documentation</a></p><p>(graduated)</p> </td></tr><tr> <td><a href="http://systemml.apache.org/">SystemML</a> </td> <td>SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations such as Apache Hadoop MapReduce and Apache Spark. 
<p><a href="http://systemml.apache.org/documentation">Documentation</a></p><p>(graduated)</p> </td></tr></table> <p><img src="https://lh3.googleusercontent.com/-mi3xpslLyP8/W1S7LVTj-jI/AAAAAAAAAes/V380gbQA4VwJuYQIJEB0XjU-zmKblAxcACHMYCw/big_data_ml.png?imgmax=1600" alt="Big data ml" title="big_data_ml.png" border="0" width="620" height="400" /></p> <p>I will have a closer look at the following SQL-based machine learning languages in a later blog post:</p> <p> - <a href="http://madlib.apache.org/">MADlib</a></p><p> - <a href="http://storm.apache.org/">Storm</a></p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-1432535821395247575 Mon Jul 23 2018 10:40:00 GMT-0400 (EDT) Oracle Analytics Cloud Workshop FAQ https://www.rittmanmead.com/blog/2018/07/oac-workshop-faq/ <div class="kg-card-markdown"><p>A few weeks ago, I had the opportunity to present the Rittman Mead Oracle Analytics Cloud workshop in Oracle's head office in London. The aim of the workshop was to educate potential OAC customers and give them the tools and knowledge to decide whether or not OAC was the right solution for them. We had a great cross section of multiple industries (although telecoms were over-represented!) and OBIEE familiarity. Together we came up with a series of questions that needed to be answered to help in the decision-making process. In the coming workshops we will add more FAQ-style posts to the blog to help flesh out the features of the product.</p> <p>If you are interested in coming along to one of the workshops to get some hands-on time with OAC, send an email to <strong><a href="mailto:training@rittmanmead.com">training@rittmanmead.com</a></strong> and we can give you the details.</p> <h2 id="dooracleprovideafeaturecomparisonlistbetweenobieeonpremiseandoac">Do Oracle provide a feature comparison list between OBIEE on premise and OAC?</h2> <p>Oracle do not provide a feature comparison between on-premise and OAC. However, Rittman Mead have done an initial comparison between OAC and traditional on-premise OBIEE 12c installations:</p> <h3 id="highlevel">High Level</h3> <ul> <li>Enterprise Analytics is identical to 12c Analytics</li> <li>Only two Actions available in OAC: Navigate to BI content, Navigate to Web page</li> <li>BI Publisher is identical in 12c and OAC</li> <li>Data Visualiser has additional features and a slightly different UI in OAC compared to 12c</li> </ul> <h3 id="bideveloperclienttoolforoac">BI Developer Client Tool for OAC</h3> <ul> <li>Looks exactly the same as the OBIEE client</li> <li>Available only for Windows, straightforward installation</li> <li>OAC IP address and BI Server port must be provided to create an ODBC data source</li> <li>Allows you to open and edit the OAC model online</li> <li>Allows offline development. 
Snapshots interface used to upload it to OAC (it will completely replace existing model)</li> </ul> <h3 id="datamodeler">Data Modeler</h3> <ul> <li>Alternative tool to create and manage metadata models</li> <li>Very easy to use, but limited compared to the BI Developer Client.</li> </ul> <h3 id="catalog">Catalog</h3> <ul> <li>It's possible to archive/unarchive catalog folders from on-premise to OAC.</li> </ul> <h3 id="barfile">BAR file</h3> <ul> <li>It's possible to create OAC bar files</li> <li>It's possible to migrate OAC bar files to OBIEE 12c</li> </ul> <h2 id="canyoueverbechargedbynetworkusageforexampleconnectiontoanonpremisedatasourceusingrdc">Can you ever be charged by network usage, for example connection to an on premise data source using RDC?</h2> <p>Oracle will not charge you for network usage as things stand. Your charges come from the following:</p> <ul> <li>Which version of OAC you have (Standard, Data Lake or Enterprise)</li> <li>Whether you are using Pay-as-you-go or Monthly Commitments</li> <li>The amount of disk space you have specified during provisioning</li> <li>The combination of OCPU and RAM currently in use (size).</li> <li>The up-time of your environment.</li> </ul> <p>So for example an environment that has 1 OCPU with 7.5 GB RAM will cost less than an environment with 24 OCPUs with 180 GB RAM if they are up for the same amount of time, everything else being equal. This being said, there is an additional charge to the analytics license as a cloud database is required to configure and launch an analytics instance which should be taken into consideration when choosing Oracle Analytics Cloud.</p> <h2 id="doyouneedtorestarttheoacenvironmentwhenyouchangetheramandocpusettings">Do you need to restart the OAC environment when you change the RAM and OCPU settings?</h2> <p>Configuring the number of OCPUs and associated RAM is done from the Analytics Service Console. This can be done during up time without a service restart, however the analytics service will be unavailable:</p> <p><img src="https://i.imgur.com/AY0IXeI.png&amp;width=250&amp;height=150" alt="alt"></p> <p>PaaS Service Manager Command Line Interface (PSM Cli), which Francesco covered <a href="https://www.rittmanmead.com/blog/2018/06/devops-in-oac-scripting-oracle-cloud-instance-management-with-psm-cli/">here</a>, will allow this to be scripted and scheduled. An interesting use case for this would be to allow an increase in resources during month end processing where your concurrent users are at its highest, whilst in the quieter parts of the month you can scale back down.</p> <p>This is done using the 'scale' command, this command takes a json file as a parameter which contains information about what the environment should look like. You will notice in the example below that the json file refers to an object called 'shape'; this is the combination of OCPU and RAM that you want the instance to scale to. 
Some examples of shapes are:</p> <ul> <li>oc3 — 1 OCPU with 7.5 GB RAM</li> <li>oc4 — 2 OCPUs with 15 GB RAM</li> <li>oc5 — 4 OCPUs with 30 GB RAM</li> <li>oc6 — 8 OCPUs with 60 GB RAM</li> <li>oc7 — 16 OCPUs with 120 GB RAM</li> <li>oc8 — 24 OCPUs with 180 GB RAM</li> <li>oc9 — 32 OCPUs with 240 GB RAM</li> </ul> <p>The following example scales the rittmanmead-analytics-prod service to the oc9 shape:</p> <p>$ psm analytics scale -s rittmanmead-analytics-prod -c ~/oac-obiee/scale-to-monthend.json<br> where the JSON file contains the following:</p> <p><code>{ &quot;components&quot; : { &quot;BI&quot; : { &quot;shape&quot; : &quot;oc9&quot;, &quot;hosts&quot;:[&quot;rittmanmead-prod-1&quot;] } } }</code></p> <p>Oracle supply documentation for the commands required here: <a href="https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html">https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-scale2.html</a>.</p> <h2 id="howishighavailabilityprovisionedinoracleanalyticscloud">How is high availability provisioned in Oracle Analytics Cloud?</h2> <p>Building a highly available infrastructure in the cloud needs to take into consideration three main areas:</p> <p><strong>Server Failure:</strong> Oracle Analytics Cloud can be clustered; additional nodes (up to 10) can be added dynamically in the Cloud 'My Services' console should they need to be:</p> <p><img src="https://i.imgur.com/SHAhPB8.png&amp;width=451&amp;height=250" alt="alt"></p> <p>It is also possible to provision a load balancer, as you can see from the screenshot below:</p> <p><img src="https://i.imgur.com/YffRsnW.png&amp;width=451&amp;height=250" alt="alt"></p> <p><strong>Zone Failure:</strong> Sometimes it is more than just a single server that causes the failure. Cloud architecture is built on server farms, which themselves can be subject to network issues, power failures and weather anomalies. Oracle Analytics Cloud allows you to create an instance in a region, much like Amazon's &quot;availability zones&quot;. A sensible precaution would be to create a disaster recovery environment in a different region from your main production environment; to help reduce costs, this can be provisioned on the Pay-as-you-go license model and therefore only be chargeable when it is being used.</p> <p><strong>Cloud Failure:</strong> Although rare, sometimes the cloud platform can fail. For example, both of the data centres that you have chosen to counter the previous point could fall victim to a weather anomaly. Oracle Analytics Cloud allows you to take regular backups of your reports, dashboards and metadata, which can be downloaded, stored off-cloud and re-implemented in another 12c environment.</p> <p>In addition to these points, it's advisable to automate and test everything. Oracle supply a very handy set of scripts and an API called PaaS Service Manager Command Line Interface (PSM Cli), which can be used to achieve this. For example, it can be used to automate backups, set up monitoring and alerting and, finally and arguably most importantly, to test your DR and HA infrastructure.</p> <h2 id="canyoupushtheusercredentialsdowntothedatabase">Can you push the user credentials down to the database?</h2> <p>At this point in time there is no way to configure database authentication providers in a similar way to the WebLogic providers of the past. 
However, Oracle IDCS does have a REST API that could be used to simulate this functionality; documentation can be found here: <a href="https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html">https://docs.oracle.com/en/cloud/paas/identity-cloud/rest-api/OATOAuthClientWebApp.html</a></p> <p>You can store user group memberships in a database and allow your service’s authentication provider to access this information when authenticating a user's identity. You can use the script <em>configure_bi_sql_group_provider</em> to set up the provider and create the tables that you need (GROUPS and GROUPMEMBERS). After you run the script, you must populate the tables with your group and group member (user) information.</p> <p>Group memberships that you derive from the SQL provider don't show up in the Users and Roles page in Oracle Analytics Cloud Console as you might expect, but the member assignments work correctly.</p> <p><img src="https://i.imgur.com/LxqZtIt.png&amp;width=451&amp;height=250" alt="alt"></p> <p>These tables are in the Oracle Database Cloud Service you configured for Oracle Analytics Cloud and in the schema created for your service. Unlike the on-premises equivalent functionality, you can’t change the location of these tables or the SQL that retrieves the results.<br> The script to achieve this is stored on the analytics server itself, and can be accessed using SSH (using the user 'opc') and the private keys that you created during the instance provisioning process. It is stored in: /bi/app/public/bin/configure_bi_sql_group_provider</p>
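<p>Once the script has created the GROUPS and GROUPMEMBERS tables, populating them is plain SQL run against that schema. The statements below are purely illustrative: the column names are assumptions made for the example (they are not taken from the script), so check the DDL that <em>configure_bi_sql_group_provider</em> actually generates before using anything like this.</p> <pre><code>-- Illustrative sketch only: column names (GROUP_NAME, MEMBER_NAME) are assumed,
-- not taken from the generated DDL
INSERT INTO GROUPS (GROUP_NAME) VALUES ('BI_SALES_ANALYSTS');
INSERT INTO GROUPMEMBERS (GROUP_NAME, MEMBER_NAME) VALUES ('BI_SALES_ANALYSTS', 'jsmith');
COMMIT;</code></pre> <p>As noted above, memberships added this way won't appear on the Users and Roles page, but they should be applied when the provider authenticates the user.</p>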
<h2 id="canyouimplementsslcertificatesinoracleanalyticscloud">Can you implement SSL certificates in Oracle Analytics Cloud?</h2> <p>The short answer is yes.</p> <p>When Oracle Analytics Cloud instances are created, similarly to on-premise OBIEE instances, a self-signed certificate is generated. The self-signed certificate is intended to be temporary and you must replace it with a new private key and a certificate signed by a certification authority. <strong>Doc ID 2334800.1</strong> on support.oracle.com has the full details on how to implement this, but the high-level steps (taken from the document itself) are:</p> <ul> <li>Associate a custom domain name against the public IP of your OAC instance</li> <li>Get the custom SSL certificate from a Certificate Authority</li> <li>Specify the DNS-registered host name that you want to secure with SSL in servername.conf</li> <li>Install the intermediate certificate and run the script to register the new private key and server certificate</li> </ul> <h2 id="canyouimplementsinglesignonssoinoracleanalyticscloud">Can you implement Single Sign On (SSO) in Oracle Analytics Cloud?</h2> <p>Oracle Identity Cloud Service (IDCS) allows administrators to create security providers for OAC, much like the WebLogic providers in on-premise OBIEE. These can be created/edited to include single sign-on URLs, certificates etc, as shown in the screenshot below:</p> <p><img src="https://i.imgur.com/5rRKyiH.png" alt="alt"></p> <p>Oracle support <strong>Doc ID 2399789.1</strong> covers this in detail between Microsoft Azure AD and OAC, and is well worth the read.</p> <h2 id="arerpdfilesbarfilesbackwardscompatible">Are RPD files (BAR files) backwards compatible?</h2> <p>This would depend on what has changed between the releases. The different version numbers of OAC don't necessarily include changes to the OBIEE components themselves (e.g. it could just be an improvement to the 'My Services' UI). However, if there have been changes to the way the XML is formed in reports, for example, these won't be compatible with previous versions of the catalog. This all being said, the environments look like they can be upgraded at any time, so you should be able to take a snapshot of your environment, upgrade it to match the newer version and then redeploy/refresh from your snapshot.</p> <h2 id="howdoyouconnectsecurelytoaws">How do you connect securely to AWS?</h2> <p>There doesn't seem to be any documentation on how exactly Visual Analyzer connects to Amazon Redshift using the 'Create Connection' wizard. However, there is an option to create an SSL ODBC connection to the Redshift database that can then be used to connect using the Visual Analyzer ODBC connection wizard:</p> <p><img src="https://i.imgur.com/9odjJrb.png" alt="alt"></p> <h2 id="canyoustilleditinstanceconfigandnqsconfigfiles">Can you still edit instanceconfig and nqsconfig files?</h2> <p>Yes you can; you need to use your SSH keys to sign into the box (using the user 'opc'). They are contained in the following locations:</p> <p>/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml</p> <p>/bi/domain/fmw/user_projects/domains/bi/config/fmwconfig/biconfig/OBIS/NQSConfig.INI</p> <p>It's also worth mentioning that there is a guide here which explains where the responsibility lies should anything break during customisations of the platform.</p> <h2 id="whoisresponsibleforwhatregardingsupport">Who is responsible for what regarding support?</h2> <p>Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services (<strong>Doc ID 2309936.1</strong>)</p> <p><a href="https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=277697121079580&amp;id=2309936.1&amp;displayIndex=1&amp;_afrWindowMode=0&amp;_adf.ctrl-state=qwe5xzsil_210#aref_section33">Guide to Customer vs Oracle Management Responsibilities in Oracle Infrastructure and Platform Cloud Services</a></p> </div> Chris Redgrave 5b5a56f45000960018e69b47 Mon Jul 23 2018 08:59:34 GMT-0400 (EDT) 10 Tips to Improve ETL Performance – Revised for ADWC https://danischnider.wordpress.com/2018/07/20/10-tips-to-improve-etl-performance-revised-for-adwc/ <p>The Autonomous Data Warehouse Cloud (ADWC) is a self-configuring, fast, secure and scalable platform for data warehouses. Does this mean we don’t have to take care of the performance of our ETL processes anymore? Which performance tips are still important for us, and where can we hand over the responsibility to ADWC? 
A revised version of an old blog post, with regard to Oracle&#8217;s Data Warehouse Cloud solution.</p> <p><span id="more-594"></span></p> <p>Last summer, I published a blog post with performance tips for ETL jobs: <a href="https://danischnider.wordpress.com/2017/07/23/10-tips-to-improve-etl-performance/">10 Tips to Improve ETL Performance</a>. Now, it’s summer again, and I am running several ETL jobs on the Autonomous Data Warehouse Cloud to test its features and limitations. This is a good time for a revised version of my blog post, with a special focus on ADWC environments. What is still the same, and what changes with ADWC?</p> <p><img title="midnight_adwc.jpg" src="https://danischnider.files.wordpress.com/2018/07/imidnight_adwc.jpg?w=598&#038;h=299" alt="Midnight adwc" width="598" height="299" border="0" /><br /><em>What is the impact of the Autonomous Data Warehouse Cloud on ETL performance? Is the night still too short?</em></p> <p>In my <a href="https://danischnider.wordpress.com/2017/07/23/10-tips-to-improve-etl-performance/">original blog post</a>, I wrote about the following performance tips for ETL:</p> <ol> <li>Use Set-based Operations</li> <li>Avoid Nested Loops</li> <li>Drop Unnecessary Indexes</li> <li>Avoid Functions in WHERE Condition</li> <li>Take Care of OR in WHERE Condition</li> <li>Reduce Data as Early as Possible</li> <li>Use WITH to Split Complex Queries</li> <li>Run Statements in Parallel</li> <li>Perform Direct-Path INSERT</li> <li>Gather Statistics after Loading each Table</li> </ol> <p>Of course, all these tips are still valid, and I recommend using them in every ETL process. But some of them are more important, and some of them are not relevant anymore, if you run your data warehouse on ADWC. Let’s go through the list step by step.</p> <h1>1. Use Set-based Operations</h1> <p>Architecture and configuration of ADWC are designed for a high throughput of large data sets in parallel mode. If you run your load jobs with row-by-row executions, using cursor loops in a procedural language or a row-based ETL tool, ADWC is the wrong environment for you. Of course, it is possible to load data with such programs and tools into an ADWC database, but don’t expect high performance improvements compared to any other database environment.</p> <p>Data Warehouse Automation frameworks and modern ELT tools are able to use the benefits of the target database and run set-based operations. If you use any tools that are able to generate or execute SQL statements, you are on the right track with ADWC.</p> <h1>2. Avoid Nested Loops</h1> <p>As I already mentioned in the original blog post, Nested Loop Joins are one of the main causes of ETL performance problems. This join method is usually not feasible for ETL jobs and is often the reason for poor load performance. In most situations when the optimizer decides to choose a nested loop, this is in combination with an index scan. Because almost no indexes exist in an ADWC environment (see next section), this problem is not relevant anymore.</p> <h1>3. Drop Unnecessary Indexes</h1> <p>ADWC doesn’t like indexes. If you try to create an index, you will get an error message:</p> <pre>CREATE INDEX s_order_item_delivery_date_idx<br />ON s_order_item (delivery_date);<br /><br /><strong>ORA-01031: insufficient privileges</strong></pre> <p>Although this seems to be a very hard restriction, it is actually a good approach for a data warehouse environment. 
There are only a few reasons for indexes in a data warehouse:</p> <ul> <li>Unique indexes are used to prove the uniqueness of primary key and unique constraints. This is the only case where ADWC still allows you to create indexes. If you define such constraints, an index is created as usual. To improve the ETL performance, it is even possible to create the primary keys with <em>RELY DISABLE NOVALIDATE</em> (see the sketch after this list). In this case, no index is created, but you have to guarantee in the load process or with additional quality checks that no duplicates are loaded.</li> <li>In a star schema, bitmap indexes on the dimension keys of a fact table are required to perform a Star Transformation. In ADWC, a Vector Transformation is used instead (this transformation was introduced with Oracle Database In-Memory). So, there is no need for these indexes anymore.</li> <li>For selective queries that return only a small subset of data, an index range scan may be useful. For these kinds of queries, the optimizer decides to use a Bloom filter in ADWC as an alternative to the (missing) index.</li> </ul> <p>So, the only case where indexes are created in ADWC is for primary key and unique constraints. No other indexes are allowed. This solves a lot of performance issues in ETL jobs.</p>
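<p>To make the constraint variant mentioned in the list above concrete, here is a minimal sketch of such a reliable primary key declaration. The table and column names are hypothetical (the table is borrowed from the index example earlier), not taken from the original post:</p> <pre>ALTER TABLE s_order_item<br />ADD CONSTRAINT s_order_item_pk PRIMARY KEY (order_id, item_no)<br />RELY DISABLE NOVALIDATE;</pre> <p>Declared this way, no index is maintained during the load, and the duplicate checking has to happen in the ETL process or in separate quality checks, as described above.</p>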
<h1>Performance Tips 4 to 7</h1> <p>These are general tips for writing fast SQL statements. Complex queries and expressions in WHERE conditions are hard for the query optimizer to evaluate and can lead to wrong estimations and poor execution plans. This does not change if you move your data warehouse to ADWC. Of course, performance issues can be “solved” in ADWC by increasing the number of CPUs (see next section), but a more elegant and sustainable approach is to <a href="https://danischnider.wordpress.com/2018/04/03/keep-your-sql-simple-and-fast/">keep your SQL simple and fast</a>. This is the case on all databases, on premises and in the Cloud.</p> <h1>8. Run Statements in Parallel</h1> <p>Queries and DML statements are executed in parallel by default in ADWC, if more than 1 CPU core is allocated. Parallel DML (PDML) is enabled by default for all sessions. Normally, PDML has to be enabled per session with an <em>ALTER SESSION ENABLE PARALLEL DML</em> command. This is not necessary in ADWC.</p> <p>The typical way of performance tuning in ADWC is to increase the number of CPUs and therefore the parallel degree of the executed SQL statements. Some call this the KIWI (“kill it with iron”) approach; Oracle calls it “elastic scaling”. The number of CPU cores can be assigned to your data warehouse environment at runtime. This works fine. The number of CPUs can be adjusted any time on the web interface of the Oracle Cloud Infrastructure. After changing the number to the new value, the system is scaled up or down. This takes a few minutes, but no interruption of services or restart of the database is required. The only detail you have to keep in mind: the number of CPU cores has an impact on the costs of the Cloud infrastructure.</p> <p><img title="ADWC_Scale_Up_Down.jpg" src="https://danischnider.files.wordpress.com/2018/07/iadwc_scale_up_down.jpg?w=556&#038;h=310" alt="ADWC Scale Up Down" width="556" height="310" border="0" /><br /><em>The number of CPU cores can be adjusted any time in the Autonomous Data Warehouse Cloud</em></p> <p>The degree of parallelism (DOP) is computed by the optimizer with the Auto DOP mechanism (<em>PARALLEL_DEGREE_POLICY = AUTO</em>). All initialization parameters for parallel execution are configured automatically and cannot be changed, not even at session level.</p> <p><em>PARALLEL</em> hints are neither required nor recommended. By default, they are ignored in ADWC. But if you need them (or you think you need them), it is possible to enable them at session level:</p> <pre>ALTER SESSION SET OPTIMIZER_IGNORE_PARALLEL_HINTS = FALSE;</pre> <p>The parameter <em>OPTIMIZER_IGNORE_PARALLEL_HINTS</em> was introduced with Oracle 18c, but is available in ADWC, too (ADWC is currently a mixture of Oracle 12.2.0.1 and Oracle 18c). It is one of the few initialization parameters that can be modified in ADWC (see <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/experienced-database-users.html#GUID-7CF648C1-0822-4602-8ED1-6F5719D6779E">documentation</a>). By default, it is TRUE, so all parallel hints are ignored.</p> <h1>9. Perform Direct-Path INSERT</h1> <p>Because Parallel DML is enabled by default in ADWC, most INSERT and MERGE statements are executed as Direct-Path load operations. Only for serial DML statements (which only occur on an ADWC database with one CPU core) does the <em>APPEND</em> hint have to be added to the INSERT and MERGE statements. This is the only hint that is not ignored by default in ADWC.</p> <p>But pay attention: Parallel DML and even an <em>APPEND</em> hint do not guarantee a Direct-Path INSERT. If referential integrity is enabled on the target table, Direct-Path is disabled and a Conventional INSERT is performed. This can be avoided with reliable constraints, as described in the blog post <a href="https://danischnider.wordpress.com/2015/12/01/foreign-key-constraints-in-an-oracle-data-warehouse/">Foreign Key Constraints in an Oracle Data Warehouse</a>.</p> <h1>10. Gather Statistics after Loading each Table</h1> <p>Direct-Path load operations are not only much faster than Conventional DML statements, they have another good side effect in ADWC: online statistics gathering was improved and is now able to gather object statistics automatically after each Direct-Path load operation. I explained this in my last blog post <a href="https://danischnider.wordpress.com/2018/07/11/gathering-statistics-in-the-autonomous-data-warehouse-cloud/">Gathering Statistics in the Autonomous Data Warehouse Cloud</a>. Only after Conventional DML statements is it required to call DBMS_STATS to gather statistics. Unfortunately, this is not done (yet) automatically.</p> <h1>Conclusion</h1> <p>As you can see from the length of this blog post, the Autonomous Data Warehouse Cloud is not a completely self-configuring environment that solves all performance issues automatically. It is still important to know how the Oracle database works and how efficient ETL processes have to be designed. Set-based operations and reliable constraints are mandatory, and bad SQL statements will still be bad, even in an Autonomous Database.</p> <p>But there are many simplifications in ADWC. The consistent usage of Parallel DML and Direct-Path load operations, including online statistics gathering, makes it easier to implement fast ETL jobs. 
And many performance problems of ETL jobs are solved because no indexes are allowed.</p> Dani Schnider http://danischnider.wordpress.com/?p=594 Fri Jul 20 2018 09:29:32 GMT-0400 (EDT) Self-Service Data Transformation: Getting In-Depth with Oracle Data Flow https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-three <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-three" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/self-service%20part%203_22.jpg?t=1533950236061" alt="self-service part 3_22" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In part one, I talked about the common concepts and goals that Tableau Prep and Oracle Data Flow share. In part two, I looked at a brief overview of both tools and took an in-depth look at Tableau Prep.</p> <p>In this third post, let's dive deeper into Oracle Data Flow.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fself-service-data-transformation-part-three&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-three Tue Jul 17 2018 15:14:25 GMT-0400 (EDT) Self-Service Data Transformation: Getting In-Depth with Tableau Prep https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-two <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-two" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/self-service%20part%202_13.jpg?t=1533950236061" alt="self-service part 2_13" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In part one of this series, I shared an overview of the common concepts that Tableau Prep and Oracle Data Flow share as well as a brief look at the tools themselves. In part two, I want to take a more in-depth look at Tableau Prep and share my experiences using it.</p> <p>In my first example, I have three spreadsheets containing data collected from every World Cup Match from 1930 to 2014. 
One contains detailed information about each match individually.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fself-service-data-transformation-part-two&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-two Tue Jul 17 2018 11:53:51 GMT-0400 (EDT) Oracle Developer Champion http://www.oralytics.com/2018/07/oracle-developer-champion.html Yesterday evening I received an email titled 'Invitation to Developer Champion Program'.<br /><br />What a surprise!<br /><div style="text-align: center;"><img alt="Oracle dev champion" border="0" height="160" src="https://lh3.googleusercontent.com/-DrLlMdy6dN8/W0eC91un_1I/AAAAAAAAAeE/6OCN-wDgaa8-rZgzNpJXrJliigCeScd-QCHMYCw/oracle_dev_champion.png?imgmax=1600" title="oracle_dev_champion.png" width="250" /></div>The <a href="https://developer.oracle.com/devchampion">Oracle Developer Champion program</a> was setup just a year ago and is aimed at people who are active in generating content and sharing their knowledge on new technologies including cloud, micro services, containers, Java, open source technologies, machine learning and various types of databases.<br />For me, I fit into the machine learning, cloud, open source technologies, a bit on chatbots and various types of databases areas. Well I think I do!<br /><br />This made me look back over my activities for the past 12-18 months. As an <a href="http://www.oracle.com/technetwork/community/oracle-ace/index.html">Oracle ACE Director</a>, we have to record all our activities. I'd been aware that the past 12-18 months had been a bit quieter than previous years. But when I looked back at all the blog posts, articles for numerous publications, books, and code contributions, etc. Even I was impressed with what I had achieved, even though it was a quiet period for me.<br /><br />Membership of <a href="https://developer.oracle.com/devchampion">Oracle Developer Champion program</a> is for one year, and the good people in Oracle Developer Community (ODC) will re-evaluate what I, and the others in the program, have been up to and will determine if you can continue for another year.<br /><br />In addition to writing, contributing to projects, presenting, etc Oracle Developer Champions typically have leadership roles in user groups, answering questions on forums and providing feedback to product managers.<br /><br />The list of existing Oracle Developer Champions is very impressive. 
I'm honoured to be joining this people.<br /><br />Click on the image to go to the Oracle Developer Champion website to find out more.<br /><div style="text-align: center;"><a href="https://developer.oracle.com/devchampion"><img alt="Screen Shot 2018 07 12 at 17 21 32" border="0" height="168" src="https://lh3.googleusercontent.com/-QCtNHUdnH2k/W0eC9Ve1IeI/AAAAAAAAAeA/zhL2bFfS8uoCV8qhOZWQn9rAD5Ni1spdgCHMYCw/Screen%2BShot%2B2018-07-12%2Bat%2B17.21.32.png?imgmax=1600" title="Screen Shot 2018-07-12 at 17.21.32.png" width="599" /></a> </div><br />And check out the <a href="https://apex.oracle.com/pls/apex/f?p=19297:3::IR_DEV_CHAMPS:NO:CIR,RIR">list of existing Oracle Developer Champions</a>.<br />&nbsp;<img alt="Oracle dev champion" border="0" height="160" src="https://lh3.googleusercontent.com/-DrLlMdy6dN8/W0eC91un_1I/AAAAAAAAAeE/6OCN-wDgaa8-rZgzNpJXrJliigCeScd-QCHMYCw/oracle_dev_champion.png?imgmax=1600" title="oracle_dev_champion.png" width="250" /> <img alt="O ACEDirectorLogo clr" border="0" height="100" src="https://lh3.googleusercontent.com/-HflTDNal8cE/W0eC-cIgdYI/AAAAAAAAAeI/Xoce4SGffzckAlKIjDdFOb559HQCatE-QCHMYCw/O_ACEDirectorLogo_clr.png?imgmax=1600" title="O_ACEDirectorLogo_clr.png" width="250" /> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-2594705535599083107 Thu Jul 12 2018 12:34:00 GMT-0400 (EDT) Self-Service Data Transformation: Intro to Oracle Data Flow & Tableau Prep https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-one <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-one" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/self-service%20data%20transformation%20part%201.jpg?t=1533950236061" alt="self-service data transformation part 1" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In the world of self-service analytics, Tableau and <strong><a href="https://www.us-analytics.com/hyperionblog/oracle-data-visualization-v4">Oracle Data Visualization</a></strong> are two tools that are often put on the same discussion platform. In the last year, the conversations surrounding these two tools have increased dramatically — with most of our clients using self-service analytics. In this blog, I am not going to do a comparison rundown between Tableau and Oracle DV. 
What I do want to show you are two similar tools that introduce exciting new possibilities: <strong>Tableau Prep </strong>and <strong>Oracle Data Flow</strong>.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fself-service-data-transformation-part-one&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Matthew Walding https://www.us-analytics.com/hyperionblog/self-service-data-transformation-part-one Wed Jul 11 2018 14:42:26 GMT-0400 (EDT) Gathering Statistics in the Autonomous Data Warehouse Cloud https://danischnider.wordpress.com/2018/07/11/gathering-statistics-in-the-autonomous-data-warehouse-cloud/ <p>Optimizer statistics are essential for good execution plans and fast performance of SQL queries. Of course, this is also the case in the Autonomous Data Warehouse Cloud. But the handling of gathering statistics is slightly different from what we know from other Oracle databases.</p> <p><span id="more-585"></span></p> <p> </p> <p>For the past couple of days, I’ve been testing several features and behaviors of the Autonomous Data Warehouse Cloud (ADWC) to find out how useful this Cloud platform solution is for real DWH projects and what has to be considered for the development of a Data Warehouse. To simulate a typical scenario, I’m running incremental load jobs into multiple target tables several times per day. The example I use for this is a Data Vault schema for a craft beer brewery (if you want to know more about the data model, watch <a href="https://www.youtube.com/watch?v=Q1qj_LjEawc">this video</a> I recorded last year). In the simulated environment on ADWC, I have already sold 68 million beers &#8211; far more than we sell in our real micro brewery. But this is not the subject I want to write about in this blog post.</p> <p><img title="craft_beer_dv.jpg" src="https://danischnider.files.wordpress.com/2018/07/icraft_beer_dv.jpg?w=600&#038;h=285" alt="Craft beer dv" width="600" height="285" border="0" /></p> <p> </p> <p>More interesting than the data (which is mostly generated by DBMS_RANDOM) is the fact that no optimizer statistics were gathered so far, although the system has been running for more than a week now. I play the role of a “naive ETL developer”, so I don’t care about such technical details. That’s what the Autonomous Data Warehouse should do for me.</p> <h1><span style="color:#373737;">Managing Optimizer Statistics in ADWC</span></h1> <p>For this blog post, I switch my role to the interested developer who wants to know why there are statistics available. A good starting point &#8211; as so often &#8211; is to read the manual. 
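<p>Before opening the manual, a quick look at the data dictionary already shows what the optimizer currently knows about a table. The following minimal check (the same query appears again later in this post, here on the Hub table H_ORDER_ITEM) returns the row count and the time of the last statistics gathering:</p> <pre>
-- Check whether optimizer statistics exist for a table
SELECT table_name, num_rows, last_analyzed
  FROM user_tab_statistics
 WHERE table_name = 'H_ORDER_ITEM';
</pre> <p>If NUM_ROWS and LAST_ANALYZED are empty, no statistics have been gathered for the table yet.</p> 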
In the documentation of ADWC, we can find the following statements in the section <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/manage-service.html#GUID-69906542-4DF6-4759-ABC1-1817D77BDB02">Managing Optimizer Statistics and Hints on Autonomous Data Warehouse Cloud</a>:</p> <blockquote> <p><span style="color:#333333;font-family:'Helvetica Neue', 'Segoe UI', Roboto, sans-serif-regular, sans-serif;font-size:14px;box-sizing:border-box;">Autonomous Data Warehouse Cloud</span><span style="color:#333333;font-family:'Helvetica Neue', 'Segoe UI', Roboto, sans-serif-regular, sans-serif;"> gathers optimizer statistics automatically for tables loaded with direct-path load operations. … </span><span style="color:#333333;font-family:'Helvetica Neue', 'Segoe UI', Roboto, sans-serif-regular, sans-serif;font-size:14px;">If you have tables modified using conventional DML operations you can run commands to gather optimizer statistics for those tables. &#8230;</span></p> </blockquote> <p>What does this mean exactly? Let’s look at some more details.</p> <h1>Statistics for ETL Jobs with Conventional DML</h1> <p>The automatic gathering statistics job that is executed regularly on a “normal” Oracle database, does not run on ADWC. The job is enabled, but the maintenance windows of the scheduler are disabled by default:</p> <p><img title="auto_stats_job.jpg" src="https://danischnider.files.wordpress.com/2018/07/iauto_stats_job.jpg?w=438&#038;h=274" alt="Auto stats job" width="438" height="274" border="0" /></p> <p><span style="color:#373737;font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:15px;">This is a good decision, because many data warehouses are running ETL jobs in the time frame of the default windows. Statistics gathering in a data warehouse should always be part of the ETL jobs. This is also the case in ADWC. After loading data into a target table with a conventional DML operation (INSERT, UPDATE, MERGE), the optimizer statistics are gathered with a DBMS_STATS call:</span></p> <p style="margin:0;font-size:10px;line-height:normal;font-family:Monaco;color:#fff900;background-color:#6c6c6c;"><span style="font-variant-ligatures:no-common-ligatures;"><br />BEGIN<br />   dbms_stats.gather_table_stats(USER, &#8216;H_ORDER_ITEM&#8217;);<br />END;<br />  </span></p> <p> </p> <p>Only schema and table name must be specified as parameter. For all other settings, the DBMS_STATS preferences are used. Four of them are defined differently per default in Autonomous Data Warehouse Cloud:</p> <ul> <li><strong>INCREMENTAL</strong> is set to TRUE (default: FALSE). This is only relevant for incremental statistics on partitioned table. Currently, Partitioning is not supported on ADWC, so this preference has no impact.</li> <li><strong>INCREMENTAL_LEVEL</strong> is set to TABLE (default: PARTITION). This is relevant for partition exchange in combination with incremental statistics and therefore currently not relevant, too.</li> <li><strong>METHOD_OPT</strong> is set to ‘FOR ALL COLUMNS SIZE 254’ (default: … SIZE AUTO). With the default setting, histograms are only gathered if a column was used in a WHERE condition of a SQL query before. In ADWC, a histogram with up to 254 buckets i calculated for each column, independent of the queries that were executed so far. This allows more flexibility for ad-hoc queries and is suitable in a data warehouse environment.</li> <li><strong>NO_INVALIDATE</strong> is set to FALSE (default: DBMS_STATS.AUTO_INVALIDATE). 
For ETL jobs, it is important so set this parameter to FALSE (see my previous blog post <a href="https://danischnider.wordpress.com/2015/01/06/avoid-dbms_stats-auto_invalidate-in-etl-jobs/">Avoid dbms_stats.auto_invalidate in ETL jobs</a>). So, the preference setting is a very good choice for data warehouses.</li> </ul> <p>The configuration of ADWC makes it very easy to gather optimizer statistics in your ETL jobs, but you still have to make sure that a DBMS_STATS call is included at the end of each ETL job.</p> <h1>Statistics for ETL Jobs with Direct-Path Loads</h1> <p>A better approach is to use Direct-Path INSERT statements. This is not only faster for large data sets, but makes it much easier to manage optimizer statistics. The reason is an Oracle 12c feature and two new undocumented parameters.</p> <p>Since Oracle 12.1, statistics are gathered automatically for a Direct-Path INSERT. This works only for empty tables, and no histograms are calculated, as explained in my previous blog post <a href="https://danischnider.wordpress.com/2015/12/23/online-statistics-gathering-in-oracle-12c/">Online Statistics Gathering in Oracle 12c</a>.</p> <p>In ADWC, two new undocumented parameters are available, both are set to TRUE by default:</p> <ul> <li>“<strong>_optimizer_gather_stats_on_load_all</strong>”: When this parameter is TRUE, online statistics are gathered even for a Direct-Path operation into a non-empty target table.</li> <li>“<strong>_optimizer_gather_stats_on_load_hist</strong>”: When this parameter is TRUE, histograms are calculated during online statistics gathering.</li> </ul> <p>The following code fragment shows this behavior: Before an incremental load into the Hub table H_ORDER_ITEM, the number of rows in the table statistics is 68386107. After inserting another 299041 rows, the table statistics are increased to 68685148 (= 68386107 + 299041).</p> <p style="margin:0;font-size:10px;line-height:normal;font-family:Monaco;color:#fff900;background-color:#6c6c6c;min-height:14px;"> <br />SELECT table_name, num_rows, last_analyzed<br />  FROM user_tab_statistics</p> <p style="margin:0;font-size:10px;line-height:normal;font-family:Monaco;color:#fff900;background-color:#6c6c6c;"><span style="color:#fff900;"> WHERE table_name = &#8216;H_ORDER_ITEM&#8217;;<br /> <br />TABLE_NAME             NUM_ROWS LAST_ANALYZED<br />&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211; &#8212;&#8212;&#8212;- &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<br />H_ORDER_ITEM           </span><span style="color:#ffb5b2;">68386107</span><span style="color:#fff900;"> 11.07.2018 09:37:04</span><br /><span style="color:#fff900;"> </span><br /><span style="color:#fff900;"> </span><br /><span style="color:#fff900;">INSERT /*+ append */ INTO h_order_item</span><br /><span style="color:#fff900;">      ( h_order_item_key</span><br /><span style="color:#fff900;">      , order_no</span><br /><span style="color:#fff900;">      , line_no</span><br /><span style="color:#fff900;">      , load_date</span><br /><span style="color:#fff900;">      , record_source</span><br /><span style="color:#fff900;">      )</span><br /><span style="color:#fff900;">SELECT s.h_order_item_key</span><br /><span style="color:#fff900;">     , s.order_no</span><br /><span style="color:#fff900;">     , s.line_no</span><br /><span style="color:#fff900;">     , v_load_date</span><br /><span style="color:#fff900;">     , c_record_source</span><br /><span style="color:#fff900;">  FROM v_stg_order_details s</span><br /><span style="color:#fff900;">  LEFT OUTER 
JOIN h_order_item t</span><br /><span style="color:#fff900;">    ON (s.h_order_item_key = t.h_order_item_key)</span><br /><span style="color:#fff900;"> WHERE t.h_order_item_key IS NULL;</span><br /><span style="color:#fff900;"> </span><br /><span><span style="color:#ffb5b2;">299041</span></span><span style="color:#fff900;"> rows inserted.</span><br /><span style="color:#fff900;"> </span><br /><span style="color:#fff900;">COMMIT;</span><br /><span style="color:#fff900;"> </span><br /><span style="font-variant-ligatures:no-common-ligatures;">SELECT table_name, num_rows, last_analyzed<br /></span><span style="color:#fff900;">  FROM user_tab_statistics</span><br /><span style="color:#fff900;"> WHERE table_name = &#8216;H_ORDER_ITEM&#8217;;</span><br /><span style="color:#fff900;"> </span><br /><span style="color:#fff900;">TABLE_NAME             NUM_ROWS LAST_ANALYZED</span><br /><span style="color:#fff900;">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211; &#8212;&#8212;&#8212;- &#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-</span><br /><span style="color:#fff900;">H_ORDER_ITEM           </span><span><span style="color:#ffb5b2;">68685148</span></span><span style="color:#fff900;"> 11.07.2018 14:11:09</span><br /><span style="color:#032ce2;">~                                                                                                                                  </span></p> <pre> </pre> <p>The column statistics (including histograms) are adapted for the target table, too. Only index statistics are not affected during online statistics gathering &#8211; but indexes in ADWC are a different story anyway. I will write about it in a separate blog post.</p> <h1>Conclusion</h1> <p>Statistics gathering is still important in the Autonomous Data Warehouse Cloud, and we have to take care that the optimizer statistics are frequently been updated. For Direct-Path operations, this works automatically, so we have nothing to do anymore. Only for conventional DML operations, it is still required to call DBMS_STATS after each ETL job, but the default configuration of ADWC makes it very easy to use.</p> Dani Schnider http://danischnider.wordpress.com/?p=585 Wed Jul 11 2018 11:46:42 GMT-0400 (EDT) Oracle Analytics Roadmap: OBIEE & OAC https://www.us-analytics.com/hyperionblog/oracle-analytics-roadmap <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/oracle-analytics-roadmap" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/oracle%20analytics%20roadmap.jpg?t=1533950236061" alt="oracle analytics roadmap" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>After attending Kscope18, one thing is still extremely clear — Oracle continues and will continue to be all about the cloud. For example, the title of the Kscope presentation detailing what’s coming to OBIEE is “Oracle Analytics: How to Get to the Cloud and the Future of On-Premises.”</p> <p>While that does tell you there’s still a future in on-prem Oracle BI, it's also clear that all the innovation will be put into the cloud. 
In this blog post, we’ll look at…</p> <ul> <li>Innovative features of <a href="/hyperionblog/oracle-analytics-cloud-questions">Oracle Analytics Cloud (OAC</a>)</li> <li>The future of OBIEE</li> </ul> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Foracle-analytics-roadmap&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/oracle-analytics-roadmap Thu Jul 05 2018 14:04:37 GMT-0400 (EDT) Oracle BI Commentary Tools: Open Source or Enterprise? https://www.us-analytics.com/hyperionblog/oracle-bi-commentary-tools <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/oracle-bi-commentary-tools" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/oracle%20bi%20commentary%20-%20enterprise%20vs%20open%20source.jpg?t=1533950236061" alt="oracle bi commentary - enterprise vs open source" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Commentary support in Oracle BI tools has been a commonly requested feature for many years. As with many absent features in our beloved tools, the community has come together to develop methods to implement this functionality themselves. Some of these approaches, such as leveraging Writeback, implement out-of-the-box Oracle BI features.</p> <p>More commonly you’ll find custom-built software extensions or free “open source” applications that provide commentary extensions.</p> <p>In this blog post, we’ll look at the difference between custom-built extensions and open-source applications. We’ll also consider two different tools — one open source, another a custom-built extension.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Foracle-bi-commentary-tools&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Nicholas Padgett https://www.us-analytics.com/hyperionblog/oracle-bi-commentary-tools Thu Jul 05 2018 10:43:00 GMT-0400 (EDT) External Tables in Autonomous Data Warehouse Cloud https://danischnider.wordpress.com/2018/07/04/external-tables-in-autonomous-data-warehouse-cloud/ <p>In Oracle Autonomous Data Warehouse Cloud, External Tables can be used to read files from the cloud-based Object Storage. But take care to do it the official way, otherwise you will see a surprise, but no data.</p> <p><span id="more-580"></span></p> <p>Together with my Trivadis colleague <a href="https://antognini.ch/about/">Christian Antognini</a>, I currently have the opportunity to do several tests in the Autonomous Data Warehouse Cloud (ADWC). We are checking out the features and the performance of Oracle’s new cloud solution for data warehouses. 
For the kick-off of this project, we met in an idyllic setting in the garden of Chris’ house in Ticino, the southern part of Switzerland. So, I was able to work on a real external table.</p> <p><img title="external_table.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexternal_table.jpg?w=600&#038;h=456" alt="External table" width="600" height="456" border="0" /><br /><em>Testing External Tables in the Cloud on an external table with a view of the clouds.</em></p> <p>A typical way to load data files into a data warehouse is to create an External Table for the file and then read the data from this table into a stage table. In ADWC, the data files must first be copied to a specific landing zone, a <em>Bucket</em> in the Oracle Cloud Infrastructure <em>Object Storage</em> service. The first steps to do this are described in the <a href="http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/adwc/OBE_Loading%20Your%20Data/loading_your_data.html">Oracle Autonomous Data Warehouse Cloud Service Tutorial</a>. The Oracle Cloud Infrastructure command line interface <a href="https://docs.cloud.oracle.com/iaas/Content/API/Concepts/cliconcepts.htm">CLI</a> can also be used to upload the files.</p> <p>The tutorial uses the procedure <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/dbmscloud-reference.html#GUID-9428EA51-5DDD-43C2-B1F5-CD348C156122"><em>DBMS_CLOUD.copy_data</em></a> to load the data into the target tables. The procedure creates a temporary external table in the background and drops it at the end. Another procedure, <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/dbmscloud-reference.html#GUID-2AFBEFA4-992E-4F53-96DB-F560084C7DA9"><em>DBMS_CLOUD.create_external_table</em></a>, is available to create a reusable External Table on a file in the Object Storage. But is it possible to create an External Table manually, too? To check this, I extracted the DDL command of the table CHANNELS (created with <em>DBMS_CLOUD.create_external_table</em>):</p> <p><img title="exttab_ddl.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexttab_ddl.jpg?w=599&#038;h=279" alt="Exttab ddl" width="599" height="279" border="0" /></p> <p>Then, I created a new table CHANNELS_2 with exactly the same definition, only with a different name. It seems obvious that both tables should contain the same data. But this is not the case; table CHANNELS_2 returns no data:</p> <p><img title="exttab_query.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexttab_query1.jpg?w=490&#038;h=518" alt="Exttab query" width="490" height="518" border="0" /></p> <p>First, I was confused. Then I thought it had to do with missing privileges. Finally, I assumed I was dazed because of the heat in Chris’ garden. But the reason is a different one: CHANNELS_2 is not an External Table, but a normal heap-organized table. Even though it was created with an ORGANIZATION EXTERNAL clause! Extracting the DDL command shows what happened:</p> <p><img title="exttab_ddl_2.jpg" src="https://danischnider.files.wordpress.com/2018/07/iexttab_ddl_2.jpg?w=598&#038;h=196" alt="Exttab ddl 2" width="598" height="196" border="0" /></p> <p>What is the reason for this behavior? 
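<p>For reference, before getting to the explanation: the supported way to define such an External Table is the <em>DBMS_CLOUD.create_external_table</em> procedure mentioned above. The following is only a minimal sketch, with placeholder values for the credential name, the Object Storage file URI and the column list; they are not the actual definitions of the CHANNELS table used in this test:</p> <pre>
BEGIN
   -- The credential for the Object Storage bucket (placeholder name) is
   -- assumed to have been created beforehand with DBMS_CLOUD.CREATE_CREDENTIAL.
   DBMS_CLOUD.CREATE_EXTERNAL_TABLE(
      table_name      => 'CHANNELS_EXT',
      credential_name => 'OBJ_STORE_CRED',
      file_uri_list   => 'https://objectstorage.us-phoenix-1.oraclecloud.com/n/mytenancy/b/mybucket/o/channels.txt',
      format          => '{"delimiter" : ","}',
      column_list     => 'CHANNEL_ID NUMBER, CHANNEL_DESC VARCHAR2(20), CHANNEL_CLASS VARCHAR2(20)'
   );
END;
/
</pre> <p><em>DBMS_CLOUD.copy_data</em> takes essentially the same parameters, but loads the file contents directly into an existing target table instead of leaving a reusable External Table behind.</p> 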
The explanation can be found in Appendix B: <a href="https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/experienced-database-users.html#GUID-58EE6599-6DB4-4F8E-816D-0422377857E5">Autonomous Data Warehouse Cloud for Experienced Oracle Database Users</a> (the most interesting part of the ADWC documentation): Most clauses of the CREATE TABLE command are either ignored or not allowed in the Autonomous Data Warehouse. In ADWC, you cannot manually define physical properties such as tablespace name or storage parameters. No additional clauses for logging, compression, partitioning, in-memory, etc. are allowed. They are either not supported in ADWC (like Partitioning), or they are automatically handled by the Autonomous Database. According to the documentation, creating an External Table should not be allowed (i.e. it should return an error message), but instead the clause is just ignored. The same happens for index-organized tables, by the way.</p> <h1>Conclusion</h1> <p>External Tables are supported (and even recommended) in the Autonomous Data Warehouse Cloud, but they cannot be created manually &#8211; we are in an <span style="text-decoration:underline;">Autonomous</span> Database.</p> <p>If you follow the steps explained in the documentation and use the provided procedures in package DBMS_CLOUD, everything works fine. If you try to do it the “manual way”, you will get unexpected behavior and probably lose a lot of time trying to find your data in the files.</p> <p>The PL/SQL package <em>DBMS_CLOUD</em> contains many additional useful procedures for file handling in the Cloud, but not all of them are documented. A complete reference of all its procedures with some examples can be found in Christian Antognini’s blog post <a href="https://antognini.ch/2018/07/dbms_cloud-package-a-reference-guide/">DBMS_CLOUD Package – A Reference Guide</a>.</p> Dani Schnider http://danischnider.wordpress.com/?p=580 Wed Jul 04 2018 17:29:55 GMT-0400 (EDT) Building interactive charts and tables in Power Point with Smart View https://garycris.blogspot.com/2018/07/building-interactive-charts-and-tables.html Do you get tired of recreating the same PowerPoint decks each month when your numbers change?&nbsp; Wouldn't it be great if you could just push a button and have the numbers in your ppt slide update to what is in the database?&nbsp; Wouldn't it be even better if the data was used in visually rich MS Office objects such as tables and charts?&nbsp; And, wouldn't it be awesome if you could interact with the data in real time during your presentation?<br /><br />Well, did you know Smart View for Power Point does all of that?&nbsp; That's right, I said Smart View for PowerPoint; Smart View is not just for Excel.<br /><br />Smart View has had the functionality to work with the MS Office suite for some time, but frankly the functionality outside of Excel has been limited and challenging to work with at times.&nbsp; While the Power Point and Word functionality are not 100%, they have come a long way and with some patience you can make some really nice PowerPoint slides that are interactive with your underlying EPM data sources.<br /><br />At Kscope18 I did a presentation with fellow Oracle Ace and GE colleague Gary Adashek.&nbsp; We did a Shark Tank style pitch to "investors" on why they should help us "Save The Beverage Company".&nbsp; If you don't know what The Beverage Company is, your Essbase street credit is going to take a serious hit.&nbsp; Of course I am referring to the fabled bottling company made 
famous by the Essbase Sample Basic database.&nbsp; Gary and I figured we could use our Smart View skills to create some really slick ppt slides to convince the investors to help us save this favored institution that had been around since the Arbor days.&nbsp; Besides having some fun trying to convince our panel of investors (see the Photoshop pics at the end of this post) while the audience watched, we wanted to convey the very real message that you can do some interesting things in PowerPoint with Smart View.<br /><br />In my effort to communicate useful EPM tips across various mediums, this seemed like a good topic for a blog post.&nbsp; In this tutorial, I am going to walk through how to create interactive Smart View objects in PowerPoint.&nbsp; I am working with Essbase 11.1.2.4, Office 2016, and Smart View 11.1.2.5.800.&nbsp; I suggest using this latest version of Smart View since it has some fixes in it specifically associated with PowerPoint.<br /><br />The first thing you will need to decide is how you are going to link your source data to your ppt.&nbsp; You have three options to create what is referred to in Smart View as a 'function grid'.&nbsp; You can base your function grid on<br /><br /><ol><li>A Hyperion Financial Report (HFR) grid</li><li>A Data Form (Hyperion Planning, (e)PBCS, and HFM-presumably-I have not tested)</li><li>An ad hoc Smart View retrieve in Excel.</li></ol><br />Each one of these have their pros and cons PBCS data forms seem to have the most functionality while also being the most stable.&nbsp; HFR grids are stable, but they lack the ability to change the POV after they have been turned into a function grid.&nbsp; Excel has the most functionality in terms of different objects, but it is less stable since you are creating the function grid from an ad hoc report in Excel.<br /><br /><h3>Forms</h3>So to start off let's take a look at building a ppt slide using a PBCS form as a source for a chart.<br /><br />First step is to either create a form or select one that has the data you are looking for.&nbsp; Keep in mind if your goal is to make a chart, not all forms are set up correctly to make a nice chart.&nbsp; In my experience so far, I have created a separate folder for forms called 'Forms for PPT' where I save the ones I have created specifically for this purpose.<br /><br />This is the form I created for demonstration.&nbsp; You can see it is pretty straightforward, but note that I did add a dimension to the Page section of the form; you'll see why in a little bit.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-vTuT2gwEZaw/WzvqdsSkHFI/AAAAAAAAAas/WT7M0v9kINUKPS_96A3ojZGB3AYi3LrOACLcBGAs/s1600/Screen%2BShot%2B2018-07-03%2Bat%2B5.27.46%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="940" data-original-width="1600" height="233" src="https://1.bp.blogspot.com/-vTuT2gwEZaw/WzvqdsSkHFI/AAAAAAAAAas/WT7M0v9kINUKPS_96A3ojZGB3AYi3LrOACLcBGAs/s400/Screen%2BShot%2B2018-07-03%2Bat%2B5.27.46%2BPM.png" width="400" /></a></div><br /><br />When working with a Data Form or HFR report as a source you can begin directly from Power Point; there is no need for Excel.<br /><br /><b>Steps</b><br /><br /><ol><li>Open Power Point and start with a blank presentation</li><li>Connect to your data source, in this case PBCS, and navigate to Forms folder and select the form you created as the basis for your chart</li><div class="separator" style="clear: both; 
text-align: center;"><a href="https://1.bp.blogspot.com/-4NU0rx1afoM/WzvgKbCSieI/AAAAAAAAAZs/SLN8tIgRHW4_dGTS733EFduE9X7yf7SGwCLcBGAs/s1600/Screen%2BShot%2B2018-07-03%2Bat%2B4.44.11%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://1.bp.blogspot.com/-4NU0rx1afoM/WzvgKbCSieI/AAAAAAAAAZs/SLN8tIgRHW4_dGTS733EFduE9X7yf7SGwCLcBGAs/s400/Screen%2BShot%2B2018-07-03%2Bat%2B4.44.11%2BPM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><li>At the bottom of the Smart View panel, click on 'Insert Chart'</li><ol><li>Be patient this step may take a minute or so while Office renders the object</li><li>It may also be a good idea to ensure Excel is closed before doing this.&nbsp; <b style="background-color: yellow;">I have found that if Excel is open prior to inserting the chart it times out.</b>&nbsp; Technically they are using the Excel chart engine to render the chart and insert it into Power Point</li></ol><li>Once the chart is rendered you can resize it and move it around your slide to desired location.&nbsp; I do not recommend trying to move it to another slide, if you want it on another slide it seems best to repeat the steps.</li><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-VZxJ7v06eME/Wzvhh5zNypI/AAAAAAAAAZ4/DpJtFEBQJjIpZEqb_tNJqMwjO4En_C7FgCLcBGAs/s1600/Screen%2BShot%2B2018-07-03%2Bat%2B4.49.13%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://1.bp.blogspot.com/-VZxJ7v06eME/Wzvhh5zNypI/AAAAAAAAAZ4/DpJtFEBQJjIpZEqb_tNJqMwjO4En_C7FgCLcBGAs/s400/Screen%2BShot%2B2018-07-03%2Bat%2B4.49.13%2BPM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><li>Once the chart is created you can now make changes to it as you would a typical Office object.&nbsp; You can go to the chart designer and change the chart type or the color theme or various other options.&nbsp; Smart View does provide some of the options in a pop-up menu you will see if you click on the chart, but the options there are similar to the ones on the chart design ribbon, with the exception of the filter function, which allows you to filter out certain members.&nbsp; The filter function gives the option to potentially use a large form with a lot of data and then filter it in ppt, rather than having to create multiple forms.&nbsp; You can also insert your regular ppt content and wind up with something that looks like this.&nbsp; &nbsp;</li><ol><ol><li>&nbsp;<a href="https://4.bp.blogspot.com/-7APnsZLwDW4/Wzvj_4yoW-I/AAAAAAAAAaY/8txfatZH2qodFmpSvDwgS3l93MlnoUIewCLcBGAs/s1600/Screen%2BShot%2B2018-07-03%2Bat%2B4.59.33%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://4.bp.blogspot.com/-7APnsZLwDW4/Wzvj_4yoW-I/AAAAAAAAAaY/8txfatZH2qodFmpSvDwgS3l93MlnoUIewCLcBGAs/s400/Screen%2BShot%2B2018-07-03%2Bat%2B4.59.33%2BPM.png" width="400" /></a></li></ol></ol><li>Now that I have a nice chart I can take it one step further and make it interactive.&nbsp; Remember before when I mentioned I put a dimension in the page section of the form?&nbsp; Let's go back to the Smart View panel hit the drop down next to the little house icon and 
select 'Document Contents'.&nbsp; Click on your function grid and then at the bottom of the panel click on 'Insert Reporting Object/Control'.&nbsp; Now, click on the POV object</li><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/--DWd8vVtmgg/WzvsdjVsaEI/AAAAAAAAAa4/Stqo62KdGHYWlFDXCNCVMxWcS0tzjqFkQCLcBGAs/s1600/Screen%2BShot%2B2018-07-03%2Bat%2B5.36.27%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://2.bp.blogspot.com/--DWd8vVtmgg/WzvsdjVsaEI/AAAAAAAAAa4/Stqo62KdGHYWlFDXCNCVMxWcS0tzjqFkQCLcBGAs/s400/Screen%2BShot%2B2018-07-03%2Bat%2B5.36.27%2BPM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><li>You will see a grey box inserted onto the slide.&nbsp; Note that this POV box will not become active until you enter Power Point presentation mode.&nbsp; While in presentation mode you can hit the drop down next to the dimension member that was placed in the Page section of the form and select a different member; hit the refresh button and your objects will re-render with the new data.&nbsp;&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-_tBgp14-J_g/WzvuVR2xUpI/AAAAAAAAAbM/IoVG-B29pFARF04Icsd4chlyc-3C6ZVwwCLcBGAs/s1600/Screen%2BShot%2B2018-07-03%2Bat%2B5.44.27%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://1.bp.blogspot.com/-_tBgp14-J_g/WzvuVR2xUpI/AAAAAAAAAbM/IoVG-B29pFARF04Icsd4chlyc-3C6ZVwwCLcBGAs/s400/Screen%2BShot%2B2018-07-03%2Bat%2B5.44.27%2BPM.png" width="400" /></a></div></li></ol><ol><div class="separator" style="clear: both; text-align: center;"><br /></div></ol><div>So you can see that I was able to very quickly create a presentation quality image based off my PBCS data.&nbsp; Next time my data changes, I can open this ppt file, go to Smart View refresh and the object will pull in the new data and update the object accordingly.</div><div><br /><br /><h3>HFR Grid</h3>Next, let's look at how to insert a HFR report</div><div><br /></div><div>The steps for inserting an HFR report are similar but there are a few differences.&nbsp; First, like the data forms, you need to start off with a report that has the data you want.&nbsp; I created an HFR report similar to the data form in previous example.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-bpELnbouKE4/Wz5qYIDhQII/AAAAAAAAAbo/wkt2K6lbR_4BT2SpRo6yMVhrfHbFc3F-QCLcBGAs/s1600/Screen%2BShot%2B2018-07-05%2Bat%2B2.56.09%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="638" data-original-width="1544" height="165" src="https://1.bp.blogspot.com/-bpELnbouKE4/Wz5qYIDhQII/AAAAAAAAAbo/wkt2K6lbR_4BT2SpRo6yMVhrfHbFc3F-QCLcBGAs/s400/Screen%2BShot%2B2018-07-05%2Bat%2B2.56.09%2BPM.png" width="400" /></a></div><br /><br /><b>Steps</b><br /><br /><ol><li>Open Power Point and start with a blank presentation</li><li>Connect to your data source, in this case we are still using PBCS but we are going to choose the Reporting provider instead of the EPM provider.&nbsp; Navigate to the folder where you saved your report and select it.&nbsp; Then hit the Open link at the bottom of the Smart View panel&nbsp;<div class="separator" style="clear: both; 
text-align: center;"><a href="https://3.bp.blogspot.com/-77jcFJ17jME/Wz5rztKPP3I/AAAAAAAAAb0/Va0atDvM3FEPQPM7h9SgxQ8p-v5WdrhYQCLcBGAs/s1600/Screen%2BShot%2B2018-07-05%2Bat%2B3.04.35%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://3.bp.blogspot.com/-77jcFJ17jME/Wz5rztKPP3I/AAAAAAAAAb0/Va0atDvM3FEPQPM7h9SgxQ8p-v5WdrhYQCLcBGAs/s400/Screen%2BShot%2B2018-07-05%2Bat%2B3.04.35%2BPM.png" width="400" /></a></div></li><li><div class="separator" style="clear: both; text-align: center;"><br /></div></li><li>When you click Open, the Import Workspace Document window will open.&nbsp; By default you can import the report as an image, but we want to hit the drop down and select Function Grid instead<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-RGDCsiea_Fs/Wz5saPZsXqI/AAAAAAAAAb8/pRzLRyEcX88BkuKSSCLp_XrSlXfSGHj-wCLcBGAs/s1600/Screen%2BShot%2B2018-07-05%2Bat%2B3.07.00%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1104" data-original-width="1600" height="275" src="https://3.bp.blogspot.com/-RGDCsiea_Fs/Wz5saPZsXqI/AAAAAAAAAb8/pRzLRyEcX88BkuKSSCLp_XrSlXfSGHj-wCLcBGAs/s400/Screen%2BShot%2B2018-07-05%2Bat%2B3.07.00%2BPM.png" width="400" /></a></div></li><li>&nbsp;Click Finish</li><li>You will be taken back to your slide and the Document Contents pane will be active in the Smart View panel. Click on the Insert New Reporting Object/Control link</li><li>A new window pops up, scroll down and select Chart (note there is no option for POV).&nbsp; Your chart is inserted and associated with the function grid, same as above with PBCS form.</li></ol><div>You can now work with the chart the same way you did in the steps above.&nbsp; So now, let's take a minute to explore some of the other objects (note these work the same if you are using a form).</div><ol><li>Return to the Document Contents pane, select your function grid connection and insert another object&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-AVV8Vpw98x0/Wz5uJlDVSuI/AAAAAAAAAcQ/nfN77jUBV-IMe3amxYhkPDcwvg2RjUzWACLcBGAs/s1600/Screen%2BShot%2B2018-07-05%2Bat%2B3.14.33%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="200" src="https://2.bp.blogspot.com/-AVV8Vpw98x0/Wz5uJlDVSuI/AAAAAAAAAcQ/nfN77jUBV-IMe3amxYhkPDcwvg2RjUzWACLcBGAs/s320/Screen%2BShot%2B2018-07-05%2Bat%2B3.14.33%2BPM.png" width="320" /></a></div></li><li>This time let's add an Office Table</li><li>Once the table is inserted, click on the table and then go to the PowerPoint Design Ribbon and select a format for the table; you can then repeat for the chart.&nbsp; You may also want to increase the font for the table.&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-OC0LaQhGoek/Wz5xqdeyzZI/AAAAAAAAAcs/pC_wy-sm_n8me4w04Gs0J7oazwIqMqJgACLcBGAs/s1600/Screen%2BShot%2B2018-07-05%2Bat%2B3.29.35%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="200" src="https://4.bp.blogspot.com/-OC0LaQhGoek/Wz5xqdeyzZI/AAAAAAAAAcs/pC_wy-sm_n8me4w04Gs0J7oazwIqMqJgACLcBGAs/s320/Screen%2BShot%2B2018-07-05%2Bat%2B3.29.35%2BPM.png" width="320" /></a></div></li><li>Insert a new slide into 
your ppt</li><li>On slide 2, insert a new reporting object/control, select Function Grid</li><li>Note that unlike the Office table, the function grid inserts multiple text boxes, some with labels, and others with active data links to your data source.&nbsp; You can arrange these objects anywhere you would like and again click on the design ribbon to alter the way the object are formatted.&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-pymaSy-Fczw/Wz50eTMksbI/AAAAAAAAAdA/EX0OsrTZa7MdmMSn35pcRTyOcLlztMeXACLcBGAs/s1600/Screen%2BShot%2B2018-07-05%2Bat%2B3.41.18%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://2.bp.blogspot.com/-pymaSy-Fczw/Wz50eTMksbI/AAAAAAAAAdA/EX0OsrTZa7MdmMSn35pcRTyOcLlztMeXACLcBGAs/s400/Screen%2BShot%2B2018-07-05%2Bat%2B3.41.18%2BPM.png" width="400" /></a></div></li></ol><div><br /></div>There are a number of options to play with to get the format the way you would like.&nbsp; Note that from time to time I have encountered a few bugs and some inconsistencies in behavior between data sources.&nbsp; I encourage you to log an SR with Oracle for any you come across to get this product working as well as possible.</div><div><br /><br /><h3>Excel</h3><br /></div><div>For the last data source, let's look at an ad hoc from Excel.&nbsp; Note I will use PBCS but this works for other data sources such as Essbase as well.</div><div><br /></div><div><br /></div><div><b>Steps</b><br /><br /><ol><li>Open Excel and start with a blank workbook</li><li>Connect to EPM data source via Smart View</li><li>Using ad hoc analysis, create a basic retrieve&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-pohQVkYEUUY/W0PEAyGwmNI/AAAAAAAAAd8/y_S3IZunb7E80rpB64Q52hsdgiddPEFRwCLcBGAs/s1600/Screen%2BShot%2B2018-07-09%2Bat%2B4.22.06%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="816" data-original-width="986" height="330" src="https://4.bp.blogspot.com/-pohQVkYEUUY/W0PEAyGwmNI/AAAAAAAAAd8/y_S3IZunb7E80rpB64Q52hsdgiddPEFRwCLcBGAs/s400/Screen%2BShot%2B2018-07-09%2Bat%2B4.22.06%2BPM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div></li><li>Select your data range by dragging mouse over the cells with data</li><li>Go to Smart View ribbon and click Copy (note this is not the same as Excel Copy or ctrl + C)</li><li>Open Power Point blank presentation</li><li>Go to Smart View Ribbon and click Paste (note this is not the same as Paste on the Home ribbon or ctrl + V)</li><li>At this point you will see that the function grid is actually placed in the slide.&nbsp; Go ahead and run the refresh&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-6pbEhtXMO3Y/W0PESLHLd0I/AAAAAAAAAeE/NBk_XXDOjPwJcVgL9NNtxWYXMrRNW6JcQCLcBGAs/s1600/Screen%2BShot%2B2018-07-09%2Bat%2B4.23.22%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://2.bp.blogspot.com/-6pbEhtXMO3Y/W0PESLHLd0I/AAAAAAAAAeE/NBk_XXDOjPwJcVgL9NNtxWYXMrRNW6JcQCLcBGAs/s400/Screen%2BShot%2B2018-07-09%2Bat%2B4.23.22%2BPM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div></li><li>Now let's add a chart and a POV 
slider: go to Smart View panel and go to document contents, select your Smart View link, and then click on Insert New Reporting Object/Control.</li><li>Select chart</li><li>Go back to Document Content, select your Smart View link, and then click on Insert New Reporting Object/Control.</li><li>Scroll to bottom and select Slider</li><li>Select the dimension you want the slider for, I am going to choose Years, with members FY18, FY17, and FY16.</li><li>Now when I enter presentation mode, my slider becomes active.&nbsp; I can use my mouse to slide to different year selections&nbsp;<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-kVkiv8z1fWQ/W0PF-qY7rAI/AAAAAAAAAeQ/BMf7rcniI1cYXQxC-iLrV5epBw2T2GRRQCLcBGAs/s1600/Screen%2BShot%2B2018-07-09%2Bat%2B4.30.11%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="1600" height="250" src="https://4.bp.blogspot.com/-kVkiv8z1fWQ/W0PF-qY7rAI/AAAAAAAAAeQ/BMf7rcniI1cYXQxC-iLrV5epBw2T2GRRQCLcBGAs/s400/Screen%2BShot%2B2018-07-09%2Bat%2B4.30.11%2BPM.png" width="400" /></a></div></li></ol><br /><br /><br /></div><div><b>Conclusion</b><br /><br />There are an endless number of combinations and examples I could show, but I think this is a good stopping point.&nbsp; If you were able to follow along and complete the steps you now have the basic understanding of how to create Smart View Power Point objects that are linked to your EPM data source.&nbsp; Experiment with different objects, and different data sources; I think you will find some very cool features.&nbsp; Don't be discouraged if you run up against something that doesn't work right, the Oracle team has been very responsive and you just need to log an SR so they become aware of the issue.&nbsp; Sometimes it is what we are doing, sometimes it is bug, but as I said at the beginning of the post, the product has come a long way and I believe it can be very useful in the hands of the right users.<br /><br />Best of luck, let me know your thoughts in the comment section.<br /><br /><br /><b>Post-Conclusion</b><br /><b><br /></b>Pics from Kscope18 Save The Beverage Company presentation<br /><br />1. 
Meet The Sharks, The Founders, and The Presenters!<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-8Vbxzx5Bnp0/W0PI6LVDtkI/AAAAAAAAAfA/20NwqLNLZ3URC7P-lkO6VPehHCsHj3f-gCLcBGAs/s1600/DgmUCPAU8AAF3S.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1073" data-original-width="1600" height="267" src="https://2.bp.blogspot.com/-8Vbxzx5Bnp0/W0PI6LVDtkI/AAAAAAAAAfA/20NwqLNLZ3URC7P-lkO6VPehHCsHj3f-gCLcBGAs/s400/DgmUCPAU8AAF3S.jpg" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><br /></div><div><br /><br /></div> Gary Crisci, Oracle Ace tag:blogger.com,1999:blog-5844205335075765519.post-4250408008929775537 Tue Jul 03 2018 17:47:00 GMT-0400 (EDT) UnifyBI Community Edition: Free OBIEE-Tableau Connector https://www.us-analytics.com/hyperionblog/unifybi-community-edition <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/unifybi-community-edition" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/unifybi%20community%20edition.jpg?t=1533950236061" alt="unifybi community edition" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Typically, issues that arise with your enterprise software require costly solutions, but occasionally you’ll find something that’s both helpful and affordable.</p> <p>UnifyBI, the <a href="https://www.us-analytics.com/unifybi">connector for Oracle BI and Tableau</a>, is now available in a community edition and <strong>free to download</strong>. That’s right, free. So, what’s the catch? It doesn’t come with all the bells and whistle of the paid versions, but it will still help the community of Tableau and Oracle BI users solve their biggest problems.</p> <p>This blog post will cover the issues the Community Edition helps solve, installation, as well as how its features stack up to the UnifyBI Professional Edition and Server Edition.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Funifybi-community-edition&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Nicholas Padgett https://www.us-analytics.com/hyperionblog/unifybi-community-edition Tue Jul 03 2018 17:20:00 GMT-0400 (EDT) PBCS and EPBCS Updates (July 2018): New Data Integration Component, Updated Vision Sample Application, Considerations & More https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-july-updates <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-july-updates" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/pbcs%20and%20epbcs%20july%202018%20updates.jpg?t=1533950236061" alt="pbcs and epbcs july 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The July updates for Oracle's<span>&nbsp;</span><a 
href="https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications">Planning &amp; Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS)</a><span>&nbsp;have arrived!&nbsp;</span>This blog post outlines several new features, including a new data integration component, updated vision sample application, considerations, and more.</p> <p><em>The monthly update for PBCS and EPBCS will occur on Friday, July 20 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-and-epbcs-2018-july-updates&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-july-updates Mon Jul 02 2018 14:29:20 GMT-0400 (EDT) ARCS Updates (July 2018): Power Calculation, Drag and Drop Attachments, Considerations & More https://www.us-analytics.com/hyperionblog/arcs-product-update-july-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/arcs-product-update-july-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/arcs%20july%202018%20updates.jpg?t=1533950236061" alt="arcs july 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The July updates for Oracle's&nbsp;<a href="https://www.us-analytics.com/hyperionblog/faq-account-reconciliation-cloud-service-arcs">Account Reconciliation Cloud Service (ARCS</a>) are here. In this blog post, we’ll outline new features in ARCS, including adding power calculation, drag and drop attachments, considerations, and more.</p> <p>We’ll let you know any time there are updates to ARCS or any other Oracle EPM cloud products. 
Check the&nbsp;<a href="https://www.us-analytics.com/hyperionblog">US-Analytics Oracle EPM &amp; BI Blog</a><span>&nbsp;</span>every month.</p> <p><em>The monthly update for Oracle ARCS will occur on Friday, July 20 during your normal daily maintenance window.</em></p> <h3 style="text-align: center;"></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Farcs-product-update-july-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/arcs-product-update-july-2018 Mon Jul 02 2018 13:12:47 GMT-0400 (EDT) FCCS Updates (July 2018): Migration Export Scenarios, Simpler Member Selector, Upcoming Changes & More https://www.us-analytics.com/hyperionblog/fccs-updates-july-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/fccs-updates-july-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/fccs%20july%202018%20updates.jpg?t=1533950236061" alt="fccs july 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The July updates for&nbsp;<a href="https://www.us-analytics.com/hyperionblog/faq-oracle-financial-consolidation-and-close-cloud-service-fccs">Oracle's<span>&nbsp;Financial Consolidation and Close Cloud Service</span>&nbsp;(FCCS)</a><span>&nbsp;are here!</span><span>&nbsp;</span>This blog post outlines new features, including migration export scenarios, simpler member selector, some upcoming changes, and more.</p> <p><em>The monthly update for FCCS will occur on Friday, July 20 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Ffccs-updates-july-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/fccs-updates-july-2018 Mon Jul 02 2018 12:32:40 GMT-0400 (EDT) EPRCS Updates (July 2018): Row Branding, Preview POV, Considerations & More https://www.us-analytics.com/hyperionblog/eprcs-updates-july-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/eprcs-updates-july-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/eprcs%20july%202018%20updates.jpg?t=1533950236061" alt="eprcs july 2018 updates" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this blog, we'll cover the July updates for&nbsp;<a href="https://www.us-analytics.com/hyperionblog/enterprise-performance-reporting-cloud">Oracle Enterprise Performance Reporting Cloud Service (EPRCS)</a>&nbsp;including new features and considerations.</p> 
<p><em>The monthly update for EPRCS will occur on Friday, July 20 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Feprcs-updates-july-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/eprcs-updates-july-2018 Mon Jul 02 2018 11:25:55 GMT-0400 (EDT) Using Oracle’s Baseline Validation Tool with OBIEE — Part 4 https://blog.redpillanalytics.com/using-oracles-baseline-validation-tool-with-obiee-part-4-a1ebf08b0c4a?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*1UPaSgyq7QbrTpuhsONamw.jpeg" /></figure><p>This post is part 4 of 4 about using the Baseline Validation Tool (BVT) with Oracle Business Intelligence (OBIEE). The previous posts are:</p><ul><li><a href="https://blog.redpillanalytics.com/using-oracles-baseline-validation-tool-with-obiee-part-1-328be6fbb6bc">Post 1: An intro to BVT</a></li><li><a href="https://blog.redpillanalytics.com/using-oracles-baseline-validation-tool-with-obiee-part-2-c9a5572f80bb">Post 2: Setting up BVT</a></li><li><a href="https://blog.redpillanalytics.com/using-oracles-baseline-validation-tool-with-obiee-part-3-b79dfff662a0">Post 3: Running the Tests</a></li></ul><p><strong>Interpreting the BVT Results</strong></p><p>As described in my third post in this series, here’s how to find the BVT test results:</p><ul><li>After you run the –compareresults command, the results of the comparison will be stored in a folder in your oracle.bi.bvt folder called Comparisons.</li><li>The next level will contain a folder with the two deployments that you ran the comparison against.</li><li>Within that folder you’ll see a folder for each test plugin that you ran. Finally you’ll see an HTML file named after the plugin (ie. ReportPlugin.html, CatalogPlugin.html, etc.) which is the gold at the end of the rainbow that you were searching for.</li><li>Open this HTML file in a web browser.</li></ul><p>When reading any of your result html files, the first section will be a summary of what was found.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*S6VURqDep4fkigmEE1Tr4A.png" /></figure><p>You’ll notice that in the results tables that I describe below, there are columns for First Deployment and Second Deployment. Image 2 (inline) These correspond to the order of the deployments that you gave when you asked to compare results. If you look at my example from above, my first deployment refers to what I named Dev and the second is Test. These correspond to the order I entered the deployment results when I compared the results:</p><p>bin\obibvt.bat –compareresults SST_Report_Results\Dev SST_Report_Results\Test -config Scripts\SST_Report_Test.xml</p><p>The results will then be grouped into section categories based on the result of the test. The report name will be given but unfortunately the path is not included. If you hover over one of the SERVER LINKs you’ll see the path on the OBI server to the report so you’ll see the folder path within the Catalog there but it’s not easy to read. 
This is another reason why I like to run BVT on each folder individually because the results can get messy on really large folders with many stakeholders who care about the BVT results.</p><h3><strong>Catalog Test</strong></h3><p>You’ll find the Catalog test results in the comparison folder then in the folder named with the two deployments that you were comparing (“Dev — Test” in my example). Then in the folder called “com.oracle.obiee.bvt.plugin.catalog”, open the HTML file called CatalogPlugin.html.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*akOwRNZwP-bBGWi-T56rOw.png" /></figure><p>The results of the Catalog test do not include a “Passed” category like other tests. You can assume that if a report does not show up in any of the 3 possible categories that it was identical in both instances.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wyJsRMUI2ZaVgB1A92YTdg.png" /></figure><p>The sections you’ll find in the Catalog Plugin Test results are the following:</p><p>Items Only in First Deployment — These are reports found in your first instance that you tested but are not in the second.</p><p>Items Only in Second Deployment — These are reports found in your second instance that you tested but are not in the first.</p><p>Items Different — These reports were found in both instances but something was different between them. The Name column will tell you what was different.</p><p>Acl means that the users with access to this report are different. This is because a server specific user id value is used so if you are comparing reports on different servers, you would expect these to be different.</p><h3><strong>Report Test</strong></h3><p>You’ll find the Report test results in the comparison folder then in the folder named with the two deployments that you were comparing (“Dev — Test” in my example). Then you’ll see folders for each type of export that you could have done for the Report test which includes CSV, PDF, and Excel.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/780/1*_GeFLiTdv0VkEuApSpvhJw.png" /></figure><p>In each of those export folders, you’ll find the export report results. Open the HTML file called ReportPlugin.html.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/982/1*d4JAt8KfuqXUaHNslna-fg.png" /></figure><p>The sections you’ll find within the Report Plugin test results include:</p><p>Report Baseline Verification Passed — The report exports were identical.</p><p>Report Baseline Verification Failures — The exports of this report did not match. Reminder in case you have a lot of results in this list: in the config you could consider changing the threshold for decimal place differences on the CSV export. Otherwise, see note below about discovering differences in the report versions.</p><p>Report Not Found — The report was found on the first deployment but not on the second.</p><p>Reports Failed to Download — BVT was unable to get an export of data from this report. 
Could be due to the report taking too long to return data (try increasing time limit in config) or the report might be throwing an error.</p><p>Unprocessed Reports — BVT was unable to compare the data of these report exports.</p><p>New Report Found — This report exists in the second deployment but not in the first.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GKns0YDPY1siyvvT7bgxMA.png" /></figure><p><em>Failed Report Tests</em></p><p>An improvement was made in the latest version of BVT (Version 18.1.10) which now shows a Failure Reason. While this is helpful, it still requires you manually comparing your report extracts. Also, it seems that the reason is not being displayed for my CSV exports, only Excel and PDF comparisons.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*bTpc60FYaZNoQ7U4fKg5Cw.png" /></figure><p>It’s quite a manual process to start troubleshooting what’s different on these reports. I use a text comparison tool (like BeyondCompare that includes a 30-day free trial). The file comparison tool will show the file line by line and highlight any differences. You can find the report exports in your Results folder for each instance that you tested against. See the <em>Finding the Test’s Export Files</em> section for help finding these export files.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YdABDy07G_TmW3uHqlEVUA.png" /></figure><p>In each of the result sections, you’ll see the same headers:</p><p># — A sequence number for the reports in this section</p><p>Name — The name of the report</p><p>First Server Link — A hyperlink to the report on the first deployment server</p><p>Second Server Link — A hyperlink to the report on the second deployment server</p><p>Result — The result of the comparison of the two deployment report exports</p><p>First Deployment — A link to the first deployment BVT report export file in the file system. If this link doesn’t take you anywhere, then no report export file exists.</p><p>Second Deployment — A link to the second deployment BVT report export file in the file system. If this link doesn’t take you anywhere, then no report export file exists.</p><h3><strong>UI Test</strong></h3><p>You’ll find the UI test results in the comparison folder then in the folder named with the two deployments that you were comparing (“Dev — Test” in my example). Then in the folder called “com.oracle.obiee.bvt.plugin.ui”, open the HTML file called UIAnalysisPlugin.html.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/944/1*voVlm7FxQF9QfvduSuOvcg.png" /></figure><p>The UI test results HTML page will include the screenshot that BVT captured of each report tested. These are the images that the algorithm scored pixel-by-pixel.</p><p>The sections you’ll find within the UI Plugin test results include:</p><p>UI Analysis Baseline Verification Passed — The screenshots matched enough to meet the Score Threshold defined in the config.<br>UI Analysis Baseline Verification Failures — The screenshot exports from this report do not meet the Score Threshold defined in the config.<br>New Reports — This report exists in the second deployment but not in the first.<br>Reports Not Found —This report was found in the first deployment but not in the second.<br>Reports Failed to Compare — BVT was unable to compare the results from this report.<br>Reports Failed to Download — This report did not render in one or both of the environments. 
You can see which screenshots were included to see which deployment failed.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Mqrha2JrSqdvWjCO9o3OpA.png" /></figure><p>The third column displays the score that this report received. The test for each report will be marked as pass or fail based on the threshold that you defined in the config. The default threshold is 0.95, meaning that the score must be 95% or above to get a passing score. 1.0 is a perfect match.</p><p>Under each image is a hyperlink to the file location of each screen capture image file within your Results folder.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*OuNu0g8hEdd9AqOaa17QzA.png" /></figure><h3><strong>Dashboard Test</strong></h3><p>You’ll find the Dashboard test results in the comparison folder then in the folder named with the two deployments that you were comparing (“Dev — Test” in my example). Then in the folder called “com.oracle.obiee.bvt.plugin.dashboard”, open the HTML file called DashboardPlugin.html.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/934/1*5ABDLHJ5vfcQcxXoNDSfxg.png" /></figure><p>For the dashboard test, I found that if I point the CatalogRoot parameter to the Dashboards folder directly or a folder within it, BVT does not find the dashboards. I had to point the CatalogRoot to the folder containing the Dashboards folder. Another option is to use the ExportDashboardsToXML parameter set to false as I described previously in this series to request that BVT only exports the dashboards in the prompts file.</p><p>This text exports the xml of the report for comparison. The xml files that it is comparing look like the example below. You can see that it is an export of the data values displayed in the dashboard. Note that the export does not care how the data is displayed in the dashboard such as in a chart. The UI test will test the pixels of the charts. This test compares the data used in the dashboard.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6lEUIoDMVbH1Yz7x3UQSQw.png" /></figure><p>The dashboard test results can be either Fail, Pass, Missing or New Item. Missing means that the dashboard page is in the first deployment but not the second. New Item is the opposite of Missing where it exists in the second deployment but not the first.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*PsPFPQgLDF782fBoyOlMSg.png" /></figure><p>The counts that are displayed in this summary are at a dashboard page level. It’s possible that 4 out of 5 of the reports on one dashboard page passed but if one fails, the page will show up in the failed list.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WyfWS6MatmB0FaFo0HdLMg.png" /></figure><p>As you see in the image above, the results do not tell you piece of data was different. To compare these results, I recommend finding the xml output files and comparing similar to how you would with the report test in a file comparison tool. You can use an xml editor, text editor, or Excel to view these xml export files.</p><h3><strong>BVT Errors I’ve encountered</strong></h3><p><em>Single Sign On (SSO)</em></p><p>If you have SSO enabled in your environment, your BVT config file will be slightly different. For the Dashboard, UI and Report tests, you’ll need to use /analytics-ws as the path to your environment in the config. 
For the Catalog test you will continue to use /analytics.</p><p><em>Throwable: null SEVERE: Unhandled Exception</em></p><p>I came across this error when BVT immediately failed because I tried to run it against a deployment name that didn’t actually exist in my config file. Check your deployment name and run again.</p><p><em>Dashboard test does not export any dashboards</em></p><p>You must set the Catalog Root for the dashboards test at the level of the folder containing the Dashboard folder, or use ExportAllDashboards = false and include URLs to all of the dashboards to test in your prompts file.</p><p>Example:</p><p>If I want to test my Usage Tracking dashboards and they are contained in the folder structure shown in the Usage Tracking Folder screenshot, the Catalog Root would be set to “/shared/Usage” in this example.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a1ebf08b0c4a" width="1" height="1"><hr><p><a href="https://blog.redpillanalytics.com/using-oracles-baseline-validation-tool-with-obiee-part-4-a1ebf08b0c4a">Using Oracle’s Baseline Validation Tool with OBIEE — Part 4</a> was originally published in <a href="https://blog.redpillanalytics.com">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Michelle Kolbe https://medium.com/p/a1ebf08b0c4a Fri Jun 29 2018 12:38:25 GMT-0400 (EDT) PBCS Tutorial: New Rules Usage Report & Suppressed Row Members https://www.us-analytics.com/hyperionblog/pbcs-new-rules-usage-and-suppressed-row-members <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-new-rules-usage-and-suppressed-row-members" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/pbcs%20new%20rules%20usage%20and%20suppressed%20row%20members.jpg?t=1533950236061" alt="pbcs new rules usage and suppressed row members" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <div class="row"> <div class="col-sm-10 col-sm-offset-1"> <p>What’s great about Oracle’s cloud-based solutions is that you get updates every month, but this can also be a detriment. It’s difficult to prepare and adapt to updates every four weeks.
Which is why this blog post might be considered “old news,” but those that are ramping up for budget season might benefit from the reminder.</p> <p>In this blog post, we’ll cover a PBCS update that dates back to February 2018.</p> </div> </div> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-new-rules-usage-and-suppressed-row-members&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Melinda McGough https://www.us-analytics.com/hyperionblog/pbcs-new-rules-usage-and-suppressed-row-members Thu Jun 28 2018 15:51:37 GMT-0400 (EDT) Implementing PBCS at ODTUG https://garycris.blogspot.com/2018/06/implementing-pbcs-at-odtug.html As a Director on the ODTUG Board, I have the privilege of serving as the organization's Treasurer.&nbsp; Some of my duties include overseeing the organization's financial reporting as well as budgeting and forecasts.&nbsp; When I took over the role two years ago I did an assessment of our reporting capabilities and I was unhappy about our dependency on Excel as our primary reporting and analysis tool.&nbsp; Our controller did a fantastic job of pulling the data each month for our close and finance review, but we were limited to the reports that were created and any follow up questions or curiosities that required digging deeper posed a challenge and required someone to go off and manually work on it.<br /><br />As an EPM professional, I knew there were better tools out there and my colleagues on the board agreed.&nbsp; Of course this wasn't news, previous boards had also thought about this, but they were impaired by infrastructure requirements.&nbsp; As a not-for-profit organization ODTUG runs on tight margins; an investment in servers to support an on-premise implementation of EPM software was not practical.&nbsp; However, our board now had something our predecessors did not have, we had access to EPM in the Cloud.&nbsp; Running a SaaS application in the cloud would eliminate all the obstacles such as not having a data center or hardware.&nbsp; We also would not have to carry assets on our balance sheet; with a subscription service, EPM cloud would be a monthly operating expense.<br /><br />About a year ago we began discussing this during a board meeting, we wanted to take our FP&amp;A activities into PBCS and we put a plan in place.&nbsp; Working with our partners at Oracle we obtained a small lot of PBCS licenses to build out our application.&nbsp; I knew there would be a number of benefits if we could successfully implement PBCS at ODTUG.&nbsp; We would be able to run the organization better, and we could share the experience with our members as a training opportunity.<br /><br />My role on the board, along with the other board members, is a volunteer position; pretty much everyone at ODTUG is a volunteer, so embarking on a full scale implementation did make me a little nervous.&nbsp; "Would I have enough time to work on this?", "Am I biting off more than I can chew?" 
were a couple of the thoughts that went through my head, but having prior experience with PBCS I had some confidence that it could be done in a reasonable amount of time.&nbsp; So with the help of our controller from our management company, YCC, and some advice from fellow board member Jake Turrel, I jumped into the project.<br /><br />I've been in EPM for a long time, and I have implemented a number of Hyperion Planning applications.&nbsp; While PBCS may be considered by some to be "Hyperion Planning in the cloud", it's really so much more.&nbsp; I was able to have the basic construct of the application up in a couple of hours and was already beginning to load some test data.&nbsp; When I look at the length of the implementation, which lasted a couple of months in total (working part time on weekends and evenings), the least amount of time was spent on PBCS activities.&nbsp; It was really all the pre-work at the ledger that took the bulk of the time.&nbsp; Early on in the process I discovered I was going to have some issues loading data because our ledger, which is managed in Quickbooks, did not have rigid rules around master data.&nbsp; I quickly found multiple accounts with similar or same name and I knew this was going to be an issue that would haunt us if we didn't deal with it.&nbsp; I discussed with Jake and we agreed the first step to making the project a success was to clean up the ledger and implement a real chart of accounts.&nbsp; I brought it up to the finance committee and we voted unanimously to approve the project.&nbsp; A couple of weeks later I was on a plane to Wilmington, NC for a two day workout with our controller, Nancy.&nbsp; We spent two days recoding the chart, extracting the data, and loading it into PBCS.&nbsp; By the time we were done we had an enterprise class COA in place and all of our history was tied out in PBCS.<br /><br />Over the next couple of weeks I continued to work with the data and took full advantage of what PBCS has to offer, most notably the dashboards.&nbsp; As the Treasurer it is my job to report to the board each month how we are doing financially.&nbsp; Prior to the board meeting the finance sub-committee meets to review the financials.&nbsp; Both of these meetings would take a considerable amount of time as we combed through Excel reports; if we had any questions we would sometimes have to adjourn and reconvene at a later date after the information was collected.&nbsp; With the PBCS implementation underway, I was able to create various dashboards to show quickly and clearly the financial health of the organization.&nbsp; The dashboards showed more content than we had available in the past and with features like drill down enabled, we are able to explore the data more fluidly.&nbsp; Finance reviews now take about 5-10 minutes each month and board members have the ability to login and look at the data anytime they want.<br /><br />In addition to making life easier for the board and other ODTUG leaders and volunteers, we have also tried to identify where this project benefits our members.&nbsp; One thing the board has discussed is how can we be more transparent with the inner workings of ODTUG.&nbsp; We have discussed what financial information is appropriate for us to disclose to our members.&nbsp; In the past this would have incurred additional cost to ODTUG to prepare the information, but now that we have PBCS we can easily add some reports with various metrics for external consumption.&nbsp; This is an enhancement I am currently working 
on and will be disclosing in the near future.<br /><br />Overall our PBCS implementation has been a huge success and I am happy to be able to share this information with you.&nbsp; As mentioned above, I want to seize this moment and turn it into a learning opportunity for the EPM community.&nbsp; Since the implementation, I have shared this information at a few events and I plan to continue doing so in various formats.<br /><br />Back in May I did a presentation at a NYC meet-up on PBCS that focused on many of the learnings from the ODTUG implementation.&nbsp; At this year's Kscope I did a presentation:&nbsp;<a href="https://kscope18.odtug.com/page/presentations?name=happily-ever-after-odtug-and-oracle-enterprise-planning-and-budgeting-cloud-solution-epbcs" target="_blank">Happily Ever After: ODTUG and Oracle Enterprise Planning and Budgeting Cloud Solution (EPBCS)</a>.&nbsp; If you were unable to attend Kscope, the presentation was recorded and is available to ODTUG&nbsp;<a href="https://www.odtug.com/join-odtug" target="_blank">members</a>.&nbsp; I will also be doing a webinar on Aug 21 at 12pm EDT; you can register <a href="https://register.gotowebinar.com/register/4020109571931350274" target="_blank">here</a>.&nbsp; I am also presenting this topic at <a href="https://oracle.rainfocus.com/widget/oracle/oow18/catalogoow18?=undefined&amp;source=%3Aad%3Apas%3Ago%3Adg%3ARC_WWMK180426P00011%3AAgenda1&amp;search=cas1458" target="_blank">OOW</a> in October.<br /><br />For those of you who prefer to get your content from a blog, I am planning a multi-part blog series on the ODTUG PBCS implementation.&nbsp; I plan to write about the technical approach to building the PBCS app and touch on various topics such as:<br /><ul><li>How to create a new EPBCS application</li><li>How to customize PBCS settings to personalize the look and feel of the application</li><li>How to build and update dimensions using both the web interface and Smart View</li><li>How to import data</li><li>How to build forms, dashboard, and financial reports</li></ul><br />I hope you will follow this series to see all the interesting content, you can follow this blog to get notifications when the posts are added.<br /><br />Thanks for reading and please check back soon.<br /><br /><br />P.S. 
- here are a couple of other references on this subject to take a look at if you are interested.<br /><br /><a href="https://www.odtug.com/p/bl/et/blogaid=774" target="_blank">ODTUG PBCS press release</a><br /><a href="https://twitter.com/OracleDevs/status/1006952224453373952" target="_blank">DevLive interview with me discussing ODTUG implementation of EPBCS</a><br /><br /><br /> Gary Crisci, Oracle Ace tag:blogger.com,1999:blog-5844205335075765519.post-8342381600660860768 Thu Jun 28 2018 15:06:00 GMT-0400 (EDT) Oracle ACE Program Welcomes US-Analytics' Senior BI Architect https://www.us-analytics.com/hyperionblog/becky-wagner-becomes-oracle-ace-associate <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/becky-wagner-becomes-oracle-ace-associate" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/O_ACELogo_clr.bmp?t=1533950236061" alt="O_ACELogo_clr" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <div class="row"> <div class="col-sm-10 col-sm-offset-1"> <p>US-Analytics' Senior BI Architect was welcomed last week into the worldwide Oracle ACE program, a community of experts recognized for their skills and insights shared throughout the Oracle user community.</p> </div> </div> <div class="row"> <div class="col-sm-10 col-sm-offset-1"> <p><span class="xn-person"><span>Becky Wagner</span></span>, who's been working with Oracle products since 2011, was awarded the Oracle ACE Associate title at a dinner during Kscope18, an annual conference held by Oracle Developer Tools Users Group (ODTUG). Wagner was nominated for the role in<span>&nbsp;</span><span class="xn-chron">December 2017</span><span>&nbsp;</span>by Oracle ACE Director<span>&nbsp;</span><span class="xn-person"><span>Christian Berg</span></span><span>&nbsp;</span>and awarded the title for her involvement in knowledge sharing across several platforms.</p> </div> </div> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fbecky-wagner-becomes-oracle-ace-associate&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/becky-wagner-becomes-oracle-ace-associate Thu Jun 28 2018 15:00:32 GMT-0400 (EDT) My book on Oracle R Enterprise translated into Chinese http://www.oralytics.com/2018/06/my-book-on-oracle-r-enterprise.html <p>A couple of days ago the post man knocked on my door with a package. 
I hadn’t ordered anything, so it was puzzling what it might be.</p> <p>When I opened the package I found 3 copies of a book in Chinese.</p> <p>It was one of my books!</p> <p>One of my books was translated into Chinese!</p> <p>What a surprise, as I wasn’t aware this was happening.</p> <div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-qkHVmWneuEY/WzUONkssIdI/AAAAAAAAAdk/-916tU8mxMg5wMCY3JSCE3MDp2jEMvVFwCLcBGAs/s1600/ORE%2BBook%2BChinese%2Bbook%2Bcover.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-qkHVmWneuEY/WzUONkssIdI/AAAAAAAAAdk/-916tU8mxMg5wMCY3JSCE3MDp2jEMvVFwCLcBGAs/s320/ORE%2BBook%2BChinese%2Bbook%2Bcover.jpg" width="400" height="500" data-original-width="1125" data-original-height="1600" /></a></div> <p>At this time I’m not sure where you can purchase the book, but I’ll update this blog post when I find out.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-6280157345145675715 Thu Jun 28 2018 12:34:00 GMT-0400 (EDT) Responsive `Google Data Studio` Inline Frame Embed Code https://blog.redpillanalytics.com/responsive-google-data-studio-inline-frame-embed-code-37037b3f4a78?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Cc8exApLG6iB4ftp_l_qBQ.jpeg" /></figure><p>Google Data Studio (<a href="https://datastudio.google.com/">https://datastudio.google.com/</a>) is a pretty powerful and yet easy-to-get-started-with tool for visualizing data from a variety of sources (everything from CSV files to large-scale databases). It also lends itself to presenting those graphs in other places on the web, and thankfully there is an embed option using the ever popular inline frame (i-frame).</p><p>The issue is that the embedded graphs are not responsive, which is a problem both for mobile phone/tablet screens and for websites that might be optimized or designed around responsive elements, such as grids that auto scale to page width.</p><p>So I used an old CSS trick to fix the aspect ratio to the same aspect ratio as the Data Studio report in question; in this case 16×9. But the aspect ratio can easily be adjusted by modifying the padding-top percentage in the inner frame class below. Simply take the desired aspect ratio as an inverse fraction (9/16 = .5625) and convert it to a percentage. This trick is well known, and you can read more about how it works elsewhere on the web; just google ‘iframe fixed aspect ratio’ or something to that effect. Some CSS frameworks even have classes pre-built to add this responsive behavior to i-frames.</p><p>There is still an issue with this, though: the iframe inner container div will keep its aspect ratio, but it will happily overflow its bounds and scale indefinitely in the vertical/horizontal direction to keep that aspect ratio. So by adding a second wrapping/container div that has a max width, you can essentially create a ceiling after which the i-frame will stop scaling, creating a responsive frame that scales down when it needs to, retains its aspect ratio, but won’t keep scaling up infinitely. The second container is really the secret here!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*MpD3v_pHi849Hcnv" /></figure><p>The code is below, feel free to play around with it!
I hope this helps someone out there trying to embed a google data studio report into another page and needs it to be responsive.</p><pre>&lt;div class=&quot;responsive-inline-frame&quot;&gt;<br> &lt;div class=&quot;responsive-inline-frame-inner&quot;&gt;<br> &lt;!--iframe here--&gt;<br> &lt;/div&gt;<br>&lt;/div&gt;</pre><pre>.responsive-inline-frame {<br> max-width: 800px; <em>/* 800x450 (16x9 aspect ratio) */<br> </em>margin: auto;<br>}</pre><pre>.responsive-inline-frame-inner {<br> width: 100%;<br> position: relative;<br> overflow: hidden;<br> padding-top: 56.25%;<br> height: 0;<br>}</pre><pre>.responsive-inline-frame-inner iframe {<br> position: absolute;<br> top: 0;<br> left: 0;<br> width: 100%;<br> height: 100%;<br>}</pre><p><em>Originally published at </em><a href="https://emilyplusplus.wordpress.com/2018/06/27/responsive-google-data-studio-inline-frame-embed-code/"><em>emilyplusplus.wordpress.com</em></a><em> on June 27, 2018.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=37037b3f4a78" width="1" height="1"><hr><p><a href="https://blog.redpillanalytics.com/responsive-google-data-studio-inline-frame-embed-code-37037b3f4a78">Responsive `Google Data Studio` Inline Frame Embed Code</a> was originally published in <a href="https://blog.redpillanalytics.com">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Emily Carlsen https://medium.com/p/37037b3f4a78 Wed Jun 27 2018 18:50:17 GMT-0400 (EDT) Stewart Bryson for the Win https://blog.redpillanalytics.com/stewart-bryson-for-the-win-ca22de5565a7?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Bn23NisLO6thoReAEFM6cA.jpeg" /><figcaption>Photo Cred: <a href="https://twitter.com/mRainey"><strong>Michael Rainey</strong>‏ @mRainey</a></figcaption></figure><p>ODTUG Kscope has two new reasons for being the best conference in the Oracle community. This year, Red Pill Analytics Co-founder and CEO Stewart Bryson was recognized for his accomplishments twice — with the Innovation Award, and the Best Speaker Award in the Big Data &amp; Data Warehousing Track. These kudos added to a week of great food, fantastic people and amazing content that make Kscope one of our favorite events.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xqWtFPOARpcD8l0Jtw_r3Q.jpeg" /><figcaption>ODTUG Board Members, Karen Cannell &amp; Natalie Delemar Presenting the Innovation Award to Stewart Bryson</figcaption></figure><p>At the General Session on Monday, Stewart Bryson was awarded ODTUG’s third annual Innovation Award. This award is given to an ODTUG Community Member who develops a creative and innovative idea using or benefiting an Oracle Development Tool. (The Innovation what? Read more about the award <a href="https://www.odtug.com/p/bl/et/blogid=1&amp;blogaid=781">here</a>.) Stewart was selected for his development of Red Pill Analytics’ free product Checkmate. If you’re new to Checkmate, then here’s a high-level description as well as a link to get <a href="http://redpillanalytics.com/checkmate/">more information</a>:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DzsG1Jjn4lz9yx7Ll5Mnkg.png" /></figure><p>The motivation for building Checkmate was to support basic software development lifecycles and workflows with Oracle Analytics development. This meant supporting real continuous integration and continuous delivery pipelines. 
Checkmate lets developers develop and helps accomplish this by; increasing productivity using full workstations, adding source control to insulate against human error, and using source control to manage integrations. You can download Checkmate for free and learn more about it <a href="http://redpillanalytics.com/checkmate/">here</a>.</p><h3>And the next award goes to…</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ZhqYtt8v6IW8rA0V_rkhxg.jpeg" /><figcaption>ODTUG’s President Natalie Delemar Presenting the Best Speaker Award to Stewart Bryson</figcaption></figure><p>Kscope18 always culminates with a closing session and awards are presented for the best speakers in each track based on attendee feedback.</p><p>Stewart was honored to take home the award for the Best Speaker in the Big Data &amp; Data Warehousing Track for his presentation <em>What We Learned Building Analytics for Google</em>. This presentation describes the architecture, design decisions, and pure joy we experienced using the Google Cloud Platform for our customer Google Play Marketing &amp; Communications. Stewart discusses how BigQuery, App Engine, PubSub, Cloud Functions, and Data Studio were integrated to build a complete marketing analytics platform measuring user engagement from sources including Twitter, LinkedIn, YouTube, Google Analytics, Google+, and AdWords</p><p>Thank you ODTUG for another successful and eventful Kscope experience. Its an unparalleled event in terms of content and experience. We look forward to 2019! Check out our <a href="https://www.facebook.com/media/set/?set=a.2117213148308603.1073741847.867203309976266&amp;type=1&amp;l=bd7fbb85df">photo album</a> from the week on our <a href="https://www.facebook.com/redpillanalytics/">Facebook page</a>. You can see this event and others we are involved in by following us on social media or on our <a href="http://redpillanalytics.com/events/">Events page</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ca22de5565a7" width="1" height="1"><hr><p><a href="https://blog.redpillanalytics.com/stewart-bryson-for-the-win-ca22de5565a7">Stewart Bryson for the Win</a> was originally published in <a href="https://blog.redpillanalytics.com">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Lauren Prezby https://medium.com/p/ca22de5565a7 Mon Jun 25 2018 15:20:53 GMT-0400 (EDT) DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli https://www.rittmanmead.com/blog/2018/06/devops-in-oac-scripting-oracle-cloud-instance-management-with-psm-cli/ <div class="kg-card-markdown"><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_14-20-031-1.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"><p>This summer we unselfish Italians decided to not participate to the World Cup to give another country the opportunity to win (good luck with that England!). 
This decision, which I strongly support, gives me lot of time for blogging!<br> <img src="https://media.giphy.com/media/1gStKEoiGRzJBekJpd/giphy.gif" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>As already written, two weeks ago while in Orlando for <a href="https://www.rittmanmead.com/blog/2018/06/kscope18-its-a-wrap/">Kscope18</a>, I presented a session about <a href="https://speakerdeck.com/ftisiot/devops-and-obiee-do-it-before-its-too-late">DevOps and OBIEE</a> focusing on how to properly source control, promote and test for regression any component of the infrastructure.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-22_10-02-43.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <h1 id="developmentisolation">Development Isolation</h1> <p>One key aspect of DevOps is providing the <strong>Development Isolation</strong>: a way of allowing multiple development streams to work independently and merging the outcome of the process into the main environment only after this has been tested and validated. This is needed to avoid the standard situation where code promotions are blocked due to different working streams not being in sync: forcing a team to postpone a code release just because another team doesn't have the UAT OK is just an example of non-isolated development platforms.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-22_11-01-07-1.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>We have been discussing development isolation topic <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">in the past</a> focusing mainly on concurrent repository development and how to integrate it with versioning tools like Git and SVN. The concurrent online editing option is not viable since multiple developers are modifying the same artifact (RPD) without a way of testing for regression the changes or to verifying that what has been done is correct before merging the changes in the RPD.<br> Alternative solutions of using MUDE (default multi-user development method provided by the Admintool) or pure offline RPD work encounter the same problems defined above: no feature or regression testing available before merging the RPD in the main development environment.</p> <p>Different RPD development techniques solve only partially the problem: almost any OAC/OBIEE development consist at least in both <strong>RPD</strong> and <strong>catalog</strong> work (creation of analysis/dashboards/VA projects) we need an approach which provides Development Isolation at both levels. The solution, in order to properly build a DevOps framework around OAC/OBIEE, it's to provide <strong>isolated feature-related full OBIEE instances</strong> where the RPD can be edited in online mode, the catalog work can be done independently, and the overall result can be tested and validated before being merged into the common development environment.</p> <h1 id="featurerelatedinstances">Feature-Related Instances</h1> <p>The feature instances, as described above, need to be full OAC/OBIEE development instances where only a feature (or a small set) is worked at the time in order to give the agility to developers to release the code as soon as it's ready and tested. 
In the on-premises world this can <em>&quot;easily&quot;</em> be achieved by providing a number of dedicated Virtual Machines or, more in line with the recent trends, an automated instance provisioning with Docker using a <a href="https://gianniceresa.com/2017/09/obiee-12c-docker-from-scratch/">template image</a> like the one built by our previous colleague Gianni Ceresa.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-22_10-56-23-1.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>However, when we think about Oracle Analytics Cloud (OAC), we seem to have two problems:</p> <ul> <li>There is a <strong>cost associated with every instance</strong>, thus minimizing the number of instances and the uptime is necessary</li> <li>The OAC provisioning interface is <strong>point and click</strong>, thus automating the instance management seems impossible</li> </ul> <p>The overall OAC instance cost can be mitigated by the <strong>Bring Your Own License (BYOL)</strong> licensing method which allows customers to migrate on-premises licenses to the cloud and have discounted prices on the hourly/monthly instance cost (more details <a href="https://cloud.oracle.com/en_US/oac/pricing">here</a>). However, since the target is to minimize the cost thus the # of instances and the uptime, we need to find a way to do so that doesn't rely on a human and a point and click interface. Luckily the <strong>PaaS Service Manager Command Line Interface</strong> (<a href="https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html">PSM Cli</a>) allows us to solve this problem by providing a scriptable way of creating, starting and stopping instances.</p> <h1 id="paasservicemanagercommandlineinterface">PaaS Service Manager Command Line Interface</h1> <p>PSMCLI is a command line interface acting as a wrapper over the PaaS REST APIs. 
Its usage is not limited to OAC, the same interface can be used to create and manage instances of the Oracle's Database Cloud Service or Java Cloud Services amongst the others.<br> When talking about OAC please keep in mind that, as of now, PSM Cli works only with the non-autonomous version but I believe the Autonomous support will be added soon.</p> <h2 id="installingandconfiguringpsmcli">Installing and Configuring PSM Cli</h2> <p>PSMCLI has two prerequisites before it can be installed:</p> <ul> <li><strong><a href="https://curl.haxx.se">cURL</a></strong> - a command line utility to transfer data with URLs</li> <li><strong>Python 3.3</strong> or later</li> </ul> <p>Once both prerequisites are installed PSM can easily be downloaded with the following cURL call</p> <pre><code>curl -X GET -u &lt;USER&gt;:&lt;PWD&gt; -H X-ID-TENANT-NAME:&lt;IDENTITY_DOMAIN&gt; https://&lt;REST_SERVER&gt;/paas/core/api/v1.1/cli/&lt;IDENTITY_DOMAIN&gt;/client -o psmcli.zip </code></pre> <p>Where</p> <ul> <li><strong>&lt;USER&gt;</strong> and <strong>&lt;PWD&gt;</strong> are the credentials</li> <li><strong>&lt;IDENTITY_DOMAIN&gt;</strong> is the Identity Domain ID specified during the account creation</li> <li><strong>&lt;REST_SERVER&gt;</strong> is the REST API server name which is:</li> <li><em>psm.us.oraclecloud.com</em> if you are using a US datacenter</li> <li><em>psm.aucom.oraclecloud.com</em> if you are in the AuCom region</li> <li><em>psm.europe.oraclecloud.com</em> otherwise</li> </ul> <p>Next step is to install PSM as a Python package with</p> <pre><code>pip3 install -U psmcli.zip </code></pre> <p>After the installation is time for configuration</p> <pre><code>psm setup </code></pre> <p>The configuration command will request the following information:</p> <ul> <li>Oracle Cloud <strong>Username</strong> and <strong>Password</strong></li> <li><strong>Identity Domain</strong></li> <li><strong>Region</strong>, this need to be set to</li> <li><em>emea</em> if the REST_SERVER mentioned above contains emea</li> <li><em>aucom</em> if the REST_SERVER mentioned above contains aucom</li> <li><em>us</em> otherwise</li> <li><strong>Output format</strong>: the choice is between <em>short</em>, <em>json</em> and <em>html</em></li> <li><strong>OAuth</strong>: the communication between the CLI and the REST API can use basic authentication (flag <em>n</em>) or OAuth (flag <em>y</em>). If OAuth is chosen then ClientID, Secret and Access Token need to be specified</li> </ul> <p>A JSON profile file can also be used to provide the same information mentioned above. 
The structure of the file is the following</p> <pre><code>{ &quot;username&quot;:&quot;&lt;USER&gt;&quot;, &quot;password&quot;:&quot;&lt;PASSWORD&gt;&quot;, &quot;identityDomain&quot;:&quot;&lt;IDENTITY_DOMAIN&gt;&quot;, &quot;region&quot;:&quot;&lt;REGION&gt;&quot;, &quot;outputFormat&quot;:&quot;&lt;OUTPUT_FORMAT&gt;&quot;, &quot;oAuth&quot;:{ &quot;clientId&quot;:&quot;&quot;, &quot;clientSecret&quot;:&quot;&quot;, &quot;accessTokenServer&quot;:&quot;&quot; } } </code></pre> <p>If the profile is stored in a file <code>profile.json</code> the PSM configuration can be achieved by just executing</p> <pre><code>psm setup -c profile.json </code></pre> <p>One quick note: the identity domain Id, shown in the Oracle Cloud header, isn't working if it's not the <strong>original name</strong> (name at the time of the creation).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_11-58-53.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>In order to get the correct identity domain Id to use, check in an Oracle Cloud instance already created (e.g. a database one) and check the Details, you'll see the original identity domain listed there (credits to <a href="http://labvmr01n01.labo.internal.stepi.net/2018/01/27/cloud-portal-accessing-trial-promotional-account/">Pieter Van Puymbroeck</a>).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_11-56-29-1.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <h1 id="workingwithpsmcli">Working With PSM Cli</h1> <p>Once the PSM has been correctly configured it's time to start checking what options are available, for a detailed list of the options check <a href="https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-commands.html">PSM documentation</a>.</p> <p>The PSM commands are product related, so each command is in the form:</p> <pre><code>psm &lt;product&gt; &lt;command&gt; &lt;parameters&gt; </code></pre> <p>Where</p> <ul> <li><strong>product</strong> is the Oracle cloud product e.g. <code>dbcs</code>, <code>analytics</code>, <code>BigDataAppliance</code>, for a complete list use <code>psm help</code></li> <li><strong>command</strong> is the action to be executed against the product e.g. 
<code>services</code>, <code>stop</code>, <code>start</code>, <code>create-service</code></li> <li><strong>parameters</strong> is the list of parameters to pass depending on the command executed</li> </ul> <p>The first step is to check what instances I already created, I can do so for the database by executing</p> <pre><code>psm dbcs services </code></pre> <p>which, as expected, will list all my active instances</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-19-49.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>I can then start and stop it using:</p> <pre><code>psm dbcs start/stop/restart -s &lt;INSTANCE_NAME&gt; </code></pre> <p>Which in my example provides the Id of the Job assigned to the <code>stop</code> operation.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-33-03.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>When I check the status via the <code>service</code> command I get <code>Maintenance</code> like in the web UI.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-35-58.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>The same applies to the <code>start</code> and <code>restart</code> operation. Please keep in mind that all the calls are asynchronous -&gt; the command will call the related REST API and then return the associated Job ID without waiting for the command to be finished. The status of a job can be checked with:</p> <pre><code>psm dbcs operation-status -j &lt;JOB_ID&gt; </code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-53-01.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>The same operations described above are available on OAC with the same commands by simply changing the product from <code>dbcs</code> to <code>analytics</code> like:</p> <pre><code>psm analytics start/stop/restart -s &lt;INSTANCE_NAME&gt; </code></pre> <p>On top of the basic operation, PSM Cli allows also the following:</p> <ul> <li><strong>Service Instance</strong>: start/stop/restart, instance creation-deletion</li> <li><strong>Access Control</strong>: lists, creates, deletes, enables and disables access rules for a service.</li> <li><strong>Scaling</strong>: changes the computer shape of an instance and allows scaling up/down.</li> <li><strong>Storage</strong>: extends the storage associated to OAC</li> <li><strong>Backup Configuration</strong>: updates/shows the backup configurations</li> <li><strong>Backups</strong>: lists, creates, deletes backups of the instance</li> <li><strong>Restore</strong>: restores a backup giving detailed information about it and the history of Restores</li> <li><strong>Patches</strong>: allows patching, rollbacking, doing pre-checks, and retrieving patching history</li> </ul> <h2 id="creatinganoacinstance">Creating an OAC Instance</h2> <p>So far we discussed the maintenance on already created instances with <code>start</code>/<code>stop</code>/<code>restart</code> commands, but PSM Cli allows also the creation of an instance via command line. 
The call is pretty simple:</p> <pre><code>psm analytics create-service -c &lt;CONFIG_FILE&gt; -of &lt;OUTPUT_FORMAT&gt; </code></pre> <p>Where</p> <ul> <li><strong>CONFIG_FILE</strong>: is the file defining all OAC instance configurations</li> <li><strong>OUTPUT_FORMAT</strong>: is the desired output format between short, json and html</li> </ul> <p>The question now is:</p> <blockquote> <p>How do I create a Config File for OAC?</p> </blockquote> <p>The documentation doesn't provide any help on this, but we can use the same approach as for on-premises OBIEE and response file: create the first instance with the Web-UI, save the payload for future use and change parameters when necessary.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_14-20-03.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>On the <strong>Confirm</strong> screen, there is the option to Download the REST payload in JSON format</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_14-14-20.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>With the resulting json Config File being</p> <pre><code>{ &quot;edition&quot;: &quot;&lt;EDITION&gt;&quot;, &quot;vmPublicKeyText&quot;: &quot;&lt;SSH_TOKEN&gt;&quot;, &quot;enableNotification&quot;: &quot;true&quot;, &quot;notificationEmail&quot;: &quot;&lt;EMAIL&gt;&quot;, &quot;serviceVersion&quot;: &quot;&lt;VERSION&gt;&quot;, &quot;isBYOL&quot;: &quot;false&quot;, &quot;components&quot;: { &quot;BI&quot;: { &quot;adminUserPassword&quot;: &quot;&lt;ADMINPWD&gt;&quot;, &quot;adminUserName&quot;: &quot;&lt;ADMINUSER&gt;&quot;, &quot;analyticsStoragePassword&quot;: &quot;&lt;PWD&gt;&quot;, &quot;shape&quot;: &quot;oc3&quot;, &quot;createAnalyticsStorageContainer&quot;: &quot;true&quot;, &quot;profile_essbase&quot;: &quot;false&quot;, &quot;dbcsPassword&quot;: &quot;&lt;DBCSPWD&gt;&quot;, &quot;totalAnalyticsStorage&quot;: &quot;280.0&quot;, &quot;profile_bi&quot;: &quot;true&quot;, &quot;profile_dv_forced&quot;: &quot;true&quot;, &quot;analyticsStorageUser&quot;: &quot;&lt;EMAIL&gt;&quot;, &quot;dbcsUserName&quot;: &quot;&lt;DBUSER&gt;&quot;, &quot;dbcsPDBName&quot;: &quot;&lt;PDBNAME&gt;&quot;, &quot;dbcsName&quot;: &quot;&lt;DBCSNAME&gt;&quot;, &quot;idcs_enabled&quot;: &quot;false&quot;, &quot;analyticsStorageContainerURL&quot;: &quot;&lt;STORAGEURL&gt;&quot;, &quot;publicStorageEnabled&quot;: &quot;false&quot;, &quot;usableAnalyticsStorage&quot;: &quot;180&quot; } }, &quot;serviceLevel&quot;: &quot;PAAS&quot;, &quot;meteringFrequency&quot;: &quot;HOURLY&quot;, &quot;subscriptionId&quot;: &quot;&lt;SUBSCRIPTIONID&gt;&quot;, &quot;serviceName&quot;: &quot;&lt;SERVICENAME&gt;&quot; } </code></pre> <p>This file can be stored and the parameters changed as necessary to create new OAC instances with the command:</p> <pre><code>psm analytics create-service -c &lt;JSON_PAYLOAD_FILE&gt; -of short/json/html </code></pre> <p>As shown previously, the result of the call is a <code>Job Id</code> that can be monitored with</p> <pre><code>psm analytics operation-status -j &lt;JOB_ID&gt; </code></pre> <p>Once the Job is finished successfully, the OAC instance is ready to be used. 
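</p><p>Putting the last few calls together, the create-and-wait cycle can be scripted end to end. The following Python sketch is an illustration only: the psm invocations are the ones shown above, but the helper names, the payload file name (oac_payload.json), and the way the job id and status text are read back are assumptions of mine; check the actual output of your psm version before relying on it.</p><pre><code>import subprocess
import time

def psm(*args):
    """Run a psm command and return its raw output."""
    return subprocess.run(
        ["psm", *args], capture_output=True, text=True, check=True
    ).stdout

def create_oac_instance(payload_file="oac_payload.json"):
    """Submit the create-service request and return the command output."""
    return psm("analytics", "create-service", "-c", payload_file, "-of", "short")

def wait_for_job(job_id, poll_seconds=60):
    """Poll operation-status until the job stops reporting itself as running."""
    while True:
        status = psm("analytics", "operation-status", "-j", job_id)
        print(status.strip())
        if "running" not in status.lower():  # completion check is an assumption
            return status
        time.sleep(poll_seconds)

# Usage: print(create_oac_instance()), note the Job Id in the output,
# then wait_for_job("123456") before logging into the new instance.
</code></pre><p>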
If at a certain point the OAC instance is not needed anymore, it can be deleted via:</p> <pre><code>psm analytics delete-service -s &lt;SERVICE_NAME&gt; -n &lt;DBA_NAME&gt; -p &lt;DBA_PWD&gt; </code></pre> <p>Where</p> <ul> <li><strong>SERVICE_NAME</strong> is the OAC instance name</li> <li><strong>DBA_NAME</strong> and <strong>DBA_PWD</strong> are the DBA credentials of the database where the OAC schemas reside</li> </ul> <h1 id="summary">Summary</h1> <p>Worried about providing development isolation in OAC while keeping the costs down? Not anymore! With PSM Cli you now have a way of creating instances on demand and starting, stopping, and scaling them up or down with a command line tool that is easily integrated with automation tools like Jenkins.</p> <p>Create OAC instances automatically only when features need to be developed or tested, stop &amp; start the instances based on your workforce timetables, and take full advantage of the cloud while minimizing the associated cost by using PSM Cli!</p> <p>One last note: for a full DevOps OAC implementation, PSM Cli is not sufficient; tasks like automated regression testing, code versioning, and promotion can't be managed directly with PSM Cli but require external toolsets like the <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">Rittman Mead BI Developer Toolkit</a>. If you are interested in a full DevOps implementation on OAC and in understanding how PSM Cli can be used in conjunction with the Rittman Mead BI Developer Toolkit, don't hesitate to <a href="mailto:jon+psm@rittmanmead.com">contact us</a>!</p> </div> Francesco Tisiot 5b5a56f45000960018e69b43 Mon Jun 25 2018 11:18:05 GMT-0400 (EDT)
This is needed to avoid the standard situation where code promotions are blocked due to different working streams not being in sync: forcing a team to postpone a code release just because another team doesn't have the UAT OK is just an example of non-isolated development platforms.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-22_11-01-07-1.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>We have been discussing development isolation topic <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">in the past</a> focusing mainly on concurrent repository development and how to integrate it with versioning tools like Git and SVN. The concurrent online editing option is not viable since multiple developers are modifying the same artifact (RPD) without a way of testing for regression the changes or to verifying that what has been done is correct before merging the changes in the RPD. <br> Alternative solutions of using MUDE (default multi-user development method provided by the Admintool) or pure offline RPD work encounter the same problems defined above: no feature or regression testing available before merging the RPD in the main development environment. </p> <p>Different RPD development techniques solve only partially the problem: almost any OAC/OBIEE development consist at least in both <strong>RPD</strong> and <strong>catalog</strong> work (creation of analysis/dashboards/VA projects) we need an approach which provides Development Isolation at both levels. The solution, in order to properly build a DevOps framework around OAC/OBIEE, it's to provide <strong>isolated feature-related full OBIEE instances</strong> where the RPD can be edited in online mode, the catalog work can be done independently, and the overall result can be tested and validated before being merged into the common development environment. </p> <h1 id="featurerelatedinstances">Feature-Related Instances</h1> <p>The feature instances, as described above, need to be full OAC/OBIEE development instances where only a feature (or a small set) is worked at the time in order to give the agility to developers to release the code as soon as it's ready and tested. In the on-premises world this can <em>"easily"</em> be achieved by providing a number of dedicated Virtual Machines or, more in line with the recent trends, an automated instance provisioning with Docker using a <a href="https://gianniceresa.com/2017/09/obiee-12c-docker-from-scratch/">template image</a> like the one built by our previous colleague Gianni Ceresa.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-22_10-56-23-1.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>However, when we think about Oracle Analytics Cloud (OAC), we seem to have two problems:</p> <ul> <li>There is a <strong>cost associated with every instance</strong>, thus minimizing the number of instances and the uptime is necessary</li> <li>The OAC provisioning interface is <strong>point and click</strong>, thus automating the instance management seems impossible</li> </ul> <p>The overall OAC instance cost can be mitigated by the <strong>Bring Your Own License (BYOL)</strong> licensing method which allows customers to migrate on-premises licenses to the cloud and have discounted prices on the hourly/monthly instance cost (more details <a href="https://cloud.oracle.com/en_US/oac/pricing">here</a>). 
However, since the target is to minimize the cost thus the # of instances and the uptime, we need to find a way to do so that doesn't rely on a human and a point and click interface. Luckily the <strong>PaaS Service Manager Command Line Interface</strong> (<a href="https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html">PSM Cli</a>) allows us to solve this problem by providing a scriptable way of creating, starting and stopping instances.</p> <h1 id="paasservicemanagercommandlineinterface">PaaS Service Manager Command Line Interface</h1> <p>PSMCLI is a command line interface acting as a wrapper over the PaaS REST APIs. Its usage is not limited to OAC, the same interface can be used to create and manage instances of the Oracle's Database Cloud Service or Java Cloud Services amongst the others. <br> When talking about OAC please keep in mind that, as of now, PSM Cli works only with the non-autonomous version but I believe the Autonomous support will be added soon.</p> <h2 id="installingandconfiguringpsmcli">Installing and Configuring PSM Cli</h2> <p>PSMCLI has two prerequisites before it can be installed:</p> <ul> <li><strong><a href="https://curl.haxx.se">cURL</a></strong> - a command line utility to transfer data with URLs</li> <li><strong>Python 3.3</strong> or later</li> </ul> <p>Once both prerequisites are installed PSM can easily be downloaded with the following cURL call</p> <pre><code>curl -X GET -u &lt;USER&gt;:&lt;PWD&gt; -H X-ID-TENANT-NAME:&lt;IDENTITY_DOMAIN&gt; https://&lt;REST_SERVER&gt;/paas/core/api/v1.1/cli/&lt;IDENTITY_DOMAIN&gt;/client -o psmcli.zip </code></pre> <p>Where</p> <ul> <li><strong>&#60;USER&#62;</strong> and <strong>&#60;PWD&#62;</strong> are the credentials</li> <li><strong>&#60;IDENTITY_DOMAIN&#62;</strong> is the Identity Domain ID specified during the account creation</li> <li><strong>&#60;REST_SERVER&#62;</strong> is the REST API server name which is: <ul><li><em>psm.us.oraclecloud.com</em> if you are using a US datacenter</li> <li><em>psm.aucom.oraclecloud.com</em> if you are in the AuCom region</li> <li><em>psm.europe.oraclecloud.com</em> otherwise</li></ul></li> </ul> <p>Next step is to install PSM as a Python package with</p> <pre><code>pip3 install -U psmcli.zip </code></pre> <p>After the installation is time for configuration</p> <pre><code>psm setup </code></pre> <p>The configuration command will request the following information:</p> <ul> <li>Oracle Cloud <strong>Username</strong> and <strong>Password</strong></li> <li><strong>Identity Domain</strong></li> <li><strong>Region</strong>, this need to be set to <ul><li><em>emea</em> if the REST<em>SERVER mentioned above contains emea </em></li> <li><em>aucom</em> if the RESTSERVER mentioned above contains aucom</li> <li><em>us</em> otherwise</li></ul></li> <li><strong>Output format</strong>: the choice is between <em>short</em>, <em>json</em> and <em>html</em></li> <li><strong>OAuth</strong>: the communication between the CLI and the REST API can use basic authentication (flag <em>n</em>) or OAuth (flag <em>y</em>). If OAuth is chosen then ClientID, Secret and Access Token need to be specified</li> </ul> <p>A JSON profile file can also be used to provide the same information mentioned above. 
The structure of the file is the following:</p> <pre><code>{
  "username":"&lt;USER&gt;",
  "password":"&lt;PASSWORD&gt;",
  "identityDomain":"&lt;IDENTITY_DOMAIN&gt;",
  "region":"&lt;REGION&gt;",
  "outputFormat":"&lt;OUTPUT_FORMAT&gt;",
  "oAuth":{
    "clientId":"",
    "clientSecret":"",
    "accessTokenServer":""
  }
}
</code></pre> <p>If the profile is stored in a file <code>profile.json</code>, the PSM configuration can be achieved by just executing</p> <pre><code>psm setup -c profile.json </code></pre> <p>One quick note: the identity domain Id shown in the Oracle Cloud header doesn't work if it's not the <strong>original name</strong> (the name at the time of creation). </p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_11-58-53.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>In order to get the correct identity domain Id to use, open an Oracle Cloud instance already created (e.g. a database one) and check its Details: you'll see the original identity domain listed there (credits to <a href="http://labvmr01n01.labo.internal.stepi.net/2018/01/27/cloud-portal-accessing-trial-promotional-account/">Pieter Van Puymbroeck</a>).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_11-56-29-1.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <h1 id="workingwithpsmcli">Working With PSM Cli</h1> <p>Once PSM has been correctly configured it's time to start checking what options are available; for a detailed list of the options check the <a href="https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/analytics-commands.html">PSM documentation</a>. </p> <p>The PSM commands are product related, so each command is in the form:</p> <pre><code>psm &lt;product&gt; &lt;command&gt; &lt;parameters&gt; </code></pre> <p>Where</p> <ul> <li><strong>product</strong> is the Oracle cloud product, e.g. <code>dbcs</code>, <code>analytics</code>, <code>BigDataAppliance</code>; for a complete list use <code>psm help</code></li> <li><strong>command</strong> is the action to be executed against the product, e.g. <code>services</code>, <code>stop</code>, <code>start</code>, <code>create-service</code></li> <li><strong>parameters</strong> is the list of parameters to pass, depending on the command executed</li> </ul> <p>The first step is to check which instances I have already created; I can do so for the database by executing </p> <pre><code>psm dbcs services </code></pre> <p>which, as expected, will list all my active instances</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-19-49.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>I can then start and stop an instance using:</p> <pre><code>psm dbcs start/stop/restart -s &lt;INSTANCE_NAME&gt; </code></pre> <p>In my example this returns the Id of the Job assigned to the <code>stop</code> operation.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-33-03.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>When I check the status via the <code>services</code> command I get <code>Maintenance</code>, just like in the web UI.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-35-58.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>The same applies to the <code>start</code> and <code>restart</code> operations.</p>
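<p>Since these are plain shell commands they are easy to script. The snippet below is a minimal sketch (the instance names are made up, and Python is used only because it's already a PSM Cli prerequisite) that stops a list of DBCS instances with the same <code>psm dbcs stop</code> call shown above:</p> <pre><code>import subprocess

# Hypothetical instance names: replace with the services returned by 'psm dbcs services'
INSTANCES = ["MYDB1", "MYDB2"]

for name in INSTANCES:
    # Same call as above: psm dbcs stop -s &lt;INSTANCE_NAME&gt;
    result = subprocess.run(
        ["psm", "dbcs", "stop", "-s", name],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        universal_newlines=True)
    # The output contains the Id of the Job assigned to the stop operation
    print(name, result.stdout)
</code></pre>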
<p>Please keep in mind that all the calls are asynchronous: the command will call the related REST API and then return the associated Job ID without waiting for the operation to finish. The status of a job can be checked with:</p> <pre><code>psm dbcs operation-status -j &lt;JOB_ID&gt; </code></pre> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_12-53-01.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>The same operations described above are available on OAC with the same commands by simply changing the product from <code>dbcs</code> to <code>analytics</code>, like:</p> <pre><code>psm analytics start/stop/restart -s &lt;INSTANCE_NAME&gt; </code></pre> <p>On top of the basic operations, PSM Cli also allows the following:</p> <ul> <li><strong>Service Instance</strong>: start/stop/restart, instance creation/deletion</li> <li><strong>Access Control</strong>: lists, creates, deletes, enables and disables access rules for a service.</li> <li><strong>Scaling</strong>: changes the compute shape of an instance and allows scaling up/down.</li> <li><strong>Storage</strong>: extends the storage associated with OAC</li> <li><strong>Backup Configuration</strong>: updates/shows the backup configurations</li> <li><strong>Backups</strong>: lists, creates, deletes backups of the instance</li> <li><strong>Restore</strong>: restores a backup, giving detailed information about it and the history of restores</li> <li><strong>Patches</strong>: allows patching, rolling back, doing pre-checks, and retrieving patching history</li> </ul>
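<p>Since every one of these calls is asynchronous and just hands back a Job ID, in practice I wrap the status check in a small polling helper. The following is a minimal sketch rather than production code: the "still running" test is an assumption about the status text, which depends on the output format chosen during <code>psm setup</code>, so adapt it to whatever <code>operation-status</code> returns in your environment.</p> <pre><code>import subprocess
import time

def wait_for_job(product, job_id, poll_seconds=60):
    """Poll 'psm &lt;product&gt; operation-status -j &lt;JOB_ID&gt;' until the job stops running."""
    while True:
        result = subprocess.run(
            ["psm", product, "operation-status", "-j", str(job_id)],
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
            universal_newlines=True)
        status = result.stdout
        print(status)
        # Assumption: the status text contains RUNNING or IN_PROGRESS while the job is still going
        if not any(word in status.upper() for word in ("RUNNING", "IN_PROGRESS")):
            return status
        time.sleep(poll_seconds)

# Example (job id taken from the output of a previous start/stop/create call):
# wait_for_job("analytics", "12345")
</code></pre>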
<h2 id="creatinganoacinstance">Creating an OAC Instance</h2> <p>So far we have discussed the maintenance of already created instances with the <code>start</code>/<code>stop</code>/<code>restart</code> commands, but PSM Cli also allows the creation of an instance via the command line. The call is pretty simple:</p> <pre><code>psm analytics create-service -c &lt;CONFIG_FILE&gt; -of &lt;OUTPUT_FORMAT&gt; </code></pre> <p>Where</p> <ul> <li><strong>CONFIG_FILE</strong> is the file defining all the OAC instance configurations</li> <li><strong>OUTPUT_FORMAT</strong> is the desired output format: <em>short</em>, <em>json</em> or <em>html</em></li> </ul> <p>The question now is: </p> <blockquote> <p>How do I create a Config File for OAC?</p> </blockquote> <p>The documentation doesn't provide any help on this, but we can use the same approach as for on-premises OBIEE and its response file: create the first instance with the web UI, save the payload for future use and change the parameters when necessary.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_14-20-03.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>On the <strong>Confirm</strong> screen, there is the option to download the REST payload in JSON format. </p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/2018-06-25_14-14-20.png" alt="DevOps in OAC: Scripting Oracle Cloud Instance Management with PSM Cli"></p> <p>The resulting JSON Config File is: </p> <pre><code>{
  "edition": "&lt;EDITION&gt;",
  "vmPublicKeyText": "&lt;SSH_TOKEN&gt;",
  "enableNotification": "true",
  "notificationEmail": "&lt;EMAIL&gt;",
  "serviceVersion": "&lt;VERSION&gt;",
  "isBYOL": "false",
  "components": {
    "BI": {
      "adminUserPassword": "&lt;ADMINPWD&gt;",
      "adminUserName": "&lt;ADMINUSER&gt;",
      "analyticsStoragePassword": "&lt;PWD&gt;",
      "shape": "oc3",
      "createAnalyticsStorageContainer": "true",
      "profile_essbase": "false",
      "dbcsPassword": "&lt;DBCSPWD&gt;",
      "totalAnalyticsStorage": "280.0",
      "profile_bi": "true",
      "profile_dv_forced": "true",
      "analyticsStorageUser": "&lt;EMAIL&gt;",
      "dbcsUserName": "&lt;DBUSER&gt;",
      "dbcsPDBName": "&lt;PDBNAME&gt;",
      "dbcsName": "&lt;DBCSNAME&gt;",
      "idcs_enabled": "false",
      "analyticsStorageContainerURL": "&lt;STORAGEURL&gt;",
      "publicStorageEnabled": "false",
      "usableAnalyticsStorage": "180"
    }
  },
  "serviceLevel": "PAAS",
  "meteringFrequency": "HOURLY",
  "subscriptionId": "&lt;SUBSCRIPTIONID&gt;",
  "serviceName": "&lt;SERVICENAME&gt;"
}
</code></pre> <p>This file can be stored and the parameters changed as necessary to create new OAC instances with the command:</p> <pre><code>psm analytics create-service -c &lt;JSON_PAYLOAD_FILE&gt; -of short/json/html </code></pre> <p>As shown previously, the result of the call is a <code>Job Id</code> that can be monitored with </p> <pre><code>psm analytics operation-status -j &lt;JOB_ID&gt; </code></pre> <p>Once the Job is finished successfully, the OAC instance is ready to be used. If, at a certain point, the OAC instance is not needed anymore, it can be deleted via:</p> <pre><code>psm analytics delete-service -s &lt;SERVICE_NAME&gt; -n &lt;DBA_NAME&gt; -p &lt;DBA_PWD&gt; </code></pre> <p>Where</p> <ul> <li><strong>SERVICE_NAME</strong> is the OAC instance name</li> <li><strong>DBA&#95;NAME</strong> and <strong>DBA&#95;PWD</strong> are the credentials of the DBA of the database where the OAC schemas reside</li> </ul> <h1 id="summary">Summary</h1> <p>Worried about providing development isolation in OAC while keeping the costs down? Not anymore! With PSM Cli you now have a way of creating instances on demand, starting/stopping them and scaling them up/down with a command line tool that is easily integrable with automation tools like Jenkins.
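</p> <p>To give an idea of what that integration can look like, here is a minimal sketch of a feature-instance lifecycle script built on the calls above. Everything in it is an assumption to adapt: the payload file name, the service name and the DBA credentials are placeholders, and the extraction of the Job Id from the command output is left out because its layout depends on the output format you configured.</p> <pre><code>import subprocess

def psm_analytics(*args):
    """Run a 'psm analytics ...' command and return its raw output."""
    result = subprocess.run(["psm", "analytics"] + list(args),
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                            universal_newlines=True)
    print(result.stdout)
    return result.stdout

def create_feature_instance(payload_file):
    # psm analytics create-service -c &lt;CONFIG_FILE&gt; -of &lt;OUTPUT_FORMAT&gt;
    out = psm_analytics("create-service", "-c", payload_file, "-of", "json")
    # Extract the Job Id from 'out' (layout depends on the output format) and
    # wait for it with a polling helper like the one sketched earlier
    return out

def drop_feature_instance(service_name, dba_name, dba_pwd):
    # psm analytics delete-service -s &lt;SERVICE_NAME&gt; -n &lt;DBA_NAME&gt; -p &lt;DBA_PWD&gt;
    return psm_analytics("delete-service", "-s", service_name,
                         "-n", dba_name, "-p", dba_pwd)

# Typically wired into two separate automation jobs, e.g.
# create_feature_instance("oac_feature.json")                     # when the feature branch is opened
# drop_feature_instance("FEATURE1", "&lt;DBA_NAME&gt;", "&lt;DBA_PWD&gt;")    # when the feature is merged
</code></pre>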
</p> <p>Create an OAC instances automatically only when features need to be developed or tested, stop&amp;start the instances based on your workforce timetables, take the benefit of the cloud minimizing the cost associated to it by using PSM Cli!</p> <p>One last note; for a full DevOps OAC implementation, PSM Cli is not sufficient: tasks like automated regression testing, code versioning, and promotion can't be managed directly with PSM Cli but require usage of external toolsets like <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">Rittman Mead BI Developer Toolkit</a>. If you are interested in a full DevOps implementation on OAC and understanding the details on how PSM Cli can be used in conjunction with Rittman Mead BI Developer Toolkit don't hesitate to <a href="mailto:jon+psm@rittmanmead.com">contact us</a>!</p> Francesco Tisiot ff6b9edd-737a-4131-9346-b36500e7781f Mon Jun 25 2018 11:18:05 GMT-0400 (EDT) KScope 18! It’s a wrap. https://devepm.com/2018/06/21/kscope-18-its-a-wrap/ That&#8217;s it guys, one more year of KScope finished successfully. This year was a big one for us since we had 3 sessions, one lunch and panel and one lip-sync battle&#8230;. that we lost&#8230; but was a lot of fun (way better than I thought it would be). The sessions were great and we are [&#8230;] RZGiampaoli http://devepm.com/?p=1710 Thu Jun 21 2018 14:20:38 GMT-0400 (EDT) Kscope18: To Infinity & Beyond https://www.us-analytics.com/hyperionblog/kscope18-recap <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/kscope18-recap" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/kscope18%20disney%20costumes.jpg?t=1533950236061" alt="kscope18 disney costumes" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Now that we’ve had time to sleep and rest our feet, it’s time to reflect on Kscope18. A few things stick out right off the bat — Disney, puppies, and (of course) our people.</p> <p>This year had a real Buzz Lightyear “to infinity and beyond” feel to it — and not just because we were at Disney. Our team members made some huge strides (like becoming an Oracle ACE Associate), which we’ll brag about below. In this blog post, we’ll cover what we did, what we loved, and what we hope to see at Kscope19.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fkscope18-recap&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/kscope18-recap Thu Jun 21 2018 13:00:41 GMT-0400 (EDT) Kscope18: It's a Wrap! https://www.rittmanmead.com/blog/2018/06/kscope18-its-a-wrap/ <div class="kg-card-markdown"><img src="https://www.rittmanmead.com/blog/content/images/2018/06/DevOpsSlides-1.png" alt="Kscope18: It's a Wrap!"><p>As announced few weeks back <a href="https://www.rittmanmead.com/blog/2018/05/rittman-mead-at-kscope-2018/">I represented Rittman Mead at ODTUG's Kscope18</a> hosted in the magnificent <a href="https://www.swandolphin.com/">Walt Disney World Dolphin Resort</a>. 
It's always hard to be credible when telling people you are going to Disneyworld for work but Kscope is a must-go event if you are in the Oracle landscape.</p> <img width="400px" src="https://www.rittmanmead.com/blog/content/images/2018/06/UNADJUSTEDNONRAW_thumb_b4d0.jpg" alt="Kscope18: It's a Wrap!"> <p>In the Sunday symposium Oracle PMs share hints about the products latest capabilities and roadmaps, then three full days of presentations spanning from the traditional Database, EPM and BI tracks to the new entries like Blockchain. On top of this the opportunity to be introduced to a network of Oracle experts including Oracle ACEs and Directors, PMs and people willing to share their experience with Oracle (and other) tools.</p> <h1 id="sundaysymposiumandpresentations">Sunday Symposium and Presentations</h1> <p>I attended the Oracle Analytics (BI and Essbase) Sunday Symposium run by Gabby Rubin and Matt Milella from Oracle. It was interesting to see the OAC product enhancements and roadmap as well as the feature catch-up in the latest release of OBIEE on-premises (version <a href="http://www.oracle.com/technetwork/middleware/bi-enterprise-edition/downloads/default-4441820.html">12.2.1.4.0</a>).</p> <p>As expected, most of the push is towards OAC (Oracle Analytics Cloud): all new features will be developed there and eventually (but assurance on this) ported in the on-premises version. This makes a lot of sense from Oracle's point of view since it gives them the ability to produce new features quickly since they need to be tested only against a single set of HW/SW rather than the multitude they are supporting on-premises.</p> <p>Most of the enhancements are expected in the Mode 2/Self Service BI area covered by Oracle Analytics Cloud Standard since a) this is the overall trend of the BI industry b) the features requested by traditional dashboard style reporting are well covered by OBIEE.<br> The following are just few of the items you could expect in future versions:</p> <ul> <li><strong>Recommendations</strong> during the data preparation phase like GeoLocation and Date enrichments</li> <li><strong>Data Flow</strong> enhancements like incremental updates or parametrized data-flows</li> <li>New <strong>Visualizations</strong> and in general more control over the settings of the single charts.</li> </ul> <p>In general Oracle's idea is to provide a single tool that meets both the needs of Mode 1 and Mode 2 Analytics (Self Service vs Centralized) rather than focusing on solving one need at a time like other vendors do.</p> <p>Special mention to the <strong>Oracle Autonomous Analytics Cloud</strong>, released few weeks ago, that differs from traditional OAC for the fact that backups, patching and service monitoring are now managed automatically by Oracle thus releasing the customer from those tasks.</p> <p>During the main conference days (mon-wed) I assisted a lot of very insightful presentations and the <em>Oracle ACE Briefing</em> which gave me ideas for future blog posts, so stay tuned! 
As <a href="https://www.rittmanmead.com/blog/2018/05/rittman-mead-at-kscope-2018/">written previously</a> I had two sessions accepted for Kscope18: &quot;Visualizing Streams&quot; and &quot;DevOps and OBIEE: Do it Before it's too late&quot;, in the following paragraph I'll share details (and link to the slides) of both.</p> <h2 id="visualizingstreams">Visualizing Streams</h2> <p>One of the latest trends in the data and analytics space is the transition from the old style batch based reporting systems which by design were adding a delay between the event creation and the appearance in the reporting to the concept of streaming: ingesting and delivering event information and analytics as soon as the event is created.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Slides-2.png" alt="Kscope18: It's a Wrap!"></p> <p>The session explains how the analytics space changed in recent times providing details on how to setup a modern analytical platform which includes streaming technologies like <strong>Apache Kafka</strong>, SQL based enrichment tools like <strong><a href="https://www.rittmanmead.com/blog/2017/10/ksql-streaming-sql-for-apache-kafka/">Confluent's KSQL</a></strong> and connections to Self Service BI tools like Oracle's <strong>Data Visualization</strong> via sql-on-Hadoop technologies like <strong>Apache Drill</strong>. The slides of the session are available <a href="https://speakerdeck.com/ftisiot/visualizing-streams">here</a>.</p> <h2 id="devopsandobieedoitbeforeitstoolate">DevOps and OBIEE: Do it Before it's Too Late</h2> <p>In the second session, slides <a href="https://speakerdeck.com/ftisiot/devops-and-obiee-do-it-before-its-too-late">here</a>, I've been initially going through the motivations of applying DevOps principles to OBIEE: the self service BI wave started as a response to the long time to delivery associated with the old school centralized reporting projects. Huge monolithic sets of requirements to be delivered, no easy way to provide development isolation, manual testing and code promotion were only few of the stoppers for a fast delivery.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/DevOpsSlides.png" alt="Kscope18: It's a Wrap!"></p> <p>After an initial analysis of the default OBIEE developments methods, the presentation explains how to apply DevOps principles to an OBIEE (or OAC) environment and precisely:</p> <ul> <li>Code versioning techniques</li> <li>Feature-driven environment creation</li> <li>Automated promotion</li> <li>Automated regression testing</li> </ul> <p>Providing details on how the <strong>Rittman Mead BI Developer Toolkit</strong>, partially described <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">here</a>, can act as an accelerator for the adoption of these practices in any custom OBIEE implementation and delivery process.</p> <p>As mentioned before, the overall Kscope experience is great: plenty of technical presentation, roadmap information, networking opportunities and also much fun! Looking forward to <a href="https://kscope19.odtug.com">Kscope19</a> in Seattle!</p> </div> Francesco Tisiot 5b5a56f45000960018e69b42 Thu Jun 21 2018 09:23:53 GMT-0400 (EDT) Kscope18: It's a Wrap! 
https://www.rittmanmead.com/blog/2018/06/kscope18-its-a-wrap/ <img src="https://www.rittmanmead.com/blog/content/images/2018/06/DevOpsSlides-1.png" alt="Kscope18: It's a Wrap!"><p>As announced few weeks back <a href="https://www.rittmanmead.com/blog/2018/05/rittman-mead-at-kscope-2018/">I represented Rittman Mead at ODTUG's Kscope18</a> hosted in the magnificent <a href="https://www.swandolphin.com/">Walt Disney World Dolphin Resort</a>. It's always hard to be credible when telling people you are going to Disneyworld for work but Kscope is a must-go event if you are in the Oracle landscape.</p> <p><img width="400px" src="https://www.rittmanmead.com/blog/content/images/2018/06/UNADJUSTEDNONRAW_thumb_b4d0.jpg" alt="Kscope18: It's a Wrap!"></p> <p>In the Sunday symposium Oracle PMs share hints about the products latest capabilities and roadmaps, then three full days of presentations spanning from the traditional Database, EPM and BI tracks to the new entries like Blockchain. On top of this the opportunity to be introduced to a network of Oracle experts including Oracle ACEs and Directors, PMs and people willing to share their experience with Oracle (and other) tools.</p> <h1 id="sundaysymposiumandpresentations">Sunday Symposium and Presentations</h1> <p>I attended the Oracle Analytics (BI and Essbase) Sunday Symposium run by Gabby Rubin and Matt Milella from Oracle. It was interesting to see the OAC product enhancements and roadmap as well as the feature catch-up in the latest release of OBIEE on-premises (version <a href="http://www.oracle.com/technetwork/middleware/bi-enterprise-edition/downloads/default-4441820.html">12.2.1.4.0</a>). </p> <p>As expected, most of the push is towards OAC (Oracle Analytics Cloud): all new features will be developed there and eventually (but assurance on this) ported in the on-premises version. This makes a lot of sense from Oracle's point of view since it gives them the ability to produce new features quickly since they need to be tested only against a single set of HW/SW rather than the multitude they are supporting on-premises.</p> <p>Most of the enhancements are expected in the Mode 2/Self Service BI area covered by Oracle Analytics Cloud Standard since a) this is the overall trend of the BI industry b) the features requested by traditional dashboard style reporting are well covered by OBIEE. <br> The following are just few of the items you could expect in future versions: </p> <ul> <li><strong>Recommendations</strong> during the data preparation phase like GeoLocation and Date enrichments</li> <li><strong>Data Flow</strong> enhancements like incremental updates or parametrized data-flows</li> <li>New <strong>Visualizations</strong> and in general more control over the settings of the single charts.</li> </ul> <p>In general Oracle's idea is to provide a single tool that meets both the needs of Mode 1 and Mode 2 Analytics (Self Service vs Centralized) rather than focusing on solving one need at a time like other vendors do.</p> <p>Special mention to the <strong>Oracle Autonomous Analytics Cloud</strong>, released few weeks ago, that differs from traditional OAC for the fact that backups, patching and service monitoring are now managed automatically by Oracle thus releasing the customer from those tasks.</p> <p>During the main conference days (mon-wed) I assisted a lot of very insightful presentations and the <em>Oracle ACE Briefing</em> which gave me ideas for future blog posts, so stay tuned! 
As <a href="https://www.rittmanmead.com/blog/2018/05/rittman-mead-at-kscope-2018/">written previously</a> I had two sessions accepted for Kscope18: "Visualizing Streams" and "DevOps and OBIEE: Do it Before it's too late", in the following paragraph I'll share details (and link to the slides) of both.</p> <h2 id="visualizingstreams">Visualizing Streams</h2> <p>One of the latest trends in the data and analytics space is the transition from the old style batch based reporting systems which by design were adding a delay between the event creation and the appearance in the reporting to the concept of streaming: ingesting and delivering event information and analytics as soon as the event is created.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Slides-2.png" alt="Kscope18: It's a Wrap!"></p> <p>The session explains how the analytics space changed in recent times providing details on how to setup a modern analytical platform which includes streaming technologies like <strong>Apache Kafka</strong>, SQL based enrichment tools like <strong><a href="https://www.rittmanmead.com/blog/2017/10/ksql-streaming-sql-for-apache-kafka/">Confluent's KSQL</a></strong> and connections to Self Service BI tools like Oracle's <strong>Data Visualization</strong> via sql-on-Hadoop technologies like <strong>Apache Drill</strong>. The slides of the session are available <a href="https://speakerdeck.com/ftisiot/visualizing-streams">here</a>.</p> <h2 id="devopsandobieedoitbeforeitstoolate">DevOps and OBIEE: Do it Before it's Too Late</h2> <p>In the second session, slides <a href="https://speakerdeck.com/ftisiot/devops-and-obiee-do-it-before-its-too-late">here</a>, I've been initially going through the motivations of applying DevOps principles to OBIEE: the self service BI wave started as a response to the long time to delivery associated with the old school centralized reporting projects. Huge monolithic sets of requirements to be delivered, no easy way to provide development isolation, manual testing and code promotion were only few of the stoppers for a fast delivery.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/DevOpsSlides.png" alt="Kscope18: It's a Wrap!"></p> <p>After an initial analysis of the default OBIEE developments methods, the presentation explains how to apply DevOps principles to an OBIEE (or OAC) environment and precisely:</p> <ul> <li>Code versioning techniques</li> <li>Feature-driven environment creation </li> <li>Automated promotion</li> <li>Automated regression testing</li> </ul> <p>Providing details on how the <strong>Rittman Mead BI Developer Toolkit</strong>, partially described <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">here</a>, can act as an accelerator for the adoption of these practices in any custom OBIEE implementation and delivery process.</p> <p>As mentioned before, the overall Kscope experience is great: plenty of technical presentation, roadmap information, networking opportunities and also much fun! 
Looking forward to <a href="https://kscope19.odtug.com">Kscope19</a> in Seattle!</p> Francesco Tisiot 2f1beed8-807d-4e61-8aee-3632e5589b16 Thu Jun 21 2018 09:23:53 GMT-0400 (EDT) Extracting Data from Fusion SaaS via View Objects with Data Sync http://www.ateam-oracle.com/extracting-data-from-fusion-saas-via-view-objects-with-data-sync/ For other A-Team articles by Richard, click here Introduction The Oracle Data Sync tool provides the ability to extract from both on-premise, and cloud data sources, and to load that data into Oracle Analytics Cloud Service (OAC), Oracle Autonomous Analytics Cloud Service (OAAC), and other oracle relational databases, or into Essbase. In the most recent 2.5 [&#8230;] Richard Williams http://www.ateam-oracle.com/?p=50943 Wed Jun 20 2018 17:48:01 GMT-0400 (EDT) EPM Roadmap: The Future of HFM, FCM & FCCS https://www.us-analytics.com/hyperionblog/future-of-hfm-fcm-and-fccs <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/future-of-hfm-fcm-and-fccs" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/whats%20new%20in%20fccs%20hfm%20and%20fcm.jpg?t=1533950236061" alt="whats new in fccs hfm and fcm" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>So, you didn’t make it to Kscope18 or, if you did, you missed the part about the financial close roadmap. No worries — we’ll cover the timeline and new features coming to on-prem financial close tools in this blog post. We’ll even give you an update of what’s coming to Oracle’s Financial Consolidation and Close Service (FCCS). With the release date for the new version (11.2) right around the corner, we understand how important it is for your organization to get the full picture, so decision-makers can make the best next step.</p> <p>In this blog post, we’ll cover the timeline for 11.2 and previous versions, features coming to HFM and FCM, as well as how FCCS stacks up.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Ffuture-of-hfm-fcm-and-fccs&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/future-of-hfm-fcm-and-fccs Tue Jun 19 2018 17:21:14 GMT-0400 (EDT) ODTUG Kscope18 Award Winners https://www.odtug.com/p/bl/et/blogaid=810&source=1 We are thrilled about the top-notch content that was presented at ODTUG Kscope18 this year! A very special congratulations goes out to our speaker award winners, including Best First-Time Speaker, Best Overall Speaker, and Top Speakers and Co-Speakers by track. ODTUG https://www.odtug.com/p/bl/et/blogaid=810&source=1 Tue Jun 19 2018 16:43:40 GMT-0400 (EDT) EPM Roadmap: The Future of Hyperion Planning vs. 
PBCS https://www.us-analytics.com/hyperionblog/future-of-hyperion-planning-vs-pbcs <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/future-of-hyperion-planning-vs-pbcs" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/what%27s%20coming%20to%20pbcs%20and%20hyperion%20planning.jpg?t=1533950236061" alt="what's coming to pbcs and hyperion planning" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Kscope18 has ended — which means those who didn’t attend (or missed this particular session) are eager to hear about the changes to Oracle’s plans for the future of on-premises EPM solutions.</p> <p>It’s no secret that Oracle is putting all the innovation in the cloud, but users can still look forward to the release of the <span><a href="https://www.us-analytics.com/hyperionblog/the-death-of-hyperion-on-prem-support">last on-prem version announced last year at Kscope17.</a></span></p> <p>In this blog post, we’ll specifically look at Hyperion Planning and how the roadmap has changed for Planning since last year. We’ll also talk about versions that have lost support from Oracle, as well as the disparity of features between PBCS vs. Hyperion Planning.</p> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Ffuture-of-hyperion-planning-vs-pbcs&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/future-of-hyperion-planning-vs-pbcs Tue Jun 19 2018 16:33:39 GMT-0400 (EDT) Twitter Analytics using Python - Part 3 http://www.oralytics.com/2018/06/twitter-analytics-using-python-part-3.html <p>This is my third (of five) post on using Python to process Twitter data.</p> <p>Check out my all the posts in the series.</p> <p>In this post I'll have a quick look at how to save the tweets you have download. By doing this allows you to access them at a later point and to perform more analysis. You have a few instances of saving the tweets. The first of these is to save them to files and the second option is to save them to a table in a database.</p> <span style='text-decoration:underline;'><strong>Saving Tweets to files</strong></span><p>In the previous blog post (in this series) I had converged the tweets to Pandas and then used the panda structure to perform some analysis on the data and create some charts. 
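</p> <p>Just to make the starting point clear: everything below assumes the tweets are already available as a Pandas data frame called <code>tweets_pd</code>, built along the following lines (the field names here are only illustrative, the real ones come from the earlier posts in this series).</p> <pre><code>import pandas as pd

# Illustrative only: each downloaded tweet reduced to a plain dictionary
tweets = [
    {"tweet_id": 1, "screen_name": "brendantierney", "place": "", "lang": "en",
     "date_created": "2018-06-12", "fav_count": 3, "retweet_count": 1,
     "tweet_text": "Example tweet text"},
]

# Convert the list of dictionaries into the data frame used below
tweets_pd = pd.DataFrame(tweets)
print(tweets_pd.head())
</code></pre> <p>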
We have a very simple command to save to CSV.</p> <pre><br /># save tweets to a file<br />tweets_pd.to_csv('/Users/brendan.tierney/Dropbox/tweets.csv', sep=',')<br /></pre> <p>We can inspect this file using a spreadsheet or some other app that can read CSV files and get the following.</p> <p><img src="https://lh3.googleusercontent.com/-bZWB7qFM7dQ/WyGDggKOWoI/AAAAAAAAAco/ajhG_l5iqlIhZmJ_WMAzofjZd0P9YzoyACHMYCw/twitter_app8.png?imgmax=1600" alt="Twitter app8" title="twitter_app8.png" border="0" width="598" height="284" /></p> <p>When you want to read these tweets back into your Python environment, all you need to do is the following.</p> <pre><br /># and if we want to reuse these tweets at a later time we can reload them<br />old_tweets = pd.read_csv('/Users/brendan.tierney/Dropbox/tweets.csv')<br /><br />old_tweets<br /></pre><p><img src="https://lh3.googleusercontent.com/-DpTzFpeG5AE/WyGDhR9JUAI/AAAAAAAAAcs/oTmXtDFJvcUNP72Y_LWOE9qX_A5P1fD-ACHMYCw/tweet_app9.png?imgmax=1600" alt="Tweet app9" title="tweet_app9.png" border="0" width="549" height="171" /></p> <p>That's all very easy!</p> <p><br><span style='text-decoration:underline;'><strong>Saving Tweets to a Database</strong></span></p><p>There are two ways to add tweets to table in the database. There is the slow way (row-by-row) or the fast way doing a bulk insert. </p> <p>Before we get started with inserting data, lets get our database connection setup and the table to store the tweets for our date. To do this we need to use the cx_oracle python library. The following codes shows the setting up of the connections details (without my actual login details), establishes the connects and then retrieves some basic connection details to prove we are connected.</p> <pre><br /># import the Oracle Python library<br />import cx_Oracle<br /><br /># define the login details<br />p_username = "..."<br />p_password = "..."<br />p_host = "..."<br />p_service = "..."<br />p_port = "1521"<br /><br /># create the connection<br />con = cx_Oracle.connect(user=p_username, password=p_password, dsn=p_host+"/"+p_service+":"+p_port)<br />cur = con.cursor()<br /><br /># print some details about the connection and the library<br />print("Database version:", con.version)<br />print("Oracle Python version:", cx_Oracle.version)<br /><br /><br />Database version: 12.1.0.1.0<br />Oracle Python version: 6.3.1<br /></pre> <p>Now we can create a table based on the current date. </p> <pre><br /># drop the table if it already exists<br />#drop_table = "DROP TABLE TWEETS_" + cur_date<br />#cur.execute(drop_table)<br /><br />cre_table = "CREATE TABLE TWEETS_" + cur_date + " (tweet_id number, screen_name varchar2(100), place varchar2(2000), lang varchar2(20), date_created varchar2(40), fav_count number, retweet_count number, tweet_text varchar2(200))"<br /><br />cur.execute(cre_table)<br /></pre> <p>Now lets first start with the slow (row-by-row) approach. To do this we need to take our Panda data frame and convert it to lists that can be indexed individually.</p> <pre><br />lst_tweet_id = [item[0] for item in rows3]<br />lst_screen_name = [item[1] for item in rows3]<br />lst_lang =[item[3] for item in rows3]<br />lst_date_created = [item[4] for item in rows3]<br />lst_fav_count = [item[5] for item in rows3]<br />lst_retweet_count = [item[6] for item in rows3]<br />lst_tweet_text = [item[7] for item in rows3]<br /><br />#define a cursor to use for the the inserts<br />cur = con.cursor()<br />for i in range(len(rows3)):<br /> #do the insert using the index. 
This can be very slow and should not be used on big data<br /> cur3.execute("insert into TWEETS_2018_06_12 (tweet_id, screen_name, lang, date_created, fav_count, retweet_count, tweet_text) values (:arg_1, :arg_2, :arg_3, :arg_4, :arg_5, :arg_6, :arg_7)",<br /> {'arg_1':lst_tweet_id[i], 'arg_2':lst_screen_name[i], 'arg_3':lst_lang[i], 'arg_4':lst_date_created[i],<br /> 'arg_5':lst_fav_count[i], 'arg_6':lst_retweet_count[i], 'arg_7':lst_tweet_text[i]})<br /><br />#commit the records to the database and close the cursor<br />con.commit()<br />cur.close()<br /></pre> <p><img src="https://lh3.googleusercontent.com/-jFvSMaBNBEY/WyGDiVdXhpI/AAAAAAAAAcw/z7WV9rW7XA4l5ZMCn1KZ14-ZwbayEpmIgCHMYCw/tweet_app10.png?imgmax=1600" alt="Tweet app10" title="tweet_app10.png" border="0" width="572" height="219" /></p> <p>Now let us look a quicker way of doing this.</p><p><strong>WARNING:</strong> It depends on the version of the cx_oracle library you are using. You may encounter some errors relating to the use of floats, etc. You might need to play around with the different versions of the library until you get the one that works for you. Or these issues might be fixed in the most recent versions.</p> <p>The first step is to convert the panda data frame into a list.</p> <pre><br />rows = [tuple(x) for x in tweets_pd.values]<br />rows<br /></pre> <p><img src="https://lh3.googleusercontent.com/-L2CkrdjvGMw/WyGDjJ-dKfI/AAAAAAAAAc0/aVKQfbBUpAsX1TMPBoszc-pCWUxKYzRfgCHMYCw/tweet_app11.png?imgmax=1600" alt="Tweet app11" title="tweet_app11.png" border="0" width="558" height="211" /></p> <p>Now we can do some cursor setup like setting the array size. This determines how many records are sent to the database in each batch. Better to have a larger number than a single digit number.</p> <pre><br />cur = con.cursor()<br /><br />cur.bindarraysize = 100<br /><br />cur2.executemany("insert into TWEETS_2018_06_12 (tweet_id, screen_name, place, lang, date_created, fav_count, retweet_count, tweet_text) values (:1, :2, :3, :4, :5, :6, :7, :8)", rows)<br /></pre> <br><p>Check out the other blog posts in this series of Twitter Analytics using Python.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-2278173445221640965 Mon Jun 18 2018 09:49:00 GMT-0400 (EDT) ChitChat for OBIEE - Now Available as Open Source! https://www.rittmanmead.com/blog/2018/06/chitchat-for-obiee-now-available-as-open-source/ <div class="kg-card-markdown"><p><a href="https://www.rittmanmead.com/chitchat">ChitChat</a> is the Rittman Mead commentary tool for OBIEE. ChitChat enhances the BI experience by bridging conversational capabilities into the BI dashboard, increasing ease-of-use and seamlessly joining current workflows. From tracking the history behind analytical results to commenting on specific reports, ChitChat provides a multi-tiered platform built into the BI dashboard that creates a more collaborative and dynamic environment for discussion.</p> <p><strong>Today we're pleased to announce the release into open-source of ChitChat! 
You can find the github repository here: <a href="https://github.com/RittmanMead/ChitChat">https://github.com/RittmanMead/ChitChat</a></strong></p> <p>Highlights of the features that ChitChat provides includes:</p> <ul> <li> <p><strong>Annotate</strong> - ChitChat's multi-tiered annotation capabilities allow BI users to leave comments where they belong, at the source of the conversation inside the BI ecosystem.</p> </li> <li> <p><strong>Document</strong> - ChitChat introduces the ability to include documentation inside your BI environment for when you need more that a comment. Keeping key materials contained inside the dashboard gives the right people access to key information without searching.</p> </li> <li> <p><strong>Share</strong> - ChitChat allows to bring attention to important information on the dashboard using the channel or workflow manager you prefer.</p> </li> <li> <p><strong>Verified Compatibility</strong> - ChitChat has been tested against popular browsers, operating systems, and database platforms for maximum compatibility.</p> </li> </ul> <center><iframe src="https://player.vimeo.com/video/194511213" width="640" height="360" frameborder="0" allowfullscreen></iframe></center> <h1 id="gettingstarted">Getting Started</h1> <p>In order to use ChitChat you will need OBIEE 11.1.1.7.x, 11.1.1.9.x or 12.2.1.x.</p> <p>First, <a href="https://github.com/RittmanMead/ChitChat/archive/master.zip">download</a> the application and unzip it to a convenient access location in the OBIEE server, such as a home directory or the desktop.</p> <p>See the <a href="https://github.com/RittmanMead/ChitChat/wiki/Installation-Guide">Installation Guide</a> for full detail on how to install ChitChat.</p> <h2 id="databasesetup">Database Setup</h2> <p>Build the required database tables using the installer:</p> <pre><code>cd /home/federico/ChitChatInstaller java -jar SocializeInstaller.jar -Method:BuildDatabase -DatabasePath:/app/oracle/oradata/ORCLDB/ORCLPDB1/ -JDBC:&quot;jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1&quot; -DatabaseUser:&quot;sys as sysdba&quot; -DatabasePassword:password -NewDBUserPassword:password1 </code></pre> <p>The installer will create a new user (<code>RMREP</code>), and tables required for the application to operate correctly. <code>-DatabasePath</code> flag tells the installer where to place the datafiles for ChitChat in your database server. <code>-JDBC</code> indicates what JDBC driver to use, followed by a colon and the JDBC string to connect to your database. <code>-DatabaseUser</code> specifies the user to access the database with. <code>-DatabasePassword</code> specifies the password for the user previously given. <code>-NewDBUserPassword</code> indicates the password for the new user (<code>RMREP</code>) being created.</p> <h2 id="weblogicdatasourcesetup">WebLogic Data Source Setup</h2> <p>Add a Data Source object to WebLogic using WLST:</p> <pre><code>cd /home/federico/ChitChatInstaller/jndiInstaller $ORACLE_HOME/oracle_common/common/bin/wlst.sh ./create-ds.py </code></pre> <p>To use this script, modify the <code>ds.properties</code> file using the method of your choice. 
The following parameters must be updated to reflect your installation: <code>domain.name</code>, <code>admin.url</code>, <code>admin.userName</code>, <code>admin.password</code>, <code>datasource.target</code>, <code>datasource.url</code> and <code>datasource.password</code>.</p> <h2 id="deployingtheapplicationonweblogic">Deploying the Application on WebLogic</h2> <p>Deploy the application to WebLogic using WLST:</p> <pre><code>cd /home/federico/ChitChatInstaller $ORACLE_HOME/oracle_common/common/bin/wlst.sh ./deploySocialize.py </code></pre> <p>To use this script, modify the <code>deploySocialize.py</code> file using the method of your choice. The first line must be updated with username, password and url to connect to your Weblogic Server instance. The second parameter in <code>deploy</code> command must be updated to reflect your ChitChat access location.</p> <h2 id="configuringtheapplication">Configuring the Application</h2> <p>ChitChat requires several several configuration parameters to allow the application to operate successfully. To change the configuration, you must log in to the database schema as the <code>RMREP</code> user, and update the values manually into the <code>APPLICATION_CONSTANT</code> table.</p> <p>See the <a href="https://github.com/RittmanMead/ChitChat/wiki/Installation-Guide">Installation Guide</a> for full detail on the available configuration and integration options.</p> <h2 id="enablingtheapplication">Enabling the Application</h2> <p>To use ChitChat, you must add a small block of code on any given dashboard (in a new column on the right-side of the dashboard) where you want to have the application enabled:</p> <pre><code>&lt;rm id=&quot;socializePageParams&quot; user=&quot;@{biServer.variables['NQ_SESSION.USER']}&quot; tab=&quot;@{dashboard.currentPage.name}&quot; page=&quot;@{dashboard.name}&quot;&gt; &lt;/rm&gt; &lt;script src=&quot;/Socialize/js/dashboard.js&quot;&gt;&lt;/script&gt; </code></pre> <p>Congratulations! You have successfully installed the Rittman Mead commentary tool. To use the application to its fullest capabilities, please refer to the <a href="https://github.com/RittmanMead/ChitChat/wiki/User-Guide">User Guide</a>.</p> <h1 id="problems">Problems?</h1> <p>Please raise any issues on the <a href="https://github.com/RittmanMead/ChitChat/issues">github issue tracker</a>. This is open source, so bear in mind that it's no-one's &quot;job&quot; to maintain the code - it's open to the community to use, benefit from, and maintain.</p> <p>If you'd like specific help with an implementation, Rittman Mead would be delighted to assist - please do <a href="mailto:jon+chitchat@rittmanmead.com">get in touch</a> with Jon Mead or DM us on Twitter <a href="https://twitter.com/rittmanmead">@rittmanmead</a> to get access to our Slack channel for support about ChitChat.</p> <p>Please contact us on the same channels to request a demo.</p> </div> Federico Venturin 5b5a56f45000960018e69b40 Fri Jun 15 2018 04:20:00 GMT-0400 (EDT) ChitChat for OBIEE - Now Available as Open Source! https://www.rittmanmead.com/blog/2018/06/chitchat-for-obiee-now-available-as-open-source/ <p><a href="https://www.rittmanmead.com/chitchat">ChitChat</a> is the Rittman Mead commentary tool for OBIEE. ChitChat enhances the BI experience by bridging conversational capabilities into the BI dashboard, increasing ease-of-use and seamlessly joining current workflows. 
From tracking the history behind analytical results to commenting on specific reports, ChitChat provides a multi-tiered platform built into the BI dashboard that creates a more collaborative and dynamic environment for discussion.</p> <p><strong>Today we're pleased to announce the release into open-source of ChitChat! You can find the github repository here: <a href="https://github.com/RittmanMead/ChitChat">https://github.com/RittmanMead/ChitChat</a></strong></p> <p>Highlights of the features that ChitChat provides includes:</p> <ul> <li><p><strong>Annotate</strong> - ChitChat's multi-tiered annotation capabilities allow BI users to leave comments where they belong, at the source of the conversation inside the BI ecosystem.</p></li> <li><p><strong>Document</strong> - ChitChat introduces the ability to include documentation inside your BI environment for when you need more that a comment. Keeping key materials contained inside the dashboard gives the right people access to key information without searching.</p></li> <li><p><strong>Share</strong> - ChitChat allows to bring attention to important information on the dashboard using the channel or workflow manager you prefer.</p></li> <li><p><strong>Verified Compatibility</strong> - ChitChat has been tested against popular browsers, operating systems, and database platforms for maximum compatibility.</p></li> </ul> <p><center><iframe src="https://player.vimeo.com/video/194511213" width="640" height="360" frameborder="0" allowfullscreen></iframe></center></p> <h1 id="gettingstarted">Getting Started</h1> <p>In order to use ChitChat you will need OBIEE 11.1.1.7.x, 11.1.1.9.x or 12.2.1.x.</p> <p>First, <a href="https://github.com/RittmanMead/ChitChat/archive/master.zip">download</a> the application and unzip it to a convenient access location in the OBIEE server, such as a home directory or the desktop.</p> <p>See the <a href="https://github.com/RittmanMead/ChitChat/wiki/Installation-Guide">Installation Guide</a> for full detail on how to install ChitChat.</p> <h2 id="databasesetup">Database Setup</h2> <p>Build the required database tables using the installer:</p> <pre><code>cd /home/federico/ChitChatInstaller java -jar SocializeInstaller.jar -Method:BuildDatabase -DatabasePath:/app/oracle/oradata/ORCLDB/ORCLPDB1/ -JDBC:"jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1" -DatabaseUser:"sys as sysdba" -DatabasePassword:password -NewDBUserPassword:password1 </code></pre> <p>The installer will create a new user (<code>RMREP</code>), and tables required for the application to operate correctly. <code>-DatabasePath</code> flag tells the installer where to place the datafiles for ChitChat in your database server. <code>-JDBC</code> indicates what JDBC driver to use, followed by a colon and the JDBC string to connect to your database. <code>-DatabaseUser</code> specifies the user to access the database with. <code>-DatabasePassword</code> specifies the password for the user previously given. <code>-NewDBUserPassword</code> indicates the password for the new user (<code>RMREP</code>) being created.</p> <h2 id="weblogicdatasourcesetup">WebLogic Data Source Setup</h2> <p>Add a Data Source object to WebLogic using WLST:</p> <pre><code>cd /home/federico/ChitChatInstaller/jndiInstaller $ORACLE_HOME/oracle_common/common/bin/wlst.sh ./create-ds.py </code></pre> <p>To use this script, modify the <code>ds.properties</code> file using the method of your choice. 
The following parameters must be updated to reflect your installation: <code>domain.name</code>, <code>admin.url</code>, <code>admin.userName</code>, <code>admin.password</code>, <code>datasource.target</code>, <code>datasource.url</code> and <code>datasource.password</code>.</p> <h2 id="deployingtheapplicationonweblogic">Deploying the Application on WebLogic</h2> <p>Deploy the application to WebLogic using WLST:</p> <pre><code>cd /home/federico/ChitChatInstaller $ORACLE_HOME/oracle_common/common/bin/wlst.sh ./deploySocialize.py </code></pre> <p>To use this script, modify the <code>deploySocialize.py</code> file using the method of your choice. The first line must be updated with username, password and url to connect to your Weblogic Server instance. The second parameter in <code>deploy</code> command must be updated to reflect your ChitChat access location.</p> <h2 id="configuringtheapplication">Configuring the Application</h2> <p>ChitChat requires several several configuration parameters to allow the application to operate successfully. To change the configuration, you must log in to the database schema as the <code>RMREP</code> user, and update the values manually into the <code>APPLICATION_CONSTANT</code> table.</p> <p>See the <a href="https://github.com/RittmanMead/ChitChat/wiki/Installation-Guide">Installation Guide</a> for full detail on the available configuration and integration options.</p> <h2 id="enablingtheapplication">Enabling the Application</h2> <p>To use ChitChat, you must add a small block of code on any given dashboard (in a new column on the right-side of the dashboard) where you want to have the application enabled:</p> <pre><code>&lt;rm id="socializePageParams" user="@{biServer.variables['NQ_SESSION.USER']}" tab="@{dashboard.currentPage.name}" page="@{dashboard.name}"&gt; &lt;/rm&gt; &lt;script src="/Socialize/js/dashboard.js"&gt;&lt;/script&gt; </code></pre> <p>Congratulations! You have successfully installed the Rittman Mead commentary tool. To use the application to its fullest capabilities, please refer to the <a href="https://github.com/RittmanMead/ChitChat/wiki/User-Guide">User Guide</a>.</p> <h1 id="problems">Problems?</h1> <p>Please raise any issues on the <a href="https://github.com/RittmanMead/ChitChat/issues">github issue tracker</a>. This is open source, so bear in mind that it's no-one's "job" to maintain the code - it's open to the community to use, benefit from, and maintain.</p> <p>If you'd like specific help with an implementation, Rittman Mead would be delighted to assist - please do <a href="mailto:jon+chitchat@rittmanmead.com">get in touch</a> with Jon Mead or DM us on Twitter <a href="https://twitter.com/rittmanmead">@rittmanmead</a> to get access to our Slack channel for support about ChitChat.</p> <p>Please contact us on the same channels to request a demo.</p> Federico Venturin 65b24146-b1c6-4169-8973-0428322b3ee7 Fri Jun 15 2018 04:20:00 GMT-0400 (EDT) Adaptive Plans at Kscope Community Service Day https://danischnider.wordpress.com/2018/06/14/adaptive-plans-at-kscope-community-service-day/ <p>Since Oracle 12c, the query optimizer is able to change execution plans at runtime. This feature is called “Adaptive Plans”. 
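</p> <p>If you want to see the feature in action before reading on, the easiest way is to look at an execution plan with the ADAPTIVE format option of DBMS_XPLAN. The snippet below is only a sketch: the connection details and the query are made up, and you need a 12c (or later) database plus access to the V$ views for DISPLAY_CURSOR to work.</p> <pre><code>import cx_Oracle

# Made-up connection details: point this at any 12c (or later) database
con = cx_Oracle.connect(user="scott", password="...", dsn="localhost/orclpdb1")
cur = con.cursor()

# Run a join that the optimizer may execute with an adaptive plan
cur.execute("SELECT count(*) FROM emp e JOIN dept d ON e.deptno = d.deptno")
cur.fetchall()

# The +ADAPTIVE format shows the plan branches the optimizer kept as runtime alternatives
cur.execute("SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT =&gt; '+ADAPTIVE'))")
for (line,) in cur:
    print(line)
</code></pre> <p>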
Something similar happened on the ODTUG Community Service Day at the Kscope conference in Orlando.</p> <p><span id="more-572"></span></p> <p>The ODTUG Community Service Day always takes place the Saturday before the <a href="https://kscope18.odtug.com">Kscope</a> conference of <a href="https://www.odtug.com">ODTUG</a> (Oracle Developer Tools User Group). Conference attendees and speakers are asked as volunteers to provide help. Because I attended Kscope for the first time, my Trivadis colleague Kim Berg Hansen recommended to join this day because it’s a good chance to meet people and have interesting chats.</p> <p>In the morning, we did several jobs in a member grocery store of <a href="https://uporlando.org">UP Orlando</a>, a super market for people in poverty. My job was to clean the shelves of the “crackers and cookies” department and to check the food in the shelves for the best-before date (as a craft beer brewer, I’m familiar with this kind of work).</p> <p><a href="https://twitter.com/wrkingtechworld/status/1005526593224237056"><img title="community_service_day.jpg" src="https://danischnider.files.wordpress.com/2018/06/icommunity_service_day.jpg?w=400&#038;h=400" alt="Community service day" width="400" height="400" border="0" /><br /></a><em>Photo: <a href="https://twitter.com/wrkingtechworld/status/1005526593224237056">Twitter / @wrkingtechworld</a></em></p> <p>In the afternoon, we had to pack more than 1000 bags for the conference attendees with marketing flyers and advertising gadgets. During this “boring” job, it was interesting to see, how many smart people with experience in performance optimization were helping here. What happened, was a kind of optimization at runtime, an adaptive plan to improve packing performance during its execution.</p> <h1>Default Plan to Pack Conference Bags</h1> <p>The original plan was to collect the material in two lines of tables (real tables, not database tables). Several slave processes, called “Community Service Day Volunteers”, were in a line, each of them took one piece of paper after another from a stack. Each collection of advertising flyers was then packed into a bag, and afterwards the bag was filled with additional gadgets like water bottle, USB charger, bottle opener, etc.. At the end of the line, the packed bag was placed on a cart and the slave process walked back to the start of the line.</p> <p><a href="https://twitter.com/OliverLemm/status/1005852940588994560"><img title="DfWB6Y8W4AM_iPe.jpg" src="https://danischnider.files.wordpress.com/2018/06/idfwb6y8w4am_ipe.jpg?w=600&#038;h=450" alt="DfWB6Y8W4AM iPe" width="600" height="450" border="0" /><br /></a><em>Photo: <a href="https://twitter.com/OliverLemm/status/1005852940588994560">Twitter / @OliverLemm</a></em></p> <h1>Increase Parallel Degree</h1> <p>Some of the volunteers immediately realized that this process can be optimized by increasing the parallel degree. They collected two sets of material concurrently, but unfortunately, this slowed down the queue and created wait events for the succeeding volunteers. A more appropriate way of optimization was to collect the stuff from both sides of the tables. So the parallel degree of the whole packing line could be increased from 2 to 4. 
This helped to reduce the wait events on the paper line.</p> <h1>Build a More Advanced Plan</h1> <p>Because grabbing the paper sheets took more time than collecting the gadgets, the workload at the beginning of each queue was much higher than at the end, So, we decided to build a more advanced execution plan with three steps:</p> <p><img title="bag_filling_plan.jpg" src="https://danischnider.files.wordpress.com/2018/06/ibag_filling_plan.jpg?w=600&#038;h=274" alt="Bag filling plan" width="600" height="274" border="0" /></p> <ol> <li>Four parallel groups of “Paper Grabbers” (A), worked in the Paper Section. Each of them collected the marketing flyers for one bag and gave them to the “Bag Fillers” (B). Because collecting the sheets of paper takes a lot of elapsed time, four parallel queues were allocated for this job.</li> <li>Two “Bag Fillers” (B) took the marketing flyer collections from the “Paper Grabbers” and put them into empty ODTUG conference bags. Then they gave each bag to one of the “Gadget Collectors” (C).</li> <li>In two parallel queues, the “Gadget Collectors” (C) added all the additional gadgets provided in the Gadgets Section into the pre-filled bags. At the end, they put the bag on a cart that was brought to the reception desk at the end.</li> </ol> <p>This plan worked well, and we already had plans to improve it even more. But unfortunately, all the work was already done. So, there was no need for further improvement.</p> <h1>Lessions Learned</h1> <p>As you can see in this example, it is a good approach to improve a process during its execution. This can be done in manual work like filling ODTUG conference bags, but also in executing a SQL statement. The query optimizer of Oracle 12c has a similar approach with Adaptive Plans &#8211; although the implementation is slightly different from what we did on the Kscope Community Service Day.</p> Dani Schnider http://danischnider.wordpress.com/?p=572 Thu Jun 14 2018 16:51:09 GMT-0400 (EDT) Real-time Sailing Yacht Performance - Kafka (Part 2) https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-getting-started-part-2/ <div class="kg-card-markdown"><p>In the last two blogs <a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Getting Started (Part 1)</a> and <a href="https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-stepping-back-a-bit-part-1-1/">Stepping back a bit (Part 1.1)</a> I looked at what data I could source from the boat's instrumentation and introduced some new hardware to the boat to support the analysis.</p> <p>Just to recap I am looking to create the yachts Polars with a view to improving our knowledge of her abilities (whether we can use this to improve our race performance is another matter).</p> <blockquote> <p>Polars give us a plot of the boat's speed given a true wind speed and angle. This, in turn, informs us of the optimal speed the boat could achieve at any particular angle to wind and wind speed.</p> </blockquote> <img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-11-at-10.01.38.png" style="width: 515px; height:640px"> <p>In the first blog I wrote a reader in Python that takes messages from a TCP/IP feed and writes the data to a file. The reader is able, using a hash key to validate each message (See <a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Getting Started (Part 1)</a>). 
I'm also converting valid messages into a JSON format so that I can push meaningful structured data downstream. In this blog, I'll cover the architecture and considerations around the setup of Kafka for this use case. I will not cover the installation of each component, there has been a lot written in this area. (We have some internal IP to help with configuration). I discuss the process I went through to get the data in real time displayed in a Grafana dashboard.</p> <h1 id="introducingkafka">Introducing Kafka</h1> <p>I have introduced Kafka into the architecture as a next step.</p> <h2 id="whykafka">Why Kafka?</h2> <p>I would like to be able to stream this data real time and don't want to build my own batch mechanism or create a publish/ subscribe model. With Kafka I don't need to check that messages have been successfully received and if there is a failure while consuming messages the consumers will keep track of what has been consumed. If a consumer fails it can be restarted and it will pick up where it left off (consumer offset stored in Kafka as a topic). In the future, I could scale out the platform and introduce some resilience through clustering and replication (this shouldn't be required for a while). Kafka therefore is saving me a lot of manual engineering and will support future growth (should I come into money and am able to afford more sensors for the boat).</p> <h2 id="highlevelarchitecture">High level architecture</h2> <p>Let's look at the high-level components and how they fit together. Firstly I have the instruments transmitting on wireless TCP/IP and these messages are read using my Python I wrote earlier in the year.</p> <p>I have enhanced the Python I wrote to read and translate the messages and instead of writing to a file I stream the JSON messages to a topic in Kafka.</p> <p>Once the messages are in Kafka I use Kafka Connect to stream the data into <a href="https://www.influxdata.com/time-series-platform/influxdb/">InfluxDB</a>. The messages are written to topic-specific measurements (tables in InfluxdDB).</p> <p><a href="https://grafana.com/">Grafana</a> is used to display incoming messages in real time.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-17.18.23.png" alt=""></p> <h3 id="kafkacomponents">Kafka components</h3> <p>I am running the application on a MacBook Pro. Basically a single node instance with <a href="https://zookeeper.apache.org/">zookeeper</a>, Kafka broker and a Kafka connect worker. This is the minimum setup with very little resilience.</p> <img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-21.29.48.png" style="width: 400px; height:300px"> <h3 id="insummary">In summary</h3> <p><strong>ZooKeeper</strong> is an open-source server that enables distributed coordination of configuration information. In the Kafka architecture ZooKeeper stores metadata about brokers, topics, partitions and their locations.<br> ZooKeeper is configured in <code>zookeeper.properties</code>.</p> <p><strong>Kafka broker</strong> is a single Kafka server.</p> <blockquote> <p>&quot;The broker receives messages from producers, assigns offsets to them, and commits the messages to storage on disk. 
It also services consumers, responding to fetch requests for partitions and responding with the messages that have been committed to disk.&quot; <sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup></p> </blockquote> <p>The broker is configured in <code>server.properties</code>. In this setup I have set <code>auto.create.topics.enable=false</code>. Setting this to false gives me control over the environment: as the name suggests, it disables the auto-creation of topics, which could otherwise lead to confusion.</p> <p><strong>Kafka connect worker</strong> allows us to take advantage of predefined connectors that enable the writing of messages to known external datastores from Kafka. The worker is a wrapper around a <a href="https://docs.confluent.io/current/clients/consumer.html">Kafka consumer</a>. A consumer is able to read messages from a topic partition using offsets. Offsets keep track of what has been read by a particular consumer or consumer group. (Kafka connect workers can also write to Kafka from datastores but I am not using this functionality in this instance). The connect worker is configured in <code>connect-distributed.properties</code>. I have defined the location of the plugins in this configuration file. Connector definitions are used to determine how to write to an external data source.</p> <h3 id="producertoinfluxdb">Producer to InfluxDB</h3> <p>I use <a href="https://github.com/dpkp/kafka-python">kafka-python</a> to stream the messages into Kafka. Within kafka-python there is a KafkaProducer that is intended to work in a similar way to the official <a href="https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html">java client</a>.</p> <p>I have created a <a href="https://docs.confluent.io/current/clients/producer.html">producer</a> for each message type (parameterised code). Although each producer reads the entire stream from the TCP/IP port, it only processes its assigned message type (wind or speed), thus increasing parallelism and therefore throughput.</p> <pre><code>producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         value_serializer=lambda v: json.dumps(v).encode('utf-8'))
producer.send(topic, json_str)
</code></pre> <p>I have created a topic per message type with a single partition. Using a single partition per topic guarantees I will consume messages in the order they arrive. There are other ways to increase the number of partitions and still maintain the read order, but for this use case a topic per message type seemed to make sense. I basically have optimised throughput (well enough for the number of messages I am trying to process).</p> <pre><code>kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wind-json
kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic speed-json
kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic gps-json
</code></pre> <p>When defining a topic you specify the <code>replication-factor</code> and the number of <code>partitions</code>.</p> <blockquote> <p>The topic-level configuration is replication.factor. At the broker level, you control the default.replication.factor for automatically created topics.
<sup class="footnote-ref"><a href="#fn1" id="fnref1:1">[1:1]</a></sup> (I have turned off the automatic creation of topics).</p> </blockquote> <p>The messages are consumed using Stream Reactor, which has an InfluxDB sink mechanism and writes directly to the measurements within a performance database I have created. The following parameters, showing the topics and the insert mechanism, are configured in <code><em>performance</em>.influxdb-sink.properties</code>.</p> <pre><code>topics=wind-json,speed-json,gps-json
connect.influx.kcql=INSERT INTO wind SELECT * FROM wind-json WITHTIMESTAMP sys_time();INSERT INTO speed SELECT * FROM speed-json WITHTIMESTAMP sys_time();INSERT INTO gps SELECT * FROM gps-json WITHTIMESTAMP sys_time()
</code></pre> <p>The following diagram shows the detail from producer to InfluxDB.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-02-at-21.22.40.png" alt=""></p> <p>If we now run the producers we get data streaming through the platform.</p> <p>Producer Python log showing JSON formatted messages:</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-22.28.25.png" alt=""></p> <p>The status of the consumers shows minor lag reading from two topics; the describe output also shows the current offsets for each consumer task and the partitions being consumed (if we had a cluster it would show multiple hosts):</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-22.27.57.png" alt=""></p> <p>Inspecting the InfluxDB measurements:<br> <img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-04-at-20.18.27.png" alt=""></p> <blockquote> <p>When inserting into a measurement in InfluxDB, if the measurement does not exist it gets created automatically. The datatypes of the fields are determined from the JSON object being inserted. I needed to adjust the creation of the JSON message to cast the values to floats, otherwise I ended up with the wrong types. This caused reporting issues in Grafana. This would be a good case for using Avro and Schema Registry to handle these definitions.</p> </blockquote> <p>The following gif shows Grafana displaying some of the wind and speed measurements using a D3 Gauge plugin with the producers running to the right of the dials.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/medium-dashboard-1.gif" alt=""></p> <h3 id="nextsteps">Next Steps</h3> <p>I'm now ready to do some real-life testing on our next sailing passage.</p> <p>In the next blog, I will look at making the setup more resilient to failure and how to monitor and automatically recover from some of these failures.
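</p> <p>In the meantime, the lag and offsets in the describe output above can be checked from the command line with the consumer groups tool. Assuming the sink's consumer group follows the usual Kafka Connect naming of connect- followed by the connector name (the exact group name depends on how the connector is configured), something like:</p> <pre><code>kafka-consumer-groups --bootstrap-server localhost:9092 --list
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group connect-influxdb-sink
</code></pre> <p>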
I will also introduce the WorldMap pannel to Grafana so I can plot the location the readings were taken and overlay tidal data.</p> <h3 id="references">References</h3> <hr class="footnotes-sep"> <section class="footnotes"> <ol class="footnotes-list"> <li id="fn1" class="footnote-item"><p><a href="https://www.confluent.io/resources/kafka-the-definitive-guide/">Kafka the definitive guide</a> <a href="#fnref1" class="footnote-backref">↩︎</a> <a href="#fnref1:1" class="footnote-backref">↩︎</a></p> </li> </ol> </section> </div> Paul Shilling 5b5a56f45000960018e69b3c Thu Jun 14 2018 09:18:34 GMT-0400 (EDT) Real-time Sailing Yacht Performance - Kafka (Part 2) https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-getting-started-part-2/ <p>In the last two blogs <a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Getting Started (Part 1)</a> and <a href="https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-stepping-back-a-bit-part-1-1/">Stepping back a bit (Part 1.1)</a> I looked at what data I could source from the boat's instrumentation and introduced some new hardware to the boat to support the analysis. </p> <p>Just to recap I am looking to create the yachts Polars with a view to improving our knowledge of her abilities (whether we can use this to improve our race performance is another matter). </p> <blockquote> <p>Polars give us a plot of the boat's speed given a true wind speed and angle. This, in turn, informs us of the optimal speed the boat could achieve at any particular angle to wind and wind speed.</p> </blockquote> <p><img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-11-at-10.01.38.png" style="width: 515px; height:640px"></p> <p>In the first blog I wrote a reader in Python that takes messages from a TCP/IP feed and writes the data to a file. The reader is able, using a hash key to validate each message (See <a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Getting Started (Part 1)</a>). I'm also converting valid messages into a JSON format so that I can push meaningful structured data downstream. In this blog, I'll cover the architecture and considerations around the setup of Kafka for this use case. I will not cover the installation of each component, there has been a lot written in this area. (We have some internal IP to help with configuration). I discuss the process I went through to get the data in real time displayed in a Grafana dashboard.</p> <h1 id="introducingkafka">Introducing Kafka</h1> <p>I have introduced Kafka into the architecture as a next step. </p> <h2 id="whykafka">Why Kafka?</h2> <p>I would like to be able to stream this data real time and don't want to build my own batch mechanism or create a publish/ subscribe model. With Kafka I don't need to check that messages have been successfully received and if there is a failure while consuming messages the consumers will keep track of what has been consumed. If a consumer fails it can be restarted and it will pick up where it left off (consumer offset stored in Kafka as a topic). In the future, I could scale out the platform and introduce some resilience through clustering and replication (this shouldn't be required for a while). 
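</p> <p>To illustrate the offset behaviour: a consumer that joins a named group has its progress stored back in Kafka, so restarting it carries on from the last committed offset rather than re-reading everything. A minimal kafka-python sketch (the group name here is illustrative) would be:</p> <pre><code>from kafka import KafkaConsumer
import json

# Consumers in the same group share the topic and commit their offsets back to Kafka,
# so a restarted consumer resumes from where the group left off.
consumer = KafkaConsumer('wind-json',
                         bootstrap_servers='localhost:9092',
                         group_id='wind-monitor',
                         auto_offset_reset='earliest',   # only used when no offset has been committed yet
                         enable_auto_commit=True,
                         value_deserializer=lambda m: json.loads(m.decode('utf-8')))

for message in consumer:
    print('{} {} {}'.format(message.topic, message.offset, message.value))
</code></pre> <p>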
Kafka therefore is saving me a lot of manual engineering and will support future growth (should I come into money and am able to afford more sensors for the boat).</p> <h2 id="highlevelarchitecture">High level architecture</h2> <p>Let's look at the high-level components and how they fit together. Firstly I have the instruments transmitting on wireless TCP/IP and these messages are read using my Python I wrote earlier in the year. </p> <p>I have enhanced the Python I wrote to read and translate the messages and instead of writing to a file I stream the JSON messages to a topic in Kafka. </p> <p>Once the messages are in Kafka I use Kafka Connect to stream the data into <a href="https://www.influxdata.com/time-series-platform/influxdb/">InfluxDB</a>. The messages are written to topic-specific measurements (tables in InfluxdDB).</p> <p><a href="https://grafana.com/">Grafana</a> is used to display incoming messages in real time. </p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-17.18.23.png" alt=""></p> <h3 id="kafkacomponents">Kafka components</h3> <p>I am running the application on a MacBook Pro. Basically a single node instance with <a href="https://zookeeper.apache.org/">zookeeper</a>, Kafka broker and a Kafka connect worker. This is the minimum setup with very little resilience. </p> <p><img alt="Image Description" src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-21.29.48.png" style="width: 400px; height:300px"></p> <h3 id="insummary">In summary</h3> <p><strong>ZooKeeper</strong> is an open-source server that enables distributed coordination of configuration information. In the Kafka architecture ZooKeeper stores metadata about brokers, topics, partitions and their locations. ZooKeeper is configured in <code>zookeeper.properties</code>.</p> <p><strong>Kafka broker</strong> is a single Kafka server. </p> <blockquote> <p>"The broker receives messages from producers, assigns offsets to them, and commits the messages to storage on disk. It also services consumers, responding to fetch requests for partitions and responding with the messages that have been committed to disk." <sup id="fnref:1"><a href="https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-getting-started-part-2/#fn:1" rel="footnote">1</a></sup></p> </blockquote> <p>The broker is configured in <code>server.properties</code>. In this setup I have set <code>auto.create.topics.enabled=false</code>. Setting this to false gives me control over the environment as the name suggests it disables the auto-creation of a topic which in turn could lead to confusion.</p> <p><strong>Kafka connect worker</strong> allows us to take advantage of predefined connectors that enable the writing of messages to known external datastores from Kafka. The worker is a wrapper around a <a href="https://docs.confluent.io/current/clients/consumer.html">Kafka consumer</a>. A consumer is able to read messages from a topic partition using offsets. Offsets keep track of what has been read by a particular consumer or consumer group. (Kafka connect workers can also write to Kafka from datastores but I am not using this functionality in this instance). The connect worker is configured in <code>connect-distributed-properties</code>. I have defined the location of the plugins in this configuration file. 
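</p> <p>For reference, a distributed worker configuration typically contains entries along these lines (the values here are illustrative rather than lifted from my setup):</p> <pre><code>bootstrap.servers=localhost:9092
group.id=connect-cluster
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
status.storage.topic=connect-status
plugin.path=/opt/kafka/connectors
</code></pre> <p>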
Connector definitions are used to determine how to write to an external data source.</p> <h3 id="producertoinfluxdb">Producer to InfluxDB</h3> <p>I use <a href="https://github.com/dpkp/kafka-python">kafka-python</a> to stream the messages into kafka. Within kafka-python there is a KafkaProducer that is intended to work in a similar way to the official <a href="https://kafka-python.readthedocs.io/en/master/apidoc/KafkaProducer.html">java client</a>. </p> <p>I have created a <a href="https://docs.confluent.io/current/clients/producer.html">producer</a> for each message type (parameterised code). Although each producer reads the entire stream from the TCP/IP port it only processes it's assigned message type (wind or speed) this increasing parallelism and therefore throughput.</p> <pre><code> producer = KafkaProducer(bootstrap_servers='localhost:9092' , value_serializer=lambda v: json.dumps(v).encode('utf-8')) producer.send(topic, json_str) </code></pre> <p>I have created a topic per message type with a single partition. Using a single partition per topic guarantees I will consume messages in the order they arrive. There are other ways to increase the number of partitions and still maintain the read order but for this use case a topic per message type seemed to make sense. I basically have optimised throughput (well enough for the number of messages I am trying to process).</p> <pre><code>kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wind-json kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic speed-json kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic gps-json </code></pre> <p>When defining a topic you specify the <code>replaication-factor</code> and the number of <code>partitions</code>. </p> <blockquote> <p>The topic-level configuration is replication.factor. At the broker level, you control the default.replication.factor for automatically created topics. <sup id="fnref:1"><a href="https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-getting-started-part-2/#fn:1" rel="footnote">1</a></sup> (I have turned off the automatic creation of topics).</p> </blockquote> <p>The messages are consumed using Stream reactor which has an InfluxDB sink mechanism and writes directly to the measurements within a performance database I have created. The following parameters showing the topics and inset mechanism are configured in <code><em>performance</em>.influxdb-sink.properties</code>. </p> <pre><code>topics=wind-json,speed-json,gps-json connect.influx.kcql=INSERT INTO wind SELECT * FROM wind-json WITHTIMESTAMP sys_time();INSERT INTO speed SELECT * FROM speed-json WITHTIMESTAMP sys_time();INSERT INTO gps SELECT * FROM gps-json WITHTIMESTAMP sys_time()</code></pre> <p>The following diagram shows the detail from producer to InfluxDB. 
</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-02-at-21.22.40.png" alt=""></p> <p>If we now run the producers we get data streaming through the platform.</p> <p>Producer Python log showing JSON formatted messages:</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-22.28.25.png" alt=""></p> <p>Status of consumers show minor lag reading from two topics, the describe also shows the current offsets for each consumer task and partitions being consumed (if we had a cluster it would show multiple hosts):</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Screen-Shot-2018-05-31-at-22.27.57.png" alt=""></p> <p>Inspecting the InfluxDB measurements: <br> <img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-04-at-20.18.27.png" alt=""></p> <blockquote> <p>When inserting into a measurement in InfluxDB if the measurement does not exist it gets created automatically. The datatypes of the fields are determined from the JSON object being inserted. I needed to adjust the creation of the JSON message to cast the values to floats otherwise I ended up with the wrong types. This caused reporting issues in Grafana. This would be a good case for using Avro and Schema Registry to handle these definitions. </p> </blockquote> <p>The following gif shows Grafana displaying some of the wind and speed measurements using a D3 Gauge plugin with the producers running to the right of the dials. </p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/medium-dashboard-1.gif" alt=""></p> <h3 id="nextsteps">Next Steps</h3> <p>I'm now ready to do some real-life testing on our next sailing passage.</p> <p>In the next blog, I will look at making the setup more resilient to failure and how to monitor and automatically recover from some of these failures. I will also introduce the WorldMap pannel to Grafana so I can plot the location the readings were taken and overlay tidal data.</p> <h3 id="references">References</h3> <div class="footnotes"><ol><li class="footnote" id="fn:1"><p><a href="https://www.confluent.io/resources/kafka-the-definitive-guide/">Kafka the definitive guide</a> <a href="https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-getting-started-part-2/#fnref:1" title="return to article">↩</a></p></li></ol></div> Paul Shilling 464901ab-7b98-473f-9a94-d2160b01c91a Thu Jun 14 2018 09:18:34 GMT-0400 (EDT) OAC - Thoughts on Moving to the Cloud https://www.rittmanmead.com/blog/2018/06/oac-thoughts-on-moving-to-the-cloud/ <p>Last week, I spent a couple of days with Oracle at Thames Valley Park and this presented me with a perfect opportunity to sit down and get to grips with the full extent of the Oracle Analytics Cloud (OAC) suite...without having to worry about client requirements or project deadlines!</p> <p>As a company, Rittman Mead already has solid experience of OAC, but my personal exposure has been limited to presentations, product demonstrations, reading the various postings in the blog community and my existing experiences of Data Visualisation and BI cloud services (DVCS and BICS respectively). 
You’ll find Francesco’s <a href="https://www.rittmanmead.com/blog/2017/04/oracle-analytics-cloud-product-overview/">post</a> a good starting place if you need an overview of OAC and how it differs (or aligns) to Data Visualisation and BI Cloud Services.</p> <p>So, having spent some time looking at the overall suite and, more importantly, trying to interpret what it could mean for organisations thinking about making a move to the cloud, here are my top three takeaways: <br> <br></p> <h2 id="cloudscomeindifferentshapesandflavours">Clouds Come In Different Shapes and Flavours</h2> <p>Two of the main benefits that a move to the cloud offers are simplification in platform provisioning and an increase in flexibility, being able to ramp up or scale down resources at will. These both comes with a potential cost benefit, depending on your given scenario and requirement. The first step is understanding the different options in the OAC licensing and feature matrix.</p> <p>First, we need to draw a distinction between Analytics Cloud and the Autonomous Analytics Cloud (interestingly, both options point to the same page on <a href="https://cloud.oracle.com/en_US/oac">cloud.oracle.com</a>, which makes things immediately confusing!). In a nutshell though, the distinction comes down to who takes responsibility for the service management: Autonomous Analytics Cloud is managed by Oracle, whilst Analytics Cloud is managed by yourself. It’s interesting to note that the Autonomous offering is marginally cheaper.</p> <p>Next, Oracle have chosen to extend their <em>BYOL</em> (Bring Your Own License) option from their IaaS services to now incorporate PaaS services. This means that if you have existing licenses for the on-premise software, then you are able to take advantage of what appears to be a significantly discounted cost. Clearly, this is targeted to incentivise existing Oracle customers to make the leap into the Cloud, and should be considered against your ongoing annual support fees.</p> <p>Since the start of the year, Analytics Cloud now comes in three different versions, with the <strong>Standard</strong> and <strong>Enterprise</strong> editions now being separated by the new <strong>Data Lake</strong> edition. The important things to note are that (possibly confusingly) Essbase is now incorporated into the Data Lake edition of the Autonomous Analytics Cloud and that for the full enterprise capability you have with OBIEE, you will need the Enterprise edition. Each version inherits the functionality of its preceding version: Enterprise edition gives you everything in the Data Lake edition; Data Lake edition incorporates everything in the Standard edition.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-12-at-21.14.40.png" alt="alt"></p> <p>Finally, it’s worth noting that OAC aligns to the Universal Credit consumption model, whereby the cost is determined based on the size and shape of the cloud that you need. Services can be purchased as <strong>Pay as You Go</strong> or <strong>Monthly Flex</strong> options (with differential costing to match). The PAYG model is based on hourly consumption and is paid for in arrears, making it the obvious choice for short term prototyping or POC activities. Conversely, the Monthly Flex model is paid in advance and requires a minimum 12 month investment and therefore makes sense for full scale implementations. Then, the final piece of the jigsaw comes with the shape of the service you consume. 
This is measured in OCPU’s (Oracle Compute Units) and the larger your memory requirements, the more OCPU’s you consume. <br> <br></p> <h2 id="whereyouputyourdatawillalwaysmatter">Where You Put Your Data Will Always Matter</h2> <p>Moving your analytics platform into the cloud may make a lot of sense and could therefore be a relatively simple decision to make. However, the question of where your data resides is a more challenging subject, given the sensitivities and increasing legislative constraints that exist around where your data can or should be stored. The answer to that question will influence the performance and data latency you can expect from your analytics platform.</p> <p>OAC is architected to be flexible when it comes to its data sources and consequently the options available for data access are pretty broad. At a high level, your choices are similar to those you would have when implementing on-premise, namely: </p> <ul> <li>perform ELT processing to transform and move the data (into the cloud); </li> <li>replicate data from source to target (in the cloud) or;</li> <li>query data sources via direct access. </li> </ul> <p>These are supplemented by a fourth option to use the inbuilt Data Connectors available in OAC to connect to cloud or on-premise databases, other proprietary platforms or any other source accessible via JDBC. This is probably a decent path for exploratory data usage within DV, but I’m not sure it would always make the best long term option.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-12-at-21.16.16.png" alt="alt"></p> <p>Unsurprisingly, with the breadth of options comes a spectrum of tooling that can be used for shifting your data around and it is important to note that depending on your approach, additional cloud services may or may not be required. </p> <p>For accessing data directly at its source, the preferred route seems to be to use RDC (Remote Data Connector), although it is worth noting that support is limited to Oracle (including OLAP), SQL Server, Teradata or DB2 databases. Also, be aware that RDC operates within WebLogic Server and so this will be needed within the on-premise network. </p> <p>Data replication is typically achieved using Data Sync (the reincarnation of the DAC, which OBIA implementers will already be familiar with), although it is worth mentioning that there are other routes that could be taken, such as APEX or SQL Developer, depending on the data volumes and latency you have to play with.</p> <p>Classic ELT processing can be achieved via Oracle Data Integrator (either the Cloud Service, a traditional on-premise implementation or a hybrid-model).</p> <p>Ultimately, due care and attention needs to be taken when deciding on your data architecture as this will have a fundamental effect on the simplicity with which data can be accessed and interpreted, the query performance achieved and the data latency built into your analytics. <br> <br></p> <h2 id="dataflowsmakeformodernanalyticssimplification">Data Flows Make For Modern Analytics Simplification</h2> <p>A while back, I wrote a post titled <a href="https://www.rittmanmead.com/blog/2017/07/enabling-a-modern-analytics-platform/">Enabling a Modern Analytics Platform</a> in which I attempted to describe ways that Mode 1 (departmental) and Mode 2 (enterprise) analytics could be built out to support each other, as opposed to undermining one another. 
One of the key messages I made was the importance of having an effective mechanism for transitioning your Mode 1 outputs back into Mode 2 as seamlessly as possible. (The same is true in reverse for making enterprise data available as an Mode 1 input.)</p> <p>One of the great things about OAC is how it serves to simplify this transition. Users are able to create analytic content based on data sourced from a broad range of locations: at the simplest level, <strong>Data Sets</strong> can be built from flat files or via one of the available <strong>Data Connectors</strong> to relational, NoSQL, proprietary database or Essbase sources. Moreover, enterprise curated metadata (via RPD lift-and-shift from an on-premise implementation) or analyst developed Subject Areas can be exposed. These sources can be ‘mashed’ together directly in a DV project or, for more complex or repeatable actions, <strong>Data Flows</strong> can be created to build Data Sets. Data Flows are pretty powerful, not only allowing users to join disparate data but also perform some useful data preparation activities, ranging from basic filtering, aggregation and data manipulation actions to more complex sentiment analysis, forecasting and even some machine learning modelling features. Importantly, Data Flows can be set to output their results to disk, either written to a Data Set or even to a database table and they can be scheduled for repetitive refresh.</p> <p>For me, one of the most important things about the Data Flows feature is that it provides a clear and understandable interface which shows the sequencing of each of the data preparation stages, providing valuable information for any subsequent reverse engineering of the processing back into the enterprise data architecture.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-12-at-21.17.14.png" alt="alt"> <br></p> <p>In summary, there are plenty of exciting and innovative things happening with Oracle Analytics in the cloud and as time marches on, the case for moving to the cloud in one shape or form will probably get more and more compelling. However, beyond a strategic decision to ‘Go Cloud’, there are many options and complexities that need to be addressed in order to make a successful start to your journey - some technical, some procedural and some organisational. Whilst a level of planning and research will undoubtedly smooth the path, the great thing about the cloud services is that they are comparatively cheap and easy to initiate, so getting on and building a prototype is always going to be a good, exploratory starting point. </p> Mike Vickers cc14df9e-cfd3-4920-8df0-8b803be22644 Tue Jun 12 2018 13:43:00 GMT-0400 (EDT) OAC - Thoughts on Moving to the Cloud https://www.rittmanmead.com/blog/2018/06/oac-thoughts-on-moving-to-the-cloud/ <div class="kg-card-markdown"><p>Last week, I spent a couple of days with Oracle at Thames Valley Park and this presented me with a perfect opportunity to sit down and get to grips with the full extent of the Oracle Analytics Cloud (OAC) suite...without having to worry about client requirements or project deadlines!</p> <p>As a company, Rittman Mead already has solid experience of OAC, but my personal exposure has been limited to presentations, product demonstrations, reading the various postings in the blog community and my existing experiences of Data Visualisation and BI cloud services (DVCS and BICS respectively). 
You’ll find Francesco’s <a href="https://www.rittmanmead.com/blog/2017/04/oracle-analytics-cloud-product-overview/">post</a> a good starting place if you need an overview of OAC and how it differs (or aligns) to Data Visualisation and BI Cloud Services.</p> <p>So, having spent some time looking at the overall suite and, more importantly, trying to interpret what it could mean for organisations thinking about making a move to the cloud, here are my top three takeaways:<br> <br></p> <h2 id="cloudscomeindifferentshapesandflavours">Clouds Come In Different Shapes and Flavours</h2> <p>Two of the main benefits that a move to the cloud offers are simplification in platform provisioning and an increase in flexibility, being able to ramp up or scale down resources at will. These both comes with a potential cost benefit, depending on your given scenario and requirement. The first step is understanding the different options in the OAC licensing and feature matrix.</p> <p>First, we need to draw a distinction between Analytics Cloud and the Autonomous Analytics Cloud (interestingly, both options point to the same page on <a href="https://cloud.oracle.com/en_US/oac">cloud.oracle.com</a>, which makes things immediately confusing!). In a nutshell though, the distinction comes down to who takes responsibility for the service management: Autonomous Analytics Cloud is managed by Oracle, whilst Analytics Cloud is managed by yourself. It’s interesting to note that the Autonomous offering is marginally cheaper.</p> <p>Next, Oracle have chosen to extend their <em>BYOL</em> (Bring Your Own License) option from their IaaS services to now incorporate PaaS services. This means that if you have existing licenses for the on-premise software, then you are able to take advantage of what appears to be a significantly discounted cost. Clearly, this is targeted to incentivise existing Oracle customers to make the leap into the Cloud, and should be considered against your ongoing annual support fees.</p> <p>Since the start of the year, Analytics Cloud now comes in three different versions, with the <strong>Standard</strong> and <strong>Enterprise</strong> editions now being separated by the new <strong>Data Lake</strong> edition. The important things to note are that (possibly confusingly) Essbase is now incorporated into the Data Lake edition of the Autonomous Analytics Cloud and that for the full enterprise capability you have with OBIEE, you will need the Enterprise edition. Each version inherits the functionality of its preceding version: Enterprise edition gives you everything in the Data Lake edition; Data Lake edition incorporates everything in the Standard edition.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-12-at-21.14.40.png" alt="alt"></p> <p>Finally, it’s worth noting that OAC aligns to the Universal Credit consumption model, whereby the cost is determined based on the size and shape of the cloud that you need. Services can be purchased as <strong>Pay as You Go</strong> or <strong>Monthly Flex</strong> options (with differential costing to match). The PAYG model is based on hourly consumption and is paid for in arrears, making it the obvious choice for short term prototyping or POC activities. Conversely, the Monthly Flex model is paid in advance and requires a minimum 12 month investment and therefore makes sense for full scale implementations. Then, the final piece of the jigsaw comes with the shape of the service you consume. 
This is measured in OCPU’s (Oracle Compute Units) and the larger your memory requirements, the more OCPU’s you consume.<br> <br></p> <h2 id="whereyouputyourdatawillalwaysmatter">Where You Put Your Data Will Always Matter</h2> <p>Moving your analytics platform into the cloud may make a lot of sense and could therefore be a relatively simple decision to make. However, the question of where your data resides is a more challenging subject, given the sensitivities and increasing legislative constraints that exist around where your data can or should be stored. The answer to that question will influence the performance and data latency you can expect from your analytics platform.</p> <p>OAC is architected to be flexible when it comes to its data sources and consequently the options available for data access are pretty broad. At a high level, your choices are similar to those you would have when implementing on-premise, namely:</p> <ul> <li>perform ELT processing to transform and move the data (into the cloud);</li> <li>replicate data from source to target (in the cloud) or;</li> <li>query data sources via direct access.</li> </ul> <p>These are supplemented by a fourth option to use the inbuilt Data Connectors available in OAC to connect to cloud or on-premise databases, other proprietary platforms or any other source accessible via JDBC. This is probably a decent path for exploratory data usage within DV, but I’m not sure it would always make the best long term option.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-12-at-21.16.16.png" alt="alt"></p> <p>Unsurprisingly, with the breadth of options comes a spectrum of tooling that can be used for shifting your data around and it is important to note that depending on your approach, additional cloud services may or may not be required.</p> <p>For accessing data directly at its source, the preferred route seems to be to use RDC (Remote Data Connector), although it is worth noting that support is limited to Oracle (including OLAP), SQL Server, Teradata or DB2 databases. Also, be aware that RDC operates within WebLogic Server and so this will be needed within the on-premise network.</p> <p>Data replication is typically achieved using Data Sync (the reincarnation of the DAC, which OBIA implementers will already be familiar with), although it is worth mentioning that there are other routes that could be taken, such as APEX or SQL Developer, depending on the data volumes and latency you have to play with.</p> <p>Classic ELT processing can be achieved via Oracle Data Integrator (either the Cloud Service, a traditional on-premise implementation or a hybrid-model).</p> <p>Ultimately, due care and attention needs to be taken when deciding on your data architecture as this will have a fundamental effect on the simplicity with which data can be accessed and interpreted, the query performance achieved and the data latency built into your analytics.<br> <br></p> <h2 id="dataflowsmakeformodernanalyticssimplification">Data Flows Make For Modern Analytics Simplification</h2> <p>A while back, I wrote a post titled <a href="https://www.rittmanmead.com/blog/2017/07/enabling-a-modern-analytics-platform/">Enabling a Modern Analytics Platform</a> in which I attempted to describe ways that Mode 1 (departmental) and Mode 2 (enterprise) analytics could be built out to support each other, as opposed to undermining one another. 
One of the key messages I made was the importance of having an effective mechanism for transitioning your Mode 1 outputs back into Mode 2 as seamlessly as possible. (The same is true in reverse for making enterprise data available as an Mode 1 input.)</p> <p>One of the great things about OAC is how it serves to simplify this transition. Users are able to create analytic content based on data sourced from a broad range of locations: at the simplest level, <strong>Data Sets</strong> can be built from flat files or via one of the available <strong>Data Connectors</strong> to relational, NoSQL, proprietary database or Essbase sources. Moreover, enterprise curated metadata (via RPD lift-and-shift from an on-premise implementation) or analyst developed Subject Areas can be exposed. These sources can be ‘mashed’ together directly in a DV project or, for more complex or repeatable actions, <strong>Data Flows</strong> can be created to build Data Sets. Data Flows are pretty powerful, not only allowing users to join disparate data but also perform some useful data preparation activities, ranging from basic filtering, aggregation and data manipulation actions to more complex sentiment analysis, forecasting and even some machine learning modelling features. Importantly, Data Flows can be set to output their results to disk, either written to a Data Set or even to a database table and they can be scheduled for repetitive refresh.</p> <p>For me, one of the most important things about the Data Flows feature is that it provides a clear and understandable interface which shows the sequencing of each of the data preparation stages, providing valuable information for any subsequent reverse engineering of the processing back into the enterprise data architecture.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/06/Screen-Shot-2018-06-12-at-21.17.14.png" alt="alt"><br> <br></p> <p>In summary, there are plenty of exciting and innovative things happening with Oracle Analytics in the cloud and as time marches on, the case for moving to the cloud in one shape or form will probably get more and more compelling. However, beyond a strategic decision to ‘Go Cloud’, there are many options and complexities that need to be addressed in order to make a successful start to your journey - some technical, some procedural and some organisational. Whilst a level of planning and research will undoubtedly smooth the path, the great thing about the cloud services is that they are comparatively cheap and easy to initiate, so getting on and building a prototype is always going to be a good, exploratory starting point.</p> </div> Mike Vickers 5b5a56f45000960018e69b3f Tue Jun 12 2018 13:43:00 GMT-0400 (EDT) Why DevOps Matters for Enterprise BI https://www.rittmanmead.com/blog/2018/06/why-devops-matters-for-enterprise-bi/ <div class="kg-card-markdown"><img src="https://www.rittmanmead.com/blog/content/images/2018/06/DevOps.png" alt="Why DevOps Matters for Enterprise BI"><p>Why are people frustrated with their existing enterprise BI tools such as OBIEE? My view is because it costs too much to produce relevant content. I think some of this is down to the tools themselves, and some of it is down to process.</p> <p>Starting with the tools, they are not “bad” tools; the traditional licensing model can be expensive in today’s market, and traditional development methods are time-consuming and hence expensive. 
The vendor’s response is to move to the cloud and to highlight cost savings that can be made by having a managed platform. Oracle Analytics Cloud (OAC) is essentially OBIEE installed on Oracle’s servers in Oracle’s data centres with Oracle providing your system administration, coupled with the ability to flex your licensing on a monthly or annual basis.</p> <p>Cloud does give organisations the potential for more agility. Provisioning servers can no longer hold up the start of a project, and if a system needs to increase capacity, then more CPUs or nodes can be added. This latter case is a bit murky due to the cost implications and the option to try and resolve performance issues through query efficiency on the database.</p> <p>I don’t think this solves the problem. Tools that provide reports and dashboards are becoming more commoditised, up and coming vendors and platform providers are offering the service for a fraction of the cost of the traditional vendors. They may lack some of the enterprise features like open security models; however, these are an area that platform providers are continually improving. Over the last 10 years, Oracle's focus for OBIEE has been on more on integration than innovation. Oracle DV was a significant change; however, there is a danger that Oracle lost the first-mover advantage to tools such as Tableau and QlikView. Additionally, some critical features like lineage, software lifecycle development, versioning and process automation are not built in to OBIEE and worse still, the legacy design and architecture of the product often hinders these.</p> <p>So this brings me back round to process. Defining “good” processes and having tools to support them is one of the best ways you can keep your BI tools relevant to the business by reducing the friction in generating content.</p> <p>What is a “good” process? Put simply, a process that reduces the time between the identification of a business need and the realising it with zero impact on existing components of the system. Also, a “good” process should provide visibility of any design, development and testing, plus documentation of changes, typically including lineage in a modern BI system. Continuous integration is the Holy Grail.</p> <p>This why DevOps matters. Using automated migration across environments, regression tests, automatically generated documentation in the form of lineage, native support for version control systems, supported merge processes and ideally a scripting interface or API to automate the generation of repetitive tasks such as changing the data type of a group of fields system-wide, can dramatically reduce the gap from idea to realisation.</p> <p>So, I would recommend that when looking at your enterprise BI system, you not only consider the vendor, location and features but also focus on the potential for process optimisation and automation. Automation could be something that the vendor builds into the tool, or you may need to use accelerators or software provided by a third party. 
Over the next few weeks, we will be publishing some examples and case studies of how our BI and DI Developer Toolkits have helped clients and enabled them to automate some or all of the BI software development cycle, reducing the time to release new features and increasing the confidence and robustness of the system.</p> </div> Jon Mead 5b5a56f45000960018e69b3e Tue Jun 12 2018 10:44:25 GMT-0400 (EDT) Why DevOps Matters for Enterprise BI https://www.rittmanmead.com/blog/2018/06/why-devops-matters-for-enterprise-bi/ <img src="https://www.rittmanmead.com/blog/content/images/2018/06/DevOps.png" alt="Why DevOps Matters for Enterprise BI"><p>Why are people frustrated with their existing enterprise BI tools such as OBIEE? My view is because it costs too much to produce relevant content. I think some of this is down to the tools themselves, and some of it is down to process.</p> <p>Starting with the tools, they are not “bad” tools; the traditional licensing model can be expensive in today’s market, and traditional development methods are time-consuming and hence expensive. The vendor’s response is to move to the cloud and to highlight cost savings that can be made by having a managed platform. Oracle Analytics Cloud (OAC) is essentially OBIEE installed on Oracle’s servers in Oracle’s data centres with Oracle providing your system administration, coupled with the ability to flex your licensing on a monthly or annual basis.</p> <p>Cloud does give organisations the potential for more agility. Provisioning servers can no longer hold up the start of a project, and if a system needs to increase capacity, then more CPUs or nodes can be added. This latter case is a bit murky due to the cost implications and the option to try and resolve performance issues through query efficiency on the database.</p> <p>I don’t think this solves the problem. Tools that provide reports and dashboards are becoming more commoditised, up and coming vendors and platform providers are offering the service for a fraction of the cost of the traditional vendors. They may lack some of the enterprise features like open security models; however, these are an area that platform providers are continually improving. Over the last 10 years, Oracle's focus for OBIEE has been on more on integration than innovation. Oracle DV was a significant change; however, there is a danger that Oracle lost the first-mover advantage to tools such as Tableau and QlikView. Additionally, some critical features like lineage, software lifecycle development, versioning and process automation are not built in to OBIEE and worse still, the legacy design and architecture of the product often hinders these.</p> <p>So this brings me back round to process. Defining “good” processes and having tools to support them is one of the best ways you can keep your BI tools relevant to the business by reducing the friction in generating content.</p> <p>What is a “good” process? Put simply, a process that reduces the time between the identification of a business need and the realising it with zero impact on existing components of the system. Also, a “good” process should provide visibility of any design, development and testing, plus documentation of changes, typically including lineage in a modern BI system. Continuous integration is the Holy Grail.</p> <p>This why DevOps matters. 
Using automated migration across environments, regression tests, automatically generated documentation in the form of lineage, native support for version control systems, supported merge processes and ideally a scripting interface or API to automate the generation of repetitive tasks such as changing the data type of a group of fields system-wide, can dramatically reduce the gap from idea to realisation.</p> <p>So, I would recommend that when looking at your enterprise BI system, you not only consider the vendor, location and features but also focus on the potential for process optimisation and automation. Automation could be something that the vendor builds into the tool, or you may need to use accelerators or software provided by a third party. Over the next few weeks, we will be publishing some examples and case studies of how our BI and DI Developer Toolkits have helped clients and enabled them to automate some or all of the BI software development cycle, reducing the time to release new features and increasing the confidence and robustness of the system.</p> Jon Mead b2e3a0d4-ca3b-42d4-be2f-e06224d0466e Tue Jun 12 2018 10:44:25 GMT-0400 (EDT) Introducing ODTUG Kscope18 Livestream Sessions https://www.odtug.com/p/bl/et/blogaid=809&source=1 If you can't make it to ODTUG Kscope18, you can still participate from home! Check out the list of sessions we're bringing to you live from Orlando, Florida! ODTUG https://www.odtug.com/p/bl/et/blogaid=809&source=1 Mon Jun 11 2018 13:52:33 GMT-0400 (EDT) Real-time Sailing Yacht Performance - stepping back a bit (Part 1.1) https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-stepping-back-a-bit-part-1-1/ <p>Slight change to the planned article. At the end of my analysis in <a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Part 1</a> I discovered I was missing a number of key messages. It turns out that not all the SeaTalk messages from the integrated instruments were being translated to an NMEA format and therefore not being sent wirelessly from the AIS hub. I didn't really want to introduce another source of data directly from the instruments as it would involve hard wiring the instruments to the laptop and then translating a different format of a message (SeaTalk). I decided to spend on some hardware (any excuse for new toys). I purchased a SeaTalk to NMEA <a href="http://digitalyacht.co.uk/product/st-nmea-iso/">converter</a> from DigitalYachts (discounted at the London boat show I'm glad to say).</p> <p>This article is about the installation of that hardware and the result (hence Part 1.1), not our usual type of blog. You never know it may be of interest to somebody out there and this is a real-life data issue! Don't worry it will be short and more of an insight into Yacht wiring than anything.</p> <p>The next blog will be very much back on track. Looking at Kafka in the architecture.</p> <h2 id="theexistingwiring">The existing wiring</h2> <p>The following image shows the existing setup, what's behind the panels and how it links to the instrument architecture documented in <a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Part 1</a>. No laughing at the wiring spaghetti - I stripped out half a tonne of cable last year so this is an improvement. Most of the technology lives near the chart table and we have access to the navigation lights, cabin lighting, battery sensors and <a href="https://en.wikipedia.org/wiki/Digital_selective_calling">DSC VHF</a>. 
The top left image also shows a spare GPS (Garmin) and far left an <a href="http://www.epirb.com/">EPIRB</a>.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/02/chart-table.png" alt=""></p> <h2 id="approach">Approach</h2> <p>I wanted to make sure I wasn't breaking anything by adding the new hardware so followed the approach we use as software engineers. Check before, during and after any changes enabling us to narrow down the point errors are introduced. To help with this I create a little bit of Python that reads the messages and lets me know the unique message types, the total number of messages and the number of messages in error.</p> <pre><code> import json import sys #DEF Function to test message def is_message_valid (orig_line): ........ [Function same code described in Part 1] #main body f = open("/Development/step_1.log", "r") valid_messages = 0 invalid_messages = 0 total_messages = 0 my_list = [""] #process file main body for line in f: orig_line = line if is_message_valid(orig_line): valid_messages = valid_messages + 1 #look for wind message #print "message valid" if orig_line[0:1] == "$": if len(my_list) == 0: #print "ny list is empty" my_list.insert(0,orig_line[0:6]) else: #print orig_line[0:5] my_list.append(orig_line[0:6]) #print orig_line[20:26] else: invalid_messages = invalid_messages + 1 total_messages = total_messages + 1 new_list = list(set(my_list)) i = 0 while i < len(new_list): print(new_list[i]) i += 1 #Hight tech report print "Summary" print "#######" print "valid messages -> ", valid_messages print "invalid messages -> ", invalid_messages print "total mesages -> ", total_messages f.close() </code></pre> <p>For each of the steps, I used nc to write the output to a log file and then use the Python to analyse the log. I log about ten minutes of messages each step although I have to confess to shortening the last test as I was getting very cold.</p> <pre><code>nc -l 192.168.1.1 2000 > step_x.log</code></pre> <p>While spooling the message I artificially generate some speed data by spinning the wheel of the speedo. The image below shows the speed sensor and where it normally lives (far right image). The water comes in when you take out the sensor as it temporarily leaves a rather large hole in the bottom of the boat, don't be alarmed by the little puddle you can see. </p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/02/speedo.png" alt=""></p> <h3 id="step1">Step 1;</h3> <p>I spool and analyse about ten minutes of data without making any changes to the existing setup.</p> <p>The existing setup takes data directly from the back of a Raymarine instrument seen below and gets linked into the AIS hub.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/02/current-setup.png" alt=""></p> <h3 id="results">Results;</h3> <pre><code> $AITXT -> AIS (from AIS hub) $GPRMC -> GPS (form AIS hub) $GPGGA $GPGLL $GPGBS $IIDBT -> Depth sensor $IIMTW -> Sea temperature sensor $IIMWV -> Wind speed Summary ####### valid messages -> 2129 invalid messages -> 298 total mesages -> 2427 12% error </code></pre> <h3 id="step2">Step 2;</h3> <p>I disconnect the NMEA interface between the AIS hub and the integrated instruments. 
So in the diagram above I disconnect all four NMEA wires from the back of the instrument.</p> <p>I observe the Navigation display of the integrated instruments no longer displays any GPS information (this is expected as the only GPS messages I have are coming from the AIS hub).</p> <h3 id="results">Results;</h3> <pre><code> $AITXT -> AIS (from AIS hub) $GPRMC -> GPS (form AIS hub) $GPGGA $GPGLL $GPGBS No $II messages as expected Summary ####### valid messages -> 3639 invalid messages -> 232 total mesages -> 3871 6% error </code></pre> <h3 id="step3">Step 3;</h3> <p>I wire in the new hardware both NMEA in and out then directly into the course computer. </p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/02/wiring-it-up.png" alt=""></p> <h3 id="results">Results;</h3> <pre><code> $AITXT -> AIS (from AIS hub) $GPGBS -> GPS messages $GPGGA $GPGLL $GPRMC $IIMTW -> Sea temperature sensor $IIMWV -> Wind speed $IIVHW -> Heading & Speed $IIRSA -> Rudder Angle $IIHDG -> Heading $IIVLW -> Distance travelled Summary ####### valid messages -> 1661 invalid messages -> 121 total mesages -> 1782 6.7% error </code></pre> <h2 id="conclusion">Conclusion;</h2> <p>I get all the messages I am after (for now) the hardware seems to be working.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/02/Screen-Shot-2018-02-06-at-21.24.38.png" alt=""></p> <p>Now to put all the panels back in place!</p> <p>In the next article, I will get back to technology and the use of Kafka in the architecture.</p> <p><a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Real-time Sailing Yacht Performance - Getting Started (Part 1)</a></p> <p><a href="https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-getting-started-part-2/">Real-time Sailing Yacht Performance - Kafka (Part 2)</a></p> Paul Shilling bb2cf79b-56e7-4f26-a8f1-61029265b293 Mon Jun 11 2018 08:20:33 GMT-0400 (EDT) Real-time Sailing Yacht Performance - stepping back a bit (Part 1.1) https://www.rittmanmead.com/blog/2018/06/real-time-sailing-yacht-performance-stepping-back-a-bit-part-1-1/ <div class="kg-card-markdown"><p>Slight change to the planned article. At the end of my analysis in <a href="https://www.rittmanmead.com/blog/2018/01/real-time-yacht-performance/">Part 1</a> I discovered I was missing a number of key messages. It turns out that not all the SeaTalk messages from the integrated instruments were being translated to an NMEA format and therefore not being sent wirelessly from the AIS hub. I didn't really want to introduce another source of data directly from the instruments as it would involve hard wiring the instruments to the laptop and then translating a different format of a message (SeaTalk). I decided to spend on some hardware (any excuse for new toys). I purchased a SeaTalk to NMEA <a href="http://digitalyacht.co.uk/product/st-nmea-iso/">converter</a> from DigitalYachts (discounted at the London boat show I'm glad to say).</p> <p>This article is about the installation of that hardware and the result (hence Part 1.1), not our usual type of blog. You never know it may be of interest to somebody out there and this is a real-life data issue! Don't worry it will be short and more of an insight into Yacht wiring than anything.</p> <p>The next blog will be very much back on track. 
PBCS and EPBCS Updates (June 2018): New Smart View System Setting, Planning Task Types Added to the Simplified Interface, Upcoming Changes & More https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-june-updates <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-june-updates" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/pbcs%20june%202018.jpg?t=1533950236061" alt="pbcs june 2018" class="hs-featured-image" style="width:auto !important; 
max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The May updates for Oracle's<span>&nbsp;</span><a href="https://www.us-analytics.com/hyperionblog/pbcs-vs-epbcs-comparing-oracle-cloud-planning-applications">Planning &amp; Budgeting Cloud Service (PBCS) and Enterprise Planning and Budgeting Cloud Service (EPBCS)</a><span>&nbsp;have arrived!&nbsp;</span>This blog post outlines several new features, including a new Smart View system setting, planning task types added to the simplified interface, and more.</p> <p><em>The monthly update for PBCS and EPBCS will occur on Friday, June 15 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Fpbcs-and-epbcs-2018-june-updates&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/pbcs-and-epbcs-2018-june-updates Thu Jun 07 2018 16:54:36 GMT-0400 (EDT) ARCS Updates (June 2018): Accessing Exported Transactions File, Updates to Snapshot Retention Policy & More https://www.us-analytics.com/hyperionblog/arcs-product-update-june-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/arcs-product-update-june-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/arcs%20june%202018.jpg?t=1533950236061" alt="arcs june 2018" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The May updates for Oracle's&nbsp;<a href="https://www.us-analytics.com/hyperionblog/faq-account-reconciliation-cloud-service-arcs">Account Reconciliation Cloud Service (ARCS</a>) are here. In this blog post, we’ll outline new features in ARCS, including accessing exported transactions file, updates to snapshot retention policy, and more.&nbsp;</p> <p>We’ll let you know any time there are updates to ARCS or any other Oracle EPM cloud products. 
Check the&nbsp;<a href="https://www.us-analytics.com/hyperionblog">US-Analytics Oracle EPM &amp; BI Blog</a><span>&nbsp;</span>every month.</p> <p><em>The monthly update for Oracle ARCS will occur on Friday, June 15 during your normal daily maintenance window.</em></p> <h3 style="text-align: center;"></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Farcs-product-update-june-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/arcs-product-update-june-2018 Thu Jun 07 2018 15:55:16 GMT-0400 (EDT) FCCS Updates (June 2018): Power Users Can Lock and Unlock Data, More Intuitive POV Bar in Dashboards, Upcoming Changes & More https://www.us-analytics.com/hyperionblog/fccs-updates-june-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/fccs-updates-june-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/fccs%20june%202018.jpg?t=1533950236061" alt="fccs june 2018" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>The May updates for&nbsp;<a href="https://www.us-analytics.com/hyperionblog/faq-oracle-financial-consolidation-and-close-cloud-service-fccs">Oracle's<span>&nbsp;Financial Consolidation and Close Cloud Service</span>&nbsp;(FCCS)</a><span>&nbsp;are here!</span><span>&nbsp;</span>This blog post outlines new features, including power users being able to lock and unlock data, more intuitive POV bar as well as more upcoming changes.</p> <p><em>The monthly update for FCCS will occur on Friday, June 15 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Ffccs-updates-june-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/fccs-updates-june-2018 Thu Jun 07 2018 15:35:42 GMT-0400 (EDT) EPRCS Updates (June 2018): Drill Content to a Cell File Attachment, Custom Background Image Scales, Create New Report Package Structure in Smart View & More https://www.us-analytics.com/hyperionblog/eprcs-updates-june-2018 <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/eprcs-updates-june-2018" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/eprcs%20june%202018.jpg?t=1533950236061" alt="eprcs june 2018" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>In this blog, we'll cover the June updates for&nbsp;<a href="https://www.us-analytics.com/hyperionblog/enterprise-performance-reporting-cloud">Oracle Enterprise Performance 
Reporting Cloud Service (EPRCS)</a>&nbsp;including new features and considerations.</p> <p><em>The monthly update for EPRCS will occur on Friday, June 15 during your normal daily maintenance window.</em></p> <h3></h3> <img src="https://track.hubspot.com/__ptq.gif?a=135305&amp;k=14&amp;r=https%3A%2F%2Fwww.us-analytics.com%2Fhyperionblog%2Feprcs-updates-june-2018&amp;bu=https%253A%252F%252Fwww.us-analytics.com%252Fhyperionblog&amp;bvt=rss" alt="" width="1" height="1" style="min-height:1px!important;width:1px!important;border-width:0!important;margin-top:0!important;margin-bottom:0!important;margin-right:0!important;margin-left:0!important;padding-top:0!important;padding-bottom:0!important;padding-right:0!important;padding-left:0!important; "> Michelle Heath https://www.us-analytics.com/hyperionblog/eprcs-updates-june-2018 Thu Jun 07 2018 14:09:29 GMT-0400 (EDT) Summertime and The Presenting is Easy https://blog.redpillanalytics.com/summertime-and-the-presenting-is-easy-f54e657be770?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3BveNxR6PettAS98SNHAlQ.jpeg" /><figcaption>Photo cred: <a href="https://unsplash.com/@etiennegirardet">Etienne Girardet</a></figcaption></figure><p>Do your summer plans include Orlando and <a href="https://kscope18.odtug.com/page/home">Kscope18</a>? Ours do! We are excited to be part of the amazing lineup at the Swan and Dolphin in Orlando, FL on June 10–14.</p><p>We know it’s tough to decide which sessions to attend but make sure to add these Red Pill Analytics’ sessions to your schedule builder.</p><p><a href="http://redpillanalytics.com/contact/">Let us know</a> if you are attending, we’d love to buy you a drink and talk tech.</p><h3><strong>Geek Game Night</strong></h3><p>Sunday, June 10 | 8:00–9:00 PM | Southern Hemisphere III</p><p>Welcome to another edition of ODTUG Kscope Geek Game Night! Join us after the Sunday night Welcome Reception as we continue the fun through a formal geek on! This year’s geek game night will include Kscope’s version of the popular ’70s game show, Hollywood Squares: In Kscope Squares, our “experts” (one of them our very own Stewart Bryson) will attempt to answer a wide range of technical questions. The contestants — which will be selected from the audience — will try to decide who is telling the truth and who is lying…won’t you help them out?</p><h3><strong>Become an Analytics Lifecycle Superhero: Dramatically Reduce Your Time to Patch/Upgrade/Move to the Cloud!</strong></h3><p><a href="https://kscope18.odtug.com/page/speakers?name=Mike%20Durran">Mike Durran</a>, Oracle Corporation<br><a href="https://odtug.bluetonemedia.com/cospeakers/name/Stewart%20Bryson">Stewart Bryson</a>, Red Pill Analytics</p><p>Tuesday, June 12, 2018 | Session 5, 9:00 AM — 10:00 AM<br>Northern Hemisphere E3, Fifth Level</p><p>In this session, you will learn how to dramatically reduce your system testing costs by using a regression testing utility called the baseline validation tool (BVT). This tool enables you to plan your upgrades and execute patching and developer lifecycle more effectively across on-premises systems and the cloud. 
Attend this session for an overview of the BVT, including its testing capabilities, new features, and some real world application examples.</p><h3><strong>DevOps for the Analytics Cloud Using the Developer Cloud Service</strong></h3><p><a href="https://kscope18.odtug.com/page/speakers?name=Stewart%20Bryson">Stewart Bryson</a>, <em>Red Pill Analytics</em></p><p>Tuesday, June 12, 2018 | Session 8, 3:45–4:45 PM<br>Northern Hemisphere A2, Fifth Level</p><p>The Oracle Analytics Cloud (OAC) has changed the way we build and deliver analytics. It has also changed our approach to managing our analytics investments, including the development lifecycle and the automation of processes ranging from testing to deployments. It’s all in the Cloud now; that should make things easier, right?</p><p>With the Oracle Developer Cloud Service (DCS), it can be! DCS is a free cloud service that features issue tracking, code versioning, team collaboration, Agile project management, and continuous integration and delivery. In this presentation, we’ll explore how DCS can manage the entire lifecycle for OAC development. Attendees will see a live demonstration showing how to use DCS to automate testing and deployment for OAC.</p><h3><strong>What We Learned Building Analytics for Google</strong></h3><p><a href="https://kscope18.odtug.com/page/speakers?name=Stewart%20Bryson">Stewart Bryson</a>, <em>Red Pill Analytics</em></p><p>Wednesday, June 13, 2018, Session 9, 9:00–10:00 AM<br>Oceanic 2, Lobby/Third Level</p><p>How should we approach analytics when the serverless cloud is the only option? Cloud-native data platforms enable developers to build applications quickly. In this session, learn how Red Pill Analytics stitched together data from across the internet using the Google Cloud Platform to tell a social media story for our client Google. Hear how we used BigQuery, App Engine, PubSub, Cloud Functions, and Data Studio to build a complete marketing analytics platform measuring user engagement from sources including Twitter, LinkedIn, YouTube, Google Analytics, Google+, and AdWords.</p><p>What challenges did we face? What decisions did we make? And how did the Google Cloud Platform enable us to deliver all of this in less than three months? 
We can only show you the door — you must choose to walk through it.</p><h3>Careers and the Changing Landscape Panel for DBA/Developers</h3><p>Wednesday, June 13, 2018, 10:45 AM — 12:15 PM<br>Oceanic 7, Lobby/Third Level</p><p>In these panel discussions, industry and consulting experts will explore product and employment trends, automation and other factors affecting the future of technology and resource demands.</p><p><a href="https://kscope18.odtug.com/page/speakers?name=Danny%20Bryant">Danny Bryant</a>, Snowflake Computing(Moderator)<br>Jeff Smith, Oracle Corporation<br>Stewart Bryson, Red Pill Analytics<br>Anton Nielson, Insum<br>Dan McGhan, Oracle Corporation<br>Kent Graziano, Snowflake Computing<br>Julian Dontcheff, Accenture Enkitec Group</p><h3><strong>Using Apache Kafka to Add Streaming Analytics to Your Application</strong></h3><p><a href="https://kscope18.odtug.com/page/speakers?name=Bjoern%20Rost">Bjoern Rost</a>, Pythian<br><a href="https://odtug.bluetonemedia.com/cospeakers/name/Stewart%20Bryson">Stewart Bryson</a>, Red Pill Analytics</p><p>Wednesday, June 13, 2018 | Session 13, 3:30 PM — 4:30 P M<br>Asia 5, Lobby/Third Level</p><p>While relational databases are still the kings of transaction processing systems, they have a hard time keeping up with the increasing demand for real-time analytics. In this session, we will build and demonstrate an end-to-end data processing pipeline. We will discuss how to turn changes from database state into events and stream them into Apache Kafka. We will also explain the basic concepts of streaming transformations using windows and KSQL before ingesting the transformed stream in a dashboard application. Lastly, we will explore the possibilities of adding microservices as subscribers. Live demos will be performed using Apache Kafka and Cloud infrastructure, as well as analytics and visualization services.</p><h3><strong>Thursday Deep Dive — Drill to Detail, Live!</strong></h3><p><a href="https://kscope18.odtug.com/page/speakers?name=Mark%20Robert%20Rittman,%20Mr">Mark Rittman<strong><br></strong></a><a href="https://kscope18.odtug.com/page/speakers?name=Stewart%20Bryson">Stewart Bryson</a>, <em>Red Pill Analytics</em></p><p>Thursday, June 14, 2018, 9:30–11:00 a.m.<br>Northern Hemisphere E4, Fifth Level</p><p>Be a part of the studio audience during a live recording of the Drill to Detail podcast! Mark Rittman, an independent analyst and Oracle ACE Director, began the podcast over a year ago, focusing on the business and strategy of analytics, big data, and distributed processing in the cloud. Throughout the session, Mark will be joined by several esteemed guests, including his first ever podcast guest, Oracle ACE Director Stewart Bryson. 
Join us for what is sure to be a lively discussion on everything big data and data warehousing.</p><p>Make sure to keep an eye on our <a href="http://events.redpillanalytics.com">events</a> page for what Red Pill Analytics is up to this year.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=f54e657be770" width="1" height="1"><hr><p><a href="https://blog.redpillanalytics.com/summertime-and-the-presenting-is-easy-f54e657be770">Summertime and The Presenting is Easy</a> was originally published in <a href="https://blog.redpillanalytics.com">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Lauren Prezby https://medium.com/p/f54e657be770 Wed Jun 06 2018 13:17:10 GMT-0400 (EDT) Kscope 18 is coming fast and DevEPM will be all around the place! https://devepm.com/2018/06/04/kscope-18-is-coming-fast-and-devepm-will-be-all-around-the-place/ Hi guys how are you? It has been so long that I almost forgot about the blog :). We have being very busy lately, with a lot of projects and Kscope. This year we&#8217;ll be all around the place in Kscope. Yes we&#8217;ll be presenting 3 sessions, one lunch and learn panel and a lip-sync [&#8230;] RZGiampaoli http://devepm.com/?p=1708 Mon Jun 04 2018 14:26:11 GMT-0400 (EDT) Battle in India for e-commerce market leadership is no longer between just Amazon and Flipkart http://bi.abhinavagarwal.net/2018/06/battle-in-india-for-e-commerce-market.html <div dir="ltr" style="text-align: left;" trbidi="on"><h2 style="clear: both;">Amazon Launches Prime Music in India. What It Means for the Indian e-commerce Market</h2><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-0ksDfimXChE/Wp_sOXDcAbI/AAAAAAAAPC4/2381eN1IT0sU-St96L5iFaPWvrXDA1A-wCLcBGAs/s1600/04171.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="826" height="418" src="https://2.bp.blogspot.com/-0ksDfimXChE/Wp_sOXDcAbI/AAAAAAAAPC4/2381eN1IT0sU-St96L5iFaPWvrXDA1A-wCLcBGAs/s640/04171.png" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="color: black; float: left; font-family: &quot;times&quot; , serif , &quot;georgia&quot;; font-size: 48px; line-height: 30px; padding-right: 2px; padding-top: 2px;">O</span><br />n a day when it was reported that the online streaming music app Gaana was raising $115 million (about ₹750 crores) from Chinese Internet investment company Tencent Holdings Ltd and Times Internet Ltd (<a href="http://www.livemint.com/Home-Page/6dSB97I2dryglkHY0Vm88N/Gaana-to-raise-115-million-from-Tencent-Times-Internet.html">Gaana to raise $115 million from Tencent, Times Internet – Livemint</a>), came the news that online retailer Amazon had launched its PrimeMusic streaming music service in India.<br /><br /><a href="https://2.bp.blogspot.com/--1EFZuqPTtQ/Wp_sOvJ6C2I/AAAAAAAAPC8/09s-QgTWxdcZuLAp7-AET5j18zXn3SfcACLcBGAs/s1600/04172.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="448" data-original-width="585" height="245" src="https://2.bp.blogspot.com/--1EFZuqPTtQ/Wp_sOvJ6C2I/AAAAAAAAPC8/09s-QgTWxdcZuLAp7-AET5j18zXn3SfcACLcBGAs/s320/04172.png" width="320" /></a>According to Amazon, “Prime Music provides unlimited, ad-free access to on-demand streaming of curated playlists and stations, 
plus millions of songs and albums at no additional cost for eligible Amazon Prime members.”<br /><br />The Amazon Prime service in India costs ₹999 annually and provides “free One-Day, Two-Day and Standard Delivery on eligible items”, PrimeVideo – Amazon’s video streaming service, and now PrimeMusic. According to&nbsp;<a href="https://musicindustryblog.wordpress.com/2017/07/14/amazon-is-now-the-3rd-biggest-music-subscription-service/">Midis Research</a>, Amazon had become the third-largest music subscription service globally, behind Spotify (40%) and Apple Music (19%).<br /><br /><a name='more'></a><br /><a href="https://1.bp.blogspot.com/--rBjOpRNPSE/Wp_sMd5XFNI/AAAAAAAAPCs/C-oV2DcPtzgOHHivYObpjihKhfP2u1regCLcBGAs/s1600/04149.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="765" data-original-width="1593" height="190" src="https://1.bp.blogspot.com/--rBjOpRNPSE/Wp_sMd5XFNI/AAAAAAAAPCs/C-oV2DcPtzgOHHivYObpjihKhfP2u1regCLcBGAs/s400/04149.png" width="400" /></a>I gave the PrimeMusic service a spin, and after two days of trying it out, I came away reasonably impressed. If you are already a subscriber to Amazon’s Prime service, you don’t need to do anything more than downloading the PrimeMusic app (available on most mobile operating systems, including iOS and Android). The service is also available on web browsers. The selection is impressive, and the curated lists are fairly comprehensive. For someone like me who has a preference for older Hindi soundtracks and likes listening to Mohd Rafi, I was pleasantly surprised to find a broad array of choices at my disposal. I can create my own playlists, listen to curated playlists, or listen all day long to stations. I spent all Sunday streaming songs from the service to my speakers. The service restricts streaming on only one device at a time, however. For morning listening, there is an acceptable, though not comprehensive, selection of devotional songs. The price too is right. Where that leaves other services like Gaana, Saavn, Wynk, Hungama, and others remains to be seen. The future does not look very promising, to be frank. But more on this later.<br /><br />From a first look, the Amazon PrimeMusic service seems to have hit the ground running.<br /><h3 style="text-align: left;">Flashback - Flipkart First and Flyte</h3><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-rhnDHAJMS-E/Wp_vAZ0NobI/AAAAAAAAPDQ/Ox2ufywf7TsBAbKLPEqKi8xvC4Zt16nkQCLcBGAs/s1600/Flipkart-First-1170x480%255B1%255D.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="480" data-original-width="1170" height="131" src="https://4.bp.blogspot.com/-rhnDHAJMS-E/Wp_vAZ0NobI/AAAAAAAAPDQ/Ox2ufywf7TsBAbKLPEqKi8xvC4Zt16nkQCLcBGAs/s320/Flipkart-First-1170x480%255B1%255D.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="font-size: 12.8px; text-align: center;">Flipkart First graphic</td></tr></tbody></table>It is worth noting that it was in May 2014 that Flipkart had launched Flipkart First, a membership program patterned on Amazon Prime, which cost ₹500 a year, and which promised “Free shipping on your orders*“, “in a day” delivery for most products, and “Discounted same day delivery*“. 
In the more than three years since its launch, Flipkart has not added any new benefits to its First program. It was more than two years after Flipkart First’s launch that Amazon launched its Prime service in India (<a href="https://medium.com/@abhinavagarwal_/amazon-launches-prime-in-india-can-flipkart-stay-first-bb9e1e1bd672">see my article</a>). At the time, I had remarked on the lack of focus on Flipkart’s part to promote its FlipkartFirst service.<br /><blockquote class="tr_bq">“But for reasons best known to Flipkart, after an initial flurry of promotion and advertising, including a three-month giveaway to 75000 customers, Flipkart did not seem to pursue it with any sort of vigor. Customers complained of many products being excluded from Flipkart First, and in the absence of any sustained campaign to make customers aware of the programme, it has slowly faded from memory. … Worse, there was a news story on July 20th about Flipkart planning to launch a programme called “F-Assured”, as a replacement to Flipkart Advantage. The story suggested that the launch of F-Assured was also meant to “preempts the launch of Amazon Prime” — something that did not come to pass.”</blockquote>It, therefore, came as no surprise when there were news reports in late 2017 of Flipkart looking to “relaunch” its Flipkart First program in collaboration with other e-commerce vendors and startups like MakeMyTrip, Ola, and BookMyShow (<a href="http://www.livemint.com/Industry/6Z2ueZHBRXQ3UxilTqaBhP/Flipkart-may-relaunch-loyalty-programme-against-Amazon-Prime.html">article in Mint</a>&nbsp;and&nbsp;<a href="https://economictimes.indiatimes.com/small-biz/startups/newsbuzz/flipkart-amazon-to-adopt-new-strategies-in-new-year/articleshow/62246689.cms">Economic Times</a>). The article also mentioned that Flipkart “lost focus in making it work. Customers didn’t take to it and the service fizzled out.”<br /><br />Contrast this with Amazon’s focus on making its Prime program a success. So much so that this is what Amazon founder and CEO Jeff Bezos had to say in his&nbsp;<a href="http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9NjI4NTg1fENoaWxkSUQ9MzI5NTMxfFR5cGU9MQ==&amp;t=1">annual letter to shareholders in 2015</a>:<br /><blockquote class="tr_bq">“AWS, Marketplace and Prime are all examples of bold bets at Amazon that worked, and we’re fortunate to have those three big pillars.”</blockquote><a href="https://1.bp.blogspot.com/-CBcPqhra8yI/Wp_sMUwEAgI/AAAAAAAAPCo/_q6VYOf5ZqI8-aTnsF0n91VrBj_yiCV4wCLcBGAs/s1600/04150.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="339" data-original-width="703" height="192" src="https://1.bp.blogspot.com/-CBcPqhra8yI/Wp_sMUwEAgI/AAAAAAAAPCo/_q6VYOf5ZqI8-aTnsF0n91VrBj_yiCV4wCLcBGAs/s400/04150.png" width="400" /></a>It is not as if Flipkart had not ventured into the music business. In Feb 2012, it had launched its&nbsp;<a href="https://www.thehindubusinessline.com/companies/Flipkart-unveils-digital-music-store-Flyte/article20402612.ece">online music store,&nbsp;<b>Flyte</b></a>, and in June 2013 shuttered it. I wrote about it in my article in&nbsp;<a href="http://www.dnaindia.com/authors/abhinav-agarwal">DNA</a>&nbsp;in&nbsp;<a href="http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185">2015</a>. 
According to one website, the reasons for Flyte being shut down were several, from poor marketing support, digital piracy, low revenues, and more.<br /><blockquote class="tr_bq">“Flyte, with low revenues and low growth, remained a low priority business, and it never got the marketing support needed to push for an increase in sales. A couple of people we spoke with repeatedly emphasized the lack of marketing support as a key reason for Flyte’s lack of success – Flyte built traction almost entirely on word of mouth, while the physical goods business got all the marketing spends.”</blockquote>Second, the financial reasons for which Flipkart shut down Flyte, even if true, defy belief. For a company that had raised thousands of crores of rupees (billions of dollars) in funding, persisting with a new and promising line of business that would burn a hole of a few million dollars a year should have been a no-brainer. Except, that it wasn’t.<br /><blockquote class="tr_bq">"Flyte Music had struck deals for India based music downloads on web and app by paying music labels an aggregate minimum guarantee (MG) of around $1 million (Rs 5.5-6 crores) for the year, multiple sources told MediaNama. … Revenues from song downloads were fairly low – not even 50% of the minimum guarantee amount (only around Rs 2-3 crore is what we heard), and the ARPU was around Rs 9-12 per user, which made it difficult to justify the minimum guarantee, and any significant customer acquisition costs." [<a href="https://www.medianama.com/2013/05/223-why-flipkart-shut-flyte-music/" target="_blank">Medianama article</a>]</blockquote>One can only speculate what Flipkart’s competitive response would have been had it not prematurely abandoned its foray into digital music.<br /><h3 style="text-align: left;">The Battle-lines and the Battleground</h3><a href="https://4.bp.blogspot.com/-YGazrPeebQk/Wp_sMBMj4hI/AAAAAAAAPCk/wtUypr2oS0wqfthiZV8iYH6wk4ZFEdCAQCLcBGAs/s1600/04151.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="577" data-original-width="439" height="320" src="https://4.bp.blogspot.com/-YGazrPeebQk/Wp_sMBMj4hI/AAAAAAAAPCk/wtUypr2oS0wqfthiZV8iYH6wk4ZFEdCAQCLcBGAs/s320/04151.png" width="242" /></a>More pertinently, what seems to be clear is that Amazon is betting heavily on India. Having lost the race in China, it has pulled out all stops to turn its India operations into the market leader in the country. According to this&nbsp;<a href="https://www.emarketer.com/Article/Alibabas-Tmall-Maintains-Ecommerce-Lead-China/1016432">article</a>, Alibaba had a 51% market share, while Amazon had less than 1%. Clearly, India is a market Amazon can ill-afford to lose.<br /><br /><br />The battle for space on the consumer’s smartphone screen is also one of numbers. On this dimension, Flipkart has but&nbsp;<a href="https://play.google.com/store/search?q=flipkart&amp;hl=en">one app in its arsenal</a>&nbsp;– its shopping app. 
The second app is the “Flipkart Seller Hub”.<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-IQV1GWflzMQ/Wp_sNrm-CfI/AAAAAAAAPCw/PUlcJFCNGRgV5hMfg1YJlBpChSWZGJmaACLcBGAs/s1600/04162.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="311" data-original-width="348" height="178" src="https://4.bp.blogspot.com/-IQV1GWflzMQ/Wp_sNrm-CfI/AAAAAAAAPCw/PUlcJFCNGRgV5hMfg1YJlBpChSWZGJmaACLcBGAs/s200/04162.png" width="200" /></a></td></tr><tr><td class="tr-caption" style="font-size: 12.8px;">Flipkart apps on Android</td></tr></tbody></table>On the other hand, Amazon has a&nbsp;<a href="https://play.google.com/store/search?q=amazon&amp;hl=en">formidable presence</a>&nbsp;that allows it to land-and-expand: its shopping app, Music, Prime Video, Now (for hyperlocal grocery shopping), Kindle, Drive, Assistant, Photos, Fire TV, Go, Alexa, and more. In all but the shopping space, Flipkart is more of an out-of-sight-out-of-mind case.<br /><div class="separator" style="clear: both; text-align: center;"></div><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-9kdg0MIBpl0/Wp_sN9mUHpI/AAAAAAAAPC0/ivC7T_ZlFAEfezUtZ-F2lgQ3fb5wftGlQCLcBGAs/s1600/04163.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="327" data-original-width="1218" height="169" src="https://1.bp.blogspot.com/-9kdg0MIBpl0/Wp_sN9mUHpI/AAAAAAAAPC0/ivC7T_ZlFAEfezUtZ-F2lgQ3fb5wftGlQCLcBGAs/s640/04163.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="font-size: 12.8px;"><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td class="tr-caption" style="font-size: 12.8px;">Amazon apps on Android</td></tr></tbody></table></td></tr></tbody></table>People may remember that it was in&nbsp;<a href="https://inc42.com/flash-feed/flipkart-launches-flipkart-nearby/">November 2015</a>&nbsp;that Flipkart had launched, on an experimental basis, its hyperlocal grocery delivery app,&nbsp;<b>Flipkart Nearby</b>. It was a smart and timely competitive, pre-emptive move against Amazon. Unfortunately, even that&nbsp;<a href="http://www.livemint.com/Companies/HtDM8KSQDcIktzXurKAnTI/Flipkart-shuts-grocery-delivery-app-Nearby.html">was shut down</a>&nbsp;a short few months later in 2016 as Flipkart doubled down on clamping down on its bleeding bottom-line.<br /><br />But this is only one half of the picture. To understand the other half, it is important to look at the entities that have been funding Flipkart in recent times. According to website&nbsp;<a href="https://www.crunchbase.com/organization/flipkart">Crunchbase</a>, Flipkart saw investments from&nbsp;<b>eBay</b>,&nbsp;<b>Microsoft</b>,&nbsp;<b>Tencent</b>, and&nbsp;<b>Softbank&nbsp;</b>to the tune of&nbsp;<b>$2.5 billion</b>&nbsp;in its last funding round. According to online news site&nbsp;<a href="https://www.recode.net/2017/12/7/16747706/where-has-softbank-vision-fund-invested">Recode</a>, Flipkart is the largest e-commerce investment Softbank has made from its $100 billion fund. 
Furthermore, there have been persistent rumours that brick-and-mortal behemoth&nbsp;<b>Walmart</b>&nbsp;has been looking to invest in Flipkart. As recently as January 2018, there&nbsp;<a href="http://www.business-standard.com/article/companies/walmart-to-invest-several-billion-dollars-for-20-stake-in-flipkart-report-118020900347_1.html">were</a>&nbsp;<a href="https://economictimes.indiatimes.com/small-biz/startups/newsbuzz/walmart-in-talks-to-buy-a-significant-minority-stake-in-flipkart/articleshow/62715558.cms">several</a>&nbsp;<a href="http://fortune.com/2018/01/31/walmart-amazon-flipkart-india/">news</a>&nbsp;<a href="http://www.businessinsider.com/walmart-considers-investment-flipkart-india-2018-1">stories</a>&nbsp;that talked about the possibility of Walmart acquiring a 20% stake in Flipkart at valuations that could touch $20 billion. To put that in perspective, Walmart’s biggest acquisition to date was its purchase of online e-commerce site&nbsp;<b>Jet.com</b>&nbsp;in August 2016 for $3 billion in cash and $300 million in Walmart shares. Jet.com had been founded by&nbsp;<b>Marc Lore</b>, who had earlier been a co-founder of&nbsp;<b>Quidsi</b>, the company behind the website&nbsp;<b>Diapers.com</b>, and which had a very bruising battle with Amazon a decade back. This is another fascinating story that I mentioned in my article&nbsp;<a href="http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185">here</a>.<br /><h3 style="text-align: left;">Conclusion</h3>All this points to one inescapable conclusion – the battle in India for e-commerce market leadership is no longer between just Amazon and Flipkart. On one side you have Amazon with its relentless focus on execution and remarkable track record of having won many more battles than it has lost. On the other side, you have Flipkart that is now backed not only by billions of dollars in fresh funding, but also a diverse array of interests like Tencent, Alibaba, Microsoft, eBay, Softbank, and possibly even Walmart. These entities seem to be bound by their unanimous need to put all their wood behind the arrow that is Flipkart. This is no longer about investing in the Indian e-commerce market to get a hefty multiple on their investments. It is now looking more and more like the OK Corral where the final shootout will take place. Place your bets on who will come out of this scrap as Wyatt Earp.<br /><br /><i>This article first appeared in&nbsp;<a href="http://www.opindia.com/" target="_blank">OpIndia</a>&nbsp;on&nbsp;<a href="http://www.opindia.com/2018/03/battle-in-india-for-e-commerce-market-leadership-is-no-longer-between-just-amazon-and-flipkart/" target="_blank">March 7th, 2018</a>.</i><br /><i>This was also cross-posted on my personal <a href="http://blog.abhinavagarwal.net/2018/03/battle-in-india-for-e-commerce-market.html" target="_blank">blog</a>.</i><br /><i><br /></i><br /><div><br /><span style="color: #666666; font-size: x-small;">© 2018, Abhinav Agarwal. All rights reserved.</span></div></div> Abhinav Agarwal tag:blogger.com,1999:blog-13714584.post-7389516864699052403 Mon Jun 04 2018 13:30:00 GMT-0400 (EDT) Twitter Analytics using Python - Part 2 http://www.oralytics.com/2018/06/twitter-analytics-using-python-part-2.html <p>This is my second (of five) post on using Python to process Twitter data.</p> <p>Check out my all the posts in the series.</p> <p>In this post I was going to look at two particular aspects. The first is the converting of Tweets to Pandas. 
This will allow you to do additional analysis of tweets. The second part of this post looks at how to setup and process streaming of tweets. The first part was longer than expected so I'm going to hold the second part for a later post.</p> <p>Step 6 - Convert Tweets to Pandas</p><p>In my previous blog post I show you how to connect and download tweets. Sometimes you may want to convert these tweets into a structured format to allow you to do further analysis. A very popular way of analysing data is to us <a href="https://pandas.pydata.org/">Pandas</a>. Using Pandas to store your data is like having data stored in a spreadsheet, with columns and rows. There are also lots of analytic functions available to use with Pandas.</p> <p>In my previous blog post I showed how you could extract tweets using the Twitter API and to do selective pulls using the Tweepy Python library. Now that we have these tweet how do I go about converting them into Pandas for additional analysis? But before we do that we need to understand a bit more a bout the structure of the Tweet object that is returned by the Twitter API. We can examine the structure of the User object and the Tweet object using the following commands.</p> <pre><br />dir(user)<br /><br />['__class__',<br /> '__delattr__',<br /> '__dict__',<br /> '__dir__',<br /> '__doc__',<br /> '__eq__',<br /> '__format__',<br /> '__ge__',<br /> '__getattribute__',<br /> '__getstate__',<br /> '__gt__',<br /> '__hash__',<br /> '__init__',<br /> '__init_subclass__',<br /> '__le__',<br /> '__lt__',<br /> '__module__',<br /> '__ne__',<br /> '__new__',<br /> '__reduce__',<br /> '__reduce_ex__',<br /> '__repr__',<br /> '__setattr__',<br /> '__sizeof__',<br /> '__str__',<br /> '__subclasshook__',<br /> '__weakref__',<br /> '_api',<br /> '_json',<br /> 'contributors_enabled',<br /> 'created_at',<br /> 'default_profile',<br /> 'default_profile_image',<br /> 'description',<br /> 'entities',<br /> 'favourites_count',<br /> 'follow',<br /> 'follow_request_sent',<br /> 'followers',<br /> 'followers_count',<br /> 'followers_ids',<br /> 'following',<br /> 'friends',<br /> 'friends_count',<br /> 'geo_enabled',<br /> 'has_extended_profile',<br /> 'id',<br /> 'id_str',<br /> 'is_translation_enabled',<br /> 'is_translator',<br /> 'lang',<br /> 'listed_count',<br /> 'lists',<br /> 'lists_memberships',<br /> 'lists_subscriptions',<br /> 'location',<br /> 'name',<br /> 'needs_phone_verification',<br /> 'notifications',<br /> 'parse',<br /> 'parse_list',<br /> 'profile_background_color',<br /> 'profile_background_image_url',<br /> 'profile_background_image_url_https',<br /> 'profile_background_tile',<br /> 'profile_banner_url',<br /> 'profile_image_url',<br /> 'profile_image_url_https',<br /> 'profile_link_color',<br /> 'profile_location',<br /> 'profile_sidebar_border_color',<br /> 'profile_sidebar_fill_color',<br /> 'profile_text_color',<br /> 'profile_use_background_image',<br /> 'protected',<br /> 'screen_name',<br /> 'status',<br /> 'statuses_count',<br /> 'suspended',<br /> 'time_zone',<br /> 'timeline',<br /> 'translator_type',<br /> 'unfollow',<br /> 'url',<br /> 'utc_offset',<br /> 'verified']<br /></pre> <br><pre><br />dir(tweets)<br /><br />['__class__',<br /> '__delattr__',<br /> '__dict__',<br /> '__dir__',<br /> '__doc__',<br /> '__eq__',<br /> '__format__',<br /> '__ge__',<br /> '__getattribute__',<br /> '__getstate__',<br /> '__gt__',<br /> '__hash__',<br /> '__init__',<br /> '__init_subclass__',<br /> '__le__',<br /> '__lt__',<br /> '__module__',<br /> '__ne__',<br /> 
'__new__',<br /> '__reduce__',<br /> '__reduce_ex__',<br /> '__repr__',<br /> '__setattr__',<br /> '__sizeof__',<br /> '__str__',<br /> '__subclasshook__',<br /> '__weakref__',<br /> '_api',<br /> '_json',<br /> 'author',<br /> 'contributors',<br /> 'coordinates',<br /> 'created_at',<br /> 'destroy',<br /> 'entities',<br /> 'favorite',<br /> 'favorite_count',<br /> 'favorited',<br /> 'geo',<br /> 'id',<br /> 'id_str',<br /> 'in_reply_to_screen_name',<br /> 'in_reply_to_status_id',<br /> 'in_reply_to_status_id_str',<br /> 'in_reply_to_user_id',<br /> 'in_reply_to_user_id_str',<br /> 'is_quote_status',<br /> 'lang',<br /> 'parse',<br /> 'parse_list',<br /> 'place',<br /> 'retweet',<br /> 'retweet_count',<br /> 'retweeted',<br /> 'retweets',<br /> 'source',<br /> 'source_url',<br /> 'text',<br /> 'truncated',<br /> 'user']<br /></pre> <p>We can see all this additional information to construct what data we really want to extract.</p> <p>The following example illustrates the searching for tweets containing a certain word and then extracting a subset of the metadata associated with those tweets.</p> <pre><br />oracleace_tweets = tweepy.Cursor(api.search,q="oracleace").items()<br />tweets_data = []<br />for t in oracleace_tweets:<br /> tweets_data.append((t.author.screen_name,<br /> t.place,<br /> t.lang,<br /> t.created_at,<br /> t.favorite_count,<br /> t.retweet_count,<br /> t.text.encode('utf8')))<br /></pre> <p>We print the contents of the tweet_data object.</p> <pre><br />print(tweets_data)<br /><br />[('jpraulji', None, 'en', datetime.datetime(2018, 5, 28, 13, 41, 59), 0, 5, 'RT @tanwanichandan: Hello Friends,\n\nODevC Yatra is schedule now for all seven location.\nThis time we have four parallel tracks i.e. Databas…'), ('opal_EPM', None, 'en', datetime.datetime(2018, 5, 28, 13, 15, 30), 0, 6, "RT @odtug: Oracle #ACE Director @CaryMillsap is presenting 2 #Kscope18 sessions you don't want to miss! \n- Hands-On Lab: How to Write Bette…"), ('msjsr', None, 'en', datetime.datetime(2018, 5, 28, 12, 32, 8), 0, 5, 'RT @tanwanichandan: Hello Friends,\n\nODevC Yatra is schedule now for all seven location.\nThis time we have four parallel tracks i.e. Databas…'), ('cmvithlani', None, 'en', datetime.datetime(2018, 5, 28, 12, 24, 10), 0, 5, 'RT @tanwanichandan: Hel ......<br /></pre> <p>I've only shown a subset of the tweets_data above.</p> <p>Now we want to convert the tweets_data object to a panda object. This is a relative trivial task but an important steps is to define the columns names otherwise you will end up with columns with labels 0,1,2,3...</p> <pre><br />import pandas as pd<br /><br />tweets_pd = pd.DataFrame(tweets_data,<br /> columns=['screen_name', 'place', 'lang', 'created_at', 'fav_count', 'retweet_count', 'text'])<br /></pre> <p>Now we have a panda structure that we can use for additional analysis. This can be easily examined as follows.</p> <pre><br />tweets_pd<br /><br /> screen_name place lang created_at fav_count retweet_count text<br />0 jpraulji None en 2018-05-28 13:41:59 0 5 RT @tanwanichandan: Hello Friends,\n\nODevC Ya...<br />1 opal_EPM None en 2018-05-28 13:15:30 0 6 RT @odtug: Oracle #ACE Director @CaryMillsap i...<br />2 msjsr None en 2018-05-28 12:32:08 0 5 RT @tanwanichandan: Hello Friends,\n\nODevC Ya...<br /></pre> <p>Now we can use all the analytic features of pandas to do some analytics. 
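One more quick example works the same way; this is a minimal sketch, assuming only the tweets_pd DataFrame and the column names defined above, which buckets the tweets per day using the created_at column and then lists the most retweeted tweets in the sample.</p> <pre><br />import pandas as pd<br /><br /># minimal sketch, assuming the tweets_pd DataFrame built above<br />tweets_pd['created_at'] = pd.to_datetime(tweets_pd['created_at'])<br />tweets_per_day = tweets_pd.set_index('created_at').resample('D').size()<br />print(tweets_per_day)<br /><br /># most retweeted tweets in the sample<br />top_retweets = tweets_pd.sort_values('retweet_count', ascending=False)<br />print(top_retweets[['screen_name', 'retweet_count']].head())<br /></pre> <p>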
For example, in the following we do a could of the number of times a language has been used in our tweets data set/panda, and then plot it.</p> <pre><br />import matplotlib.pyplot as plt<br /><br />tweets_by_lang = tweets_pd['lang'].value_counts()<br />print(tweets_by_lang)<br /><br />lang_plot = tweets_by_lang.plot(kind='bar')<br />lang_plot.set_xlabel("Languages")<br />lang_plot.set_ylabel("Num. Tweets")<br />lang_plot.set_title("Language Frequency")<br /><br />en 182<br />fr 7<br />es 2<br />ca 2<br />et 1<br />in 1<br /></pre> <p><img src="https://lh3.googleusercontent.com/-hctvIYBDhAo/Ww7tEv-StbI/AAAAAAAAAcE/gfyWIECUiFQSaEeXW4yso7zCPVNvlCvpQCHMYCw/pandas1.png?imgmax=1600" alt="Pandas1" title="pandas1.png" border="0" width="430" height=320" /></p> <p>Similarly we can analyse the number of times a twitter screen name has been used, and limited to the 20 most commonly occurring screen names.</p> <pre><br />tweets_by_screen_name = tweets_pd['screen_name'].value_counts()<br />#print(tweets_by_screen_name)<br /><br />top_twitter_screen_name = tweets_by_screen_name[:20]<br />print(top_twitter_screen_name)<br /><br />name_plot = top_twitter_screen_name.plot(kind='bar')<br />name_plot.set_xlabel("Users")<br />name_plot.set_ylabel("Num. Tweets")<br />name_plot.set_title("Frequency Twitter users using oracleace")<br /><br />oraesque 7<br />DBoriented 5<br />Addidici 5<br />odtug 5<br />RonEkins 5<br />opal_EPM 5<br />fritshoogland 4<br />svilmune 4<br />FranckPachot 4<br />hariprasathdba 3<br />oraclemagazine 3<br />ritan2000 3<br />yvrk1973 3<br />...<br /></pre> <p><img src="https://lh3.googleusercontent.com/-TnpBmYY572c/Ww7tFTb-wbI/AAAAAAAAAcI/X4tvkMXpiC0oBmJpPkncCcz8FOBPGAWHgCHMYCw/pandas2.png?imgmax=1600" alt="Pandas2" title="pandas2.png" border="0" width="430" height="320" /></p> <p>There you go, this post has shown you how to take twitter objects, convert them in pandas and then use the analytics features of pandas to aggregate the data and create some plots.</p> <br><p>Check out the other blog posts in this series of Twitter Analytics using Python.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-3560380170077312107 Mon Jun 04 2018 09:24:00 GMT-0400 (EDT) Beacon Technology at ODTUG Kscope18 https://kscope18.odtug.com/p/bl/et/blogaid=806&source=1 At ODTUG Kscope18, we are using wearable beacon technology to make the event better. The technology will also help us understand what works and what does not. ODTUG https://kscope18.odtug.com/p/bl/et/blogaid=806&source=1 Mon Jun 04 2018 08:40:37 GMT-0400 (EDT) Beacon Technology at ODTUG Kscope18 https://www.odtug.com/p/bl/et/blogaid=806&source=1 At ODTUG Kscope18, we are using wearable beacon technology to make the event better. The technology will also help us understand what works and what does not. ODTUG https://www.odtug.com/p/bl/et/blogaid=806&source=1 Mon Jun 04 2018 08:40:37 GMT-0400 (EDT) Want to change or add a #DataVault Standard? http://danlinstedt.com/allposts/datavaultcat/want-to-change-or-add-a-datavault-standard/ For many years, I have built, authored and maintained the #Datavault standards.  This includes Data Vault 1.0, and Data Vault 2.0.  There are others in the community who believe that &#8220;these standards should evolve and be changed by consensus of the general public&#8221;. I have a number of issues with this approach.  
In this article [&#8230;] Dan Linstedt http://danlinstedt.com/?p=2970 Sun Jun 03 2018 14:58:24 GMT-0400 (EDT) Data Vault Data Modeling Standards v2.0.1 http://danlinstedt.com/allposts/datavaultcat/data-vault-data-modeling-standards-v2-0-1/ I have published brand new updated standards for Data Vault Modeling.  V2.0.1  The document is available for FREE, but you must register on http://DataVaultAlliance.com to get it.  Don&#8217;t worry, you can ALWAYS unsubscribe after if you like. NOTE: I will be moving my blog to the new site as we go forward. Thanks, Dan Linstedt Dan Linstedt http://danlinstedt.com/?p=2967 Sun Jun 03 2018 03:22:21 GMT-0400 (EDT) False Rumors and Slander about Data Vault and my role http://danlinstedt.com/allposts/datavaultcat/false-rumors-my-role/ Hello everyone, apparently there are those out there in the market place who are spreading false rumors &#8211; they believe that I am only in Data Vault for the money.   Let me put these rumors to rest right now. Sadly I feel I have to address these issues, as these folks whom are spreading these [&#8230;] Dan Linstedt http://danlinstedt.com/?p=2943 Thu May 31 2018 15:16:41 GMT-0400 (EDT) Rittman Mead at Kscope 2018 https://www.rittmanmead.com/blog/2018/05/rittman-mead-at-kscope-2018/ <img src="https://www.rittmanmead.com/blog/content/images/2018/05/DevOpsSlides-1.png" alt="Rittman Mead at Kscope 2018"><p>Kscope 2018 is just a week away! Magnificent location (<a href="https://www.swandolphin.com/groupres/KSCNR/">Walt Disney World Swan and Dolphin Resort</a>) for one of the best tech conferences of the year! The agenda is impressive (look <a href="https://kscope18.odtug.com/page/presentations">here</a>) spanning over ten different tracks from the traditional EPM, BI Analytics and Data Visualization, to the newly added Blockchain! 
Plenty of great content and networking opportunities!</p> <p>I'll be representing Rittman Mead with two talks: one about <strong>Visualizing Streams</strong> (<em>Wednesday at 10:15 Northern Hemisphere A2, Fifth Level</em>) on how to build a modern analytical platform including Apache Kafka, Confluent's KSQL, Apache Drill and Oracle's Data Visualization (Cloud or Desktop).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Slides-2.png" alt="Rittman Mead at Kscope 2018"></p> <p>During the second talk, titled <strong>DevOps and OBIEE: <br> Do it Before it's Too Late!</strong> (<em>Monday at 10:45 Northern Hemisphere A1, Fifth Level</em>), I'll be sharing details, based on our experience, on how OBIEE can be fully included in a DevOps framework, what's the cost of "avoiding" DevOps and automation in general and how Rittman Mead's toolkits, partially described <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">here</a>, can be used to accelerate the adoption of DevOps practices in any situation.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/DevOpsSlides.png" alt="Rittman Mead at Kscope 2018"></p> <p>If you’re at the event and you see me in sessions, around the conference or during my talks, I’d be pleased to speak with you about your projects and answer any questions you might have.</p> Francesco Tisiot 4af4829e-2269-427c-91dd-25f96c54294a Thu May 31 2018 03:20:00 GMT-0400 (EDT) Rittman Mead at Kscope 2018 https://www.rittmanmead.com/blog/2018/05/rittman-mead-at-kscope-2018/ <div class="kg-card-markdown"><img src="https://www.rittmanmead.com/blog/content/images/2018/05/DevOpsSlides-1.png" alt="Rittman Mead at Kscope 2018"><p>Kscope 2018 is just a week away! Magnificent location (<a href="https://www.swandolphin.com/groupres/KSCNR/">Walt Disney World Swan and Dolphin Resort</a>) for one of the best tech conferences of the year! The agenda is impressive (look <a href="https://kscope18.odtug.com/page/presentations">here</a>) spanning over ten different tracks from the traditional EPM, BI Analytics and Data Visualization, to the newly added Blockchain! 
Plenty of great content and networking opportunities!</p> <p>I'll be representing Rittman Mead with two talks: one about <strong>Visualizing Streams</strong> (<em>Wednesday at 10:15 Northern Hemisphere A2, Fifth Level</em>) on how to build a modern analytical platform including Apache Kafka, Confluent's KSQL, Apache Drill and Oracle's Data Visualization (Cloud or Desktop).</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/Slides-2.png" alt="Rittman Mead at Kscope 2018"></p> <p>During the second talk, titled <strong>DevOps and OBIEE:<br> Do it Before it's Too Late!</strong> (<em>Monday at 10:45 Northern Hemisphere A1, Fifth Level</em>), I'll be sharing details, based on our experience, on how OBIEE can be fully included in a DevOps framework, what's the cost of &quot;avoiding&quot; DevOps and automation in general and how Rittman Mead's toolkits, partially described <a href="https://www.rittmanmead.com/blog/2017/02/concurrent-rpd-development-with-git/">here</a>, can be used to accelerate the adoption of DevOps practices in any situation.</p> <p><img src="https://www.rittmanmead.com/blog/content/images/2018/05/DevOpsSlides.png" alt="Rittman Mead at Kscope 2018"></p> <p>If you’re at the event and you see me in sessions, around the conference or during my talks, I’d be pleased to speak with you about your projects and answer any questions you might have.</p> </div> Francesco Tisiot 5b5a56f45000960018e69b3b Thu May 31 2018 03:20:00 GMT-0400 (EDT) Call for Papers : UKOUG Annual Conferences : Closes 4th June at 9am (UK) http://www.oralytics.com/2018/05/call-for-papers-ukoug-annual.html <p>The <a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/tAbsSubmitterLogin.csp?pageID=319&eventID=2">call for Papers (presentations) for the UKOUG Annual Conferences</a> is open until 9am (UK time) on Monday 4th June.</p> <p><a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/tAbsSubmitterLogin.csp?pageID=319&eventID=2"><img src="https://lh3.googleusercontent.com/-ccRv5sTj8B0/Ww2vyZ2mSTI/AAAAAAAAAb0/FV_x1gu9aPUOLJyCkPF-bRTZCBHu5shKACHMYCw/ukoug18.jpg?imgmax=1600" alt="Ukoug18" title="ukoug18.jpg" border="0" width="563" height="188" ></a></p> <p>Me: What are you waiting for? Go and submit a topic! Why not!</p> <p>You: Humm, well..., (excuse, excuse, ...)</p> <p>Me: What?</p> <p>You: I couldn't do that! Present at a conference?</p> <p>Me: Why not?</p> <p>You: That is only for experts and I'm not one.</p> <p>Me: Wrong! If you have a story to tell, then you can present. </p> <p>You: But I've never presented before, it scares me, but one day I'd like to try.</p> <p>Me: Go for it, do it. If you want you can co-present with me.</p> <p>You: But, But, But .....</p><br> <p>I'm sure you have experienced something like the above conversation before. You don't have to be an expert to present, you don't have to know everything about a product to present, you don't have to be using the latest and brightest technologies to present, you don't have to present about something complex, etc. (and the list goes on and on)</p> <p>The main thing to remember is, if you have a story to tell then that is your presentation. 
Be it simple or complex, something only you might be interested in, something that involves making lots of bits of technology work, a particular way you use an application, something interesting you found, a new process you used, etc. (and the list goes on and on)</p> <p>I've talked to people who "ranted" for two hours about a certain topic (it was about Dates in Oracle), but when I said they should give a presentation on that, they said NO, I couldn't do that! (If you are that person and you are reading this, then <a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/tAbsSubmitterLogin.csp?pageID=319&eventID=2">go on and submit that presentation</a>.)</p> <p>If you don't want to present alone, then reach out to someone else and ask them if they are interested in co-presenting. Most experienced presenters would be very happy to do this.</p> <p><iframe width="400" height="250" src="https://www.youtube.com/embed/7w0ZyfkukUs" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p> <p>You: But the topic area I'll talk about is not <a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/tAbsSubmitterLogin.csp?pageID=319&eventID=2">listed on the submission page</a>?</p> <p>Me: Good point, just submit it and pick the topic area that is closest.</p> <p>You: But my topic would be of interest to both the APPs and Tech conferences, what do I do?</p> <p>Me: Submit it to both, and let the agenda planners work out where it will fit.</p> <p>I've presented at both APPs and Tech over the years and sometimes my Tech submission has been moved and accepted for the APPs conf, and vice versa.</p> <p><a href="https://www.ukougconferences.org.uk/ukoug/frontend/reg/tAbsSubmitterLogin.csp?pageID=319&eventID=2">Just do it! </a></p><p><img src="https://lh3.googleusercontent.com/-G8lxIdbqJ-A/Ww2vxWpIcZI/AAAAAAAAAbw/LTVQWdrcdXM0hQUuw5P-LgU4ilkQjQQDgCHMYCw/just_do_it.jpg?imgmax=1600" alt="Just do it" title="just_do_it.jpg" border="0" width="220" height="130" /></p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-4471486086004504456 Wed May 30 2018 09:53:00 GMT-0400 (EDT) Fortune 500 and the Art of Execution http://bi.abhinavagarwal.net/2018/05/fortune-500-and-art-of-execution.html <div dir="ltr" style="text-align: left;" trbidi="on">The&nbsp;<a href="http://fortune.com/fortune500/">Fortune 500 Companies 2018</a>&nbsp;rankings came out last week, and browsing the list, the following random thoughts struck me about the list and the technology industry:<br /><br /><ul style="text-align: left;"><li><a href="http://fortune.com/fortune500/walmart/" target="_blank"><b>Walmart</b></a> - you can be in a very, very traditional brick-and-mortar business (yes, they have been making inroads into e-commerce, but for the most part, Walmart is a traditional retailer), but as long as you keep doing things well, you can be in the top 10. Not only that, you can be the top-ranked company by revenues for a sixth year in a row.
In this case, you can be numero uno, with annual revenues that top five-hundred billion dollars -&nbsp;<b>$500 billion</b>, be more than twice the size of the second-ranked company (<a href="http://fortune.com/fortune500/exxon-mobil/" target="_blank">Exxon Mobil</a> is ranked second, with annual revenues of $244B), and also employ <b>the most employees</b> (2.3 million).</li><li><a href="http://fortune.com/fortune500/apple/" target="_blank"><b>Apple</b></a> - you can be a mass-market luxury brand (yes, that is a contradiction in terms), sell only a handful of products (its Mac, iPhone, and iPad product lines bring in 79% of its revenues) and be in the top 10 - ranked fourth. You will also get to make the most profit of any company - <b>$48 billion</b>. You also get to be the most highly valued company - at <b>$922 billion</b>.</li><li><b><a href="http://fortune.com/fortune500/amazon-com/" target="_blank">Amazon</a></b> - you can sell almost everything under the sun, sell it almost all online (its foray into physical stores and its acquisition of Whole Foods notwithstanding), employ the second-most employees of any company in America, be a $100 billion plus company, yet grow revenues by more than thirty percent (to $177 billion), and crack the top 10 - ranked eighth. You also get to be the second-most highly valued company on earth, at <b>$765 billion</b>.</li><li><a href="http://fortune.com/fortune500/netflix/" target="_blank"><b>Netflix</b></a> - you can do just one thing (in this case, streaming video content on-demand and producing your own content), almost triple your profits (a 199% jump year-on-year), not be in the top 200, and yet deliver the best <a href="http://fortune.com/2018/05/21/fortune-500-companies-2018/" target="_blank">10-year returns to shareholders</a> (<b>48%, annualized!</b>)&nbsp;</li><li>The top five <a href="http://fortune.com/2018/05/21/fortune-500-most-valuable-companies-2018/" target="_blank"><b>most valuable companies</b></a> on the list are all technology companies - Apple, Amazon, Alphabet (the parent company of Google), Microsoft, and Facebook.</li></ul><div>Bottom line? What is common across all these companies is a relentless focus on <b>execution</b>. Execution - a simple lesson to learn, yet incredibly difficult to practice.
Flipkart, the Indian e-commerce giant in which Walmart (<a href="https://news.walmart.com/2018/05/09/walmart-to-invest-in-flipkart-group-indias-innovative-ecommerce-company" target="_blank">press release</a>) bought a <a href="http://money.cnn.com/2018/05/09/investing/walmart-flipkart-india-softbank/index.html" target="_blank">77% stake for $16 billion</a>, valuing the company at $22 billion, learned that the hard way, when it lost focus in its fight against Amazon.</div><div><br /></div><div>Further suggested reading:</div><div><ol style="text-align: left;"><li><a href="http://www.dnaindia.com/analysis/standpoint-why-flipkart-seems-to-be-losing-focus-2076806">Why Flipkart seems to be losing focus</a></li><li><a href="http://www.dnaindia.com/analysis/standpoint-flipkart-vs-amazon-beware-the-whispering-death-2079185">Flipkart vs Amazon: Beware the Whispering Death</a></li><li><a href="http://www.opindia.com/2018/03/battle-in-india-for-e-commerce-market-leadership-is-no-longer-between-just-amazon-and-flipkart/">Battle in India for e-commerce market leadership is no longer between just Amazon and Flipkart</a>&nbsp;- in March, 2018, this was particularly prescient.</li></ol></div><br /><i>This is an expanded version of my <a href="https://www.linkedin.com/in/abhinavagarwal/" target="_blank">LinkedIn</a> <a href="https://www.linkedin.com/feed/update/urn:li:activity:6404290298297581569" target="_blank">post</a>.</i><br /><br /><i>© 2018, Abhinav Agarwal. All rights reserved.</i></div> Abhinav Agarwal tag:blogger.com,1999:blog-13714584.post-8798097664168665446 Wed May 30 2018 08:53:00 GMT-0400 (EDT) ODTUG Kscope18 Final Update https://www.odtug.com/p/bl/et/blogaid=805&source=1 Now that we’re less than two weeks out from the conference, the madness has started to settle. It’s the calm before the storm. In this final update, I’d like to discuss the changes we’ve made to make the conference experience better for our attendees.
ODTUG https://www.odtug.com/p/bl/et/blogaid=805&source=1 Tue May 29 2018 14:38:51 GMT-0400 (EDT) Big Data Introduction - Workshop http://bi.abhinavagarwal.net/2018/05/big-data-introduction-workshop.html <div dir="ltr" style="text-align: left;" trbidi="on"><div class="separator" style="clear: both; text-align: center;"></div><div style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><a href="https://media.licdn.com/dms/image/C5112AQHIq5JKAndESw/article-inline_image-shrink_1500_2232/0?e=2127081600&amp;v=beta&amp;t=idwSj88o5uhJcnlNhQQ3daiTxCup3itRD3tjob7rhhU" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="266" src="https://media.licdn.com/dms/image/C5112AQHIq5JKAndESw/article-inline_image-shrink_1500_2232/0?e=2127081600&amp;v=beta&amp;t=idwSj88o5uhJcnlNhQQ3daiTxCup3itRD3tjob7rhhU" width="400" /></a></div><div style="text-align: left;">Our focus was clear - this was a level 101 class, for IT professionals in Bangalore who had heard of Big Data, were interested in Big Data, but were unsure how and where to dig their toe in the world of analytics and Big Data. A one-day workshop - with a mix of slides, white-boarding, case-study, a small game, and a mini-project - we felt, was the ideal vehicle for getting people to wrap their minds around the fundamental concepts of Big Data.</div><br /><br /><br /><br /><div style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"></div><br /><a href="https://media.licdn.com/media/gcrc/dms/image/C5112AQGSR-qIifuMBw/article-cover_image-shrink_720_1280/0?e=2127081600&amp;v=beta&amp;t=0doX3eOgwZK6PaXTzk8zvYAYKVWtsIDlzS-2CUrMhXk" imageanchor="1" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="326" data-original-width="800" height="260" src="https://media.licdn.com/media/gcrc/dms/image/C5112AQGSR-qIifuMBw/article-cover_image-shrink_720_1280/0?e=2127081600&amp;v=beta&amp;t=0doX3eOgwZK6PaXTzk8zvYAYKVWtsIDlzS-2CUrMhXk" width="640" /></a><br />On a pleasant Saturday morning in January, Prakash Kadham and I conducted a one-day workshop, "<b>Introduction to Big Data &amp; Analytics</b>". As the name suggests, it was a breadth-oriented introduction to the world of Big Data and the landscape of technologies, tools, platforms, distributions, and business use-cases in the brave new world of big data.<br /><br /><a href="https://media.licdn.com/dms/image/C5112AQHSZ4PaKQKr5w/article-inline_image-shrink_1000_1488/0?e=2127081600&amp;v=beta&amp;t=oeBIswMbdSibao2RtoYmQOTYE4W0EIWTSXlSFcdYTxM" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="https://media.licdn.com/dms/image/C5112AQHSZ4PaKQKr5w/article-inline_image-shrink_1000_1488/0?e=2127081600&amp;v=beta&amp;t=oeBIswMbdSibao2RtoYmQOTYE4W0EIWTSXlSFcdYTxM" width="320" /></a>We started out by talking about the need for analytics in general, the kinds of questions analytics - also known as business intelligence sometimes - is supposed answer, and how most analytics platforms used to look like at the beginning of the decade. 
We then moved to what changed this decade, and the growth of data volumes, the velocity of data generation, and the increasing variety of data that rendered traditional means of data ingestion and analysis inadequate.<br /><br />A fun game with cards turned out to be an ideal way to introduce the participants to the concepts behind <b>MapReduce</b>, the fundamental paradigm behind the processing and ingestion of massive amounts of data. After all the slides and illustrations of MapReduce, we threw in a curve-ball to the participants by telling them that some companies, like Google, had started to move away from MapReduce since it was deemed unsuitable for data volumes greater than petabyte!<br /><br />The proliferation of Apache projects in almost every sphere of the Hadoop ecosystem meant that there are many, many choices for the big data engineer to choose from. Just on the subject of data ingestion, there is <b>Apache Flume</b>, <b>Apache Sqoop</b>, <b>Apache Kafka</b>, Apache <b>Samza</b>, Apache <b>NiFi</b>, and many others. Or take databases, where you have columnar, noSQL, document-oriented, graph databases to choose from, each optimized for slightly different use-cases - <b>Hbase </b>(the granddaddy of of noSQL databases), <b>Cassandra</b> (that took birth at Facebook), <b>MongoDB</b> (most suited for documents), <b>Neo4j</b> (a graph database), and so on.<br /><br />Working through a case-study helps bring theory closer to practice, and the participants got to work on just that - two case-studies, one in the retail segment and the other in healthcare. Coming off the slides and lectures, the participants dove into the case-studies with enthusiasm and high-decibel interactions among all the participants.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://media.licdn.com/dms/image/C5112AQF5V58fPxdGIQ/article-inline_image-shrink_1500_2232/0?e=2127081600&amp;v=beta&amp;t=ByBX935Nqo4jrtmYKxUiLFPw1qx1aEK1bvjHaqt5NYY" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="536" data-original-width="800" height="267" src="https://media.licdn.com/dms/image/C5112AQF5V58fPxdGIQ/article-inline_image-shrink_1500_2232/0?e=2127081600&amp;v=beta&amp;t=ByBX935Nqo4jrtmYKxUiLFPw1qx1aEK1bvjHaqt5NYY" width="400" /></a></div>The day passed off fast enough and we ended the day with a small visualization exercise, using the popular tool, <b>Tableau</b>. At the end of the long but productive day, the participants had one last task to complete - fill out a feedback form, which contained six objective questions and three free-form ones. It was hugely gratifying that all but one filled out the questionnaire. After the group photo and the workshop was formally over, Prakash and I took a look at the survey questionnaire that the participants had filled out, and did a quick, back-of-the-envelope <b>NPS </b>(Net Promoter Score) calculation. We rechecked our calculations and found we had managed an NPS of 100!<br /><br />The suggestions we received have been most useful, and we are now working to incorporate the suggestions in the workshop. Among the suggestions was for us to hold a more advanced, Level 200, workshop. That remains our second goal!<br /><br />Thank you to all the participants who took time out to spend an entire Saturday with us, for their active and enthusiastic participation, and to the valuable feedback they shared with us! 
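<p>For anyone curious, the back-of-the-envelope NPS calculation mentioned above is simple enough to sketch in a few lines of Python. The standard NPS convention (promoters score 9 or 10, detractors score 0 to 6) is assumed here, and the scores in the sketch are made-up values rather than the workshop's actual responses.</p> <pre><br /># Hypothetical survey scores on a 0-10 scale (not the workshop's real data)<br />scores = [10, 9, 10, 10, 9, 9, 10, 9, 10, 10]<br /><br />promoters = sum(1 for s in scores if s in (9, 10))<br />detractors = sum(1 for s in scores if s in range(0, 7))<br /><br /># NPS = percentage of promoters minus percentage of detractors<br />nps = 100 * (promoters - detractors) / len(scores)<br />print(round(nps))   # prints 100 when every respondent is a promoter<br /></pre>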
A most encouraging start to 2018!<br /><br /><i>This post was first published on <a href="https://www.linkedin.com/in/abhinavagarwal" target="_blank">LinkedIn</a> on <a href="https://www.linkedin.com/pulse/big-data-introduction-workshop-abhinav-agarwal/" target="_blank">Feb 5, 2018</a>.</i><br /><i>© 2018, Abhinav Agarwal.</i></div> Abhinav Agarwal tag:blogger.com,1999:blog-13714584.post-3007309628474331574 Mon May 28 2018 13:30:00 GMT-0400 (EDT) Twitter Analytics using Python - Part 1 http://www.oralytics.com/2018/05/twitter-analytics-using-python-part-1.html <p>(This is the first part of, probably, a five-part blog series on Twitter analytics using Python. Make sure to check out the other posts and I'll post a wrap-up blog post that will point to all the posts in the series)</p> <p>(Yes there are lots of other examples out there, but I've put these notes together as a reminder for myself and a particular project I'm testing)</p> <p>In this first blog post I will look at what you need to do to get yourself set up for analysing Tweets, to harvest tweets and to do some basics. These are covered in the following five steps.</p> <p><span style='text-decoration:underline;'><strong>Step 1 - Set up your Twitter Developer Account & Codes</strong></span></p><p>Before you can start writing code you need to get yourself set up with Twitter to allow you to download their data using the Twitter API.</p> <p>To do this you need to register with Twitter: go to <a href="https://apps.twitter.com/">apps.twitter.com</a>. Log in using your Twitter account if you have one. If not, then you need to go create an account.</p> <p>Next click on the Create New App button.</p> <p><img src="https://lh3.googleusercontent.com/-wXFAGBGOtXw/WwhGa2NXVGI/AAAAAAAAAaw/LfDkhexWCKk6myVFYbpEyxIfglfsSYVTgCHMYCw/twitter_app1.png?imgmax=1600" alt="Twitter app1" title="twitter_app1.png" border="0" width="484" height="113" /></p> <p>Then give the Name of your app (Twitter Analytics using Python), a description, a webpage link (e.g. your blog or something else), click on the 'add a Callback URL' button and finally click the check box to agree with the Developer Agreement. Then click the 'Create your Twitter Application' button.</p> <p>You will then get a web page like the following that contains lots of very important information. Keep the information on this page safe as you will need it later when creating your connection to Twitter.</p><p><img src="https://lh3.googleusercontent.com/-cR_ppxUTlVk/WwhGbRTZzOI/AAAAAAAAAa0/S1f8DBODX1QzOVmgNYR73ZctdXKDCJIMACHMYCw/twitter_app2.png?imgmax=1600" alt="Twitter app2" title="twitter_app2.png" border="0" width="486" height="322" /></p> <p>The details contained on this web page (and below what is shown in the above image) will allow you to use the Twitter REST APIs to interact with the Twitter service.</p> <p><span style='text-decoration:underline;'><strong>Step 2 - Install libraries for processing Twitter Data</strong></span></p><p>As with most languages, there are plenty of code samples and libraries available for you to use, and the same is true for Python and Twitter. There is the <a href="http://tweepy.readthedocs.io/en/v3.5.0/">Tweepy library</a> that is very popular.
Make sure to check out the <a href="http://tweepy.readthedocs.io/en/v3.5.0/">Tweepy</a> web site for full details of what it will allow you to do.</p> <p>To install Tweepy, run the following.</p> <pre><br />pip3 install tweepy<br /></pre> <p>It will download and install tweepy and any dependencies.</p> <p><span style='text-decoration:underline;'><strong>Step 3 - Initial Python code and connecting to Twitter</strong></span></p><p>You are all set to start writing Python code to access, process and analyse Tweets.</p> <p>The first thing you need to do is to import the tweepy library. After that you will need to use the important codes that were defined on the Twitter webpage produced in Step 1 above, to create an authorised connection to the Twitter API.</p> <p><img src="https://lh3.googleusercontent.com/-kbJSBx-tzvQ/WwhGcFGd2lI/AAAAAAAAAa4/bqZizCOjHBIPK-0bK5c2Vl4e6SIFcj4cgCHMYCw/twitter_app3.png?imgmax=1600" alt="Twitter app3" title="twitter_app3.png" border="0" width="566" height="180" /></p> <p>After you have filled in your consumer and access token values and run this code, you will not see any output.</p> <p><span style='text-decoration:underline;'><strong>Step 4 - Get User Twitter information</strong></span></p><p>The easiest way to start exploring Twitter is to find out information about your own Twitter account. There is an API function called 'me' that gathers the user object details from Twitter, and from there you can print these out to the screen or do some other things with them. The following is an example using my Twitter account.</p> <pre><br />#Get twitter information about my twitter account<br />user = api.me()<br /><br />print('Name: ' + user.name)<br />print('Twitter Name: ' + user.screen_name)<br />print('Location: ' + user.location)<br />print('Friends: ' + str(user.friends_count))<br />print('Followers: ' + str(user.followers_count))<br />print('Listed: ' + str(user.listed_count))<br /></pre> <p><img src="https://lh3.googleusercontent.com/-SAv_W_ZBQ0o/WwhGcwWHUBI/AAAAAAAAAa8/LB0b1O1V_Y0tkVUtLEELEG3VcE7CAzjdwCHMYCw/twitter_app4.png?imgmax=1600" alt="Twitter app4" title="twitter_app4.png" border="0" width="378" height="141" /></p> <p>You can also start listing the last X number of tweets from your timeline. The following will take the last 10 tweets.</p> <pre><br />for tweets in tweepy.Cursor(api.home_timeline).items(10):<br /> # Process a single status<br /> print(tweets.text)<br /></pre> <img src="https://lh3.googleusercontent.com/-p9xYhi3iKI4/WwhGdu2rMaI/AAAAAAAAAbA/IExkGR2NBvccqVCgvhw1k3VppFNm1Sz6QCHMYCw/twitter_app5.png?imgmax=1600" alt="Twitter app5" title="twitter_app5.png" border="0" width="377" height="203" /> <p>An alternative is the following, which returns only the 20 most recent records, whereas the example above can return whatever number of tweets you specify.</p> <pre><br />public_tweets = api.home_timeline()<br />for tweet in public_tweets:<br /> print(tweet.text)<br /></pre> <p><span style='text-decoration:underline;'><strong>Step 5 - Get Tweets based on a condition</strong></span></p><p>Tweepy comes with a Search function that allows you to specify some text you want to search for. This can be hash tags, particular phrases, users, etc.
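<p>Before moving on to the search example, here is a minimal sketch of the Step 3 connection code that is only shown as a screenshot above. It assumes Tweepy's standard OAuth flow; the four key values are placeholders for the codes generated on your app page in Step 1.</p> <pre><br />import tweepy<br /><br /># Placeholder codes from Step 1 - replace these with your own values<br />consumer_key = 'YOUR_CONSUMER_KEY'<br />consumer_secret = 'YOUR_CONSUMER_SECRET'<br />access_token = 'YOUR_ACCESS_TOKEN'<br />access_token_secret = 'YOUR_ACCESS_TOKEN_SECRET'<br /><br /># Create an authorised connection to the Twitter API<br />auth = tweepy.OAuthHandler(consumer_key, consumer_secret)<br />auth.set_access_token(access_token, access_token_secret)<br />api = tweepy.API(auth)<br /></pre> <p>The api object created here is the one the 'me', home_timeline and search calls in Steps 4 and 5 are made against.</p>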
The following is an example of searching for a hash tag.</p> <pre><br />for tweet in tweepy.Cursor(api.search,q="#machinelearning",<br /> lang="en",<br /> since="2018-05-01").items(10):<br /> print(tweet.created_at, tweet.text)<br /></pre> <p><img src="https://lh3.googleusercontent.com/-z4EEo3AVYLA/WwhGedXUEGI/AAAAAAAAAbE/aP8ZyY7CNzU4Ljw3YGmg3UmKtWiXy-_gQCHMYCw/twitter_app7.png?imgmax=1600" alt="Twitter app7" title="twitter_app7.png" border="0" width="363" height="246" /></p> <p>You can apply additional search criteria, including restricting to a date range, the number of tweets to return, etc.</p> <br><p>Check out the other blog posts in this Twitter Analytics using Python series.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-4625828400074172063 Mon May 28 2018 10:22:00 GMT-0400 (EDT) UKOUG EPM & Hyperion Event 6th June, 2018 http://www.oralytics.com/2018/05/ukoug-epm-hyperion-event-6th-june-2018.html <p>Coming up really soon is the annual <a href="http://www.ukoug.org/2018-events/epmhyperion-2018/">EPM and Hyperion</a> event organised by the UKOUG. This year it will be on 6th June at Sandown Park Racecourse (Portsmouth Rd, Esher, KT10 9AJ).</p> <a href="http://www.ukoug.org/2018-events/epmhyperion-2018/"><img src="https://lh3.googleusercontent.com/-AjkLjXAUw1M/WwlcRsJpuXI/AAAAAAAAAbg/_hzrlIsGCFEBVt7AyKX0q8dW0tHU3ledACHMYCw/epm1.png?imgmax=1600" alt="Epm1" title="epm1.png" border="0" width="297" height="55" /></a> <p>If you have ever been to Sandown Park Racecourse you will know how fantastic a venue it is. And if you have been to a previous UKOUG EPM & Hyperion event, you will know how amazing it is.</p> <p>Just go and book your place for this event now. If you are a <a href="http://www.ukoug.org/membership-new/">UKOUG member</a> this event could be free (depending on level of <a href="http://www.ukoug.org/membership-new/">membership</a>), but if you aren't a member, you can still go to this event. You can either become a <a href="http://www.ukoug.org/membership-new/">UKOUG member</a> and attend the event or pay the small event fee and attend the event.</p> <p>From what I've heard a lot of people have already signed up to go, and I can see why. The agenda is jam-packed with end-user and customer case studies, as well as presentations from key Oracle people and leading Oracle partners.</p> <a href="http://oug.org/Hyperion/Hyperion18/Agenda.pdf"><img src="https://lh3.googleusercontent.com/-7ivK0qhuPW8/WwlcSfn_DhI/AAAAAAAAAbk/SnerqzQpCDkBY6aVA2Ektpf57fEGedhHQCHMYCw/epm2.png?imgmax=1600" alt="Epm2" title="epm2.png" border="0" width="593" height="466" /></a> <p>The exhibition space has been sold out! This will give you plenty of opportunities to get talking to various partners and service providers, and get your key questions answered at this event.</p> <p>There will be a Panel Session at the end of the day. I love these panel sessions; they give everyone a chance to ask questions, to listen, or to join in with the discussion.</p> <p>Lots and lots of value and learning to be had at this event. <a href="http://www.ukoug.org/2018-events/epmhyperion-2018/">Go register now.</a></p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-1518738226606116564 Sat May 26 2018 09:08:00 GMT-0400 (EDT) ODTUG Kscope18 Update #4 https://www.odtug.com/p/bl/et/blogaid=804&source=1 Are you looking for a way to volunteer at ODTUG Kscope18? It’s a low-commitment role, and we’d welcome you with open arms.
Anyone attending ODTUG Kscope18 can find a volunteer position that fits their schedule! ODTUG https://www.odtug.com/p/bl/et/blogaid=804&source=1 Tue May 22 2018 14:37:43 GMT-0400 (EDT) Announcing the 2018 ODTUG Innovation Award Nominations https://www.odtug.com/p/bl/et/blogaid=803&source=1 Introducing this year's 2018 ODTUG Innovation Award Nominations! ODTUG https://www.odtug.com/p/bl/et/blogaid=803&source=1 Tue May 22 2018 08:38:16 GMT-0400 (EDT) Fixing Oracle BI Performance Issues: A Non-Technical Guide https://www.us-analytics.com/hyperionblog/fixing-oracle-bi-performance-issues <div class="hs-featured-image-wrapper"> <a href="https://www.us-analytics.com/hyperionblog/fixing-oracle-bi-performance-issues" title="" class="hs-featured-image-link"> <img src="https://www.us-analytics.com/hubfs/obiee%20performance.jpg?t=1533950236061" alt="obiee performance" class="hs-featured-image" style="width:auto !important; max-width:50%; float:left; margin:0 15px 15px 0;"> </a> </div> <p>Nearly all Oracle BI customers we speak with have noticeable performance issues. Business users complain about having to wait too long for their reports, and developers are at a loss as to what their options are. And, no one wants to spend time tuning an environment instead of implementing direct business requirements. Unfortunately, there are no shortcuts or quick-fixes to improving the performance of an Oracle BI environment. Normal approaches involve either outsourcing the efforts to a consultancy or attempting to wade through the difficult process on your own. While both options initially seem fine, they’re not maintainable.</p> Nicholas Padgett https://www.us-analytics.com/hyperionblog/fixing-oracle-bi-performance-issues Mon May 21 2018 17:42:03 GMT-0400 (EDT)
Reflecting Changes in Business Objects in UI Tables with Visual Builder https://blogs.oracle.com/zen/reflecting-changes-in-business-objects-in-ui-tables-with-visual-builder-v2 <p>While the quick start wizards in Visual Builder Cloud Service (VBCS) make it very easy to create tables and other UI components and bind them to business objects, it is good to understand what is going on behind the scenes, and what the wizards actually do. Knowing this will help you achieve things that we still don&#39;t have wizards for.</p> <p>For example - let&#39;s suppose you created a business object and then created a UI table that shows the fields from that business object in your page. You probably used the &quot;Add Data&quot; quick start wizard to do that. But then you remembered that you need one more column added to your business object, however after you added that one to the BO, you&#39;ll notice it is not automatically shown in the UI. That makes sense since we don&#39;t want to automatically show all the fields in a BO in the UI.</p> <p>But how do you add this new column to the UI?</p> <p>The table&#39;s Add Data wizard will be disabled at this point - so is your only option to drop and recreate the UI table?
Of course not!</p> <p><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/04324f99-152a-401b-96f2-19c1f695b94b/Image/0b2fe944c1e4730fddb73318062e1804/screen_shot_2018_05_16_at_9_34_36_am.png" style="width: 650px; height: 391px;" /></p> <p>&nbsp;</p> <p>If you&#39;ll look into the table properties you&#39;ll see it is based on a page level ServiceDataProvider ( SDP for short) variable. This is a special type of object that the wizards create to represent collections. If you&#39;ll look at the variable, you&#39;ll see that it is returning data using a specific type. Note that the type is defined at the flow level - if you&#39;ll look at the type definition you&#39;ll see where the fields that make up the object are defined.</p> <p><img alt="Type Definition" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/04324f99-152a-401b-96f2-19c1f695b94b/Image/5ed9ff251bdf892ec824a2c10ceedbbd/screen_shot_2018_05_16_at_9_50_54_am.png" style="width: 649px; height: 350px;" /></p> <p>It is very easy to add a new field here - and modify the type to include the new column you added to the BO. Just <strong>make sure you are using the column&#39;s id</strong> - and not it&#39;s title - when you define the new field in the items array.</p> <p>Now back in the UI you can easily modify the code of the table to add one more column that will be hooked up to this new field in the SDP that is based on the type.</p> <p>Sounds complex? It really isn&#39;t - here is a 3 minute video showing the whole thing end to end:</p> <p>As you see - a little understanding of the way VBCS works, makes it easy to go beyond the wizards and achieve anything.</p> Shay Shmeltzer https://blogs.oracle.com/zen/reflecting-changes-in-business-objects-in-ui-tables-with-visual-builder-v2 Mon May 21 2018 14:14:19 GMT-0400 (EDT) European Privacy Requirements: Considerations for Retailers https://blogs.oracle.com/zen/european-privacy-requirements%3A-considerations-for-retailers-v2 <p>When retailers throughout Europe adopt a new set of privacy and security regulations this week, it will be the first major revision of data protection guidelines in more than 20 years. The 2018 regulations address personal as well as financial data, and require that retailers use systems already designed to fulfill these protections by default.</p> <p>In 1995, the European Commission adopted a Data Protection Directive that regulates the processing of personal data within the European Union. This gave rise to 27 different national data regulations, all of which remain intact today. In 2012, the EC announced that it would supersede these national regulations and unify data protection law across the EU by adopting a new set of requirements called the General Data Protection Regulation (GDPR).</p> <p>The rules apply to any retailer selling to European consumers. The GDPR, which takes effect May 25, 2018, pertains to any company doing business in, or with citizens of, the European Union, and to both new and existing products and services. Organizations found to be in violation of the GDPR will face a steep penalty of 20 million euros or four percent of their gross annual revenue, whichever is greater.</p> Retailers Must Protect Consumers While Personalizing Offers <p>GDPR regulations will encompass personal as well as financial data, including much of the data found in a robust customer engagement system, CRM, or loyalty program. 
It also includes information not historically considered to be personal data: device IDs, IP addresses, log data, geolocation data, and, very likely, cookies.</p> <p>For the majority of retailers relying on customer data to personalize offers, it is critically important to understand how to fulfill GDPR requirements and execute core retail, customer, and marketing operations. Developing an intimate relationship with consumers and delivering personalized offers means tapping into myriad data sources.</p> <p>This can be done, but systems must be GDPR-compliant by design and by default. A key concept underlying the GDPR is Privacy by Design (PBD), which essentially stipulates that systems be designed to minimize the amount of personal data they collect. Beginning this week, Privacy by Design features will become a regulatory requirement for both Oracle and our customers and GDPR stipulates that these protections are, by default, turned on.</p> Implementing Security Control Features <p>While the GDPR requires &ldquo;appropriate security and confidentiality,&rdquo; exact security controls are not specified. However, a number of security control features are discussed in the text and will likely be required for certain types of data or processing. Among them are multi-factor authentication for cloud services, customer-configurable IP whitelisting, granular access controls (by record, data element, data type, or logs), encryption, anonymization, and tokenization.</p> <p>Other security controls likely to be required are &ldquo;separation of duties&rdquo; (a customer option requiring two people to perform certain administrative tasks); customer options for marking some fields as sensitive and restricted; limited access on the part of the data controller (i.e. Oracle) to customer information; displaying only a portion of a data field; and the permanent removal of portions of a data element.</p> Summary of Critical GDPR Requirements <p>The GDPR includes a number of recommendations and requirements governing users&rsquo; overall approach to data gathering and use. Among the more important are:</p> <ul> <li><em>Minimization. </em>Users are required to minimize the amount of data used, length of time it is stored, the number of people who have access to it, and the extent of that access.</li> <li><em>Retention and purging. </em>Data may be retained for only as long as reasonably necessary. This applies in particular to personal data, which should be processed only if the purpose of processing cannot reasonably be fulfilled by other means. Services must delete customer data on completion of the services.</li> <li><em>Exports and portability. </em>End users must be provided with copies of their data in a structured, commonly used digital format. Customers will be required to allow end users to send data directly to a competing service provider for some services.</li> <li><em>Access, correction, and deletion. </em>End-user requests for data access, correction, and deletion for data they store in any service. Users may have a &ldquo;right to be forgotten&rdquo;&mdash;a right to have all their data erased.</li> <li><em>Notice and consent. </em>When information is collected, end-user notice and consent for data processing is generally required.</li> <li><em>Backup and disaster recovery. </em>Timely availability of end-user data must be ensured.</li> </ul> <p><strong>Are you prepared? 
</strong></p> <p>Oracle is prepared for the EU General Data Protection Regulation (GDPR) that was adopted by the European Parliament in April 2016 and will become effective on May 25, 2018. We welcome the positive changes it is expected to bring to our service offerings by providing a consistent and unified data protection regime for businesses across Europe. Oracle is committed to helping its customers address the GDPR&rsquo;s new requirements that are relevant to our service offerings, including any applicable processor accountability requirements.</p> <p>Our customers can rest assured that Oracle Retail&rsquo;s omnichannel suite will empower them to continue delivering personalized customer experiences that meet complex global data privacy regulations. Contact Oracle Retail to learn more about Oracle systems, services and GDPR compliance: <a href="mailto:oneretailvoice_ww@oracle.com">oneretailvoice_ww@oracle.com</a></p> Raymond Martin https://blogs.oracle.com/zen/european-privacy-requirements%3A-considerations-for-retailers-v2 Mon May 21 2018 12:52:00 GMT-0400 (EDT) New Oracle E-Business Suite Person Data Removal Tool Now Available https://blogs.oracle.com/xmlpublisher/new-oracle-e-business-suite-person-data-removal-tool-now-available <p>Oracle is pleased to announce the availability of the Oracle E-Business Suite Person Data Removal Tool, designed to remove (obfuscate) data associated with people in E-Business Suite systems. Customers can apply the tool to select information in their E-Business Suite production systems to help address internal operational and external regulatory requirements, such as the EU General Data Protection Regulation (GDPR).</p> <p>For more details, see:</p> <ul> <li><a href="http://www.oracle.com/us/products/applications/ebs-person-data-removal-tool-4490004.pdf">Release Announcement&nbsp;</a></li> <li><a href="http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=904&amp;get_params=cloudId:243,objectId:21844">Oracle E-Business Suite Person Data Removal Tool: Solution Overview Video</a></li> </ul> Steven Chan https://blogs.oracle.com/xmlpublisher/new-oracle-e-business-suite-person-data-removal-tool-now-available Mon May 21 2018 11:27:00 GMT-0400 (EDT)